A 25-year-old Frenchman who declared loyalty to the Islamic State (IS) militant group opened a new chapter in global terrorism this week by using social media to stream video live from the scene of his gruesome attack.
Larossi Abballa used Facebook to broadcast a 12-minute video on June 13 from his victims' home near Paris, showing images of the slain police couple while he held their 3-year-old son hostage.
Police shot Abballa dead and rescued the traumatized child after a three-hour standoff.
But Abballa's video, including his calls for others to follow his example, remained on Facebook long enough to be copied and redistributed by pro-IS websites. And although the video was taken down quickly, Facebook did not disable his account until the following day.
IS has used social media for years to attract and train new recruits, spread propaganda, and publicize its attacks and killings.
FBI Director James Comey has described IS media as a "highly sophisticated" effort that "utilizes all the tools and techniques of modern-day, social-media, Internet-based advertising."
Security officials have expressed concern that developing technology could make it easier for lone-wolf terrorists with smartphones, like Abballa, to produce their own live, point-of-view broadcasts while carrying out attacks.
Meanwhile, three of the biggest global social-media platforms for live streaming -- Facebook Live, Twitter's Periscope, and Google's YouTube -- have expanded services in recent months to bring live video capabilities to hundreds of millions of users.
Live Crime
Even before that expansion, shocking videos of violent crimes on social media were already foreshadowing the risk.
In August 2015 -- in what was dubbed America's first social-media murder -- a reporter and a cameraman in Virginia were shot dead by a disgruntled former colleague during a live television interview. The killer wore a head-mounted camera and uploaded video of the shooting to his Facebook account before killing himself.
Then in Ohio in April, a Russian-born teenager who witnessed the rape of her 17-year-old friend pointed her smartphone at the attack and streamed live video using Twitter's Periscope app.
In London, also in April, police broke up a clash between two girl gangs after they used Periscope to organize the street brawl.
Policing Themselves
Such violence reveals the challenges faced by Facebook, Twitter, and Google in policing their own community standards -- a process known in the industry as "privatized enforcement."
Joe McNamee, executive director of a Brussels-based NGO called European Digital Rights, says privatized enforcement is the result of agreements made between social-media firms and governments -- including the European Commission and the German government.
The agreement aims to preserve free speech on the Internet and protect the privacy rights of users by keeping government agents out of the vetting process. It calls for social-media firms to employ their own teams to take down content that breaches their terms of service.
But McNamee says the agreement is a "perfect compromise where everybody loses and nobody wins." That, he says, is because users can have their account removed for legally expressing their opinion while others who violate the law do not face criminal justice.
"From the perspective of somebody that would be doing something illegal online, they don't really have to worry too much," McNamee tells RFE/RL. "Instead of having a law being enforced, the only punishment that a social-media company can mete out is to delete the content."
Reducing Response Times
Facebook said in a statement after Abballa's attack that there were additional "unique challenges when it comes to content and safety for live videos."
In fact, all of the global social-media platforms rely on reports from users about content that violates their terms of service. Once offending content has been reported, the review process typically takes at least 24 hours.
But to prevent terrorists from using the new live-streaming platforms to spread their doctrine or glorify their attacks, review teams need to respond to live videos almost instantly.
Twitter says that "there is no 'magic algorithm' for identifying terrorist content on the Internet, so global online platforms are forced to make challenging judgment calls based on very limited information and guidance." Its review teams are, however, able to proactively flag a live video when it attracts an unusually large number of viewers.
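In engineering terms, that kind of proactive flagging can be as simple as a threshold rule on audience size and growth. The Python sketch below is purely illustrative -- the thresholds, names, and triage queue are assumptions made for this article, not any platform's actual system:

```python
# Illustrative sketch only: a hypothetical heuristic that flags a live
# stream for human review once its audience is large or growing fast.
# The thresholds and names here are invented; real platform systems
# are proprietary.
import time
from dataclasses import dataclass

VIEWER_THRESHOLD = 5_000   # assumed audience size that triggers review
GROWTH_THRESHOLD = 100.0   # assumed new viewers per second that triggers review

@dataclass
class LiveStream:
    stream_id: str
    started_at: float      # epoch seconds when the broadcast began
    viewer_count: int = 0
    flagged: bool = False

def should_flag(stream: LiveStream, now: float | None = None) -> bool:
    """True when the stream's audience is large or growing abnormally fast."""
    now = time.time() if now is None else now
    elapsed = max(now - stream.started_at, 1.0)   # avoid division by zero
    growth_rate = stream.viewer_count / elapsed
    return (stream.viewer_count >= VIEWER_THRESHOLD
            or growth_rate >= GROWTH_THRESHOLD)

def triage(streams: list[LiveStream], review_queue: list[str]) -> None:
    """Push newly suspicious streams onto a queue for human reviewers."""
    for s in streams:
        if not s.flagged and should_flag(s):
            s.flagged = True
            review_queue.append(s.stream_id)
```

A heuristic like this only tells reviewers where to look; it cannot judge what the video actually shows.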
Facebook, Twitter, and Google say they have increased the size of the teams that review reports about content promoting terrorism. In May, Twitter said the larger review teams had reduced response times significantly and, since mid-2015, had led to the suspension of more than 125,000 accounts with ties to the IS network.
Twitter also said it looked "into other accounts similar to those reported" and used "proprietary spam-fighting tools to surface other potentially violating accounts for review by our agents."
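Tools that "surface" similar accounts typically score how much a candidate account overlaps with a known bad one. The sketch below illustrates the general idea using a simple Jaccard similarity over follower sets; the measure, the threshold, and the data shapes are assumptions, since Twitter's actual spam-fighting tools are proprietary:

```python
# Illustrative sketch only: surfacing accounts "similar" to a reported one
# by measuring follower overlap. The Jaccard measure and the 0.5 threshold
# are assumptions for this example, not Twitter's actual method.
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B|, in the range [0, 1]."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def similar_accounts(reported_followers: set[str],
                     candidates: dict[str, set[str]],
                     threshold: float = 0.5) -> list[str]:
    """Return candidate account IDs whose follower sets overlap the
    reported account's set beyond the threshold, for human review."""
    return [account for account, followers in candidates.items()
            if jaccard(reported_followers, followers) >= threshold]

# Example: acctA shares three of four followers with the reported account.
reported = {"u1", "u2", "u3", "u4"}
pool = {"acctA": {"u1", "u2", "u3"}, "acctB": {"u9"}}
print(similar_accounts(reported, pool))   # ['acctA']
```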
In 2013, Twitter also launched a global outreach campaign with NGOs to fight violent extremism online.
An Impossible Task?
YouTube says it has teams around the world that review reported videos 24 hours a day. It says it will terminate an account when it has a reasonable belief that the person behind it is part of a group that the U.S. government has identified as a "foreign terrorist organization."
But technical experts in the industry say the algorithms platforms already use to flag violations are designed mainly to identify copyright violations rather than content that promotes terrorism.
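The distinction matters because copyright-style filters match fingerprints of content that has already been seen and catalogued. The sketch below shows the idea in its simplest possible form, an exact-hash blocklist; production systems use robust perceptual fingerprints so that re-encoded copies still match, but they share the same blind spot -- footage that has never been seen before cannot match anything:

```python
# Illustrative sketch only: the simplest form of "known content" matching,
# an exact-hash blocklist. Real systems use perceptual fingerprints rather
# than raw hashes, but the limitation is the same: a match requires the
# content to have been seen and catalogued before, so a freshly filmed
# live stream passes through unmatched.
import hashlib

known_hashes: set[str] = set()   # fingerprints of previously flagged media

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def catalogue(data: bytes) -> None:
    """Add already-reviewed, violating media to the blocklist."""
    known_hashes.add(fingerprint(data))

def is_known_violation(data: bytes) -> bool:
    """True only for byte-identical reuploads of catalogued media --
    a brand-new broadcast can never match."""
    return fingerprint(data) in known_hashes
```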
Facebook says it is working to create artificial-intelligence tools that can interpret live videos in real time. But the company is not widely using such tools.
Malcolm Hutty, head of public affairs at the London Internet Exchange (LINX), says there can "never be something that magically distinguishes something that is bad from something that is not bad" because such questions are a matter of "human judgment and human standards."
Hutty, who serves as a board member of the European trade association for Internet service providers, says social-media platforms have got "an impossible job if we expect them to ensure that their platforms are free of this content."
"The best that can be hoped for is that people who are using it for malign intent can be identified and removed as soon as reasonably possible," Hutty says. "But we have to be realistic about what is achievable."