Did artificial intelligence fail in the New Zealand terror attack?

Many people have asked why artificial intelligence (AI) didn’t detect the video from last week’s attack automatically. AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove. But it’s not perfect.
AI systems are based on “training data”, which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video. This approach has worked very well for areas such as nudity, terrorist propaganda and graphic violence, where there are large numbers of examples we can use to train our systems. However, this particular video did not trigger our automatic detection systems. To achieve that, we will need to provide our systems with large volumes of data of this specific kind of content, which is difficult because these events are thankfully rare. Another challenge is to automatically distinguish this content from visually similar, innocuous content – for example, if thousands of videos from live-streamed video games were flagged by our systems, our reviewers could miss the important real-world videos where we could alert first responders to get help on the ground.
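The training-data problem described above can be illustrated with a toy sketch. This is not Facebook's system; it is a minimal nearest-neighbour classifier with invented feature vectors, showing how a borderline clip can be assigned to the innocuous class when the rare category is thinly represented in the training set.

```python
# Toy illustration (all vectors, labels and values are invented):
# a nearest-neighbour classifier over two-dimensional "features".

def nearest_label(example, training_set):
    """Return the label of the closest training example, or None if empty."""
    if not training_set:
        return None

    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    return min(training_set, key=lambda item: dist(item[0], example))[1]

# Labelled examples: two well-represented categories.
training = [
    ((0.9, 0.1), "graphic_violence"),
    ((0.8, 0.2), "graphic_violence"),
    ((0.1, 0.9), "video_game"),
    ((0.2, 0.8), "video_game"),
]

# A borderline clip sits between the clusters; with sparse training
# data for the rare class, it falls to the innocuous label.
print(nearest_label((0.45, 0.55), training))  # → video_game
```

Real systems use far richer models, but the underlying constraint is the same: without many labelled examples of a category, the classifier has nothing to pull a borderline example toward it.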
AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect. People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us. That’s why last year we more than doubled the number of people working on safety and security to over 30,000 people, including about 15,000 content reviewers, and why we encourage people to report content that they find disturbing.
Reporting
During the entire live broadcast, we did not receive a single user report. This matters because reports we get while a video is broadcasting live are prioritized for accelerated review. We do this because, if a video depicting real-world harm is still live, we have a better chance to alert first responders and try to get help on the ground.
Last year, we expanded this acceleration logic to also cover videos that had very recently been live, within the past few hours. Given our focus on suicide prevention, to date we have applied this acceleration only when a recently live video is reported for suicide.
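The triage rule described in the two paragraphs above can be sketched as a single decision function. The function name, the report-reason strings, and the "recently live" window are assumptions for illustration, not Facebook's actual code or thresholds.

```python
# Hypothetical sketch of the triage logic described above.
# The window length and category names are assumptions.

RECENTLY_LIVE_WINDOW_HOURS = 3  # assumed bound for "the past few hours"

def review_priority(is_live, hours_since_live_ended, report_reason):
    """Return 'accelerated' or 'standard' for a user report on a video."""
    if is_live:
        # Reports received while a video is broadcasting live
        # are always prioritized for accelerated review.
        return "accelerated"
    recently_live = (
        hours_since_live_ended is not None
        and hours_since_live_ended <= RECENTLY_LIVE_WINDOW_HOURS
    )
    if recently_live and report_reason == "suicide":
        # The expansion: recently live videos reported for suicide.
        return "accelerated"
    return "standard"

# A report arriving 12 minutes after a broadcast ends, for a reason
# other than suicide, would not qualify under this rule:
print(review_priority(False, 0.2, "violence"))  # → standard
```

Under a rule of this shape, expanding acceleration means widening the set of report reasons that take the second branch, which is the re-examination the next paragraph describes.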
In Friday’s case, the first user report came in 29 minutes after the broadcast began and 12 minutes after the live broadcast ended. In this report, and in a number of subsequent reports, the video was reported for reasons other than suicide, and as such it was handled according to different procedures. Learning from this, we are re-examining our reporting logic and experiences for both live and recently live videos in order to expand the categories that receive accelerated review.
Circulation of the Video
The video itself received fewer than 200 views when it was live, and was viewed about 4,000 times before being removed from Facebook. During this time, one or more users captured the video and began to circulate it. At least one of these was a user on 8chan, who posted a link to a copy of the video on a file-sharing site and we believe that from there it started circulating more broadly. Forensic identifiers on many of the videos later circulated, such as a bookmarks toolbar visible in a screen recording, match the content posted to 8chan.
