Facebook has explained in more detail its new test for combating revenge porn, following mass confusion over how exactly the system works and whether it puts users at a higher risk of abuse. The system, currently being piloted in Australia in partnership with the country’s eSafety Commissioner, allows users to preemptively upload nude photos directly to Facebook Messenger so that the company can create a digital fingerprint of sorts for each file and prevent it from being uploaded maliciously in the future.
Because the pilot test was first reported by the Australian Broadcasting Corporation, with no input from Facebook itself, many users found the notion of uploading their own nude photos directly to the social network a bit unsettling. While the initial report made clear that the system was not in fact counterproductive, since it gave Facebook the means to track the files across its network and block re-uploads, many people still walked away from the story bewildered.
Now, Facebook is clarifying how the system works via a blog post from Antigone Davis, the company’s global head of safety. First, a user must decide to upload the image or video they fear may be used by a malicious third party, like a vindictive ex-partner or an online harasser. This step has a necessary risk built in, but “it’s a risk we are trying to balance against the serious, real-world harm that occurs every day when people (mostly women) can’t stop NCII from being posted,” Facebook security chief Alex Stamos explained on Twitter. Stamos was using the abbreviation for “non-consensual intimate image,” more colloquially known as revenge porn.
From there, once the user has completed an online form through the website of Australia’s eSafety Commissioner, a member of Facebook’s Community Operations team reviews the image and then “hashes” it, creating a numerical representation of the image that Facebook says cannot be read by humans. The company considered blurring out images before they reached human reviewers, but decided against it because doing so could have led to legitimate images being accidentally hashed. So to be clear, someone at Facebook is indeed looking at the nude photos, though the company stresses that these are “specially trained representatives.”
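Facebook hasn’t said which hashing algorithm it uses, so purely as an illustration of what a human-unreadable numerical representation can look like, here is a minimal sketch of one common perceptual-hashing technique, the “difference hash” (dHash). Everything here, from the function name to the hash size, is illustrative rather than anything Facebook has confirmed.

```python
# Illustrative only: Facebook has not published its hashing algorithm.
# dHash is a common perceptual hash that captures an image's coarse
# structure rather than its exact bytes.
from PIL import Image

def dhash(image_path: str, hash_size: int = 8) -> int:
    """Return a 64-bit perceptual hash of the image at image_path."""
    # Shrink to (hash_size + 1) x hash_size grayscale pixels, throwing
    # away fine detail so small edits don't change the result much.
    img = (
        Image.open(image_path)
        .convert("L")
        .resize((hash_size + 1, hash_size), Image.LANCZOS)
    )
    pixels = list(img.getdata())

    # Each bit records whether a pixel is brighter than its right
    # neighbor, producing a compact, non-reversible fingerprint.
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits
```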
According to Stamos, “There are algorithms that can be used to create a fingerprint of a photo/video that is resilient to simple transforms like resizing.” So Facebook is saying there shouldn’t be easy workarounds like changing some basic aspect of the photo file to bypass the company’s detection system.
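To see why a simple transform shouldn’t be an easy workaround, the quick comparison below (reusing the hypothetical dhash() from the sketch above, plus made-up file names and a made-up matching threshold) shows that resizing an image changes a cryptographic hash like SHA-256 entirely, while the perceptual hash moves by only a few bits, so a similarity check against the stored fingerprint can still fire.

```python
# Continues the sketch above: assumes the dhash() function defined
# there and a hypothetical local file, photo.jpg.
import hashlib

from PIL import Image

def hamming(a: int, b: int) -> int:
    # Count the bits on which two 64-bit hashes disagree.
    return bin(a ^ b).count("1")

# Simulate an attacker re-saving the photo at half its dimensions.
img = Image.open("photo.jpg")
img.resize((img.width // 2, img.height // 2)).save("photo_resized.jpg")

# The cryptographic hashes of the two files share nothing...
for path in ("photo.jpg", "photo_resized.jpg"):
    with open(path, "rb") as f:
        print(path, hashlib.sha256(f.read()).hexdigest())

# ...but the perceptual hashes differ by only a few bits, so a
# threshold match against the stored fingerprint still succeeds.
THRESHOLD = 10  # maximum Hamming distance to count as a match (assumed)
if hamming(dhash("photo.jpg"), dhash("photo_resized.jpg")) <= THRESHOLD:
    print("resized re-upload would still be flagged")
```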
Stamos laid out some salient points in a November 9th Twitter thread: “We already have a mechanism for victims of NCII to report images that are posted on our products. This test is intended to help those victims who are being blackmailed by an abusive partner or criminal and who want to take action.”
“Once we hash the photo, we notify the person who submitted the report via the secure email they provided to the eSafety Commissioner’s office and ask them to delete the photo from the Messenger thread on their device,” Davis writes. “Once they delete the image from the thread, we will delete the image from our servers.” Facebook says it stores the file for only a brief period, and that what it retains is a blurred-out version that only a small number of employees on the Community Operations team have access to.
“To prevent adversarial reporting, at this time we need to have humans review the images in a controlled, secure environment,” Stamos further explained on Twitter. “We are not asking random people to submit their nude photos. This is a test to provide some option to victims to take back control. The test will help us figure out how to best protect people on our products and elsewhere.”