Study: Twitter removes deepfakes more quickly when reported as copyright violations

X is more likely to remove a deepfake if it is reported as a copyright violation. That is the finding of researchers at the University of Michigan: X, formerly Twitter, removed content reported as non-consensual intimate imagery only after several weeks, if at all.

As part of the study, the researchers posted 50 AI-generated nude photos on Twitter. They reported half of them as non-consensual depictions of nudity and the other half as copyright violations. Twitter removed all 25 images reported as copyright violations from the platform within 25 hours, and the accounts used to post them were temporarily suspended.

The images the researchers reported as non-consensual depictions of nudity were still online three weeks after being reported. The accounts that had posted them were neither suspended nor warned or notified in any way.

The likely explanation: in the United States, copyright infringement falls under the Digital Millennium Copyright Act (DMCA). The law, passed in 1998, was intended to create a legal framework for copyright protection in the digital sphere. Among other things, it requires platforms to respond promptly to reports of copyright infringement and to remove the content in question after review. This is evidently incentive enough for Twitter to act on such reports immediately.

Individual US states also have laws against the dissemination of non-consensual depictions of nudity, and efforts toward a federal law are underway. For now, however, Twitter and other platforms appear to lack any incentive to act on reports of non-consensual nudity and deepfakes at the same pace as copyright violations.

However, according to the paper’s authors, the DMCA route is not open to every victim of non-consensually published photos: the copyright in a photograph always belongs to the person who took it. If the photo was taken by someone else, the DMCA does not apply. Apart from that, as 404 Media reports, filing such a notice requires providing relatively extensive information. There are services that can be hired to submit such reports, but not every victim can afford the associated costs.

The study’s authors conclude that a law against the non-consensual distribution of intimate material should give internet platforms an incentive to respond promptly to such reports.

They cite the GDPR as a positive example: the General Data Protection Regulation changed how platforms handle users’ data and content, and the data protection requirements it formulates are important steps in the right direction, the authors write. The protection of intimate imagery, they argue, requires an equally binding legal framework. Peer review of the study is still pending.

In the European Union, the non-consensual publication of nude images and deepfakes falls under the Digital Services Act (DSA). The DSA has been fully in force since February 2024 and was implemented in Germany through the Digitale-Dienste-Gesetz. It requires platforms to moderate such content promptly.

Elon Musk’s short messaging service had already come into conflict with this law in late 2023: at that time, the EU Commission opened formal proceedings to investigate whether X might be violating the DSA in areas including risk management, content moderation, and dark patterns.


(KST)
