EU data protection line on AI: Banning ChatGPT is not possible



Large AI models are trained on vast amounts of personal data without the consent of those affected, which poses a major data protection problem. The European Data Protection Board (EDPB) has now issued an opinion on artificial intelligence (AI) models in light of the General Data Protection Regulation (GDPR). The actual review work by national supervisory authorities, however, is only just beginning.



The EU data protection authorities have defined a framework and developed a three-step test for determining when AI solutions are lawful. Civil society organizations and industry associations are now asking what this means, within the European Economic Area, for large language models trained on personal data and for the assistants and bots built on them. Given the many open points in the paper, it is unclear how the supervisory authorities will decide in practice. The EDPB does not rule out banning unlawfully created AI models or applications; at the same time, it also contemplates remedial measures, which may be technical or organizational in nature.


Data protection activists are now increasing the pressure on supervisory authorities. "Essentially, the EDPB says: if you follow the law, everything is fine," the civil rights organization noyb (None of Your Business), founded by Max Schrems, told Euractiv. "But as far as we know, none of the major players in the AI landscape complies with the GDPR." Privacy International submitted a statement to the EDPB last week, according to which models such as GPT, Gemini or Claude were "trained without a sufficient legal basis" using personal data and are unable to protect the rights of those affected.

The Italian data protection authority Garante has already temporarily blocked ChatGPT, arguing, among other things, that the large-scale storage and use of personal data for "training purposes" was non-transparent and not in line with the GDPR. Garante now has the option of reopening the case in light of the EDPB's requirements. Its French counterpart, the CNIL, has committed to "finalizing its recommendations in line with the EDPB opinion and ensuring the coherence of its work with this first harmonized European position." The main focus will be on web scraping, i.e. the mass reading of data from more or less open online sources. The EDPB itself intends to continue working on this point.

In a first reaction to the EDPB opinion, the data protection officers of Baden-Württemberg and Rhineland-Palatinate, Tobias Keber and Dieter Kugelmann, said: "This opinion does not make any statements about the admissibility of specific AI models that are already on the market." Rather, the committee has established "guardrails for the data protection review of AI systems and their design in individual cases." Fundamentally, it is "an important step towards legal certainty for developers and operators of AI systems, as well as for those whose data is processed in this context."

Deputy Federal Data Protection Commissioner Andreas Hartl stressed that the EDPB is enabling "responsible AI". Lawmakers are also called upon, however: "We would also like to see rules that are as clear as possible on when training data may be processed." The Federal Association of the Digital Economy (BVDW) was hardly enthusiastic: the EDPB had provided "little clarity and orientation", and interpreting and delineating the requirements remains complex and difficult. "Essential questions remain unanswered across more than 36 pages, creating further legal uncertainty for developers and users of AI." Appropriate, technically feasible solutions were lacking.

Meta boss Mark Zuckerberg found it "sad" that "at this point I basically have to tell my teams to deploy our new AI advances everywhere except the EU." He was responding to a comment by Meta's chief lobbyist Nick Clegg, according to whom the work of the EU supervisory authorities has been "disappointingly" slow. Clegg appealed to the national regulators to implement the new principles "quickly, pragmatically and transparently"; otherwise, necessary AI advances would not reach the EU.


(ds)

