Providers of advanced general-purpose artificial intelligence (AI) models that could pose systemic risks must submit a comprehensive "safety and security framework" before launching them on the European market. They must also commit to fully complying with European copyright rules. This is provided for in the first draft of a code of practice for operators such as OpenAI, Google, and Meta, which is intended to make it easier to implement the AI Act passed by the EU Parliament in March. The Act itself contains only very vague requirements for providers of large AI foundation models: they must, for example, publish a summary of the training data used, and in high-risk cases provide documentation and some form of instructions for use.
The 36-page paper was published by the EU Commission on Thursday. Independent experts appointed by the EU AI Office, including AI pioneer Yoshua Bengio, developed it in workshops with several hundred participants. From Germany, the Frankfurt information-law scholar Alexander Peukert and the systems designer Alexander Zacherl were involved. The code is designed as self-regulation and applies until a formal standard is established. Signatories must, for example, undertake to "enable a meaningful, independent expert assessment of the risks" of general-purpose AI models "throughout their entire life cycle" and to mitigate identified threats.
Additionally, providers must present their own taxonomy of systemic risks. In particular, they must monitor cyber attacks; chemical, biological, radiological, and nuclear threats; and loss of control. A second focus is on preventing "unexpected developments", manipulation and disinformation, as well as "large-scale illegal discrimination against individuals, communities or societies". AI models could have "far-reaching negative impacts" on public health, security, democratic processes, critical infrastructure, fundamental rights, environmental resources, human activity, or society as a whole, the draft continues. Such risks are to be assessed in advance.
“No crawling of piracy websites”
Signatories should also acknowledge that the use of copyrighted material requires the permission of rights holders "unless appropriate exceptions and limitations to copyright apply." They will be required to properly review copyright claims "before entering into a contract with a third party for the use of a dataset to develop general-purpose AI models." When relying on the text-and-data-mining exceptions contained in EU copyright law, they must guarantee that "they have lawful access to copyrighted material" and must comply with machine-readable reservations of rights, for example those expressed in a robots.txt file. "No crawling of piracy websites" is another demand. Overall, providers should be responsible for ensuring the greatest possible transparency.
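As an illustration of how such a robots.txt rights reservation works in practice: a site owner can disallow a specific AI crawler by its user-agent token, and a compliant crawler checks the file before fetching anything. The sketch below uses Python's standard `urllib.robotparser`; the site and paths are hypothetical, while "GPTBot" is the user-agent token OpenAI publishes for its crawler.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt expressing a rights reservation: the AI
# crawler "GPTBot" is barred from the whole site, other agents are not.
robots_txt = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler honoring the reservation checks before downloading content.
print(parser.can_fetch("GPTBot", "https://example.org/article.html"))       # False
print(parser.can_fetch("SomeOtherBot", "https://example.org/article.html")) # True
```

Whether such a robots.txt entry constitutes a legally effective reservation under the EU text-and-data-mining rules is exactly what the draft code addresses; the file itself is only advisory and binds no one technically.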
The document, which still lists various open questions, will now be discussed next week in four working groups set up by the AI Office and at a plenary meeting on 22 November. Stakeholders, Member State representatives, and European and international observers can also submit comments through a consultation until 28 November. The final version is expected to be presented in May 2025 and is to take effect by the summer at the latest. Observers criticize the timetable in particular as too ambitious, saying it hardly allows for effective participation. Last year, the German federal government argued that mandatory self-regulation, if possible in the form of binding codes of conduct and transparency requirements, would be sufficient for foundation models.
(MKI)