That’s the Parliament’s wishlist, not the actual text of the law. (At least, I don’t think that’s the version that got passed.)
Stuff like that is why it’s a good idea that parliamentarians aren’t the ones drafting the text, but an army of technocrats. It’s all too easy to vote a training requirement into a section about transparency when it’s 3 o’clock in the morning and you and everyone else in the committee want to go home.
Here’s the transparency article:
Article 52
Transparency obligations for certain AI systems
1. Providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to detect, prevent, investigate and prosecute criminal offences, unless those systems are available for the public to report a criminal offence.
2. Users of an emotion recognition system or a biometric categorisation system shall inform of the operation of the system the natural persons exposed thereto. This obligation shall not apply to AI systems used for biometric categorisation, which are permitted by law to detect, prevent and investigate criminal offences.
3. Users of an AI system that generates or manipulates image, audio or video content that appreciably resembles existing persons, objects, places or other entities or events and would falsely appear to a person to be authentic or truthful (‘deep fake’), shall disclose that the content has been artificially generated or manipulated.
However, the first subparagraph shall not apply where the use is authorised by law to detect, prevent, investigate and prosecute criminal offences or it is necessary for the exercise of the right to freedom of expression and the right to freedom of the arts and sciences guaranteed in the Charter of Fundamental Rights of the EU, and subject to appropriate safeguards for the rights and freedoms of third parties.
4. Paragraphs 1, 2 and 3 shall not affect the requirements and obligations set out in Title III of this Regulation.
Most AI uses out there only have these very limited requirements, mostly around transparency. There’s also Article 5, which lists outlawed practices: e.g. you may not deploy systems that use subliminal techniques to distort people’s behaviour.
Where things get strict is around uses like screening prospective employees, where you have to make sure the system isn’t picking up any unwarranted biases, e.g. judging by sex or nationality. Even stricter are the high-risk systems listed in Annex III, which are largely uses in administration, critical infrastructure, etc.
All in all, I’d say that as a first of its kind, the law is pretty darn good, in particular because it classifies requirements for systems not by the technology employed, but by their area of application. And the “likeness of natural persons” rule has an arts and freedom of expression exception, so that kind of stuff doesn’t even need disclosure.
I can mostly find myself agreeing with (or at least not having big issues with) all of the points, except for that one.
Let’s just hope they mean requiring a best effort, rather than outright preventing it in the first place.
Since the article doesn’t actually say what the rules and regulations are, here is a link:
https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Yeah, good luck designing that.
No way someone is reading this wall of text lol
Speak for yourself.
The lawmakers don’t even know how the internet works, and they’re supposed to write the laws around it? Sounds like politicians in general.
In other news, they also regulated that knives must be designed to prevent stabbing people, and guns must be designed to only shoot bad guys.