AI Legislative Guide
European Union (Europe)
Firm: Arthur Cox
Contributors: Rob Corbet
Has specific legislation, final regulations or other formal regulatory guidance addressing the use of AI in your jurisdiction been implemented (vs reliance on existing legislation around IP, cyber, data privacy, etc.)?

Yes.
Please provide a short summary of the legislation/regulations/guidance and explain how legislators aim to strike the balance between innovation and regulation.

The EU Artificial Intelligence Act ("EU AI Act") is the world's first comprehensive legal framework on AI. The EU AI Act aims to provide AI developers and deployers with clear requirements and obligations on specific uses of AI, while also seeking to reduce administrative and financial burdens for businesses developing or using AI (especially SMEs).

The EU AI Act classifies AI systems according to four risk categories: (i) unacceptable-risk AI systems, which are prohibited (e.g. AI systems used for social scoring by governments); (ii) high-risk AI systems, which are subject to the highest level of regulation under the EU AI Act (e.g. AI systems used for evaluating job candidates or staff performance); (iii) limited-risk AI systems, which are subject to a lower level of regulation (e.g. chatbots and voice assistants); and (iv) minimal-risk AI systems, which are not subject to the requirements of the EU AI Act as they pose little or no risk (e.g. spam filters).

The majority of the EU AI Act's obligations attach to high-risk AI systems. The requirements relating to such systems include conducting adequate risk assessments and implementing risk mitigation measures; ensuring high data quality standards; maintaining detailed documentation and record-keeping; and providing certain information about the AI system's intended use and capabilities. Most of these requirements fall on developers of high-risk AI systems, but deployers (i.e. users) of high-risk AI systems must also meet certain of them (e.g. requirements regarding transparency and data accuracy). The requirements relating to limited-risk AI systems are mainly focused on transparency (i.e. providing certain information about the AI system). The EU AI Act also contains specific provisions governing general-purpose AI ("GPAI") models, with additional requirements applying to GPAI models considered to present systemic risk under the EU AI Act.

The EU AI Act seeks to strike a balance between innovation and regulation by applying requirements to AI systems on a tiered basis (with less onerous requirements for lower-risk AI systems) and by specifically catering to certain needs of SMEs (e.g. giving SMEs priority access to AI regulatory sandboxes and mandating training/awareness campaigns on the EU AI Act specifically tailored to SMEs). The EU AI Act also excludes from its scope AI systems and models specifically developed and put into service for the sole purpose of scientific research and development.
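For technical readers, the tiered structure described above can be summarised in a short sketch. This is a purely illustrative model, assuming nothing beyond the tier names and examples given in the answer above; the type, attribute, and example names are our own shorthand and do not appear in the EU AI Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative labels for the EU AI Act's four risk tiers.

    These names are our own shorthand, not terms defined in the Act.
    """
    UNACCEPTABLE = "prohibited outright"          # e.g. government social scoring
    HIGH = "heaviest obligations"                 # e.g. evaluating job candidates
    LIMITED = "mainly transparency obligations"   # e.g. chatbots, voice assistants
    MINIMAL = "no EU AI Act obligations"          # e.g. spam filters

# Hypothetical example systems, drawn from the answer above.
EXAMPLES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "job-candidate evaluation tool": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```

The sketch captures only the tiering logic; in practice, classification under the Act turns on detailed statutory criteria and annexes, not on a simple lookup.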
Which agency regulates the use of AI in your jurisdiction?

The AI Office, established within the European Commission, is responsible for regulating AI in the European Union.