AI Legislative Guide

Finland (Europe)

Firm: Roschier, Attorneys Ltd.
Contributors: Johanna Lilja
Has specific legislation, final regulations or other formal regulatory guidance addressing the use of AI in your jurisdiction been implemented (vs reliance on existing legislation around IP, cyber, data privacy, etc.)?

No. Finland does not have standalone national legislation specifically governing the use of artificial intelligence. However, Finland is currently in the process of transposing the EU AI Act into national legislation.

The national implementation of the AI Act is being carried out in two phases. In the first phase, the authorities responsible for supervising compliance with the AI Act will be designated, and provisions will be enacted regarding their powers to impose sanctions for non-compliance. As part of this phase, a new Act on the Supervision of AI Systems will be adopted.

The government proposal was submitted to Parliament on 8 May 2025 and is still under consideration. As a result, the national provisions concerning the tasks of national authorities, the designation of notified bodies and the imposition of sanctions will not apply from 2 August 2025, the date set out in the AI Act's transition period; instead, they will take effect once the proposal has been adopted later in the autumn.

As the general application of the AI Act begins on 2 August 2026, the second phase will involve adopting national provisions implementing the remaining parts of the AI Act. This phase will cover, among other things, high-risk AI systems related to the safety components of critical infrastructure and the establishment of a national testing environment for AI systems, known as a regulatory sandbox, which supports innovation and legal certainty.

Please provide a short summary of the legislation/regulations/guidance and explain how legislators aim to strike the balance between innovation and regulation.

The EU AI Act applies a risk-based approach, classifying AI systems into four categories: unacceptable risk (prohibited), high risk, limited risk, and minimal risk. Most of the AI Act addresses high-risk AI systems, while limited-risk AI systems are subject to lighter transparency obligations (for example, developers must ensure that end-users are aware that they are interacting with AI), and minimal-risk systems remain unregulated.

The AI Act aims to support innovation and includes several key elements intended to strike a balance between innovation and regulation, such as:
- AI regulatory sandboxes foster innovation by providing a controlled environment in which developers can develop, train, test, and validate new AI systems before they are placed on the market or put into service.
- The AI Act recognises the value of open source in driving innovation and includes certain exemptions for developers of AI systems released under free and open-source licences.
- AI systems developed and used solely for scientific research and development are excluded from the scope of the AI Act. Additionally, for product-oriented research, the Act does not apply to AI systems until they are placed on the market or put into service.

Which agency regulates the use of AI in your jurisdiction?

According to the government's proposal, supervision of the AI Act will be divided among several authorities in Finland. The Finnish Transport and Communications Agency (Traficom) will serve as the central point of contact, responsible for liaising with EU cooperation bodies and supporting the other national authorities in this role. The Office of the Data Protection Ombudsman will, alongside its other supervisory tasks, have central responsibility for supervising prohibited AI-related practices.