This blog post is written in collaboration with the HR experts at Töölön Vire.
Artificial intelligence has become a natural part of everyday digital work for most people – and it brings plenty of benefits. But it also comes with risks, and the EU has taken notice. Since early 2025, the Union has been gradually introducing a regulatory framework for a technology that has taken the world by storm. The main provisions will take effect on 2 August 2026, and there are a few things you’ll want to have under control before then.
Preparing for compliance is not only a legal exercise. It is also a strategic leadership question. In many organisations, HR plays a central role in ensuring that AI use in recruitment, performance management and employee data processing is both compliant and ethically sound.
Who does this apply to?
- Organisations operating in an EU or EEA country.
- Anyone using AI systems classified as high-risk.
- Companies using AI in areas such as recruitment, critical infrastructure (including banking and insurance), or education.
- Developers and distributors of AI systems.
What’s the new main rule?
Even though the main provisions don’t take effect until 2 August 2026, it’s important to start preparing now.
The new rules primarily target high-risk AI systems. In practice, this means:
- Risk management: You must establish structured processes to identify, assess and mitigate risk.
- Data quality: If you use data to train AI systems, you need to ensure that the data is accurate, relevant and of high quality.
- Documentation: Your organisation must be able to provide technical documentation demonstrating compliance with the regulation.
- Transparency: Users must be informed about how the system works and how it affects them.
- Human oversight: AI systems must be meaningfully supervised by humans.
In practice, meaningful human oversight requires more than a name on a document. Organisations need clear governance models, defined accountability and trained supervisors who understand both the technology and its risks. This is where structured AI capability building becomes critical.
Four levels of risk
The AI Act operates with four levels of risk. General-purpose AI models, such as those behind ChatGPT and Copilot, are also covered by the regulation. Different requirements apply depending on the level of risk:
- Unacceptable risk: AI systems that pose an unacceptable level of risk are outright prohibited.
- High risk: The AI Act imposes strict requirements on high-risk systems, including ensuring that they do not discriminate and that proper safeguards are in place.
- Limited risk: If a system is considered to pose limited risk, you are required to inform users that they are interacting with AI.
- Minimal or no risk: Systems that pose little to no risk are generally not subject to strict regulation under the AI Act. However, establishing ethical guidelines for their use is still strongly recommended.
In short
We said it at the beginning of this post, and we’ll say it again: AI is incredibly powerful and useful, but it also comes with real risks. If you work systematically with the four risk levels and regularly evaluate the systems you use, you’ll be on solid ground. And remember: the human touch is still essential – and always will be.
Organisations that treat the AI Act merely as a compliance checklist risk missing the bigger opportunity. Those that integrate AI governance into their HR and leadership strategy can strengthen trust, transparency and long-term competitiveness at the same time.
About Töölön Vire
At Töölön Vire, they build HR services for small and medium-sized businesses — the way companies actually want them. Often in a way they hardly dared to believe was possible. At Töölön Vire, HR doesn’t stand for Human Resources. It stands for Human Results. They simplify processes, develop better practices, and support their clients in HR administration and leadership support — and when needed, also in leadership and communication. Always with clear goals in sight.
