The European Union Artificial Intelligence Act (EU AI Act) is a risk-based regulation introduced by the European Union (EU). Any organization that sells products or services into the EU market must comply with it. The Act aims to preserve harmony, fairness, and transparency in society so that companies building AI systems compete on a level playing field. In this respect it is similar to the GDPR, through which the EU set a global standard for how personal data is collected, managed, and used. Given how prevalent AI is in our day-to-day lives, it is important to understand the nuances of AI systems and how they shape our lives.
Starting February 2, 2025, the European Union will begin enforcing Chapters I and II of the Act, which require organizations to devote resources to AI literacy so that everyone involved is aware of the EU AI Act and its obligations.
EU AI Act Framework
The EU AI Act classifies AI systems into three risk categories:
- High Risk: AI systems that pose significant or unacceptable risks to individuals and society. These systems typically involve some kind of scoring or rating mechanism that carries inherent bias and can discriminate against individuals
- Limited Risk: AI systems that pose limited risk, such as chatbots, where people must be made aware that they are chatting with a machine
- Low Risk: AI systems that pose minimal or no risk. This includes spam filtering systems, weather forecasting, and so on
High-Risk AI System
A few AI systems are prohibited under the EU AI Act, including:
- Scoring or rating systems that use past data to make decisions that profoundly impact human lives. For example, an AI system that denies loans to people who share certain characteristics
- Social profiling systems that discriminate against certain groups of people based on various characteristics
- AI systems that manipulate or exploit the vulnerabilities of individuals to cause significant harm
High-risk AI systems are those that profile individuals using various data and make decisions based on those profiles that cause harm to individuals. These systems are heavily regulated under the EU AI Act.
Limited Risk AI System
A few AI systems are permitted for use in the EU because they are classified as limited-risk systems. These include:
- Chatbot: Generates a response to customer questions based on a knowledge source
- Virtual assistant: Generates audio or video output to assist with creative tasks
- Generative tool: Generates text, audio, and video for content creation or to assist humans
These AI systems are allowed to operate as long as customers are informed that they are interacting with an AI system and outputs are labeled as artificially generated content. In addition to these transparency notifications, it is important to monitor the outputs of the AI system and log all data for compliance purposes.
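For illustration, here is a minimal sketch of how a limited-risk chatbot might surface a transparency notice, label its outputs, and log interactions for compliance review. The function names, log format, and `compliance.log` path are hypothetical examples, not requirements of the EU AI Act or any specific library.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical transparency notice shown before a chat session starts.
AI_DISCLOSURE = "You are chatting with an AI assistant. Responses are artificially generated."

# Hypothetical compliance log; in practice this would go to a durable, access-controlled store.
logging.basicConfig(filename="compliance.log", level=logging.INFO, format="%(message)s")

def answer_with_transparency(user_question: str, generate_answer) -> str:
    """Wrap a response generator so every output is labeled and logged."""
    answer = generate_answer(user_question)
    labeled_answer = f"{answer}\n\n[AI-generated content]"  # label the output as artificially generated

    # Record the interaction so outputs can be reviewed for compliance purposes.
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "question": user_question,
        "answer": labeled_answer,
        "disclosure_shown": AI_DISCLOSURE,
    }))
    return labeled_answer

# Example usage with a stub standing in for the real model call.
if __name__ == "__main__":
    print(AI_DISCLOSURE)
    print(answer_with_transparency(
        "How do I reset my password?",
        lambda q: "Open Settings > Account > Reset password.",
    ))
```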
Low-Risk AI System
A few low-risk AI systems can be deployed with minimal transparency and technical documentation. These include:
- Spam filters: Detect and flag unwanted messages at scale in email and other systems
- Simple recommendation systems: Recommend relevant content based on existing content
- Video games: AI used inside video games for basic simulation and gameplay functionality
- Simple prediction systems: Weather forecasting, traffic jam prediction, and so on
Transparency documentation and standard testing procedures are recommended for these systems, but they are not mandatory.
General Purpose AI Models
Providers of general-purpose AI models, such as Large Language Models, can be classified as posing systemic risk, which brings additional obligations. The systemic-risk classification depends on factors such as the number of parameters in the model and the compute power used to train it. The EU AI Act provides a code of practice so that Large Language Models are built ethically and serve humanity responsibly, without discrimination or bias.
EU Enforcement
The EU has set up an AI Office to govern the implementation of the EU AI Act. A European Artificial Intelligence Board, composed of representatives from each member state, will be established to oversee the implementation of the regulation. The board will facilitate the development of practices that vendors, providers, and others can adhere to, and an advisory forum will provide technical expertise to the board by consulting with all stakeholders. An EU database containing a list of high-risk AI systems will also be created, along with other data required to monitor and enforce the regulation. The regulation applies equally to companies operating outside the EU that offer products and services to the EU region.
Role of a Technical Writer
To comply with the EU AI Act, providers of AI systems must disclose all necessary information about how their AI systems work, including details of the backend implementation. Technical writers will play an important role in documenting AI systems, which includes the following (a rough sketch of how this information might be organized appears after this list):
- Clear documentation on algorithms that are being used by the AI system
- Technical documentation on what data the system is trained on and what data is captured as inputs and outputs
- Ethical and safety documentation covering responsible AI practices and Intellectual Property
- Documentation covering essentials of cyber security
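As a rough illustration only, the information a technical writer gathers from these teams could be collected into a single structured working record before it is turned into formal documentation. The field names below are hypothetical and are not mandated by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemDocRecord:
    """Hypothetical working record a technical writer might assemble for an AI feature."""
    system_name: str
    algorithms_used: list[str]        # model families or techniques described by ML engineers
    training_data_sources: list[str]  # what data the system was trained on
    input_data_captured: list[str]    # what data is collected as input at runtime
    output_data_produced: list[str]   # what the system returns or stores
    responsible_ai_notes: str = ""    # ethical and safety considerations, Intellectual Property
    cybersecurity_notes: str = ""     # essentials such as access control and data protection

# Example usage for a hypothetical support chatbot feature.
record = AISystemDocRecord(
    system_name="Support chatbot",
    algorithms_used=["retrieval-augmented generation"],
    training_data_sources=["public product documentation"],
    input_data_captured=["customer question text"],
    output_data_produced=["generated answer", "source article links"],
)
```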
It is high time that technical writers learn the basics of AI technology so they can synthesize information from data scientists, legal teams, ML engineers, and product managers and gather the details needed for compliance with the EU AI Act.
Conclusion
Understanding the EU AI Act is essential for understanding your legal obligations and implementing appropriate compliance measures. The EU AI Act is a first step toward ensuring that software and model vendors practice responsible AI. Software vendors must be transparent about the AI systems deployed in their products and services by publishing the necessary technical documentation for their AI product features and functionality. In addition, vendors can publish AI literacy artifacts to help their customers learn AI fundamentals and keep up with technology advancements.