Approved in May 2024, the EU AI Act responds to growing concerns over the ethical, legal, and social impacts of AI technologies. The framework aims to give businesses in the AI space legal clarity and consistency. It applies to all member states of the European Union, and its scope extends beyond the EU in certain cases:
- It applies to providers and users of AI systems located within the EU, regardless of where the AI system is developed.
- It also applies to providers outside the EU if their AI systems are used in the EU market or affect EU citizens.
For data engineers, this means helping businesses operate confidently in the AI space by following clearer development guidelines. The goal is to build investor trust and spark innovation while keeping high-quality data and solid data governance practices at the forefront. By adhering to these regulations, you can help tackle issues like algorithmic bias and privacy risks, paving the way for responsible AI development that benefits society as a whole.
Key points of the AI Act
- Risk-Based Approach
- Guardrails for AI Systems
- Transparency and Accountability
- Conformity Assessment
Risk-Based Approach
As the landscape of AI technology evolves, so do the regulations that govern it. Under the new EU AI Act, companies are now required to notify users when they interact with specific AI systems—think chatbots, emotion recognition, and biometric categorization. For essential service providers in sectors like insurance and banking, this means conducting thorough impact assessments to understand how AI affects the fundamental rights of the public.
Consequently, data engineers' roles will be crucial in building infrastructure that is compliant with the updated regulations. There is a renewed emphasis on keeping meticulous documentation of data sources, preprocessing steps, and model architectures. Teams will also need to track user interactions with these AI features and integrate notification prompts that keep users informed; a minimal sketch of what that might look like follows.
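As an illustration, here is a minimal sketch of interaction logging paired with an AI disclosure notice. The names (log_ai_interaction, DISCLOSURE_NOTICE) and the event schema are hypothetical; the Act requires that users be informed, but it does not prescribe any particular mechanism.

```python
# Hypothetical sketch: logging chatbot interactions and surfacing an AI
# disclosure notice. Names and schema are illustrative, not from the Act.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_interaction_audit")

DISCLOSURE_NOTICE = "You are interacting with an AI system."  # shown to users

def log_ai_interaction(user_id: str, system_name: str, prompt: str) -> str:
    """Record an interaction event and return the disclosure notice."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "system": system_name,
        "prompt_chars": len(prompt),  # store metadata, not raw content
        "notice_shown": True,
    }
    logger.info(json.dumps(event))  # in practice, ship to a durable audit store
    return DISCLOSURE_NOTICE

# Usage: surface the notice before the chatbot responds.
print(log_ai_interaction("user-42", "support-chatbot", "Where is my order?"))
```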
Guardrails for AI Systems
Lawmakers have determined that tech companies creating cutting-edge AI systems now have some extra hoops to jump through. Going forward, they will be required to disclose the security and energy efficiency of their creations. To determine which companies are subject to stricter regulations, providers will be categorised based on the computing power needed to train their systems. Businesses will be required to present comprehensive documentation that allows transparent evaluation of an AI system's risks, performance, and alignment with regulatory requirements. This detailed technical documentation would include (a sketch of capturing these fields in code follows the list):
- Training and Dataset Information: Descriptions of the datasets used for training, including their size, sources, and representativeness
- Computing Power and Resources: The technical specifications of the resources utilized, which may include FLOPs (floating-point operations) or computational energy consumption if relevant for assessing systemic risks
- Capabilities and Limitations: Detailed explanations of the model’s design, intended use, performance, and known risks
- Conformity Documentation: A declaration of compliance with EU standards and a CE marking for high-risk AI systems
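To make this concrete, below is a minimal sketch of capturing those documentation fields as structured metadata alongside a training run. The field names and the FLOPs rule of thumb are assumptions for illustration; the Act specifies the content of the documentation, not its format.

```python
# Hypothetical sketch: technical-documentation fields captured as structured
# metadata. Field names and the FLOPs estimate are illustrative only.
import json
from dataclasses import dataclass, asdict

@dataclass
class ModelDocumentation:
    model_name: str
    dataset_sources: list[str]
    dataset_size_rows: int
    training_flops: float          # estimated compute used for training
    intended_use: str
    known_limitations: list[str]
    eu_conformity_declared: bool = False  # CE marking applies to high-risk systems

def estimate_training_flops(n_params: int, n_tokens: int) -> float:
    """Common rule of thumb for dense transformers: ~6 * params * tokens."""
    return 6.0 * n_params * n_tokens

doc = ModelDocumentation(
    model_name="claims-triage-v1",
    dataset_sources=["internal_claims_2019_2023", "public_weather_data"],
    dataset_size_rows=1_200_000,
    training_flops=estimate_training_flops(125_000_000, 2_000_000_000),
    intended_use="Prioritising insurance claims for human review",
    known_limitations=["Underrepresents claims filed before 2019"],
)
print(json.dumps(asdict(doc), indent=2))  # archive alongside the model artifact
```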
Initially, concerns arose that these regulations would stifle innovation. However, lawmakers have granted a phased implementation period beginning in 2024: the ban on prohibited AI uses takes effect after six months, and companies developing foundation models must comply within one year. High-risk systems have an extended timeline, with their obligations taking effect after 36 months. These measures aim to safeguard against potential issues stemming from unregulated development.
However, responsible AI practices may introduce performance and scalability considerations. Data scientists and engineers will need to allocate additional resources towards developing solutions that align with compliance standards. Techniques such as differential privacy or federated learning, which are used to protect sensitive data and promote collaboration while preserving privacy, can impact the performance and scalability of AI systems. Data engineers are now tasked with optimising system performance while maintaining responsible AI practices.
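As one example of that trade-off, here is a minimal sketch of the Laplace mechanism, a basic differential-privacy technique: noise calibrated to the query's sensitivity and a privacy budget epsilon is added to an aggregate before release. The dataset and parameter values are illustrative.

```python
# Minimal sketch of the Laplace mechanism for differential privacy.
# Values are illustrative; real deployments need careful budget accounting.
import numpy as np

rng = np.random.default_rng(seed=7)

def private_count(values: np.ndarray, epsilon: float) -> float:
    """Release a count with Laplace noise. The sensitivity of a count is 1."""
    sensitivity = 1.0
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return len(values) + noise

records = np.arange(10_000)          # stand-in for per-user records
for eps in (0.1, 1.0):
    print(f"epsilon={eps}: noisy count = {private_count(records, eps):.1f}")
# Smaller epsilon -> stronger privacy but noisier answers: the
# performance/utility trade-off described above.
```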
Transparency and Accountability
A new European AI Office, supported by a scientific panel of independent experts, will be established to oversee compliance, implementation, and enforcement. The AI Act states that non-compliance may result in fines ranging from roughly 1.5% to 7% of global annual turnover, depending on the severity of the offence. Explainable AI therefore takes on particular importance: lawmakers view it as crucial not just for safeguarding the interests of stakeholders and end users, but for protecting developers themselves.
With the spotlight on explainability, we might find ourselves diving deeper into interpretability methods for machine learning models. Data engineers may need to spend more time and effort incorporating these methods, which can increase overall development time and complexity, and they will face trade-offs between interpretability and predictive accuracy.
Businesses that get this wrong risk non-compliance with regulatory standards like the EU AI Act, loss of stakeholder trust, and ethical concerns around model biases. To mitigate these risks, they can adopt transparent AI practices: using tools like SHAP or LIME, conducting regular bias audits, and considering hybrid approaches that balance explainability with model performance. Prioritizing explainability not only ensures compliance but builds trust and mitigates ethical risks, making it a strategic necessity for organizations navigating the evolving AI landscape.
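As a concrete starting point, here is a minimal sketch of computing feature attributions with SHAP's TreeExplainer on a tree-ensemble model. The synthetic data and the choice of model are assumptions; the shap and scikit-learn calls used are standard APIs.

```python
# Minimal sketch: global feature importance via SHAP values for a tree model.
# Synthetic data stands in for a real feature pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Mean absolute SHAP value per feature gives a global importance ranking
# that can be logged as evidence of model transparency.
importance = np.abs(shap_values).mean(axis=0)
for i, imp in enumerate(importance):
    print(f"feature_{i}: {imp:.3f}")
```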
Conformity Assessment
Given their potential for significant harm to health, safety, fundamental rights, the environment, democracy, and the rule of law, the EU has tightened the reins on high-risk AI systems prior to market release or deployment. These systems must assess and mitigate risks, maintain usage logs, ensure transparency and accuracy, and incorporate human oversight. Adding to the accountability, the public can now voice concerns about these systems and demand explanations for decisions that affect their rights. Some AI applications, such as specific biometric categorization systems and practices like untargeted scraping of facial images or social scoring, have already been prohibited in the EU.
Data engineers will need to implement thorough risk assessments, maintain documentation of model design and training data, and use tools for continuous monitoring and compliance checks to ensure ongoing conformity; a minimal monitoring sketch follows below. To stay ahead of the game, businesses are advised to invest in ongoing training and education to navigate these shifting regulations, and to collaborate closely with data scientists, policymakers, and ethicists to mitigate errors. Only by embracing these challenges and leading the charge toward a responsible AI future can we make AI safer for everyone.
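As one example of continuous monitoring, the sketch below checks a deployed model's live accuracy against a declared baseline and flags drift for human review. The threshold, metric choice, and alerting hook are all assumptions; the Act calls for monitoring and human oversight but does not mandate a specific mechanism.

```python
# Hypothetical sketch: flagging accuracy drift on a deployed model so a human
# can review it. Threshold and alerting mechanism are illustrative choices.
import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("conformity_monitor")

BASELINE_ACCURACY = 0.92   # accuracy declared in the conformity documentation
MAX_DROP = 0.05            # tolerated degradation before escalation

def check_drift(recent_correct: int, recent_total: int) -> bool:
    """Return True (and alert) if live accuracy falls below the tolerance."""
    live_accuracy = recent_correct / recent_total
    drifted = live_accuracy < BASELINE_ACCURACY - MAX_DROP
    if drifted:
        logger.warning(
            "Accuracy %.3f fell below baseline %.3f - %.2f; escalate for human review.",
            live_accuracy, BASELINE_ACCURACY, MAX_DROP,
        )
    return drifted

# Usage: evaluate the most recent batch of labelled predictions.
check_drift(recent_correct=830, recent_total=1000)  # 0.83 -> triggers alert
```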
Useful Links & Sources:
European Parliament. (2023, December 19). EU AI Act: First regulation on artificial intelligence. https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
European Parliament. (2024, March 13). Artificial Intelligence Act: MEPs adopt landmark law. https://www.europarl.europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law
EU Artificial Intelligence Act. (n.d.). Article 13: Transparency and Provision of Information to Deployers. https://artificialintelligenceact.eu/article/13/
EU Artificial Intelligence Act. (n.d.). Article 53: Obligations for Providers of General-Purpose AI Models. https://artificialintelligenceact.eu/article/53/
European Council. (2024, May 21). Artificial intelligence (AI) act: Council gives final green light to the first worldwide rules on AI. https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-final-green-light-to-the-first-worldwide-rules-on-ai/