Everything You Need to Know About the EU AI Act and How Software Visibility Can Support Compliance
It’s been two years since the release of OpenAI’s ChatGPT, a tool that quickly found its way into both personal and business settings. Since then, numerous AI models have been deployed globally across business sectors. A recent global study found that nearly a third of employees already use Generative AI in their work, and a further 32% expect to use it soon. The development of AI services outpaced legislation, but the EU AI Act is changing that. The Act will have sweeping implications for implementing and using AI across the EU, as well as for organizations outside the EU that deploy their AI systems there. In this article, we’ll unpack the Act’s key provisions, the implementation timeline, and the possible benefits of software inventory technology as a first step toward compliance.
Critical Components of the EU AI Act
The Purpose of the Act: The EU AI Act has several broad purposes, which include ensuring that AI technology is “safe, transparent, traceable, non-discriminatory and environmentally friendly.” Additionally, the Act aims to set forth a uniform and technology-neutral definition of AI, as well as to establish principles of human rather than automated oversight of AI to prevent harmful outcomes.
As the Act rolls out, it may prompt the adoption of country-specific regulations within the EU on the deployment and use of AI systems. As happened with the GDPR, it may also prompt legislators outside the EU to follow suit.
The Act’s Definition of an AI System: The Act defines an AI system as “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
Who the Act Affects: The Act identifies two main parties to whom its requirements will apply. The first category is providers, defined as those who have or intend to place AI systems on the market. The second category is users of AI systems, defined as natural or legal persons who deploy an AI system professionally. The provisions of the Act apply to those who intend to place AI systems on the market or in service within the EU, regardless of whether they are based in the EU or elsewhere.
Different Rules for Different Risk: The Act classifies AI systems into four categories based on risk: unacceptable, high, limited, and minimal risk. Each category has distinct compliance and transparency obligations.
Unacceptable Risk Systems: Unacceptable-risk AI systems include those used by governments for social scoring, those that expand facial recognition databases by scraping images from CCTV footage or the internet, and those that use subliminal techniques to manipulate behavior in harmful ways. These systems are prohibited outright by the Act.
High-Risk Systems: High-risk systems include AI used for biometrics, critical infrastructure, law enforcement, education and vocational training, and more. Most of the Act addresses the regulation of these high-risk systems, which require rigorous compliance practices. Some requirements include:
- Establishing and maintaining a risk management system
- Data governance and quality controls for training, validation, and testing data
- Technical documentation and automatic record-keeping
- Transparency and provision of information to deployers
- Human oversight
- Appropriate levels of accuracy, robustness, and cybersecurity
The Act also outlines specific compliance obligations for providers of high-risk AI systems, as well as for importers and distributors of these systems in the EU.
Limited-Risk Systems: The third categorization in the Act encompasses limited-risk systems that generate synthetic audio, image, video, or text content, such as chatbots. These systems must meet specific transparency obligations, such as making their output detectable as artificially generated or manipulated, informing users that they are interacting with AI, marking their output as AI-generated, and providing records of any copyrighted material used to train the model. (Minimal-risk systems, the fourth category, face no new obligations under the Act.) Meeting these obligations will be particularly important for companies that want to stay on sound legal footing when using AI tools.
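To make these obligations concrete, here is a minimal Python sketch of the "inform and mark" duties for a limited-risk chatbot. It is illustrative only: generate_reply and the model identifier are hypothetical stand-ins, and the Act does not prescribe any particular format.

```python
# Minimal sketch of limited-risk transparency duties: disclose the AI
# interaction to the user and mark the output as machine-generated.

AI_DISCLOSURE = "You are chatting with an AI assistant."

def generate_reply(prompt: str) -> str:
    # Hypothetical stand-in for whatever model call an organization uses.
    return f"Echo: {prompt}"

def respond(prompt: str) -> dict:
    """Return a reply together with machine-readable provenance metadata."""
    return {
        "disclosure": AI_DISCLOSURE,       # user is told they are talking to an AI
        "text": generate_reply(prompt),
        "metadata": {
            "ai_generated": True,          # output marked as artificially generated
            "model": "example-model-v1",   # hypothetical model identifier
        },
    }

if __name__ == "__main__":
    print(respond("What are my delivery options?"))
```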
The Intersection of Transparent AI and Data Privacy
Limited-risk AI systems are mostly subject to transparency and reporting obligations, but these obligations intersect in a unique way with the GDPR and data privacy. A company may achieve EU AI Act compliance through transparent reporting, yet that same reporting will likely bring to light automated data processing taking place within its limited-risk AI systems that raises GDPR concerns.
Several companies have already been penalized under the GDPR for breaching its provisions through the use of AI tools. According to global law firm DLA Piper, EU data protection authorities (DPAs) have acted against companies in Italy, France, the Netherlands, and elsewhere for “lack of legal basis to process personal data or special categories of personal data, lack of transparency, automated decision-making abuses, failure to fulfil data subject rights and data accuracy issues” related to the use of AI systems.
Assessing AI tool risk across a business will therefore involve more than EU AI Act compliance alone, because the Act intersects so closely with the GDPR and other data protection regulations.
Penalties for Non-Compliance
The EU AI Act employs a three-tier penalty structure that targets operators of AI systems, providers of general-purpose AI models, and Union institutions, agencies, and bodies. The Act sets a steep maximum penalty for non-compliance: €35,000,000 or 7% of annual global turnover, whichever is higher. For a company with €1 billion in annual turnover, for example, the 7% tier would amount to €70 million. Penalties also apply to the provision of incorrect, incomplete, or misleading information regarding high-risk systems.
Getting Prepared: A Compliance Timeline
The Act entered into force on August 1st, 2024, meaning companies must start evaluating their paths to compliance now. Application timelines vary based on the risk associated with the AI system.
- The provisions relating to unacceptable-risk systems will apply six months after entry into force.
- Codes of practice will apply nine months after entry into force.
- Transparency requirements for general-purpose AI systems will apply 12 months after entry into force.
- Because of their extensive nature, provisions relating to high-risk systems will apply 36 months after entry into force.
Help or Hindrance to AI Innovation?
As with any robust regulatory effort, some skeptics believe that the Act's bureaucratic controls will inhibit progress in AI development. Other experts question whether the large corporations most in need of regulation will simply absorb the financial penalties for non-compliance, escaping the weight of the rules, while SMBs get caught up in regulatory red tape.
SMB business owner and AI/ML researcher Jeffrey Rivero believes that the addition of a research provision to the AI Act could help to strike a balance between fostering ongoing innovation of advanced AI technology and ensuring transparent and ethical systems.
“Recent advancements in OpenSource and OpenWeight AI systems have opened the door to groundbreaking applications and innovations. However, as the European Union introduces new AI regulations, it is crucial to ensure that these measures do not unintentionally hinder creativity, research, and innovation in this field.
With careful consideration and transparency, these regulations can elevate AI systems to new heights, achieving a delicate balance between progress and accountability. Crucially, incorporating a research provision into the new EU AI Safety Act would provide a vital safeguard, allowing for the continued development of more sophisticated models and driving progress in AI research.
By finding this balance, we can unlock the full potential of AI while ensuring that these powerful technologies are developed and deployed responsibly. This will enable us to harness the benefits of AI while protecting individuals, communities, and society as a whole.”
- Jeffrey Rivero, Founder of Check AI
In the Gartner® Hype Cycle™ for Emerging Technologies 2024, Generative AI is poised to enter the Trough of Disillusionment after reaching a fever pitch at the Peak of Inflated Expectations. While the EU AI Act's regulations may affect the pace at which AI products reach the market, Xensam's AI Product Manager, Sonny Mir, believes the Act is a necessary and vital turning point for the industry, one that will have lasting positive impacts.
“Transparency is incredibly important for ethical AI. The Act aims to demystify AI systems, making it easier for users to understand how these systems operate and form conclusions. The Act's focus on clear communication about how AI systems interact with users is a significant change.”
- Sonny Mir, AI Product Manager at Xensam
The Act also emphasizes the importance of data quality, which addresses a significant issue in AI ethics. Mir believes this emphasis on data integrity will drive major improvements in how AI is developed and deployed.
“The Act will empower us, the people, and help build trust in AI technologies, leading to broader acceptance and more responsible use. It's a chance to create AI systems that are ethical and resilient enough to withstand public examination.”
- Sonny Mir, AI Product Manager at Xensam
Software Visibility Can Support Compliance
As companies assess their unique paths to compliance, an inventory of all AI software will be a critical starting place. Lack of visibility is a notable barrier, as these tools are often used without authorization from IT departments. In a recent survey of 14,000 employees across 14 countries, 55% of respondents reported using AI tools at work without approval.
Another survey found that 7 in 10 workers who use AI tools like ChatGPT do so without consent from their organization, raising significant data privacy concerns. Large companies including JPMorgan Chase, Apple, and Spotify have opted to ban or limit the use of AI software such as ChatGPT and GitHub Copilot. JPMorgan Chase reported that it could not determine “how many employees were using the chatbot or for what functions they were using it” before instituting a ban. The consequences of unauthorized use of higher-risk AI systems could be substantial.
With the right software inventory capabilities, companies can ensure that AI asset usage is detected, assessed, and brought into compliance on an ongoing basis. Xensam's VP of International Sales, Alex Geuken, believes the SAM department will take center stage in supporting organizational efforts to reach compliance.
“With the new EU AI Act, SAM managers get another task. On top of compliance, optimization, SaaS management, GDPR, CSRD, and all other related tasks, they now have a significant role in the oversight of AI tools together with the Security and Legal departments.
You can't manage what you can't see, and the SAM department will answer critical questions like: Do we have visibility of what AI tools are installed or used? Who is using the AI tool? How much is the tool being used? What is it being used for? What data is being shared with the AI provider? Where is the data stored? This kind of visibility will be crucial to managing AI assets under the EU AI Act.”
- Alex Geuken, Xensam
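As a rough illustration of the "who, what, how much" questions Geuken raises, the Python sketch below aggregates hypothetical metering records into a per-tool usage summary. The record format is an assumption made for illustration; a real SAM platform would supply inventory and metering data in its own schema.

```python
# Sketch: answer "who is using which AI tool, and how much?" from
# hypothetical (user, application, minutes used) metering records.
from collections import defaultdict

usage_records = [
    ("alice", "ChatGPT", 120),
    ("bob", "GitHub Copilot", 45),
    ("alice", "GitHub Copilot", 30),
]

def usage_by_tool(records):
    """Aggregate distinct users and total usage minutes per application."""
    summary = defaultdict(lambda: {"users": set(), "minutes": 0})
    for user, app, minutes in records:
        summary[app]["users"].add(user)
        summary[app]["minutes"] += minutes
    return summary

for app, stats in usage_by_tool(usage_records).items():
    print(f"{app}: {len(stats['users'])} user(s), {stats['minutes']} minutes")
```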
Steps to Get Compliant with the EU AI Act
As companies become familiar with the new legislation and evaluate which provisions are relevant to their daily operations, the following steps will help them prepare.
Visibility Comes First: Visibility of all software assets is crucial to building a compliance roadmap, so investing in technology that helps you detect applications, whether authorized or not, is critical. As Lindsey Wilkinson reports for CIO Dive, Forrester analysts note that an inventory of tools can raise critical red flags around risk categorization.
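As a minimal sketch of what this first step can look like in practice, the snippet below compares discovered applications against a known-AI-tools list and an approved list to surface unsanctioned installs. All tool names and lists here are assumptions for illustration, not output from any particular product.

```python
# Sketch: flag "shadow AI" by intersecting discovered applications with a
# list of known AI tools and subtracting the IT-approved set.

KNOWN_AI_TOOLS = {"ChatGPT", "GitHub Copilot", "Midjourney", "Claude"}
APPROVED_AI_TOOLS = {"GitHub Copilot"}  # hypothetical sanctioned list

def flag_shadow_ai(discovered_apps):
    """Return AI applications found in the estate but not approved by IT."""
    return (discovered_apps & KNOWN_AI_TOOLS) - APPROVED_AI_TOOLS

discovered = {"ChatGPT", "GitHub Copilot", "Slack", "Midjourney"}
print(flag_shadow_ai(discovered))  # ChatGPT and Midjourney (set order varies)
```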
Define AI Use Cases: Regulation of AI tools differs based on the purposes for which they are used within a business. Once you know which AI tools your organization uses, you can define the specific use case for each of them.
Determine Risk Categorization: Determining the risk category will be more straightforward once you know what you have and what it’s used for. The Act includes extensive definitions of each risk category and use case.
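A simple lookup table can make provisional categorization repeatable and auditable, as in the sketch below. The keyword-to-tier mapping is a simplification assumed for illustration; an actual determination requires legal review against the Act's annexes.

```python
# Sketch: map documented use cases to provisional EU AI Act risk tiers.
# Illustrative only; not a legal determination.

RISK_BY_USE_CASE = {
    "social scoring": "unacceptable",
    "biometric identification": "high",
    "recruitment screening": "high",
    "customer-service chatbot": "limited",
    "spam filtering": "minimal",
}

def categorize(use_case):
    """Return the provisional risk tier, or flag the case for legal review."""
    return RISK_BY_USE_CASE.get(use_case, "unclassified - refer to legal review")

for uc in ("customer-service chatbot", "recruitment screening", "code completion"):
    print(f"{uc}: {categorize(uc)}")
```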
Evaluate Codes of Conduct and Transparency Guidelines: While many companies may not use high-risk AI tools, assessing voluntary or mandatory transparency guidelines and codes of conduct is highly advisable. Safe and transparent use of AI is the future; preparing now will prevent compliance headaches later.
Document Organizational Use of AI and Compliance Measures: It is crucial to maintain ongoing documentation of all AI systems, their use cases, their risk categorization, and any mandatory regulations and transparency guidelines your company adheres to for each of them. This type of documentation provides a clear picture of compliance and empowers organization-wide education on how your company uses AI tools, now and in the future, under the EU AI Act.
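One lightweight way to keep such documentation consistent is a structured register entry per AI system that ties the earlier steps together, as in the hypothetical sketch below; the field names are assumptions rather than anything the Act prescribes.

```python
# Sketch: one register entry per AI system, linking the tool to its use
# case, provisional risk tier, owner, and attached compliance measures.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    risk_tier: str
    owner: str
    compliance_measures: list = field(default_factory=list)

record = AISystemRecord(
    name="GitHub Copilot",
    use_case="code completion",
    risk_tier="limited",
    owner="Engineering",
    compliance_measures=["usage policy signed", "output marked as AI-assisted"],
)
print(record)
```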
Conclusion
The widespread adoption of AI across business sectors has enabled leaps in productivity and innovation. Still, as with any new advancement, it has raised global questions about fair, transparent, and ethical implementation. The EU AI Act is the first legislation to regulate the development and use of AI systems at such a large scale and will have significant implications for the EU and beyond. Companies must start assessing their use of AI tools to ensure compliance. Software visibility is the crucial first step to understanding how AI is adopted across the business.
Connect with Jeffrey Rivero >>
Connect with Sonny Mir >>
Connect with Alex Geuken >>