Everything You Need to Know About the EU AI Act and How Software Visibility Can Support Compliance

2024-09-11

Unacceptable-Risk Systems: These include AI systems used by governments for social scoring, systems that expand facial recognition databases by scraping images from CCTV or the internet, and systems that use subliminal techniques to manipulate behavior in harmful ways. The Act prohibits these outright.

 

The Act also outlines specific compliance requirements for providers of high-risk AI systems, as well as for importers and distributors of these systems in the EU.
 

Limited-Risk Systems: The final categorization in the Act encompasses limited-risk systems that generate synthetic audio, image, video, or text content, such as chatbots. These systems must meet specific transparency obligations: making their output detectable as artificially generated or manipulated, informing users that they are interacting with AI, marking output as AI-generated, and providing records of any copyrighted material used to train the model. This will be particularly important for companies that want to stay legally sound when using AI tools.

 

The Intersection of Transparent AI and Data Privacy  

Although limited-risk AI systems are mostly subject to transparency and reporting guidelines, those obligations intersect with the GDPR and data privacy in a distinctive way. A company may achieve EU AI Act compliance through transparent reporting, yet that very reporting will likely surface automated data-processing events within limited-risk AI systems that raise GDPR concerns.

Several companies have already been penalized under the GDPR for breaches involving AI tools. According to global law firm DLA Piper, EU data protection authorities (DPAs) have acted against companies in Italy, France, the Netherlands, and elsewhere for “lack of legal basis to process personal data or special categories of personal data, lack of transparency, automated decision-making abuses, failure to fulfil data subject rights and data accuracy issues” related to the use of AI systems.

Assessing AI tool risk across a business will therefore be more complex than EU AI Act compliance alone, because the Act intersects so closely with the GDPR and other data protection regulations.

 

Penalties for Non-Compliance  

The EU AI Act employs a three-tier penalty structure that targets operators of AI systems, providers of general-purpose AI models, and Union institutions, agencies, and bodies. The Act sets a steep maximum penalty for non-compliance: €35,000,000 or 7% of annual global turnover, whichever is higher. Penalties also apply for providing incorrect, incomplete, or misleading information regarding high-risk systems.

 

Getting Prepared: A Compliance Timeline 

The Act entered into force on August 1st, 2024, meaning companies must start evaluating their paths to compliance now. Application timelines vary based on the risk associated with the AI system.

 

 

Help or Hindrance to AI Innovation?  

As with any robust regulatory effort, some skeptics believe the Act's bureaucratic controls will inhibit innovation in AI development. Other experts question whether the large corporations that most need regulation will simply absorb the financial penalties for non-compliance, escaping the weight of the rules while SMBs get caught up in regulatory red tape.


SMB business owner and AI/ML researcher Jeffrey Rivero believes that the addition of a research provision to the AI Act could help strike a balance between fostering ongoing innovation in advanced AI technology and ensuring transparent and ethical systems.

 

“Recent advancements in OpenSource and OpenWeight AI systems have opened the door to groundbreaking applications and innovations. However, as the European Union introduces new AI regulations, it is crucial to ensure that these measures do not unintentionally hinder creativity, research, and innovation in this field. 

 

With careful consideration and transparency, these regulations can elevate AI systems to new heights, achieving a delicate balance between progress and accountability. Crucially, incorporating a research provision into the new EU AI Safety Act would provide a vital safeguard, allowing for the continued development of more sophisticated models and driving progress in AI research.

By finding this balance, we can unlock the full potential of AI while ensuring that these powerful technologies are developed and deployed responsibly. This will enable us to harness the benefits of AI while protecting individuals, communities, and society as a whole.”

- Jeffrey Rivero, Founder of Check AI

 

In the Gartner® Hype Cycle for Emerging Technologies 2024, Generative AI is poised to enter the Trough of Disillusionment after reaching a fever pitch at the Peak of Inflated Expectations. While the regulations of the EU AI Act may contribute to changes in speed of delivery across the market, Xensam’s AI Product Manager, Sonny Mir, believes that the Act is a necessary and vital turning point in the industry that will have lasting positive impacts.

 

“Transparency is incredibly important for ethical AI. The Act aims to demystify AI systems, making it easier for users to understand how these systems operate and form conclusions. The Act's focus on clear communication about how AI systems interact with users is a significant change.”

- Sonny Mir, AI Product Manager at Xensam

 

The Act also emphasizes the importance of data quality, which addresses a significant issue in AI ethics. Mir believes that emphasis on data integrity will likely drive major improvements in AI development and deployment in the future. 

 

“The Act will empower us, the people, and help build trust in AI technologies, leading to broader acceptance and more responsible use. It's a chance to create AI systems that are ethical and resilient enough to withstand public examination.”

- Sonny Mir, AI Product Manager at Xensam

 

Software Visibility Can Support Compliance  

As companies assess their unique path to compliance, an inventory of all AI software will be a critical starting place. Lack of visibility is a notable barrier, as these tools are often used without authorization from IT departments. In a recent survey of 14,000 employees across 14 countries, 55% of respondents reported using AI tools without consent.

Another survey found that 7 in 10 workers who use AI tools like ChatGPT do so without consent from their organization, raising significant data privacy concerns. Large companies such as JPMorgan Chase, Apple, and Spotify have opted to ban or limit the use of AI software, including ChatGPT and GitHub’s Copilot. JPMorgan Chase reported that it could not determine how many employees were using the chatbot, or for what functions, before instituting a ban. The consequences of unauthorized use of higher-risk AI systems could be substantial.
 

 

With the right software inventory capabilities, companies can ensure that AI asset usage is detected, assessed, and brought into compliance on an ongoing basis. Xensam’s VP of International Sales, Alex Geuken, believes the SAM department will take center stage in supporting organizational efforts to reach compliance.

 

“With the new EU AI Act, SAM managers get another task. On top of compliance, optimization, SaaS management, GDPR, CSRD, and all other related tasks, they now have a significant role in oversight of AI tools, together with Security and Legal departments.

You can't manage what you can't see, and the SAM department will answer critical questions like: Do we have visibility of what AI tools are installed or used? Who is using the AI tool? How much is the tool being used? What is it being used for? What data is being shared with the AI provider? Where is the data stored? This kind of visibility will be crucial to managing AI assets under the EU AI Act.”

- Alex Geuken, Xensam
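
In principle, the first of those visibility questions — what AI tools are installed? — amounts to cross-checking a software inventory against a watch list. Here is a minimal illustrative sketch; the tool names, the `installed_software` list, and the matching logic are all hypothetical, not Xensam's actual detection method:

```python
# Hypothetical watch list of AI applications to flag during an inventory scan.
KNOWN_AI_TOOLS = {"chatgpt", "github copilot", "midjourney", "claude"}

def find_ai_tools(installed_software: list[str]) -> list[str]:
    """Return installed applications that appear on the AI watch list."""
    return sorted(
        app for app in installed_software
        if app.lower() in KNOWN_AI_TOOLS
    )

# Example: a small (hypothetical) inventory snapshot.
inventory = ["Slack", "GitHub Copilot", "Excel", "ChatGPT"]
print(find_ai_tools(inventory))  # ['ChatGPT', 'GitHub Copilot']
```

A real SAM platform would draw on its own discovery data and normalize application names, but the core operation — matching discovered software against a curated list of AI tools — is the same.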

 

Steps to Get Compliant with the EU AI Act 

These are crucial steps to prepare as companies become familiar with the new legislation and evaluate which provisions are relevant to their daily operations. 

 

Visibility Comes First: Visibility of all software assets is crucial to building a compliance roadmap. Investing in technology that helps you detect applications, whether authorized or not, is critical. According to analysts at Forrester, an inventory of tools can raise critical red flags around risk categorization.

 

 

Define AI Use Cases: Regulation of AI tools will differ based on the purposes for which they are used within businesses. After you know what AI tools your organization uses, you can define the specific use case of these tools.  

 

Determine Risk Categorization: Determining the risk category will be more straightforward once you know what you have and what it’s used for. The Act includes extensive definitions of each risk category and use case.  
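
Once use cases are defined, the categorization step can be thought of as a lookup from use case to risk tier. The tier names below follow the Act; the example use-case mappings are purely illustrative and are not legal guidance:

```python
# The Act's four risk tiers, from most to least restricted.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

# Hypothetical mapping of example use cases to tiers (illustrative only).
USE_CASE_RISK = {
    "social scoring": "unacceptable",
    "cv screening for recruitment": "high",
    "customer support chatbot": "limited",
    "spam filtering": "minimal",
}

def categorize(use_case: str) -> str:
    """Look up a use case's risk tier; unknown cases need human review."""
    return USE_CASE_RISK.get(use_case.lower(), "needs manual review")

print(categorize("Customer support chatbot"))  # limited
```

The point of the sketch is the default branch: any use case not explicitly classified should route to manual (legal) review rather than silently passing as low risk.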

 

Evaluate Codes of Conduct and Transparency Guidelines: While many companies may not use high-risk AI tools, assessing voluntary or mandatory transparency guidelines and codes of conduct is highly advisable. Safe and transparent use of AI is the future; preparing now will prevent compliance headaches later.

 

Document Organizational Use of AI and Compliance Measures: It is crucial to maintain ongoing documentation of all AI systems, their use cases, risk categorization, and any mandatory regulations and transparency guidelines your company undertakes about them. This type of documentation provides a clear picture of compliance. It empowers organization-wide education on how your company utilizes AI tools now and in the future, as regulated by the EU AI Act. 
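
The documentation step above could be kept as a simple structured record per AI system, serialized for audit trails. A minimal sketch follows; the field names and example values are hypothetical, not a format prescribed by the Act:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class AISystemRecord:
    """One documentation entry per AI system in use (illustrative schema)."""
    name: str
    use_case: str
    risk_tier: str
    transparency_measures: list = field(default_factory=list)
    last_reviewed: str = ""  # ISO date of the last compliance review

record = AISystemRecord(
    name="Support chatbot",
    use_case="customer support",
    risk_tier="limited",
    transparency_measures=["AI disclosure banner", "output labeling"],
    last_reviewed="2024-09-01",
)

# Serialize to JSON so the record can live in an audit log or registry.
print(json.dumps(asdict(record), indent=2))
```

Keeping such records in a machine-readable form makes it straightforward to answer "what AI do we run, why, and at what risk tier" when regulators or internal auditors ask.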

 

Conclusion 

The widespread adoption of AI across business sectors has enabled leaps in productivity and innovation. Still, as with any new advancement, it has raised global questions about fair, transparent, and ethical implementation. The EU AI Act is the first legislation to regulate the development and use of AI systems at such a large scale and will have significant implications for the EU and beyond. Companies must start assessing their use of AI tools to ensure compliance. Software visibility is the crucial first step to understanding how AI is adopted across the business.  

Connect with Jeffrey Rivero >>

Connect with Sonny Mir >> 

Connect with Alex Geuken >>

 