
EU AI Act: What the artificial intelligence laws mean for businesses

Learn about the European Union's new artificial intelligence legislation and how to prepare your business for the EU AI Act.

Artificial Intelligence (AI) is not just about innovation—it’s a fundamental business driver.

The European Union (EU) wants to create better conditions for the development and use of AI, so it’s setting a new standard by introducing the European Union Artificial Intelligence Act (EU AI Act).

In this article, we explore what the EU AI Act means for you, whether at the helm of a startup or guiding your scale-up financial operations.

Don’t think of it just as a set of rules and regulations; it’s about understanding how this Act can shape your AI strategies and investments now and in the future.

Here’s what we cover:

  • What is the EU AI Act?
  • Why is the Act coming into force?
  • What the Act covers—scope and provisions
  • What are the practical implications of the EU AI Act?
  • EU AI Act: What your business needs to consider
  • Potential compliance challenges
  • Should accounting and finance teams be concerned about the new laws?
  • 7 tips to help businesses prepare for and follow the EU AI Act
  • Final thoughts

What is the EU AI Act?

Likely to come into force in 2025 after a final vote, the EU AI Act is a legislative framework designed to regulate the development and use of AI within the EU, making the EU the first region to provide comprehensive, transparent rules for AI systems.

AI usage, data protection, and risk management are already relevant to business planning and corporate governance. If you do business in the EU, you may need to adapt to new compliance requirements.

When reviewing AI regulation and legislation, look at it both as:

A challenge: Making sure your AI-driven projects adhere to these regulations.

An opportunity: While all companies must adhere to the same rules, how you do so could vary significantly from others. Going beyond compliance and proactively embracing ethical and responsible AI principles could set your business apart.

“With AI tipping into the mainstream, there are strong benefits that can come from educating the public to build trust,” said Mark Brown, global managing director, Digital Trust Consulting at BSI.

“Organisations that take the long-term view may see that AI can allow them to enhance their cybersecurity, privacy, or digital risk landscape.”

Why is the Act coming into force?

The answer lies in the dual nature of AI.

AI opportunities

AI offers unprecedented opportunities for innovation, efficiency, and problem-solving.

On innovation, competition, and intellectual property, the EU wants to foster a competitive landscape by encouraging businesses to invest in AI research and development.

AI challenges

AI poses significant risks and challenges, especially without adequate oversight.

The Act requires you to develop AI systems that respect human autonomy and prevent harm. It aims to create a safe and trustworthy environment for AI deployment, so you can manage your business risks while maximising the benefits.

It categorises AI applications based on their risk, ranging from unacceptable to minimal risk, imposing corresponding requirements and legal obligations.

Privacy and personal rights

One of the most pressing concerns the Act addresses is the potential for AI systems to infringe on privacy and personal rights.

AI’s ability to process vast amounts of data can lead to intrusive surveillance or biased decision-making, impacting everything from job opportunities to access to services.

Lack of transparency

A lack of transparency in AI algorithms can make it difficult to understand how they reach their decisions, challenging the principle of accountability.

AI autonomy

There are ethical concerns regarding the autonomy of AI systems.

As AI becomes more advanced, the risk of these systems making decisions without human oversight increases.

In scenarios where AI-driven decisions may have harmful consequences, who is responsible?

What the Act covers—scope and provisions

The EU AI Act casts a wide net, encompassing various aspects of AI development and deployment within its regulatory scope.

It categorises AI systems into different levels based on their potential risks to society and individuals.

High-risk AI systems

The EU AI Act focuses mainly on “high-risk” AI systems, recognising their potential for significant impact.

These include AI technologies used in critical infrastructure, education, employment, essential private and public services, law enforcement, migration, and the administration of justice.

Market Research Future team lead Aarti Dhapte says: “Your business may have to implement measures such as robust cybersecurity and data encryption, ensuring your AI systems are resistant to adversarial attacks.”

The EU AI Act will set compliance requirements asking you to ensure data quality, transparency, and human oversight.

Without that oversight, advanced AI systems can produce unintended outcomes, or ethical dilemmas where machine logic conflicts with human values and societal norms.

For instance, an AI system used for recruitment must be transparent in its decision-making process and avoid biases, ensuring fair treatment for all job applicants.

Minimal risk AI systems

AI applications deemed minimal risk, such as AI-enabled video games or spam filters, are subject to minimal regulatory requirements.

The objective here is to encourage innovation while ensuring that these applications, though low risk, still maintain user trust and safety.

Prohibited practices

The Act also identifies certain AI practices as unacceptable due to their clear threat to safety, livelihoods, and rights.

These include AI systems that manipulate human behaviour to circumvent users’ free will (such as deepfakes used to spread misinformation) and systems that allow “social scoring”, where governments use technology to assess and rank citizens based on their behaviour, activities, and various other criteria.

Transparency obligations

The Act mandates transparency for specific AI systems, such as chatbots, so users know they are interacting with an AI.

This is crucial in contexts such as customer service, where knowing whether one is communicating with a human or an AI can significantly affect the quality of the interaction for the customer.
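One rough way to picture this obligation in practice is a chatbot that announces itself at the start of every session. The short Python sketch below is illustrative only; the function name and disclosure wording are assumptions, not text taken from the Act.

    # Illustrative sketch: disclose the AI's nature once per chat session.
    AI_DISCLOSURE = "You are chatting with an automated AI assistant."

    def send_chatbot_reply(session_messages: list[str], reply_text: str) -> None:
        """Add the AI disclosure the first time, then deliver the reply."""
        if AI_DISCLOSURE not in session_messages:
            session_messages.append(AI_DISCLOSURE)
        session_messages.append(reply_text)

    history: list[str] = []
    send_chatbot_reply(history, "Your order shipped on Monday.")
    print(history[0])  # "You are chatting with an automated AI assistant."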

What are the practical implications of the EU AI Act?

The EU AI Act emphasises the importance of thoroughly testing and understanding AI systems to mitigate risks before they are deployed to the public.

Compliance standards

For businesses developing or using AI, it means following stringent compliance standards, especially for high-risk applications.

The cybersecurity industry has seized on the need for a framework in which generative AI systems are developed and operated under strict “Zero Trust” principles.

Instead of assuming that everything inside an organisation’s network can be trusted, Zero Trust operates on the principle that nothing (inside or outside the network) should be trusted by default.

Zero Trust requires strict identity verification for every person and device trying to access resources on a private network, regardless of whether they are within or outside the network.
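To make the deny-by-default idea concrete, here is a minimal Python sketch of a per-request check. The token, device, and permission stores are simplified stand-ins for real identity and device-management systems, and every name in it is invented for illustration.

    from dataclasses import dataclass

    @dataclass
    class Request:
        user_token: str
        device_id: str
        resource: str

    VALID_TOKENS = {"token-abc"}        # e.g. issued after multi-factor auth
    REGISTERED_DEVICES = {"laptop-42"}  # e.g. a managed-device inventory
    PERMISSIONS = {("token-abc", "finance-reports")}

    def authorise(req: Request) -> bool:
        """Deny by default; allow only when identity, device, and
        resource-level permission all check out."""
        if req.user_token not in VALID_TOKENS:
            return False
        if req.device_id not in REGISTERED_DEVICES:
            return False
        return (req.user_token, req.resource) in PERMISSIONS

    print(authorise(Request("token-abc", "laptop-42", "finance-reports")))       # True
    print(authorise(Request("token-abc", "unknown-tablet", "finance-reports")))  # False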

Tim Freestone, chief marketing officer at Kiteworks, says: “Zero Trust Generative AI (ZTGAI) principles would ensure compliance with the EU AI Act by mandating stringent user authentication, data integrity checks, and continuous process monitoring.

“ZTGAI advocates for multi-factor authentication, detailed data source validation, and real-time anomaly detection to secure AI systems.

“It emphasises the importance of output screening against ethical and policy benchmarks, with an accountability framework through end-to-end activity audits.”

Furthermore, Tim says ZTGAI would strengthen AI governance with content layer security policies, restricting the use of sensitive data and enforcing rigorous compliance checks.

“This framework would guarantee that AI operations are transparent, traceable, and uphold the highest standards of data stewardship, aligning with the Act’s goals for a trustworthy AI-driven future.”

Piyush Tripathi is a tech lead at Square and has worked on several high-profile projects, including an API platform used in New York’s coronavirus contact-tracing programme.

Piyush says: “We developed a communication API for SMBs [small and medium businesses].

“We understood that the quality of data is everything. A small error or lapse in this foundational aspect can lead to significant issues.

“Our focus was often on earning and retaining consumer trust. Ethical data collection and management standards play a big role in this digital age.”

Education

The EU AI Act does not explicitly mandate specific educational or training requirements.

However, it does imply a need for adequate knowledge and understanding of AI technologies for those who develop, deploy, and use them—especially in high-risk scenarios.

Igor Jablokov, CEO and founder of AI business Pryon, says: “Businesses are beginning to realise that staff members don’t know how to properly use these AI tools, resulting in sensitive content and information being leaked or landing in the wrong places.

“Until now, the AI industry was unclear how these systems are governed and how information is protected.

“Your business should create a governance structure so that employees understand what is inside these platforms and can more appropriately manage themselves, allowing you to reap the value of AI without unintentional breaches.”

At London Tech Week, Cleo lead product manager Kate Janssen said CEOs and leaders should seize opportunities to provide guidance and context rather than exercising control over AI applications.  

She added: “AI tools can significantly increase productivity, but their usage should align with your organisation’s risk appetite and data privacy policies. 

“To ensure the right balance, it’s crucial for leaders to communicate what types of data employees can safely share or use in applications like ChatGPT.”

EU AI Act: What your business needs to consider

If the EU AI Act applies to your business, it could mean significant adjustments in your AI development and deployment strategies.

  • If your business operates within the EU or deals with EU citizens’ data, you must comply. Understand which category or categories of AI risk your business falls into and adapt accordingly.
  • High-risk AI applications require rigorous assessment, ensuring data quality, transparency, human oversight, and robust record-keeping.
  • You may need to make operational changes, such as redesigning AI systems to meet standards, training staff on compliance requirements, and establishing regular audits and risk-assessment processes.

The landscape of AI regulation is varied and evolving across the globe. Understanding these differences is critical to navigating the complex terrain of AI compliance and strategy.  

The UK

Like the EU, the UK leans towards a balance between promoting innovation and ensuring responsible use of AI. But it doesn’t have the rigidity of overarching AI-specific legislation.

The focus has been on adapting existing laws and sector-specific regulations to encompass AI technologies. This includes updates to data protection laws and ethical guidelines specific to AI.

This more flexible approach may offer easier integration for AI technologies but still requires careful navigation of the existing legal landscape.

The US

The US has adopted a more decentralised approach to AI regulation.

Instead of a single, comprehensive AI Act, there are various initiatives and guidelines at both federal and state levels, often focused on specific sectors such as healthcare, finance, and transportation.

This approach reflects the US’s emphasis on market-driven innovation, with regulation often playing catch-up to technological advancements.

This sector-specific focus necessitates a deep understanding of the regulations pertinent to each AI application area.

International businesses need to adopt a versatile approach to AI

Your business needs to be agile enough to comply with the EU’s comprehensive regulations, adapt to the UK’s evolving legal landscape, and navigate sector-specific rules in the US.

Staying aware of these varying regulatory environments is crucial for maintaining compliance and for capitalising fully on the global AI market’s opportunities.

Potential compliance challenges

One of your biggest challenges may be the cost and effort involved in compliance.

Adapting existing AI systems to meet the new standards could be resource-intensive.

Additionally, the Act’s broad scope and evolving nature may create uncertainties, requiring you to stay agile and informed about ongoing regulatory developments.

Dr Clare Walsh is the director of education at the Institute of Analytics and a leading academic voice in data analytics and AI.

Clare shared with Sage Advice: “The legislation shines a light on the problem where we have technical experts and domain experts, and communication between the two groups may be at best strained, and sometimes entirely at odds.

“The legislation does not outlaw any one technology or any use but requires consideration of the limitations and risks of a particular technology in a specific use case.

“In other words, we need better communication across multi-functional teams than ever. It is not enough to have experts working in silos. We all need to become better with data.”

Should accounting and finance teams be concerned about the new laws?

Clare Walsh says it’s doubtful that the EU AI Act will immediately impact accountancy and financial work, stating: “The bulk of accountancy work has involved historical record keeping and will be in the very low- to no-risk analytics categories.”

However, as work adapts to the influence of AI, more accountancy and finance work will come under the EU AI regulations.

Clare continues: “Increasingly, the value financial professionals bring to their profession is through predictive analytics, and these come under the definition of probability-based approaches that the EU Act regulates.”

It is already illegal to rely on a solely probability-based decision in situations that have a real-life impact on customers: GDPR Article 22 prohibits it, and the US Blueprint for an AI Bill of Rights sets out similar “opt-out” principles.

Clare says: “An example of this might be profiling a mortgage applicant to determine a mortgage rate to offer. Provided that a human can explain the reasoning, the process is legal.

“If the technology is ‘black-boxed’, we can never know the machine’s reasons for selection. It may be based on sensible reasons, such as proven financial responsibility. It may be based on something completely random, like the person’s first name is ‘Susan’.

“Finance professionals who are used to working with third-party software output may need to engage more with how the decisions are reached, as well as with the results.”
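To make that point concrete, here is a hedged sketch of a “human-explainable” scoring process: a simple additive model that records each feature’s contribution alongside the total, so a reviewer can state the reasons behind a decision. The features, weights, and values are invented for illustration; real credit models are far more complex, but the reviewability principle is the same.

    # Illustrative only: a transparent scoring model whose per-feature
    # contributions a human reviewer can read and explain.
    WEIGHTS = {
        "years_of_stable_income": 2.0,
        "missed_payments_last_2y": -3.0,
        "deposit_pct": 0.5,
    }

    def score_applicant(applicant: dict) -> tuple[float, dict]:
        """Return the total score plus each feature's contribution,
        keeping the reasoning behind the decision reviewable."""
        contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
        return sum(contributions.values()), contributions

    total, reasons = score_applicant(
        {"years_of_stable_income": 6, "missed_payments_last_2y": 1, "deposit_pct": 15}
    )
    print(total)    # 16.5
    print(reasons)  # {'years_of_stable_income': 12.0, 'missed_payments_last_2y': -3.0, 'deposit_pct': 7.5}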

Other potential finance applications under the new regulations might include fraud detection profiling or how audits are carried out.

Clare says: “We will need financial professionals who can oversee automated accountancy processes and bring some of their auditing skills to bear on the accounts and the mathematical data processing.

“These technologies rarely simply replace human processes but require some adaptation of the working environment around them.”

Just like warehouses must be specially designed depending on whether humans or robots staff them, decision-making processes must be adapted to AI team members.

Clare says there will need to be more robust feedback mechanisms to ensure adequate oversight.

“Financial professionals need to step up to this,” she states. “The new legislation is an opportunity for people like accountants looking to secure their transition into digitised offices.

“Someone needs to provide transparency, and there is currently a huge gap in the labour market of people with the ability and experience to produce reports and audits.”

7 tips to help businesses prepare for and follow the EU AI Act

By proactively addressing the challenges and changes brought about by the EU AI Act, you can ensure compliance and position your business to use AI technologies more responsibly and effectively.

1. Conduct a comprehensive AI audit

Assess your current AI systems and processes to determine how they align with the EU AI Act.

Identify areas that require changes or enhancements to meet compliance standards.

2. Develop a risk management strategy

For high-risk AI applications, establish a robust risk management framework. Include mechanisms for monitoring, reporting, and mitigating risks associated with AI systems.

For low-risk applications, it’s still crucial to maintain transparency in their operations and ensure the accuracy of the information they process.

Clearly communicate how the AI functions and the nature of the data it handles.
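One lightweight way to anchor such a framework is a central risk register recording each system’s risk tier, known risks, mitigations, owner, and review date. The Python sketch below is a minimal illustration; the field names are assumptions, not terms defined by the Act.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIRiskEntry:
        system_name: str
        risk_tier: str              # e.g. "high", "limited", "minimal"
        identified_risks: list[str]
        mitigations: list[str]
        owner: str
        last_reviewed: date = field(default_factory=date.today)

    register = [
        AIRiskEntry(
            system_name="CV-screening assistant",
            risk_tier="high",
            identified_risks=["bias against protected groups"],
            mitigations=["quarterly bias audit", "human review of rejections"],
            owner="Head of Compliance",
        )
    ]
    print(register[0].risk_tier)  # "high"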

3. Invest in training and awareness

Ensure your staff are well informed about the EU AI Act and its implications.

Regular training sessions can help build a compliance-focused culture within the organisation.

4. Engage with AI ethics and compliance experts

Consult AI ethics and compliance experts to navigate the complex regulatory environment effectively.

They can provide insights into best practices and help you stay ahead of regulatory changes.

5. Foster transparency and accountability

Develop clear policies and procedures for AI transparency and accountability.

Maintain detailed records of AI decision-making processes and outcomes.

6. Leverage technology for compliance

Use general and AI-specific compliance management software and tools to streamline and automate parts of your compliance processes, making them more efficient and less prone to errors.

7. Stay informed and agile

Keep abreast of regulatory updates and be prepared to adapt your AI strategies as the regulatory landscape evolves.

Final thoughts

Understanding and complying with the EU AI Act regulations is essential.

However, it’s also an excellent opportunity to set standards that ensure you’re using AI responsibly, which builds trust with customers and other businesses.

View the Act as more than a regulatory hurdle: make it your roadmap for innovative, ethical AI deployment that brings greater transparency and accountability to your financial operations.

Using the EU AI Act in this manner could be a chance to show that your business not only adapts to change but drives it.