

The ethics of AI: how to ensure your firm is fair and transparent

As part of the second edition of the Sage Download, the newsletter covering everything you need to know about how technology and innovation are driving business growth, we’ve spoken to business experts about the ethics of AI. We explore why companies should use AI and machine learning responsibly, in a way that aligns with their values, and we explain how to create an AI policy that suits your organisation.

By Chris Torney

Artificial intelligence (AI) and machine learning have the potential to offer significant benefits and opportunities to businesses, from greater efficiency and productivity to transformational insights into customer behaviour and business performance. But it is vital that firms take into account a number of ethical considerations when incorporating this technology into their business operations. 

The adoption of AI is still in its infancy and, in many countries, there are few clear rules governing how companies should utilise the technology. However, experts say that firms of all sizes, from small and medium-sized businesses (SMBs) to international corporations, need to ensure their implementation of AI-based solutions is as fair and transparent as possible. Failure to do so can harm relationships with customers and employees, and risks causing serious reputational damage as well as loss of trust.

What are the main ethical considerations around AI?

According to Pierluigi Casale, professor in AI at the Open Institute of Technology, the adoption of AI brings serious ethical considerations that have the potential to affect employees, customers and suppliers. “Fairness, transparency, privacy, accountability, and workforce impact are at the core of these challenges,” Casale explains. “Bias remains one of AI’s biggest risks: models trained on historical data can reinforce discrimination, and this can influence hiring, lending and decision-making.”

Part of the problem, he adds, is that many AI systems operate as ‘black boxes’, which makes their decision-making process hard to understand or interpret. “Without clear explanations, customers may struggle to trust AI-driven services; for example, employees may feel unfairly assessed when AI is used for performance reviews.”

Casale points out that data privacy is another major concern. “AI relies on vast datasets, increasing the risk of breaches or misuse,” he says. “All companies operating in Europe must comply with regulations such as GDPR and the AI Act, ensuring responsible data handling to protect customers and employees.”

A third significant ethical consideration is the potential impact of AI and automation on current workforces. Businesses may need to think about their responsibilities in terms of employees who are displaced by technology, for example by introducing training programmes that will help them make the transition into new roles.

Olivia Gambelin, an AI ethicist and the founder of advisory network Ethical Intelligence, says the ethical considerations around AI are likely to be specific to each business and the way it plans to use the technology. “It really does depend on the context,” she explains. “You’re not going to find a magical checklist of five things to consider on Google: you actually have to do the work, to understand what you are building.”

This means business leaders need to work out how their organisation’s use of AI is going to impact the people – the customers and employees – that come into contact with it, Gambelin says. “Being an AI-enabled company means nothing if your employees are unhappy and fearful of their jobs, and being an AI-enabled service provider means nothing if it's not actually connecting with your customers.”

The importance of transparency

“All businesses should have a simple and clear statement about what AI they use and why, what AI they develop and why, as well as clarity on their security protocols to protect any customer data being used in AI,” says Dr Mark Powell, a partner at EY and a consultant specialising in data and analytics. “In the AI world, it only takes one slip-up to lose credibility: trust is everything. Customers assume you will use their data, but they also assume you will not use that data in unethical ways.”

Gambelin adds: “The last thing you want to do is hide the use of AI and pass it off as human. People can tell when they are dealing with AI, and this can lead to an immediate breach of trust. Be transparent about when customers or other stakeholders are coming into contact with AI. I have found that when companies communicate where they are not using AI, or how they are limiting the use of AI, that is far more impactful than hiding, not communicating or over-promising.”

Businesses should be clear about the potential consequences of failing to address ethical issues, Casale warns. “Ignoring AI ethics, or addressing it incorrectly, can have serious consequences, particularly for the SMBs that may lack the resources to recover from a crisis. The risks range from legal penalties to reputational damage, operational failures and employee resistance.

“If people feel that AI-driven services are unfair, intrusive or unreliable, they might choose a competitor instead. Negative reactions can spread quickly on social media, causing long-term harm to a company’s reputation. A single AI failure, such as biased hiring tools or discriminatory pricing algorithms, can undo years of brand loyalty.”

Operational inefficiencies can be another problem. “AI systems trained on biased or low-quality data can produce flawed decisions, leading to financial losses and compliance failures,” Casale adds. “Employees, too, may resist AI adoption if they feel monitored, undervalued or at risk of job displacement. Without clear ethical guidelines and human oversight, AI can create a workplace culture of mistrust. For businesses, AI ethics is not just about avoiding risks – it’s about maintaining trust, ensuring fair outcomes and building a sustainable future in an AI-driven economy.”

Conversely, getting AI ethics right can deliver significant benefits, Casale says. “Prioritising AI ethics is more than a safeguard against risk: it’s a strategic advantage that can help businesses stand out in a competitive landscape. Companies that embed ethical principles into AI adoption build trust, ensure regulatory compliance and create long-term value. Transparency and fairness in AI-driven decisions foster customer loyalty.”

Dr Powell adds: “If you can be trusted to use AI, then you can leverage this trust to move beyond your competitors. It’s an interesting fact that while everyone seems terrified of some of the downsides of AI, the reality is we are only just scratching the surface of what you can do with it.

“AI offers the potential to radically reimagine almost all businesses and how they deliver to customers. If you have a high ethical rating, you will be able to go further than others.”

For information about Sage’s AI commitments, which set out its standards for AI and data ethics and for safeguarding customers’ data integrity, visit Sage Ai.
