ATLANTA (June 4, 2025) - Sage, the leader in accounting, financial, HR and payroll technology for small and mid-sized businesses (SMBs), today announced the development of its AI Trust Label - a first-of-its-kind initiative to bring greater clarity and accountability to how AI is developed and used in business software.
The AI Trust Label is designed to provide customers with clear, accessible information about the way AI functions across Sage products. It focuses on key trust indicators such as compliance with privacy and data regulations, how customer data is used, the presence of safeguards to prevent bias and harm, and the systems in place to monitor accuracy and ethical performance. This initiative allows SMBs to understand how AI impacts them - without needing a technical background.
“AI adoption should never come down to blind trust,” said Aaron Harris, Chief Technology Officer at Sage. “Businesses deserve to know how the technology works, how their data is used, and what safeguards are in place. The AI Trust Label is a direct response to that need—for transparency, not assumptions.”
Sage’s research shows a direct correlation between trust and adoption. While 94% of SMBs already using AI report seeing benefits, the majority - 70% - have yet to fully adopt the technology. The difference is trust.
Among those who trust AI, 85% say they actively use it in their business. That drops to just 48% among those who don’t. Additionally, 43% of SMBs say they have low trust in the companies building AI tools for business.
Later this year, Sage will begin rolling out the AI Trust Label across selected AI-powered products in the UK and US. Customers will see the label within the product experience and have access to additional details via Sage’s Trust & Security Hub. The label was designed based on direct feedback from SMBs and reflects the signals they said they need to build confidence in using AI tools.
This announcement follows a series of steps Sage has taken to ensure it develops technology responsibly. In 2023, the company published its AI and data ethics principles. It has also adopted the US NIST AI Risk Management Framework globally to guide the responsible design and use of AI, signed the Pledge for Trustworthy AI in the World of Work to support fairness and inclusion, and implemented emerging standards like the UK Government’s AI Cyber Security Code of Practice.
Sage is now calling for collaboration between industry and government to create a transparent, certified AI labelling system that encourages wider adoption of the technology. The company is also exploring opportunities to share its own framework more widely.
“We’re not just building a label for Sage,” said Harris. “We’re building a model for how AI can earn trust across the business software sector. If we want AI to truly empower SMBs, this kind of transparency isn’t optional, it’s essential.”
Global Counsel Insight surveyed 1,500 SMB decision makers online between May 3rd and 19th across the US, UK, France and Spain, with each of the four countries weighted equally in overall results. Respondents were screened for having decision-making responsibilities and for working at least 20 hours a week at their firm. Within each country, results reflect the proportion of sole proprietors in the wider SMB universe and are broadly representative by sector. In line with official definitions, SMBs were defined as having up to 500 employees in the US and up to 250 employees in other markets.
Sage exists to knock down barriers so everyone can thrive, starting with the millions of Small and Mid-Sized Businesses served by us, our partners and accountants. Customers trust our finance, HR and payroll software to make work and money flow. By digitising business processes and relationships with customers, suppliers, employees, banks and governments, our digital network connects SMBs, removing friction and delivering insights. Knocking down barriers also means we use our time, technology and experience to tackle digital inequality, economic inequality and the climate crisis.