
The Ethics of Code: Developing AI for Business with Five Core Principles

11 August 2017

As featured in SME Magazine - 18th August, 2017

You all know what artificial intelligence (AI) is, right?

Early adopters of AI technology have already reaped great benefits. For example, businesses have been using AI-powered chatbots to answer timely questions about tax year end, file accounts for customers, or look up the time of a lunch meeting. Asia is the fastest-growing region for AI use and is expected to grow at a compound annual growth rate (CAGR) of 46.9% between 2016 and 2021, thanks to the region's booming economy and large electronics industry.

I would describe AI as simply the creation of intelligent machines that think and learn like humans. Every time Google predicts your search, you use Siri with your voice, or your iPhone predicts your next word in a text – that’s AI in action.

It's there less obviously, too: when you make an unusual purchase with your card and get a fraud alert from your bank. AI is everywhere, and it's making a huge difference in our lives every day.


Game changing

I began working with AI a few years ago, and even in this short time the game has changed massively.

As AI engineers, coders, and hackers – whatever you want to call us – we now have a massive choice about how we implement AI in the products we are developing. We can create our own AI technology, or simply leverage generic tools and apply them to the specialist problems we are working on.

Let me give you an example of how we worked like this at Sage when building our own AI chatbot, Pegg.

First up, we developed and trained our own AI for the financial domain with skills to take the admin out of accounting, payments, invoices and expenses. We partnered with Microsoft, Amazon and Facebook to teach the AI to understand generic entities like date and location.

We then chose to design our own personality for Pegg to suit the needs of our business users. Pegg has British accounting humour, does not pretend to be human and is proud of being a bot!
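
To make that split between domain-specific skills and generic understanding concrete, here is a rough illustrative sketch in Python. It is not how Pegg is actually built: it assumes spaCy's small English model for the generic entity layer, and a simple keyword matcher stands in for the domain-specific financial skills.

    # Illustrative sketch only, not Sage's Pegg implementation.
    # Generic layer: spaCy's pretrained model picks out dates, places and amounts.
    # Domain layer: a hypothetical keyword matcher stands in for financial skills.
    import spacy

    nlp = spacy.load("en_core_web_sm")

    # Hypothetical finance intents a Pegg-style bot might handle.
    FINANCE_INTENTS = {
        "expense": ["expense", "receipt", "claim"],
        "invoice": ["invoice", "bill", "payment"],
        "tax": ["tax", "vat", "year end"],
    }

    def classify_intent(text):
        """Very rough domain layer: match finance keywords."""
        lowered = text.lower()
        for intent, keywords in FINANCE_INTENTS.items():
            if any(word in lowered for word in keywords):
                return intent
        return "unknown"

    def parse_message(text):
        """Combine the domain intent with generic entities (dates, places, money)."""
        doc = nlp(text)
        entities = [(ent.text, ent.label_) for ent in doc.ents
                    if ent.label_ in ("DATE", "GPE", "MONEY")]
        return {"intent": classify_intent(text), "entities": entities}

    print(parse_message("File my taxi expense of 20 pounds from last Tuesday in London"))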


The democratisation of technology

The democratisation of technology we are experiencing with AI is awesome. As well as helping to reduce time to market, it is deepening the talent pool and giving businesses of all sizes access to the most modern technology.

But, with great power comes great responsibility. With a few large organisations developing the AI fundamentals that all businesses can use, we need to take a step back and ensure that the work happening is ethical and responsible.

Summarised below are the values I work to when building AI, and the guardrails I believe the tech community should adopt to develop AI that is accountable and fit for purpose, at a time when AI is poised to revolutionise our lives.


The ethics of code 

Below you will find an abridged version of the five core principles:

AI should reflect the diversity of the users it serves

Both industry and community must develop effective mechanisms to filter bias as well as negative sentiment in the data that AI learns from - ensuring AI does not perpetuate stereotypes.
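
As a purely illustrative example of what such a mechanism could look like, here is a small Python sketch. It assumes NLTK's VADER sentiment analyser and a hypothetical, curated blocklist of stereotyping phrases; real bias filtering needs far more than this.

    # Illustrative only: drop training examples that are strongly negative
    # or that match a curated blocklist of stereotyping phrases.
    # Requires: nltk, plus nltk.download("vader_lexicon") on first run.
    from nltk.sentiment import SentimentIntensityAnalyzer

    sia = SentimentIntensityAnalyzer()

    BLOCKLIST = {"placeholder stereotyping phrase", "another blocked phrase"}  # hypothetical
    SENTIMENT_THRESHOLD = -0.6  # assumed cut-off on VADER's compound score

    def is_acceptable(example):
        """Keep an example only if it avoids the blocklist and is not strongly negative."""
        lowered = example.lower()
        if any(phrase in lowered for phrase in BLOCKLIST):
            return False
        return sia.polarity_scores(example)["compound"] >= SENTIMENT_THRESHOLD

    def filter_training_data(examples):
        return [ex for ex in examples if is_acceptable(ex)]

    clean = filter_training_data(["Thanks, that invoice looks great!",
                                  "This is useless and I hate it."])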

AI must be held to account - and so must users

Users build a relationship with AI and start to trust it after just a few meaningful interactions. With trust comes responsibility, and AI needs to be held accountable for its actions and decisions, just like humans. Technology should not be allowed to become too clever to be accountable. We don't accept this kind of behaviour from other 'expert' professions, so why should technology be the exception?

Reward AI for 'showing its workings'

Any AI system learning from bad examples could end up becoming socially inappropriate - we have to remember that most AI today has no cognition of what it is saying. Only broad listening and learning from diverse data sets will solve this.

One approach is to build a reward mechanism into AI training. Reinforcement learning measures should be based not just on what AI or robots do to achieve an outcome, but also on how their actions align with human values in accomplishing that result.
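
Here is a toy Python sketch of that reward-shaping idea. The weights and the alignment check are assumptions for illustration only; in practice the alignment score might come from human feedback or a reviewer model.

    # Toy sketch: reward the outcome and how it was achieved, not just the outcome.
    TASK_WEIGHT = 0.7        # assumed weighting, not a tuned value
    ALIGNMENT_WEIGHT = 0.3

    def task_reward(outcome_achieved):
        return 1.0 if outcome_achieved else 0.0

    def alignment_score(actions):
        """Stand-in for a human-values check, e.g. a rubric or reviewer model.

        Here we simply penalise actions an upstream filter has flagged
        as misleading or unsafe, returning a value between 0 and 1.
        """
        flagged = sum(1 for action in actions if action.startswith("FLAGGED:"))
        return max(0.0, 1.0 - flagged / max(len(actions), 1))

    def shaped_reward(outcome_achieved, actions):
        """Combine what was achieved with how it was achieved."""
        return (TASK_WEIGHT * task_reward(outcome_achieved)
                + ALIGNMENT_WEIGHT * alignment_score(actions))

    # Example: goal reached, but one of three actions was flagged along the way.
    print(shaped_reward(True, ["lookup", "FLAGGED: misleading reply", "confirm"]))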

AI should level the playing field

Voice technology and social robots provide newly accessible solutions, specifically to people disadvantaged by sight problems, dyslexia and limited mobility. The business technology community needs to accelerate the development of new technologies to level the playing field and broaden the available talent pool.

AI will replace, but it must also create

There will be new opportunities created by the robotification of tasks, and we need to train people for these new roles. If business and AI work together, people will be free to focus on what they are good at - building relationships and caring for customers.


Kriti Sharma is Vice President of Bots and AI, Sage


