Ethical AI: Designing a smarter, more inclusive (and more productive) future


Are you concerned about artificial intelligence and its potential impact on our future?

Anxiety about AI has made for compelling sci-fi movies, but the reality is likely to be rosier than doomsday filmmakers suggest. That was the consensus at a recent panel, hosted by Sage, about creating AI tools that improve human life in all of its diversity.

In many ways, it already has.

Every time you ask Alexa for the weather forecast, search Google, or scan product recommendations on Amazon, AI is working quietly (and obediently) in the background. But this technology is still nascent, and it’s up to us to decide what it should do—and how it should behave.

The biggest risk is not from AI running amok, but rather from humans contaminating AI with our own shortcomings and prejudices.

“ … AI, when defined, built, cultivated and deployed with the right human oversight, has the potential to do significantly more good for the world than harm,” Kriti Sharma, vice president of AI and Bots at Sage, wrote in a recent article.

Sharma joined Amir Shevat (director of developer relations at Slack), Deepti Yenniredy (founder of MyAlly) and Dr. Shannon Vallor (philosophy professor at Santa Clara University) to discuss how to design AI solutions that work for us and make life easier.

There were three big takeaways from the event:

Don’t panic: humans are in control

We create this technology, and the choices we make will determine whether AI is a force for good or not. The point of technology is to make our lives better — “We don’t build tech to make our lives worse,” joked Dr. Vallor — and the first step is deciding what we want technologies like AI to do for us. Then, we must make deliberate design decisions to create AI that does these tasks.

Our working lives are one area where AI can help us make smarter decisions, quickly. Business leaders today spend a significant amount of time on rote financial and administrative tasks, like tracking expenses and approving PTO requests, time that could be spent more strategically, said Sharma, who last year created Pegg for Sage, the world’s first AI-enabled accounting bot.

Technology reflects our choices

Still worried about a future governed by artificially intelligent overlords? The panelists reminded the audience that all technologies, even the “smartest” AI, are the result of human values and choices. “Technology is not the result of an inevitable evolutionary process,” said Dr. Vallor.

We create these tools to do specific tasks, and we decide how intelligent they need to be. If we’re creating AI to track invoices or submit an expense report, chances are it doesn’t need human-level intelligence — “smart enough” is OK.

Tech is only as good as its human designers

Not every AI bot needs to match our cognitive abilities, but the humans who create these technologies must be smart enough to keep their own biases out of the design: AI should be built to exclude negative sentiment and keep preconceptions out of its decision-making. That’s one reason Yenniredy of MyAlly created a genderless AI assistant, Alex, which the company refers to as “agender.”

The real fear is not that we will create sentient technologies that could take over the world, but that we will infect smart technologies with human biases and blind spots.

“I’m more worried about the humans behind AI,” concluded Shevat from Slack, “not AI itself.”

Watch the full panel discussion, “The Ethics of Code: Exploring Diversity, Inclusion, and the Future of AI Development.”
