Diversity, equity, and inclusion & the future of AI
Is artificial intelligence at odds with diversity and inclusion? Not if we are intentional about developing ethical AI. Here's how we get there.
Picture an artificial intelligence (AI) algorithm screening resumes to decide who gets a job interview. In looking for candidates who might be a good fit, it makes certain assumptions.
One such premise: employees who earn the most money or get promoted most frequently are the better hires. Does that assumption hold up under close scrutiny? No. In the real world, we know the wage gap is real, and that businesses still have a lot of work to do to keep diversity and inclusion in mind when awarding promotions.
When AI is based on biased assumptions, it makes biased recommendations.
The growing clamor for diversity and inclusion in every aspect of business means we cannot afford to overlook DEI processes when designing AI systems. The case for diversity and inclusion in AI is becoming ever more urgent as the technology is being pressed into service across a range of industries — from human resources to healthcare.
The many ethical challenges of AI
The expansion of AI is bringing a whole host of ethical questions in its wake.
How do we account for bias?
AI is not a sentient being. It is a human-made product, built on data and meant to make human tasks easier. Unfortunately, humans are often biased, and as a result the data sets they feed into AI systems may be biased too. AI works on the garbage-in, garbage-out principle: feed biased data into an algorithm and it will spit out biased decisions.
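To make garbage-in, garbage-out concrete, here is a minimal sketch in Python with scikit-learn. All of the data is synthetic and illustrative: a toy hiring model is trained on historically biased promotion records, and it learns to score two otherwise identical candidates differently based on group membership alone.

```python
# Minimal "garbage in, garbage out" sketch: a toy model trained on
# biased historical data reproduces the bias. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)   # stand-in for a protected attribute
skill = rng.normal(size=n)           # the thing we actually want to reward

# Historical "promoted" labels depended on skill AND on group membership;
# this is the biased record the model is asked to imitate.
promoted = skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n) > 0.5

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, promoted)

# Two candidates identical in skill, differing only in group membership:
print(model.predict_proba([[0, 1.0]])[0, 1])  # higher predicted probability
print(model.predict_proba([[1, 1.0]])[0, 1])  # lower, for the same skill
```

The model never saw an instruction to discriminate; it simply learned the pattern baked into the historical labels.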
How do we take privacy concerns into account?
AI data sets need to be based on consent. But can Jane Doe give meaningful consent if she does not know what she is consenting to? Does the company building the AI understand every potential use case of the software it is developing?
How do we account for cultural differences?
A case of workers in India tagging images of same-sex couples as indecent illustrates that cultural values differ around the globe. Similarly, an AI bot with a woman’s voice acting as a personal assistant reinforces existing gender biases and stereotypes. English may be a largely gender-neutral language, but many languages are not, so these factors need consideration.
How do we make AI more transparent?
Diversity and inclusion means anyone should be able to challenge an AI’s decision, and that requires transparency in how the decision was made. You cannot understand how a system arrives at a decision without visibility into the algorithm behind it, and consumers need to see which of their data was used to reach a given decision. The other problem is that we have been conditioned to accept decisions made by machines as gospel, so questioning the process becomes that much more difficult.
How do we build ethical AI?
Notwithstanding the many questions surrounding AI, there’s a strong business case for ethical AI. Not only is ethical, responsible AI the right thing to invest in; there is also regulatory movement toward mandating that AI systems be used in specific ways.
For example, the European Union is working toward legislation that will regulate all AI, with especially stringent oversight of “high risk” applications. Developing an AI model around an ethical framework from the start will save a lot of headaches later, when datasets of questionable integrity might otherwise end up being used in opaque ways.
So, how do we go about developing an ethical framework and ensuring that all AI is responsible? Here are a few basic guidelines worth considering:
Check for bias in datasets
Question the integrity of the data source. Does the data sample reflect the larger population as a whole? Have you viewed the data through the lens of diversity and inclusion? That means leveling the playing field for datasets used in high-stakes use cases such as credit approvals and hiring.
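As a simple illustration, here is a minimal sketch of a representation check in Python with pandas. The column name, group labels, and population shares are all hypothetical placeholders, not figures from any real dataset:

```python
# Compare group shares in a dataset against assumed population benchmarks.
# The data, labels, and benchmark figures below are purely illustrative.
import pandas as pd

df = pd.DataFrame({"group": ["a"] * 800 + ["b"] * 200})  # stand-in dataset
population = {"a": 0.51, "b": 0.49}                      # assumed shares

sample_shares = df["group"].value_counts(normalize=True)
for group, expected in population.items():
    observed = sample_shares.get(group, 0.0)
    flag = "  <-- underrepresented" if observed < 0.8 * expected else ""
    print(f"{group}: sample {observed:.1%} vs. population {expected:.1%}{flag}")
```

A check this simple won't catch every form of bias, but it surfaces the most obvious sampling gaps before they are baked into a model.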
Test drive before executing and make it auditable
Does the AI model produce the kinds of results it should? Is it mirroring stereotypes and biases? Test AI models on a smaller scale before they hit prime time. And whether through documentation or otherwise, ensure that AI models leave an auditable data trail.
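Here is what an auditable trail might look like in practice: a minimal sketch that logs every decision as an append-only JSON-lines record. The predict() stub, field names, and file path are all hypothetical:

```python
# A minimal sketch of an auditable decision trail using a JSON-lines log.
# The predict() stub and the record fields are illustrative placeholders.
import json
import time
import uuid

def predict(features):
    """Stand-in for a real model call."""
    return {"decision": "approve", "score": 0.87}

def predict_with_audit(features, model_version, log_path="decisions.jsonl"):
    result = predict(features)
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,  # which model made the call
        "features": features,            # what the model saw
        "result": result,                # what it decided
    }
    with open(log_path, "a") as f:       # append-only trail for later audits
        f.write(json.dumps(record) + "\n")
    return result

predict_with_audit({"income": 52000, "group": "b"}, model_version="v0.3")
```

With a trail like this, an auditor can later reconstruct exactly which inputs and which model version produced any contested decision.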
Distribute responsibility equally
Many stakeholders have a vested interest in AI: developers and companies, governments, and consumers. The burden of responsibility needs to be shared by all, and all of them need to ask the right questions.
Develop measurable metrics
How will we know whether an AI system works with diversity and inclusion principles in mind? Measure the final outcomes. Does a bank now approve loans for minority segments in equal proportion? Does it receive enough applications from those segments in the first place? Is your hiring keeping diversity and inclusion in mind? If the answers to these metrics come back negative, it may be a signal that the AI model is built on shaky datasets.
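One such outcome metric is approval-rate parity across groups, often checked against the "four-fifths" rule of thumb. A minimal sketch, with purely illustrative decision records:

```python
# Compute per-group approval rates and the disparate impact ratio.
# The decision records below are illustrative placeholders.
from collections import defaultdict

decisions = [                     # (group, approved?) pairs
    ("a", True), ("a", True), ("a", False), ("a", True),
    ("b", True), ("b", False), ("b", False), ("b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates)                                   # {'a': 0.75, 'b': 0.25}
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.8 is a warning sign
```

A single number never tells the whole story, but tracking it over time makes drift away from equitable outcomes visible early.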
Make diversity and inclusion part of design and coding
Invite different backgrounds and perspectives into the design and coding of AI algorithms. Diversity and inclusion need to be part of the process from the start, not a patched-on afterthought. Diverse teams are better at sniffing out bad datasets and at catching designs that fail to ensure equity. Of course, ensuring that diversity and inclusion get baked in is about more than just inviting everyone to the party. You also have to invite them to dance. From the C-suite on down, we need to make everyone feel heard and valued.
Finally, rinse and repeat. Diversity and inclusion in ethical AI is not a one-and-done exercise. We have to constantly evaluate where we stand against our goals and recalibrate when needed.
AI is a transformational technology that can change the world. We need to make sure it is developed ethically so that it changes the world for the better.