
Unconscious Bias is Harming Our Teams and Technology: How Can We Tackle It?

Bias in technology is real, and as we design and build increasingly powerful systems, our obligations to the people using that technology increase in step. Two experts share what they’ve learned about building diverse, efficient teams and ethical AI frameworks.

25 October 2022 • 3 min read

In April 2022, speakers from NTT DATA UK’s Women’s Business and Culture & Ethnicity Networks gave us two fascinating presentations. The content of the two talks differed, but the overarching theme was the same: the ‘Big Conversation’ on bias in tech.

The first speaker, Lucinda Faucher, covered the importance of building diverse and efficient working teams. The second, Andrea Cornavaca, took a different approach, highlighting the growing global awareness of our collective responsibility to develop ethical AI frameworks.

There are some notorious examples of bias impacting the implementation of technology. 

Amazon, Apple and the UK’s passport photo checker have all been accused of faulty, biased design. Amazon scrapped a recruitment algorithm because it had effectively taught itself that male candidates were superior. US regulators investigated Apple because of claims that its credit card offered different credit limits for men and women. And women with darker skin were twice as likely to be told that their photos failed UK passport rules compared to lighter-skinned men.

Each of these case studies reveals how easily a single mistake can creep into a system. Perhaps they were the product of working teams that didn’t reflect their target audience? Perhaps these algorithms weren’t built with responsible development and futureproofing in mind? Whatever the reason, each incident had significant consequences for the organization involved.
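One way this kind of bias gets caught is by auditing a system’s outcomes across groups before it goes live. The sketch below is a minimal, hypothetical illustration in Python: it compares positive-outcome rates (for example, shortlisting rates) between groups and flags a large gap. The function names, toy data and 20% threshold are all illustrative assumptions, not drawn from any of the systems mentioned above.

    # Hypothetical demographic-parity check: compare positive-outcome rates
    # across groups and flag a large gap. Data and threshold are illustrative.
    from collections import defaultdict

    def positive_rates(decisions, groups):
        """Return the share of positive decisions for each group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for decision, group in zip(decisions, groups):
            totals[group] += 1
            positives[group] += int(decision)
        return {g: positives[g] / totals[g] for g in totals}

    def parity_gap(decisions, groups):
        """Largest difference in positive-outcome rate between groups."""
        rates = positive_rates(decisions, groups)
        return max(rates.values()) - min(rates.values()), rates

    if __name__ == "__main__":
        # Toy example: 1 = shortlisted, 0 = rejected
        decisions = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
        groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
        gap, rates = parity_gap(decisions, groups)
        print(rates)                      # {'A': 0.6, 'B': 0.2}
        print(f"Parity gap: {gap:.0%}")   # Parity gap: 40%
        if gap > 0.2:                     # illustrative threshold
            print("Warning: outcomes differ sharply across groups.")

A check like this doesn’t fix a biased model, but it makes the disparity visible early, while there is still time to question the training data and the team that produced it.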

Culture fit

Lucinda gave our first presentation. She wears many hats: Head of Product at Aptitude Software, host of the ProdBox podcast, and mentor to people looking to get into the industry. Her passion lies in building diverse teams and initiatives, and she shows people just how easily unconscious bias can slip into and derail these processes.

How often have you heard the words ‘culture fit’? Too often, the phrase is used without a clear understanding of what it means. For many, the phrase is a shorthand for ‘people I feel comfortable talking to’. The problem with this approach is that it leads to homogenous product teams: how can we build products that serve everybody with a group of people who all think the same way?


Diverse teams improve both your efficiency and your products. According to McKinsey research, companies with more diverse executive teams financially outperform their counterparts by double-digit percentages. In 2019, those with gender-diverse executive teams were 25% more likely to achieve above-average profitability; those with ethnically diverse teams were 36% more likely.

Lucinda left us with a parting thought. As IT professionals and individuals with a passion for design, analysis, security and a hundred other hands-on practices, we like to find a straightforward way to do things; if it doesn’t work, we take that feedback and apply it to the next attempt. That is precisely the approach Lucinda recommended for diversity: implementing it doesn’t have to be complicated.

Test. Iterate. Get feedback. Then do it all over again, but better.

Diversity and AI

In Andrea’s talk, we learned about the importance of responsible AI, a topic she is well versed in. Andrea is Head of AI Governance and Regulation Practices for NTT DATA Europe and Latin America. She has spent three years at NTT DATA’s only European AI Center of Excellence, in Barcelona, and over that time her work has evolved from a line of research into a line of services.


At the Center of Excellence, the team works every day to help businesses root out any bias that could be embedded in the algorithms behind AI services. As the adoption of AI increases and its use scales across industries, we must anticipate its risks and build AI that puts society first.

To illustrate her point about this rapidly growing, largely unregulated field, Andrea showed us how companies are adopting ethical AI principles: developing manifestos, guidelines and strategies to ‘parent’ a growing, immature technology. Last year, H&M Group, the fashion and design conglomerate, announced that it would put every new AI model up against a ‘Checklist for Responsible AI’.

The speed of this evolution is stunning, and it’s clear as day in the regulatory landscape: only last year, UNESCO brought together 100 countries to sign the first global agreement on best practices for developing ethical AI.


A better tomorrow

New technology has the potential to impact the lives of so many people, so we are responsible for ensuring that it serves the people it was designed for rather than marginalizing them. We must make sure our product teams reflect the audiences our products are intended for. At NTT DATA, we already have our own global set of AI ethics guidelines. Free, unregulated AI use carries undeniable dangers; we know that, and we must all work towards a balanced and fair society in which humans and AI can coexist.
