Opinion • Jillur Quddus

Bias Amplification

Technology provides us with the toolkit to change lives for the better – but if unchecked it also has the power to discriminate and reinforce stereotypes and bias.

Jillur Quddus • Founder & Chief Data Scientist • 15th June 2019

Introduction

Technology provides us with the toolkit to change lives for the better – but if unchecked it also has the power to discriminate and reinforce stereotypes and bias. This is true of any technology, past, present or future. But it is the advancement of distributed systems capable of storing and processing massive amounts of previously disparate data, coupled with the emergence of artificial intelligence (AI) in our everyday lives, that requires us to urgently refocus on how technology is architected, engineered, tested, deployed and governed, to ensure that its impact remains positive – and only positive.

First Principles

Whether we explicitly asked for it or not, the fact is that today, at this very moment, artificial intelligence is impacting our lives. Most of the time AI is invisible to us – whenever we enter a search term into Google, visit websites, use social media, browse Netflix, use online banking, travel by public transport or even walk the streets of any major city, AI algorithms are busy working away in the background to return relevant search results, analyse our browsing habits and viewing preferences, detect signs of fraud and capture images of our faces. With such a deep and far-reaching, yet paradoxically invisible, global footprint, the dangers of AI left unchecked and improperly governed are significant and tangible, not least to the groups of people it will adversely affect the most (usually those already at a socioeconomic disadvantage).

But before we begin to explore these dangers, let us go back to first principles and definitions. AI is a broad term given to the theory and application of machines that exhibit intelligent behaviour. Machine learning (ML) is an applied field within the broader subject of artificial intelligence that focuses on learning from data by detecting patterns, trends, boundaries and relationships in order to make predictions and ultimately deliver actionable insights that help inform decision making. A fundamental tool used in machine learning is probabilistic reasoning – creating mathematical models that help us draw population inferences in order to understand, or test hypotheses about, how a system or environment behaves. Consequently, machine learning is data-dependent, and both supervised and unsupervised machine learning models make an implicit assumption: that the trends and patterns identified in historic data, and encoded in the learned mathematical function, will continue to hold, so that the same function can be used to make inferences and predictions about the future.
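
To make this principle concrete, here is a minimal sketch of supervised learning in Python: a function is learned from historic observations and then reused, unchanged, to predict an unseen case. The data points and the choice of scikit-learn are purely illustrative.

```python
# A minimal sketch of the supervised learning principle described above:
# learn a function from historic data, then reuse that same function to
# make predictions about the future. The observations are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

X_historic = np.array([[1.0], [2.0], [3.0], [4.0]])  # historic inputs
y_historic = np.array([2.1, 3.9, 6.2, 7.8])          # historic outcomes

model = LinearRegression().fit(X_historic, y_historic)  # learn f: X -> y
print(model.predict(np.array([[5.0]])))  # apply the same f to an unseen input
```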

Deep Learning

Deep learning (DL) is a sub-field within machine learning where the goal remains to learn a mathematical function that maps inputs to outputs. However, deep learning learns this function using an architecture that mimics the neural architecture of the human brain, learning from experience through a hierarchy of concepts or representations trained with gradient-based optimisation algorithms. Deep learning is still data-dependent and, generally speaking, the more data used to train a neural network, the better it performs.
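
As a minimal illustration of gradient-based learning with hidden representations, the sketch below fits a small multi-layer perceptron to the XOR function. The use of scikit-learn and the hyperparameters shown are assumptions of convenience; any deep learning framework would do. XOR is not linearly separable, so the hidden layers must learn intermediate features – the "hierarchy of representations" idea in miniature.

```python
# A minimal sketch of a gradient-trained neural network. XOR cannot be
# represented by a single linear layer, so the hidden layers must learn
# intermediate representations. Hyperparameters are illustrative, not tuned.
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR of the two inputs

net = MLPClassifier(hidden_layer_sizes=(8, 8), activation="tanh",
                    solver="adam", max_iter=5000, random_state=0)
net.fit(X, y)
print(net.predict(X))  # should recover [0 1 1 0]
```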

Overview of artificial intelligence

Bias Amplification

Now that we have an understanding of what artificial intelligence is, it is clear that data – and lots of it – is required for machine learning to be effective. But what if the data itself contains bias? The resulting model will not just perpetuate that bias; in many cases it will amplify it. So why do machine learning algorithms amplify bias? To answer this question, we must revisit the mathematical nature of how they are trained.

Recall that the goal of machine learning algorithms is to learn a mathematical function that maps input data points to outputs, normally a value or a classification. In order to learn this mathematical function and maximise its accuracy, machine learning algorithms seek to minimise the number of errors they make by reducing the variance in the predictions made by the final model. To reduce variance, algorithms introduce assumptions, referred to as bias in the model, that make the function easier to approximate. Introducing more bias reduces the variance in your final model, but at the expense of greater divergence from the real-world reality in which your model operates; too little bias, and the predictive accuracy of your final model suffers from greater variance. As such, many machine learning algorithms expose this bias-variance trade-off through hyperparameters, so that the data scientist or mathematician can control the balance.
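
To see the trade-off in action, here is a minimal sketch using a decision tree's max_depth hyperparameter, on synthetic data invented for illustration: a shallow tree imposes strong assumptions (high bias, underfitting), while a very deep tree follows the noise in the training data (high variance, overfitting).

```python
# A minimal sketch of the bias-variance trade-off controlled through a
# hyperparameter. The data is synthetic: a sine wave plus noise.
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.uniform(0, 10, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.3, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

for depth in (1, 4, 16):  # strong assumptions -> weak assumptions
    tree = DecisionTreeRegressor(max_depth=depth).fit(X_train, y_train)
    test_error = mean_squared_error(y_test, tree.predict(X_test))
    print(f"max_depth={depth:2d}  test MSE={test_error:.3f}")

# Expect depth 1 (high bias) and depth 16 (high variance) to do worse on
# the held-out test set than an intermediate depth.
```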

However, therein lies a problem – if the goal of machine learning is to train models that maximise predictive accuracy, then any bias in the data that feeds these algorithms will be amplified as the algorithm generalises the data in order to reduce variance. If the data contains gender or racial bias, for example, this bias will be preserved and amplified in the service of model accuracy, perpetuating a vicious circle of bias and discrimination.

Word Embeddings

Let us take a look at an example algorithm to highlight how bias amplification plays out in the real world. Natural language processing (NLP) refers to a family of computer science disciplines (including information engineering, linguistics, data management and machine learning) whose goal is to help computers understand the natural language used in speech and text. Natural language processing is employed in a variety of scenarios, from interaction with chatbots and virtual agents such as Alexa, to automated foreign-language translation services, to understanding written text such as legal contracts and health reports.

One family of feature-learning techniques commonly used in natural language processing is word embeddings. A word embedding represents each word or phrase found in a given collection of texts as a vector. Words with similar semantic meaning have vectors that are close together, and the relationships between words can be quantified through vector differences. As such, word embeddings are an extremely useful tool for a wide variety of predictive tasks related to natural language. For example, when we wish to determine sentence similarity, such as when asking a chatbot or Google a question, word embeddings are employed to check whether that question has been asked before, with action taken accordingly based on a similarity index. Another example is analogical reasoning, where we wish to predict semantic relationships. A widely used implementation that learns vector representations of words is word2vec, which captures semantic relationships between words as illustrated below.
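
As a minimal sketch of the idea, the snippet below trains word2vec with the gensim library (an assumed choice) and queries the resulting vectors. The toy corpus is far too small to yield meaningful vectors – real applications train on millions of sentences – but it shows the shape of the technique.

```python
# A minimal sketch of learning word embeddings with gensim's word2vec
# implementation. The toy corpus is illustrative only; meaningful vectors
# require very large corpora.
from gensim.models import Word2Vec

corpus = [
    ["the", "dog", "chased", "the", "cat"],
    ["the", "puppy", "chased", "the", "kitten"],
    ["man", "and", "woman", "walked", "the", "dog"],
]

model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, seed=42)

print(model.wv["dog"][:5])                  # each word becomes a dense vector
print(model.wv.similarity("dog", "puppy"))  # cosine similarity between vectors
```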

Vector representation of words

Analogical Reasoning

A version of a word2vec model was trained by Google engineers on a dataset of Google News articles containing around 100 billion words in total. What happens when we apply these trained word2vec word embeddings to analogical reasoning tasks? For example, take the following analogical phrase:

Dog is to Puppy as Cat is to __________?

The Google-trained word2vec embeddings predict "Kitten". Let us look at another example:

Man is to Computer Programmer as Woman is to __________?

The trained word2vec embeddings in this case predict "Homemaker". This is a clear example of how gender bias in the data used to train a model (in this case Google News) perpetuates through to the final classifications and predictions made. A demo web application that uses the word2vec model trained by Google on the Google News dataset may be found at https://rare-technologies.com/word2vec-tutorial/.
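
These queries can be reproduced with gensim against the published GoogleNews-vectors-negative300.bin file – a hedged sketch, assuming the file (roughly 3.4 GB) has been downloaded separately; the exact nearest neighbours may vary with the version of the vectors used.

```python
# A sketch of analogical reasoning over the pretrained Google News vectors.
# Vocabulary entries use underscores for phrases, e.g. "computer_programmer".
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# "Dog is to Puppy as Cat is to ?"  =>  puppy - dog + cat
print(vectors.most_similar(positive=["puppy", "cat"],
                           negative=["dog"], topn=1))

# "Man is to Computer Programmer as Woman is to ?"
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=1))
```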

Gender Pronouns

Word embeddings are a notable example of bias perpetuation and amplification, exacerbated by the fact that they are extremely common in tools developed by large technology companies that we use every day. Microsoft Bing Translator is an example of this – a tremendously useful tool, but one which uses word embeddings and hence can preserve gender bias. For example, enter the following English phrase into Microsoft Bing Translator and select a target language that has no gender-distinct pronouns, such as Finnish or Hungarian:

English: He is a nurse. She is an astronaut.
Finnish: Hän on sairaanhoitaja. Hän on astronautti.
Hungarian: Ő egy nővér. Ő egy űrhajós.

If you then use Microsoft Bing Translator again to translate the Finnish or Hungarian back into English, you get:

She is a nurse. He is an astronaut.

You will see that the gender pronouns have been swapped around, reinforcing gender stereotypes. Because the Finnish "hän" and Hungarian "ő" carry no gender, the model must guess the English pronouns on the way back – and it guesses according to the statistical associations in its training data, pairing "nurse" with "she" and "astronaut" with "he".
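
The round trip can be scripted against the Microsoft Translator REST API – a hedged sketch, assuming the documented v3 endpoint and payload format, with YOUR_KEY and YOUR_REGION as placeholders for credentials from an Azure Translator resource.

```python
# A sketch of a round-trip translation check using the Microsoft Translator
# v3 REST API. Endpoint, headers and payload follow Microsoft's documented
# v3 contract; YOUR_KEY and YOUR_REGION are placeholders.
import requests

ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "YOUR_KEY",        # placeholder
    "Ocp-Apim-Subscription-Region": "YOUR_REGION",  # placeholder
    "Content-Type": "application/json",
}

def translate(text, source, target):
    params = {"api-version": "3.0", "from": source, "to": target}
    response = requests.post(ENDPOINT, params=params, headers=HEADERS,
                             json=[{"Text": text}])
    return response.json()[0]["translations"][0]["text"]

english = "He is a nurse. She is an astronaut."
finnish = translate(english, "en", "fi")     # pronouns become gender-neutral
round_trip = translate(finnish, "fi", "en")  # pronouns must be guessed again
print(round_trip)  # may come back as "She is a nurse. He is an astronaut."
```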

Algorithmic Determinism

Recall that both supervised and unsupervised machine learning models make an implicit assumption: that the trends and patterns identified in historic data, and encoded in the learned mathematical function, can be used to make inferences and predictions about the future. Personalised recommendations from services like Netflix and Amazon, and personal assistants and virtual agents like Alexa and Siri, all work on this same fundamental principle of algorithmic determinism – that what has happened before can be used as the context in which to predict and recommend actions in the future.
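
The feedback loop is easy to see in a toy recommender – a minimal sketch, with entirely hypothetical titles and genre tags, of a system that can only ever suggest content resembling what has already been consumed:

```python
# A minimal sketch of algorithmic determinism: a naive item-similarity
# recommender scores unwatched titles by tag overlap with the viewing
# history, so it can only recommend more of the same. All titles and
# genre tags are hypothetical.
from collections import Counter

catalogue = {
    "Crime Doc 1": {"crime", "documentary"},
    "Crime Doc 2": {"crime", "documentary"},
    "Sci-Fi Epic": {"sci-fi", "space"},
    "Nature Series": {"nature", "documentary"},
}

def recommend(watched, top_n=2):
    # Count the tags seen in the user's history, then score every
    # unwatched title by how many of its tags match that history.
    history_tags = Counter(tag for title in watched for tag in catalogue[title])
    scores = {
        title: sum(history_tags[tag] for tag in tags)
        for title, tags in catalogue.items() if title not in watched
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A user who has only watched crime documentaries is steered straight back
# to more of the same; "Sci-Fi Epic" never surfaces.
print(recommend(["Crime Doc 1"]))  # ['Crime Doc 2', 'Nature Series']
```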

However, the real danger of the inferences and predictions provided by these machine learning models is that they may perpetuate bias, stereotypes and discrimination by reinforcing historic and existing beliefs. We will continue to watch only those shows on Netflix, or read only those books on Amazon or tweets on Twitter, whose content we liked before – rarely, if ever, expanding our horizons, opening our minds to new ways of thinking, or engaging with people and content we may initially disagree with. Artificial intelligence then becomes a tool with which to maintain existing behaviour and beliefs, inflexible to behavioural change. And this is inherently dangerous to any free society – a world in which your past dictates your future.

The Future

Artificial intelligence has the power to positively transform society in a way very few technologies can, and to inspire future generations through its seemingly limitless potential. The applications of artificial intelligence are bounded only by our imaginations, and the underlying mathematics driving the latest research and developments is both beautiful and mind-blowing. However, as with all such things that have the power to affect people's lives, it can also be exploited and misappropriated – and in the case of artificial intelligence the dangers are just as profound even when the intention is good, through the unwitting perpetuation and amplification of bias and discrimination.

Sadly, bias and discrimination are a way of life for many people and groups of people, and have been for hundreds, probably thousands, of years. So whilst eradicating something so deeply entrenched in society may take generations, we can take tangible steps now to ensure that any and all data-driven systems we develop, or have developed, minimise bias perpetuation and amplification. This includes designing, and enforcing through effective policy, fully transparent and accountable algorithms, coupled with adherence to ethical frameworks that mandate governance and accountability at each stage of the system development life cycle. Finally, and arguably most importantly, we must strive to improve diversity in the fields of mathematics, science, engineering, technology and philosophy. By improving diversity in these fields along the entire academic and professional chain, we allow all areas of society to have a say in the future relationship between humans and artificial intelligence.
