Are machine learning tools reinforcing bias in society? And if so, what can be done about it? Richard Welsh explores some of the issues affecting artificial intelligence.
Bias is everywhere in our society. It is well documented, and when it comes to equality most people agree that it is not a force for good.
But bias can be useful. Take the decision not to step in front of a bus, for instance: most of us are strongly biased towards making it.
When bias unfairly disadvantages one group over another, however, as with gender bias or racial bias, it is going to make the headlines in a bad way.
This article isn’t about that, but I wanted to put into context that bias itself is neither good nor bad; it is simply a decision (in the case of humans) made one way or another based on experience.
We consciously use tools every day, such as search engines, social networks and banking, all of which in some way exhibit bias.
Why? Because so much of the technology we use today relies on machine learning and deep learning networks that are by their nature biased. They wouldn’t work if they weren’t.
New suit
Some of these tools are, on the whole, good for society. Take banking, which uses machine learning tools to assess the usage on your credit card so that when something unusual happens it is flagged, and when something really unusual happens your card is stopped.
That might be annoying if you have just bought a Calvin Klein suit for the first time. But it would be useful if someone had stolen your credit card and bought themselves a Calvin Klein suit.
That tool is biased towards your normal pattern of spending, and behaviour that strays too far from that pattern triggers the security action.
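For the technically minded, the sketch below shows the flavour of that idea only, not how any real bank does it: a toy rule that treats spending as "normal" if it sits close to the card's history and escalates as a transaction strays further away. The amounts and thresholds are invented for illustration.

```python
# A toy "normal spending" check: the model's bias is the card's history,
# and the further a transaction strays from it, the stronger the response.
from statistics import mean, stdev

def classify_transaction(history, amount, flag_z=3.0, block_z=8.0):
    """Return 'ok', 'flag' or 'block' depending on how unusual the amount
    is compared with the card's historical transactions."""
    mu = mean(history)
    sigma = stdev(history) or 1.0          # guard against zero spread
    z = abs(amount - mu) / sigma           # distance from "normal", in std devs
    if z >= block_z:
        return "block"                     # really unusual: stop the card
    if z >= flag_z:
        return "flag"                      # unusual: flag it for review
    return "ok"

history = [12.50, 8.20, 30.00, 15.75, 22.10, 9.99, 18.40]
print(classify_transaction(history, 26.00))    # 'ok'    (close to normal)
print(classify_transaction(history, 450.00))   # 'block' (first designer suit, or fraud)
```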
However, these very same technologies are working to reinforce unfair biases that society exhibits, purely because society itself holds these biases in the first place.
It is important to state at this point that these biases are in no way the fault of the people who created the tools.
Why, then, does bias exist in machine learning and deep learning networks? In short, because the training data fed into these tools is itself biased.
Let’s explore a simplified example of how bias is introduced into a machine learning tool, illustrating a binary scenario of good and bad outcomes.
Say we were to train a machine learning network to recognise the difference between cats and cows. In the first instance we want to deliberately bias the network to demonstrate a bad outcome, so we feed it pictures of cats, cows and other random animals, and we tell the network that all the pictures of cats and cows are actually just cats.
Now the network recognises cats and cows, but believes they are all cats. This is a deliberate and maybe malicious training exercise that purposefully biased the network towards a bad outcome.
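In code, that malicious step is nothing more exotic than corrupting the labels before training. The sketch below is purely illustrative; the image names are made up and no real images or model are involved.

```python
# Deliberate bias: tell the "network" that every cow is a cat.
training_labels = {
    "ginger_cat_001.jpg":   "cat",
    "tabby_cat_002.jpg":    "cat",
    "friesian_cow_001.jpg": "cow",
    "friesian_cow_002.jpg": "cow",
    "goat_001.jpg":         "other",
}

# The malicious training step: relabel all cows as cats before training.
corrupted_labels = {image: ("cat" if label == "cow" else label)
                    for image, label in training_labels.items()}

print(corrupted_labels["friesian_cow_001.jpg"])  # -> "cat"
# Any model trained on corrupted_labels will now "recognise" cows,
# but will confidently call them cats.
```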
Let’s do it again, except this time we’re aiming for a good outcome, so we train the network with cats, cows and random animals, and we correctly identify to the network which are cats, cows and random animals.
Now we believe we have trained the network well. However, all the cows we trained it on were Friesians, and we have unwittingly introduced a bias.
The tool is now given a picture of a Jersey cow. While the shape correlates reasonably well with a cow, the colour correlates very strongly with the thousands of ginger cat pictures the network was trained on (and not at all with the black-and-white cows), so the network labels it "cat". Still a bad outcome, but not a deliberate one this time; just a function of the bias in the training data.
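A toy classifier makes the same mistake. In the sketch below (a hand-rolled nearest-neighbour vote on invented shape and colour scores, not a real network), every training cow is black and white, so colour alone separates the classes perfectly, and a ginger Jersey cow lands among the cats.

```python
# Unintentional bias: every training cow is a Friesian, so colour separates
# cats from cows perfectly in this training set. Features are invented
# (body_shape, ginger_colour) scores between 0 and 1.
training_set = [
    ((0.20, 0.90), "cat"),   # ginger cat
    ((0.30, 0.95), "cat"),   # ginger cat
    ((0.25, 0.85), "cat"),   # ginger cat
    ((0.90, 0.05), "cow"),   # Friesian: cow-shaped, not remotely ginger
    ((0.85, 0.10), "cow"),   # Friesian
    ((0.95, 0.00), "cow"),   # Friesian
]

def predict(features, k=3):
    """Vote among the k nearest training examples (squared distance)."""
    nearest = sorted(
        training_set,
        key=lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], features)))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# A Jersey cow: cow-like shape (0.8) but a strongly ginger colour (0.8).
print(predict((0.80, 0.80)))   # -> 'cat': colour outweighs shape here
```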
This is a crude example but it demonstrates the point: the data we train machine learning networks with may (in fact almost certainly will) have biases that we don’t recognise, know about, or simply choose to ignore.
The machine learning tools we deploy so widely in today’s technology do not have the benefit of context or a wider understanding of the world as humans do. In fact, they don’t “understand” anything; they simply make decisions based on the historical bias they are trained with.
Most of the really good stuff we enjoy, such as natural language processing (voice control and translation), image recognition, and recommendation engines for content, shopping and news, is based on real-world data. That data has invariably been gathered by and from humans.
Society’s biases have naturally influenced this data, and so the tools absorb the deeply ingrained bias concealed within it. Far more troubling is that they are not only learning these biases, they are potentially reinforcing them.
Consider that most of these networks are not trained once and let loose; they are continually learning from new input data reaching into almost every aspect of our behaviour.
If you assume that society is in any way influenced by technological input, such as the presentation of the news, what your friends are doing and saying, or advertising, then this is very obviously a reinforcing feedback loop.
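A crude simulation shows how quickly such a loop can amplify a small skew. This is an assumption-laden toy, not a model of any real recommendation engine: the system shows whichever topic has been clicked most, and users mostly click what they are shown.

```python
import random

random.seed(0)
clicks = {"topic_a": 55, "topic_b": 45}    # a modest initial skew

def recommend(history):
    """Show whichever topic has attracted the most clicks so far."""
    return max(history, key=history.get)

for _ in range(10_000):
    shown = recommend(clicks)
    other = "topic_b" if shown == "topic_a" else "topic_a"
    # Users mostly engage with whatever is put in front of them.
    clicks[shown if random.random() < 0.9 else other] += 1

total = sum(clicks.values())
print({topic: round(count / total, 2) for topic, count in clicks.items()})
# A 55/45 starting point typically ends up near 90/10: the loop has
# amplified its own history rather than learned anything about the topics.
```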
So what’s the solution?
Unfortunately, the answer to that isn’t as easy as highlighting the problem.
Oversimplifying again: to remove unhealthy biases from widespread, everyday tools we could attempt to clean up the data to remove any bias. But how do we identify it in order to remove it, and what would be the consequences?
It’s entirely plausible that an attempt to sanitise training data to remove one or more sets of biases might reduce the effectiveness of the resulting tool, such that its recommendations were not deemed to be “as good” and customers left in their droves.
This would be a commercial disaster, and the old algorithm would quickly be reinstated. Attempting (for all the right reasons) to skew the training data to achieve a more socially desirable result could easily have the opposite effect.
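One of the simpler "clean up the data" approaches is to rebalance the training set so that an under-represented group appears as often as the others. The sketch below illustrates the mechanics only; the groups and data are invented, and, as argued above, changing the data in this way also changes what the tool goes on to learn, for better or worse.

```python
import random
from collections import Counter

random.seed(1)

# Invented training examples, each tagged with the group it came from.
training_set = (
    [{"group": "A", "features": [random.random()]} for _ in range(900)] +
    [{"group": "B", "features": [random.random()]} for _ in range(100)]
)

def rebalance(examples):
    """Oversample under-represented groups until every group is equally common."""
    by_group = {}
    for example in examples:
        by_group.setdefault(example["group"], []).append(example)
    target = max(len(members) for members in by_group.values())
    balanced = []
    for members in by_group.values():
        balanced.extend(members)
        balanced.extend(random.choices(members, k=target - len(members)))
    return balanced

print(Counter(e["group"] for e in training_set))             # A: 900, B: 100
print(Counter(e["group"] for e in rebalance(training_set)))  # A: 900, B: 900
```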
The bottom line is that we don’t really understand what a machine learning algorithm is doing when it makes a decision; we just keep on training it until it appears to do what we want.
These algorithms don’t have the benefit of true intelligence, understanding of context or any semblance of what we could call empathy for the human experience.
As we move ever closer to general artificial intelligence, we may see some of these challenges resolve themselves, but for now we must simply be conscious of the risk.
Understanding the implications of the use of these tools is no different from understanding the health risks of a fast-food diet; it is simply a matter of public education.
Importantly, we should understand that while we are necessarily training the machines and imparting our collective biases upon them, we must be very careful not to allow those machines to train us.
Richard Welsh is SMPTE Education Vice President, Co-Chair HPA Tech Retreat UK and Chief Executive of Sundog Media Toolkit