Biased Robot

Jade McDaniels
3 min read · May 7, 2021

The things that we don’t see.


As technology has progressed, tasks that once required real effort can now be done with a few keystrokes or the click of a button. Bits and pieces of our daily lives are being handed over to artificial intelligence (AI), and we love it. It is no secret: technology makes our lives easier, as it should. From the applications on our cellphones to the software used in doctors’ offices, technology now assists with decision making by compiling data, manipulating it, and offering a conclusion.

In its modern sense, artificial intelligence refers to a computer or machine capable of learning and decision making. Within the realm of AI and algorithmic decision making (ADM) lies the concern of bias. Bias is a disproportionate weight in favor of or against an idea or thing; statistical bias results from unfair sampling of a population, or from an estimation process that does not give accurate results on average. Algorithmic decision-making machines learn by being fed copious amounts of data related to a topic, and the idea is that when presented with a similar situation, the machine will be able to make an appropriate judgement. But what happens when the learning data is flawed?
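To make that worry concrete, here is a minimal, hypothetical sketch (the data and the `train` helper are invented for illustration, not taken from any real system) of how a model that learns from skewed historical data simply reproduces that skew:

```python
from collections import defaultdict

# Toy "historical hiring" data: (group, hired) pairs. The labels are
# skewed: group "A" was hired far more often than group "B", regardless
# of any real difference in qualification.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

def train(data):
    """A naive 'model' that learns the hire rate per group from the data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, hired in data:
        counts[group][0] += hired
        counts[group][1] += 1
    return {g: hired / total for g, (hired, total) in counts.items()}

model = train(history)
# The model faithfully reproduces the bias baked into its training data:
print(model)  # {'A': 0.8, 'B': 0.2}
```

Nothing in the algorithm is "wrong" in a technical sense; the flaw lives entirely in the data it was handed.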

Bias in algorithms is a sticky debate because it sits at the intersection of social constructs and technology. Technology has the power either to help remedy the bias present in society or to exacerbate it further. There are several issues when it comes to measuring bias. The first: how do you measure something you are unaware of? You don’t know what you don’t know, and not knowing makes it harder to create a test that can assess the presence of bias. Further complicating matters, testers and researchers use different tools for measuring bias, which can affect the strength of any suspected correlation.

The next argument focuses on what is fair and how we measure it. There is no agreed-upon mathematical or legal definition of a fair algorithm. Some would say that an algorithm is only as fair as its data: an algorithm can make a “correct” decision based on the faulty dataset it was given, but when that decision is applied in a human context, we have reason for concern.
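One candidate mathematical definition, often called demographic parity, can be sketched in a few lines. The `demographic_parity_gap` helper and the data below are invented for illustration, and this is only one of several competing fairness definitions, some of which contradict each other:

```python
def demographic_parity_gap(decisions):
    """Absolute difference in positive-decision rates between two groups.

    `decisions` is a list of (group, decision) pairs, decision in {0, 1}.
    A gap of 0 satisfies demographic parity; larger gaps mean the
    algorithm favors one group's members for positive decisions.
    """
    rates = {}
    for g in {group for group, _ in decisions}:
        outcomes = [d for group, d in decisions if group == g]
        rates[g] = sum(outcomes) / len(outcomes)
    a, b = rates.values()  # assumes exactly two groups
    return abs(a - b)

# 70% of group A's applications approved vs. 40% of group B's:
decisions = [("A", 1)] * 70 + [("A", 0)] * 30 + [("B", 1)] * 40 + [("B", 0)] * 60
print(round(demographic_parity_gap(decisions), 2))  # 0.3
```

Even this simple metric forces a value judgement: it treats equal approval rates as fair, which other definitions (such as equal error rates per group) reject.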

The question then becomes: is it the job of the algorithm’s creator, a researcher, or the machine’s user to assess our machines for bias? There is no single correct answer. The algorithm’s engineer may not have a thorough understanding of the social sciences or of the implications of their chosen dataset. Researchers have different tools for testing algorithmic bias but no concrete standard procedure for that testing. A lot of pressure falls on the machine’s user to differentiate between biased and unbiased outcomes, yet users often blindly trust ADM machines, assuming that the intentions behind them and the outcomes of their use are fair. So the question becomes: just how much of the decision-making process should AI be responsible for?

My research has honestly left me with more questions than answers. The way I see it, having questions is good. Questions signify an awareness of work that needs to be done and promise the possibility of change in the future.
