
Let’s Face It: Bias in Technology

How can an algorithm or data itself have bias?

From street corners to sports stadiums to our own doorsteps, surveillance technology is ever-present. While many people are embracing this technology, others worry that it goes too far. After complaints about bias, for example, some cities have banned the use of facial recognition technology by their police departments and city officials.

According to Jennifer Priestley, a data scientist at Kennesaw State University, there is some validity to their concerns.

“Facial recognition technology is based on algorithms and data sets,” she explains. “In this case, it’s the algorithm that inconsistently recognizes faces of people of color, and it’s that inherent bias in technology that’s justifiably calling the applications into question.”


But just how can an algorithm (a defined set of rules for solving a problem) or even the data itself have bias?

“Bias can be manifested in algorithms because they are developed by humans, and humans have bias. Those biases often manifest in the technology as it is built.”

Jennifer Priestley, data scientist and director of the Analytics and Data Science Institute at Kennesaw State University
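One way to make that concrete is to measure a model’s error rate separately for each demographic group in its test data instead of relying on a single overall accuracy number. The short Python sketch below is purely illustrative: the group names, labels, and predictions are invented placeholders, not results from any real facial recognition system.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """Compute the misclassification rate for each demographic group.

    `records` is a list of (group, true_label, predicted_label) tuples --
    hypothetical evaluation data, not output from any real system.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Toy evaluation set: a single aggregate accuracy number can hide
# a large gap between groups.
sample = [
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_a", "no_match", "no_match"), ("group_a", "match", "match"),
    ("group_b", "match", "no_match"), ("group_b", "match", "match"),
    ("group_b", "no_match", "match"), ("group_b", "match", "no_match"),
]
print(error_rate_by_group(sample))
# {'group_a': 0.0, 'group_b': 0.75}
```

Disaggregated checks like this are one common way researchers surface the kind of inconsistency Priestley describes, though real audits are considerably more involved.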

Priestley says there are generally three types of bias that affect the algorithms we encounter every day:

1. Pre-Existing Bias

This bias is inherent to human beings. Priestley points to the example of Google and LinkedIn, which recently discovered misogynistic bias within their algorithms. In LinkedIn’s case, she explains, when a user typed a name into the search bar, the auto-generation feature almost always suggested a traditionally male name.
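As a toy illustration of how pre-existing bias in data flows straight through to a feature, consider an autocomplete that simply suggests the most frequent past query matching a prefix. The search log below is invented for the example and has nothing to do with LinkedIn’s actual system; the point is only that a majority vote over skewed history reproduces the skew.

```python
from collections import Counter

# Hypothetical search log; the imbalance toward one name is invented
# purely to illustrate the mechanism.
historical_queries = [
    "andrew", "andrew", "andrew", "andrew", "andrea",
    "christopher", "christopher", "christopher", "christina",
]

def autocomplete(prefix, history):
    """Suggest the most frequent past query that starts with `prefix`.

    Because the suggestion is just a majority vote over past behavior,
    any imbalance in the history shows up in every suggestion.
    """
    matches = [query for query in history if query.startswith(prefix)]
    if not matches:
        return None
    return Counter(matches).most_common(1)[0][0]

print(autocomplete("andre", historical_queries))   # 'andrew', never 'andrea'
print(autocomplete("christ", historical_queries))  # 'christopher'
```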

2. Technical Bias

In this case, the bias comes from our overconfidence in the ability of an algorithm to make intelligent decisions. Priestley offers the technology in a self-driving car as an example. “If the technology cannot decipher an object – or a person – in the road, it will continue to decipher, delaying a decision, which could lead to injury or death,” she says.
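A highly simplified sketch of that failure mode follows. The confidence values are hard-coded and the “model” is a stand-in; the point is only that a loop which waits for certainty can postpone a decision indefinitely when the input stays ambiguous.

```python
def classify(frame):
    """Stand-in for a perception model: returns (label, confidence).

    Hard-coded to simulate an object the model cannot confidently identify;
    no real self-driving stack works this way.
    """
    return ("unknown", 0.40)

def decide(frames, threshold=0.9):
    """Naive control loop that trusts the model to become confident eventually.

    If the object stays ambiguous, the decision is postponed frame after frame
    instead of falling back to a safe default such as braking.
    """
    for count, frame in enumerate(frames, start=1):
        label, confidence = classify(frame)
        if confidence >= threshold:
            return f"acted on {label} after {count} frames"
    return f"no decision after {len(frames)} frames"

print(decide(frames=list(range(30))))  # 'no decision after 30 frames'
```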

3. Emergent Bias

This type of bias emerges from technology like the recommendation systems Netflix and Amazon use to personalize what their customers see. “The feedback loops built into their products, and similar technology, which help to curate our newsfeeds on social media, have resulted in people being essentially cut off from interests outside of their own,” she says. “What developers didn’t know was that downstream these recommendations would be more divisive than helpful.” For example, if you click on something extremist once, you’ll likely keep getting comparable content in the future, which Priestley contends “is causing a greater divide in this country on the socio-political spectrum.”
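To see how such a feedback loop can collapse a feed onto a single topic, here is a deliberately crude simulation. The topics, weights, and boost factor are all invented, and a greedy recommender stands in for the far more sophisticated systems Priestley is describing.

```python
TOPICS = ["news", "sports", "music", "extreme"]

def recommend(weights):
    """A greedy recommender: always show the topic with the highest weight."""
    return max(weights, key=weights.get)

def simulate(clicks=20, boost=1.5):
    """Feedback loop: each time a topic is shown and clicked, its weight grows,
    so whatever gets an early advantage keeps being recommended."""
    weights = {topic: 1.0 for topic in TOPICS}
    weights["extreme"] *= boost  # one early click on extreme content
    history = []
    for _ in range(clicks):
        topic = recommend(weights)
        weights[topic] *= boost  # the loop reinforces what was just shown
        history.append(topic)
    return history

feed = simulate()
print(feed[:5])    # ['extreme', 'extreme', 'extreme', 'extreme', 'extreme']
print(set(feed))   # the feed collapses to a single topic
```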

Whatever the type of bias, Priestley says the first step is understanding the problem. Like many algorithms built to solve specific problems, facial recognition technology can be used in ways that weren’t imagined when it was first built.

Whether the bias lies in the algorithm itself or in the programmer who developed it, Priestley believes there is a simple way to address part of the problem.

“It comes down to diversity. If we are going to tackle bias, the only way we can do that is to increase the diversity among data scientists,” she says. “The more people involved in the process who have different life experiences, and the more diverse data scientists and programmers are, the more likely it is that we will have less bias in algorithms and technology.” And ultimately, she says, that will make technology like facial recognition safer and more effective.
