ChatGPT may be the most influential “person” of 2023. When the artificial intelligence chatbot rolled out at the end of last year, it thrust a conversation that has mostly happened in tech circles for decades into the public eye.
Academics panicked at the thought of students submitting papers written entirely by AI, tech companies rushed to push out their own AI chatbots, and small-business owners and entrepreneurs behind the curve were suddenly faced with incorporating machine-learning technologies (which comprise most of what we call artificial intelligence) into their day-to-day business practices.
Here’s the thing: Artificial intelligence is not intelligent. Machine-learning applications “learn” by recognizing complex patterns in data. What they are really doing is using algorithms to predict what makes the most sense based on the data used to train them. What this means, unfortunately, is that human biases end up in the applications.
The most famous case came in 2018, when Amazon had to scrap an AI hiring tool because it was discriminating against women. How did this happen? The system was trained on the resumes of past successful candidates, learning what to look for in the selection process. But tech is a male-dominated industry, so the AI began rejecting resumes with attributes it associated with women—for example, a women’s college listed on the resume.
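The mechanism behind the Amazon example can be shown with a toy sketch. This is not Amazon’s actual system—the data, tokens, and scoring rule below are invented for illustration—but it shows how a model that simply predicts from historical patterns inherits the bias in those patterns: if past hires skewed male, a proxy attribute like a women’s college ends up with a low learned “hire rate.”

```python
# Toy illustration (hypothetical data, not Amazon's system) of how a model
# trained on biased historical hiring data penalizes a proxy attribute.
from collections import defaultdict

# Historical records: (keywords found on the resume, whether hired).
# Past hires skew toward one group, so "womens_college" never
# co-occurs with a hire in this data.
history = [
    ({"python", "mens_chess_club"}, True),
    ({"java", "mens_chess_club"}, True),
    ({"python", "womens_college"}, False),
    ({"java", "womens_college"}, False),
    ({"python"}, True),
]

def token_hire_rates(data):
    """For each token, the fraction of past resumes containing it that were hired."""
    seen, hired = defaultdict(int), defaultdict(int)
    for tokens, was_hired in data:
        for t in tokens:
            seen[t] += 1
            hired[t] += was_hired
    return {t: hired[t] / seen[t] for t in seen}

def score(resume_tokens, rates):
    """'Predict' by averaging the learned per-token hire rates."""
    known = [rates[t] for t in resume_tokens if t in rates]
    return sum(known) / len(known) if known else 0.5

rates = token_hire_rates(history)
# Two resumes with identical skills; only the proxy attribute differs.
print(score({"python", "mens_chess_club"}, rates))  # scores higher
print(score({"python", "womens_college"}, rates))   # scores lower
```

Nothing in the code mentions gender, yet the model reproduces the imbalance in its training data—which is exactly the pattern Amazon observed.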
The myth of objective tech
Andrea L. Guzman, Ph.D., an associate professor of journalism at Northern Illinois University, focuses her research on human-machine communication, specifically the way that people interact with lifelike or advanced technologies.
Guzman says there is no such thing as unbiased technologies.
“People trying to sell AI technologies will say that they’re not as biased as humans,” Guzman says. “That is just a tactic to sell. These applications are created by humans and processing data created by humans. Bias can still enter in.”
How humans unconsciously created AI bias
Meredith Broussard explores the occurrence and significance of bias in technology in her book, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, which came out in March.
“AI is not neutral or objective,” Broussard says. “[It] is a socio-technical system.” In order to understand bias in AI, she adds, “you need to start by learning more about how bias works in the real world.”
Broussard, an associate professor at the Arthur L. Carter Journalism Institute at New York University, has been signaling these issues for years, including in her 2018 book, Artificial Unintelligence: How Computers Misunderstand the World.
“We embed our own unconscious bias in the technologies we create,” Broussard says. “So when you have a small homogeneous group of people creating technology, then the technology gets the collective unconscious biases of that small homogeneous group of people.”
One solution: “Having diverse teams creating our technologies,” Broussard says. “We can empower the different voices on the team and listen to them.”
Broussard emphasizes that, just as AI is built and designed by humans, it is beholden to human laws.
“The FTC has published a strongly worded letter pointing out that there is no exemption from the law for AI systems,” Broussard says. “Entrepreneurs need to familiarize themselves with existing nondiscrimination laws and make sure that tech systems obey the law.”
Counteracting AI bias when you use artificial intelligence
Lachlan de Crespigny is cofounder and co-CEO at Revelo, a technology company that helps U.S. companies hire Latin America-based remote software developers. “Every day, we deal with new technology to better find, source, select and match candidates with companies across the U.S.,” de Crespigny says. “It’s really important that we know if there are any biases.”
De Crespigny says Revelo uses human testing at every step, comparing the results of the human and machine teams so they can catch the machines’ biases and errors. His company has also come across another issue: AI hallucinations. “AI tools, for some reason, hallucinate results,” he says. “For some reason, AI makes up imaginary information, and that can obviously lead to very wrong results.”
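The human-in-the-loop check de Crespigny describes can be sketched simply: run the same candidates past human reviewers and the model, measure how often they disagree, and audit the model when the gap grows. The threshold and data below are hypothetical, not Revelo’s process.

```python
# A minimal sketch of a human-vs-machine audit: flag the model when its
# decisions diverge from human reviewers' decisions on the same candidates.
def disagreement_rate(machine, human):
    """Fraction of candidates on which the two decisions differ."""
    assert len(machine) == len(human)
    return sum(m != h for m, h in zip(machine, human)) / len(machine)

machine_picks = [True, True, False, True, False, False]
human_picks   = [True, False, False, True, True, False]

rate = disagreement_rate(machine_picks, human_picks)
if rate > 0.2:  # threshold chosen arbitrarily for illustration
    print("Audit the model: human and machine results diverge")
```

A check like this catches both bias and hallucinated results, since either one shows up as the machine drifting away from human judgment.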
Troubleshooting bias in your own AI
Accounting for technological pitfalls is especially important when AI is core to your company or offering.
The HR tech platform Peoplelogic uses AI and machine-learning software to provide companies with surveyless employee engagement and retention insights, measuring engagement based on the way employees use the organization’s tools.
“Bias in these systems starts with the data that you put into it,” says Matt Schmidt, Peoplelogic’s founder and CEO. “We’re very intentional about how we build our models. We vet the research that we base our models on very carefully to make sure that those are done in a way that isn’t marginalizing one audience or another. When we’re building our models, we’re making sure that we’re taking enough data in a way that lacks any particular identifiers that might cause us to have an unconscious bias as we’re building it.”
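Removing identifiers from data before it reaches a model, as Schmidt describes, can look something like the sketch below. The field names are hypothetical—this is not Peoplelogic’s schema—and, as the Amazon example shows, dropping direct identifiers does not remove proxy attributes, so it is a starting point rather than a complete fix.

```python
# A minimal sketch (hypothetical field names) of stripping demographic
# identifiers from records before they are used to build a model.
SENSITIVE_FIELDS = {"name", "gender", "age", "ethnicity", "photo_url"}

def strip_identifiers(record: dict) -> dict:
    """Return a copy of the record without fields that directly identify demographics."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

raw = {
    "name": "A. Example",
    "gender": "F",
    "tool_logins_per_week": 14,
    "messages_sent": 212,
}
clean = strip_identifiers(raw)
print(clean)  # only the behavioral engagement fields remain
```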
Schmidt also notes that, although the software measures engagement without a direct conversation, it’s not a replacement for human interaction.
“Managers and HR folks tend to incorporate this into their one-on-ones or their weekly team meetings,” Schmidt says. “That’s part of a healthy culture—to be able to give them data to surface in these conversations that are already part of their workflow.”
Schmidt also agrees that it’s important to have diverse perspectives anytime you are attempting to understand machine bias.
“We have people from multiple different countries on our teams and multiple different backgrounds,” he says. “Where we lack a particular diversity—maybe it’s gender or ethnicity—then, I go and look to seek out additional advisers to help inform us.”
This article originally appeared in the Sept/Oct 2023 issue of SUCCESS magazine. All Rights Reserved.