The Ethics of AI: Navigating Bias, Manipulation and Beyond


By Monika Mueller

Softensity’s EVP Consulting Services and Head of LATAM Monika Mueller is a Forbes Technology Council member, and this article originally appeared on Forbes.com.

Artificial intelligence (AI) is nothing new. It’s been around since the 1950s, but 2023 certainly feels like a tipping point. No longer is AI the sole province of academics and tech professionals. With the introduction of ChatGPT, Google Bard and the like, the technology is now easily accessible to all. And therein lies the challenge.

As AI becomes ever more consumable and its capabilities continue to evolve at breakneck speed, so too will the implications for society as a whole. In an ideal world, government, industry and civil society should work together to ensure that AI is developed and implemented ethically. But the genie is out of the bottle, so to speak, and despite growing concern from AI pioneers and thought leaders alike, there’s likely no slowing it down.

Even so, there’s plenty that we can do to set up some guardrails around sticky ethical considerations. It begins with recognizing bias and minimizing manipulation by increasing transparency and opening a dialogue about the ethical challenges that AI presents.

How AI Reinforces Bias

One of the key concerns surrounding the ethics of AI is the potential for reinforcing existing biases. As discussed in a conversation with Michelle Yi, Senior Director of Applied Artificial Intelligence at RelationalAI, bias in AI systems can have far-reaching consequences. When biased data is fed into AI models, it can perpetuate biases on an unprecedented scale.

It all begins with the concept of “data in, data out.” If biased data is used to train AI models, the resulting outputs will inevitably reflect those biases. Machine learning algorithms have the power to amplify these biases, and unless we actively check for and address them, we risk perpetuating societal prejudices unintentionally.
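The "data in, data out" dynamic can be illustrated with a small, entirely hypothetical sketch. The groups, numbers and hiring scenario below are invented for illustration: if one group was historically selected far less often, a model trained to mimic those outcomes simply learns and reproduces the same skew.

```python
# Hypothetical historical hiring records as (group, hired) pairs.
# Group "B" was hired at a much lower rate -- a bias baked into the data.
historical = (
    [("A", 1)] * 60 + [("A", 0)] * 40 +
    [("B", 1)] * 20 + [("B", 0)] * 80
)

def hire_rate(records, group):
    """Fraction of applicants in `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# A model trained to reproduce these outcomes inherits the same skew:
# it would favor group A three times as often as group B.
print(hire_rate(historical, "A"))  # 0.6
print(hire_rate(historical, "B"))  # 0.2
```

Nothing about the algorithm is malicious here; the disparity lives entirely in the training data, which is exactly why checking the data itself is the first step.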

This issue becomes especially significant when AI is employed in decision-making processes, such as hiring, lending or criminal justice. Addressing bias in AI is crucial to ensure fairness and equity in all of its applications.

The Potential For AI To Manipulate Behavior

Another area of concern is the use of AI to manipulate people’s behavior. We all know how annoying it is when Alexa or Siri picks up on our conversations and serves up targeted ads accordingly. For example, you talk about needing a new bathing suit for an upcoming vacation to Hawaii, and the next thing you know, you’re inundated with swimsuit ads. With the integration of AI, the potential for behavior manipulation grows exponentially.

Imagine a future where AI can understand our sentiment or tone of voice even when we don’t explicitly express our opinions. AI will be able to use these subtle intonations to make assumptions and predictions about our behaviors, opinions and ideas. This opens the door to manipulation in everything from targeted ads to political persuasion.

Addressing Bias And Manipulation In AI

So what’s an organization to do? For starters, all AI systems should be designed so that they can be audited and reviewed. And organizations should check for biases within the data used to train AI models. A steering committee, or “model committee,” can be set up to look at models, scrutinize the rules that support them and analyze their behavior to identify and remove any built-in biases. “It can go all the way from the top down to a process level improvement,” says Michelle Yi, “and there are a lot of ways that organizations can focus on helping to address this issue.”
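One concrete check such a committee might run is a disparate-impact test in the spirit of the "four-fifths rule" used in hiring and lending reviews: compare the model's selection rate for each group against the best-off group and flag any group that falls below 80% of it. This is a minimal stdlib sketch under assumed inputs; the group labels, decision data and threshold are illustrative, not a complete audit framework.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs; returns rate per group."""
    totals, picks = defaultdict(int), defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        picks[group] += selected
    return {g: picks[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    best-off group's rate (the common 'four-fifths' heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Hypothetical model decisions: group A selected 50%, group B only 30%.
audit = (
    [("A", 1)] * 50 + [("A", 0)] * 50 +
    [("B", 1)] * 30 + [("B", 0)] * 70
)
print(disparate_impact(audit))  # {'A': False, 'B': True} -- B is flagged
```

A check like this is cheap to automate on every model release, which is what makes the "process level improvement" Yi describes practical rather than a one-time review.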

Organizations must also prioritize transparency and accountability by making their policies around AI clear to both employees and the public. It may help to create a vision statement about how the organization will leverage AI, including the company’s stance—and ethics—around it, and how AI maps back to the company’s mission statement. Bottom line: an organization’s objectives and approach in using AI must be clear to consumers, stakeholders and shareholders alike.

Industry leaders should also work with the government to establish clear rules and regulations that foster innovation while ensuring accountability and transparency. Cooperation between government, industry and civil society will be crucial in order to harness the power of AI for good and avoid its pitfalls.

Playing Our Part

The ethics of AI will impact everyone—not just people in the business world. Like it or not, as human beings and consumers we cannot escape technology’s influence. This is why it’s so important to have the conversation now, in these early phases of whatever AI grows into.

On an individual level, we must all become more discerning consumers and question the information that’s fed to us. Awareness is the first step toward mitigating the impact of manipulation. By being critical of sources and not taking information at face value, we can better protect ourselves.

Addressing the ethical challenges AI presents now is the best way to ensure that the technology reaches its potential to benefit society. Putting measures in place to remove bias, and staying vigilant about manipulation, is the first step. We must start the conversations now in order to build a framework that safeguards society’s values and fosters responsible and beneficial AI implementation.