Can AI Be As Unethical And Biased As Humans?

Cathy Ross, the finance and tech expert behind Fraud.net’s AI-powered risk management platform.

AI technology has bolstered many businesses’ ability to produce, often increasing output dramatically by automating slow, manual processes. For example, at my company, we’ve used it to streamline fraud prevention and cut review times for fraud and risk teams.

AI is of great interest to me as someone who has leveraged it as an extremely helpful tool in operating a fraud management company; the amount of manual labor it can eliminate is remarkable. However, despite this innovation and efficiency, there is still a great deal to teach our AI models. As a mentor to up-and-coming women entrepreneurs and as one of the first African American women to own a firm on Wall Street, I think it’s high time we have this discussion.

Just like humans, AI models can be biased, because they are ultimately a product of the humans who train them. We teach our children to respect others and to do right instead of wrong, and our AI models learn the same from the information we give them. So if we give a model incorrect or skewed information, or let it mirror the wrong behavior, it starts making decisions that are inaccurate at best and harmful at worst.

For example, if you’re a lender deciding who should get a loan and you rely on AI to make that decision for you, you could end up making an unfair choice if the model was trained on data that favors one group of people over another or fails to account for systemic issues (for example, the historical redlining of communities of color).
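To make that mechanism concrete, here is a minimal sketch in Python, using synthetic data and scikit-learn rather than any real lending system. Even when the protected attribute is excluded from the model, a correlated proxy (a ZIP-code-like feature in this toy example) can carry the historical bias through:

```python
# Hypothetical sketch: a lending model trained on historically biased outcomes
# reproduces that bias even though the protected attribute is never a feature,
# because a correlated proxy (a synthetic "zip_code" signal) carries it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute (never shown to the model)
zip_code = group + rng.normal(0, 0.3, n)   # proxy strongly correlated with group
income = rng.normal(50, 10, n)             # legitimate feature, identical across groups

# Historical approvals were skewed against group 1; this is the bias we inherit.
approved = ((income > 48) & ~((group == 1) & (rng.random(n) < 0.5))).astype(int)

X = np.column_stack([income, zip_code])    # note: "group" itself is excluded
model = LogisticRegression(max_iter=1000).fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"predicted approval rate for group {g}: {pred[group == g].mean():.2f}")
# The gap between the two rates shows the model learned the old bias via the proxy.
```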

How has AI bias caused trouble for businesses and individuals?

In 2018, Amazon received major backlash for discriminating against women in hiring. They didn’t intend to discriminate; in fact, the problem reportedly went undetected for years. But because Amazon was an early adopter of AI, they were also among the first to experience not only its benefits but also its downsides.

They had been using an AI recruitment tool to make quicker decisions on new hires. What they didn’t know was that they had inadvertently trained it to be biased against women: the majority of résumés submitted to Amazon over a 10-year period came from men, so the model learned to penalize résumés that included terms like “women’s.”
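A minimal, hypothetical sketch (tiny synthetic résumés, not Amazon’s actual system or data) shows how this happens: when almost every résumé labeled as a past hire comes from men, a simple text classifier learns a negative weight for gendered terms.

```python
# Hypothetical sketch, not Amazon's system: a résumé classifier trained on a
# corpus where past hires were mostly men learns to penalize gendered terms.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny synthetic training set; label 1 = historically hired, 0 = rejected.
resumes = [
    "software engineer python chess club captain",
    "software engineer java hiking club",
    "software engineer python women's coding group",
    "software engineer women's chess club captain",
    "software engineer java robotics team",
    "data analyst women's college graduate",
]
hired = [1, 1, 0, 0, 1, 0]   # skewed historical outcomes, the bias we inherit

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weight on the token "women" (the vectorizer splits "women's").
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print("weight for 'women':", round(weights["women"], 3))   # negative: the term is penalized
```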

Not only was Amazon negatively affected by AI bias in this situation; so were the candidates who never got a call for an interview, not through any fault of their own, but because of bias from a machine.

Can unethical AI actually cause harm?

Imagine a series of gas stations were robbed at gunpoint in your city last night, and the police caught a glimpse of the perpetrator on some security camera footage.

So they feed that image into a facial recognition system that scans driver’s license databases, mugshot databases and every other database they have to see if there’s a match, and your face comes up. But you didn’t commit the crime—you were in bed by 10 p.m., already sleeping.

Why, then, did your face come up as a match? This is exactly what happened to Robert Julian-Borchak Williams, who was wrongfully arrested by police in Michigan after facial recognition software falsely identified him as a suspect.

It happened again when Nijeer Parks was wrongfully accused of shoplifting in New Jersey after another facial recognition mismatch. Without accounting for systemic biases, and without refining AI recognition and decision tools, some communities can be unfairly targeted and lives upended. Establishing ethical AI can help, but there are barriers.

What are some challenges to establishing ethical AI in your business?

A big challenge is privacy. One of the easiest ways to engineer a more ethical AI is to broaden the information you feed your AI models, but if you’re a company that uses customer data to train them, you’re beholden to privacy, data consent and compliance requirements, and gathering and using that data may be slow and difficult.

Another challenge is development. Without a steady flow of new customer data, you may have to retrain your AI models on your existing data, which can be a hassle and an added expense for your development team. However, it may have a profound impact on your customers, your profits and the greater good.

How can we create an ethical AI?

Without oversight and close scrutiny, AI can act as an amplifier of bias. The principles below, however, can serve as guideposts for heading off future issues.

Nonmaleficence

This principle aligns closely with ethics in healthcare and the adage “do no harm.” Nonmaleficence in this context means making decisions and implementing AI models in ways that cause the least harm to individuals and society.

It begins with employing more oversight of AI models, including a deeper analysis of underlying training data and algorithms to remove any potential bias and perpetuation of previous discriminatory patterns. This analysis often consists of ensuring robust and equitable data sets, prescribed sets of audits and clear explainability.
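One simple audit of this kind compares selection rates across groups and flags a model when the ratio falls below the widely cited four-fifths guideline. The sketch below is illustrative only, not a complete fairness review:

```python
# Hypothetical audit sketch: compare selection rates across groups and flag the
# model if the ratio of the lowest to the highest rate falls below a threshold.
from collections import defaultdict

def disparate_impact_audit(decisions, groups, threshold=0.8):
    """decisions: 0/1 model outcomes; groups: group label per decision."""
    selected, total = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        total[g] += 1
        selected[g] += d
    rates = {g: selected[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Example: audit a batch of automated lending or hiring decisions.
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
rates, ratio, passes = disparate_impact_audit(decisions, groups)
print(rates, round(ratio, 2), "passes" if passes else "needs review")
```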

Transparency

The most powerful guard against biases, injustices and other inequities in your AI model is transparency. Designing AI systems to be transparent and modular, with comprehensive documentation, testing and oversight, provides a necessary safeguard from unexpected results and damaging repercussions.

Accountability And Governance

To be ethical, AI technology should be built on accountability, explainability and enablement. Accountability means creating a diverse mix of backgrounds and functions on your AI governance teams; explainability means requiring AI teams to document, in non-technical terms, the objectives, decisions and expected outcomes of each model; and enablement means embedding the necessary discipline around data ownership, storage, usage and sharing.
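As one illustration of what that documentation might look like, here is a hypothetical, plain-language “model card”; every field and value below is invented for the example.

```python
# Hypothetical sketch of a plain-language "model card" a governance team might
# require before a model ships; all names and values here are illustrative.
model_card = {
    "model_name": "loan_review_ranker_v2",   # hypothetical model
    "objective": "Rank applications for manual review; never auto-approve or auto-deny.",
    "owner": "Risk and fraud team",
    "training_data": "Historical applications, audited for balance across groups.",
    "known_limitations": "Sparse data for younger applicants; reviewed quarterly.",
    "fairness_checks": ["selection-rate ratio by group >= 0.8", "annual external audit"],
    "expected_outcome": "Shorter review times without shifting approval rates by group.",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```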

Learn how to establish AI ethics to reduce bias.

If you’re not already using AI in your business, it’s likely you will be someday. And without guardrails, we may see an overwhelmingly negative effect on already marginalized communities as our use of AI grows.

In the cases above, none of the harm was intentional, but the outcomes were damaging regardless of intent. So it’s important to ensure that AI is being used in an ethical and responsible manner, even when your intentions are good. With these principles and guidelines, maybe we can create a world where AI benefits everyone.
