The Key To A Better Future

Raj Verma is the CEO of SingleStore. He brings more than 25 years of global experience in enterprise software and operating at scale.

AI is everywhere.

For many people, the popularity and accessibility of generative artificial intelligence (AI) have made it feel like the future we see in science fiction movies is finally at our doorstep. This technology has sparked excitement (and trepidation) worldwide, not only among investors and governments but also among people like you and me.

Especially for businesses, the “AI Boom” represents both the potential for unprecedented innovation and the chance to bolster profit margins. However, as AI becomes more ubiquitous, we must not let the initial adrenaline rush of its seemingly infinite capabilities cause us to overlook (or deliberately ignore) the risks posed by possible bias in AI algorithms.

As a computer science engineer and the CEO of a database firm, I am thrilled by the possibilities of AI, but I firmly believe this technology should be handled with care. AI or machine learning bias is “a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning (ML) process.” Training computers to carry out tasks and recognize patterns requires large amounts of data, and that data is where bias can take root.

My motto is that without data, AI wouldn’t exist—but with bad or incorrect data, it can be dangerous.

Bias can be introduced into algorithms in multiple ways: It can reflect the unconscious or conscious prejudices of the individuals who design the ML systems, result from faulty data sets used to train those systems, and more. Biased technology, once released into the world, can have detrimental effects, including discrimination in hiring, digital redlining and misidentification in facial recognition software. I have already read about far too many heartbreaking incidents of Black men being arrested, and even jailed, after being falsely identified by facial recognition technology. These incidents are not just minor inconveniences for those affected; they are alarming failures that have cost people time, opportunities and even their freedom. It is to our collective benefit that safeguards be established so that AI does more good than harm, and so that the most vulnerable in our communities are not further marginalized.

This leads me to my key question: Do businesses have the capabilities in place to prevent or address AI bias within their systems? While it may not be possible to make AI perfect, there are steps business and tech leaders can take to mitigate biased algorithms. Here are my recommendations:

1. Collect accurate data and communicate clearly.

Your AI algorithm will only be as accurate as its data set. As always, developers should keep in mind the common types of data bias, such as cognitive bias (the prejudices of the people who collect and label the data) and analytical bias (distortions introduced by how the data is sampled and processed). We must also be careful to understand where our data comes from: ChatGPT, for example, was trained on internet text databases, including sources such as Wikipedia. Businesses should be just as transparent about the provenance of their own data.
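To make this concrete, here is a minimal sketch of what a pre-training data audit might look like in Python. The column names, thresholds and toy data are all hypothetical; the point is simply to check who is represented in your data set, and how they are labeled, before you train on it.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame,
                        group_col: str = "group",
                        label_col: str = "label",
                        min_share: float = 0.05) -> None:
    """Flag under-represented groups and uneven positive-label rates."""
    # Representation check: tiny groups tend to be poorly modeled.
    shares = df[group_col].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_share:
            print(f"WARNING: group '{group}' is only {share:.1%} of the data")

    # Label check: large gaps in positive-label rates across groups can
    # signal historical bias baked into the labels themselves.
    print("Positive-label rate by group:")
    print(df.groupby(group_col)[label_col].mean().to_string())

# Toy example: group B is under-represented and labeled positive far
# less often than group A, so both checks surface something.
df = pd.DataFrame({
    "group": ["A"] * 96 + ["B"] * 4,
    "label": [1] * 60 + [0] * 36 + [1] * 1 + [0] * 3,
})
audit_training_data(df)
```

Even a simple report like this forces a conversation about whether the data actually reflects the people the system will serve.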

2. Educate your team.

Understanding AI bias is critical not only for developers but for your entire team, especially those who will work directly with the technology and use it to make decisions. Companies can, for example, host training sessions that teach employees to analyze and contextualize AI results and to understand how personal biases may affect their work. This is especially important where AI is used to manage human interactions (like identifying hate speech) or to interact directly with users, such as via chatbots. I firmly believe that human judgment, informed by bias training, is a key strategy for combating bias.

3. Remain vigilant.

Maintaining efforts to mitigate AI bias can easily slip to the bottom of corporate “to-do” lists, and old habits can reappear despite a company’s best efforts to implement safeguards. Checking a box is not sufficient: You must analyze the outputs of your algorithms and reprogram them when bias appears. Combating personal bias, too, is a muscle each of us must constantly exercise, for the good of our technology and our communities.
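As one illustration of what analyzing your algorithm's outputs can mean in practice, here is a minimal Python sketch that computes the demographic parity gap, the spread in positive-prediction rates across groups, over a batch of predictions. The group labels and example data are hypothetical, and a single metric like this is a starting point for investigation, not proof of fairness.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the spread in positive-prediction rates across groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Run this over each batch of logged production predictions and alert
# when the gap drifts past a threshold your team has agreed on.
gap, rates = demographic_parity_gap(
    predictions=[1, 0, 1, 1, 0, 0, 1, 0],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(f"rates={rates}, gap={gap:.2f}")
```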

While I’m generally not one for regulation, I do agree there must be some oversight and consensus across the private and public sectors to ensure generative AI is handled responsibly. Recently, the Federal Trade Commission, the Consumer Financial Protection Bureau, the Civil Rights Division of the U.S. Department of Justice and the U.S. Equal Employment Opportunity Commission released a joint statement on AI, signaling that they would act against the sale or use of biased AI in an effort to uphold the core American principles of fairness, equality and justice. Likewise, the European Union has proposed the first legal framework to regulate AI. As governments take steps to protect consumers, it is also incumbent upon companies not only to develop a broader appreciation for the impact AI has, and will have, on society, but also to implement training and tangible solutions that mitigate its shortcomings.

The incredible possibilities of generative AI may feel like something out of a movie, but the impact of this technology is very real, particularly for vulnerable members of society. As companies rush to innovate, they must also ensure that their value proposition aligns with core values, and that those principles are rooted in delivering positive outcomes for humanity at large. Only then can we achieve a better future for all.
