Adrian Carr is CEO of global master data management solution provider Stibo Systems, which empowers companies through data transparency.
Since the first half of the 20th century, humans have been mesmerized by the concepts of artificial intelligence and machine learning. The idea has roots in the zeitgeist of science fiction, which glamorized humanoid robots, and was spurred on by figures like Alan Turing, who was writing about computing machinery and intelligence as far back as 1950.
“I propose to consider the question, ‘Can machines think?'” opens Turing’s landmark 1950 paper, “Computing Machinery and Intelligence.” In the Turing test, or “imitation game,” a human interrogator converses with both a human and a machine trained to generate natural language responses. If the interrogator can’t reliably tell which is which, the machine is said to have passed the test.
Today, anyone with a functioning device can tap into the world of AI. It is quite literally at our fingertips. We’ve been hearing ad nauseam about how companies are integrating tools like ChatGPT. There is a constant flow of new use cases, and it is easy to get caught up in the fervor.
Generative AI is already being applied to inbound and outbound marketing, code generation, customer service and support, blog and social media posts (no, this piece was not written by ChatGPT) and even storytelling, a use that has led Hollywood actors to voice strong opinions about the threat AI poses. All are general business examples of AI accelerating processes.
For businesses, these tools can save time and human effort, helping us with mundane, time-consuming and repetitive tasks. It’s a conversation everyone feels excited—and likely somewhat pressured—to take part in. These days, not engaging with AI means falling behind.
With the rise of easy-to-use AI-fueled applications, anyone can use AI—but how do we use it responsibly? There’s no single answer, but we know for sure that data is a driving force.
A Human In The Loop
Consider a big-name retailer planning to introduce one of its best-selling products to a new market. The first step is for the product manager to define a market-specific product description; they could prompt ChatGPT or another AI assistant for suggestions.
But for those suggestions to be high quality and useful, the data driving the query to the AI must be accurate. Effective prompts contain directives such as context, target audience, purpose and writing style, plus guidelines like tone or word choice.
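As a concrete illustration, here is a minimal sketch of such a prompt, assuming the OpenAI Python SDK and a chat-capable model; the product facts, target audience and style guidelines are hypothetical placeholders, not a prescribed method:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical product data; in practice these facts would come from
# governed, validated master data rather than being hard-coded.
product = {
    "name": "TrailLite Hiking Boot",
    "features": "waterproof leather, recycled sole, 1.1 lb per boot",
}

# The prompt bundles the directives named above: context, audience,
# purpose, and style/tone guidelines.
prompt = (
    f"Context: we are launching {product['name']} in the German market.\n"
    f"Product facts (use only these): {product['features']}.\n"
    "Target audience: urban hikers aged 25-40.\n"
    "Purpose: a product description for an e-commerce listing.\n"
    "Style and tone: friendly, concise, no superlatives, under 80 words."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works here
    messages=[
        {"role": "system", "content": "You are a retail copywriter."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```

Note that every fact the model is allowed to use is passed in explicitly, which is exactly where accurate source data matters.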
What happens next is essentially a back-and-forth between the product manager and the AI, an editing process likely going through several iterations before the team approves a final product description for the new market. It’s a delicate dance: AI accelerates the work, but a human expert’s judgment is crucial to refine and validate the result.
Stanford University’s Human-Centered Artificial Intelligence program imagines automation as the selective inclusion of humans instead of the removal of human involvement from a task. This human-in-the-loop approach results in a process that harnesses the efficiency of AI combined with human feedback, leading to a “greater sense of meaning.”
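Schematically, that human-in-the-loop review might look like the sketch below, where generate stands in for any LLM call (such as the one sketched earlier) and the approval step is deliberately simplified:

```python
def human_in_the_loop(generate, prompt, max_rounds=5):
    """Iteratively refine AI output with human feedback.

    `generate` is any callable mapping a prompt string to draft text,
    e.g. a thin wrapper around the chat-completion call sketched above.
    """
    draft = generate(prompt)
    for _ in range(max_rounds):
        print(f"\n--- Draft ---\n{draft}\n")
        feedback = input("Press Enter to approve, or type revision notes: ").strip()
        if not feedback:
            return draft  # human sign-off: the description is final
        # Fold the reviewer's notes into the next iteration of the prompt.
        prompt = f"{prompt}\n\nReviewer feedback to address: {feedback}"
        draft = generate(prompt)
    raise RuntimeError("No approval after max_rounds; escalate to the team.")
```

The machine does the drafting; the human retains the only path to "final."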
We, as humans, may be able to control our end, but what about the data that’s fueling the AI? What are the problems stemming from machines using data that may not be accurate or complete?
With Reward Comes Risk
We are already beginning to see issues with generative AI tools. What stops companies from exposing sensitive data to the outside world when, for instance, employees send it to a large language model hosted in the cloud?
In May 2023, Samsung restricted the use of ChatGPT following a leak of sensitive internal data. “Interest in generative AI platforms such as ChatGPT has been growing internally and externally,” Samsung told staff in a memo. “While this interest focuses on the usefulness and efficiency of these platforms, there are also growing concerns about security risks presented by generative AI.”
Responsible AI means prioritizing ethical and societal considerations while minimizing potential harms, such as bias and privacy violations. It involves ensuring transparency and accountability in AI systems. For AI to work in a meaningful, responsible way, the data used to prompt the machine must be accurate, relevant and subject to strong governance. In other words, AI needs to be asked the right questions with the right data.
Master data management (MDM) is pivotal for data governance and provides a single source of truth businesses can use to drive compelling product storytelling and consumer engagement. By improving data quality and consistency, MDM helps craft better prompts, for instance, leading to accurate, relevant and reliable chatbot responses.
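To make that concrete, here is an illustrative sketch of a governed “golden record” feeding a prompt; the record shape and governance check are assumptions for the example, not any particular MDM platform’s API:

```python
from dataclasses import dataclass

@dataclass
class GoldenRecord:
    """A simplified 'single source of truth' product record.

    Real MDM platforms expose richer, governed records; this shape is
    hypothetical and only illustrates the prompt-building step.
    """
    sku: str
    name: str
    approved_claims: list[str]
    markets: list[str]

def build_prompt(record: GoldenRecord, market: str) -> str:
    # Governance check: refuse to draft copy for an unapproved market.
    if market not in record.markets:
        raise ValueError(f"{record.sku} is not cleared for {market}")
    claims = "; ".join(record.approved_claims)
    return (
        f"Write a product description for {record.name} in {market}. "
        f"Use only these approved claims: {claims}. "
        "Do not invent specifications."
    )

record = GoldenRecord(
    sku="SKU-1042",
    name="TrailLite Hiking Boot",
    approved_claims=["waterproof leather", "recycled sole"],
    markets=["DE", "FR"],
)
print(build_prompt(record, "DE"))
```

The point of the sketch is the order of operations: the governed record is validated first, and the prompt is built only from approved claims.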
A Thousand Questions
If he were here today, Alan Turing would likely have a lot to say. He might want to discuss burning questions: Do we want machines to think? How far do we want to take this? And do we even have a choice at this point?
We don’t yet have answers to the host of ethical questions surrounding artificial intelligence and machine learning. But we do know that combining a human eye with well-governed data certainly positions a business to mitigate some of that risk.
Machines think; there’s no stopping them. And when done right, AI can give businesses a competitive edge, freeing up time and energy that can be put into other important projects. Inevitably, with the rewards of AI come risks, so let’s do what we can to make that thinking as responsible and ethical as possible.