Do you trust AI?
We all know that AI, as rapidly as it has been developing, is still an unreliable partner as we scour the universe for answers, data, information, and sources.
70% of AI output is suspect.
By many experts’ accounts, up to 70% of AI’s output is faulty, ranging from slightly off to blatantly false. Much of this has to do with how we phrase our questions and instructions to AI, which then draws on its vast resources, instant response, and natural language processing (NLP). But AI is still the villain: it just flat-out makes stuff up. We all know it, but spotting the problems is easier said than done.
The First Step To Taming AI
Before doing another AI query, understand what categories of blunders AI makes, and lo and behold, we find that this is more of a communication issue than it is a data or other technical issue. Knowing that, it’s easier to deal with specific foibles.
From The Classroom to Practice
Having taught Executive Communication in the MBA and MAS programs at Fairleigh Dickinson University for 15 years before retiring from teaching in 2018, I didn’t realize at the time that the principles of good communication would apply not only to traditional practices, but also to this as-yet futuristic thing called AI. Now, though, it’s obvious.
See my 9/13 Forbes.com post
So, while AI has made significant advances in NLP, it still has the very real potential to throw a monkey wrench into the works, more often than not. Please see my post of 9/13/23, “Pitfalls in Artificial Intelligence”…
That’s the groundwork.
To begin defining AI blunders, I called on my own experience in leadership and communication, the advice of three respected colleagues, and AI itself. Yes, I asked AI what’s wrong with AI. True to form, a couple of its answers were spot-on and more were superfluous, but the biggest blunder was missing some of the most obvious ones. In other words, as I’ve written before, AI has no self-awareness.
AI Blunders: 10 Big Ones
Here are 10 common blunders or shortfalls that AI may exhibit when answering some of your simplest questions.
Not handling ambiguity well. Not even close. Usually.
It’s one of AI’s biggest faults. AI may not effectively handle ambiguous questions, either providing multiple conflicting answers or asking for clarification when it’s not necessary. AI is really a dumb system.
Biased responses.
If the training data is biased, AI’s responses to you will be too. The trouble is, this is tough to spot, yet it can generate harmful stereotypes or opinions, not to mention bad decision-making.
Misinterpreting context.
AI struggles with context, usually unable to determine intent, relevance, or historic underpinnings.
Unnecessarily verbose responses.
Because AI builds its response one word at a time, predicting each next word from the ones that came before, it doesn’t understand a basic communication principle: when you’re done communicating, stop talking.
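To make that one-word-at-a-time idea concrete, here is a toy sketch (a hand-made lookup table, not a real language model): each word is chosen greedily from the previous word’s probabilities, and the response only ends when a stop token happens to win.

```python
# Toy illustration of autoregressive, one-word-at-a-time generation.
# Probabilities are invented for demonstration; real models condition
# on the entire preceding text, not just the last word.
next_word = {
    "the":    {"answer": 0.6, "question": 0.4},
    "answer": {"is": 0.9, "<end>": 0.1},
    "is":     {"simple": 0.6, "<end>": 0.4},
    "simple": {"<end>": 0.7, "and": 0.3},
}

def generate(start, max_words=10):
    words = [start]
    for _ in range(max_words):
        options = next_word.get(words[-1])
        if not options:
            break
        best = max(options, key=options.get)  # greedy: most likely word wins
        if best == "<end>":                   # stops only when "<end>" is predicted
            break
        words.append(best)
    return " ".join(words)

print(generate("the"))  # "the answer is simple"
```

The point of the sketch: there is no global plan for the message as a whole, so nothing tells the model it has already said enough until the stop token outscores every continuation.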
Repetition.
AI may repeat certain phrases or keywords excessively, making answers sound robotic or unnatural. By junior high school, we should have learned to avoid this.
Lack of current information.
Most AI models have a knowledge cutoff date, meaning they cannot provide information or updates beyond it. In response to a recent query of mine, AI admitted its cutoff date was September 2021. Really. In September 2023.
Inappropriate content.
Lacking judgment or morals, AI tries to be responsible, but that ends where the programming does. Usually, AI will decline to answer a query on these grounds, but you never know when that will play out. Brace yourself.
Unable to infer intent.
AI often struggles to figure out the underlying intent of a question, resulting in answers that don’t fully address the user’s needs. Humans manage this through associative thinking. Or hunches. Impossible for AI in its current form.
Unwanted complexity.
Have you ever worked for someone who didn’t understand the “K.I.S.S.” (Keep It Simple, Stupid) principle? Ask him what time it is and he’ll tell you how to build a watch. AI often answers simple questions like that. Annoying.
Incorrect information.
This is not new to AI. It’s the age-old garbage-in-garbage-out conundrum. One issue at a time, we can deal with that, but with the sheer volume of AI usage, good luck.
It’s important to remember that AI users – that would be us humans – should always verify and critically assess what we get from AI. Otherwise, we’ll hasten the day that AI does what we’re all afraid of. Y’know…