Pitfalls In Artificial Intelligence (At Least, For Now)

Artificial Intelligence, we all know, has tremendous, even incalculable, potential. But it also poses real threats, mostly due to its newness and rawness, and to our relative unfamiliarity with this new world order. As leaders take their organizations’ first steps into this new wilderness, they would be well advised to temper their eagerness and desire to be early adopters. Serious pitfalls lie ahead for anyone who does not proceed with caution. In other words, how fast can you go without the wheels falling off?

Here’s what to consider.

Ethical Challenges

AI can raise complex ethical challenges, such as driverless vehicles sharing roads and streets with human drivers, robotic surgery, and urban planning (especially regarding under-represented groups in the population). Deciding how AI systems should behave in morally ambiguous situations is a challenge.

We always develop new technologies faster than our ability to recognize where regulation is needed or to enact that regulation. Meeting this challenge requires government and the private sector to show common concern and make common progress.

Bias and Fairness

Data Bias

AI systems learn from the data they are trained on, and that data is often biased. A biased AI system will most likely produce biased predictions (which are, at bottom, AI’s output), and those lead to biased decisions. There is a multiplier effect here, perpetuating and amplifying inequalities.
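As a toy illustration of how skew in training data flows straight through to predictions, here is a minimal sketch. Everything in it is invented for illustration: the hypothetical hiring data, the group labels, and the use of scikit-learn’s LogisticRegression; it is not drawn from any real system.

```python
# Toy sketch: a model trained on skewed historical data reproduces that skew.
# The data, group labels, and "hired" outcomes below are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)      # 0 = group A, 1 = group B (hypothetical groups)
skill = rng.normal(0, 1, n)        # skill is distributed identically in both groups

# Historical hiring decisions favoured group A regardless of skill:
hired = (skill + 1.0 * (group == 0) > 0.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(np.column_stack([group, skill]), hired)

# Two equally skilled candidates, differing only by group:
candidates = np.array([[0, 0.5], [1, 0.5]])
print(model.predict_proba(candidates)[:, 1])  # the group A candidate scores higher
```

Nothing in the model is malicious; it simply learned the pattern it was shown, which is exactly how the multiplier effect takes hold.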

Algorithmic Bias

Algorithms themselves can introduce bias, and because it lives in the model’s design rather than in the data, it can be even more hidden than data bias. Nefarious by nature.

Lack of Transparency

Producers of AI can be strongly tempted to fight transparency, and that can lead to mistrust and to difficulties in debugging and maintaining AI systems.

Privacy Concerns

We all know that vast amounts of personal data are out there – data on healthcare, finances, job history, ancestry, insurance, and so on. That, plus algorithmic biases, makes for big trouble.

Safety and Security

In critical domains, especially newly emerging ones like autonomous vehicles or healthcare, where algorithm-based decisions clash with human value-based decision-making, safety and even deference to human judgment must rule the day. In many cases, they might not.

Energy Consumption

Training large AI models can consume huge amounts of energy; we’ve already seen that in the cryptocurrency universe. When fusion energy takes hold, this will be a non-issue, but we’re 20 years away from that.

Data Privacy and Ownership

Who owns and controls data? What liabilities will pop up?

Job Loss or Displacement

As AI advances, we are rightfully concerned about our jobs. But as with all major workplace revolutions (the PC is the most relevant example), while some jobs will be lost, far more jobs will be created. They’ll be different, they’ll be in other places, and they’ll emerge over time, but they’ll be there. We just have to get used to looking for them.

Explainability

AI makes stuff up. It starts with one word and then predicts – based on biased data – what the next word should be ad infinitum. That’s what’s called a dumb system. But when you ask AI to explain how it arrived at its answer, it can’t. That requires intelligence, and it exposes AI as an impostor in the field of intelligence.
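To make that next-word loop concrete, here is a minimal sketch using a tiny invented bigram table rather than a real language model; the vocabulary, probabilities, and function name are all hypothetical.

```python
# Toy sketch of autoregressive generation: each word is chosen purely from what
# tended to follow the previous word in (invented) training counts. There is no
# reasoning step, and no record of "why" beyond the probabilities themselves.
import random

# Hypothetical bigram probabilities, standing in for a learned model.
bigrams = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
    "sat": {"the": 1.0},
    "ran": {"the": 1.0},
}

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = bigrams[words[-1]]
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat the dog ran the cat sat"
```

The loop produces fluent-looking output, but asking it to explain its answer would be meaningless: there is nothing behind the text except the probability table.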

Hype and Expectations

We are overestimating what AI can do now, but underestimating what it will be able to do five years from now. Managing our expectations and using AI appropriately is the challenge.

Anticipating and addressing these pitfalls requires a common dedication to collaboration among all parties involved, a difficult assignment under any circumstances. It’s the old conundrum of “Should we do what we could do?”
