AI And The Role Of Humans In AI Decision Making


by Dr. Linden Millwood, Founder-CEO at Global Reach Management Consultants, LLC.

Artificial intelligence (AI) is no longer confined to the realm of science fiction. It has rapidly evolved and become an integral part of our daily lives and industries.

AI technologies such as natural language processing, machine learning, emerging quantum computing and deep learning have made significant advancements, enabling practical applications in areas such as medicine, manufacturing and banking—just to name a few.

Caution And Fear Of AI

But this advancement does not come without its fair share of cautionary tales and doomsayers within society and the business community. Common threads of concern include privacy, job displacement, loss of human control, an AI uprising (thank you, Hollywood), and bias and discrimination.

According to some publications, there are over 500 types of phobias. While no official phobia covers fear of artificial intelligence, there is a condition known as algorithmophobia, which may be gaining recognition given how widely relevant it has become.

But there are steps being taken, and others that could be embraced, to lessen some of these fears. Here are just a few.

1. Rethink. Reimagine artificial intelligence as augmented intelligence. Framing AI this way may influence your attitude, as it emphasizes the technology’s assistive qualities over its potential to replace. It can also shape how you use the tool within a business setting.

2. Consider the human-first principle. It is important to maintain intellectual humility, particularly with respect to the data used to train AI systems. The principle of “garbage in, garbage out” still rules, as does “bias in, bias out.”

The assumption that AI can be self-correcting may itself be faulty, and a faulty premise inevitably leads to a faulty conclusion. The human-first principle holds that all systems are designed to serve, not to be served. Keep this in mind with the data you input into these systems.

3. Humans make the final call. While the human element as a safeguard should not be treated as absolute or all-encompassing, it should certainly be an integral component of the overall safety and security strategy. AI should assist, but humans must decide. A common suggestion is to let AI handle the automatable work so that people can take care of the more nuanced elements.

4. Contribute to transparency. Be part of the push for AI systems to be open and explainable, particularly regarding algorithmic fairness and the collection, use and distribution of data. Champion the practice of disclosing errors to developers and the public at large so that ethical use can be achieved.

5. Push for oversight and accountability. As part of the safeguard and security strategy, there needs to be a regulatory body with international influence that oversees the ethical use of AI in society and ensures it benefits all people.

Furthermore, the question of accountability is significant. What happens when an AI action produces adverse results? Who will be held accountable? What recourse will be available to victims of such actions? Oversight and accountability must therefore be built into the fundamental framework for mitigating the fear of AI, not treated as reactive measures or an afterthought.

In Summary

AI should be considered a tool to assist humans with decision making, not the decision maker. It may be worthwhile to reevaluate the term artificial intelligence and adopt augmented intelligence instead, to emphasize that AI enhances human intelligence rather than supplanting it.

Forbes Business Council is the foremost growth and networking organization for business owners and leaders.

