As Google Retracts Gemini, Deepak Chopra Reflects On Morality In AI

News Room

No algorithm is morally agnostic.

The latest controversy over Google’s generative AI model, Gemini, underscores just that. Shortly after the news broke, I had a profound conversation with Dr. Deepak Chopra, the physician, teacher and prolific author who coined the term “quantum healing,” exploring consciousness and moral values in a rapidly changing technological sphere.

Gemini, Google’s new large language model, was just issued a very public penalty card, leading the tech giant to temporarily block access to its image-generating abilities. The model was benched following widespread criticism of ethnically biased and historically inaccurate generated images, which potentially stemmed from Google’s attempt to introduce diversity into the model and thereby correct past issues.

In the case of image-generating models, spotting bias can be straightforward, or at least attainable for users with varying levels of literacy and tech savviness. Detecting it in algorithms for health and well-being is far less so.

Bias is much more difficult to detect in an intricate, non-transparent medical model recommending a diagnosis or course of treatment, but it exists nonetheless. The challenge ranges from a lack of diversity in the data we collect (developers opting for the more accessible electronic records of Western, affluent populations visiting digitized clinics), to human interpretation in labeling data, to the weighting of some variables over others in training. Alongside the immense potential AI holds for improving the quality and equality of care, the field remains largely unregulated, with long-term risks insufficiently mapped.

Given all this, could AI improve our health and well-being while respecting moral values? Dr. Chopra thinks it could.

“AI is invented by humans, so it will always be influenced by moral values,” Chopra acknowledges. “But in fact, let’s change AI to human augmented reality. These new technologies are not essentially different, they are only extending the virtual reality we are already immersed in. We are all living in a virtual reality; your body is part of the virtual reality, your mind is part of the virtual reality – everything is virtual reality. However, consciousness is not a product of the physical world. Consciousness is that in which all experience occurs; mind, body and matter are all human constructs, there is nothing other than consciousness modifying itself into perceptual and mental activity.”

So the moral and ethical risk in AI is not different? “We first have to agree that the evolution of technology is unstoppable,” Chopra says. “That has been the history of technology since fire was discovered, the wheel was discovered, the industrial revolution came about and so forth. Technology comes with great risk as well, but it’s unstoppable. Its evolution is unstoppable. The sad thing in my view is that the emotional and spiritual evolutions have not actually kept up with our technological capacities.”

I wonder then if AI could be used to catch up. Dr. Chopra smiles. “This very issue is covered in my next book,” he says, adding that it’s scheduled to be published by Penguin Random House around December. “Its title will be Digital Dharma: How to Use AI to Raise Spiritual Intelligence and Personal Well-Being.”

By ‘raising spiritual intelligence,’ do you mean AI could elevate our consciousness? “Consciousness is everything,” he says. “It’s infinite; you cannot improve something that is infinite. But you can affect the trajectory it takes.”

Dr. Chopra’s poignant words in the context of recent technological advancements reaffirm our personal and collective responsibility to consciously guide the trajectory of AI implementation.
