Leadership In The Age Of AI: Collaborate With Your Competitors


Beena Ammanath – Global Deloitte AI Institute Leader, Founder of Humans For AI and Author of “Trustworthy AI.”

Artificial intelligence (AI) is one of the most powerful tools for business competitiveness and differentiation. Alongside the ongoing adoption and deployment of automation and machine learning models, the past several months have seen the rapid proliferation of a new kind of AI known as “generative AI.” With large language models, image generators, code generators and more, the AI landscape is becoming more crowded by the day.

As with other valuable business investments, there might be an inclination to closely hold hard-earned insights and use cases. Yet, when it comes to AI’s implications for enterprise risk and ethics, sharing knowledge with competitors in your industry is important for the longer-term use of these transformative tools.

The reason is twofold. First, enhancing risk mitigation and governance can promote more valuable, trustworthy AI applications. Second, governments worldwide are exploring how to regulate AI, and leading by example can encourage rulemaking that enhances AI use rather than stifles it. This only becomes more important as governments face calls to regulate generative AI. From my perspective, the timeline for regulatory action might be shortening.

With so much AI potential, businesses across every industry are making significant investments. One outcome, however, is that half of executives point to managing AI-related risk as a top challenge, according to the fifth edition of the State of AI in the Enterprise report from Deloitte, where I help lead the AI Institute. Mitigating AI risk is complicated by the fact that many of the ways these tools are used today are fundamentally new. The “rules of the road” for the trustworthy use of AI are still being discovered.

Every pilot or deployment has the potential to reveal an unrecognized risk or an effective method to mitigate it. Questions arise: What lessons do your competitors hold that could enable greater trust in your own applications? What lessons do you have that may be helpful to them? As experimentation and deployments increase across an industry, each organization is gaining valuable knowledge, and there is business logic in sharing it.

A good analogy is the mass availability of the consumer automobile. When cars became ubiquitous, best practices for this new fact of everyday life had to be invented: speed limits, safety requirements, standards for parts and all the other things that shape how a car is built, operated and maintained. At the time, it was in the interest of auto manufacturers and their customers to share knowledge across the industry.

I see a similar environment today with AI. Lessons learned in developing and deploying AI can be fed back into development cycles to mitigate risk and improve applications, with important impacts on enterprise security, efficiency, customer engagement and return on investment. Yet, if learning by doing is a hallmark of today's AI applications, it follows that there is only so much any one enterprise can glean from its own deployments. It's said that a rising tide lifts all boats, and when it comes to AI risk and governance, collaboration can elevate AI applications across an entire industry.

The exchange of leading practices is important, in part, because it moves toward industry self-regulation, wherein businesses acknowledge and follow standards and leading practices because they choose to, and because it is in the best interest of their business and customers, not because regulations compel it. Taking this proactive approach to AI risk mitigation and governance builds early confidence that the enterprise is taking steps to use AI in a trustworthy way, and it also sets an important example.

AI regulations and laws will proliferate in the months and years ahead, propelled in part by growing industry calls to regulate generative AI. An industry wherein companies are already self-regulating is positioned to help shape government rulemaking. Regulators cannot inspect at a technical level all of the AI applications that are emerging across industries, particularly as innovation and deployment are occurring at such a rapid pace. When regulators consider how to develop rules that guide AI in the marketplace, they will likely look to known harms, as well as to known remedies and preventative measures. What becomes codified may be informed by industry example. This moves toward meaningful regulations that accommodate how AI is being used. Conversely, an industry that lacks examples of trustworthy AI applications may encounter regulations that are more stringent and disruptive because regulators lack case studies in excellence.

To begin steering the organization and the industry toward self-regulatory action, look to meetings and forums where people can gather to share insights, lessons learned, emerging risks and potential solutions. Bringing together executives, subject matter experts and even line-of-business users creates fertile ground for creative thinking and momentum toward self-regulation. Business leaders might coordinate with competitors to schedule a conference, seek one another out at industry events or leverage digital platforms to connect anywhere in the world.

The goal is to create a collaborative environment that is conducive to sharing insights and working to advance the entire industry toward the trustworthy application of AI.


