Seth Rainford is the president and co-founder of Digital Diagnostics, a pioneering AI diagnostics company.
Artificial intelligence (AI) has been in the news constantly. The New York Times, for example, published an article titled, “A.I. Poses ‘Risk of Extinction,’ Industry Leaders Warn,” and the Guardian published an article titled “AI poses existential threat and risk to health of millions, experts warn.” However, I believe coverage like this can place outsized attention on the fears surrounding AI without considering the type, application or industry of the AI in question, or weighing those fears against its benefits.
Media coverage that focuses solely on the threats of AI also raises some important questions: How is AI being defined in these articles? In other words, is all AI created equal? Who are the “experts,” and what are they experts in?
To that end, establishing what AI actually means seems like a logical place to start.
Defining AI
The term “artificial intelligence” is often used as a catchall and might not provide the right level of detail and context. There are many types of AI, and calling them all by the same name is like calling every shade of blue the same color; it just doesn’t paint the full picture. There is meaningful variation in AI that should be distinguished to make a valid comparison.
AI Design
AI design is a broad area that encompasses deep learning, convolutional neural networks, explainability, transparent versus black-box algorithms, locked systems, continuous learning systems and more.
AI Capabilities
An AI system’s capabilities depend on its design. It could be assistive AI that is used in tandem with human efforts, or it could be autonomous AI that completes a function without human oversight.
AI Applications
The applications of AI are equally varied. Staying with the example of assistive and autonomous AI, each capability tends to lead to a different kind of application. I’ve seen assistive AI used by some healthcare companies that apply AI-focused technology to develop precision pathology solutions; in my experience, most healthcare AI falls into this category. Autonomous AI, by contrast, can produce a diagnostic medical report without the need for humans to weigh in.
Weighing Expert Opinions On AI
These definitions clarify why leaders need to exercise caution when placing instant trust in “AI experts.” When commentators use blanket terms for AI and spend their time highlighting uncertainties and dangers without explanation and context, it can lead to widespread fear and sensationalism. I find this especially true in healthcare, where the level of regulation and oversight differs from other industries.
If we don’t differentiate between the types of AI and the needs of different industries, I believe we could find ourselves facing an AI moratorium. AI development could be suspended or significantly slowed while the least regulated sectors are evaluated, undermining the real-world benefits AI is already delivering in other sectors.
AI Regulation Based On Type And Industry
There is a clear need for guardrails and regulations for AI, and I’m encouraged to see this already being addressed in some places. Somewhat surprisingly, healthcare is leading the way in establishing some of those parameters. Autonomous AI systems, for example, remain rare in the commercial space; there are few instances where humans aren’t central to completing the process or producing the end result.
All of this said, there are still plenty of issues to address regarding AI regulation, including:
• Who should regulate AI in different industries?
• Where do ethics figure into AI for developers and regulators?
• What steps are being taken to address hacking, data privacy and bias?
• Should AI regulation be globally centralized or defined within each country?
The list goes on, and I don’t pretend to have the answers to many of these questions. However, in exploring this topic, I have found a few things to be consistently true that developers should consider:
• When building healthcare AI applications, having transparent, accessible and explainable data that can be trusted goes a long way toward mitigating the fears most people have.
• Having high-fidelity data is also tremendously helpful and further avoids dangerous pitfalls.
• Building AI from the beginning on a foundation of ethical principles that address bias mitigation, liability, explainability and patient benefit (again, at least in healthcare) is critical to success.
The Future Of AI
As we look to the future, it’s important for developers to keep central the principle that AI development cannot just be about creating technological advancement—so-called “glamour AI.” Rather, development should be focused on the common good and designed to be useful, impactful and applicable to most people. Finding appropriate checks and balances in this new AI landscape is essential. We need to develop robust systems to determine what guardrails are needed for different AI types. Focusing on end-user safety, developer accountability, continuous efficacy monitoring and the ethical implications of AI design should be part of the larger conversation, no matter your industry or geographical location.
As AI becomes a standard part of our lives, we need education around it, not just for specialists but for everyone who might be affected by it. The next generation will need a stronger understanding of AI and a solid foundation for evaluating it across various domains and sectors.
Final Thoughts
As developers and regulators, it’s important to exercise both caution and curiosity regarding the dangers of AI as well as its vast potential. One approach might be for the public to maintain a healthy skepticism as they take in the latest news and to avoid generically assigning anyone the role of arbiter of truth on AI. My hope is for a thoughtful approach to where we place our trust and how we ensure those making decisions fully understand the technology and its implications, both good and bad. There needs to be a healthy balance, and leaning too far in either direction will do more harm than good. Prioritizing widespread AI education while focusing on how AI can be created to serve the common goal of improving people’s lives is a great place to start.