A Practical Guide For Selecting Trustworthy AI Vendors


In the fast-paced race to adopt AI, many organizations are caught between the pressure to be the first to capture the most value and the risks they invite by being the fastest to move into the unknown. Amid this excitement, however, even the most risk-averse and conservative organizations face a challenge: AI’s inevitable integration into existing tools, sometimes without notification or consent from their users. Similarly, many of their existing tool providers, facing the same pressure to adopt AI as their customers, are quietly updating existing terms of service to allow for AI model training.

Moreover, the hype of Generative AI has birthed promises of 10x efficiency gains and cost cuts – many from products that can neither deliver nor endure. The challenge now is how organizations unfamiliar with the highly technical AI space can adapt their procurement and third-party vendor risk processes to spot unsustainable, unethical, or untrustworthy AI systems. In this article, we cover the key elements of AI vendor risk management, outlining pragmatic considerations for organizations.

Assess Vendor AI Governance

Before considering specific AI solutions, the pivotal first step is scrutinizing the vendor’s own AI governance. Ask for evidence of AI governance processes, defined roles & responsibilities for AI development, ethical codes of AI conduct, and any conformity to AI governance standards. This isn’t about favoring industry giants over startups; it’s about ensuring an AI vendor’s adherence to sound AI governance practices. Even smaller players can shine here by demonstrating outsized commitment to building effective — and safe — AI tools.

Demand Evidence for AI Solution Claims

AI claims can be grandiose, but the devil is in the details. In addition to relying on the watchful eye of government agencies such as the Federal Trade Commission (FTC), organizations should also demand evidence substantiating performance claims. Be wary of pushback based on ‘trade secrets’ – the core AI model’s structure and training techniques can be highly sensitive, but evidence of quality or performance should not be proprietary.

Understanding Data Sources and Processing Methods

One of the greatest limitations of any AI solution is the dataset used to train its underlying models. It is important to understand the source and collection methods — what population the dataset represents, how the data was filtered for low-quality or inappropriate content, and how it gets updated and kept fresh. Each of these elements informs what potential risks there may be in the downstream models and use cases that leverage this data. Metadata, descriptive statistics, and data provenance documentation should be disclosed in a datasheet and evaluated for appropriateness for the intended use case. Assessing these aspects enables your organization to implement additional controls and safeguards when needed.
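As a minimal sketch of how such a datasheet review could be operationalized, the check below screens a vendor-supplied datasheet for the disclosure elements discussed above. The field names and example values are hypothetical, not part of any standard.

```python
# Hypothetical disclosure fields a procurement team might require in a
# vendor datasheet; names are illustrative, not a formal standard.
REQUIRED_FIELDS = {
    "data_source",        # where the training data came from
    "collection_method",  # how it was gathered / what population it represents
    "quality_filtering",  # how low-quality or inappropriate content was removed
    "update_cadence",     # how the dataset is refreshed and kept current
    "provenance",         # documentation of data lineage
}

def missing_disclosures(datasheet: dict) -> set:
    """Return required disclosure fields that are absent or empty."""
    return {f for f in REQUIRED_FIELDS if not datasheet.get(f)}

# Example vendor response with two gaps to follow up on.
example = {
    "data_source": "public web crawl",
    "collection_method": "automated scraping, with licensing review",
    "quality_filtering": "",  # left blank by the vendor
    "update_cadence": "quarterly",
}
print(sorted(missing_disclosures(example)))
```

A checklist like this does not replace expert review of the datasheet’s contents, but it makes gaps in vendor disclosure explicit and auditable.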

Monitoring and Incident Reporting

AI safety research is still in its infancy, and the actual day-to-day risks of novel AI systems remain largely unknown. As your organization could be among the first to use an AI solution, you must anticipate the unexpected and demand that vendors have robust incident reporting mechanisms for the inevitable issues. A timely response and continuous improvement efforts signal a responsible AI provider. There should also be a clear path for reporting relevant incidents between both parties.

Cybersecurity and Confidentiality

Anxiety over data misuse and confidentiality breaches looms large in the AI ecosystem – particularly for highly regulated sectors such as finance and healthcare, where even ‘back office’ data may be highly sensitive and confidential. Many AI solutions risk leaking sensitive information about their training datasets in pursuit of optimal functionality and explainability. Ask vendors about their secure data handling procedures, and ensure they meet your own internal standards.

A Solid Foundation Tailored to Your Context

These five aspects pave the way for managing risks in AI procurement. They will help your organization assess risks inherent to the vendor, the risks of the exact AI product being purchased, and what key risks may still need to be mitigated by your organization. While global organizations like the IEEE are working to set future standards, organizations of all sizes should start getting more sophisticated about their AI procurement now to avoid getting caught in the latest AI marketing hype. Simply asking vendors these questions and documenting their responses is a strong step in the right direction.
