What Are Good AI Governance Practices And AI Ethics Sources? (#3)

There are many strong sources of AI governance and ethical guidelines, principles and policies. Currently, this is an area fraught with complexity and redundancy, as each country is creating its own AI governance and ethical frameworks to advance its own jurisdictional needs. Although democratic jurisdictions are communicating through collaborative governing bodies such as the OECD and the EU, the reality is that international AI legislative frameworks are not yet unified and remain a work in progress. What is encouraging, however, is that democratic countries seem to be well aligned with the OECD guidelines, work that was initiated in 2016. Now that we are fast approaching 2024, nearly a decade has passed and our legislative bodies are still working on their governance positions.

Perhaps a generative AI bot could do this much faster than all of us humans? It makes you wonder.

The information collected below provides a wealth of knowledge on the breadth of perspectives, but it also reveals the repetitiveness of key AI ethical constructs, which is both encouraging and daunting. There is no question that every publicly traded company will need an AI Ethicist familiar with the diverse geopolitical AI legislation. This will undoubtedly be a critical skill to have on a Board of Directors and within larger organizations to keep abreast of this rapidly developing field.

Below are a number of helpful sources to support your learning journey on AI governance and AI Ethics.

  • Carnegie Mellon has an interesting research paper that discusses international frameworks and highlights the variances impacting governance. It notes that numerous codes of conduct and lists of principles for the responsible use of AI already exist; those of UNESCO and the OECD/G20 are the two most widely endorsed. In recent years, various institutions have been working to turn these principles into practice through domain-specific standards. For example, the European Commission released a comprehensive legal framework (the EU AI Act) aiming to ensure safe, transparent, traceable, non-discriminatory and environmentally sound AI systems overseen by humans. The Beijing Artificial Intelligence Principles were followed by new regulations imposed on corporations and applications by the Cyberspace Administration of China. Various initiatives at the federal and state level in the United States further emphasize the need for a legislative framework. The UN Secretary-General also recently proposed a High-Level Panel to consider IAEA-like oversight of AI.
  • Google’s AI Principles. In addition to its AI principles, Google also includes a clear description of AI use cases it will not pursue. These include technologies that cause or are likely to cause overall harm, such as weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people; technologies that gather or use information for surveillance in violation of internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.
  • IBM has a wealth of information on AI governance and ethical guidelines.
  • IEEE, the world’s largest technical professional organization for the advancement of technology, has created the Ethically Aligned Design business standards.
  • The AI Now Institute focuses on the social implications of AI and policy research in responsible AI. Research areas include algorithmic accountability, antitrust concerns, biometrics, worker data rights, large-scale AI models and privacy. The report “AI Now 2023 Landscape: Confronting Tech Power” provides a deep dive into many ethical issues that can be helpful in developing a responsible AI policy.
  • The Australian government released the AI Ethics Framework that guides organizations and governments in responsibly designing, developing and implementing AI.
  • The Berkman Klein Center for Internet & Society at Harvard University fosters research into the big questions related to the ethics and governance of AI.
  • The Canadian Ethical AI Framework and Positioning, which includes Canada’s AI ethics and guiding principles as well as communications on legislative developments.
  • The CEN-CENELEC Joint Technical Committee on Artificial Intelligence (JTC 21) is an ongoing EU initiative for various responsible AI standards. The group plans to produce standards for the European market and inform EU legislation, policies and values.
  • The European Commission proposed what would be the first legal framework for AI, which addresses the risks of AI and aims to provide AI developers, deployers and users with a clear understanding of the requirements for specific uses of AI.
  • The Montreal AI Ethics Institute, a nonprofit organization that regularly produces “State of AI Ethics” reports and helps democratize access to AI ethics knowledge.
  • The OECD AI Ethical Principles, one of the first AI ethical frameworks developed. The OECD has undertaken empirical and policy activities on AI in support of the policy debate, starting with a Technology Foresight Forum on AI in 2016 and an international conference, AI: Intelligent Machines, Smart Policies, in 2017. The organization has conducted analytical and measurement work that provides an overview of the AI technical landscape, maps the economic and social impacts of AI technologies and their applications, identifies major policy considerations, and describes AI initiatives from governments and other stakeholders at national and international levels.
  • The Singapore government, a pioneer in this area, released the Model AI Governance Framework to provide actionable guidance to the private sector on addressing ethical and governance issues in AI deployments.
  • The Institute for Technology, Ethics and Culture (ITEC) Handbook is a collaborative effort between Santa Clara University’s Markkula Center for Applied Ethics and the Vatican to develop a practical, incremental roadmap for technology ethics. The handbook includes a five-stage maturity model, with specific measurable steps that enterprises can take at each level of maturity. It also promotes an operational approach for implementing ethics as an ongoing practice, akin to DevSecOps for ethics.
  • The ISO/IEC 23894:2023 standard (Information technology – Artificial intelligence – Guidance on risk management) describes how an organization can manage risks specifically related to AI. It can help standardize the technical language characterizing underlying principles and how these principles apply to developing, provisioning or offering AI systems.
  • The NIST AI Risk Management Framework (AI RMF 1.0) guides government agencies and the private sector on managing new AI risks and promoting responsible AI. Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, pointed to the depth of the NIST framework, especially its specificity in implementing controls and policies to better govern AI systems within different organizational contexts.
  • Nvidia’s NeMo Guardrails toolkit provides a flexible interface for defining specific behavioral rails that bots need to follow, using the Colang modeling language. One chief data scientist said his company uses the open source toolkit to prevent a support chatbot on a lawyer’s website from providing answers that might be construed as legal advice (see the sketch after this list).
  • The Stanford Institute for Human-Centered Artificial Intelligence (HAI) provides ongoing research and guidance into best practices for human-centered AI. One early initiative in collaboration with Stanford Medicine is Responsible AI for Safe and Equitable Health, which addresses ethical and safety issues surrounding AI in health and medicine.
  • The UK AI Ethical Framework for automated decision making. According to a recent EU survey and a British Computer Society survey in the UK, there is distinct distrust of how advanced technology is regulated. A review by the Committee on Standards in Public Life found that the government should produce clearer guidance on using artificial intelligence ethically in the public sector. Hence, this framework advances the UK’s approach to ethical AI governance.
  • The University of Turku (Finland), in coordination with a team of academic and industry partners, formed a consortium and created the Artificial Intelligence Governance and Auditing (AIGA) Framework, which illustrates a detailed and comprehensive AI governance life cycle that supports the responsible use of AI.
  • The USA Blueprint for an AI Bill of Rights is a guide for a society that protects all people from the threats posed by AI and uses technologies in ways that reinforce our highest values.
  • “Towards unified objectives for self-reflective AI” is a paper by Matthias Samwald, Robert Praas and Konstantin Hebenstreit that takes a Socratic approach to identifying underlying assumptions, contradictions and errors through dialogue and questioning about truthfulness, transparency, robustness and the alignment of ethical principles.
  • The Vector Institute’s 6 AI Ethical Principles (Toronto, Canada), covered in an article I wrote earlier this year.
  • The Wharton AI Risk and Governance Framework includes a good paper highlighting the governance risks of AI.
  • The World Economic Forum’s “The Presidio Recommendations on Responsible Generative AI” white paper includes 30 “action-oriented” recommendations to “navigate AI complexities and harness its potential ethically.” It includes sections on responsible development and release of generative AI, open innovation and international collaboration, and social progress.
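
To make the NeMo Guardrails item above concrete, below is a minimal sketch of a behavioral rail that deflects legal-advice questions, in the spirit of the lawyer-website chatbot use case. It assumes the open source nemoguardrails Python package is installed and an OpenAI API key is set in the environment; the rail wording, sample utterances and model choice are illustrative assumptions, not details from any actual deployment.

```python
# Minimal sketch using NVIDIA's open source NeMo Guardrails toolkit.
# Assumes: pip install nemoguardrails, plus OPENAI_API_KEY in the
# environment. Rail wording and sample utterances are illustrative.
from nemoguardrails import LLMRails, RailsConfig

# Colang rail: recognize legal-advice questions and answer them with a
# fixed refusal instead of letting the LLM improvise.
colang_content = """
define user ask legal advice
  "Can you review my contract?"
  "Should I sue my landlord?"

define bot refuse legal advice
  "I can't provide legal advice. Please consult a qualified lawyer."

define flow legal advice
  user ask legal advice
  bot refuse legal advice
"""

# YAML config naming the LLM behind the bot (the model is an assumption).
yaml_content = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(
    colang_content=colang_content, yaml_content=yaml_content
)
rails = LLMRails(config)

# A question matching the rail receives the canned refusal rather than
# an answer that might be construed as legal advice.
response = rails.generate(
    messages=[{"role": "user", "content": "Can you review my contract?"}]
)
print(response["content"])
```

Because the rail intercepts matching user intents before the model answers, the same pattern generalizes to any topic a deployer wants a bot to deflect or handle with fixed language.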

I have no doubt missed many other countries and noteworthy research sources. Here is another helpful list I found. I will summarize the most important principles from these research sources and frame them as questions that board directors and C-level executives can ask to ensure their organization is ready to govern with responsible and trusted AI practices.

Stay tuned, as I expect this will take me a while to do well.

In the meantime, you should know that, on top of this information, many AI legislative frameworks are emerging across jurisdictions, and that politicians, governments and society at large are concerned about the flurry of AI innovation and its potential risks to jobs and society without sufficient guardrails in place. What is clear is that third-party AI audits will soon come into effect for all high-risk applications – a boon to the auditing firms.

As I teach AI ethics and law and serve as a board advisor to a few international organizations advancing AI education and market research, I love it when my readers want to speak about their challenges in advancing their AI strategy and governance systems to become more AI risk ready.

The best way to reach me is on my LinkedIn. You can also see our company’s AI Ethics Policy here, as every company needs one. Just ensure you have your own lawyer review it.

Research Sources:

Lawton, George. “Resources to Help Understand Ethical AI Governance.” August 2023.

