AI Governance

A framework for building responsible, ethical, fair, and transparent AI.

  • The use of AI and machine learning (ML) has gained momentum as organizations evaluate the potential applications of AI to enhance the customer experience, improve operational efficiencies, and automate business processes.
  • Growing applications of AI have reinforced concerns about ethical, fair, and responsible use of the technology that assists or replaces human decision making.

Our Advice

Critical Insight

  • Implementing AI systems requires careful management of the AI lifecycle and governance of the underlying data and machine learning models, to prevent unintended outcomes that harm not only an organization’s brand reputation but, more importantly, workers, individuals, and society.
  • When adopting AI, it is important to have a strong ethical and risk management framework surrounding its use.

Impact and Result

  • AI governance enables management, monitoring, and control of all AI activities within an organization.

AI Governance Research & Tools

1. AI Governance Deck – A framework for building responsible, ethical, fair, and transparent AI.

Create the foundation that enables management, monitoring, and control of all AI activities within the organization. The AI governance framework will allow you to define an AI risk management approach and a methodology for managing and monitoring AI/ML models in production.


Member Testimonials

After each Info-Tech experience, we ask our members to quantify the real-time savings, monetary impact, and project improvements our research helped them achieve. See our top member experiences for this blueprint and what our clients have to say.

10.0/10


Overall Impact

$389


Average $ Saved

3


Average Days Saved

Client | Experience | Impact | $ Saved | Days Saved
Benchmade | Guided Implementation | 10/10 | N/A | N/A
State Universities Retirement System Of Illinois | Guided Implementation | 10/10 | N/A | 3


AI Governance

A Framework for Building Responsible, Ethical, Fair, and Transparent AI

Are you ready for AI?

Business leaders must manage the associated risks as they scale their use of AI

In recent years, following technological breakthroughs and advances in the development of machine learning (ML) models and the management of large volumes of data, organizations have been scaling their use of artificial intelligence (AI) technologies.

The use of AI and ML has gained momentum as organizations evaluate the potential applications of AI to enhance the customer experience, improve operational efficiencies, and automate business processes.

Growing applications of AI have reinforced concerns about ethical, fair, and responsible use of the technology that assists or replaces human decision-making.

Implementing AI systems requires careful management of the AI lifecycle and governance of the underlying data and machine learning models, to prevent unintended outcomes that harm not only an organization’s brand reputation but also, more importantly, workers, individuals, and society. When adopting AI, it is important to have strong ethical and risk management frameworks surrounding its use.

“Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence.” (World Economic Forum)

Regulations and risk assessment tools

Governments around the world are developing AI assessment methodologies and legislation for AI. Here are a couple of examples:

  • Responsible use of artificial intelligence (AI) guiding principles (Canada):
    1. understand and measure the impact of using AI by developing and sharing tools and approaches
    2. be transparent about how and when we are using AI, starting with a clear user need and public benefit
    3. provide meaningful explanations about AI decision-making, while also offering opportunities to review results and challenge these decisions
    4. be as open as we can by sharing source code, training data, and other relevant information, all while protecting personal information, system integration, and national security and defense
    5. provide sufficient training so that government employees developing and using AI solutions have the responsible design, function, and implementation skills needed to make AI-based public services better
  • The Algorithmic Impact Assessment tool (Canada) is used to determine the impact level of an automated decision system. It defines 48 risk questions and 33 mitigation questions. Assessment scores consider factors such as system design, algorithm, decision type, impact, and data.
  • The National AI Initiative Act of 2020 (DIVISION E, SEC. 5001) (US) became law on January 1, 2021. This is a program across the entire Federal government to accelerate AI research and application.
  • Bill C-27, Artificial Intelligence and Data Act (AIDA) (Canada), when passed, would be the first law in Canada regulating the use of artificial intelligence systems.
  • The EU Artificial Intelligence Act (EU) assigns applications of AI to three risk categories: applications and systems that create an unacceptable risk, such as government-run social scoring; high-risk applications, such as a CV-scanning tool that ranks job applicants; and lastly, applications not explicitly listed as high-risk.
  • The FEAT Principles Assessment Methodology was created by the Monetary Authority of Singapore (MAS) in collaboration with 27 other industry partners for financial institutions to promote fairness, ethics, accountability, and transparency (FEAT) in the use of artificial intelligence and data analytics (AIDA).
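To show in principle how an impact-assessment score of the kind described above might be computed, here is a deliberately simplified, hypothetical Python sketch: sum the weighted risk answers, subtract a mitigation credit, and map the net score to an impact level. The questions, weights, and thresholds are invented for illustration and do not reproduce any actual assessment tool.

```python
# Hypothetical sketch of an algorithmic-impact-style score. Risk and
# mitigation answers are 0/1; weights and thresholds are illustrative
# assumptions, not values from any real assessment framework.

def impact_level(risk_answers, mitigation_answers, thresholds=(0.25, 0.5, 0.75)):
    """Map a normalized net-risk score to impact levels 1-4 (illustrative)."""
    raw = sum(risk_answers) / len(risk_answers)            # fraction of risk flags
    credit = 0.1 * (sum(mitigation_answers) / len(mitigation_answers))
    net = max(0.0, raw - credit)                           # mitigations reduce risk
    for level, cutoff in enumerate(thresholds, start=1):
        if net <= cutoff:
            return level
    return len(thresholds) + 1                             # highest impact level

# Example: mostly high-risk answers, few mitigations in place.
print(impact_level([1, 1, 1, 0], [0, 1]))
```

Real tools such as the Algorithmic Impact Assessment use far richer questionnaires, but the shape of the calculation — risk score, mitigation offset, level mapping — is the same.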

AI policies around the world

[Map: AI policies around the world, marked by circles of varying color and size; color indicates the number of AI policies per country (2019-2021).]
Source of data: OECD.AI (2021), powered by EC/OECD (2021), database of national AI policies, accessed on 7/09/2022, https://oecd.ai.

The need for AI governance

“To adopt AI, organizations will need to review and enhance their processes and governance frameworks to address new and evolving risks.” (Canadian RegTech Association, Safeguarding AI Use Through Human-Centric Design, 2020)

To ensure responsible, transparent, and ethical AI systems, organizations will need to review existing risk control frameworks and update them to include AI risk management and impact assessment frameworks and processes.

As ML and AI technologies are constantly evolving, the AI governance and AI risk management frameworks will need to evolve to ensure the appropriate safeguards and controls are in place.

This applies not only to machine learning models and AI systems custom-built by the organization’s data science and AI teams but also to AI-powered vendor tools and technologies. Vendors should be able to explain how AI is used in their products, how their models were trained, and what data was used to train them.

AI governance enables management, monitoring, and control of all AI activities within an organization.


Key concepts

Info-Tech Research Group defines the key terms used in this document as follows:

Machine learning systems learn from experience and without explicit instructions. They learn patterns from data, then analyze and make predictions based on past behavior and the patterns learned.

Artificial intelligence is a combination of technologies and can include machine learning. AI systems perform tasks that mimic human intelligence, such as learning from experience and problem solving. Most importantly, AI makes its own decisions without human intervention.

We use the definition of data ethics by Open Data Institute: “Data ethics is a branch of ethics that considers the impact of data practices on people, society and the environment. The purpose of data ethics is to guide the values and conduct of data practitioners in data collection, sharing and use.”

Algorithmic or machine bias refers to systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others. Algorithmic bias is not a technical problem. It’s a social and political problem, and in the context of implementing AI for business benefits, it’s a business problem.
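As a concrete illustration of how such bias can be measured, here is a minimal Python sketch of one common fairness check, the demographic parity difference: the gap in positive-outcome rates between groups defined by a protected attribute. The decision data and group labels below are hypothetical.

```python
# Minimal sketch of a demographic parity check. Decisions are 1 (positive
# outcome, e.g. approved) or 0; groups are hypothetical protected-attribute
# labels. A difference near 0 suggests similar treatment across groups.

def demographic_parity_difference(decisions, groups):
    """Absolute gap in positive-outcome rates between the groups."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    values = sorted(rates.values())
    return values[-1] - values[0]

# Example: group "a" is approved 3/4 of the time, group "b" only 1/4.
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(decisions, groups))  # 0.5
```

Demographic parity is only one of several fairness definitions; which metric is appropriate depends on the decision context.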

Download the Mitigate Machine Bias blueprint for a detailed discussion of bias, fairness, and transparency in AI systems.

Key concepts – explainable, transparent, and trustworthy

“Responsible AI is the practice of designing, building, and deploying AI in a manner that empowers people and businesses and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence” (CIFAR).

The AI system is considered trustworthy when people understand how the technology works and when we can assess that it’s safe and reliable. We must be able to trust the output of the system and understand how the system was designed, what data was used to train it, and how it was implemented.

Explainable AI, sometimes abbreviated as XAI, refers to the ability to explain how an AI model makes predictions, its anticipated impact, and its potential biases.

Transparency means communicating with and empowering users by sharing information internally and with external stakeholders, including beneficiaries and people impacted by the AI-powered product or service.
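To make the idea of explainability concrete, here is a toy Python sketch: for a simple linear model, each feature’s contribution to a prediction is just its weight times its value, which yields a directly human-readable explanation. The feature names and weights below are hypothetical; real XAI tooling handles far more complex models.

```python
# Toy explainability sketch for a linear model: a prediction decomposes
# exactly into per-feature contributions (weight * value). Names and
# weights are hypothetical.

weights = {"income": 0.6, "debt": -0.8, "tenure": 0.3}

def explain(features):
    """Return the model score and each feature's signed contribution."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

score, parts = explain({"income": 2.0, "debt": 1.5, "tenure": 1.0})
# income contributes ~+1.2, debt ~-1.2, tenure +0.3; score is ~0.3
```

For non-linear models, the same decomposition idea is approximated rather than exact, which is why explainability becomes a dedicated discipline.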

68% [of Canadians] are concerned they don’t understand the technology well enough to know the risks.

77% say they are concerned about the risks AI poses to society (TD, 2019)

AI Governance Framework

[Diagram: the seven components below surround the AI Governance Framework, with the qualities 'Accountable', 'Trustworthy', 'Responsible', 'Ethical', 'Fair', 'Explainable', and 'Transparent' placed between each pair.]

Monitoring
Monitoring compliance and risk of AI/ML systems/models in production

Tools & Technologies
Tools and technologies to support AI governance framework implementation

Model Governance
Ensures accountability and traceability for AI/ML models

Organization
Structure, roles, and responsibilities of the AI governance organization

Operating Model
How AI governance operates and works with other organizational structures to deliver value

Risk and Compliance
Alignment with corporate risk management and ensuring compliance with regulations and assessment frameworks

Policies/Procedures/Standards
Policies and procedures to support implementation of AI governance
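Monitoring of models in production often relies on drift metrics. Here is a minimal Python sketch using the Population Stability Index (PSI), one common drift measure; the binned score distributions and the 0.2 alert threshold are illustrative assumptions, not part of the framework.

```python
# Minimal drift-monitoring sketch using the Population Stability Index
# (PSI). Inputs are binned score distributions (fractions summing to 1):
# one from training time, one observed in production. Data is hypothetical.
import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """PSI between a baseline and a live distribution over the same bins."""
    total = 0.0
    for e, a in zip(expected_fracs, actual_fracs):
        e, a = max(e, eps), max(a, eps)  # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at training time
live = [0.40, 0.30, 0.20, 0.10]      # distribution observed in production
drift = psi(baseline, live)
# A common rule of thumb: PSI above ~0.2 signals significant drift.
print(round(drift, 3))
```

In practice such a check would run on a schedule against live scoring logs, feeding the compliance and risk reporting the framework calls for.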

About Info-Tech

Info-Tech Research Group is the world’s fastest-growing information technology research and advisory company, proudly serving over 30,000 IT professionals.

We produce unbiased and highly relevant research to help CIOs and IT leaders make strategic, timely, and well-informed decisions. We partner closely with IT teams to provide everything they need, from actionable tools to analyst guidance, ensuring they deliver measurable results for their organizations.


What Is a Blueprint?

A blueprint is designed to be a roadmap, containing a methodology and the tools and templates you need to solve your IT problems.

Each blueprint can be accompanied by a Guided Implementation that provides you access to our world-class analysts to help you get through the project.

Talk to an Analyst

Our analyst calls are focused on helping our members use the research we produce, and our experts will guide you to successful project completion.

Book an Analyst Call on This Topic

You can start as early as tomorrow morning. Our analysts will explain the process during your first call.

Get Advice From a Subject Matter Expert

Each call will focus on explaining the material and helping you to plan your project, interpret and analyze the results of each project step, and set the direction for your next project step.


Author

Irina Sedenko

Contributors

  • Ellie D. Norris, Associate Director, Merck