How responsible AI can prepare you for AI regulations

IBM Technology
6 Jun 2024 · 09:12

Summary

TL;DR: Christina Montgomery, IBM's Chief Privacy and Trust Officer, discusses the burgeoning field of AI ethics amidst increasing generative AI capabilities. She emphasizes the importance of guiding AI development with ethical principles to maximize benefits and minimize risks. Montgomery advocates for a regulatory approach that focuses on responsible AI application rather than restricting technology, highlighting the EU AI Act as a model for risk-based regulation. She stresses the necessity of an AI Ethics Board for organizations to ensure accountability and build trust in AI systems.

Takeaways

  • 🗣️ Christina Montgomery, IBM's Chief Privacy and Trust Officer, emphasizes the importance of AI ethics, highlighting the need for a global conversation on the responsible use of AI.
  • 🏗️ AI ethics are defined as moral principles guiding the development, deployment, and use of AI to maximize benefits while minimizing risks.
  • 🔄 AI is a 'force multiplier', enhancing individual capabilities, but also a 'risk multiplier', necessitating careful consideration of AI ethics at an institutional level.
  • 📉 Existing AI regulations are part of consumer protection and privacy laws, indicating that AI is not a liability shield and companies must be accountable for its use.
  • 🚫 The debate on AI regulation includes proposals for licensing regimes that could limit market access, potentially stifling innovation and centralizing control among a few companies.
  • 🌐 Montgomery advocates for a risk-based regulatory approach that focuses on the application of technology rather than restricting the core technology itself.
  • 🤝 IBM and Meta co-founded the AI Alliance to support an open, transparent, and inclusive approach to AI development, reflecting a commitment to diverse perspectives in AI ethics.
  • 💡 The EU AI Act is praised for its risk-based approach to AI regulation, distinguishing between low-risk applications and those that pose significant threats to human rights.
  • 💼 The Act includes requirements for transparency, human oversight, data quality, and fairness, underlining the importance of these aspects in AI ethics and compliance.
  • 🏛️ Montgomery suggests that organizations using AI at scale should establish an AI Ethics Board to foster open debate and ensure ethical decision-making and accountability.

Q & A

  • What was the context of Christina Montgomery's testimony before Congress in May 2023?

    -Christina Montgomery testified before Congress a few months after the ChatGPT moment, when generative AI was new to the public, and lawmakers and regulators were scrambling to understand its implications.

  • What role does Christina Montgomery hold at IBM?

    -Christina Montgomery is the Chief Privacy and Trust Officer at IBM and Co-Chair of IBM’s AI Ethics Board.

  • How does Christina define AI ethics?

    -AI ethics are the principles that guide the responsible development, deployment, and use of AI to optimize its beneficial impact while reducing risks and adverse outcomes.

  • What is the dual nature of AI according to the transcript?

    -AI is described as both a force multiplier and a consequence (or risk) multiplier: it amplifies what individuals can accomplish, but it also scales risks and adverse outcomes.

  • Why is it important to consider AI ethics at an institutional level?

    -Considering AI ethics at an institutional level ensures everyone operates from a shared set of principles with defined guardrails, which is crucial as AI scales in business for greater reach and impact.

  • What is the stance of Christina Montgomery on AI regulations?

    -Christina Montgomery advocates for a regulatory approach that focuses on the responsible application of technology rather than restricting core technology itself, emphasizing the importance of context in AI deployment.

  • What is the potential impact of an AI licensing regime according to the transcript?

    -An AI licensing regime could consolidate the market around a few companies, potentially stifling open innovation and giving an outsized influence to a select few entities.

  • Why did IBM and Meta cofound the AI Alliance?

    -IBM and Meta co-founded the AI Alliance, together with corporate partners, startups, and academic and research institutions, to support an open, transparent, and inclusive approach to AI development and a regulatory perspective that balances innovation with accountability.

  • What does the EU AI Act introduce that is significant for AI regulation?

    -The EU AI Act introduces a risk-based approach to regulate AI systems, with different levels of regulatory requirements depending on the risk posed by the AI application.

  • What are some of the requirements for AI systems under the EU AI Act?

    -Requirements under the EU AI Act include transparency, human oversight, data quality and fairness, and compliance with standards to prevent discrimination and ensure safety and security.

  • Why is an AI Ethics Board important for organizations using AI?

    -An AI Ethics Board is important for fostering open consideration and debate on AI decisions, building an ethics framework into corporate practices, and ensuring mechanisms for company accountability.

Outlines

00:00

🤖 AI Ethics and Regulation

Christina Montgomery, Chief Privacy and Trust Officer at IBM and Co-Chair of IBM’s AI Ethics Board, discusses the importance of AI ethics in the wake of generative AI's emergence. She emphasizes the need for a national and global debate on AI's responsible development and use, highlighting AI's potential as both a force multiplier and a risk multiplier. Montgomery argues for a regulatory approach that focuses on the responsible application of AI rather than restricting core technology, advocating for an open, transparent, and inclusive AI development process.

05:05

📜 The EU AI Act: A Model for Risk-Based Regulation

The EU AI Act is highlighted as a pioneering piece of legislation that introduces a risk-based approach to AI regulation. The Act distinguishes minimal-risk applications, such as video games and spam filters, from limited-risk applications like chatbots, from high-risk uses that are allowed but face strict compliance standards, and from prohibited practices, such as facial recognition databases built through untargeted scraping and social scoring systems. It outlines requirements for transparency, human oversight, data quality, and fairness. The Act also makes clear that AI systems cannot be used to discriminate, and it backs compliance with severe penalties. Montgomery suggests that organizations should establish AI Ethics Boards to foster a culture of trustworthy AI and maintain accountability.


Keywords

💡AI Ethics

AI ethics refers to a set of moral principles that guide the development, deployment, and use of AI systems. In the video, Christina Montgomery emphasizes that AI ethics ensure AI is used responsibly, minimizing risks and maximizing its benefits. These principles are crucial for ensuring that AI systems align with human values, maintain fairness, and do not cause harm.

💡Accountability

Accountability means holding individuals or organizations responsible for the actions and decisions they make regarding AI. The video stresses the importance of making AI creators and users accountable for its applications, as AI cannot be used as a 'shield' to escape liability. This ensures that companies are responsible for the consequences of their AI systems, especially when it comes to ethics and regulatory compliance.

💡Risk-based regulatory approach

A risk-based regulatory approach focuses on regulating AI based on the level of risk associated with its application rather than the technology itself. In the video, this concept is supported by the EU AI Act, which categorizes AI systems into different risk levels. Low-risk systems, like video games, are lightly regulated, while high-risk applications, like facial recognition, face stricter oversight. This method ensures regulations are proportionate to potential harms.
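To make the tiered structure concrete, here is a minimal, illustrative Python sketch of how an organization might triage its own AI use cases against EU-AI-Act-style risk tiers. The tier names and example obligations paraphrase the video's description; the function names, categories, and mappings are hypothetical, not the Act's legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers loosely modeled on the EU AI Act's categories."""
    MINIMAL = "minimal"        # e.g., AI-enabled video games, spam filters
    LIMITED = "limited"        # e.g., chatbots: light-touch requirements
    HIGH = "high"              # allowed, but strict compliance obligations
    PROHIBITED = "prohibited"  # e.g., social scoring, untargeted face scraping

# Hypothetical mapping from use case to tier, for internal triage only.
USE_CASE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "hiring_screening": RiskTier.HIGH,
    "social_scoring": RiskTier.PROHIBITED,
}

# Example obligations per tier, paraphrased from the video's summary of the Act.
OBLIGATIONS = {
    RiskTier.MINIMAL: [],
    RiskTier.LIMITED: ["disclose that users are interacting with AI"],
    RiskTier.HIGH: [
        "transparency about purpose, functionality, and limitations",
        "human oversight (e.g., human-in-the-loop)",
        "data quality, fairness, and provenance checks",
    ],
    RiskTier.PROHIBITED: ["do not deploy"],
}

def triage(use_case: str) -> list[str]:
    """Return the obligations to review for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # unknown? default to caution
    return OBLIGATIONS[tier]

print(triage("customer_chatbot"))
# ['disclose that users are interacting with AI']
```

The point of the sketch is the proportionality the video describes: the obligation list grows with the risk of the application, not with the sophistication of the underlying technology.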

💡Transparency

Transparency in AI involves providing clear and understandable information about the purpose, functionality, and limitations of AI systems. The video mentions that users must be informed about how AI systems work, including any biases or limitations. This fosters trust and allows users to make informed decisions about interacting with AI, a core principle of ethical AI.

💡Human-in-the-loop

Human-in-the-loop (HITL) refers to AI systems that require human oversight and intervention to ensure they remain aligned with human values. The video explains that HITL systems help mitigate risks by allowing humans to monitor and adjust AI decisions when necessary. This approach is crucial for maintaining control over AI, especially in high-risk scenarios.
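As a rough illustration of the human-in-the-loop pattern, the sketch below routes low-confidence model decisions to a human reviewer instead of applying them automatically. The threshold, names, and review queue are hypothetical; actual oversight requirements depend on the application's risk level.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    label: str         # the model's proposed outcome
    confidence: float  # model-reported confidence in [0, 1]

REVIEW_THRESHOLD = 0.9  # hypothetical cutoff; tune per application risk

def route(decision: Decision, review_queue: list) -> str:
    """Auto-apply confident decisions; escalate the rest to a human."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return f"auto-applied: {decision.label}"
    review_queue.append(decision)  # a person reviews, and can override
    return "escalated to human reviewer"

queue: list[Decision] = []
print(route(Decision("approve", 0.97), queue))  # auto-applied: approve
print(route(Decision("deny", 0.62), queue))     # escalated to human reviewer
```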

💡Open innovation

Open innovation involves encouraging diverse contributions to the development of AI technology rather than limiting innovation to a few large companies. In the video, Montgomery warns against AI licensing regimes that could stifle innovation by concentrating power in the hands of a few corporations. Open innovation ensures a broad range of voices and perspectives contribute to AI, fostering more inclusive and creative advancements.

💡Data governance

Data governance refers to the management and oversight of data to ensure its quality, fairness, and compliance with legal requirements. The video highlights the importance of understanding the origins of data used in AI models, ensuring it is free from bias, and respecting copyright laws. Strong data governance is a foundational element of ethical AI development.

💡Compliance

Compliance refers to adhering to regulations and standards when developing and deploying AI systems. In the video, Montgomery discusses how AI companies must meet specific legal requirements, like those in the EU AI Act, to avoid penalties. Compliance ensures that AI systems are developed in ways that respect ethical guidelines, safety standards, and human rights.
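For a sense of scale, the penalty ceiling the video cites (up to 35 million euros or 7% of a company's annual revenue, whichever is higher) reduces to one line of arithmetic; the revenue figure below is purely hypothetical.

```python
def max_fine_eur(annual_revenue_eur: float) -> float:
    """Upper bound on fines as described in the video: the greater of
    a fixed 35 million euros or 7% of annual revenue."""
    return max(35_000_000, 0.07 * annual_revenue_eur)

# Hypothetical company with 2 billion euros in annual revenue:
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000 -> the 7% term dominates
```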

💡AI licensing regime

An AI licensing regime is a regulatory framework that controls which companies can develop and use AI technologies. The video critiques this concept, warning that such regimes could limit innovation by concentrating power in a few companies. Instead, the video advocates for regulating the application of AI rather than restricting its development through licensing.

💡Trust

Trust in AI is the confidence that users have in the reliability, fairness, and transparency of AI systems. The video stresses that trust is central to IBM's brand and the success of AI. Building trust requires clear ethical standards, open communication, and mechanisms for accountability. Without trust, AI adoption may face significant resistance from the public and businesses alike.

Highlights

Testimony before Congress on generative AI's societal implications.

AI ethics is becoming the most important global conversation.

Ethics are moral principles guiding AI's responsible use.

AI is a force multiplier with both benefits and risks.

AI ethics should guide development to reduce adverse outcomes.

AI regulations are present within consumer protection and privacy laws.

AI is not a shield against liability for discriminatory practices.

Regulations are emerging, and AI ethics can help anticipate them.

Debate on regulating AI technology itself versus its application.

AI licensing could limit market participation and stifle innovation.

Support for regulating AI applications based on risk.

IBM and Meta co-founded the AI Alliance to promote ethical AI.

EU AI Act introduces risk-based regulation for AI systems.

The Act prohibits AI applications that pose unacceptable threats to human rights.

Requirements for transparency, human oversight, and data quality in AI.

AI systems must not discriminate and must comply with standards.

Non-compliance can result in significant fines.

Ethics goes beyond compliance to include corporate character and trust.

The importance of an AI Ethics Board for open consideration and debate.

Building a culture of trustworthy AI and holding the company accountable.

The future of ethical AI requires collective effort.

Transcripts

00:00

[Music]

In May of 2023 I was asked to testify before Congress. This was just a few months after the ChatGPT moment. Generative AI was new to the public. Lawmakers and regulators were scrambling to understand the implications. I didn't anticipate the attention this hearing would attract. I'd just spent three years building accountability for AI at IBM, trying to make sure that what's invented, used and sold was trustworthy. In a way, I took it for granted. But as I listened to the questions and other testimony that day, and heard calls for strict regulation to govern the behavior of AI companies, it clicked. Not everyone is ready for this. There would be a national debate, a global debate, and AI ethics was about to become the most important conversation of our time.

Welcome to AI Academy. My name is Christina Montgomery. I'm the Chief Privacy and Trust Officer at IBM and Co-Chair of IBM's AI Ethics Board.

There's a rich philosophical history around ethics, but I'm going to boil it down to this: ethics are a set of moral principles that guide decision-making. We all have instincts about what is right and wrong, but a consistent set of principles can help us work through complex decisions or novel scenarios. It seems like every day we hear something new that AI can do. So every day we have to revisit the question of what AI should do, and when and where and how we should use it.

AI ethics are the principles that guide the responsible development, deployment and use of AI, to optimize its beneficial impact while reducing risks and adverse outcomes. Like most technology, AI is a lever, a force multiplier allowing each individual to do a lot more than they could without a system, which is great. But the flip side is that AI is also a consequence multiplier, a risk multiplier. So as you scale AI in your business for greater reach and impact, you need to be thinking about AI ethics at an institutional level, so that everyone can operate from a shared set of principles with defined guardrails.

And AI regulations are already here, either in standalone legislation or as part of existing consumer protection and privacy laws, for example. AI is not a shield to liability. You can't just blame AI if your company's hiring decisions discriminate, for example. By taking account of AI ethics, you can get ahead of regulations, which is good, because more robust regulation is coming.

There are different regulatory philosophies competing right now, and these divergent views became apparent during my testimony last year. Some of the most visible players in the AI space are saying that we should regulate the fundamental technology of AI itself; that a licensing regime should be established to control what and how AI gets built, and by whom, effectively dictating who can participate in the AI marketplace. This approach could consolidate the market around a small handful of companies. And while that's a winning proposition for companies with the resources to comply, it's a losing proposition for everyone else. An AI licensing regime would be a serious blow to open innovation. And from an ethical perspective, you have to ask whether it's just or fair for a few companies to have such an outsized influence on people's daily lives. Again, AI is going to touch every aspect of business and society, so shouldn't it be built by the many and not the few? And shouldn't we hear not just from the loudest voices, but from many voices?

It's also just not very practical to regulate technology granularly in the face of rapid innovation. Before the ink is dry on a new piece of regulation, technologists will have rolled out many alternative approaches to achieve the same outcome. And it's the outcomes that really matter. That's why I support a regulatory approach based not on the restriction of core technology, but on the responsible application of technology. Regulate the use of technology, not the technology itself. Not all uses of AI carry the same level of risk, and because each AI application is unique, it's critical that regulation account for the context in which AI is deployed. We also believe that those who create and deploy AI should be accountable, not immune from liability. It's essential to find the right balance between innovation and accountability.

The support for this regulatory perspective is one of the reasons IBM and Meta cofounded the AI Alliance, with a group of corporate partners, startups, and academic and research institutions. It's why we joined the consortium to support the US AI Safety Institute at NIST. Whatever comes next for AI, it's going to be safer if it's open, transparent and inclusive. So you can have research universities; you can have regulators and independent third parties poking holes and testing. You can have an open community of experts from around the globe, different voices, different perspectives, all vetting the technology, instead of one company saying, "No, trust me, it's safe." And while the debate around these competing regulatory approaches is still very active, we now have a practical example of a risk-based regulatory approach that I think is likely to be a model for the rest of the world.

05:05

IBM has supported the EU AI Act for a few reasons. First, the law introduces a risk-based approach to regulating AI systems. Most generally available AI today, like AI-enabled video games or spam filters, is unregulated. Something like a chatbot is a limited-risk application and will have light-touch regulatory requirements. Some applications, like the creation of facial recognition databases through the untargeted scraping of facial images from the internet, or social scoring systems, pose a significant threat to human rights and are prohibited. And then you have activities and uses that pose some risk to human health, safety or fundamental rights, but are allowed. That's where some business activities will fall, and those uses will face high standards for compliance.

Some of the requirements would be things you would probably expect. For example, there'll be a requirement for transparency that will require users be provided with clear and understandable information about the system's purpose, functionality and intended use. This includes information about any biases or limitations that may affect the system's performance. There'll be requirements for human oversight, such as human-in-the-loop systems, to ensure that AI systems remain aligned with human values and expectations. And there'll be standards for data quality and fairness. Data governance and data provenance are crucial for AI ethics. And that means understanding where the data used to train a model came from; ensuring you have the right to use it; ensuring that the data isn't biased and that it respects copyright law. These are all issues addressed by the Act.

We talked earlier about AI not being a shield to liability. And the Act makes it clear these systems cannot be used to discriminate against people based on attributes like race, ethnicity, religion or sexual orientation, and it addresses things like safety and security as well. You have to be able to demonstrate compliance with these standards or face serious consequences. Fines can be up to 35 million euros or 7% of a company's annual revenue, whichever is higher. And in the same way that the General Data Protection Regulation was landmark legislation for data privacy and protection, the EU AI Act is landmark legislation for AI. And also like the GDPR, this EU law will be influential in serving as a model for other jurisdictions.

But there is more to ethics than compliance. There's your corporate character; there's good corporate citizenship; and there's trust. There's a saying that trust is earned in drops but lost in buckets. And it's absolutely true. Trust is central to our company's brand, and maybe the biggest part of my job is working to ensure that the technology IBM makes and uses, the things people interact with every day, are things they can trust.

It's one thing to have ethical principles, but they're meaningless without a mechanism for holding yourself accountable. I propose that any organization using AI at scale needs an AI Ethics Board or equivalent governing mechanism. I Co-Chair IBM's Board, and I can't tell you how important it is to make your AI decisions in an environment of open consideration and debate, with a diverse group of others who are viewing the business through the lens of ethics, and who bring different backgrounds, domain expertise and experiences into that debate. On our Board, for example, we have lawyers, policy professionals, communications professionals, HR professionals, researchers, sellers, product teams and more. And then through that Board you work to build an ethics framework into your corporate practices, instill a culture of trustworthy AI, and ensure you have mechanisms to hold your company accountable.

The specific use cases of AI in your business might be different from ours, but I bet that once you start defining your own principles and pillars, you'll find that we all have a lot in common. We all want to build strong, trusted brands. We all want to do the right thing. Because the future of ethical AI is something we all need to build together.

[Music]


Related Tags

AI Ethics, Regulation, IBM, Trust, Accountability, Data Privacy, Innovation, Technology, Governance, EU AI Act