AI4E V3 Module 4
Summary
TL;DR: This course module delves into AI's limitations, emphasizing the distinction between current 'narrow AI' and the futuristic 'general AI'. It discusses the importance of unbiased, accurate data in AI training to avoid perpetuating societal biases. The script addresses ethical concerns, the potential for AI misuse, and the significance of explainability and transparency in AI decision-making. It also highlights the role of AI governance frameworks in ensuring ethical AI deployment and the impact of AI on society and business.
Takeaways
- 🧠 AI has limitations: The script emphasizes that AI, particularly 'narrow AI,' is limited to the tasks it has been trained for and cannot think or act like humans autonomously.
- 🤖 Hollywood portrayal is unrealistic: AI in movies often shows AI as sentient beings, but in reality, we are far from creating AI that can independently think or act against humans.
- 🔮 AGI is a distant goal: Artificial General Intelligence (AGI), where AI has human-like intelligence, is not yet achievable with current technology.
- 🔍 AI is a pattern recognizer: AI's capabilities are best described as advanced pattern recognition, rather than human-like cognition.
- 🚫 Garbage in, garbage out: The quality of AI predictions is directly affected by the quality of the input data; bad data leads to inaccurate outcomes.
- 📈 Data can introduce bias: AI systems can perpetuate and even amplify existing biases if trained on biased data sets.
- 👁️ Vision systems can be fooled: AI-based computer vision systems can be misled by images they have not been trained on, showing their brittleness.
- 🔏 Ethical considerations in AI: AI itself is neutral, but it can be used to implement or amplify unethical practices, policies, or decisions.
- 🛑 The Trolley Problem in AI: AI does not have ethics; it's a tool that can be influenced by the ethical considerations of its creators and users.
- 🚫 AI gone wrong: The script provides examples of AI projects that failed due to issues like bias, racism, and misinformation.
- 🌐 Global AI governance frameworks: Various countries and organizations are developing frameworks to guide the ethical and responsible use of AI.
Q & A
What is the primary distinction between AI as depicted in movies and the current state of AI technology?
-The primary distinction is that movies often show AI as robots that can think for themselves and may even turn against humans, while in reality, AI today is known as 'narrow AI,' which is only effective at specific tasks it has been trained for and cannot think or act like humans.
What is 'narrow AI' and how does it differ from 'Artificial General Intelligence' (AGI)?
-Narrow AI refers to AI systems that are highly specialized and can only perform well in the specific tasks they have been trained for. AGI, on the other hand, is a theoretical form of AI that would possess intelligence comparable to humans, capable of understanding, learning, and applying knowledge across a wide range of tasks.
Why is it important to ensure the data used to train AI systems is unbiased?
-It is important because if the training data is biased, the AI system will learn and perpetuate those biases, leading to unfair and potentially harmful outcomes. For example, an AI system trained to recommend salaries based on biased historical data may continue to recommend lower salaries for women.
How can AI systems be fooled or manipulated by external factors?
-AI systems can be fooled by carefully crafted inputs designed to trick them, such as color stickers on a stop sign that can make a self-driving car misinterpret it as a different sign, or specific patterns worn by individuals to avoid detection in CCTV footage.
What is the 'garbage in, garbage out' principle in the context of AI?
-The 'garbage in, garbage out' principle means that if the data input into an AI system is of poor quality, incomplete, or incorrect, the AI's predictions and outputs will also be of poor quality and accuracy.
What is the main ethical concern regarding the deployment of AI systems?
-The main ethical concern is that AI systems can amplify and scale the improper implementation of policies or biases present in the data they are trained on, leading to unfair or harmful consequences for certain groups of people.
What are some examples of AI projects that have gone wrong due to ethical or bias issues?
-Examples include Microsoft's chatbot Tay, which learned to generate racist and bigoted comments from users, and AI systems that claimed to identify criminals based on facial features, which can be biased due to the source of the training data.
What is the 'Trolley Problem' in the context of AI and autonomous vehicles?
-The Trolley Problem is a thought experiment that poses a moral dilemma about choosing between two harmful outcomes. In the context of AI, it is used to discuss the ethical decisions autonomous vehicles might have to make, such as choosing between two groups of people to minimize casualties in an unavoidable accident.
What are the two guiding principles of the Model AI Governance Framework proposed by PDPC and IMDA?
-The two guiding principles are that decisions made by AI should be explainable, transparent, and fair, and that AI systems should be human-centric, focusing on the benefit to humanity before other purposes.
How does the AI Model Audit Framework help in ensuring the ethical and responsible use of AI?
-The AI Model Audit Framework provides a holistic view of what it takes to get an AI model to market safely and ethically, covering aspects such as internal governance, human involvement, decision making, operations management, and stakeholder interactions.
What role do quality and standards play in the adoption and implementation of AI technologies?
-Quality and standards set a bar for the industry to meet, ensuring that AI technologies are developed and implemented in a way that is efficient, secure, and reliable. They also facilitate trade and strengthen competitiveness by providing a benchmark for quality and performance.
Outlines
🤖 Understanding AI Limitations
This paragraph introduces the concept of AI limitations and the importance of ethical and responsible AI deployment. It clarifies that the portrayal of AI in media as self-thinking entities that turn against humans is far from reality. The current state of AI is described as 'narrow AI,' which is limited to specific tasks and cannot think like humans. The paragraph also discusses the challenges of AI with biased or incomplete data, emphasizing the need for accurate and unbiased data to ensure AI's effectiveness. An example of a Microsoft AI program misinterpreting a photo is given to illustrate the point that AI systems are only as good as the data they are trained on.
🔍 AI's Ethical Concerns and Bias
The second paragraph delves into the ethical concerns surrounding AI, particularly the issue of data bias. It explains how AI systems can perpetuate existing biases if trained on biased data, using the example of an AI system that would recommend lower salaries for women based on historical data. The paragraph also touches on the limitations of AI in computer vision, where the system's training data can lead to incorrect interpretations, as demonstrated by Microsoft's 'Caption Bot' mishap. It further explores the potential for AI to be manipulated or 'tricked,' highlighting the brittleness of AI systems and their vulnerability to adversarial attacks.
👥 The Social Impact of AI Misuse
This paragraph discusses the social impact of AI misuse, with examples of AI projects that went awry due to improper implementation or biased data. It mentions Microsoft's AI chatbot 'Tay,' which learned to generate offensive content from Twitter users, and an AI program that claimed to identify criminals based on facial features, raising ethical questions about the use of such technology. The paragraph also references issues with the Apple Card and Amazon's AI recruiting tool, both of which exhibited gender bias, underscoring the importance of unbiased data and ethical considerations in AI deployment.
🌐 AI Governance and the Role of Standards
The final paragraph focuses on the governance of AI and the role of quality and standards in ensuring ethical AI use. It introduces the AI governance framework proposed by PDPC and IMDA, emphasizing the principles of explainability, transparency, fairness, and human-centricity in AI decision-making. The paragraph also highlights the importance of quality and standards in the digital economy, particularly in emerging technologies like AI, virtual reality, and blockchain. It discusses Singapore's efforts to establish its own AI standards and best practices, based on the collective experience of more than 500 organizations, and the development of an AI Model Audit Framework to ensure the safe and ethical deployment of AI models.
Keywords
💡AI Limitations
💡Narrow AI
💡Artificial General Intelligence (AGI)
💡Garbage In, Garbage Out
💡Biased Data
💡Computer Vision
💡Brittle AI
💡Ethical AI
💡Trolley Problem
💡Deepfakes
💡AI Governance Framework
Highlights
AI has limitations and must be developed and deployed ethically and responsibly.
Current AI is 'narrow AI', effective only in specific, constrained tasks unlike human-like 'general AI'.
AI's performance is heavily reliant on the quality and bias of the training data it receives.
Garbage in, garbage out: poor data quality leads to inaccurate AI predictions.
Data bias in AI models can perpetuate existing societal biases if not addressed.
AI is a pattern recognizer and not capable of human-like thinking or understanding.
AI can be fooled with specific patterns or manipulated data, showing its brittleness.
AI ethics is not about the AI itself but about the implementation and amplification of existing policies.
The Trolley Problem illustrates the complexity of ethical decision-making in AI, without clear answers.
AI projects can go wrong due to lack of oversight or understanding of data biases.
Microsoft's chatbot Tay and other AI systems have shown the risks of unsupervised learning from biased data.
AI systems must be designed with explainability, transparency, and fairness in mind.
Human-centric AI systems should prioritize benefit to humanity over other purposes.
AI governance frameworks like the one by PDPC and IMDA provide guidelines for ethical AI use.
The EU's AI regulatory framework classifies AI systems based on risk levels to ensure safety and ethics.
Quality and standards are crucial for the ethical and effective use of AI and other emerging technologies.
Singapore is leading in developing AI standards and best practices based on extensive experience.
AI engineers should use frameworks like the AI Model Audit Framework for certifying AI projects.
Ethical and responsible use of AI is a broad topic that requires continuous exploration and understanding.
Transcripts
welcome back to the fourth module of this course
ai has limits and hence it is important that we understand what are some of these limitations
so that we can develop and deploy ai ethically and responsibly when we see ai on tv or in the movies
it is usually shown as robots who have learned to think for themselves and have turned against the
humans the matrix irobot and terminator movies are just a few to name is this realistic well
we are still very very far away from creating ai models which can actually think and act like
us humans ai today is what is known as narrow ai it can only be effective at the tasks that it
has been trained on and in a very constrained manner as shared earlier if an ai system has
not seen enough pictures of cats or dogs it would not be able to recognize them
similarly an ai system trained to recognize speech in english cannot work well with singlish whereas
we humans can operate in such fuzzy domains artificial general intelligence or agi
where the ai system has intelligence comparable to humans is very far away
i hope from the earlier modules especially module 2 how ai works you can now understand
why ai is just maths and a glorified pattern recognizer at best it cannot think like us humans
but when trained for very specific tasks like object detection it could do very
well and far surpass us humans it can do the tasks faster and more accurately hopefully
before we discuss responsible use of ai we have to understand why people are often concerned
by ai systems ai can't work well if bad data is given to it
a common saying in computing is garbage in garbage out sometimes data can be incomplete
and some parts of the data can be missing other times data is wrongly entered into a database
by human operators incorrect data will make the ai less accurate at making predictions
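To make the garbage in, garbage out point concrete, here is a minimal sketch (not from the course; the data and numbers are made up for illustration): we fit the same simple linear model twice, once on clean labels and once on labels where 20% of the entries have been wrongly keyed in, and compare how far each fitted model is from the true signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "clean" data: y depends linearly on x, plus small noise.
x = rng.uniform(0, 10, 200)
y_clean = 3.0 * x + 5.0 + rng.normal(0, 1.0, 200)

# "Garbage" version: 20% of labels were wrongly entered (e.g. typos).
y_dirty = y_clean.copy()
bad = rng.choice(200, 40, replace=False)
y_dirty[bad] = rng.uniform(0, 100, 40)   # random junk values

def fit_and_mse(x, y):
    # Least-squares fit of y = a*x + b, then mean squared error
    # measured against the true (clean) signal.
    a, b = np.polyfit(x, y, 1)
    return np.mean((a * x + b - y_clean) ** 2)

mse_clean = fit_and_mse(x, y_clean)
mse_dirty = fit_and_mse(x, y_dirty)
print(mse_clean, mse_dirty)  # the model fit on dirty data is noticeably worse
```

Same algorithm, same features; only the input quality changed, and the prediction error grows accordingly.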
data can also be very messy if the data is not neatly organized into proper rows and columns the
computer will not be able to learn from it finally and most importantly data can also be biased
recall we earlier said that the ai model built is a representation of the physical world
let's say a new tech savvy ceo joins the organization and wants to build an ai system to
recommend salaries for staff based on skills and remove the human biases however the organization
had always paid lower salaries to women with the same skills compared to men if the ai engineer
were to just take all the existing data to train the ai model to recommend the salaries
the ai model would continue to recommend lower salaries for women and would not have fixed the
issue this is because the underlying data already has the bias the ai model is not biased
it is the data that is biased so the data needs to be corrected before the ai model is trained
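The salary example above can be sketched in a few lines (a toy illustration with invented numbers, not a real payroll dataset): if the historical data pays women 10k less for identical skills, an ordinary least-squares model trained on that data simply learns and reproduces the gap.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Hypothetical historical data: identical skill distributions,
# but women (gender=1) were historically paid 10k less.
skill = rng.uniform(1, 10, n)
gender = rng.integers(0, 2, n)           # 0 = man, 1 = woman
salary = 40_000 + 5_000 * skill - 10_000 * gender + rng.normal(0, 1_000, n)

# Fit salary ~ intercept + skill + gender by least squares.
X = np.column_stack([np.ones(n), skill, gender])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)

# The model faithfully reproduces the historical gender gap:
print(round(coef[2]))   # close to -10000: it recommends ~10k less for women
```

The model itself is neutral maths; the bias lives in the training data, which is why the data must be corrected before training.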
let's look at this picture for most of us we will recognize that this is a man standing on the moon
however an ai program created by microsoft back in 2016 called caption bot when shown the picture
said i'm not really confident but i think it is a man standing on top of a field
why from the earlier part of this course you learned that such computer vision systems learn by being
shown lots of pictures of the object you want them to detect or classify in this case microsoft
used images from the internet those pictures you uploaded when you are playing with your kids out
in the field or just hanging out with your friends to train the model how many of us have gone to the
moon and took pictures on the moon probably not any one of us here so with lots of pictures of
people on earth in fields and very few pictures of men on the moon what the ai model learned was biased
towards fields on earth the main limitation of the ai-based computer vision system in this case
is the data it has to train on you can see that ai is unable to learn things outside of the data
it has been shown it cannot infer intelligently like us humans
ai models today are brittle and can be easily fooled by a savvy hacker
researchers have shown that with the right color stickers and knowing where to stick
them on a stop sign you can trick the car into thinking it is for example a speed limit sign
to the human eye the stop sign with the stickers is still a stop sign
similarly if you want to avoid detection by ai systems in cctv
you can use a pattern like that shown maybe wear a cap or jacket with those patterns and the ai
systems will not detect you as a person at all of course these are not just any
colorful patterns but carefully computer generated patterns often created by ai to trick other ai systems
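The brittleness described above can be shown with a deliberately simple stand-in (everything here is invented for illustration: a two-feature linear classifier plays the role of the vision model, and the perturbation is exaggerated so the effect is obvious): nudging each input feature slightly in the direction that most decreases the score, in the spirit of the fast gradient sign method, flips the classifier's decision.

```python
import numpy as np

# A tiny hand-built linear classifier standing in for a vision model:
# score > 0 -> "stop sign", score <= 0 -> "speed limit sign".
w = np.array([2.0, -1.0])
b = 0.0

def predict(x):
    return "stop sign" if x @ w + b > 0 else "speed limit sign"

x = np.array([0.3, 0.1])           # the clean input: classified as a stop sign
print(predict(x))                   # prints: stop sign

# Adversarial perturbation: move each feature against the sign of its
# weight, which is the direction that most decreases the score.
eps = 0.5
x_adv = x - eps * np.sign(w)
print(predict(x_adv))               # prints: speed limit sign

# To a human both inputs look similar, but the decision flips.
```

Real adversarial attacks on deep vision models work on the same principle, just in a much higher-dimensional input space, which is what makes stickers on a stop sign effective.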
how can ai be unethical when it is just maths ai is just maths and has no feelings
and hence cannot be unethical however ai is a powerful tool which can easily scale and amplify
the improper implementation of a business or government policy so whether you use ai
traditional programming methods humans or monkeys to build the system it does not really matter
what matters is is it the right thing to do how can we ensure that the data we have is unbiased
some of you may have been posed this question the trolley problem
various version of the trolley problem exists but basically you have a trolley
rolling down the track and you are controlling a switch that can divert the trolley to the right
where five persons will be killed or left where only one person
will be killed in the ai autonomous car scenario the problem is stated as whether the car should
turn to kill the baby or the grandma the trolley problem first proposed in 1967 is not about ai
it is a thought experiment in philosophy and psychology and there is no right or wrong answer
but what is clear is that with existing technology and barring any mechanical failure and reasonable
road conditions most autonomous vehicles would be able to see the baby and grandma 50 to 100 meters
away and stop in time and if there is an accident it would be a standard car accident investigation
and insurance would pay out ai like we have discussed is just a tool it is just maths
and yes it may fail because the ai engineer missed out on training for the situation where there is a
baby crawling on the road but it has no feelings and has nothing to do with an ethics discussion
it should be a discussion about better safety engineering and the insurance model
let's look at some cases of interesting ai projects gone wrong
in 2016 microsoft launched an ai chatbot on twitter tay was created to have casual
and playful conversations on twitter with youth between 18 to 24 years old
the conversations did not stay casual and playful for long as they started to learn
racism and bigotry from the users after some racist and pro hitler tweets tay had to be shut down
the second ai program claimed it could identify criminals by their faces
like the size and shape of the upper lip and the distance between one's eyes a side note
this profiling based on how a person looks is not new in fact the chinese and even the europeans in
the 13th century practiced face reading to predict a person's character and his life based on how
he looked now that you understand how such an ai system is trained what do you think is the problem
where do you think the researcher got his pictures of criminals probably from police mug
shots and where do you think he got pictures of non-criminals probably from crawling the internet
faces uploaded from parties and of friends in these two instances
what is the main difference do you think a person having his mug shot taken would be smiling
he probably would be very gloomy whereas when you upload a picture of you and your friends you would
often choose a happy picture so what did the ai learn yes a smile detector but more importantly
is it even ethical to deploy such a system to say that you are criminal based on how you look
other well-known ai deployments which had issues included apple card which launched with goldman
sachs as the underlying partner appeared to be biased against women and offered 10 to
20x less credit than the husband even when the wife had a better credit score
and amazon deployed an ai recruiting tool which was biased against women
hello everyone i'm temple a principal ai consultant at ai singapore
welcome to chapter one of ai for everyone multi-lingual version this is the first taj
singapore
today you should not trust everything you see and read on the internet
deep fakes are a class of ai techniques which use ai to replace people's faces with
someone else's not just in static images but also videos like you just saw here
computer generated faces can be very lifelike and unless you observe carefully you would
think it is a real person and i guess this was an example of an ai done badly
to help guide organizations make use of ai ethically pdpc and imda created and put forward
the model ai governance framework to the world economic forum back in 2020
it has two guiding principles the first is that decisions made by ai should be explainable
transparent and fair explainable and transparent decisions mean that we can trace
back and understand how the ai made its decisions for example why did the ai think that this person's
insurance claim should be rejected or why did the ai reject this person's loan application
fair decisions means the ai is unbiased and should not disadvantage any group
of people the second guiding principle is that ai systems should be human-centric
in that they should always focus on the benefit to humanity before other purposes
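For a linear model, explainability of the kind described above can be as simple as reporting each feature's contribution to the score. The sketch below is a hypothetical illustration (the loan features, weights, and threshold are all made up, not from any framework): the per-feature products show exactly why this applicant was rejected.

```python
import numpy as np

# Hypothetical linear loan-scoring model; weights are invented.
features = ["income", "debt_ratio", "late_payments"]
w = np.array([0.8, -1.5, -2.0])
b = 0.5

x = np.array([0.4, 0.6, 1.0])       # one applicant's (normalized) features

contributions = w * x                # per-feature contribution to the score
score = contributions.sum() + b
decision = "approve" if score > 0 else "reject"

print(decision)                      # prints: reject
for name, c in zip(features, contributions):
    # e.g. late_payments contributes -2.00, dominating the decision
    print(f"{name}: {c:+.2f}")
```

A rejected applicant can then be told which factor drove the decision, which is the kind of traceability the framework asks for; deep models need more elaborate techniques, but the goal is the same.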
the framework and its guidelines are long but the four core principles are
internal governance structures and measures determining the level of human involvement in ai
augmented decision making ai operations management stakeholder interactions and communications
i highly encourage all practitioners to review this framework
and to show how the ai model governance framework is used in practice ai singapore contributed four
ai projects we have done under the 100 experiments program to volume 2 of the compendium of use cases
again highly recommended to download and read these are real-world projects
completed here in singapore following the model ai governance framework
the eu has recently published its ai regulatory framework
i highly recommend those interested in ai ethics to download a copy and read
it basically classifies ai deployments into four categories of risk those that are minimal risk
that is the vast majority of systems these can be developed and used subject to existing legislation
and legal obligations ai systems that fall into the limited risk category
and those that fall into the high risk category where there would be an adverse impact on people's safety
or fundamental rights and unacceptable risk
where the use of ai contravene fundamental rights and should be banned
let's watch this video on quality and standards in today's competitive global economy singapore
has established itself as a trusted innovation hub with a strong reputation for quality and
reliability underpinning this is a robust quality and standards ecosystem that helps our industries
transform and innovate facilitates trade and hence strengthen singapore's competitiveness
digitalization is one of those key technologies that will enable local manufacturers
to participate fully in global supply chains so things like aerospace marine and offshore medical
technology it's an opportunity for singapore to become a world leader in standards for
these new technologies that will allow singapore companies to be a first mover going international
singapore is going to be more and more technologically driven
we have developed queue systems and traffic systems to help our whole country to be more efficient and
more productive quality and standards provide us a kind of system in place and by adopting all these standards
we know that our processes and services will be more efficient
so there's an increasing number of systems that are coming online it's important for partners to
understand and have the confidence that they are able to provide the level of security required
quality and standards set a bar for the industry to meet and ensure that when companies are able
to meet their quality and standard businesses that use them understand that the information
assets are well protected today we have numerous emerging technologies coming up actively explored
such as artificial intelligence virtual reality and definitely blockchain so blockchain technology
has a massive potential to transform numerous domains specifically financial payments
traceability in food logistics transparency accountability in supply chain quality and standards will
continue to play a pivotal role in supporting the implementation of these technologies going forward
we are at the dawn of a new world where digital technologies are impacting the way businesses
operate quality and standards plays a crucial role in supporting industry transformation
enabling innovation facilitating market access for our enterprises
and driving interoperability what awaits us is a vibrant digital economy where technology drives
new and emerging sectors and creative ideas become reality quality and standards will
continue to underpin our economic transformation and play a pivotal role in our future economy
so in singapore enterprise singapore is driving the adoption of standards and the
creation of standards by singapore companies ai singapore is leading our ai standards work the
ai technical committee or aitc looks at emerging standards for ai from a global body like iso and
vote on them more importantly singapore is also developing our own best practices and standards
in particular driven by our experience in engaging more than 500 organizations
keen to develop ai products and solutions and having worked on more than 70 of them
and brought more than 30 to the market over the last four years ai singapore has accumulated a
wealth of experience of what works and what does not in the aitc we have groups working
on a holistic view of what it takes to get an ai model to market safely and ethically
what we call the ai model audit framework the ai readiness index will look at how
and whether a business is ready for ai, model robustness will determine if a model is secure
robust and unbiased, ai model engineering will ensure that the ai model is built in a repeatable
consistent and sound manner and ai ml ops will ensure that the whole end-to-end ai pipeline is complete
the intent is for a trained and certified ai engineer to use this ai model audit framework
as a reference to certify and sign off on ai projects done by organizations ethical
and responsible use of ai is a very big topic and we can only briefly discuss this important topic
and point you to more resources to read up ai is limited by the data it has to train on
not just on how accurate the model can be but more importantly would the trained model be biased
because the training data itself could be biased
a number of countries including singapore have published ai ethics and governance framework and
in singapore the ai technical committee is working on several technical references and standards
and most importantly please remember the ai algorithm itself is neutral it is just maths it is
the business use case or government policies that need to be questioned is it necessary or ethical
ai is just a tool that unfortunately allows bad ethics to be amplified at scale in the next and
last module of this course we'll discuss how ai can impact you as an individual and also show
you a very simple tool to help understand where your organization is in terms of its ai maturity