AI4E V3 Module 4

AI Singapore
26 Jan 2022 · 19:39

Summary

TL;DR: This course module delves into AI's limitations, emphasizing the distinction between current 'narrow AI' and the futuristic 'general AI'. It discusses the importance of unbiased, accurate data in AI training to avoid perpetuating societal biases. The script addresses ethical concerns, the potential for AI misuse, and the significance of explainability and transparency in AI decision-making. It also highlights the role of AI governance frameworks in ensuring ethical AI deployment and the impact of AI on society and business.

Takeaways

  • 🧠 AI has limitations: The script emphasizes that AI, particularly 'narrow AI,' is limited to the tasks it has been trained for and cannot think or act like humans autonomously.
  • 🤖 Hollywood portrayal is unrealistic: AI in movies often shows AI as sentient beings, but in reality, we are far from creating AI that can independently think or act against humans.
  • 🔮 AGI is a distant goal: Artificial General Intelligence (AGI), where AI has human-like intelligence, is not yet achievable with current technology.
  • 🔍 AI is a pattern recognizer: AI's capabilities are best described as advanced pattern recognition, rather than human-like cognition.
  • 🚫 Garbage in, garbage out: The quality of AI predictions is directly affected by the quality of the input data; bad data leads to inaccurate outcomes.
  • 📈 Data can introduce bias: AI systems can perpetuate and even amplify existing biases if trained on biased data sets.
  • 👁️ Vision systems can be fooled: AI-based computer vision systems can be misled by images they have not been trained on, showing their brittleness.
  • 🔏 Ethical considerations in AI: AI itself is neutral, but it can be used to implement or amplify unethical practices, policies, or decisions.
  • 🛑 The Trolley Problem in AI: AI does not have ethics; it's a tool that can be influenced by the ethical considerations of its creators and users.
  • 🚫 AI gone wrong: The script provides examples of AI projects that failed due to issues like bias, racism, and misinformation.
  • 🌐 Global AI governance frameworks: Various countries and organizations are developing frameworks to guide the ethical and responsible use of AI.

Q & A

  • What is the primary distinction between AI as depicted in movies and the current state of AI technology?

    -The primary distinction is that movies often show AI as robots that can think for themselves and may even turn against humans, while in reality, AI today is known as 'narrow AI,' which is only effective at specific tasks it has been trained for and cannot think or act like humans.

  • What is 'narrow AI' and how does it differ from 'Artificial General Intelligence' (AGI)?

    -Narrow AI refers to AI systems that are highly specialized and can only perform well in the specific tasks they have been trained for. AGI, on the other hand, is a theoretical form of AI that would possess intelligence comparable to humans, capable of understanding, learning, and applying knowledge across a wide range of tasks.

  • Why is it important to ensure the data used to train AI systems is unbiased?

    -It is important because if the training data is biased, the AI system will learn and perpetuate those biases, leading to unfair and potentially harmful outcomes. For example, an AI system trained to recommend salaries based on biased historical data may continue to recommend lower salaries for women.

  • How can AI systems be fooled or manipulated by external factors?

    -AI systems can be fooled by carefully crafted inputs designed to trick them, such as color stickers on a stop sign that can make a self-driving car misinterpret it as a different sign, or specific patterns worn by individuals to avoid detection in CCTV footage.

  • What is the 'garbage in, garbage out' principle in the context of AI?

    -The 'garbage in, garbage out' principle means that if the data input into an AI system is of poor quality, incomplete, or incorrect, the AI's predictions and outputs will also be of poor quality and accuracy.

  • What is the main ethical concern regarding the deployment of AI systems?

    -The main ethical concern is that AI systems can amplify and scale the improper implementation of policies or biases present in the data they are trained on, leading to unfair or harmful consequences for certain groups of people.

  • What are some examples of AI projects that have gone wrong due to ethical or bias issues?

    -Examples include Microsoft's chatbot Tay, which learned to generate racist and bigoted comments from users, and AI systems that claimed to identify criminals based on facial features, which can be biased due to the source of the training data.

  • What is the 'Trolley Problem' in the context of AI and autonomous vehicles?

    -The Trolley Problem is a thought experiment that poses a moral dilemma about choosing between two harmful outcomes. In the context of AI, it is used to discuss the ethical decisions autonomous vehicles might have to make, such as choosing between two groups of people to minimize casualties in an unavoidable accident.

  • What are the two guiding principles of the Model AI Governance Framework proposed by PDPC and IMDA?

    -The two guiding principles are that decisions made by AI should be explainable, transparent, and fair, and that AI systems should be human-centric, focusing on the benefit to humanity before other purposes.

  • How does the AI Model Audit Framework help in ensuring the ethical and responsible use of AI?

    -The AI Model Audit Framework provides a holistic view of what it takes to get an AI model to market safely and ethically, covering aspects such as internal governance, human involvement, decision making, operations management, and stakeholder interactions.

  • What role do quality and standards play in the adoption and implementation of AI technologies?

    -Quality and standards set a bar for the industry to meet, ensuring that AI technologies are developed and implemented in a way that is efficient, secure, and reliable. They also facilitate trade and strengthen competitiveness by providing a benchmark for quality and performance.

Outlines

00:00

🤖 Understanding AI Limitations

This paragraph introduces the concept of AI limitations and the importance of ethical and responsible AI deployment. It clarifies that the portrayal of AI in media as self-thinking entities that turn against humans is far from reality. The current state of AI is described as 'narrow AI,' which is limited to specific tasks and cannot think like humans. The paragraph also discusses the challenges of AI with biased or incomplete data, emphasizing the need for accurate and unbiased data to ensure AI's effectiveness. An example of a Microsoft AI program misinterpreting a photo is given to illustrate the point that AI systems are only as good as the data they are trained on.

05:07

🔍 AI's Ethical Concerns and Bias

The second paragraph delves into the ethical concerns surrounding AI, particularly the issue of data bias. It explains how AI systems can perpetuate existing biases if trained on biased data, using the example of an AI system that would recommend lower salaries for women based on historical data. The paragraph also touches on the limitations of AI in computer vision, where the system's training data can lead to incorrect interpretations, as demonstrated by Microsoft's 'Caption Bot' mishap. It further explores the potential for AI to be manipulated or 'tricked,' highlighting the brittleness of AI systems and their vulnerability to adversarial attacks.

10:09

👥 The Social Impact of AI Misuse

This paragraph discusses the social impact of AI misuse, with examples of AI projects that went awry due to improper implementation or biased data. It mentions Microsoft's AI chatbot 'Tay,' which learned to generate offensive content from Twitter users, and an AI program that claimed to identify criminals based on facial features, raising ethical questions about the use of such technology. The paragraph also references issues with the Apple Card and Amazon's AI recruiting tool, both of which exhibited gender bias, underscoring the importance of unbiased data and ethical considerations in AI deployment.

15:10

🌐 AI Governance and the Role of Standards

The final paragraph focuses on the governance of AI and the role of quality and standards in ensuring ethical AI use. It introduces the AI governance framework proposed by PDPC and IMDA, emphasizing the principles of explainability, transparency, fairness, and human-centricity in AI decision-making. The paragraph also highlights the importance of quality and standards in the digital economy, particularly in emerging technologies like AI, virtual reality, and blockchain. It discusses Singapore's efforts to establish its own AI standards and best practices, based on the collective experience of more than 500 organizations, and the development of an AI Model Audit Framework to ensure the safe and ethical deployment of AI models.

Keywords

💡AI Limitations

AI Limitations refer to the constraints in the capabilities of artificial intelligence systems. In the video, it is explained that AI today is 'narrow AI,' meaning it can only perform specific tasks it has been trained for and cannot think or act like humans. This concept is central to the video's theme of understanding AI's scope and its responsible deployment.

💡Narrow AI

Narrow AI, also known as weak AI, is a term used to describe AI systems that are designed and trained to perform a particular task only. The video emphasizes that current AI systems are narrow AI, which are effective within a limited scope but lack the general intelligence of humans, contrasting with the fictional portrayal of AI in movies.

💡Artificial General Intelligence (AGI)

Artificial General Intelligence refers to a theoretical form of AI where the AI system possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level equal to that of a human. The video mentions AGI as a far-off goal, indicating that current AI systems are far from achieving such comprehensive cognitive abilities.

💡Garbage In, Garbage Out

This phrase is a common saying in computing that emphasizes the importance of input quality. If an AI system is fed poor-quality, incomplete, or biased data, it will produce poor-quality or biased outcomes. The video uses this concept to explain the impact of data quality on AI performance and the importance of using clean and unbiased data for training AI models.
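
The effect is easy to reproduce. Below is a minimal Python sketch (not from the course; it uses scikit-learn and synthetic data) that trains the same classifier twice, once on clean labels and once with roughly 30% of the training labels flipped at random, and shows the drop in test accuracy.

# A minimal sketch of "garbage in, garbage out": the same model trained on
# clean labels versus randomly corrupted labels. Synthetic data, for
# illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean labels
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Garbage" training labels: flip about 30% of them at random
noisy_y = y_train.copy()
flip = rng.random(len(noisy_y)) < 0.30
noisy_y[flip] = 1 - noisy_y[flip]
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy_y)

print("accuracy with clean labels:", accuracy_score(y_test, clean_model.predict(X_test)))
print("accuracy with noisy labels:", accuracy_score(y_test, noisy_model.predict(X_test)))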

💡Biased Data

Biased data refers to data that contains inherent prejudice or systematic errors, which can lead to unfair or skewed results when used to train AI models. The video provides an example of an AI system trained to recommend salaries that ended up perpetuating gender pay disparities due to biased historical data.
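
The salary example can be illustrated with a short, synthetic sketch (invented numbers, not the video's data): a regression fitted on historical salaries that underpay women by about $500 for the same skill level will recommend the same gap for new candidates.

# A synthetic sketch of biased training data being reproduced by the model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 1000
skill = rng.uniform(0, 10, n)               # skill score, 0 to 10
is_female = rng.integers(0, 2, n)           # 1 = female, 0 = male
# Historical pay: same skill, but women paid roughly $500 less (the bias)
salary = 3000 + 400 * skill - 500 * is_female + rng.normal(0, 200, n)

model = LinearRegression().fit(np.column_stack([skill, is_female]), salary)

# Recommended salaries for two equally skilled candidates
print("recommended (male, skill=7):  ", model.predict([[7, 0]])[0])
print("recommended (female, skill=7):", model.predict([[7, 1]])[0])
# The model recommends roughly $500 less for the woman: the gap comes from
# the data, not the algorithm, so it must be corrected before training.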

💡Computer Vision

Computer vision is a field of AI that enables computers to interpret and understand visual information from the world, such as images and videos. The video discusses the limitations of computer vision systems, such as Microsoft's 'Caption Bot', which misinterpreted a picture due to the lack of diverse training data.
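
A toy sketch (synthetic "features", not CaptionBot's actual model) of why this happens: a classifier can only choose among the classes it was trained on, so an out-of-distribution input such as a moon photo still receives one of the known labels.

# A toy illustration of out-of-distribution inputs: the classifier only knows
# two classes and must pick one of them, whatever it is shown.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy "image features" for the only two classes in the training data:
# grass fields and beaches
fields = rng.normal(loc=[5.0, 1.0], scale=0.5, size=(200, 2))
beaches = rng.normal(loc=[1.0, 5.0], scale=0.5, size=(200, 2))
X = np.vstack([fields, beaches])
y = np.array([0] * 200 + [1] * 200)         # 0 = field, 1 = beach

model = LogisticRegression().fit(X, y)

# A "moon surface" image unlike anything in the training set
moon = np.array([[12.0, 9.0]])
probs = model.predict_proba(moon)[0]
print("P(field) =", round(probs[0], 3), "P(beach) =", round(probs[1], 3))
# The model has to answer "field" or "beach"; it has no way to say "moon",
# which is essentially what happened with the moon-landing photo.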

💡Brittle AI

Brittle AI describes AI systems that are easily fooled or fail when faced with unexpected or unusual input. The video mentions how AI can be tricked by specific patterns or stickers, showing that despite their advanced capabilities, AI systems can be vulnerable to manipulation.
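
A minimal sketch of the idea behind such attacks, using a plain logistic-regression model rather than a real vision system: an FGSM-style nudge (Fast Gradient Sign Method, one well-known technique in this family and not the specific stickers shown in the video), a small change to each input feature in the direction of the model's gradient, is enough to flip the prediction.

# A minimal FGSM-style sketch on a toy model: a small per-feature nudge in the
# gradient direction flips the prediction. Synthetic data, illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a point near the decision boundary so a small nudge suffices for the demo
scores = model.decision_function(X)
i = int(np.argmin(np.abs(scores)))
x = X[i].copy()
original = model.predict([x])[0]

# For logistic regression the gradient of the score w.r.t. the input is just
# the weight vector, so the FGSM-style step is sign(w) times a small epsilon,
# pushed toward the opposite class.
w = model.coef_[0]
epsilon = 0.2
step = -np.sign(w) if original == 1 else np.sign(w)
x_adv = x + epsilon * step

print("original prediction:   ", original)
print("adversarial prediction:", model.predict([x_adv])[0])
print("each feature changed by at most", epsilon)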

💡Ethical AI

Ethical AI pertains to the development and deployment of AI systems in a manner that is morally and socially responsible. The video discusses the importance of considering ethics in AI, such as avoiding the amplification of biases or the misuse of AI in decision-making processes.

💡Trolley Problem

The Trolley Problem is a thought experiment in ethics that presents a dilemma where a decision must be made to cause harm to one group to save another. The video uses this problem to illustrate the complexity of ethical decision-making in AI, particularly in autonomous vehicles.

💡Deepfakes

Deepfakes are AI-generated media in which a person's likeness is swapped with another's in images or videos. The video mentions deepfakes as an example of AI being used unethically, creating realistic but false representations that can mislead or deceive.

💡AI Governance Framework

An AI Governance Framework is a set of guidelines and principles designed to ensure the ethical and responsible use of AI. The video refers to the model AI governance framework by PDPC and IMDA, which includes principles such as explainability, transparency, fairness, and human-centricity in AI systems.
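
As one illustration of the 'explainable and transparent' principle (synthetic data and hypothetical feature names, not part of the framework itself): for a linear loan-approval model, each feature's contribution to a single decision can be reported directly, so a reviewer can trace why an application was rejected.

# A sketch of per-decision explanation for a linear model: contribution of
# each feature is coefficient * feature value. Synthetic data, hypothetical
# feature names.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
features = ["income", "debt_ratio", "years_employed"]
X = rng.normal(size=(500, 3))
# Synthetic "approval" rule with noise: income and tenure help, debt hurts
y = (1.2 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2] + rng.normal(0, 0.5, 500)) > 0

model = LogisticRegression().fit(X, y)

applicant = np.array([-0.4, 1.8, 0.2])      # one rejected application
decision = model.predict([applicant])[0]
contributions = model.coef_[0] * applicant

print("approved:", bool(decision))
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"  {name:>15s} contributed {c:+.2f} to the approval score")
# A reviewer can trace the rejection mainly to the high debt_ratio, which is
# the kind of traceability the framework's first principle asks for.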

Highlights

AI has limitations and must be developed and deployed ethically and responsibly.

Current AI is 'narrow AI', effective only at specific, constrained tasks, unlike human-like 'general AI'.

AI's performance is heavily reliant on the quality and bias of the training data it receives.

Garbage in, garbage out: poor data quality leads to inaccurate AI predictions.

Data bias in AI models can perpetuate existing societal biases if not addressed.

AI is a pattern recognizer and not capable of human-like thinking or understanding.

AI can be fooled with specific patterns or manipulated data, showing its brittleness.

AI ethics is not about the AI itself but about the implementation and amplification of existing policies.

The Trolley Problem illustrates the complexity of ethical decision-making in AI, without clear answers.

AI projects can go wrong due to lack of oversight or understanding of data biases.

Microsoft's chatbot Tay and other AI systems have shown the risks of unsupervised learning from biased data.

AI systems must be designed with explainability, transparency, and fairness in mind.

Human-centric AI systems should prioritize benefit to humanity over other purposes.

AI governance frameworks like the one by PDPC and IMDA provide guidelines for ethical AI use.

The EU's AI regulatory framework classifies AI systems based on risk levels to ensure safety and ethics.

Quality and standards are crucial for the ethical and effective use of AI and other emerging technologies.

Singapore is leading in developing AI standards and best practices based on extensive experience.

AI engineers should use frameworks like the AI Model Audit Framework for certifying AI projects.

Ethical and responsible use of AI is a broad topic that requires continuous exploration and understanding.

Transcripts

play00:00

welcome back to the fourth module of this course  

play00:04

ai has limits and hence it is important that we  understand what are some of these limitations  

play00:09

so that we can develop and deploy ai ethically and  responsibly when we see ai on tv or in the movies  

play00:17

it is usually shown as robots who have learned to  think for themselves and have turned against the  

play00:23

humans the matrix irobot and terminator movies are just a few to name is this realistic well

play00:30

we are still very very far away from creating  ai models which can actually think and act like  

play00:35

us humans ai today is what is known as narrow ai  it can only be effective at the task that they  

play00:42

have been trained in and in a very constrained  manner as shared earlier if the ai systems have  

play00:49

not seen enough pictures of cats or dogs  it would not have been able to recognize it  

play00:55

similarly an ai system trained to recognize speech in english cannot work well with singlish whereas

play01:02

we humans can operate in such fuzzy domains  artificial general intelligence or agi  

play01:10

where the ai system has intelligence  comparable to humans is very far away  

play01:15

i hope from the earlier modules especially  module 2 how ai works you can now understand  

play01:20

why ai is just maths and a glorified pattern recognizer at best it cannot think like us humans

play01:30

but when trained for very specific tasks  like object detection it could do very  

play01:35

well and far surpass us humans it can do the  tasks faster and more accurately hopefully

play01:45

before we discuss responsible use of ai we have  to understand why people are often concerned  

play01:50

by ai systems ai can't work  well if bad data is given to it  

play01:56

a common saying in computing is garbage in  garbage out sometimes data can be incomplete  

play02:03

and some parts of the data can be missing other  times data is wrongly entered into a database  

play02:09

by human operators incorrect data will make  the ai less accurate at making predictions  

play02:17

data can also be very messy if the data is not  neatly organized into proper rows and columns the  

play02:23

computer will not be able to learn from it finally  and most importantly data can also be biased  

play02:32

recall we earlier said that the ai model build  is a representation of the physical world  

play02:38

let's say a new tech savvy ceo joins the  organization and wants to build an ai system to  

play02:45

recommend salaries for staff based on skills and  remove the human biases however the organization  

play02:52

had always paid lower salaries to women with the  same skills compared to men if the ai engineer  

play02:58

were to just take all the existing data to  train the ai model to recommend the salaries  

play03:04

the ai model would continue to recommend lower  salaries for women and would not have fixed the  

play03:09

issue this is because the underlying data already has the bias the ai model is not biased

play03:16

it is the data that is biased so the data needs  to be corrected before the ai model is trained

play03:28

let's look at this picture for most of us we will  recognize that this is a man standing on the moon  

play03:36

however an ai program created by microsoft back in 2016 called caption bot when shown the picture

play03:44

said i'm not really confident but i think it is a man standing on top of a dirt field

play03:49

why from the earlier part of this course you learn  that such computer vision systems learns by being  

play03:56

shown lots of pictures of the object you want  them to detect or classify in this case microsoft  

play04:02

used images from the internet those pictures you  uploaded when you are playing with your kids out  

play04:06

in the field or just hanging out with your friends  to train the model how many of us have gone to the  

play04:13

moon and took pictures on the moon probably not  any one of us here so with lots of pictures of  

play04:20

people on earth in fields and very few pictures of  men on the moon what the ai model learned was bias  

play04:26

towards fields on earth the main limitations of  the ai-based computer vision system in this case  

play04:34

is the data it has to train on you can see that ai  is unable to learn things outside of the data that  

play04:41

we have not shown it it cannot  infer intelligently like us humans

play04:48

ai models today are brittle and can  be easily fooled by a savvy hacker  

play04:54

researchers have shown that with the right  color stickers and knowing where to stick  

play04:58

them on a stop sign you can trick the car into  thinking it is for example a speed limit sign  

play05:07

to the human eye the stop sign with  the stickers is still a stop sign  

play05:11

similarly if you want to avoid  detection by ai systems in cctv  

play05:16

you can use a pattern like that shown maybe wear  a cap or jacket with those patterns and the ai  

play05:22

systems will not detect you as a person  at all of course these are not just any  

play05:27

colorful patterns but carefully computer generated  often by ai patterns to trick another ai systems  

play05:37

how can ai be unethical when it is just  maths ai is just maths and has no feelings  

play05:44

and hence cannot be unethical however ai is a  powerful tool which can easily scale and amplify  

play05:53

the improper implementation of a business  or government policy so whether you use ai  

play06:00

traditional programming methods humans or monkeys  to build the system it does not really matter

play06:09

what matters is is it the right thing to do how  can we ensure that the data we have is unbiased

play06:21

some of you may have been posed  this question the trolley problem

play06:27

various version of the trolley problem  exists but basically you have a trolley  

play06:31

rolling down the track and you are controlling a  switch that can divert the trolley to the right  

play06:38

where five persons will be killed  or left where only one person  

play06:42

will be killed in the ai autonomous car scenario  the problem is stated as whether the car should  

play06:48

turn to kill the baby or the grandma the trolley problem first proposed in 1967 is not about ai

play06:57

it is a thought experiment in philosophy and  psychology and there is no right or wrong answers  

play07:04

but what is clear is that with existing technology  and barring any mechanical failure and reasonable  

play07:12

road conditions most autonomous vehicles would be  able to see the baby and grandma 50 to 100 meters  

play07:20

away and stop in time and if there is an accident  it would be a standard car accident investigation  

play07:27

and insurance would be the payout ai like we  have discussed is just a tool it is just maths  

play07:36

and yes it may fail because the ai engineer miss  out on training for the situation where there is a  

play07:42

baby crawling on the road but it has no feelings  and has nothing to do with an ethics discussion  

play07:51

it should be a discussion about better safety engineering and the insurance model

play07:59

let's look at some cases of  interesting ai projects gone wrong  

play08:04

in 2016 microsoft launched an ai chatbot  on twitter tay was created to have casual  

play08:11

and playful conversations on twitter  with youth between 18 to 24 years old  

play08:17

the conversations did not stay casual and playful for long as tay started to learn

play08:21

racism and bigotry from the users after some racist and pro hitler tweets tay had to be shut down

play08:29

the second ai program claims it could  identify criminals by their faces  

play08:36

like the size and shape of the upper lip and  the distance between one's eyes a side note  

play08:42

this profiling based on how a person look is not  new in fact the chinese and even the europeans in  

play08:48

the 13th century practice face reading to predict  the person's character and his life based on how  

play08:54

he looked now that you understand how such an ai  system is trained what do you think is the problem

play09:03

where do you think the researcher got his  pictures of criminals probably from police mug  

play09:09

shots and where do you think he got pictures of  non-criminals probably from crawling the internet  

play09:16

faces uploaded from parties and of friends and in these two instances

play09:23

what is the main difference do you think a person having his mug shot taken would be smiling

play09:29

he probably would be very gloomy whereas when you  upload a picture of you and your friends you would  

play09:36

often choose a happy picture so what did the ai  learn yes a smile detector but more importantly  

play09:44

is it even ethical to deploy such a system to  say that you are criminal based on how you look  

play09:51

other well-known ai deployments which had issues  included apple card which launched with goldman  

play09:56

sachs as the underlying partner appeared to  be biased against women and offered 10 to  

play10:02

20x less credit than the husband even when the wife had a better credit score

play10:08

and amazon deployed an ai recruiting  tool which was biased against women

play10:18

hello everyone i'm temple a principal ai consultant at ai singapore

play10:22

welcome to chapter one of ai for everyone  multi-lingual version this is the first taj

play10:30

singapore

play10:39

today you should not trust everything  you see and read on the internet  

play10:43

deep fakes is a class of ai work streams  which uses ai to replace people's faces with  

play10:49

someone else not just in static images  but also videos like you just saw here  

play10:55

computer generated faces can be very lifelike and unless you observe carefully you would

play11:00

think it is a real person and i guess  this was an example of an ai done badly

play11:11

to help guide organizations make use of ai  ethically pdpc and imda created and put forward  

play11:18

the model ai governance framework to  the world economic forum back in 2020  

play11:24

it has two guiding principles the first is  that decisions made by ai should be explainable  

play11:30

transparent and fair explainable and transparent  decisions are made by the ai that we can trace  

play11:37

back and understand how the decisions were made  for example why did ai think that this person's  

play11:43

insurance claim should be rejected or why did  the ai reject this person's loan application  

play11:50

fair decisions means the ai is unbiased  and should not disadvantage any group  

play11:54

of people the second guiding principle is  that ai systems should be human-centric  

play12:00

in that they should always focus on the  benefit to humanity before other purposes

play12:08

the framework and its guidelines are  long but the four core principles are  

play12:15

internal governance structures and measures  determining the level of human involvement in ai  

play12:21

augmented decision making ai operations management  stakeholder interactions and communications  

play12:30

i highly encourage all practitioners  to review this framework

play12:36

and to show how the ai model governance framework  is used in practice ai singapore contributed four  

play12:44

ai projects we have done under the 100 experiments  program to volume 2 of the compendium of use cases  

play12:51

again highly recommended to download  and read these are real-world projects  

play12:56

completed here in singapore following  the model ai governance framework

play13:07

the eu have recently published  their ai regulatory framework  

play13:11

i highly recommend those interested in ai ethics to download a copy and read

play13:16

it basically classify ai deployments into four  categories of risks those that are minimal risk  

play13:23

that is the vast majority of systems these can be  developed and used subject to existing legislation  

play13:30

and legal obligations ai systems  that fall into the limited risk

play13:37

and those that fall into high risk where there  would be an adverse impact on people's safety  

play13:44

or fundamental rights and unacceptable risk

play13:51

where the use of ai contravene  fundamental rights and should be banned

play13:58

let's watch this video on quality and standards  in today's competitive global economy singapore  

play14:05

has established itself as a trusted innovation  hub with a strong reputation for quality and  

play14:10

reliability underpinning this is a robust quality  and standards ecosystem that helps our industries  

play14:18

transform and innovate facilitates trade and  hence strengthen singapore's competitiveness  

play14:26

digitalization is one of those key technologies that will enable local manufacturers

play14:31

to participate fully in global supply chains so  things like aerospace marine and offshore medical  

play14:36

technology it's an opportunity for singapore  to become a world leader in standards for  

play14:41

these new technologies that will allow singapore  companies to be a first mover going international  

play14:47

singapore is going to be more  and more technological driven  

play14:51

we have developed queue systems and traffic systems to help our whole country to be more efficient and

play14:56

more productive quality and standards provide us a kind of system in place and by adopting all these standards

play15:04

we know that our processes and services will be more efficient

play15:09

so there's an increasing number of systems that  are coming online it's important for partners to  

play15:15

understand and have the confidence that they are  able to provide the level of security required  

play15:21

quality and standards set a bar for the industry  to meet and ensures that when companies are able  

play15:27

to meet their quality and standard businesses  that use them understand that the information  

play15:31

assets are well protected today we have numerous  emerging technologies coming up actively explored  

play15:39

such as artificial intelligence virtual reality  and definitely blockchain so blockchain technology  

play15:46

has a massive potential to transform numerous  domains specifically financial payments  

play15:53

traceability in food logistics transparency accountability in supply chain quality and standards will

play15:59

continue to play a pivotal role in supporting the  implementation of these technologies going forward  

play16:06

we are at the dawn of a new world where digital  technologies are impacting the way businesses  

play16:12

operate quality and standards plays a crucial  role in supporting industry transformation  

play16:18

enabling innovation facilitating  market access for our enterprises  

play16:23

and driving interoperability what awaits us is a  vibrant digital economy where technology drives  

play16:30

new and emerging sectors and creative ideas  become reality quality and standards will  

play16:37

continue to underpin our economic transformation  and play a pivotal role in our future economy

play16:50

so in singapore enterprise singapore is  driving the adoption of standards and the  

play16:55

creation of standards by singapore companies ai singapore is leading our ai standards work the

play17:01

ai technical committee or aitc looks at emerging  standards for ai from a global body like iso and  

play17:08

vote on them more importantly singapore is also  developing our own best practices and standards  

play17:16

in particular driven by our experience in engaging more than 500 organizations

play17:21

keen to develop ai products and solutions  and having worked on more than 70 of them  

play17:26

and bringing more than 30 to the market over the last four years ai singapore has accumulated a

play17:30

wealth of experience of what works and what does not in the aitc we have groups working

play17:36

in a holistic view of what it takes to get  an ai model to market safely and ethically  

play17:42

what we call the ai model audit framework  the ai readiness index will look at how  

play17:49

and whether a business is ready for ai model  robustness to determine if a model is secure  

play17:55

robust and unbiased ai model engineering to  ensure that the ai model is built in a repeatable  

play18:02

consistent and sound manner and ai mlops to ensure that the whole end-to-end ai pipeline is complete

play18:10

the intent is for a trained and certified ai  engineer to use this ai model audit framework  

play18:15

as a reference to certify and sign off on  ai projects done by organizations ethical  

play18:22

and responsible use of ai is a very big topic and  we can only briefly discuss this important topic  

play18:28

and point you to more resources to read up  ai is limited by the data it has to train on  

play18:34

not just on how accurate the model can be but more importantly would the trained model be biased

play18:42

because the training data itself could be biased  

play18:45

a number of countries including singapore have  published ai ethics and governance framework and  

play18:51

in singapore the ai technical committee is working  on several technical references and standards  

play18:59

and most importantly please remember the ai  algorithm itself is neutral it is just maths it is  

play19:07

the business use case or government policies that  needs to be questioned is it necessary or ethical  

play19:15

ai is just a tool that unfortunately allows bad  ethics to be amplified at scale in the next and  

play19:24

last module of this course we'll discuss how ai  can impact you as an individual and also show  

play19:31

you a very simple tool to help understand where  your organization is in terms of its ai maturity

Related Tags
AI Ethics, Narrow AI, Data Bias, AGI, AI Governance, Machine Learning, Human-Centric AI, Trolley Problem, AI Misuse, Tech Standards, AI Singapore