Lec 04-Introduction to AI Algorithms
Summary
TL;DR: This video script offers an insightful overview of AI algorithms in marketing, focusing on their definition, complexity, and function. It delves into the three primary learning patterns: supervised, unsupervised, and reinforcement learning, each with its distinct set of algorithms like decision trees, random forests, and neural networks. The script also touches on the Algorithmic Bill of Rights, emphasizing the importance of awareness, accountability, and validation in AI algorithm usage.
Takeaways
- 🧠 AI algorithms are sets of instructions for computers to learn and operate independently, significantly more complex than general algorithms.
- 📚 The learning patterns for AI include supervised learning, unsupervised learning, and reinforcement learning, each with distinct training and functioning methods.
- 🌳 In supervised learning, algorithms are trained with labeled data to predict outcomes, akin to a student learning with a teacher's guidance.
- 🔍 Unsupervised learning uses unlabeled data to find patterns and relationships within the data, without any prior guidance.
- 🤖 Reinforcement learning algorithms learn from feedback in the form of rewards, improving actions based on the environment's responses.
- 📊 Common supervised learning algorithms include Decision Trees, Random Forests, Support Vector Machines (SVM), Naive Bayes, and Logistic Regression.
- 🔢 Linear Regression in supervised learning is used for continuous predictions, like sales forecasting, based on the relationship between variables.
- 👥 Unsupervised learning examples include K-means clustering for grouping data points and Gaussian Mixture Models for more complex cluster shapes.
- 🕵️♂️ K-Nearest Neighbors (KNN) is an algorithm used for both classification and anomaly detection, based on the proximity of data points.
- 🧬 Neural networks mimic the human brain's functions, with interconnected nodes organized in layers, capable of pattern recognition and complex tasks.
- 👮♂️ The Algorithmic Bill of Rights outlines principles for ethical AI use, emphasizing awareness, accountability, explanation, and validation to prevent biases and harm.
Q & A
What is an AI algorithm?
-An AI algorithm is a set of instructions for a computer to learn and operate on its own. It is complex programming that determines the steps and learning capabilities of an AI program.
How do AI algorithms work?
-AI algorithms work by taking in training data, which can be labeled or unlabeled, and using that information to learn and grow. They complete tasks using the training data as a basis.
What are the different types of learning patterns in AI algorithms?
-The different types of learning patterns in AI algorithms are supervised learning, unsupervised learning, and reinforcement learning.
What is supervised learning in AI?
-Supervised learning is a category of AI algorithms that work by taking in clearly labeled data during training to learn and predict outcomes for other data.
Can you explain the decision tree algorithm in supervised learning?
-A decision tree algorithm is a supervised learning method that classifies data into nodes, with a root node and leaf nodes, using attribute selection measures like entropy and information gain.
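The attribute selection measures mentioned above can be made concrete with a short sketch. The snippet below computes entropy and information gain for a tiny made-up "play outside?" data set (the labels and split are illustrative, not from the lecture):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(parent_labels, splits):
    """Entropy reduction achieved by splitting parent_labels into the given subsets."""
    total = len(parent_labels)
    weighted = sum(len(s) / total * entropy(s) for s in splits)
    return entropy(parent_labels) - weighted

# Toy data: will we play outside? Candidate split on the attribute "windy".
labels = ["yes", "yes", "no", "no"]           # labels at the parent node
windy_split = [["yes", "yes"], ["no", "no"]]  # windy = false / windy = true
print(information_gain(labels, windy_split))  # a perfect split gains 1.0 bit
```

A decision tree builder would compute this gain for every candidate attribute and split on the one with the highest value.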
What is the random forest algorithm and how does it differ from a single decision tree?
-The random forest algorithm is a collection of multiple decision trees that are used to gain more accurate results. It differs from a single decision tree by adding randomness to the model and considering the majority vote or average of multiple trees for the final output.
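The vote-combining step is simple to sketch. Assuming each tree has already produced a prediction (the votes below are hypothetical), a forest aggregates them by majority vote for classification or by averaging for regression:

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Classification forest: final prediction is the majority vote across trees."""
    return Counter(tree_predictions).most_common(1)[0][0]

def forest_regress(tree_outputs):
    """Regression forest: final prediction is the average of per-tree outputs."""
    return sum(tree_outputs) / len(tree_outputs)

# Three trees vote on the same sample; the majority class wins.
votes = ["buy", "buy", "skip"]
print(forest_predict(votes))            # "buy"
print(forest_regress([1.0, 2.0, 3.0]))  # 2.0
```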
How does the support vector machine (SVM) algorithm work?
-The support vector machine algorithm works by plotting data in an N-dimensional space and finding the hyperplane that best separates the classes. It aims to maximize the margin between the nearest points of different classes.
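The margin idea can be checked numerically. The sketch below takes a fixed hyperplane w·x + b = 0 (the weights and points are made up for illustration; a real SVM would learn them) and computes the margin as the smallest distance from any point to the plane:

```python
import math

def signed_distance(w, b, x):
    """Signed distance from point x to the hyperplane w.x + b = 0."""
    dot = sum(wi * xi for wi, xi in zip(w, x))
    return (dot + b) / math.sqrt(sum(wi * wi for wi in w))

def margin(w, b, points):
    """Margin = smallest absolute distance from any point to the hyperplane."""
    return min(abs(signed_distance(w, b, x)) for x in points)

# Hyperplane x1 + x2 - 3 = 0 separating two toy groups of points
w, b = [1.0, 1.0], -3.0
pts = [(0.0, 0.0), (1.0, 0.0), (3.0, 3.0), (4.0, 2.0)]
print(margin(w, b, pts))  # the nearest points on each side set the margin
```

Training an SVM amounts to searching over (w, b) for the hyperplane that maximizes this quantity while keeping the classes on opposite sides.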
What is the role of the Naive Bayes algorithm in AI?
-The Naive Bayes algorithm is a classification algorithm that assumes the presence of a feature is unrelated to the presence of other features in the same class. It is used for making probabilistic predictions based on the likelihood of features.
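At its core this is just Bayes' theorem, P(H|E) = P(E|H)·P(H)/P(E). A minimal worked example, with hypothetical spam-filter probabilities chosen purely for illustration:

```python
def bayes_posterior(p_e_given_h, p_h, p_e):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return p_e_given_h * p_h / p_e

# Made-up numbers: 90% of spam contains "free", 40% of all mail is spam,
# and 50% of all mail contains "free".
p_spam_given_free = bayes_posterior(p_e_given_h=0.9, p_h=0.4, p_e=0.5)
print(p_spam_given_free)  # ~0.72: seeing "free" raises the spam probability
```

A Naive Bayes classifier multiplies such per-feature likelihoods together, which is valid only under the "naive" independence assumption described above.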
What is the purpose of linear regression in AI?
-Linear regression in AI is used for regression modeling to discover relationships between data points and make predictions or forecasts. It works by plotting data points and finding the best fit line that represents the relationship between variables.
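The best-fit line has a closed-form least-squares solution. This sketch fits slope and intercept from raw points and uses them for a toy sales forecast (the ad-spend/sales numbers are invented for illustration):

```python
def fit_line(xs, ys):
    """Ordinary least squares: returns (slope, intercept) of the best-fit line."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Hypothetical monthly ad spend (x) vs. sales (y); forecast sales at x = 6
xs, ys = [1, 2, 3, 4, 5], [2, 4, 6, 8, 10]
m, c = fit_line(xs, ys)
print(m * 6 + c)  # 12.0 on this perfectly linear toy data
```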
How does logistic regression differ from linear regression?
-Logistic regression differs from linear regression in that it estimates a binary outcome (0 or 1) rather than a continuous value. It is used when the dependent variable is categorical, such as in spam filtering or predicting the occurrence of an event.
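The S-shaped curve is the logistic (sigmoid) function. The sketch below squashes a linear score into a probability and applies the usual 0.5 threshold; the weight and bias are hypothetical stand-ins for fitted coefficients:

```python
import math

def sigmoid(z):
    """Logistic function: maps any real input to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def predict_spam(x, weight, bias):
    """Probability that a mail with feature score x is spam, thresholded at 0.5."""
    p = sigmoid(weight * x + bias)
    return p, 1 if p >= 0.5 else 0

# Hypothetical fitted coefficients for a one-feature spam filter
p, label = predict_spam(x=4.0, weight=1.5, bias=-3.0)
print(p, label)  # high probability, so the mail is labeled 1 (spam)
```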
What is unsupervised learning and how does it differ from supervised learning?
-Unsupervised learning is a type of AI algorithm that is given unlabeled data and creates models to find patterns or relationships within the data. It differs from supervised learning in that it does not use labeled data and instead focuses on discovering inherent structures in the data.
What is the K-means clustering algorithm and how does it work?
-The K-means clustering algorithm is an unsupervised learning method that partitions data into K predefined clusters. It works by iteratively assigning data points to the nearest centroid and recalculating the centroids from the assigned clusters until it converges on the best clustering.
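The assign-then-recompute loop fits in a few lines. Here is a deliberately tiny 1-D sketch (toy points and starting centroids chosen for illustration) showing the iteration converging on two obvious groups:

```python
def kmeans(points, centroids, iterations=10):
    """Tiny 1-D K-means: assign points to the nearest centroid, then move centroids."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its assigned points
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups; K = 2 with deliberately poor starting centroids
pts = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
centers, groups = kmeans(pts, centroids=[0.0, 5.0])
print(centers)  # converges to [2.0, 11.0]
```

Real implementations add a convergence check and random restarts, but the two alternating steps are exactly the ones described in the answer above.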
What is the role of the Gaussian Mixture Model (GMM) in AI?
-The Gaussian Mixture Model is used in unsupervised learning for clustering data into groups. It is more versatile than K-means as it allows for clusters of various shapes, not just circular, and uses a probabilistic approach rather than a distance-based one.
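The probabilistic difference from K-means can be shown with the "responsibility" computation: instead of a hard nearest-centroid assignment, each point gets a probability of belonging to each Gaussian component. The two components below are made up for illustration:

```python
import math

def gaussian_pdf(x, mean, std):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

def responsibilities(x, components):
    """Soft cluster assignment: P(component | x) for each (weight, mean, std)."""
    densities = [w * gaussian_pdf(x, m, s) for w, m, s in components]
    total = sum(densities)
    return [d / total for d in densities]

# Two equally weighted 1-D clusters; the point 4.0 sits much closer to the second
comps = [(0.5, 0.0, 1.0), (0.5, 5.0, 1.0)]
print(responsibilities(4.0, comps))  # almost all probability mass on component 2
```

A full GMM fit alternates this soft-assignment step with re-estimating each component's weight, mean, and covariance (the EM algorithm); allowing full covariances is what lets clusters be oblong rather than circular.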
What is the K-Nearest Neighbors (KNN) algorithm and its applications?
-The K-Nearest Neighbors (KNN) algorithm is a simple AI algorithm that classifies new data points based on their similarity to existing data points. It can be used for both supervised and unsupervised learning, with applications in classification, regression, and anomaly detection.
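The "lazy" classification step is just a distance sort plus a vote. A minimal sketch with made-up 2-D training points:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify query by majority vote among its k nearest labeled neighbors."""
    by_distance = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = [label for _, label in by_distance[:k]]
    return Counter(votes).most_common(1)[0][0]

# Labeled training points: ((x, y), class) — no training step beyond storing them
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_classify(train, query=(2, 2)))  # "A" — its 3 nearest neighbors are all A
```

For anomaly detection the same distances are reused differently: a point whose k nearest neighbors are all far away is flagged as not belonging.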
How do neural networks function in AI?
-Neural networks function by mimicking the human brain, consisting of interconnected nodes organized into layers. They process information by adjusting connection strengths (weights) during training to recognize patterns and make predictions.
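A single forward pass through a tiny network makes the layer structure concrete. The weights below are fixed, hypothetical values (training would adjust them via backpropagation, which is omitted here):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, hidden_weights, output_weights):
    """One forward pass: input layer -> hidden layer -> single output node."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(ws, x))) for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Hypothetical fixed weights: 2 inputs, 2 hidden nodes, 1 output (biases omitted)
hw = [[0.5, -0.5], [0.3, 0.8]]
ow = [1.0, -1.0]
print(forward([1.0, 2.0], hw, ow))  # a probability-like value between 0 and 1
```

Each hidden node computes a weighted sum of the inputs and applies a nonlinearity; stacking such layers is what lets the network represent complex patterns.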
What is reinforcement learning and how does it differ from other types of learning?
-Reinforcement learning is a type of AI algorithm where an agent learns by taking actions in an environment and receiving feedback in the form of rewards. It differs from other types of learning as it focuses on learning from interactions and consequences rather than from labeled data.
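The agent/environment/reward cycle can be sketched with tabular Q-learning (one standard value-based method, used here as an illustration; the lecture does not name a specific algorithm). The environment is a made-up 3-state corridor with a reward at the right end:

```python
import random

def train_q(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Tabular Q-learning on a 3-state corridor; reward 1 for reaching state 2."""
    q = {(s, a): 0.0 for s in range(3) for a in (-1, 1)}  # actions: step left/right
    for _ in range(episodes):
        state = 0
        while state != 2:  # episode ends (termination signal) at the goal state
            # Epsilon-greedy: mostly exploit the best known action, sometimes explore
            action = random.choice((-1, 1)) if random.random() < epsilon \
                     else max((-1, 1), key=lambda a: q[(state, a)])
            nxt = min(max(state + action, 0), 2)
            reward = 1.0 if nxt == 2 else 0.0  # environment's reward signal
            best_next = 0.0 if nxt == 2 else max(q[(nxt, a)] for a in (-1, 1))
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

random.seed(0)
q = train_q()
# The learned values should prefer stepping right in every non-terminal state
print(q[(0, 1)] > q[(0, -1)], q[(1, 1)] > q[(1, -1)])
```

The long-term "value" from the answer above is exactly what the Q-table accumulates: the discounted future reward of each action, not just its immediate payoff.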
What are the key components of the Algorithmic Bill of Rights?
-The Algorithmic Bill of Rights includes principles such as awareness, access and redress, accountability, explanation, data provenance, auditability, and validation and testing. These principles aim to guide the ethical use of algorithms and ensure fairness and transparency.
Outlines
🧠 Introduction to AI Algorithms
This paragraph introduces the concept of AI algorithms in the context of a marketing course. It explains what AI algorithms are, emphasizing their complexity and importance in programming computers to learn and operate autonomously. The paragraph outlines the necessity of training data for AI algorithms and touches on different learning patterns such as supervised, unsupervised, and reinforcement learning. It also briefly mentions the Algorithmic Bill of Rights, setting the stage for a deeper dive into various AI algorithms.
🌲 Supervised Learning Algorithms
The second paragraph delves into supervised learning algorithms, which rely on labeled data for training. It describes how these algorithms use the data to predict outcomes, likening the process to a student learning with a teacher's guidance. The paragraph highlights decision trees, random forests, support vector machines, and Naive Bayes classifiers, explaining their structures and functions. It also touches on linear regression and logistic regression, detailing how they are used for modeling relationships and estimating binary outcomes.
🔍 Unsupervised Learning and K-Means Clustering
This paragraph focuses on unsupervised learning algorithms, which work with unlabeled data to discover patterns and relationships. It introduces K-means clustering, explaining how it assigns data points to clusters based on proximity to centroids. The paragraph outlines the steps of the K-means algorithm and contrasts it with Gaussian Mixture Models, which allow for more versatile cluster shapes. It also mentions the use of unsupervised learning for applications like sales forecasting and customer churn analysis.
🤖 K-Nearest Neighbors and Neural Networks
The fourth paragraph discusses the K-nearest neighbors (KNN) algorithm, a non-parametric method used for both classification and regression. It describes how KNN classifies new data points based on proximity to existing data. The paragraph then transitions to neural networks, which are complex AI algorithms that mimic the human brain. It explains the architecture of neural networks, including input, hidden, and output layers, and how they process information to recognize patterns and make predictions.
🛠️ Reinforcement Learning and Algorithmic Rights
This paragraph explores reinforcement learning algorithms, which learn from feedback in the form of rewards. It describes the components of reinforcement learning, including the agent and the environment, and how they interact through a cycle of actions and rewards. The paragraph also covers different approaches to reinforcement learning, such as policy-based and value-based methods. It concludes with an introduction to the Algorithmic Bill of Rights, a set of principles aimed at ensuring fairness and accountability in algorithmic decision-making.
📜 Algorithmic Bill of Rights and Conclusion
The final paragraph provides a detailed look at the Algorithmic Bill of Rights, a set of guiding principles for ethical algorithm use. It discusses seven key areas: awareness, access and redress, accountability, explanation, data provenance, auditability, and validation. The paragraph emphasizes the importance of these principles in mitigating biases and ensuring transparency in algorithmic decisions. It concludes the module by summarizing the discussion on AI algorithms, their applications in various learning techniques, and the importance of adhering to the Algorithmic Bill of Rights.
Mindmap
Keywords
💡AI Algorithms
💡Supervised Learning
💡Unsupervised Learning
💡Reinforcement Learning
💡Decision Trees
💡Random Forest
💡Support Vector Machines (SVM)
💡Naive Bayes
💡Linear Regression
💡Logistic Regression
💡K-means Clustering
💡Gaussian Mixture Models (GMM)
💡K-Nearest Neighbors (KNN)
💡Neural Networks
💡Algorithmic Bill of Rights
Highlights
Introduction to AI algorithms in marketing, discussing various learning patterns: supervised, unsupervised, and reinforcement learning.
Definition of an AI algorithm as a complex set of instructions for a computer to learn and operate on its own.
Importance of training data acquisition and labeling in distinguishing different AI algorithms.
Overview of supervised learning algorithms, emphasizing their reliance on clearly labeled data for training.
Explanation of decision trees, their structure, and how they use attribute selection measures for classification.
Introduction to Random Forest, a collection of decision trees that improve accuracy through diversity.
Support Vector Machines (SVM) for classification or regression by finding the optimal hyperplane for data separation.
Naive Bayes classifier, based on Bayes' theorem, and its use in large datasets with various classes.
Linear regression for predictive analysis and forecasting, emphasizing its simplicity and popularity.
Logistic regression for binary outcomes, using a logistic function to estimate probabilities.
Unsupervised learning algorithms and their use in creating models from unlabeled data to find patterns.
K-means clustering for dividing datasets into clusters based on data point proximity to centroids.
Gaussian Mixture Models for clustering with more versatile shapes than K-means, handling elongated, oblong clusters.
K-Nearest Neighbors (KNN) algorithm for both supervised and unsupervised learning, focusing on data proximity.
Neural networks as complex AI algorithms mimicking the human brain, with applications still being discovered.
Reinforcement learning algorithms that learn from feedback in the form of rewards, involving an agent and an environment.
Algorithmic Bill of Rights, a set of guiding principles for ethical algorithm design and use.
Seven general areas of the Algorithmic Bill of Rights, including awareness, access, accountability, and auditability.
Conclusion summarizing the module's coverage of AI algorithms, learning techniques, and ethical considerations.
Transcripts
[Music]
welcome to this uh nptl online
certification course on artificial
intelligence in marketing and now we are
discussing module 4 so we are talking
about introduction to AI algorithms and
we are in chapter 1 and module 4 so this
is what we are talking about that is
Introduction to artificial intelligence
algorithms so to introduce the module we
will talk about what are AI algorithms
and how they
work what are the various types of
commonly used AI algorithms under the
different kind of learning patterns and
we have seen that the the various types
of learning patterns are supervised
learning unsupervised learning and
reinforcement learning and then we will
talk briefly on the algorithmic Bill of
Rights so now to start with what is an
AI algorithm the definition of algorithm
is a set of instructions to be followed
in calculation or other operations so it
is a pure simple set of
instructions this applies to both
mathematics and computer science so thus
at the essential level an AI algorithm
is the programming that tells the
computer how to learn to operate on its
own and AI algorithm is much more
complex than what most people learn
about in algebra of course a complex set
of rules Drive AI program determine
their steps and their ability to learn
without an algorithm AI would not
exist while a general algorithm can be
simple AI algorithms are by Nature more
complex AI algorithms work by taking in
training data that helps the algorithm
to learn how the data is acquired and is
labeled marks the key difference between
different types of AI algorithm so keep
in mind that how that data is acquired
and is
labeled that gives the clear key
difference between different types of AI
at the code level an AI algorithm takes
in training data labeled or unlabeled
supp lied by developers or acquired by
the program itself and use that
information to learn and grow then it
completes its task using the training
data as a basis so that training data is
so important for this kind of AI some
types of AI algorithms can be taught to
learn on their own and taken new data to
change and refine their processes others
will need the intervention of
programmers in order to streamline now
let let us look at the various types of
AI algorithms so there are three major
categories of AI algorithms that we have
already learned in the previous module
namely one was the supervised learning
the second was unsupervised learning and
the third was reinforcement learning the
key difference between these algorithms
are in how they are trained and how they
function so their differences come
from a how they are trained and B how
they function under those
categories there are dozens of different
algorithms we will discuss about the
most popular and commonly used from each
category as well as where they are
commonly used so the supervised learning
algorithms the first and the most
commonly used category of algorithm is
supervised
learning these work by taking in clearly
labeled data while being trained and
using that to learn and grow it uses the
labeled data to predict outcomes for
other data the name supervised learning
comes from the comparison of a student
learning in the presence of a teacher or
expert so that is why it is called as
supervised learning building a
supervised learning algorithm that
actually works take a team of dedicated
experts to evaluate and review the
results not to mention data scientist to
test the models the algorithms created
to ensure their accuracy against the
original data and catch any errors from
the artificial
intelligence so this is the decision
tree one of the most common supervised
learning algorithm decision trees get
their name because of their tree like
structure even though the tree is
inverted so it is the inverted tree the
roots of the tree are the training data
sets and they leads to specific
nodes which donates a text attribute
notes often lead to another notes and a
note that doesn't lead onward is called
a
leaf so this is the decision node that
is the root node then we have a sub tree
decision node Leaf node this is again
Leaf node because nothing flows from
them here in this decision node again
another decision node node and then this
is leaf node this is leaf node and this
is LEF Leaf node decision Tre is
classify all the data into decision
nodes it uses a selection criteria
called attribute selection
measures which takes into account
various measures some examples would be
entropy gain ratio Information Gain
Etc using the root data and following
the ASM the decision tree can classify
the data it is given by following the
training data into subnodes until it
reaches the conclusion a decision tree
diagram with root node decision node and
Leaf node for better understanding so
this is how root node with friends then
yes windy cold yes no below par so this
is the branch no is splitting walk or
cart decision notes walk above Park cart
cold Etc so this is the demonstration of
this decision tree diagram another type
is random Forest the random Forest
algorithm is actually a broad collection
of different recision trees leading to
its name
the random Forest builds different
decision trees and connects them to gain
more accurate results so that is the
main advantage that it gives more
accurate results this can be used for
both classification and regression type
of supervised learning while a solo
decision tree has no outcome and a
narrow range of groups the forest
assures a more accurate result with a
bigger number of groups and decisions it
has the added benefits of adding random
Ness to the model by finding the best
feature among a random subset of
features overall these benefits create a
model that has wide diversity that many
data scientist
favors so as we can see from the diagram
the results of decision Tre 1 2 and
three are combined which is then
averaged out or the majority is
considered as the final result so these
are the data sets so this is decision
Tre one this is decision tree 2C
decision tree three so with the three
typ with the same type of data sets
there are three decision trees and the
three results majority voting averaging
and then we get the final results
another is Vector support Vector
machines the support Vector machine
algorithm is another common AI algorithm
that can be used for either
classification or regression but is most
often used for
classification the support Vector
machine
works by plotting each piece of data on
a chart in N Dimension space where n is
the number of data points then the
algorithm classifies the data point by
finding the hyper plane that separates
each class there can be more than one
hyper plane the main objective of a
support Vector machine is to segregate
the given data sets in the best possible
way the distance between either nearest
points is known as the margin the the
objective is to select a hyper plane
with the maximum possible margin between
support factors in the given data set
svm searches for the maximum marginal
hyper plane so these are the two
excesses X1 and X2 and here it is a
negative hyper plane then there are
support vectors this is positive hyper
plane so that is maximum margin and this
is maximum margin hyper plane so
generate hyperplanes which segregates
the classes in the best way the figure
on the top shows three hyper
planes black blue and
orange so these are the three hyper
planes here the blue and orange have
high classification errors but the black
is separating the two classes
correctly so so this black black is
separating these two select the right
hyper plane with the maximum segregation
from the either nearest data points as
shown in figure at the bottom then there
is Nave base
the reason this algorithm is called Nave
Bas is that it is based on base theorem
and also relies heavily on a large
assumption that the presence of one
particular feature is unrelated to the
presence of other features in the same
class that major assumption is the Nave
aspect in the name so KN bu is useful
for large data set with different
classes it like many other supervised
learning algorithm is a classification
algorithm it is an algorithm that learns
the probability of every object its
features and which groups they belong to
it is also known as
probabilistic classifier for example you
cannot identify a bir based on its
feature and color as there are many
words with similar attributes but you
make a probabilistic prediction about
the same and that is where knif wise
algorithm comes in so this is these are
the three classifiers 1 2 3 and this is
knif bias classifiers
and this is how they are classified into
three different categories NWI use the
following equation that is p h upon e is
equal to p h upon e into p h ID p e so p
h upon e denotes how event H happens
when event e take
place then P eh represents how often
event e happens when event H takes place
first pH represents the probability of
event X happening on its own and P e
represents the probability of Event Y
happening on its own so then comes
linear regression is a supervised
learning AI algorithm used for
regression modeling it is mostly used
for discovering the relationship between
data points predictions and
forecasting much like support Vector
machines it works by plotting pieces of
data on a chart with the
xais as the independent variable and the
Y AIS as the dependent variable the data
points are then plotted
in a linear fashion to determine their
relationships and forecast possible
future data linear regression is one of
the easiest and the most popular machine
learning algorithm so that is the best
part that is it it is the easiest it is
a statistical method that is used for
predictive
analysis linear regressions make
predictions for continuous real or
numeric values such as the sales salary
age product prices Etc linear
integration algorithm shows a linear
relationship between a dependent that is
y and one or more independent that is X
variables hence that is called as a
linear regression since linear
regression shows the linear relationship
which means it finds how the value of
the dependent variable is changing
according to the value of the
independent variable the linear
regression model provides a sloped
straight line representing the
relationships between the variables so
this is these are the independent
variables on the x-axis and here we have
the dependent variable and then we have
all these data points and in between is
the line of regression so now it it
tells that how it will happen here what
what will the dependent variable look
look like like at this level of
independent variable then comes logistic
regression a logistic regression
algorithm usually uses a binary value 01
to estimate value from a set of
independent
variables the output of Lo logistic
regression is either 1 or zero yes or no
an example of this would be a spam
filter in email the filter uses logistic
regression to Mark whether the incoming
mail is Spam zero or not one logistic
regression is only useful when the
dependent variable is
categorical either yes or no so if it is
not if it is somewhere in between then
logistic regression will not work the
logistic regression model is based on
logistic function which is the type of s
shaped curve that Maps any continuous
input to the probability value between 0
and 1 the logistic function allows us to
model the relationship between the
independent variables and the
probability of dependent variable taking
on the value of one the logistic
regression model estimates the
coefficient of the independent variables
that are most productive of the
dependent variable these coefficients
are used to create a linear equation
that is then transformed by the logistic
functions to produce a probability value
for the dependent variable taking on the
value one the logistic regression is
commonly used in fields such as
Healthcare marketing finance and social
sciences to predict the likelihood of an
event occurring such as whether a
patient has a certain disease or or
whether the customer will buy a product
or not so now as you can see from this
figure we have independent variable X
and then we have dependent variable y
now this dependent variable varies from
0 to 1 independent variable can take any
value but dependent variable can take
values only between 0 to 1 so this
predicts y lies within 0 to one one
range
so that is why it is a s shaped
curve
so now you
see at this level what will be the value
of y at this level what will be the
value of y and at this level what will
be the value of y so as we have already
studied before unsupervised learning
algorithms are given data that is not
labeled unsupervised learning algorithm
use that unlabelled data to create
models and evaluate the relationships
between different data points in order
to give more insights to the data the
next comes K means clustering K means is
an algorithm designed to perform the
clustering
function in unsupervised learning it
does takes this by taking in the
predetermined clusters and plotting out
all the data regardless of the
cluster it then plots a randomly
selected piece of data such as the cerid
of each cluster think link of it as a
circle around each cluster which with
that piece of data as the exact center
point from there it sorts the remaining
data points into clusters based on their
proximity to each other and the CID data
point for each cluster the algorithms
takes the unlabel data sets as input
divides the data set into K numbers of
clusters and repeat the process until it
does not find the best cluster the value
of K should be predetermined in this
algorithm the means clustering algorithm
mainly perform two tasks the first is
determine the West value of K Center
points or cids by and iterative process
the second is assign each data point to
its closest G
Center those data points which are near
to the particular K Center creates a
cluster so the working of the K means
algorithm is as follows first select the
number of K to decide the number of
clusters second is Select random ke
points or or
centroids the third is assign each data
point to those closed centroid which
means form which will form the
predefined K cluster the fourth is
calculate the variance and the place a
new centroid of each cluster the fifth
is repeat the third
steps which means reassigning each data
point to the new closest Cento for each
cluster so this is how it works before
the K means and this after K means how
neatly they are clustered the next comes
gausian mixture model goian mixture
models are similar to g means clustering
in many ways both are concerned with
sorting data into predetermined clusters
based on proximity however goian models
are a little more versatile in the
shapes of the Clusters they
allow K means clustering only allows
data to be clustered in circles with the
centroid in the center of each cluster
goian mixtures can handle data that LS
on the graph in more linear patterns
allowing for oblong shaped
structures this allows for greater
Clarity in clustering of one data point
lands inside the circle of another
cluster the starting point and training
process of the K means and GMM are the
same however K means use a
distance-based approach NG GMM uses a
probabilistic based approach there is
one primary assumption in GMM the data
set consist of multiple cians in other
word a mixture of the cians it is used
to forecast the sales of product
understand customer churn through the
length of different groups of customers
some AI algorithms can use either
supervised or unsupervised data input
and still function they might have
slightly different applications based on
their
status the next comes K nearest neighbor
algorithm so K nearest neighbor that is
K NN algorithm is a simplistic AI
algorithm that assumes that all the data
points provide ided are in a proximity
to each other and plots them onto a map
to show the relationship between them
then the algorithm can calculate the
distance between data points in order to
extrapolate the relationships and
calculate the distance on the graph both
supervised and unsupervised algorithm in
supervised learning it can be used for
either classification or regression
applications in unsupervised learning it
is popularly used for anomaly detection
that is finding data that does not
belong and removing it a NN is a
non-parametric algorithm which means it
does not make any assumptions on
underlying data it is also called a lazy
learn algorithm because it does not
learn from the training set immediately
Instead at the training phase it just
store the data sets and when it gets new
data it classifies that data into a
category that is much similar to the new
data KNN algorithms can be used for
regression as well as classification but
mostly it is used for classification
problem problem s Suppose there are two
categories category a and category B and
we have a new data point X1 so this data
point will lie in which of these
categories to solve this type of problem
we need a knnn algorithm with the help
of knnn we can easily identify the
category or class of a particular data
set so now this was category a this was
category B and now we have this new data
now after KNN this new data point is
assigned to category
1 so now it becomes easier to deal with
this uh this data point rather than just
having one data point is Stand Alone the
next comes neural networks neural
network algorithm is a term for a
collection of AI algorithms that mimic
the functions of a human brain so that
mimics the functions of a human brain
these tend to be more complex than many
of the algorithms discussed above and
have applic applications which are still
being discovered so all those
applications have not yet been
discovered they are still in the process
of Discovery in unsupervised and
supervised learning it is it can be used
for
classification and pattern
recognition it consists of
interconnected nodes that is neurons
organized into layers information flows
through this
nodes and the network adjust the
connection strengths that is weights
during training to learn from data
enabling it to recognize
patterns make predictions and solve
various tasks in machine learning and
artificial intelligence and there are
three levels in the network architecture
the input layer the hidden layer which
can be more than one and the output
layer because of the numerous layers it
is sometimes refers to as the MLP that
is multi-layer percep dra it is possible
to think of the Hidden layer as a
distillation layer which ass some of the
most relevant patterns from the inputs
and send them onto the next layer for
further analysis it accelerates and
improves the efficiency of the network
by recognizing just the most important
information from the inputs and
discarding the Redundant
information. In the accompanying diagram, the input layer feeds into the hidden layer, the network layers carry the signal feed-forward to the network output, and back propagation carries the error signal back through the layers during training.
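A minimal forward pass through such a network can be sketched as follows; the layer sizes, weights, and biases are invented for illustration (a real network would learn the weights by back propagation):

```python
import math

def sigmoid(x):
    """Squash a neuron's weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron weighs every input,
    adds its bias, and applies the activation function."""
    return [
        sigmoid(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

# Invented weights for a tiny 2-input, 2-hidden-neuron, 1-output network.
hidden_w, hidden_b = [[0.5, -0.6], [0.9, 0.2]], [0.1, -0.3]
output_w, output_b = [[1.2, -0.8]], [0.05]

x = [0.7, 0.1]                      # input layer
h = layer(x, hidden_w, hidden_b)    # hidden (distillation) layer
y = layer(h, output_w, output_b)    # output layer
print(y)
```

Training would then adjust the connection strengths (the weights) so that the output moves toward the desired target.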
The last major type of AI algorithm is the reinforcement learning algorithm, which learns by taking in feedback from the results of its actions. This feedback is typically in the form of a reward. The reinforcement algorithm is usually composed of two major parts: the first is an agent that performs an action, and the second is the environment in which the action is performed. The cycle begins when the environment sends a state signal to the agent, which cues the agent to perform a specific action within the environment. Once the action is performed, the environment sends a reward signal to the agent, informing it of what happened, so that the agent can update and evaluate its last action. Then, with that new information, it can take an action again, and that cycle repeats until the environment sends a termination signal. There are two types of reinforcement the algorithm can use: either a positive reward or a negative reward. In reinforcement algorithms there are slightly different approaches, depending on what is being measured and how it is being measured.
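The agent-environment cycle described above can be sketched as a loop. Everything here, the toy environment, the two possible actions, and the learning rate, is a made-up illustration of the state, action, reward, update pattern rather than an algorithm from the lecture:

```python
import random

def run_episode(steps=20, alpha=0.5, seed=0):
    """One episode of the cycle: state signal -> action -> reward -> update."""
    rng = random.Random(seed)
    value = {-1: 0.0, +1: 0.0}   # the agent's running estimate of each action
    state, total = 0, 0
    for _ in range(steps):
        # The agent usually repeats the action it currently values most,
        # and sometimes explores at random.
        if rng.random() < 0.3:
            action = rng.choice([-1, +1])
        else:
            action = max(value, key=value.get)
        state += action                        # the action changes the environment
        reward = 1 if action == +1 else -1     # positive or negative reward signal
        value[action] += alpha * (reward - value[action])  # evaluate last action
        total += reward
        if abs(state) >= 5:                    # the environment's termination signal
            break
    return total, value

total, value = run_episode()
print(total, value)
```

In this toy environment the +1 action is always rewarded and the -1 action always punished, so over the episode the agent's estimates shift until it prefers +1, which is exactly the feedback loop described above.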
Here are some definitions of the different models and measures. The first is the policy: the approach the agent takes to determine its next action. The second is the model: the situation and dynamics of the environment. The third is the value: the expected long-term result. The value is different from the reward, which is the result of a single action within the environment; the value is the long-term result of many actions.
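The reward/value distinction can be made concrete with a small calculation. One common way to formalize "expected long-term result" is a discounted sum of rewards; the discount factor gamma below is a standard convention, not a detail given in the lecture:

```python
def discounted_return(rewards, gamma=0.9):
    """Long-term value of a sequence of actions: each later reward
    counts a little less than the one before it."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

rewards = [1, -1, 1, 1]            # the reward scores each single action
print(rewards[0])                   # immediate reward of the first action alone
print(discounted_return(rewards))   # value: the long-term result of many actions
```

A single reward can look bad (the -1 above) even when the overall trajectory has high value, which is why value-based methods look past the short-term reward.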
The first approach is value-based: in a value-based reinforcement algorithm, the agent pushes towards an expected long-term return, which is the important point here, instead of just focusing on the short-term reward. The next is policy-based: a policy-based reinforcement algorithm usually takes one of two approaches to determine the next course of action, either a standardized approach, where any state produces the same action, or a dynamic approach, where certain probabilities are mapped out and calculated, and each probability has its own policy reaction. The next is model-based: in this algorithm, the programmers create a different dynamic for each environment, one environment, one dynamics, so that when the agent is put into each different model, it learns to perform consistently under each condition. Now let us look at the
Algorithmic Bill of Rights. In January 2017, the US Public Policy Council of the Association for Computing Machinery, which consists of educators, researchers, and professionals in the world of information technology, outlined a set of guiding principles that could serve as a precursor for an Algorithmic Bill of Rights. These principles cover seven general areas, which we will discuss one by one. The first is awareness: those who design, implement, and use algorithms must be aware of their potential biases and possible harm, and must take these into account in their practices. The second is access and
redress: those who are negatively affected by algorithms must have systems that enable them to question those decisions and seek redress. The third is accountability: organizations that use algorithms must take responsibility for the decisions those algorithms reach, even if it is not feasible to explain how the algorithms arrive at those decisions. The fourth is explanation: those affected by an algorithm should be given explanations of the decisions and of the procedures that generated them. The fifth is data provenance: those who design and use algorithms should maintain records of the data used to train the algorithm, and make those records available to appropriate individuals to be studied for possible biases. The sixth is auditability: algorithms and data should be recorded so that they can be audited in case of possible harm. The seventh is validation and testing: organizations that use algorithms should test them regularly for biases and make the results publicly
available. To conclude: in this module we have briefly introduced AI algorithms and how they work; we then discussed the commonly used AI algorithms under the various learning techniques, that is, supervised, unsupervised, and reinforcement learning; and finally we gave a brief overview of the Algorithmic Bill of Rights. These are the five references from which the material for this module was taken. Thank you.