Introduction to Generative AI and Explainable AI

Dr. Ani Thomas
26 Sept 2023 · 21:58

Summary

TL;DR: Annie Thomas discusses advancements in AI, focusing on machine learning and natural language processing. She introduces generative AI, capable of creating new data such as text or images, and explainable AI, which counters the 'black box' nature of ML by making decision-making processes transparent. Thomas also touches on large language models like GPT and the Transformer architecture, highlighting their applications across a range of tasks. She emphasizes the importance of local-language processing and suggests research opportunities in summarization, spelling correction, and sentiment analysis.

Takeaways

  • 🌟 Annie Thomas introduces herself as a speaker from India, focusing on advancements in AI, particularly in machine learning and natural language processing.
  • 📈 She discusses the importance of generative AI, which can create new content like text, images, or audio based on training data, and discriminative models that classify and predict from existing data.
  • 💡 Generative AI, also known as gen AI, operates on unstructured data and is exemplified by predictive text features on mobile devices.
  • 🧠 Large language models are highlighted as a significant development in AI, capable of understanding context and generating human-like text.
  • 🌐 The talk covers the rise of Transformers, a deep learning architecture that has revolutionized various AI applications beyond just language processing.
  • 🔍 Explainable AI is introduced as a response to the 'black box' issue in machine learning, aiming to make AI decisions understandable and trustworthy, especially in critical fields like medicine and finance.
  • 📊 Annie emphasizes the four principles of explainable AI: meaningful explanation, accuracy, knowledge limitation, and user assistance.
  • 🔎 The script mentions the challenges of balancing interpretability and accuracy in explainable AI and the need for clear abstractions in explanations.
  • 📚 Research opportunities in natural language processing are explored, including text summarization, spell checkers, and news article classification, with a focus on local languages.
  • 📊 The importance of differentiating between extractive and abstractive summarization is highlighted, where the former extracts parts of the original text and the latter generates new text.
  • 📈 The script concludes with the speaker's encouragement for researchers to explore various summarization techniques and semantic analysis to improve AI models.

Q & A

  • Who is Annie Thomas and what is her background?

    -Annie Thomas is the speaker of the keynote session. She is from the Bhilai Institute of Technology, Durg, in Chhattisgarh State, India.

  • What is the main topic of Annie Thomas' keynote session?

    -The main topic of the keynote session is research prospects in the field of machine learning and natural language processing.

  • What are the two new aspects of AI in natural language processing mentioned by Annie Thomas?

    -The two new aspects of AI in natural language processing mentioned are generative AI and explainable AI.

  • What is the difference between supervised and unsupervised machine learning models?

    -Supervised models have a set of labels that are fixed and always existing, allowing for checking predictions for correctness. Unsupervised models do not have a class of labels and predict based on the model-generated data, which can be unstructured.

  • What is generative AI and how does it differ from discriminative models?

    -Generative AI generates new data based on the training provided and understands the distribution of data. Discriminative models, on the other hand, are used to classify, predict, and cluster and are trained on labeled data.

  • Can you provide an example of generative AI mentioned in the script?

    -An example of generative AI is the predictive text feature on mobile phones, which suggests the next word in a sentence based on patterns learned from previous inputs.

  • What are Foundation models in the context of generative AI?

    -Foundation models are large language models that work on unstructured data to generate new patterns and can generate new content such as text, images, or audio based on the training provided.

  • What is the importance of explainable AI in industries?

    -Explainable AI counters the 'Black Box' tendency of machine learning by providing explanations for decisions, which is crucial in domains like medicine, defense, finance, and law to build trust in the algorithms.

  • What are the four principles of explainable AI?

    -The four principles of explainable AI are providing meaningful explanations, ensuring accuracy, having a high knowledge limit, and assisting users in determining appropriate trust in the system.

  • What are the challenges faced in explainable AI?

    -Challenges in explainable AI include contrasting interpretability and accuracy, the need for abstractions to clarify explanations, and the difficulty of providing explanations that meet human accuracy levels.

  • What are some applications of natural language processing mentioned in the script?

    -Some applications of natural language processing mentioned are text summarization, spell checkers, news article classification, and semantic analysis of reviews.
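Among the applications listed above, spell checking lends itself to a compact illustration. The sketch below is not from the talk: it is a minimal, hedged example using the classic Levenshtein edit distance, and the `suggest` helper and toy vocabulary are invented for illustration.

```python
# Minimal spell-suggestion sketch based on Levenshtein edit distance.
# Illustration only: `suggest` and the toy vocabulary are invented here,
# not part of the systems described in the talk.

def edit_distance(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def suggest(word, vocabulary, max_dist=2):
    """Return in-vocabulary words within max_dist edits, closest first."""
    scored = sorted((edit_distance(word, v), v) for v in vocabulary)
    return [v for d, v in scored if d <= max_dist]

vocab = ["summary", "summarize", "semantic", "sentiment"]
print(suggest("sumary", vocab))  # -> ['summary']
```

The same distance function covers the non-word error types the talk lists (insertion, deletion, substitution); real-word and phonological errors would need context-aware models on top.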

Outlines

00:00

💡 Introduction to AI and Machine Learning

Annie Thomas introduces herself and the keynote session's focus on research prospects in machine learning and natural language processing. She distinguishes between generative AI, which creates new content like text or images, and explainable AI, which aims to make AI decisions understandable. She frames machine learning as a subset of AI, and deep learning as a subset of machine learning that uses networks with multiple hidden layers. She then contrasts supervised learning, which uses labeled data, with unsupervised learning, which builds models from unstructured, unlabeled data.
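The supervised/unsupervised contrast can be made concrete with a toy sketch. Both helpers below are invented for illustration, not the algorithms discussed in the talk; real work would use a library such as scikit-learn.

```python
# Toy contrast between supervised and unsupervised learning.
# Illustration only: invented sketches, not the talk's algorithms.

def supervised_nearest_label(train, query):
    """Supervised: every training point carries a known label, so a
    prediction can always be checked against the ground truth."""
    _, label = min(train, key=lambda pair: abs(pair[0] - query))
    return label

def unsupervised_two_means(points, iters=10):
    """Unsupervised: no labels exist; the model discovers structure
    (here, two cluster centres) from the raw inputs alone."""
    c1, c2 = min(points), max(points)  # crude initial centres
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

labeled = [(1.0, "low"), (1.2, "low"), (8.0, "high"), (8.5, "high")]
print(supervised_nearest_label(labeled, 7.9))          # label we can verify
print(unsupervised_two_means([1.0, 1.2, 8.0, 8.5]))    # discovered centres
```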

05:00

📚 Deep Dive into Generative AI

The paragraph delves into generative AI, contrasting it with discriminative models. Generative models create new data based on training, understanding data distribution to generate examples. Annie provides an example with predictive text on mobile phones, illustrating how generative AI learns patterns to suggest the next word in a sentence. She also introduces foundation models and large language models capable of handling vast amounts of text, image, or video data to generate human-like text, with examples like GPT-3 and ChatGPT. The paragraph also mentions the rise of Transformer models post-2017, which are versatile for various tasks beyond just language processing.
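The predictive-text example above can be sketched as a tiny bigram model. This is a deliberate simplification: phone keyboards and large language models use far richer models, and the training corpus here is invented.

```python
from collections import Counter, defaultdict

# Bigram sketch of the "next predicted word" behaviour described above.
# Deliberately simplified; the corpus is invented for illustration.

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1
    return follows

def predict_next(follows, word):
    """Suggest the most frequent successor of `word`, or None."""
    successors = follows.get(word.lower())
    if not successors:
        return None
    return successors.most_common(1)[0][0]

corpus = [
    "the institute of technology",
    "the institute of science",
    "welcome to the institute",
]
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # -> institute
```

Large language models generalize this idea: instead of counting word pairs, they learn the distribution of the next token conditioned on the whole preceding context.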

10:02

🔍 The Emergence of Explainable AI

Explainable AI is introduced as a response to the 'black box' issue in machine learning, where decisions lack transparency. It's crucial for domains like medicine and finance to understand and trust AI algorithms. The paragraph outlines four principles of explainable AI: providing meaningful, accurate explanations within the designed operational limits. Challenges include balancing interpretability with accuracy and using abstractions for clarity. The categorization of explainable AI is discussed, including model-agnostic vs. model-specific and global vs. local explanations.
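One model-agnostic, global explanation technique in the spirit of this categorization is permutation feature importance: shuffle one input feature and measure how much accuracy drops. It treats the model as a black box, needing only its predictions, so it applies to any model type. A minimal sketch with an invented toy model and data:

```python
import random

# Sketch of permutation feature importance, a model-agnostic, global
# explanation technique: shuffle one feature, measure the accuracy drop.
# The toy model and data below are invented for illustration.

def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Accuracy drop after shuffling one feature column."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)
    X_shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, column)]
    return base - accuracy(model, X_shuffled, y)

# Toy black box: predicts 1 when feature 0 is large; feature 1 is noise.
model = lambda row: int(row[0] > 0.5)
X = [[0.1, 0.9], [0.2, 0.1], [0.8, 0.4], [0.9, 0.7]]
y = [0, 0, 1, 1]

print(permutation_importance(model, X, y, feature_idx=0))  # informative
print(permutation_importance(model, X, y, feature_idx=1))  # irrelevant: 0.0
```

A local explanation, by contrast, would attribute a single prediction to its inputs rather than score features over the whole dataset.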

15:03

🌐 Local Language Processing and Applications

Annie discusses her work on local languages, focusing on text summarization. She outlines the main types of summarization, including single-document and multi-document, and purpose-based categories such as informative or evaluative. The paragraph covers extractive and abstractive techniques, the latter requiring more sophisticated models. She also touches on spelling and grammar correction for local languages and on classifying news articles into predefined categories, emphasizing the challenges posed by language diversity.

20:04

📊 Semantic Analysis and News Classification

The final paragraph discusses ongoing work on semantic analysis of social media reviews to improve sentiment analysis models. Annie mentions the application of data mining techniques to enhance accuracy. Additionally, she talks about classifying news articles into appropriate sections like national, international, business, etc., using discriminative models trained on the Hindi language. The paragraph highlights the collection of large datasets and the development of models to address the unique challenges of processing different languages.
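A discriminative news classifier of the kind described can be sketched with a tiny multinomial Naive Bayes. The toy English data below is invented for illustration; the actual work targets Hindi and much larger labeled corpora.

```python
import math
from collections import Counter, defaultdict

# Tiny multinomial Naive Bayes, in the spirit of routing articles to
# predefined newspaper sections. Toy data invented for illustration.

class NaiveBayes:
    def fit(self, docs, labels):
        self.word_counts = defaultdict(Counter)
        self.label_counts = Counter(labels)
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, doc):
        total_docs = sum(self.label_counts.values())
        def log_prob(label):
            counts = self.word_counts[label]
            total = sum(counts.values())
            lp = math.log(self.label_counts[label] / total_docs)
            for w in doc.lower().split():
                # Laplace smoothing: unseen words must not zero out a class.
                lp += math.log((counts[w] + 1) / (total + len(self.vocab)))
            return lp
        return max(self.label_counts, key=log_prob)

docs = ["team wins the cricket match", "stock markets rally on earnings",
        "new vaccine shows strong results", "striker scores twice in final"]
labels = ["sports", "business", "health", "sports"]
clf = NaiveBayes().fit(docs, labels)
print(clf.predict("cricket final tonight"))  # -> sports
```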

Keywords

💡Artificial Intelligence (AI)

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions. In the video, AI is the overarching theme, with a focus on how it can make machines intelligent, efficient, and capable of understanding and predicting human-like responses.

💡Machine Learning

Machine Learning is a subset of AI that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. It is mentioned as a part of AI where systems learn according to the knowledge base provided.

💡Deep Learning

Deep Learning is a subset of machine learning with neural networks having multiple hidden layers, allowing the model to learn and make decisions based on patterns in large amounts of data. It is discussed in the context of providing more precise and evaluative results for problem statements.

💡Generative AI

Generative AI refers to AI models that can generate new data based on the training data provided. It is highlighted as a type of machine learning model that can create new data, such as text or images, rather than just classifying or predicting existing data.

💡Explainable AI

Explainable AI is a concept that counters the 'black box' nature of machine learning models by providing understandable explanations for their decisions. It is discussed as a necessity in fields like medicine and finance where understanding the decision-making process is crucial.

💡Supervised Learning

Supervised Learning is a type of machine learning where the model is trained on a dataset that includes input and output pairs. It is mentioned as a model that has a set of labels to check the correctness of predictions.

💡Unsupervised Learning

Unsupervised Learning is a type of machine learning where the model works with input data without labeled responses. It is discussed as a model that predicts based on the patterns it discovers in the data without any pre-existing labels.

💡Discriminative Models

Discriminative Models are used to classify, predict, or cluster data and are trained on labeled datasets. They are contrasted with generative models in the script, where the latter generates new data examples.

💡Large Language Models

Large Language Models are AI models specifically designed for natural language processing tasks. They are characterized by their large size and ability to generate human-like text. Examples mentioned include GPT-3 and ChatGPT.

💡Transformers

Transformers are a deep learning architecture that relies on attention mechanisms to process data. They are mentioned as a model that can be applied to a wide range of problems, including natural language tasks, after the advent of large language models.
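The attention mechanism Transformers rely on can be illustrated with a bare-bones scaled dot-product attention in pure Python. Toy sizes, no batching and no learned projections; a sketch of the idea from "Attention Is All You Need", not a working Transformer.

```python
import math

# Bare-bones scaled dot-product attention: each query attends to every
# key, and the values are mixed according to the resulting weights.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """queries/keys are lists of equal-length vectors (lists of floats)."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much each position matters
        outputs.append([sum(w * v[i] for w, v in zip(weights, values))
                        for i in range(len(values[0]))])
    return outputs

# The query aligns with the second key, so the output leans toward the
# second value vector.
q = [[1.0, 0.0]]
k = [[0.0, 1.0], [1.0, 0.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))
```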

💡Summarization

Summarization refers to the process of shortening a text while retaining the main points. It is discussed in the context of natural language processing, where the script mentions single document, multi-document, and different types of summarization based on the target audience.

Highlights

Introduction to the keynote session on Research prospects in machine learning and natural language processing.

Exploration of generative AI and explainable AI as new aspects of AI in natural language processing.

Description of AI's goal to make machines as intelligent as humans, with examples like ChatGPT.

Explanation of machine learning as a subset of AI and its function to make systems learn from a knowledge base.

Deep learning defined as a subset of machine learning with deep neural networks for precise results.

Differentiation between supervised and unsupervised machine learning models.

Introduction to generative AI, capable of generating new data based on training.

Example of generative AI in predictive text input on mobile phones.

Generative AI's ability to work with unstructured data like text, images, or audio.

Foundation models and large language models in the context of generative AI.

Research focus on local languages in natural language processing tasks.

Popular large language models like GPT-3 and their capabilities in text generation.

The rise of Transformers in deep learning architecture and their applications.

Explainable AI as a counter to the Black Box tendency of machine learning.

Importance of explainable AI in high-stakes domains like medicine, defense, and law.

Four principles of explainable AI: explanation, accuracy, knowledge limit, and user assistance.

Challenges in explainable AI, including the trade-off between interpretability and accuracy.

Categorization of explainable AI into model-agnostic, model-specific, global, and local explanations.

Research opportunities in natural language processing, including text summarization and sentiment analysis.

Different types of text summarization: extractive, abstractive, monolingual, multilingual, and cross-lingual.

Spell checkers and grammatical correction systems for local languages.

Automatic classification of news articles into predefined categories using discriminative models.

Semantic analysis of social media reviews and sentiment analysis models.

Transcripts

play00:02

good morning everyone

play00:04

hope you all are fine

play00:06

let me introduce myself I'm Annie Thomas

play00:09

from the Bhilai Institute of Technology, Durg

play00:11

from Chhattisgarh State and from the

play00:14

country of India

play00:17

today's keynote session is on Research

play00:20

prospects in the field of machine

play00:21

learning and natural language processing

play00:26

the two new aspects of AI in the field

play00:29

of natural language processing I want to

play00:32

introduce over here that is generative

play00:34

Ai and explainable AI

play00:37

so as we all know about AI

play00:41

artificial intelligence everyone knows

play00:44

nowadays is to make

play00:47

the machine as intelligent

play00:50

as human beings and with the Advent of

play00:54

chat GPT and all all are familiar with

play00:56

the term AI nowadays to make the machine

play01:00

as efficient as humans

play01:03

so machine learning that is a part of AI

play01:06

there is a subset of AI and in machine

play01:10

learning we make the system learn

play01:13

according to the knowledge base that is

play01:15

provided again going deeper into that we

play01:18

have deep learning that is a subset of

play01:20

machine learning where we have deep

play01:23

neural networks to

play01:26

um

play01:27

have more precise and evaluative results

play01:31

for the problem statements it has more

play01:34

than one hidden layers which makes it

play01:37

possible to go deep into the network

play01:41

frames so depending on all this we'll go

play01:46

further into generative AI the machine

play01:48

learning model can be divided into

play01:50

supervised and unsupervised models

play01:53

supervised model has a set of labels

play01:57

that are fixed and they are

play02:03

always existing like if we get an output

play02:07

also we will have a label attached to

play02:10

the output from which we can we can

play02:13

always

play02:14

check whether our predictions correct or

play02:17

not the unsupervised models they are not

play02:20

having a class of labels we have to

play02:24

predict based on the model that is

play02:27

generated and maybe that may be

play02:31

unstructured data or unstructured data

play02:34

so unsupervised learning will be on the

play02:38

basis of the input data that is given a

play02:41

model will be working on that input data

play02:45

and it will be having a generated

play02:48

example

play02:50

going further into deep learning we

play02:53

again have two types discriminative and

play02:55

generative

play02:56

discriminative is used to classify

play02:59

predict cluster and it is trained on a

play03:02

data set of labeled data we already have

play03:06

the labeled data then

play03:09

um

play03:10

the training is easy as we have to put

play03:14

it into one particular class and it

play03:18

learns the relationship between the data

play03:19

points and the labels but generative is

play03:22

different that it generates new data

play03:24

based on the training that is provided

play03:26

to it so it has to understand the

play03:29

distribution of data and How likely a

play03:33

given example is going to be

play03:35

categorized into that generative example

play03:38

but in discriminative we don't have to

play03:40

generate any new data the best example

play03:43

of generative AI when it came into

play03:45

existence is everyone is having a mobile

play03:47

and we type the text so when we are

play03:51

typing the text it will give you like if

play03:53

I type Delight will give the next

play03:56

predicted word as Institute and the next

play04:00

predicted word or as of Bhilai Institute

play04:03

of Technology I'll get an option for

play04:06

that because it learns from the patterns

play04:09

I have been doing all these these days

play04:11

so the it is predicting the next word in

play04:15

a sentence that's the best example of a

play04:18

generative AI now if we come to the

play04:22

concept of generative AI it is also

play04:25

called gen AI it is artificial

play04:28

intelligence capable of generating text

play04:30

images or other media using the

play04:34

generative models and

play04:37

with the right side if whatever I have

play04:40

written if you can see not j a i when y

play04:44

input is a number or a discrete or a

play04:47

class or a probability but it is when it

play04:49

is a natural language text or an image

play04:51

or an audio that shows that it works on

play04:55

unstructured data so generative AI works

play05:00

on unstructured data to generate new

play05:04

patterns we have Foundation models large

play05:07

language models which we'll be

play05:08

discussing in the further slides

play05:10

so now we understood that generative AI

play05:14

where we generate new content content

play05:17

and that content may be a text or image

play05:21

or audio

play05:22

so if we see the category of the

play05:26

supervised semi-supervised unsupervised

play05:28

chain AI you can have input as training

play05:31

code or label data or unlabeled data and

play05:35

the foundation model works on all this

play05:38

and generates new content and the output

play05:42

can be a text or image or code or a

play05:45

video audio anything

play05:47

so

play05:49

the main concept here of generative AI

play05:53

is to generate the new content based on

play05:55

the training that is provided so here

play05:58

since I am interested in text so I have

play06:01

taken only the text input you can also

play06:04

take image or audio so in the

play06:08

uh some research which we are pursuing

play06:11

here we are working on text and that

play06:13

also we are working on the local

play06:15

languages because in English already the

play06:17

work uh text to text if you see the post

play06:20

column translation summarization

play06:22

question answering grammar correction

play06:24

all these my research Scholars are doing

play06:26

in the for the local languages here

play06:28

because for English we are having

play06:31

already uh or much work is being done on

play06:34

this uh in these areas I'm not working

play06:37

in image and audio and musicians but you

play06:40

can because image generation text to

play06:43

image text to video text to speech all

play06:46

these are again natural language

play06:48

processing generative way I can be used

play06:51

in all these areas

play06:53

we see we know about the Google's

play06:56

Foundation models that is PaLM API for

play06:58

text is a very popular model word we use

play07:02

ViT for vision we use

play07:05

ViT-GPT2, BLIP VQA, all these models we are

play07:09

using now uh when we work talk about

play07:13

natural language processing we talked

play07:15

about large language models a large

play07:18

language model is specifically designed

play07:21

and trained for natural language

play07:22

processing tasks what is the

play07:25

characterization of large language

play07:27

models is its large size so it can be

play07:31

working for vast amounts of text data

play07:34

or it can be working for other areas

play07:39

also and it is capable of generating

play07:42

human-like text understanding context

play07:45

answering questions all these things the

play07:48

large language models are doing nowadays

play07:51

the notable examples are OpenAI's GPT

play07:55

GPT-3, GPT-4, and ChatGPT and BERT, all

play08:00

these examples are very popular they

play08:03

train

play08:04

um

play08:05

Text data or image or video and are

play08:11

capable of generating the new text based

play08:13

on the training data the large language

play08:16

models are able to handle the large

play08:19

volume of data that is taken from

play08:21

internet or from social media or any

play08:24

other place this shows some other models

play08:27

which are also popular

play08:29

then after the large language models

play08:32

came into existence

play08:34

there was an era of Transformers

play08:38

a Transformer is a deep learning

play08:43

architecture that relies on the parallel

play08:46

multi-head attention mechanism the

play08:48

modern Transformer it was proposed in

play08:51

the year 2017 I think and the paper from

play08:55

which it was introduced was attention is

play08:58

all you need so attention was given to

play09:01

the important data which has to be

play09:05

picked up from the given set of huge

play09:07

databases and the less attention has to

play09:11

be provided to the data which is not of

play09:13

much importance on the basis of that the

play09:16

Transformers were born so a generative

play09:20

pre-trained Transformer is a more

play09:22

broader term for models based on

play09:24

transform architecture with these models

play09:27

they can be applied to wide range of

play09:29

problems not only for natural language

play09:31

tasks computer vision speech recognition

play09:33

reinforcement learning all the things it

play09:36

is being used then these models are

play09:39

pre-trained on large data sets and can

play09:42

be fine-tuned for specific tasks some

play09:46

more examples I'll be showing you in the

play09:48

next slide that is Vision Transformer

play09:50

the DETR, Conformer, Swin Transformer

play09:53

perceiver and perceiver IO this is what

play09:57

we have talked about generative Ai and

play10:01

the concepts which are used in

play10:03

generative AI moving on ahead to

play10:05

explainable AI that is also a new term

play10:08

that is being used in the industry

play10:11

nowadays and

play10:14

when we talk about explainable AI it

play10:17

counters the Black Box tendency of

play10:20

machine learning where even the AI

play10:22

designers cannot explain why it arrived

play10:26

at a specific decision now what happens

play10:28

is we see the black box design of

play10:32

machine learning where the we have

play10:35

reached to a conclusion but there needs

play10:38

to be an explanation about how we are

play10:40

getting the results why it is not like

play10:42

this and why it is like this and why

play10:45

this explainable AI was needed when we

play10:48

are getting the results why is this

play10:49

explainable AI was needed that was

play10:52

because the domains like medicine

play10:56

Defense Finance and law where it is

play11:00

crucial to understand the decisions and

play11:04

build trust in the algorithms they have

play11:06

made their algorithms we are employing

play11:08

algorithms we are getting the results

play11:10

unless and until we have trust in that

play11:12

those models who will is going to use

play11:15

those models you know ChatGPT and Google

play11:19

search are so popular because of the

play11:21

trust we have in those systems that we

play11:23

are getting such important informations

play11:26

according to the need of the query we

play11:29

are putting there so explainable AI came

play11:32

into existence to attract find out how

play11:36

we are getting the results

play11:38

so four principles of explainable AI we

play11:41

have these four principles as

play11:43

explanation meaningful explanation

play11:47

accuracy we we are giving an explanation

play11:49

but that explanation should have the

play11:52

satisfactory level of accuracy not that

play11:55

if it is 15% accurate we cannot say we

play11:58

are having a good explanation so

play12:00

explanation is there that is Meaningful

play12:02

that is interpretable but that

play12:04

explanation should have an accuracy and

play12:06

the knowledge limit should be so high

play12:08

that explainable AI will be able to

play12:11

provide the system to operate under the

play12:14

conditions for which it was designed and

play12:17

when it should reach the sufficient

play12:20

confidence in its output

play12:22

so

play12:27

the assisting its users and determining

play12:30

appropriate trust that suppose part

play12:32

trust we develop in the system in the

play12:35

model which we are generating and the

play12:37

second part is we have an

play12:39

interpretability and explainability

play12:40

mechanism which can explain to the users

play12:44

how it is working so the next part is

play12:47

that all that is having so many

play12:50

advantages there are so many

play12:53

disadvantages or issues challenges which

play12:56

are existing with this one is

play12:58

contrasting the interpretability and

play13:00

accuracy we know we have to reach the

play13:03

human accuracy levels sometimes it may

play13:07

the explainable AI may not be able to

play13:09

give the correct explanations it may be

play13:12

contrasting with the human explanations

play13:14

and those cases which has to be dealt in

play13:18

the model as we improved like that so

play13:20

still it exists these issues exist

play13:22

describing the

play13:28

and there should be the use of

play13:30

abstractions to clarify the explanations

play13:34

now based on this explainable AI, XAI as it

play13:39

is also called we see there is

play13:41

categorization based on agnosticity that

play13:44

is model agnostic or it is model

play13:47

specific if it is applied to all the

play13:50

model types then it is called Model

play13:52

agnostic and if it can be applied to

play13:55

only particular specific model types for

play13:58

which particular task it's been made the

play14:00

latest model specific that is the

play14:02

categorization based on agnosticity

play14:05

now depending on the scope we have

play14:10

Global explanation or local explanation

play14:13

if you want some part or the prediction

play14:16

of some particular area only to be

play14:18

displayed it may say it is local

play14:20

explanation and we we want the

play14:22

explanation of the whole model how it is

play14:24

looking then we go for Global

play14:27

explanation but local explanations

play14:29

are also important because individual

play14:33

predictions are also taken into account

play14:35

in X AI because Global explanation can

play14:40

be given on a larger scale but local

play14:43

explanation need to go deeper into the

play14:45

problem so that is again an area where

play14:49

the research can be done and the

play14:53

different problems can have different

play14:56

perspectives different aspects to

play14:59

include either model agnostic model spec

play15:02

or Global or local explanation

play15:07

as I told you we are working on local

play15:09

languages and on the basis of this local

play15:13

languages we are doing text

play15:15

summarization

play15:17

so again now for the persons who are

play15:21

hearing me those who want to do

play15:24

um

play15:25

and research in the field of natural

play15:27

language processing machine learning you

play15:29

can go for single document summarization

play15:32

you can go for multi-document

play15:33

summarization you can take any of the

play15:36

languages on which work has not been

play15:37

done you can go for indicative

play15:39

informative evaluative summarization any

play15:43

of the tasks can be taken upon and then

play15:46

on the basis of this you can generate

play15:50

models generative models or

play15:52

discriminative models which will help

play15:55

you to give a proper text summarization

play15:58

and on the basis of target audience

play16:01

there is a generic summarization and

play16:04

there is a query focused summarization

play16:07

on the basis of type of summarizer you

play16:10

have monolingual multilingual

play16:12

cross-lingual all types of summarization

play16:15

so these I'm I have only included this

play16:18

slide so that if you have some have

play16:21

interest in generic summarization some

play16:23

in multilingual so you can pursue your

play16:26

area of Interest

play16:28

then the two main Concepts that come

play16:30

over here is extractive and abstractive

play16:34

in extractive summarization the parts of

play16:38

the original text are taken to form the

play16:41

summary like if we are

play16:43

taking a document and from the document

play16:46

we want to summarize it the important

play16:49

sentences will be taken out and given as

play16:52

the extracted summarization but in the

play16:54

abstractive summarization you need to

play16:57

generate the new text depending on the

play17:01

extracted important parts of the

play17:04

documents so all this has to be done

play17:07

which shows that abstractive

play17:10

summarization needs more

play17:13

fields which have to be covered so that

play17:17

the abstracted summarization finds out

play17:19

the meaning and again generates the new

play17:22

text based on the training data

play17:25

some comparative studies of pairwise

play17:28

Publications has been done

play17:31

the New Concept which has been included

play17:35

for the new language has been done in

play17:38

using these steps from the input

play17:41

pre-processing has been done on the

play17:44

pre-processing part text cleaning was

play17:46

done, stopword removal based on the local

play17:49

language was done then feature

play17:50

extraction and then optimization was

play17:53

applied to find the text summary on the

play17:56

basis of some of the models BiLSTM

play18:00

was applied, Word2Vec was applied, GPT was

play18:02

applied and we could find that the

play18:05

results are

play18:08

we could compare the models on the basis

play18:10

of the results and the new model

play18:13

Innovative models are being generated by

play18:15

the research scholars in this area

play18:18

then next topic was spelling and

play18:22

correction

play18:24

so if we are can go for two types of

play18:27

grammatically text can be corrected or

play18:31

spelling can be corrected that again

play18:32

leads to two different we are working on

play18:34

spell Checkers only for that particular

play18:37

language maybe in the future we'll be

play18:39

working on grammatical checking of the

play18:42

sentences as well so we know we have in

play18:46

English we have so good systems when we

play18:49

type of

play18:51

word and if it is wrong it gives the

play18:54

suggestions and also many systems are

play18:56

already existing in binding to all the

play18:59

particular softwares but for the local

play19:03

languages still we don't have the

play19:05

systems like which finds the text errors

play19:08

may be the typographic maybe the

play19:10

syntactic maybe the discourse there are

play19:12

non-word errors real word errors

play19:14

phonological errors

play19:16

transformationals deletion errors

play19:19

insertion errors of substitution all

play19:21

these are the type of Errors which exist

play19:23

in the text errors and you can work on

play19:26

any of this type to remove all such type

play19:28

of Errors being on the domain or outside

play19:32

that only

play19:34

then another system was to classify the

play19:38

newspaper articles into the predefined

play19:41

classes of the newspapers so that

play19:43

whenever we get an article it will

play19:46

automatically be sent to the national

play19:48

International Business Sports

play19:50

entertainment health or weather and this

play19:53

has been done on our

play19:55

national language that is Hindi for this

play19:59

we have made the different

play20:01

categorizations labels and the

play20:04

discriminative models are being prepared

play20:06

to classify the newspaper articles into

play20:10

proper particular

play20:12

labeled sections so that the newspaper

play20:17

can automatically be aligned to the

play20:20

pages according to what they are

play20:23

so for challenging task was because of

play20:27

the every language is a challenging for

play20:31

processing for every language is a

play20:33

challenging task because it has

play20:35

different different types of consonants

play20:37

vowels combinations and maybe patterns

play20:41

or different sentence structures are

play20:43

different

play20:45

so from different uh data set

play20:49

huge amount of data has been collected

play20:52

and work has been done on this I am not

play20:55

able to show you the particular

play20:59

results of these models but I'd like to

play21:03

tell you that the newspaper articles they

play21:07

will be classified into predefined

play21:09

labels another work that is going on is

play21:12

semantic analysis of the

play21:16

reviews which are collected from the

play21:19

social media and based on those reviews

play21:22

the sentiment analysis where the

play21:27

sentiment analysis models have still not

play21:30

been applied on and based on that

play21:33

opinion mining techniques are

play21:35

being proposed so that they give more

play21:38

accuracy than the existing semantic

play21:41

analysis models so that's all from my

play21:45

side thank you for the patient hearing

play21:48

thanks



Related Tags
AI Research, Machine Learning, NLP, Generative AI, Explainable AI, Deep Learning, Data Science, Language Models, Text Generation, AI Trends