XAI-SA 2024, Opening talk

XAI-SA Workshop
24 Apr 2024 · 06:36

Summary

TL;DR: This workshop delves into explainable machine learning and AI, addressing the shift from inherently interpretable models to the black-box nature of deep learning. It aims to make these models transparent through post hoc interpretation methods applied after training or by designing inherently interpretable models. The event features presentations on various topics, including the spectrum of interpretability for music machine learning, hybrid deep neural audio processing, and the challenges of understanding AI models. With 20 paper presentations, oral talks, and a panel discussion, the workshop fosters collaboration and innovative ideas in the field of explainable AI.

Takeaways

  • πŸ“˜ The workshop is focused on explainable machine learning and AI, which has become a prominent field due to the rise of deep learning and the resulting black box models.
  • πŸ” Explainable machine learning aims to make these complex models transparent, allowing for better understanding of their decision-making processes.
  • πŸ› οΈ There are various solutions to achieve explainability, including post hoc interpretation methods and designing inherently interpretable models from the start.
  • πŸ“ˆ The trade-offs between these methods will be explored during the workshop, with presentations and discussions on the latest research and applications.
  • πŸŽ“ The workshop features invited talks from experts in the field, including researchers from the speech and audio community and a renowned machine learning expert.
  • πŸ“š There will be 20 paper presentations, with four oral presentations and 16 poster presentations, covering a range of topics from model interpretation to application-specific uses.
  • 🎼 Specific applications discussed include music machine learning, speech and audio processing, and analysis of deep learning models for various purposes.
  • πŸ—“οΈ The workshop schedule includes a series of talks, paper presentations, and a panel discussion aimed at fostering collaboration and generating new ideas.
  • πŸ“Ή The event is being recorded, and there is a YouTube channel for the workshop where invited talks and author videos will be posted.
  • πŸ‘₯ The organizers and reviewers are acknowledged for their efforts in ensuring a thorough review process, with each paper receiving at least three reviews.
  • πŸ“ Authors are encouraged to submit videos for their papers to the workshop's email for inclusion on the YouTube channel.

Q & A

  • What is the main focus of the workshop described in the transcript?

    -The main focus of the workshop is on explainable machine learning and explainable AI, particularly in the context of deep learning models that are often considered black boxes.

  • Why has explainable machine learning become more important in recent years?

    -Explainable machine learning has become more important due to the rise of deep learning, which has led to the creation of complex models that are not easily interpretable, thus making it a necessity to make these 'black box' models more transparent.

  • What are the two main approaches to making machine learning models more interpretable as mentioned in the transcript?

    -The two main approaches are: 1) using post hoc interpretation methods where the original model is not altered but an attempt is made to understand its workings after training, and 2) designing an interpretable model from the start.

  • What is the trade-off that needs to be explored when choosing between the two approaches to explainability?

    -The trade-off involves balancing the model's interpretability with its performance. Sometimes, more interpretable models may not perform as well as complex, less interpretable models.

  • What is the goal of the workshop in terms of the participants?

    -The goal of the workshop is to bring together people working in the field of explainable AI to foster collaborations and come up with innovative ideas.

  • How many paper presentations are planned for the workshop?

    -There are 20 paper presentations planned for the workshop, including four oral presentations.

  • What types of topics can be expected in the paper presentations and invited talks?

    -The topics cover a range of areas, including interpretable models, applications of post hoc explanation methods to various speech and audio tasks, analysis of large language models (LLMs) for music, and methodological approaches to explainable AI.

  • Who are some of the invited speakers mentioned in the transcript, and what are their areas of expertise?

    -Some of the invited speakers include Ethan, who will talk about a spectrum of interpretability for music machine learning; Cynthia Rudin from Duke University, who will discuss interpretable models versus post hoc interpretations; Gaël Richard from Télécom Paris, who will cover hybrid and interpretable deep neural audio processing; and Gordon Wichern from MERL, who will discuss investigations into probing and training data memorization of audio generative models.

  • What is the schedule for the poster presentations at the workshop?

    -The first poster session is at 10 AM, and presenters are asked to set up their posters by that time. All participants will present in all the sessions.

  • What is the role of the reviewers in the workshop, and how many reviews were written in total?

    -The reviewers played a crucial role in the evaluation process, ensuring that each paper received at least three reviews, and in most cases, four. A total of 66 reviews were written for the workshop.

  • How can authors share their presentations or videos related to the workshop?

    -Authors can share their videos by sending them to the workshop's email, and these will be featured on the workshop's YouTube channel, along with the invited talks.

Outlines

00:00

πŸ“š Introduction to Explainable AI Workshop

The speaker begins by welcoming attendees to the workshop on explainable machine learning and AI, emphasizing the importance of making complex models more transparent. The session will cover various methods for interpreting machine learning models, including both post-hoc interpretation techniques and inherently interpretable models. The workshop aims to foster collaboration and generate new ideas, with presentations from invited speakers and contributions from various researchers. Attendees will explore the balance between model accuracy and interpretability, with examples provided throughout the day.

05:00

🎀 Workshop Structure and Presentations

The speaker outlines the workshop’s structure, mentioning the number of paper presentations, oral talks, and poster sessions. Various topics will be covered, including interpretable models, explainable AI methodologies, and specific applications in speech and audio. Invited speakers from prominent institutions will present on key areas such as music machine learning, neural audio processing, and AI model interpretability. The session will conclude with a panel discussion, encouraging audience interaction. Additionally, the speaker mentions logistical details, including the setup of posters, the role of reviewers, and the workshop’s YouTube channel for accessing recorded content.

Keywords

πŸ’‘Explainable Machine Learning

Explainable Machine Learning refers to the field of study that aims to make the decision-making processes of machine learning models understandable to humans. In the context of the video, it is highlighted as a response to the 'black box' nature of deep learning models, which can be difficult to interpret. The script discusses various methods to make these models transparent, such as post hoc interpretation methods and designing inherently interpretable models.

πŸ’‘Deep Learning

Deep Learning is a subset of machine learning that uses artificial neural networks with multiple layers to model and understand complex patterns. In the script, it is mentioned that the prevalence of deep learning has led to an increase in the use of black box models, which are powerful but lack transparency, thus necessitating the development of explainable machine learning techniques.

πŸ’‘Black Box Models

Black Box Models are systems or algorithms that are not easily understood or interpretable by humans. The term is used in the script to describe models that perform well but whose internal workings are opaque, which is a challenge for fields requiring transparency and trust in AI decisions.

πŸ’‘Post Hoc Interpretation

Post Hoc Interpretation refers to methods applied after a machine learning model has been trained to understand its decision-making process. In the script, this concept is brought up as one of the solutions to make black box models more transparent, without altering the original model's architecture.
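
As a concrete, deliberately simplified illustration of this idea, the sketch below implements occlusion-based saliency, one common post hoc technique: patches of a spectrogram are masked and the drop in the model's score is recorded. The toy_score function is a made-up stand-in for a trained model, not any system discussed in the video.

```python
import numpy as np

def occlusion_saliency(spec, score_fn, patch=(8, 8)):
    """Post hoc saliency: mask time-frequency patches and record how much
    the score drops when each patch is occluded."""
    base = score_fn(spec)
    saliency = np.zeros_like(spec)
    n_freq, n_time = spec.shape
    for f0 in range(0, n_freq, patch[0]):
        for t0 in range(0, n_time, patch[1]):
            occluded = spec.copy()
            occluded[f0:f0 + patch[0], t0:t0 + patch[1]] = spec.mean()
            # A large score drop means the masked region mattered to the model.
            saliency[f0:f0 + patch[0], t0:t0 + patch[1]] = base - score_fn(occluded)
    return saliency

# Toy stand-in "model" (an assumption for this sketch): it scores the energy
# in a mid-frequency band, so that band should light up in the saliency map.
def toy_score(spec):
    return spec[24:40, :].mean()

rng = np.random.default_rng(0)
spectrogram = rng.random((64, 128))            # fake magnitude spectrogram
heatmap = occlusion_saliency(spectrogram, toy_score)
print(heatmap.shape)                           # (64, 128), aligned with the input
```

Note that the original model is never modified; the explanation is built purely from its inputs and outputs, which is the defining property of post hoc methods.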

πŸ’‘Interpretable Models

Interpretable Models are designed to be inherently understandable, with clear relationships between inputs and outputs. The script mentions the trade-off between model performance and interpretability, and how one approach to explainable AI is to design models that are interpretable from the outset.
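
For contrast, here is a minimal sketch of an inherently interpretable model: a shallow decision tree over two hand-crafted (and here synthetic) features, whose entire decision logic can be printed as threshold rules. The feature names are illustrative assumptions, not taken from any paper at the workshop.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Synthetic stand-ins for hand-crafted audio features (names are illustrative only).
X = rng.random((200, 2))                  # columns: "spectral_centroid", "rms_energy"
y = (X[:, 0] > 0.5).astype(int)           # label depends only on the first feature

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The whole decision process is readable as a handful of threshold rules.
print(export_text(tree, feature_names=["spectral_centroid", "rms_energy"]))
```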

πŸ’‘Spectrogram

A Spectrogram is a visual representation of the spectrum of frequencies of a signal as it varies with time. In the context of the video, it is used as an example to illustrate how explainable AI can focus on specific areas of a spectrogram, such as a 'chirp', to show where the model is concentrating its analysis.
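
The snippet below, a small sketch using SciPy, generates the kind of input described in the talk: a chirp (a tone sweeping upward in frequency) turned into a spectrogram. An explanation method would then be expected to highlight the rising ridge that the chirp traces.

```python
import numpy as np
from scipy.signal import chirp, spectrogram

fs = 16_000                                        # sample rate in Hz
t = np.linspace(0, 2.0, 2 * fs, endpoint=False)    # 2 seconds of samples
x = chirp(t, f0=200.0, f1=4000.0, t1=2.0, method="linear")  # 200 Hz -> 4 kHz sweep

freqs, frames, Sxx = spectrogram(x, fs=fs, nperseg=512, noverlap=256)
print(Sxx.shape)   # (frequency bins, time frames); the chirp shows up as a rising ridge
```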

πŸ’‘Invited Talks

Invited Talks are presentations given by experts invited to share their knowledge on a specific topic. The script outlines the schedule of the workshop, which includes several invited talks from renowned researchers in the field of explainable AI, covering various aspects of the subject.

πŸ’‘Oral Presentations

Oral Presentations are a form of academic communication where researchers present their work verbally, often accompanied by visual aids. The script mentions that there will be four oral presentations during the workshop, which are part of the 20 paper presentations.

πŸ’‘Poster Presentations

Poster Presentations are a method of presenting research where the presenter displays information on a board and discusses it with interested attendees. The script specifies that all participants will be doing poster presentations, indicating a common format for sharing research at academic conferences.

πŸ’‘Panel Discussion

A Panel Discussion is an interactive session where a group of experts discuss a topic, often answering questions from the audience. The script mentions a panel discussion with invited panelists, aiming to foster interaction and generate ideas related to explainable AI.

πŸ’‘YouTube Channel

The script mentions the workshop's YouTube channel, which will host the recorded invited talks and possibly the authors' videos. This indicates the workshop's intention to share the knowledge and discussions beyond the physical event, reaching a broader audience online.

Highlights

The workshop focuses on explainable machine learning and AI, addressing the shift from inherently interpretable models to deep learning black box models and the resulting need for transparency.

Explainable machine learning aims to make black box models transparent through post-training interpretation methods or by designing interpretable models from the start.

There is a trade-off between model accuracy and interpretability that will be explored in the workshop presentations.

The workshop will feature 20 paper presentations, including four oral presentations and a poster session, covering various aspects of explainable AI.

Submissions include research on interpretable models, post hoc explanation methods, and applications in speech and audio processing.

Invited speakers from the speech and audio community and a renowned machine learning researcher will provide talks on various topics related to explainable AI.

Ethan will discuss the spectrum of interpretability for music machine learning, emphasizing the importance of understanding model focus areas.

Cynthia Rudin from Duke University will compare interpretable models with post hoc interpretations, highlighting the challenges and benefits of each approach.

Professor Gaël Richard of Télécom Paris will present on hybrid and interpretable deep neural audio processing, exploring new methods in audio analysis.

Gordon Wichern from MERL will delve into investigations into probing and training data memorization of audio generative models, a critical issue in AI ethics.

Professor Shinji Watanabe will discuss explainable speech foundation models, aiming to improve transparency in speech recognition systems.

Dr. Jayneel Parekh will present on using non-negative matrix factorization (NMF) for interpretable audio classification, showcasing a novel approach to making AI decisions clearer (a minimal sketch of the NMF idea follows these highlights).

The workshop will conclude with a panel discussion featuring the invited speakers, encouraging interaction and idea generation among participants.

The schedule and details are available on the workshop website, with specific instructions for poster presenters on setup times and session formats.

The workshop has a YouTube channel where all invited talks and some author submissions will be uploaded for wider access.

The workshop organizers and reviewers are acknowledged for their efforts in ensuring a rigorous review process with 66 reviews in total.

The workshop is being recorded, and participants are reminded to be aware of the camera for a professional presentation.
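
To make the NMF highlight above concrete, here is a minimal sketch, assuming scikit-learn and a stand-in magnitude spectrogram rather than the presenter's actual pipeline: the spectrogram is factored into a few spectral templates and their activations over time.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
V = np.abs(rng.standard_normal((257, 400)))     # stand-in magnitude spectrogram (freq x time)

# Factor V ~= W @ H: W holds spectral templates, H their activations over time.
nmf = NMF(n_components=8, init="nndsvda", max_iter=400, random_state=0)
W = nmf.fit_transform(V)                        # (257, 8) spectral templates
H = nmf.components_                             # (8, 400) activations

# A classifier built on H (or per-component energies) lets each decision be
# traced back to a spectral template in W, which can be resynthesized and heard.
print(W.shape, H.shape)
```

The design choice is that interpretability comes from the factorization itself: every component is a non-negative spectral pattern a listener can inspect, rather than an opaque learned embedding.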

Transcripts

00:01

Right, I think we'll get started. Thanks a lot, everyone, for being here at 8:30 in the morning. Today is going to be about explainable machine learning and explainable AI; we use the words machine learning and AI interchangeably in the title.

00:22

So, what is explainable machine learning? A few years ago, before deep learning happened, this wasn't really a field, because models were interpretable. But with everything being deep learning now, the chances are that we have black-box models in what we do. Basically, with explainable machine learning, what we try to do is somehow make these black boxes transparent. There are different solutions, as we will see today: people will talk about post hoc interpretation methods, where you don't touch the original model but do something after training to understand what is going on inside it, or you could just design an interpretable model to begin with. There is a trade-off to explore, and we will see this in the invited talks today.

01:23

There will also be quite a few papers. Here, for instance, I am showing what happens if you have a spectrogram input: the explainer focuses on a particular area, in this case a chirp. Maybe you cannot see it because of the colors, but with explanation methods we are able to show where the model is focusing.

01:52

Explainable AI is a developing field, and this workshop's goal is to bring together people who are working in it, foster collaborations, and hopefully come up with some nice ideas by the end of the day. Today we will have 20 paper presentations: four oral presentations and, in total, 20 posters, so everybody will do a poster presentation. I'll talk more about the details later, but we had submissions on interpretable models, on applications of different post hoc explanation methods to various speech and audio tasks, some papers on the analysis of LLMs for music, and some methodological explainable AI papers. We also have invited speakers from the speech and audio (ICASSP) community, as well as a methodological machine learning talk from a renowned explainable AI researcher.

play02:59

uh um so with that uh we will start

play03:03

today with Ethan we there uh he will

play03:06

talk about um spectrum of

play03:09

interpretability for music machine

play03:11

learning then we will have Cynthia Rudin

play03:14

Professor C Cynthia Ruden from Duke

play03:15

University she will talk about um

play03:20

interpretable models versus uh uh you

play03:22

know post interpretations it will be

play03:24

right after uh Ethan's talk then we will

play03:27

have Professor G rishar from uh

play03:30

telec comp part um it will be about

play03:34

hybrid and interpretable deep neural

play03:35

audio processing it will be after the

play03:38

oral talks 11 15 then we will have

play03:42

Gordon wiin from uh from mer there uh so

play03:46

it will be about uh basically

play03:49

understanding reg tating investigations

play03:52

into probing and training data

play03:54

memorization of AIO Genera models we had

play03:57

one uh then we have Professor Shinji

play04:00

vatan there it's about uh the talk is

play04:04

about toward explainable speech

play04:06

Foundation

play04:07

models and then finally we will have Dr

play04:10

Janel parik on using nmf uh for

play04:14

interpretable audio classification this

play04:16

will be the last talk and in the end at

play04:19

the end of today we will have a panel

play04:21

discussion with these wonderful

play04:23

panelists and hopefully you know it will

play04:25

be an interactive panel so that you know

play04:28

uh you'll ask questions and and we'll

play04:30

come up with ideas hopefully um so let

play04:33

me exp so this is the

play04:35

schedule uh you can find it on the

play04:38

website uh but for the poster presenters

play04:42

basically so the first poster session is

play04:44

at 10 uh so we ask you to set up set up

play04:49

your

play04:49

poster uh at 10 um and basically

play04:54

everybody will present in all the

play04:56

sessions okay uh we'll probably remove

play04:59

some of the

play05:00

chairs back there so that that people

play05:02

have more

play05:04

space um and the organizers so myself

play05:08

Jam suban Franchesco pan over there m

play05:12

rali who's uh not here Shang gupa bcar

play05:16

jman who's on Zoom uh and Paris I don't

play05:19

know where Paris is is like back

play05:22

on yeah um so uh these are our wonderful

play05:28

reviewers um

play05:30

basically we made we tried well each

play05:32

paper had received received at least

play05:35

three reviews and most in most cases we

play05:39

had four reviews and uh in total 66

play05:42

reviews were

play05:44

written uh we thank to reviewers for

play05:46

their

play05:47

work and one last thing we have a

play05:49

YouTube channel so for the authors uh if

play05:53

you haven't sent your video yet please

play05:55

send it to us on our Workshop

play05:58

email um

play06:00

yeah some of us some some of you already

play06:02

did that like and it's on the on the

play06:05

YouTube channel of the of the

play06:08

workshop and we will uh we will put also

play06:11

all the invited talks on the on our

play06:13

YouTube

play06:14

channel uh oh we are recording the uh

play06:18

the event and there's like a camera so

play06:20

uh make sure to smile and uh that's it

play06:24

for me and uh we will continue with uh

play06:28

Ethan so Franchesco will introduce Ethan

play06:32

but maybe yeah Ethan maybe you can come

play06:33

and you can set up everything


Related Tags
Explainable AI, Machine Learning, Expert Talks, Paper Presentations, Deep Learning, Blackbox Models, Interpretability, Audio Processing, Music Analysis, ML Workshop