Genius Machine Learning Advice for 11 Minutes Straight

Data Sensei
14 Sept 2024 · 11:00

Summary

TL;DR: The speaker emphasizes the importance of not blindly collecting more data in machine learning, suggesting that running tests and modifying the architecture can be more effective. They highlight the need for systematic debugging and the value of hands-on learning in programming. The speaker advocates for sustained effort over time, the significance of understanding one's own learning style, and the belief in the '10,000-hour rule' for expertise. They also discuss the maturation of AI, the importance of creating over consuming, and the power of deep learning despite initial skepticism. The talk concludes with advice for young people to find their passions and understand themselves, and with a defense of cost functions in machine learning.

Takeaways

  • 🔍 **Efficient Debugging**: The speaker emphasizes the importance of quickly identifying when collecting more data isn't the solution to a problem, suggesting that modifying the architecture or trying different approaches can be more effective.
  • 🚀 **Expertise in Machine Learning**: Being adept at debugging machine learning algorithms can significantly speed up the process of getting a model to work, with experts being 10x to 100x faster than others.
  • 🤔 **Systematic Problem Solving**: When faced with issues in machine learning, it's crucial to ask the right questions and systematically try different solutions like changing the architecture, adding more data, or adjusting the optimization algorithm.
  • 💡 **Learning by Doing**: The speaker advocates learning programming through hands-on experience, suggesting that one should not be afraid to get their hands dirty when solving problems.
  • 📚 **Deep Understanding Through Struggle**: Encourages spending time struggling with problems to learn more, rather than immediately seeking answers through quick searches, which can hinder the learning process.
  • 📈 **Consistent Effort Over Time**: Success in fields like programming and machine learning comes from consistent effort over a long period, not from sporadic all-nighters or bursts of work.
  • 📝 **Handwritten Notes for Retention**: The act of taking handwritten notes is recommended for better knowledge retention and understanding, as it requires recoding information in one's own words.
  • 🤖 **Practical Application of AI**: When considering the application of AI, start with a specific problem rather than a general desire to use the technology, ensuring that the solution is targeted and effective.
  • 🌟 **The Value of Creation**: The speaker promotes the idea of creation over consumption or redistribution, arguing that creating something new is more satisfying and fulfilling.
  • 🧠 **Long-term Learning and Passion**: Encourages focusing on long-term learning and finding one's true passions, which can lead to a deeper understanding and more significant contributions in a field.

Q & A

  • Why do some engineers spend six months on a direction that might not be fruitful?

    -Some engineers spend six months on a direction like collecting more data because they believe more data is always valuable. They may not realize that the problem at hand won't benefit from more data, and that a different approach, such as modifying the architecture, could be more effective.

  • How can one become proficient at debugging machine learning algorithms?

    -Becoming proficient at debugging machine learning algorithms involves systematic questioning, such as why a model isn't working and what changes could be made, like altering the architecture, adding more data, adjusting regularization, or changing the optimization algorithm.

  • What is the best way to learn programming according to the transcript?

    -The best way to learn programming is by doing it, not just watching videos. One should start a project, face challenges, and try to solve them without immediately seeking answers online, which allows for deeper learning and understanding.

  • What does 'getting your hands dirty' mean in the context of learning programming?

    -'Getting your hands dirty' means spending time deeply engaged in trying to solve problems on your own, such as debugging a network that isn't converging, without immediately turning to Google for answers.

  • Why is sustained effort over time more valuable than sporadic all-nighters?

    -Sustained effort over time is more valuable because it allows for consistent progress and learning, whereas all-nighters are not sustainable and can lead to burnout. Regular engagement with the material leads to better retention and understanding.

  • How does the concept of the '10,000 hours' theory apply to becoming an expert in a field?

    -The '10,000 hours' theory suggests that spending a significant amount of time (10,000 hours) deliberately practicing and working on something can lead to expertise. It emphasizes the importance of consistent effort and time investment over natural talent.

  • What is the benefit of taking handwritten notes while studying?

    -Taking handwritten notes increases retention by forcing you to recode the knowledge in your own words, which promotes long-term retention and understanding.

  • Why should one consider creating over consuming or redistributing?

    -Creating is more satisfying and fulfilling than consuming or redistributing. It allows for the development of new ideas and innovations, which can have a more significant and lasting impact.

  • What advice is given for those interested in a career in AI?

    -For those interested in a career in AI, the advice is to start with a clear goal of what you want to achieve with AI, such as creating a machine that can perform a task not currently possible, and then work backward to identify the steps and research needed to achieve it.

  • How has the perception of large neural networks changed over time?

    -Initially, large neural networks were underestimated and not believed to be trainable. However, with the availability of large amounts of supervised data and computational power, along with the conviction that they could work, they have become successful and are now a cornerstone of deep learning.

  • Why are cost functions important in machine learning?

    -Cost functions are important in machine learning because they provide a measurable way to assess the performance of a system. They allow for the optimization of models by minimizing or maximizing a specific measure, which is crucial for training and improving machine learning algorithms.

Outlines

00:00

🔍 The Importance of Debugging in Machine Learning

The speaker emphasizes the importance of debugging in machine learning, suggesting that engineers often waste time collecting more data without testing whether it will be beneficial. They argue for a more iterative approach, adjusting the architecture or trying different strategies rather than blindly collecting data. The speaker also highlights the value of those who are skilled at debugging machine learning algorithms, as they can be significantly faster at getting systems to work. They stress the importance of asking the right questions to avoid going down unproductive paths and suggest that learning programming is an active process that involves hands-on experience and persistence, rather than passive observation.

05:00

💡 Pursuing Passion and Understanding Oneself for Success

The speaker advises young individuals to discover their true passions by exploring various fields and to understand their own optimal working methods and personal strengths. They advocate for a deep understanding of foundational subjects like quantum mechanics and statistical physics, which have broad applications. The speaker also discusses the underestimated potential of deep learning before its success, highlighting the need for data, computational power, and conviction in the approach. They suggest reimplementing concepts at different levels of abstraction to deepen understanding and encourage setting ambitious goals in AI research, focusing on real-world applications rather than just improving on existing metrics.

10:02

🚀 Embracing Creation and Learning from Experience

The speaker shares personal advice on success, including the importance of self-belief, independent thinking, and risk-taking. They stress the value of hard work, boldness, and building a network. The speaker also reflects on their own approach to success, which involved ignoring advice and learning from their own experiences. They caution against blindly following advice from others, suggesting that individual paths to success can vary greatly.

Keywords

💡Machine Learning

Machine learning is a subset of artificial intelligence that provides systems the ability to learn and improve from experience without being explicitly programmed. In the video, the speaker emphasizes the importance of understanding machine learning algorithms and the need for debugging skills to improve their performance. The script mentions that experts in debugging machine learning algorithms can be significantly more efficient in making systems work, highlighting the practical application of machine learning in solving problems.

💡Data Collection

Data collection refers to the process of gathering and measuring information from various sources to gain insights. The video script points out a common pitfall where engineers might spend excessive time collecting more data under the assumption that 'more data is valuable,' without first testing whether additional data will actually improve the model. This illustrates the importance of strategic data collection in the context of machine learning.
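The "run some tests first" idea can be made concrete with a learning curve: a sketch of our own (not from the video) that fits a model on growing subsets of synthetic data and watches validation error. All names, sizes, and data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)   # linear signal plus noise

X_tr, y_tr = X[:1500], y[:1500]
X_va, y_va = X[1500:], y[1500:]

def val_mse(m):
    """Fit on the first m training points, return validation MSE."""
    w = np.linalg.lstsq(X_tr[:m], y_tr[:m], rcond=None)[0]
    return float(np.mean((X_va @ w - y_va) ** 2))

sizes = [50, 200, 800, 1500]
curve = [val_mse(m) for m in sizes]
print(curve)
```

If the last points of the curve are nearly equal, the model has stopped benefiting from more data of this kind, and changing the architecture or features is the better bet.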

💡Debugging

Debugging is the process of identifying and removing bugs or errors from a system. The speaker in the video underscores the value of being adept at debugging in the field of machine learning, suggesting that those who are skilled at it can be 10x to 100x more efficient. Debugging is crucial for refining machine learning algorithms and ensuring they perform as expected.
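The systematic debugging the speaker credits with 10x to 100x speedups can be miniaturized as an ablation loop: enumerate a few candidate fixes, train a small model for each, and compare validation loss instead of guessing. This is an illustrative sketch of ours, using a toy logistic-regression "network" and made-up knob values.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # labels from a linear rule
X_tr, y_tr = X[:300], y[:300]
X_va, y_va = X[300:], y[300:]

def val_loss(lr, l2, steps=300):
    """Train logistic regression with the given knobs; return validation log-loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X_tr @ w))
        grad = X_tr.T @ (p - y_tr) / len(y_tr) + l2 * w
        w -= lr * grad
    p_va = 1.0 / (1.0 + np.exp(-X_va @ w))
    eps = 1e-9
    return float(-np.mean(y_va * np.log(p_va + eps)
                          + (1 - y_va) * np.log(1 - p_va + eps)))

# Candidate fixes: different learning rates and regularization strengths.
results = {(lr, l2): val_loss(lr, l2)
           for lr, l2 in itertools.product([0.01, 0.1, 1.0], [0.0, 0.01])}
best = min(results, key=results.get)
print(best, results[best])
```

The same loop extends to architecture changes or data variants; the point is that each hypothesis gets a cheap, measured test before months are committed to it.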

💡Architecture

In the context of machine learning, architecture refers to the structure or design of a model, including the arrangement of layers and connections in a neural network. The script suggests modifying the architecture as a potential solution when a machine learning model is not performing well, indicating that changing the underlying structure can sometimes be more effective than simply collecting more data.

💡Optimization Algorithms

Optimization algorithms are used in machine learning to improve the performance of models by adjusting parameters. The video mentions trying different optimization algorithms as a strategy for troubleshooting and enhancing machine learning models, emphasizing the role of these algorithms in the iterative process of model development.
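As a toy version of the "try a different optimization algorithm" advice (our own sketch, not from the talk), the snippet below minimizes an ill-conditioned quadratic cost with plain gradient descent and with heavy-ball momentum; the hyperparameters are illustrative.

```python
import numpy as np

A = np.diag([1.0, 50.0])            # ill-conditioned quadratic: f(w) = 0.5 * w^T A w

def run(momentum, lr=0.018, steps=200):
    w = np.array([1.0, 1.0])
    v = np.zeros(2)
    for _ in range(steps):
        g = A @ w                   # gradient of f
        v = momentum * v - lr * g   # momentum = 0 gives plain gradient descent
        w = w + v
    return float(0.5 * w @ A @ w)   # final cost value

plain = run(momentum=0.0)
heavy = run(momentum=0.9)
print(plain, heavy)
```

On the same budget of steps, the momentum variant reaches a much lower cost, which is exactly the kind of cheap comparison the speaker recommends making before committing to one training setup.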

💡Regularization

Regularization is a technique used to prevent overfitting in machine learning models by adding a penalty term to the loss function. The script briefly mentions regularization as one of the potential changes to consider when a machine learning model is not working as expected, highlighting its role in balancing model complexity and performance.
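A minimal sketch of the penalty-term idea, assuming a mean-squared-error loss: the L2 penalty lam * ||w||^2 adds a 2 * lam * w term to the gradient and shrinks the learned weights. The data and constants below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 20))
y = X[:, 0] + 0.5 * rng.normal(size=100)    # only feature 0 matters

def fit(lam, lr=0.05, steps=500):
    """Gradient descent on mean((Xw - y)^2) + lam * ||w||^2."""
    w = np.zeros(20)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

norm_free = float(np.linalg.norm(fit(lam=0.0)))
norm_reg = float(np.linalg.norm(fit(lam=1.0)))
print(norm_free, norm_reg)
```

The penalized solution has a smaller norm: regularization trades a little training fit for weights that are less tuned to noise.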

💡Programming

Programming is the process of writing sequences of instructions to direct a computer to perform specific tasks. The video transcript discusses the importance of learning programming as a foundational skill, with the speaker advocating for hands-on practice and problem-solving as the primary means of learning, rather than passively watching videos or tutorials.

💡10,000 Hours Rule

The 10,000 Hours Rule, popularized by Malcolm Gladwell, suggests that it takes approximately 10,000 hours of practice to achieve mastery in a field. The speaker in the video subscribes to this concept, arguing that deliberate and sustained effort over time is key to becoming an expert in any area, including machine learning and programming.

💡Cost Functions

Cost functions, also known as loss functions or objective functions, are used in machine learning to measure the performance of a model. The video script describes cost functions as a way to quantify how well a model is performing, with the speaker advocating for their use in guiding the development and refinement of machine learning models.
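The definition above can be shown in one worked example of our own: define a measurable cost and let gradient descent drive it down. A one-parameter model y = a * x with a mean-squared-error cost is about the smallest possible instance.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x                          # data generated by y = a*x with a = 2

def cost(a):
    """Mean squared error of the one-parameter model y_hat = a*x."""
    return float(np.mean((a * x - y) ** 2))

a = 0.0
for _ in range(100):
    grad = np.mean(2 * (a * x - y) * x)   # d(cost)/da
    a -= 0.05 * grad
print(a, cost(a))
```

Gradient descent recovers a = 2, the minimizer of the cost; everything about training larger models is this loop with more parameters.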

💡Deep Learning

Deep learning is a subset of machine learning that focuses on artificial neural networks with many layers, or 'deep' architectures. The transcript discusses deep learning's initial underestimation and the critical role of large amounts of data and computational power in its success. The speaker reflects on the importance of conviction in the potential of deep learning to achieve results.

💡Reinforcement Learning (RL)

Reinforcement learning is a type of machine learning where an agent learns to make decisions by taking actions in an environment to maximize some notion of cumulative reward. The video mentions reimplementing RL from scratch as a method for deeply understanding the mechanics and principles behind it, indicating the value of hands-on learning in mastering complex concepts.
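In the spirit of "reimplement it from scratch," here is a hypothetical minimal tabular Q-learning agent on a five-state chain where stepping right from the last state pays reward 1. Every constant in it is an illustrative choice of ours, not something from the video.

```python
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(3)
alpha, gamma, eps = 0.5, 0.9, 0.3   # learning rate, discount, exploration

for _ in range(2000):
    s = int(rng.integers(n_states))          # random start state each episode
    for _ in range(20):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if (s == n_states - 1 and a == 1) else 0.0
        # Q-learning update: bootstrap off the best action in the next state
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

policy = np.argmax(Q, axis=1)
print(policy)
```

After enough episodes the greedy policy steps right in every state, which is the optimal behavior on this chain; building even a toy like this makes the update rule concrete in a way reading a description does not.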

Highlights

Engineers sometimes waste time pursuing unproductive directions like collecting more data, which could be better spent on modifying the architecture or trying new approaches.

Debugging machine learning algorithms effectively can lead to significant speed improvements, often by an order of magnitude.

The importance of asking the right questions when debugging, such as whether a model will eventually work and what changes could be tried.

Learning programming is best achieved through hands-on experience rather than just watching videos or tutorials.

The value of getting your hands dirty in programming: deeply engaging with problems and finding solutions independently.

The importance of sustained effort over time in learning and mastering a skill, such as reading research papers consistently.

The concept of the 10,000-hour rule in becoming an expert in a field through deliberate practice and effort.

The benefits of taking handwritten notes for better knowledge retention and understanding.

The humorous suggestion that machine learning should not be applied to macaroni and cheese production without a clear problem statement.

The advice to always choose creation over consumption or redistribution when given a choice, as it is more satisfying and impactful.

The maturity of the AI field and the growing responsibility to consider the impact of AI systems before releasing them.

The importance of learning foundational concepts with a long shelf life, such as quantum mechanics, over more transient skills.

Encouragement for the next generation to find their true passions and understand their optimal working methods.

The underestimated potential of deep learning before its success, highlighting the importance of conviction in the approach.

The suggestion to reimplement concepts at different levels of abstraction to deepen understanding.

The counterintuitive success of training large neural networks with relatively small amounts of data, defying traditional machine learning wisdom.

The role of cost functions in machine learning and their importance in measuring system performance.

Advice on how to be successful, including having self-belief, learning to think independently, and building a network.

Transcripts

00:00

I find that, even today, unfortunately, there are engineers who will spend six months pursuing a particular direction, such as collecting more data, because we heard more data is valuable. But sometimes you could have run some tests and figured out six months earlier that, for this particular problem, collecting more data isn't going to cut it. So don't spend six months collecting more data; spend your time modifying the architecture or trying something else. It's an evolving discipline, but I find that the people who are really good at debugging machine learning algorithms are easily 10x, maybe 100x, faster at getting something to work. The questions are often: why doesn't it work yet, or can I expect it to eventually work, and what are the things I could try: change the architecture, more data, more regularization, a different optimization algorithm, different types of data? Answer those questions systematically so you don't spend six months heading down a blind alley before someone comes along and asks why you spent six months doing this.

00:53

You're never going to learn programming by watching a video. The only way to learn programming, I think, and everyone I've ever met who can program well learned it the same way: they had something they wanted to do, and they tried to do it, and they thought, "okay, it would be nice if the computer could do this thing." That's how you learn: you just keep pushing on a project. So the only advice I have for learning programming is: go program. Don't be afraid to get your hands dirty.

01:23

I think that's the main thing: if something doesn't work, really drill into why it isn't working. Asked to elaborate on what "getting your hands dirty" means: for example, if you try to train a network and it's not converging, rather than trying to Google the answer, really spend those 5, 8, 10, 15, 20, whatever number of hours trying to figure it out yourself, because in that process you'll actually learn a lot more. Googling is of course a good way to solve it when you need a quick answer, but initially, especially when you're starting out, it's much nicer to figure things out by yourself. I say that from experience, because when I started out there were not a lot of resources, so in the lab a lot of us would look to the senior students, and the senior students, final-year PhD students, were of course busy and would say, "hey, why don't you go figure it out, because I just don't have the time; I'm working on my dissertation." So we would sit down and just try to figure it out, and I think that really helped me.

02:13

It's often not about the burst of effort and the all-nighters, because you can only do that a limited number of times; it's the sustained effort over a long time. Reading two research papers is a nice thing to do, but the power is not in reading two research papers; it's in reading two research papers a week for a year. Then you've read a hundred papers, and you actually learn a lot when you read a hundred papers.

02:34

Beginners are often focused on what to do, and I think the focus should be more on how much you do. I'm a believer, on a high level, in this 10,000-hours concept: you just have to pick the things you can spend time on, that you care about and are interested in, and literally put in 10,000 hours of work. It doesn't even matter as much where you put it; you'll iterate, you'll improve, and you'll waste some time, but I think it's actually really nice, because there's some sense of determinism about being an expert at a thing. You can literally pick an arbitrary thing, and if you spend 10,000 hours of deliberate effort and work on it, you actually will become an expert at it.

03:09

One thing I still do when I'm trying to study something really deeply is take handwritten notes. We know that the act of taking notes, preferably handwritten notes, increases retention. Taking handwritten notes causes you to recode the knowledge in your own words, and that process of recoding promotes long-term retention.

03:27

"I heard machine learning is important; could you help integrate machine learning with macaroni and cheese production?" You can't help these people. Who lets that kind of person run anything? My problem is not that they don't know about machine learning; my problem is that they think machine learning has something to say about macaroni and cheese production. "I heard about this new technology; how can I use it?" Why not at least start with "tell me about a problem"? If you have a problem, like "some of my boxes aren't getting enough macaroni in them; can we apply machine learning to this problem?", that's much, much better than "how do I apply machine learning to macaroni and cheese?"

04:04

"You tweeted: when you have the choice between being a creator, a consumer, or a redistributor, always go for creation." When you have the choice to create something, always go for creation. It's so much more satisfying, and it's also what life is about.

04:17

I think the field of AI has been in a state of childhood, and now it's exiting that state and entering a state of maturity. What that means is that AI is very successful and also very impactful, and its impact is not only large but growing. For that reason, it seems wise to start thinking about the impact of our systems before releasing them, maybe a little bit too soon rather than a little bit too late.

04:40

Try to get interested in big questions: what is intelligence, what is the universe made of, what is life all about, even crazy big questions like "what is time?" Nobody knows what time is. Then learn basic things, basic methods from math, physics, or engineering, things that have a long shelf life. If you have a choice between learning mobile programming on the iPhone or quantum mechanics, take quantum mechanics, because you're going to learn things you had no idea existed. You may never be a quantum physicist, but you'll learn about path integrals, and path integrals are used everywhere. Learn statistical physics, because all the math that comes out for machine learning basically comes out of statistical physics in the late 19th and early 20th century.

05:22

I love giving talks to the next generation, and what I say to them is actually two things; these are the most important things to learn and find out about when you're young. One is: find your true passions, and I think the way to do that is to explore as many things as possible when you're young and can take those risks. I would also encourage people to look for the connections between things in a unique way; I think that's a really great way to find a passion. The second piece of advice is: know yourself. Spend a lot of time understanding how you work best: what are the optimal times to work, what are the optimal ways you study, how do you deal with pressure? Test yourself in various scenarios and try to improve your weaknesses, but also find out what your unique skills and strengths are, and then hone those, because that will be your super value in the world later on.

06:13

The key fact about deep learning, before deep learning started to be successful, is that it was underestimated. People didn't believe that large neural networks could be trained. The ideas were all there; the thing that was missing was a lot of supervised data and a lot of compute. Once you have a lot of supervised data and a lot of compute, there is a third thing which is needed as well, and that is conviction: conviction that if you take the right stuff, which already exists, and mix it with a lot of data and a lot of compute, it will in fact work. That was the missing piece: you needed the data, you needed the compute, which showed up in the form of GPUs, and you needed the conviction to realize that you had to mix them together.

06:55

I would say: reimplement everything, on different levels of abstraction in some sense. Reimplement something from scratch, reimplement something from a paper, reimplement something from a podcast you've heard about. That's a powerful way to understand things. It's often the case that you read the description and you think you understand, but you truly understand once you build it; then you actually know what was really meant in the description.

07:16

If a student considering a career in AI takes a little while, sits down, and thinks, "what do I really want to see? What do I want to see a machine do? What do I want to see a natural language system do?", and then actually thinks about the steps necessary to get there, and hopefully that thing is not a better number on ImageNet classification but an actual thing we can't do today that would be really awesome, then thinking about that, backtracking from there, and imagining the steps needed to get there will lead to much better research. It will lead to working on the bottlenecks that other people aren't working on.

07:47

Deep learning has been looked at with suspicion by a lot of computer scientists because the math is very different. The math you use for deep learning has more to do with cybernetics, the kind of math you do in electrical engineering, than the kind of math you do in computer science, and nothing in machine learning is exact. Computer science is all about obsessive attention to detail: every index has to be right, and you can prove that an algorithm is correct. Machine learning is the science of sloppiness. And so the big idea is the cost function. The cost function is a way of measuring the performance of the system according to some measure.

08:23

I'm a big fan of cost functions. I think cost functions are great and they serve us really well, and whenever we can do things with cost functions, we should. Maybe there is a chance that we will come up with yet another profound way of looking at things that involves cost functions in a less central way, but I don't know; I would not bet against cost functions.

08:45

The fact that you can build gigantic neural nets, train them on relatively small amounts of data with stochastic gradient descent, and have it actually work breaks everything you read in every textbook: every pre-deep-learning textbook told you that you need fewer parameters than data samples. All those things the textbooks tell you to stay away from were wrong. It was kind of obvious to me, before I knew anything, that this was a good idea, and then it became surprising that it worked, because I started reading those textbooks.

09:17

Asked for the intuition of why it was obvious: it's sort of like those people in the late 19th century who proved that heavier-than-air flight was impossible. Of course you have birds; they do fly. We have the same kind of thing: we know that the brain works, we don't know how, but we know it works, and we know it's a large network of neurons in interaction, and that learning takes place by changing the connections. So the idea is to take that level of inspiration, without copying the details, and try to derive basic principles. There's also the idea, which I've been convinced of since I was an undergrad, that intelligence is inseparable from learning. The idea that you could create an intelligent machine by basically programming it was, for me, a nonstarter from the start: every intelligent entity that we know about arrives at its intelligence through learning.

10:07

"You wrote a blog post a few years ago titled 'How to Be Successful.' It's so succinct and so brilliant: compound yourself, have almost too much self-belief, learn to think independently, get good at 'sales,' make it easy to take risks, focus, work hard, be bold, be willful, be hard to compete with, build a network, you get rich by owning things, be internally driven. What stands out to you from that, or beyond, as advice you can give?" Yeah, I think it is good advice in some sense, but I also think it's way too tempting to take advice from other people, and the stuff that worked for me, which I tried to write down there, may not work as well for other people. I think I mostly got what I wanted by ignoring advice, and I tell people not to listen to too much advice. Listening to advice from other people should be approached with great caution.


Related tags: Artificial Intelligence, Machine Learning, Debugging Tips, Data Collection, Algorithm Optimization, Programming Advice, Learning Strategies, Expertise Building, AI Impact, Innovation Process