Psychological Research: Crash Course Psychology #2

CrashCourse
10 Feb 2014 · 10:51

Summary

TL;DR: This script delves into the scientific method's application in psychology, emphasizing the importance of skepticism towards intuition and the need for empirical evidence. It highlights the limitations of case studies and naturalistic observations, discusses the challenges in surveys, and underscores the significance of experiments in establishing cause-and-effect relationships. The script uses relatable examples, like the effects of caffeine on problem-solving, to illustrate the scientific process in psychological research, guiding viewers through hypothesis testing and the avoidance of common biases.

Takeaways

  • πŸ• The script humorously debunks the idea that week-old pizza can cause hallucinations and that coffee makes you smarter, emphasizing that intuition is not always reliable.
  • 🧠 It highlights the dangers of relying on false intuition and the importance of scientific methods in psychology to avoid such pitfalls.
  • πŸ” The transcript explains the concept of 'Hindsight Bias' and 'Overconfidence', which can lead to incorrect assumptions about people's behavior.
  • πŸ“Š It discusses the limitations of case studies and naturalistic observations in psychological research, noting they are descriptive but not necessarily explanatory.
  • πŸ—¨ The importance of surveys and interviews in psychological research is mentioned, with a caution about the influence of question phrasing and sampling bias.
  • πŸ”— The script clarifies the difference between correlation and causation, stressing that correlation does not prove causation.
  • 🧐 It outlines the steps of the scientific method in psychology, including operationalizing questions, forming hypotheses, and the necessity of replication for validation.
  • πŸ”¬ The value of experimentation in establishing cause-and-effect relationships is underscored, detailing the setup of control and experimental groups.
  • 🀝 The process of random assignment in experiments is discussed to minimize confounding variables and ensure the reliability of results.
  • πŸ‘₯ The concept of 'double blind' procedures in research is introduced to prevent bias from influencing outcomes.
  • πŸ“ The script concludes by emphasizing the importance of clear language and defined parameters in research to allow for replication and the aggregation of data for solid conclusions.

Q & A

  • Can consuming week-old pizza lead to psychedelic hallucinations?

    -No, the script suggests that pizza won't make you trip, indicating that week-old pizza is unlikely to cause psychedelic hallucinations.

  • Does coffee make you smarter according to the script?

    -The script clarifies that coffee doesn't make you smart, implying that while it might have some effects, it doesn't enhance cognitive abilities in a significant way.

  • What is the 'Hindsight Bias' or the 'I-Knew-It-All-Along' phenomenon mentioned in the script?

    -The 'Hindsight Bias' or 'I-Knew-It-All-Along' phenomenon refers to the tendency to believe, after an event has occurred, that one would have foreseen or predicted the event. It's a cognitive bias that affects how people recall their past beliefs or predictions.

  • What is the role of overconfidence in trusting one's intuition about people and their behavior?

    -Overconfidence can lead to a false sense of certainty about one's intuition regarding people and their behavior. This can result in being very wrong about people even when one feels very certain about their judgments.

  • Why do we perceive order in random events, and what is the potential issue with this?

    -People have a natural tendency to perceive order in random events, which can lead to false assumptions. This perception of order can give meaning to random sequences, like a series of coin flips, that does not actually exist, leading to incorrect conclusions.

  • What is the purpose of scientific inquiry in psychological research?

    -Scientific inquiry in psychological research helps to circumvent the problems caused by human biases and cognitive errors. It provides a structured and reliable method to study the mind and behavior, ensuring that conclusions are based on evidence rather than intuition or false assumptions.

  • What is operationalizing a question in psychological research?

    -Operationalizing a question in psychological research involves turning a general question about a subject into a measurable and testable proposition. This process is essential for formulating hypotheses that can be empirically tested.

  • Why is replication important in psychological research?

    -Replication is important in psychological research because it ensures the reliability and validity of findings. Consistent results across different studies and contexts strengthen the evidence for a particular phenomenon or effect.

  • What is the limitation of case studies in psychological research?

    -Case studies, while useful for providing in-depth insights into individual behavior, are limited because they cannot be replicated. This makes them susceptible to over-generalization and less reliable for drawing broad conclusions about a population.

  • How can surveys be influenced by the way questions are phrased?

    -Surveys can be influenced by subtle word choices in the questions, which can elicit different reactions from respondents. The way a question is framed can significantly impact the responses and, consequently, the research findings.

  • What is the difference between correlation and causation in psychological research?

    -Correlation refers to a statistical relationship between two variables, indicating that they change together. Causation, on the other hand, implies that one variable causes the other to occur. While correlation can suggest potential cause-and-effect relationships, it does not prove them.

  • What is the purpose of using a control group in an experiment?

    -The purpose of a control group in an experiment is to serve as a baseline for comparison. It allows researchers to isolate the effects of the independent variable by comparing the outcomes of the experimental group, which is subjected to the variable, with those of the control group, which is not.

  • Why is random assignment of participants important in experimental research?

    -Random assignment of participants to different groups in an experiment is crucial to minimize potential confounding variables and ensure that the results are not skewed by outside factors. It helps to distribute the variability evenly across groups, increasing the reliability of the findings.

  • What is a double-blind procedure in experimental research?

    -A double-blind procedure in experimental research is a method where neither the participants nor the researchers know which group is receiving the experimental treatment and which is receiving the control condition. This helps to prevent bias in the results due to expectations or behavior influenced by knowledge of the treatment.

Outlines

00:00

🤔 The Pitfalls of Intuition in Psychology

This paragraph discusses the limitations of relying on intuition when understanding human behavior. It emphasizes the dangers of false intuition and how it can be misleading, especially when our predictions are correct, reinforcing our trust in it. The concept of 'Hindsight Bias' is introduced, where past events are seen as predictable, and the tendency for overconfidence in our judgments is highlighted. The speaker also points out the human inclination to find order in randomness, which can lead to false assumptions. The paragraph concludes by advocating for the scientific method and psychological research as tools to overcome these cognitive biases and better understand the human mind.
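
To make the coin-flip point concrete, here is a minimal Python sketch (my own illustration, not something from the episode) that estimates the chance of five straight tails and of a perfectly alternating sequence; the number of simulated trials is an arbitrary choice.

```python
import random

def flip_five():
    """Simulate five fair coin flips as a string like 'THTTH'."""
    return "".join(random.choice("HT") for _ in range(5))

trials = 100_000
all_tails   = sum(flip_five() == "TTTTT" for _ in range(trials))
alternating = sum(flip_five() == "HTHTH" for _ in range(trials))

# Any specific sequence of five flips has probability (1/2)**5 = 1/32,
# even though only the run of tails "feels" like a meaningful streak.
print(f"P(five tails)  ~ {all_tails / trials:.4f}")
print(f"P(alternating) ~ {alternating / trials:.4f}")
print(f"exact value    = {1 / 32:.4f}")
```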

05:00

πŸ” Methods and Biases in Psychological Research

The second paragraph delves into the various methods used in psychological research, including case studies, naturalistic observation, surveys, and interviews. It explains the importance of operationalizing questions in scientific research and the value of replication for establishing reliable findings. The limitations of case studies and naturalistic observation are discussed, such as their inability to be replicated and their potential to lead to over-generalization. The paragraph also touches on the challenges of conducting surveys, including the impact of wording on responses and the importance of random sampling to avoid bias. Correlation and causation are explored, with a clear distinction made between predicting relationships and proving them.
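
The correlation-versus-causation warning from this section can be shown with a tiny simulation (again a sketch of my own, not from the episode): a hidden factor, here sleep deprivation, drives both "ate questionable leftovers" and "hallucinated", so the two measures correlate even though neither causes the other. All variable names and effect sizes are invented purely for illustration.

```python
import random

random.seed(42)
n = 10_000

# Hidden third factor: how sleep-deprived each imaginary person is (0 to 1).
sleep_deprivation = [random.random() for _ in range(n)]

# Both behaviors get more likely as sleep deprivation rises,
# but neither behavior causes the other.
ate_leftovers = [random.random() < 0.2 + 0.6 * s for s in sleep_deprivation]
hallucinated  = [random.random() < 0.1 + 0.5 * s for s in sleep_deprivation]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A positive correlation shows up even though there is zero direct causation.
print(f"correlation ~ {pearson(ate_leftovers, hallucinated):.2f}")
```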

10:02

🧪 Experimentation in Psychology: From Hypothesis to Conclusion

This paragraph focuses on the experimental method in psychology, detailing the steps from formulating a testable hypothesis to conducting experiments that isolate variables to establish cause-and-effect relationships. It explains the necessity of having a control group and an experimental group, the importance of random assignment to minimize confounding variables, and the use of blind procedures to prevent unintentional influence on results. An example experiment is proposed to test the effects of caffeine on problem-solving speed, illustrating how to define variables, obtain informed consent, and measure outcomes. The paragraph concludes by emphasizing the importance of clear language and replicability in scientific experiments to build a solid understanding of psychological phenomena.
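
As a rough sketch of the caffeine experiment described above, the snippet below shows random assignment into a placebo group and two dosage groups, then compares group means. The dosages echo the episode, but the group sizes, the simulated maze times, and every identifier are hypothetical stand-ins, not real data or the episode's own procedure.

```python
import random
from statistics import mean

random.seed(7)

# Hypothetical pool of volunteers, randomly assigned to three groups.
participants = [f"person_{i:02d}" for i in range(90)]
random.shuffle(participants)

groups = {
    "placebo (decaf)":    participants[0:30],
    "low dose (100 mg)":  participants[30:60],
    "high dose (500 mg)": participants[60:90],
}

def maze_time(group_name):
    """Pretend measurement: minutes to finish the corn maze.
    The baseline numbers are made up purely to show the analysis step."""
    base = {"placebo (decaf)": 20.0,
            "low dose (100 mg)": 18.0,
            "high dose (500 mg)": 16.0}[group_name]
    return random.gauss(base, 3.0)

results = {name: [maze_time(name) for _ in members]
           for name, members in groups.items()}

for name, times in results.items():
    print(f"{name:20s} mean time: {mean(times):5.1f} min (n={len(times)})")
```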

🎬 Behind the Scenes of Crash Course Psychology

The final paragraph provides credits and additional information about the production of the Crash Course Psychology series. It acknowledges the contributions of Subbable subscribers, who support the series, and invites viewers to contribute to keep the course running. It also lists the scriptwriter, editor, consultant, director, script supervisor, sound designer, and graphics team involved in the creation of the series, giving insight into the collaborative effort behind the educational content.

Keywords

💡 Psychedelic hallucinations

Psychedelic hallucinations refer to the altered perceptions and vivid sensory experiences induced by certain substances, often characterized by seeing or hearing things that aren't there. In the video, the concept is introduced humorously to illustrate the unreliability of intuition, as the speaker questions whether week-old pizza could cause such an effect, emphasizing the importance of scientific inquiry over anecdotal evidence.

💡 Intuition

Intuition is the ability to understand or know something immediately, without the need for conscious reasoning. The video script discusses the limitations of intuition, pointing out that while it can reinforce beliefs when correct, it can also lead to overlooking errors when wrong. This is exemplified through the 'Hindsight Bias' and the tendency toward overconfidence, which can mislead one's understanding of human behavior.

💡 Hindsight Bias

Hindsight Bias is a cognitive bias where individuals believe, after an event has occurred, that they would have predicted or expected the outcome beforehand. The script uses this concept to highlight how our intuition can be deceptive, as we tend to remember our correct predictions more vividly than our incorrect ones, thus overestimating our foresight.

💡 Overconfidence

Overconfidence refers to the tendency to overestimate one's abilities or the accuracy of one's beliefs. The video script discusses how this natural tendency can lead to false assumptions about people and their behaviors, emphasizing the need for empirical research to counteract such biases.

💡 Operationalizing

Operationalizing is the process of defining a concept or theory in a way that allows it to be measured and tested. The script explains that in psychological research, turning general questions into measurable, testable propositions is crucial for conducting scientific experiments and obtaining reliable results.

💡 Hypothesis

A hypothesis is a proposed explanation for a phenomenon, made as a starting point for further investigation. In the context of the video, a hypothesis is a testable prediction derived from a theory, which is then put to the test through experimentation to determine its validity.

💡 Replication

Replication in research refers to the process of repeating an experiment or study to verify the results. The script emphasizes the importance of replication for establishing the reliability of findings, as consistent results across different studies lend more credibility to the conclusions drawn.

💡 Case Studies

Case studies are in-depth examinations of a particular individual, group, or event. The video script mentions case studies as a method of psychological research that, while valuable for framing questions and providing unique insights, has limitations in terms of generalizability due to the inability to replicate the specific circumstances.

💡 Naturalistic Observation

Naturalistic observation is a research method where behavior is observed in its natural setting without experimental manipulation. The script describes this method as allowing subjects to act naturally, which is useful for describing behavior but less so for explaining it.

💡 Surveys

Surveys are a research tool used to collect data from a large number of people through a series of questions. The video script discusses the use of surveys in psychological research, noting the importance of careful phrasing to avoid bias and the need for random sampling to ensure representative results.
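
Drawing a simple random sample, where every member of the target group has an equal chance of being selected, takes only a couple of lines; the roster and sample size below are made-up placeholders for illustration.

```python
import random

# Hypothetical roster of every student in the target population.
all_students = [f"student_{i}" for i in range(5000)]

# Simple random sample: every student has an equal chance of being picked,
# which is what keeps the survey from only hearing from the pacifist club.
survey_sample = random.sample(all_students, k=200)

print(len(survey_sample), survey_sample[:3])
```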

💡 Correlation

Correlation is a measure that expresses the extent to which two variables are linearly related. The script explains that while correlation can suggest a relationship between variables, such as eating questionable food and experiencing hallucinations, it does not imply causation and further investigation is needed to establish a cause-and-effect relationship.

💡 Experimentation

Experimentation is a scientific method that involves manipulating one variable to observe the effect on another while controlling for other factors. The video script details the process of setting up an experiment, including defining variables, using control groups, and random assignment to test hypotheses and establish cause-and-effect relationships.

💡 Double Blind Procedure

A double blind procedure is a type of experimental design where neither the participants nor the researchers know who is in the experimental group and who is in the control group. The script mentions this as a method to prevent bias in the results, ensuring that the effects observed are due to the independent variable and not influenced by expectations or knowledge of the experiment's conditions.
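
One common way to keep a study double-blind is to label the groups with opaque codes and keep the code key away from everyone running the sessions until data collection ends. The sketch below illustrates that bookkeeping in general terms; the labels, group sizes, and structure are assumptions of mine, not a description of any specific experiment.

```python
import random

random.seed(3)

participants = [f"p{i:02d}" for i in range(12)]
random.shuffle(participants)

# Opaque labels stand in for the real conditions during the study.
codes = {"caffeine": "GROUP-A", "placebo": "GROUP-B"}
conditions = list(codes)

# Alternating assignment over the shuffled list gives equal-sized groups.
assignments = {p: codes[conditions[i % 2]] for i, p in enumerate(participants)}

# The key that maps codes back to conditions is held by a third party
# until data collection ends, so neither participants nor experimenters
# know who got caffeine while the study is running.
blinding_key = {code: cond for cond, code in codes.items()}

print(assignments)   # session runners only ever see GROUP-A / GROUP-B
# After the study, print(blinding_key) to unblind the data for analysis.
```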

Highlights

Week-old pizza does not cause psychedelic hallucinations, debunking a common myth.

Coffee does not make people smarter, contrary to some beliefs.

The importance of not relying solely on intuition when understanding human behavior.

The concept of 'Hindsight Bias' and its influence on reinforcing trust in intuition.

Overconfidence can lead to incorrect assumptions about people and their actions.

The tendency to perceive order in random events, leading to false assumptions.

The role of scientific methods and experimentation in psychology to avoid pitfalls of intuition.

Operationalizing questions in psychological research to make them measurable and testable.

The significance of replication in psychological studies to ensure consistent results.

Limitations of case studies in psychological research due to their inability to be replicated.

Naturalistic observation as a method to describe behavior without manipulation.

The use of surveys and interviews in psychological research to gather behavioral data.

The impact of question phrasing in surveys and how it can influence results.

The importance of random sampling in surveys to represent a population fairly.

Understanding the difference between correlation and causation in psychological findings.

The necessity of experiments in psychology to establish cause-and-effect relationships.

The process of conducting an experiment, including the use of control and experimental groups.

The role of informed consent in ethical psychological research.

The application of the scientific method in an experiment to test the effects of caffeine on problem-solving.

The importance of clear language and defined parameters in psychological research for replicability.

The potential for bias in psychological research and how sound research practices help to avoid it.

Transcripts

play00:00

Can week-old pizza cause psychedelic hallucinations?

play00:01

Does coffee make you smarter?

play00:03

Or does it just make you do dumb stuff faster?

play00:05

Like a bunch of psychology itself, questions like this can seem pretty intuitive.

play00:08

I mean, people may not be the easiest organisms to understand, but you're a person, right?

play00:12

So you must be qualified to draw, like, some conclusions about other people and what makes

play00:17

them tick.

play00:18

But it's important to realize that your intuition isn't always right.

play00:21

In fact, sometimes it is exactly wrong, and we tend to grossly underestimate the dangers

play00:27

of false intuition.

play00:28

If you have an idea about a person and their behavior that turns out to be right, that

play00:32

reinforces your trust in your intuition.

play00:34

Like if one of my buddies, Bob, eyes that deep-dish pizza that's been in the fridge

play00:37

for the past week, eats it anyway, and soon starts to wig out, I'll say "Dude, I

play00:42

told you so".

play00:43

But if I'm wrong and he's totally fine, I probably won't even think about it ever again.

play00:48

This is known as 'Hindsight Bias" or the "I-Knew-It-All-Along" phenomenon.

play00:52

This doesn't mean that common sense is wrong; it just means that our intuitive sense more

play00:56

easily describes what just happened, than what will happen in the future.

play01:00

Another reason you can't blindly trust your intuition is your natural tendency toward

play01:04

overconfidence.

play01:05

Sometimes, you just really, really feel like you're right about people when actually you're

play01:10

really, really wrong.

play01:12

We've all been there.

play01:13

We also tend to perceive order in random events, which can lead to false assumptions.

play01:17

For example, if you flip a coin five times you have equal chances of getting all tails

play01:22

as you do getting alternating heads and tails.

play01:25

But we see the series of five tails as something unusual, as a streak, thus giving that

play01:30

result some kind of meaning that it very definitely does not have.

play01:33

That is why we have the methods and safeguards of psychological research and experimentation,

play01:38

and the glorious process of scientific inquiry.

play01:41

They help us to get around these problems and basically save the study of our minds

play01:46

from the stupidity of our minds.

play01:47

So I hope that it won't be a spoiler if I tell you now that pizza won't make you trip,

play01:53

and coffee doesn't make you smart.

play01:59

Sorry.

play02:01

[Intro]

play02:03

In most ways psychological research is no different than any other scientific discipline,

play02:10

like step one is always figuring out how to ask general questions about your subject and

play02:15

turn them into measurable, testable propositions.

play02:18

This is called operationalizing your questions.

play02:20

So you know how the scientific method works -- it starts with a question and a theory,

play02:24

and I don't mean theory in the sense of like, a hunch that say, a quad-shot of espresso

play02:29

makes you think better.

play02:30

Instead, in science a theory is what explains and organizes lots of different observations

play02:36

and predicts outcomes.

play02:37

And when you come up with a testable prediction, that's your hypothesis.

play02:40

Once your theory and hypothesis are in place, you need a clear and common language to report

play02:45

them with, so for example, defining exactly what you mean by "thinking better" with your

play02:48

espresso hypothesis will allow other researchers to replicate the experiment.

play02:53

And replication is key.

play02:55

You can watch a person exhibit a certain behavior once, and it won't prove very much, but if

play02:59

you keep getting consistent results, even as you change subjects or situations, you're

play03:04

probably on to something.

play03:05

This is a problem with one popular type of psychological research: case studies, which

play03:09

take an in-depth look at one individual.

play03:12

Case studies can sometimes be misleading, because by their nature, they can't be replicated,

play03:16

so they run the risk of over-generalizing.

play03:18

Still, they're good at showing us what CAN happen, and end up framing questions for more

play03:22

extensive and generalizable studies.

play03:25

They're also often memorable and a great storytelling device psychologists use to observe

play03:29

and describe behavior.

play03:30

Like, say the smell of coffee makes Carl suddenly anxious and irritable -- that obviously doesn't

play03:35

mean that it has that same effect on everyone.

play03:37

In fact, Carl has terrible memories associated with that smell, and so his case is actually

play03:42

quite rare.

play03:43

Poor Carl.

play03:44

But you would still have to look at lots of other cases to determine that conclusively.

play03:48

Another popular method of psychological research is naturalistic observation, where researchers

play03:52

simply watch behavior in a natural environment, whether that's chimps poking ant-hills in

play03:57

the jungle, kids clowning in a classroom or drunk dudes yelling at soccer games.

play04:01

The idea is to let the subjects just do their thing without trying to manipulate or control

play04:05

the situation.

play04:06

So yeah, basically just spying on people.

play04:09

Like case studies, naturalistic observations are great at describing behavior, but they're

play04:13

very limited in explaining it.

play04:15

Psychologists can also collect behavioral data using surveys or interviews, asking people

play04:19

to report their opinions and behaviors.

play04:21

Sexuality researcher Alfred Kinsey famously used this technique when he surveyed thousands

play04:25

of men and women on their sexual history and published his findings in a pair of revolutionary

play04:31

texts, Sexual Behavior in the Human Male and Female respectively.

play04:35

Surveys are a great way to access consciously held attitudes and beliefs, but how to ask

play04:39

the questions can be tricky; subtle word choices can influence results.

play04:43

For example more forceful words like "ban" or "censor" may elicit different reactions

play04:47

than "limit" or "not allow".

play04:50

Asking "Do you believe in space aliens?" is a much different question than "Do you think

play04:53

that there is intelligent life somewhere else in the universe?"

play04:56

It's the same question, but in the first the subject might assume you mean aliens visiting

play05:00

earth, and making crop circles and abducting people and poking them.

play05:03

And if how you phrase surveys is important, so is who you ask.

play05:06

I could ask a room full of students at a pacifist club meeting what they think about arms control,

play05:10

but the result wouldn't be a representative measure of where students stand, because there's

play05:14

a pretty clear sampling bias at work here.

play05:16

To fairly represent a population, I'd need to get a random sample where all members of

play05:21

the target group, in this case students, had an equal chance of being selected to answer

play05:25

the question.

play05:26

So once you've described behavior with surveys, case studies, or naturalistic observation,

play05:30

you can start making sense out of it, and even predict future behavior.

play05:34

One way to do that is to look at how one trait or behavior is related to another, or how

play05:39

they correlate.

play05:40

So let's get back to my buddy Bob who seems to think that his refrigerator is actually

play05:43

some kind of time machine that can preserve food indefinitely.

play05:45

Let's say that Bob has just tucked into a lunch of questionable leftovers, pizza that

play05:49

may very well have had a little bit of fungus on it.

play05:52

But he was hungry, and lazy, and so he doused it in Sriracha.

play05:56

Suddenly, he starts seeing things: green armadillos with laser beam eyes.

play05:59

From here we could deduce that eating unknown fungus predicts hallucination; that's a correlation.

play06:04

But correlation is not causation.

play06:06

Yes, it makes sense that eating questionable fungus would cause hallucinations, but it's

play06:11

possible that Bob was already on the verge of a psychotic episode, and those fuzzy leftovers

play06:15

were actually benign.

play06:17

Or there could be an entirely different factor involved, like maybe he hadn't slept in 72

play06:21

hours, or had an intense migraine coming on, and one of those factors caused his hallucinations.

play06:26

It's tempting to draw conclusions from correlations, but it's super-important to remember that

play06:30

correlations predict the possibility of cause-and-effect relationships; they cannot prove them.

play06:35

So we've talked about how to describe behavior without manipulating it and how to make connections

play06:40

and predictions from those findings.

play06:42

But that can only take you so far; to really get to the bottom of cause-and-effect behaviors,

play06:46

you're gonna have to start experimenting.

play06:48

Experiments allow investigators to isolate different effects by manipulating an independent

play06:52

variable, and keeping all other variables constant, or as constant as you can.

play06:57

This means that they need at least two groups: the experimental group, which is gonna get

play07:01

messed with, and the control group, which is not gonna get messed with.

play07:05

Just as surveys use random samples, experimental researchers need to randomly assign participants

play07:09

to each group to minimize potential confounding variables, or outside factors that may skew

play07:14

the results.

play07:15

You don't want all grumpy teenagers in one group and all wealthy Japanese surfers in

play07:18

the other; they gotta mingle.

play07:19

Now sometimes one or both groups are not informed about what's actually being tested.

play07:24

For example, researchers can test how substances affect people by comparing their effects to

play07:28

placebos, or inert substances.

play07:30

And often, the researchers themselves don't know which group is experimental and which

play07:34

is control, so they don't unintentionally influence the results through their own behavior,

play07:39

in which case it's called, you guessed it, a double blind procedure.

play07:43

So let's put these ideas into practice in our own little experiment.

play07:46

Like all good work, it starts with a question.

play07:48

So the other day my friend Bernice and I were debating.

play07:50

We were debating caffeine's effect on the brain.

play07:53

Personally, she's convinced that coffee helps her focus and think better, but I get all

play07:57

jittery like a caged meerkat and can't focus on anything.

play08:00

And because we know that overconfidence can lead you to believe things that are not true,

play08:03

we decided to use some critical thinking.

play08:05

So let's figure out our question: "Do humans solve problems faster when given caffeine?"

play08:10

Now we gotta boil that down into a testable prediction.

play08:12

Remember: keep it clear, simple, and eloquent so that it can be replicated.

play08:17

"Caffeine makes me smarter" is not a great hypothesis.

play08:20

A better one would be, say, "Adult humans given caffeine will navigate a maze faster

play08:25

than humans not given caffeine."

play08:27

The caffeine dosage is your independent variable, the thing that you can change.

play08:30

So, you'll need some coffee.

play08:32

Your result, or dependent variable, the thing that depends on the thing that you can change,

play08:36

is going to be the speed at which the subject navigates through this giant corn maze.

play08:40

Go out on the street, wrangle up a bunch of different kinds of people and randomly assign

play08:43

them into three different groups.

play08:45

Also at this point, the American Psychological Association suggests that you acquire everyone's

play08:49

informed consent to participate.

play08:51

You don't want to force anyone to be in your experiment, no matter how cool you think it

play08:55

is.

play08:56

So the control group gets a placebo, in this case decaf.

play08:58

Experimental group one gets a low dose of caffeine, which we'll define as 100 milligrams;

play09:02

just an eye opener, like, a cup of coffee's worth.

play09:05

Experimental group two gets 500 milligrams, more than a quad shot of espresso dunked in

play09:09

a Red Bull.

play09:10

Once you dose everyone, turn them loose in the maze and wait at the other end with a

play09:13

stopwatch.

play09:14

All that's left is to measure your results from the three different groups and compare

play09:17

them to see if there were any conclusive results.

play09:20

If the highly dosed folks got through it twice as fast as the low dose and the placebo groups,

play09:24

then Bernice's hypothesis was correct, and she can rub my face in it saying she was right

play09:28

all along, but really that would just be the warm flush of hindsight bias telling her something

play09:33

she didn't really know until we tested it.

play09:35

Then, because we've used clear language and defined our parameters, other curious minds

play09:39

can easily replicate this experiment, and we can eventually pool all the data together

play09:43

and have something solid to say about what that macchiato was doing to your cognition–

play09:48

or at least the speed at which you can run through a maze.

play09:51

Science: probably the best tool that you have for understanding other people.

play09:54

Thanks for watching this episode of Crash Course Psychology; if you paid attention you

play09:57

learned how to apply the scientific method to psychological research through case studies,

play10:01

naturalistic observation, surveys and interviews, and experimentation.

play10:05

You also learned about different kinds of bias in experimentation and how research practices

play10:10

help us avoid them.

play10:11

Thanks especially to our Subbable subscribers, who make this and all of Crash Course possible.

play10:15

If you'd like to contribute to help us keep Crash Course going, and also get awesome perks

play10:20

like an autographed science poster, or even be animated into an upcoming episode, go to

play10:24

Subbable.com/CrashCourse to find out how.

play10:27

Our script was written by Kathleen Yale and edited by Blake de Pastino and myself.

play10:31

Our consultant is Dr. Ranjit Bhagwat.

play10:33

Our director and editor is Nicholas Jenkins, our script supervisor is Michael Aranda, who

play10:37

is also our sound designer, and our graphics team is Thought CafΓ©.


Related Tags
Psychology Research, Scientific Method, Behavioral Study, Intuition Bias, Caffeine Effect, Hindsight Bias, Case Studies, Naturalistic Observation, Survey Method, Experimental Design, Cognitive Bias