Academia is BROKEN! - Harvard Fake Data Scandal Explained
Summary
TL;DR: This video script discusses the academic integrity crisis, spotlighting Francesca Gino, a Harvard professor whose research is under scrutiny for suspicious data. Three professors, Uri, Joe, and Leif, have investigated Gino's studies, revealing anomalies that cast doubt on her findings. The video summarizes their findings across three articles, highlighting the potential manipulation of data to achieve dramatic, publishable results. This situation not only tarnishes Gino's reputation but also raises concerns about the reliability of academic research in behavioral science.
Takeaways
- 📚 The script discusses the alleged academic misconduct by Francesca Gino, a professor of Behavioral Science at Harvard University, and the impact on the field of academia.
- 🔍 Three articles have been published scrutinizing Gino's research, revealing potential data manipulation and questionable practices in her studies.
- 🤔 The first article, 'Clusterfake', questioned the effectiveness of an honesty pledge's placement on forms and the integrity of the data presented in the underlying study.
- 📉 The second article, 'My Class Year Is Harvard', flagged anomalies in the demographic data, specifically the year-in-school field, raising suspicions about the validity of the research findings.
- 🧐 The third article, 'The Cheaters Are Out of Order', covered a study linking dishonesty and creativity, where the data suggested a strong effect that disappeared upon closer inspection.
- 🚫 Francesca Gino is currently on administrative leave from Harvard, and several of her papers have been retracted, indicating serious academic concerns.
- 💡 The script highlights the pressure on academics to produce surprising results with large effect sizes to secure publication in top journals and maintain their positions.
- 🤨 The incident casts doubt on the reliability of academic research in the field of Behavioral Science and raises questions about the prevalence of such misconduct.
- 📈 The script emphasizes the importance of transparency and integrity in academic research, suggesting that the incentives in the academic system may be misaligned with good scientific practices.
- 🙏 The author empathizes with the pressure faced by academics but strongly condemns the manipulation of data as unacceptable.
- 📚 The script serves as a call to action for the academic community to reassess its standards and practices to prevent similar issues in the future.
Q & A
What is the main issue raised by the video script regarding Francesca Gino's research?
-The main issue raised is the suspicion of data manipulation and questionable research practices in Francesca Gino's studies, which have led to surprising and seemingly too good to be true results.
Who are the three professors that investigated Francesca Gino's research?
-The three professors are Uri, Joe, and Leif, who are all specialists in Behavioral Science and related fields from different universities around the world.
What was the hypothesis of the 2012 study involving an honesty pledge?
-The hypothesis was that placing an honesty pledge at the top of a form would make participants more honest when filling out the rest of the form compared to placing it at the bottom.
What was the surprising result of the first study mentioned in the script?
-The surprising result was that only 37% of students lied when the honesty pledge was at the top of the form, compared to 79% when it was at the bottom, indicating a massive effect size.
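A gap of that size can be sanity-checked with a standard two-proportion z-test. The sketch below uses hypothetical group sizes of 100 per condition (the actual sample sizes are not given here); only the 37% and 79% rates come from the video.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z statistic, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    # Normal CDF built from erf: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 37 of 100 lied with the pledge at the top,
# 79 of 100 with the pledge at the bottom (rates from the video).
z, p = two_prop_ztest(37, 100, 79, 100)
print(round(z, 2))  # → 6.02
```

With groups of this size the difference is overwhelmingly significant, which is exactly why a 37% vs 79% split reads as an enormous, headline-grabbing effect.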
What irregularities were found in the data set of the first study?
-Irregularities included duplicate participant IDs and a sequence of IDs that did not make sense, suggesting that the data may have been tampered with to show a larger effect size.
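Checks of this kind are easy to automate. Below is a minimal sketch (not the investigators' actual method) that flags duplicate participant IDs and IDs that break the ascending sort within each condition; the IDs are made up, loosely echoing the ones described.

```python
# Hypothetical participant rows (participant_id, condition), expected to be
# sorted by condition, then by ascending ID, with no duplicate IDs anywhere.
rows = [(12, 1), (49, 1), (49, 1), (95, 1), (51, 1),
        (7, 2), (91, 2), (52, 2), (5, 2)]

def flag_anomalies(rows):
    """Return row indices that duplicate an ID or break the expected sort."""
    suspicious, seen, last = set(), set(), {}
    for i, (pid, cond) in enumerate(rows):
        if pid in seen:                        # duplicate participant ID
            suspicious.add(i)
        if cond in last and pid < last[cond]:  # ID not ascending in condition
            suspicious.add(i)
        seen.add(pid)
        last[cond] = pid
    return sorted(suspicious)

print(flag_anomalies(rows))  # → [2, 4, 7, 8]
```

The flagged indices correspond to the duplicated ID and the entries that appear out of ascending order, which is the pattern described in the article.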
What was the hypothesis of the second study involving arguing against personal beliefs?
-The hypothesis was that arguing against one's own beliefs would make a person feel 'dirty' and subsequently increase their desire for cleansing products.
What was suspicious about the demographic data in the second study?
-The suspicious data included 20 entries where the participants' year in school was listed as 'Harvard', which made no sense and suggested possible data manipulation.
What was the hypothesis of the third study on dishonesty and creativity?
-The hypothesis was that people who are dishonest or cheat may be more creative, as the study aimed to explore the link between dishonesty and creative thinking.
What was the irregularity found in the data set of the third study?
-The irregularity was that the number of uses for a piece of newspaper reported by some participants who cheated was out of order, suggesting that the data may have been manipulated to show a significant effect.
What is the broader implication of the issues found in Francesca Gino's research?
-The broader implication is that it casts doubt on the integrity of academic research in the field of Behavioral Science and raises concerns about the reliability of published studies.
What has been the response from Harvard University regarding Francesca Gino's research?
-Francesca Gino has been placed on administrative leave, and Harvard has requested the retraction of several of her papers from the journals they were published in.
What does the video script suggest about the pressures faced by academics?
-The script suggests that academics, especially at top institutions like Harvard, face immense pressure to publish surprising results with large effect sizes to maintain their positions and reputations.
Outlines
📚 Academia's Integrity Crisis: Questioning Francesca Gino's Research
The video script begins by highlighting a critical issue within academia, specifically pointing out the alleged flaws in the research conducted by Francesca Gino, a professor of Behavioral Science at Harvard University. The speaker mentions three articles that scrutinize Gino's work, which has been previously well-regarded and influential in the field. The skepticism stems from the unusually high success rates and significant statistical results in Gino's studies, which have led some to suspect potential data manipulation. The video promises to delve into the details of these concerns, starting with a study on honesty pledges and their impact on cheating behavior.
🔍 Data Anomalies in Gino's 'Clusterfake' Study
This section focuses on the first of the three articles, which examines a study by Francesca Gino and her collaborators on the effect of honesty pledges on form-filling behavior. The study's surprising results, showing a significant decrease in dishonesty when the pledge is placed at the top of a form, attracted much attention. However, upon closer inspection by the article's authors, anomalies in the data sorting were discovered, suggesting potential tampering. The anomalies included duplicate participant IDs and out-of-sequence numbers, which cast doubt on the study's validity and raised questions about the integrity of academic publishing.
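To see how a handful of extreme rows can inflate an effect, here is a toy calculation with entirely invented numbers (overstated puzzle counts per participant): the gap between the two conditions is computed with and without the flagged rows.

```python
# Toy rows: (condition, overstated_puzzle_count, flagged_as_suspicious).
# Condition 1 = pledge at top, 2 = pledge at bottom; all numbers invented.
data = [(1, 2, False), (1, 3, False), (1, 2, False), (1, 0, True),
        (2, 3, False), (2, 4, False), (2, 3, False), (2, 9, True)]

def gap(rows):
    """Mean overstatement in condition 2 minus condition 1."""
    c1 = [v for c, v, _ in rows if c == 1]
    c2 = [v for c, v, _ in rows if c == 2]
    return sum(c2) / len(c2) - sum(c1) / len(c1)

with_flagged = gap(data)
without_flagged = gap([r for r in data if not r[2]])
print(with_flagged, round(without_flagged, 2))  # → 3.0 1.0
```

Just two extreme rows triple the apparent effect here, which is the mechanism the investigators tested: do the suspicious rows carry a disproportionate share of the reported result?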
🤔 Questionable Data in a Harvard Study on Belief and Cleansing Desires
The second paragraph discusses another study by Francesca Gino, this time in collaboration with Kouchaki and Galinsky, which explored the hypothesis that arguing against one's beliefs increases the desire for cleansing products. The study, conducted at Harvard University, reported a strong effect size, with a p-value well below the industry-standard 0.05 threshold, indicating a high level of confidence in the results. However, the vigilantes who investigated the study found suspicious demographic data entries, such as the nonsensical answer 'Harvard' to the question about the participant's year in school. These entries were clustered together and seemed to exaggerate the study's effect, leading to further skepticism about the research's integrity.
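What a p-value below 0.01 claims can be illustrated with a permutation test on the two essay conditions. The ratings below are invented 1-to-7 desirability scores, not the study's data; the test asks how often a random relabeling of the groups produces a mean gap as large as the observed one.

```python
import random

def perm_test_pvalue(a, b, n_iter=10_000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_iter

# Invented ratings: one group wrote essays matching their stated view,
# the other argued against it.
same_side  = [3, 4, 2, 5, 3, 4, 3, 2]
other_side = [6, 7, 5, 7, 6, 7, 6, 5]
p = perm_test_pvalue(same_side, other_side)
print(p)
```

With ratings this cleanly separated, almost no random relabeling matches the observed gap, so the p-value comes out tiny; that is the kind of result the paper reported, and the reason the investigators found it worth a second look.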
😶 Irregularities in a Study Linking Dishonesty and Creativity
The third paragraph addresses a more recent study by Gino, this time co-authored with Wiltermuth, which paradoxically investigates the link between dishonesty and creativity. The study, titled 'Evil Genius', used a virtual coin-flipping task to measure dishonesty and then assessed creativity through the number of uses participants could think of for a piece of newspaper. The study reported an extremely strong effect size, but the vigilantes' analysis of the data revealed irregularities in the sorting of responses, with some entries appearing out of order. This raised further doubts about the validity of the study and the integrity of the research conducted by Gino.
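The "impute from position" re-analysis described later in the transcript can be sketched in code. This is a simplified, hypothetical version, not the investigators' actual script: treat a longest non-decreasing subsequence as the in-order backbone, flag everything else as out of order, and replace each flagged value with the value its position implies ('impute low' takes the nearest in-order value to the left, 'impute high' the nearest to the right). The response counts are invented, loosely echoing those read out in the video.

```python
def lnds_indices(vals):
    """Indices of a longest non-decreasing subsequence (O(n^2) DP)."""
    n = len(vals)
    best = [1] * n    # best[i]: longest run ending at i
    prev = [-1] * n   # backpointer for reconstruction
    for i in range(n):
        for j in range(i):
            if vals[j] <= vals[i] and best[j] + 1 > best[i]:
                best[i], prev[i] = best[j] + 1, j
    i = max(range(n), key=best.__getitem__)
    keep = set()
    while i != -1:
        keep.add(i)
        i = prev[i]
    return keep

def impute(vals, low=True):
    """Replace out-of-order entries with the value implied by their position."""
    keep = lnds_indices(vals)
    out = []
    for i, v in enumerate(vals):
        if i in keep:
            out.append(v)
        elif low:   # nearest in-order value to the left
            left = [vals[j] for j in keep if j < i]
            out.append(max(left) if left else min(vals))
        else:       # nearest in-order value to the right
            right = [vals[j] for j in keep if j > i]
            out.append(min(right) if right else max(vals))
    return out

# Hypothetical "number of uses" responses within one cheating group;
# the column is supposed to be sorted in ascending order.
responses = [3, 4, 13, 9, 5, 9, 5, 9, 8, 9]
print(impute(responses, low=True))  # → [3, 4, 4, 9, 9, 9, 9, 9, 9, 9]
```

Once each out-of-order value is pulled back to what its row position implies, the extreme entries vanish, which mirrors how the re-analysis made the study's effect disappear.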
😟 The Broader Implications of Academic Misconduct
In the final paragraph, the video script reflects on the broader implications of the alleged misconduct by Francesca Gino. It discusses the impact on the field of Behavioral Science, the potential widespread nature of such issues in academia, and the pressure on academics to produce surprising results for publication in top journals. The speaker empathizes with the pressures faced by academics but condemns the manipulation of data as unacceptable. The paragraph concludes by acknowledging the personal toll this situation may have taken on Gino and encourages viewers to read the articles for a deeper understanding of the issues raised.
Keywords
💡Academia
💡Francesca Gino
💡Research Misconduct
💡Effect Size
💡Statistical Significance
💡Data Tampering
💡Honesty Pledge
💡Behavioral Science
💡Replication Crisis
💡Publish or Perish
Highlights
Academia faces criticism due to the alleged data manipulation in the research of Francesca Gino, a professor of Behavioral Science at Harvard University.
Francesca Gino is renowned for surprising findings in her research, which has led to skepticism and investigation by other academics.
Three professors, Uri, Joe, and Leif, scrutinized Gino's data and discovered anomalies, suggesting potential data tampering.
The article 'Clusterfake' examined a study reporting a significant effect of honesty pledges on form cheating, with suspicious data sorting that raised questions about its validity.
The 'My Class Year is Harvard' article covered a study built on the implausible hypothesis that arguing against one's beliefs increases the desire for cleansing products, with anomalies found in the demographic responses.
The third article, 'The Cheaters are out of Order', examined a study linking dishonesty and creativity, with data inconsistencies in the reported number of uses for a piece of newspaper.
Gino's research has been influential, with her work being referenced in books and academic essays, which now casts doubt on their credibility.
The investigation exposed the potential for data manipulation to exaggerate effects and secure publication in top journals.
The academic community is under scrutiny, with the integrity of Behavioral Science research being questioned due to these findings.
Francesca Gino is currently on administrative leave from Harvard, and several of her papers have been retracted.
The incident highlights the pressure on academics to produce surprising results, potentially leading to unethical practices.
The video calls for a reevaluation of trust in academic research and the need for better safeguards against data manipulation.
The implications of these findings extend beyond Gino, affecting the credibility of the entire field of Behavioral Science.
The video concludes by empathizing with the pressures faced by academics while condemning the manipulation of research data.
A call to action for the academic community to uphold the integrity of research and resist the temptation to fabricate results for career advancement.
The video serves as a cautionary tale for researchers, policymakers, and practitioners who rely on academic research for decision-making.
Transcripts
Academia is broken universities are
broken the way that academic research is
published is broken that's the message
that's come through loud and clear over
the last few weeks thanks to three
articles concerning the research of
Francesca Gino if you don't know what
I'm talking about let me explain
Francesca Gino is a professor of
Behavioral Science at Harvard University
she is extremely well known in the field
I've talked about her research to
clients before I've recommended books on
this channel to you guys that use her
work as a key reference I've used her
research before as references in my own
essays and work that I did at University
when it comes to academic Fame Francesca
Gino is up there as you would expect
from someone who is a professor at
Harvard however the reason why she's so
well known is because her research tends
to bring out a lot of very surprising
findings now some people just think this
research is cool and don't think much
more about it but a lot of people in the
industry have been quite skeptical of
Francesca Gino and her work because her
results just seem a little bit too good
her hypotheses are really wacky but yeah
they always seem to be proved correct
the effect sizes from her studies seem
to be really large and her statistical
significance just seem a little bit too
significant so while some of us have
been skeptical of her work for a while
nobody has taken the time to actually
investigate her research and go into her
data to see if they can find anything
fishy
until now these three guys Uri Joe and
Leif are also professors of Behavioral
Science and other related subjects from
different universities across the world
and they took it upon themselves to
investigate Francesca Gino and her data
to see if there was anything fishy going
on and spoiler alert they found a lot of
fishy stuff in the data and that's what
the three articles that they released
are talking about each article relates
to a different study by Francesca Gino
and in this video I'm going to be taking
you through each one the results of
their investigation are shocking damning
for Francesca Gino but I think they
speak even louder volumes about the
state of Academia in general and that's
what I'm going to be concluding on at
the end of this video so without further
Ado let's jump into the first study so
this first article is called cluster
fake and it's referring to a paper
written by Gino in 2012 along with her
collaborators Shu Nina Mazar Dan Ariely
and Max Bazerman given the fact that I
know the first names of all of those
researchers with the exception of Shu
should tell you that all of these
researchers are very well-known people
in the field of Behavioral Science so in
this study they were trying to get
participants to be more honest and the
hypothesis was that if you put an
honesty pledge at the top of a form
that'll make people more honest when
they then fill out the rest of the form
so all of the studies in this paper by
these authors were looking at this idea
that if you put an honesty pledge at the
top of a form people will be more honest
than if you put the honesty pledge at
the bottom of a form now the first study
in this paper was led by Francesca Gino
our protagonist so in this study
students were brought into a lab to
complete 20 math puzzles in five minutes
the students were told that they would
be paid one dollar for each math puzzle
they solved correctly and the way that
this worked is that when students walked
into the room there were two pieces of
paper they had their work paper and
their report paper so on the work paper
they write down their workings for the
math questions and of course their
answers and then on the report paper
they would then have to report how many
answers they got correctly and therefore
how much they should get paid the
students were then told that before
handing in their report paper to the
researchers and getting paid that they
should shred their original work paper
the idea behind this is that by
shredding their work paper there's then
a stronger incentive for them to cheat
on the report paper and lie about how
many answers they got correct since the
researchers in theory should never know
how many answers they got right on the
work paper but what the students didn't
know was that the shredder at the back
of the room was not a normal shredder
it had been rigged so that it only
shredded the sides of the page while
the main body of the page remained
intact now in order to test the
hypothesis of the researchers on the
reporting paper the participants were
split into two groups half of them had
an honesty pledge at the top of the
paper and half had the honesty
pledge at the bottom of the paper with
the idea being of course that those who
sign the honesty pledge at the top would
then cheat less going forward so what
was the result well the result showed a
massive effect from this simple
intervention according to what was
published in the study originally for
the students who signed the honesty
pledge at the top of the form only 37
percent of them lied but when students
signed at the bottom of the form 79
percent of students lied this is a massive effect
size that the researchers are reporting
and as a result of that this study
gained a lot of public attention and I
have talked about it with many people in
the past before because it is so
surprising but that's why these
Vigilantes were suspicious the results
just seem a bit too good can it really
be the case that simply moving an
honesty pledge from the bottom to the
top of a form can have such a dramatic
effect on the amount of cheating that
happens it seems pretty unlikely so our
Vigilantes managed to Source the
original data set that was published by
the authors of the study and when they
looked into the data it just seemed a
little bit fishy if you look at this
table and specifically look at the left
hand column the P hash column this is
referring to participant ID this is the
unique ID given to each participant in a
study and as is highlighted in yellow
there are some weird anomalies in the
way that this data has been sorted
because when you look at this data it
seems obvious that this has been sorted
by first the condition so all of
condition 1 are together then all of
condition two are together and then in
ascending order of the participant ID
which means that the numbers should
consistently get bigger as you go down
the line and there should be no
duplicates remember each participant has
a unique ID so when you look at this
data it's a bit weird we've got 249s
here that's a duplicate that should
never happen and then at the end of the
condition one set of participants you
have participant 51 coming after 95 then
12 then 101 like that sequence doesn't
make any sense and similarly when you
get to condition two we start with 7
then 91 then 52 then all the way back
down to 5 again these entries in the
data set look suspicious they look like
they're out of sequence which suggests
that somebody maybe has tampered with
them so our Vigilantes are suspicious of
these rows so then you have to ask the
question why would the researchers want
to tamper with the data well it's
because they would want to show a bigger
effect than those actually seen in the
real data the more dramatic the effect
of the intervention is the more
surprising the result of the study is
and therefore the more likely it is to
get published in a top journal the more
likely it is that this will make a lot
of press headlines that they will get
lots of interviews and work off the back
of it and so there's a strong incentive
for the researchers to fudge the data a
little bit make the effect seem larger
than it really is and so that's what our
Vigilantes were looking for they wanted
to see if these suspicious rows in the
data set showed a bigger effect than the
normal data that wasn't suspicious and
sure enough that's exactly what they
found if you look at this graph the red
circles with the cross show the
suspicious data and the blue dots show
the unsuspicious data and as you can see
the circles with the red crosses are the
most extreme ones meaning that these few
data points are inflating the effect
size now the article goes on to show how
our Vigilantes did some very clever work
to unpack the Excel file that this data
was stored in and they were able to show
quite clearly that these suspicious rows
were manually resorted in the data set I
won't go into it on this video because
it's quite technical but I'll have a
link to all of these articles in the
description if you want to read them in
full but as you'll soon see this theme
of suspicious data and then there's data
showing extremely strong effect sizes
will be a recurring pattern so let's
move on to study two now this second
article is called my class year is
Harvard and you'll see why in a second
they're looking at a study from 2015
written by Francesca Gino as well as
Kouchaki and Galinsky again two fairly
well-known researchers in the field now
the hypothesis for this study in my
opinion pretty stupid the hypothesis is
that if you argue against something that
you really believe in that makes you
feel dirty which then increases your
desire for cleansing products which is
kind of silly in my opinion but
nevertheless this is what they were
researching so this study was done at
Harvard University with almost 500
students and what they asked the
participants to do was the following so
students of Harvard University were
brought into the lab and then asked how
they felt about this thing called the
queue guide I don't really know what the
cue guide is but apparently it's a Hot
Topic at Harvard and it's very
controversial some people are for it
some people are against it so when they
were brought to the lab they were asked
how do you feel about the queue guide
and they either said they were for or
against it and then the participants
were split into two groups half the
participants were asked to write an
essay supporting the view that they just
gave so if they said I'm for the queue
guide they had to then write an essay
explaining why they were for the queue
guide but then half the participants
were asked to write an essay arguing
opposite to the point that they just
gave so if they said I'm for the queue
guide they would then have to write an
essay explaining why they should be
against the queue guide again the idea
being that those who are writing an
essay against what they actually believe
in would make them feel dirty because
after they'd written this essay they
were then shown five different cleansing
products and the participants in the
study had to rate how desirable they
felt these cleansing products were on a
scale of one to seven with one being
completely undesirable and seven being
completely desirable and again the
authors found a strong effect you can
see here that the p-value is less than
0.01. and for those of you who haven't
had any academic training and statistics
basically when you're doing a study like
this you're looking for a p-value that's
less than 0.05 that's the industry
standard if it's less than 0.05 you say
yes I'm confident that the effect that
I'm seeing is caused by the manipulation
that I just did so less than 0.01 is an
extremely strong effect you're basically
100 percent confident that what you're seeing in
the data is caused by the manipulation
that you did so once again our
Vigilantes are suspicious of this very
strong effect size so they managed to
Source the data online and do a little
bit of investigating and what they find
are some weird anomalies in the kind of
demographic data that the participants
have to give when they enter the study
and this is very common in psychological
studies that participants have to give a
little bit of demographic data about
themselves which gives the researchers a
little bit more flexibility about how
they cut up the data later on so in this
particular study the participants were
asked a number of demographic questions
including their age their gender and
then number six was what year in school
they were now the way this question is
structured isn't very good in my opinion
in terms of research design but
nevertheless there are a number of
acceptable answers that you can give to
year in school because Harvard is an
American School you might say I'm a
senior right which is a common thing or
a sophomore you might write the year
that you're supposed to graduate 2015
2016 Etc or you might indicate a one a
two a three or four or a five to
indicate how many years of school that
you've been in there these are all
different answers but they're all
acceptable and make sense in the context
of being asked what year in school are
you and so when our Vigilantes go into
the data that's exactly what they saw in
this column a range of different answers
that were all acceptable all except for
one there were 20 entries in this data
set where the answer to the question
what year in school are you was Harvard
that doesn't make any sense what year in
school are you Harvard
what right that doesn't make any sense
and the other thing that was suspicious
about these Harvard entries is that they
were all grouped together within 35 rows
again this was a data set of nearly 500
different participants and yet all of
these weird Harvard answers were within
35 rows so once again our Vigilantes
treat these Harvard answers as
suspicious data entries they mark them
in red circles with crosses and as you
can see the ones that are suspicious are
again the most extreme answers
supporting the hypothesis of the
researchers with the exception of this
one but come on it's most suspicious
when you look at the ones on argued
other side so these are the people who
wrote an essay arguing against what they
didn't believe in and therefore were
supposed to feel more dirty and find
cleansing products more appealing all of
these suspicious entries on that side of
the manipulation went for seven that
they found all of the cleaning products
completely desirable and so what are
Vigilantes go on to say is that these
were just the 20 entries in the data set
that looked suspicious because of this
Harvard answer to the demographic
question but who's to say that the other
data in the data set was not also
tampered with but just they were more
careful when they filled in this column
and didn't put Harvard since it seems
pretty clear that at least these 20
entries were manipulated and tampered
with in some way it probably means that
there are other entries within this data
set that were also tampered with are you
shocked yet I hope you are but it's
about to get worse because there's a
third article to do with Francesca Gino
so this third article was released
literally yesterday the day before I'm
filming this video and it's called the
cheaters are out of order this is
written by Francesca Gino and a guy
called Wiltermuth I don't know
Wiltermuth but again I find it
incredibly ironic that all of this
cheating and fake data is being
conducted by researchers who are
studying the science of honesty it is
incredibly ironic so in this third study
Gino and her co-author are investigating
the idea that people who cheat people
that lie who are dishonest are actually
more creative and they call the paper
Evil Genius how dishonesty can lead to
Greater creativity
really so let's quickly go through how
the study worked participants were
brought into a lab where they were sat
at a machine with a virtual coin
flipping mechanism what the participants
are asked to do is to predict whether
the coin will flip heads or tails and
then they would push a button to
actually flip the coin and if they had
predicted correctly about whether it
would go heads or tails then they would
get a dollar so again there's a strong
incentive to cheat so the participants
would write down on a piece of paper how
many predictions they got correct and
then they would hand that to the
researcher in order to get paid but then
of course the researchers would then go
back and look at the machine that they
were flipping the coin on to see how
many they actually got correct and then
they were able to tell how many times
that participant had cheated so after
they had completed the coin flipping
task they were then given a creativity
task and the creativity task was how
many different uses can you think of for
a piece of newspaper so in Psychology
this is a pretty common technique for
testing creativity you give somebody an
inanimate object and then you say how
many uses can you think of for this
inanimate object and again with this
study we see a very strong effect size
remember the magic number that academics
look for is p less than 0.05 and here we
have P less than 0.01 so basically what
that means is that there's an extremely
high likelihood that the effect that the
academics are seeing is caused by the
manipulation that they did so again our
Vigilantes are suspicious but this one
is interesting because our Vigilantes
were able to actually get the data set
from Gino several years ago so they got
this data set directly from Gino so
again when our Vigilantes look into the
data they find some weird things going
on as you can see it seems to be sorted
by two things firstly by the number of
times the participant cheated so all the
people who didn't cheat at all are zeros
and then the number of responses is the
number of different uses for a newspaper
that that participant could come up with
and those are clearly ranked in
ascending order but as you can see from
this next screenshot some of the
cheaters are out of order so these are
the people who cheated once who
basically over reported one time and the
number of uses that they could come up
with for the newspaper are out of
sequence here we have 3 4 13 then 9 and
then back down to five again then back
up to nine then five then nine and eight
then nine it's just a total mess right so
these ones that are highlighted in
yellow are the suspicious ones they're
the ones that are out of order according
to how the data appears to have been
sorted so what our Vigilantes did was
they basically took this data set and
then made a new column and they called
it imputed low or imputed High what that
basically means is that rather than
taking the number of responses that are
written down in this original data set
they're going to say well where does
this entry sit in the ranking order and
so we're going to replace the value that
is given here with what the value should
really be so if it's between four and
five then that number should be either
four or five whether it's imputed low or
imputed high does that make sense so
once again our researchers plotted the
data suspicious entries are marked with
a circle and a cross and as you can see
the suspicious entries are the ones that
deviate from the pattern that you see in
the non-cheaters the blue line so in
other words the ones that are out of
order the suspicious entries they're the
ones showing the effect but when you use
the imputed position so that's the
number that is implied by the row that
the entry was in then suddenly the
entire effect disappears and the group
of cheaters seem to show a very similar
pattern to the group of non-cheaters and
the result of this statistically
speaking is significant remember the
original p-value for this study was P
less than 0.01 but once you use the data
that's implied by the row suddenly the
significance completely disappears it
then goes to P equals 0.292 or P equals
0.180 depending on whether you're
imputing low or high remember in order
for an academic study to be significant
the standard is p less than 0.05 and
here the p is clearly more than 0.05.
again this article goes on the
Vigilantes do a little bit more research
to really back up the point and really
drive home the fact that this data is
very suspicious I won't go into the
details now again all of these articles
are linked in the description go check
them out and you'll notice that these
were all called part one part two part
three and that's because this is
actually a four part Series so I'm
expecting a fourth article to come
out after this video is published
looking at yet another study from
Francesca Gino but I hope by this point
you get the picture there's a number of
studies conducted by Francesca Gino with
very suspicious looking data so at this
point you're probably wondering how did
Harvard allow this and the short answer
is well they don't really seem to have
done if you go on Francesca Gino's page
on the Harvard website it shows that
she's on administrative leave I think we
all know what that means and Harvard who
have even more access to Francesca
Gino's data than our Vigilantes do have
since asked for several of Francesca
Gino's papers to be retracted from the
journals that they were originally
published in now this is a bad look for
Francesca Gino right and we can't be
sure that it was Francesca Gino who was
doing this manipulation it could be one
of her co-authors but given that she's
the Common Thread between all of these
different papers it seems pretty likely
that it was her in the world of
psychology and writing good quality
academic papers this is really bad it's
not only bad for Francesca Gino but it's
bad for the field as a whole it casts
doubt over the entire field of Behavioral
Science because we don't know the
extent of the damage that bad actors
like Gino have been causing in the field
and for how long like I said Francesca
Gino has been a prominent name in the
field for years gaining a position at
one of the top universities Harvard so
who's to say that this isn't a problem
that is Rife amongst many other
researchers in the field we certainly
hope not but you can't really know when
somebody's so high profile like this has
been engaging in this kind of behavior
for years and getting away with it it
also looks bad for people like me who
work in the industry who trust these
academics to publish good quality
research that we then use to try and
influence real world change in
businesses and government and so on and
so forth like I said I've used Gino's
work before to make recommendations to
my clients and I've recommended to you
guys to read Dan Ariely's book the
honest truth about dishonesty in the
past a book which I no longer recommend
since the paper that was talked about in
the first article here is used heavily
as a reference for a lot of the claims
that Ariely is making in that book and
while it's tempting here to just
completely lay into Francesca Gino and
just you know really have a go at her
for this kind of bad behavior I actually
kind of understand why she did it right
if you're an academic at a top
institution like Harvard you are under
an enormous amount of pressure to
publish surprising results and
consistently surprising results with big
effect sizes are more likely to get
published in top journals get more
press interviews and basically cement
your position there at a top university
like Harvard so there is a strong
incentive for academics to fudge data
like this and come up with more
surprising results in order to try and
maintain their position I'm not
condoning the behavior in the slightest
it's completely unacceptable that an
academic would do this but I can
somewhat empathize that she's under a
lot of pressure and can see how the
incentives are working against the
practice of following good science but
what do you guys think of Francesca Gino
in all of this nonsense let me know in
the comments below please go read the
articles that are in the description
thank you to Uri Joe and Leif for
publishing This research you guys are
absolute Legends and Francesca Gino if
you're watching this video I know you
must be going through a really rough
time right now to have your sort of
entire career ripped away from you so
publicly like this while I think that
what you did is completely unacceptable
please don't do anything stupid with
your own life you're still a valuable
human being but thank you guys so much
for watching and I'll see you next time
bye