Google Product Manager Execution Interview: YouTube Watch Time Root Cause Analysis
Summary
TL;DR: In this mock product management interview, Cherry, a former Google PM, discusses the impact of shipping YouTube comments on mobile. Despite increased comment engagement, watch time has dropped, prompting a strategic analysis. Cherry outlines a structured approach to diagnosing the issue, including setting clear launch criteria, examining user behavior, and considering UI adjustments. The discussion explores potential solutions like A/B testing different comment displays and ensuring a positive user experience, emphasizing the importance of aligning metrics with initial expectations.
Takeaways
- 📱 The scenario involves a decrease in YouTube watch time following the launch of YouTube comments on mobile devices, despite an increase in comment engagement.
- 🤔 Cherry, a former Google product manager, outlines a structured approach to diagnosing and addressing the issue, emphasizing the importance of having clear launch criteria and backstop metrics.
- 📈 Cherry suggests that the increase in comment engagement might be cannibalizing watch time, as users spend more time interacting with comments and less time watching videos.
- 🔍 She recommends analyzing the decline in watch time to determine if it's a one-time event or a progressive trend, and to check for any technical issues or regional/platform-specific problems.
- 🌐 Consideration of internationalization and localization issues is important, as different languages and text densities can affect the user interface and experience.
- 📊 Cherry proposes a series of questions to better understand the context of the decline and to identify potential causes, such as changes in user behavior or issues with the recommendation algorithm.
- 🛠️ A/B testing is suggested as a method to experiment with different UI treatments, such as reducing the number of comments displayed or adjusting the ranking of recommended videos.
- 📝 The importance of defining 'comment engagement' is highlighted, including various metrics like the number of comments created, replies, likes, and time spent on comments.
- 🚫 Assumptions about spam and abuse detection systems being in place are made, to ensure that the increase in comments represents meaningful user engagement.
- 🔄 Cherry discusses the need to continuously monitor and test to find the optimal balance between comment engagement and watch time, using data to guide decision-making.
- 🗣️ The mock interview concludes with a reminder of the importance of considering abuse and bad content growth when analyzing user engagement metrics.
Q & A
What is the main issue discussed in the mock interview?
-The main issue discussed is the decline in YouTube watch time following the launch of YouTube comments on mobile devices, despite an increase in comment engagement.
Who are the host and guest involved in the mock interview?
-The host is Kevin Way, who conducts the interview, and the guest is Cherry, a former product manager at Google who worked on YouTube and Google Maps.
What is Cherry's approach to handling the scenario of increased comment engagement but decreased watch time?
-Cherry outlines a structured approach involving clarifying terms, listing possible causes, gathering context, establishing a theory of probable cause, and testing this theory to fix the problem.
What was the expected outcome of launching YouTube comments on mobile devices according to Cherry?
-The expected outcome was an increase in comment engagement, with the assumption that if the engagement level was similar to that on desktop, then by launching on mobile, the overall comment engagement should have doubled.
What are the launch criteria and backstop metrics Cherry mentions?
-The launch criteria would be that comment engagement increases by at least 50%. The backstop metric is a maximum acceptable drop of 5% in YouTube watch time.
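As a minimal sketch, these two thresholds work as a simple decision gate. The function name and structure below are invented for illustration, using only the interview's hypothetical 50%/5% numbers:

```python
# Hypothetical launch gate using the interview's illustrative thresholds.
# Names and structure are assumptions; this is not a real YouTube metrics API.

MIN_ENGAGEMENT_LIFT = 0.50   # launch criterion: comment engagement up >= 50%
MAX_WATCH_TIME_DROP = 0.05   # backstop: watch time may drop at most 5%

def should_launch(engagement_lift: float, watch_time_drop: float) -> bool:
    """Launch only if the criterion is met and the backstop is not breached."""
    return (engagement_lift >= MIN_ENGAGEMENT_LIFT
            and watch_time_drop <= MAX_WATCH_TIME_DROP)

print(should_launch(0.50, 0.04))  # engagement +50%, watch time -4%  -> True
print(should_launch(0.50, 0.10))  # engagement +50%, watch time -10% -> False
```

Setting the gate before launch, as Cherry stresses, is what makes the post-launch call mechanical rather than a judgment made after seeing the data.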
How does Cherry suggest diagnosing the cause of the decline in watch time?
-Cherry suggests diagnosing by asking questions about the time period of the decline, checking for technical problems, considering regional and platform-specific issues, and examining the ecosystem of YouTube features.
What potential UI treatments does Cherry propose to address the issue?
-Cherry proposes reducing the number of comments displayed, adjusting the ranking of recommended videos to include shorter videos, and adding an educational tooltip to inform users that more recommendations are available below the comments.
How does Cherry define 'comment engagement' in the context of the interview?
-Comment engagement is defined as an aggregate score that encompasses the number of comments created, replies, likes, time spent scrolling on comments, and taps on the comments.
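One plausible reading of that aggregate score is a weighted sum of the component metrics. The component names come from the interview; the weights and sample counts below are invented, since the interview never specifies them:

```python
# Illustrative aggregate comment-engagement score as a weighted sum.
# The weights and sample values here are assumptions for the sketch.

def engagement_score(metrics: dict, weights: dict) -> float:
    """Weighted sum over whichever component metrics are present."""
    return sum(w * metrics.get(name, 0) for name, w in weights.items())

weights = {"comments_created": 3.0, "replies": 2.0, "likes": 1.0,
           "scroll_seconds": 0.1, "taps": 0.5}
sample = {"comments_created": 120, "replies": 80, "likes": 400,
          "scroll_seconds": 9000, "taps": 600}
print(engagement_score(sample, weights))  # -> 2120.0
```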
What is Cherry's suggestion for conducting A/B tests to identify the best solution?
-Cherry suggests setting up multiple arms for the experiment, including a control group and test groups with varying numbers of comments displayed, and comparing the impact on watch time across these groups.
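The arm setup she describes could be read out roughly like this; the per-user watch-time numbers are invented, and "control" stands for users still on the pre-comments mobile layout:

```python
# Hypothetical A/B readout: average watch minutes per user in each arm,
# converted to a decline relative to control. All numbers are invented.

control_watch_minutes = 10.0
arms = {"3 comments": 9.0, "2 comments": 9.3, "1 comment": 9.5}

declines = {name: (control_watch_minutes - mins) / control_watch_minutes
            for name, mins in arms.items()}
for name, drop in sorted(declines.items(), key=lambda kv: kv[1]):
    print(f"{name}: {drop:.0%} watch-time decline")
```

Keeping the number of displayed comments as the only variable across arms, as Cherry notes, is what lets the decline be attributed to that one change.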
How does Cherry address the potential issue of spam and abuse in the comment section?
-Cherry assumes that there is a spam and abuse detection system in place and that the increase in comment engagement is primarily due to meaningful user interactions.
What is Cherry's final recommendation for choosing the best testing arm in an A/B test?
-Cherry recommends choosing the arm that best meets the launch criteria, considering both the impact on watch time and the level of comment engagement, ensuring that the solution aligns with the initial objectives.
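That selection rule can be sketched as: filter the arms by the launch criteria, then prefer the smallest watch-time decline. The arm data below is hypothetical:

```python
# Hypothetical arm selection under the interview's launch criteria:
# engagement lift >= 50% and watch-time drop <= 5%. Data is invented.

arms = [
    {"name": "3 comments", "engagement_lift": 0.50, "watch_time_drop": 0.10},
    {"name": "2 comments", "engagement_lift": 0.50, "watch_time_drop": 0.05},
    {"name": "1 comment",  "engagement_lift": 0.40, "watch_time_drop": 0.02},
]

eligible = [a for a in arms
            if a["engagement_lift"] >= 0.50 and a["watch_time_drop"] <= 0.05]
winner = min(eligible, key=lambda a: a["watch_time_drop"]) if eligible else None
print(winner["name"] if winner else "no arm meets the launch criteria")
```

This mirrors Cherry's closing point: the arm with the smallest watch-time drop is not automatically the winner if its engagement lift falls short of the criteria set before launch.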
Outlines
📱 YouTube Comments on Mobile: Impact Analysis
The video script begins with a scenario where YouTube comments have been implemented on mobile devices, resulting in increased comment engagement but a decrease in watch time. The interviewee, Cherry, a former product manager at Google, outlines a structured approach to address this issue. She emphasizes the importance of clarifying terms, setting up launch criteria, and understanding the context before diving into potential causes and solutions. Cherry suggests considering factors such as the layout changes on mobile, the potential for reduced visibility of recommended videos, and the trade-off between time spent on comments versus watching videos.
🔍 Diagnosing the Decline in Watch Time
Cherry continues the discussion by hypothesizing potential reasons for the decline in watch time, such as technical glitches, regional or platform-specific issues, and the impact of comment section layout on mobile devices. She proposes a series of questions to gather more context and narrow down the cause. Cherry also considers whether the decline is a one-time event or progressive, and whether it is isolated to certain regions or platforms. The goal is to establish a probable cause theory and then test it to fix the problem.
🎯 Formulating Strategies to Address Watch Time Reduction
In this paragraph, Cherry focuses on identifying specific strategies to address the reduction in watch time. She suggests examining the possibility that the decline is due to users engaging more with comments and less with recommended videos. Potential solutions include adjusting the user interface to show fewer comments initially, changing the ranking of recommended videos to promote shorter videos, and educating users about the availability of more recommendations below the comment section. Cherry also mentions the importance of ensuring that the increase in comment engagement is not due to spam or abuse.
🧩 A/B Testing to Optimize YouTube Comments Feature
Cherry discusses the importance of A/B testing to find the optimal solution for the decline in watch time. She proposes setting up experiments with different numbers of comments displayed and comparing the impact on watch time. Additionally, she suggests testing UI changes, such as adding tooltips to inform users about more recommendations below the comment section. The aim is to find the right balance that minimizes the decline in watch time while maintaining high comment engagement.
🗣️ Reflecting on the Mock Interview and Considering Abuse
The final paragraph wraps up the mock interview with Cherry reflecting on the process and the importance of considering comment abuse in product management. She emphasizes that an increase in engagement can sometimes correlate with an increase in abusive content, which is a critical factor to consider when evaluating the success of a feature like YouTube comments. Cherry also discusses the importance of having clear launch criteria to guide decision-making throughout the testing process.
Keywords
💡YouTube Comments
💡Engagement
💡Watch Time
💡Product Management
💡Launch Criteria
💡Backstop Metrics
💡Trade-off
💡User Interface (UI)
💡A/B Testing
💡Spam Abuse Detection
💡Aggregate Score
Highlights
Cherry, a former Google product manager, shares her experience working on YouTube and Google Maps.
The interview scenario presents a challenge: increased comment engagement on YouTube mobile but decreased watch time.
Cherry outlines a structured approach to analyze the situation, including clarifying terms and setting up parameters.
She emphasizes the importance of having clear launch criteria and backstop metrics before rolling out a feature.
The assumption that mobile comment engagement would double due to the launch of YouTube comments on mobile devices is discussed.
Cherry suggests considering the impact of mobile real estate on the user interface and user experience.
A hypothetical success metric is proposed: at least 50% increase in comment engagement with no more than a 5% drop in watch time.
The potential trade-off between comment engagement and watch time due to user attention being a limited resource is highlighted.
Cherry proposes a series of diagnostic questions to understand the context and causes of the decline in watch time.
The importance of checking for technical issues, regional differences, and platform-specific impacts on watch time is discussed.
Potential UI treatments to address the decline in watch time, such as adjusting the number of comments displayed, are suggested.
The idea of educating users about the availability of more recommendations below the comments section is presented.
Cherry recommends investigating the recommendation video pipeline for any abnormalities causing the decline in watch time.
The necessity of ensuring that the increase in comments represents meaningful engagement and not just spam or abuse is emphasized.
A method for setting up and evaluating A/B tests to find the optimal balance between comment engagement and watch time is detailed.
Cherry shares her thoughts on the importance of considering abuse and bad content growth when analyzing user engagement.
The interview concludes with a reminder to have a structured and thorough approach when facing product management challenges.
Transcripts
you shipped youtube comments on mobile
devices
comment engagements are up but youtube
watch time is down
what do you do
[Music]
hey everyone welcome back to another
exponent product management mock
interview
my name is kevin way and on today's show
we have cherry
and before we get started could you just
tell the audience a little bit about
yourself
hey everyone i'm cherry i'm a former
product manager at google and one of the
products i worked on was at youtube
as well as google maps i'm super excited
to be doing the mock interview
thanks cherry so today let's do an
analytical or
execution type interview question and
the question i have for you today is
you shipped youtube comments on mobile
devices
comment engagements are up but youtube
watch time is down
what do you do awesome okay so just to
clarify
i'm just gonna repeat it back to help me
uh you ship youtube comments on mobile
devices
uh and comments engagement are up but
the youtube watch time
is down and uh and you wanna hear kind
of my process about handling this
scenario is that right yeah that's
correct great
um well i'll just kind of outline uh
overall how i'd like to approach this
uh so i want to kind of clarify um
you know some of the terms in our
question and set up the parameters for
our discussion
i'll list some high-level reasons that
could be causing
the problem um and try to gather context
information
maybe ask you a few questions to
understand more um and once we can start
discarding issues that are out of scope
i'll establish a theory of probable
cause
and then we can try to test this theory
and fix the problem
does that sound good yup that sounds
good sweet um so yeah
clarifying and establishing the
situation a bit further i want to be
really explicit about the goals of this
launch right we should never launch a
product
or feature blindly without any
expectation of what's gonna what's gonna
happen right launches don't go out into
a black void and then whoo
surprise like the comments engagement
went up and and this happened
um so in this particular scenario before
the launch of youtube comments
um i would have expected that me and my
team have established some launch
criteria as well as backstop metrics
i'll go ahead and take a quick stab at
sort of approximating this to help frame
our decision
uh for what to do after its launch
because i don't think we can have a
meaningful discussion about what
happened after launch until we have
success metrics so is it okay if i use
some numbers to illustrate what i mean
by this
yeah go ahead yeah okay so let's assume
um i'm just pulling a number here i just
let's just assume that 50 percent
of all youtube watch time happens on
mobile devices right
and so the future is youtube comments on
mobile i'll assume that youtube comments
on desktop has already been launched we
know that desktop came first
and so now yes we want these comments on
mobile super exciting
by launching comments on mobile we would
definitely expect that comments
engagement to go up
if comments engagement at launch is at a
similar level
to what we saw on desktop then just by
launching we should have essentially
doubled comments engagement right by
bringing it to mobile
so it's a given that comments engagement
goes up and let's say we set our success
metric
to be that comments engagement is at
least 50 percent increased
you know at minimum we have feature
performance parity
um on desktop we have the setup where on
youtube you know you have the
currently playing video on the left we
have comments below it and then a column
of recommended videos
to the right side uh so the currently
playing video
comments section and the next
recommended videos can all be visible on
the screen at the same time
considering some implications of mobile
here
real estate on phone is a lot less than
desktop
so we won't be able to kind of so neatly
fit all those
three features on the screen at one time
um
in launching comments v0 let's say we
made the decision to
you know put the comment section on top
of the recommended videos that would
normally be
below the video player so previously if
it were just videos and more videos
we now have videos comments more videos
right
and therefore it's a pretty logical
assumption that watch time is going to
be affected or
the more videos below will be less seen
right if we're showing less of those
recommendations
then the users will watch less i think
the real question here is how much are
we okay with
right there is some trade-off in a
perfect world i think everything would
go up and
you know there's no no downside at all
but
user attention is a pretty limited
resource
so if a user has 10 minutes to spend on
youtube maybe they spend seven minutes
watching a video
their first video and then they spend
like three minutes
watching a follow-up video from the
recommendations but if we're inserting
comments
maybe you spent seven minutes on the
first video and you spent three minutes
browsing comments and then we lose out
on the extra three minutes of watch time
that we might have gotten right
uh given those uh let's we assume that
you know there's gonna be some drop in
watch time
maybe we set a backstop metric of we're
okay with a five percent drop in youtube
watch time right
um i'm just saying five percent today i
mean it could be we saw something
similar when we launched comments on
desktop and we know that in the long
term
rolling out comments is a better
strategic move we have some strategy
that we're gonna see it go up in the
future
that's why we're okay with five percent
i think it's you know it's reasonable
that we assume that there will be a drop
but
we want to set the backstrap backdrop
backstop of
five percent uh does that sound
reasonable yeah i i don't think the
exact number is important here i'm kind
of curious um
what you might do here so let's let's
assume i think
let's assume that the current ordering
on mobile is video
comments and then recommended videos i
think that's fair and i'm curious where
you might go from here but i think
everything you've laid out so far
definitely makes sense
awesome yeah definitely this is just
some quick back of the napkin sort of
thinking
so we have some numbers to work with i'm
not doing anything crazy it's like 5 percent and
50 percent
right um i want to get to the post
launch evaluation part and
i think that's where it's interesting
but the reason why i wanted to take the
time to
to set this up is because it gives us a
framework right our launch criteria
is that comments engagement is 50 percent
increased and watch time does not drop
more than 5 percent
so going back to the question i shipped
youtube comments on mobile devices
comments engagement are up but youtube
watch time is down what do i do
well then that just depends entirely on
the numbers right in one case
let's say comment engagement is 50 percent
increased watch time dropped four
percent
but the the comments engagement is good
it's it's what we want it to look like
then we launch right that's what we do
yes the watch time comes down but we made
that decision
beforehand and and that's that's how you
make a clear decision like you can't
have something happen in the data and be
like well i actually don't know about
that right
so i think if we establish good launch
criteria
then less than five percent is okay yes
the watch time is down
but we're gonna launch the second case
could be
comments engagement is fifty percent
increased right but watch time dropped
ten percent
we don't launch there we need to stop we
need to dive into the data and we need
to figure out where that extra drop is
coming from
you know there are other cases too like
comments engagement only increased 10
percent but watch time dropped 20
percent or something uh we don't launch right
you see what i'm getting at but
uh for today like we'll focus on the the
comments engagement is like plus 50
percent increased uh but watch time dropped over
five percent right this is where it
gets interesting and my overarching goal
here is to identify the problem
causing this let's say ten percent drop
right
propose and execute a fix so that the
metrics get to within our launch
criteria
and then we can safely roll out the
feature um
cool so before i start breaking it down
uh there's one more thing i wanted to
clarify which is this measure of
engagement right i was kind of talking
broadly
about the 50 percent but uh how would you kind
of like to define
comment engagement is it by the number
of comments created
the number of replies to comments the
number of likes i want to comment
time spent scrolling on comments taps on
the comments um you know all of the
above
just love to know if there's anything in
particular you had in mind
so let's say the team just has some
aggregate score so everything
encompasses everything that you've
mentioned sweet okay cool
and can i also assume that we have some
spam abuse detection in place because
uh you know for a big company to roll
out a open forum
of text i think it is a risk to not
assume that we put some
measure of abuse in place so that these
are not just spam
yeah you could assume that these are
meaningful comments
okay sweet right um at a very high level
the decline in watch time is due to one
users watching
less minutes of youtube videos right and
we also know that users are creating
replying liking and sharing more
comments on youtube
so to start the diagnosing the cause of
these behaviors
i'll begin by asking the following
questions to get some more context
around
that decline in relation to the launch
of comments
so first of all let's just kind of think
about the time period right is this
decline in watch time
a one-time event um or has it happened
sort of progressively
uh essentially what does that graph look
like right
of youtube watch time if it's a one-time
thing then it's a possibility that a
technology glitch caused this problem
such as the downtime in the services
that support youtube you know this has
happened before
or is it a big drop of watch time on day
one of the launch
when everyone saw the new comments
feature and just went absolutely crazy
for it
but now it's recovering slowly so maybe
our seven day rolling average looks a
little shaky but we know that it's gonna
go back up
um i would definitely ask if there are
any technical problems that coincided ten
percent is a big drop you know check to make
sure that the watch time looks healthy
in the control group
as well you know if the decline in
engagement is progressive and the cause
is still there
then we continue diving deeper into data
right another thing just at a higher
level to think about
is the region um is the decline in
youtube watching a watch time happening
in
a isolated region you know if this is
true like the problem might be specific
to a country and it could be an issue
with international
internal lash no this word kills me
internationalization
or localization right so youtube
comments
they show text strings in rows um but
each language has different types of
text i'm just kind of
ideating here on what could be the cause
for some of these different countries
having different results um for instance
chinese characters
very compact right the same sentence
uh could be one line in chinese but take
up five lines in german which is a super
long language
uh did that impact how long our comment
section sort of expanded to
uh and therefore potentially affect the
amount of scrolling that
users have to do in one region to get to
the recommended videos below
or or somehow impact you know what
they're seeing on their screens
so region is one thing we could consider
another thing is thinking about
platforms
is the decline happening on a specific
platform such as ios
or android if so i would definitely
compare the drop of youtube watch time
engagement on each platform with
engagement across all platforms right
does something about our new feature the
way that we built it
not work as well considering unique ui
patterns of different platforms you know
we know that people swipe on ios versus
they use it back button on android do we
properly accommodate for tap target
sizes
across different screens could users be
accidentally tapping comments
and therefore driving our engagement up
uh when they actually want something
else
right so these are all things that i
want to have logging for
um have our engineers dive deeper into
to answering these questions right and
of course you know thinking
holistically is this decline in
engagement happening in other youtube
features
besides watch time so for instance are
youtube searches down
are users overall spending less time on
youtube
you know if so we've definitely got a
much bigger problem it's not just
youtube
watch time that's down we've somehow
affected the whole system and we need to
look at that
as well so i've just kind of laid that
out
uh to be fair it's probably not going to
be the case that all the above are
happening simultaneously
pretty disastrous launch um so this
series of questions for me
kind of helps me fine tune where that
problem is is really coming from and you
know
i think in a real world setting we we do
the due diligence and we get answers
um i want to be exhaustive when it comes
to listening you know all these possible
causes because
you know then we can establish the
theory of probable cause
um yeah so to continue with more
in-depth analysis should we assume that
you know it's it's not like a regional
problem or it's not platform specific
maybe it's just like a more progressive
decline um
and and therefore like we want to dive
deeper
so you've mentioned a few interesting
things about we've looked into drastic
versus steady declines
tech glitches region or platform or
anything wrong with the ecosystem on
youtube
let's say that yes it seems like this is
a
steady decline so i'm kind of curious
where you might go from here
right yeah so we'll we'll rule out let's
say we checked all the countries
looks like it's declining in all the
countries um and uh
the situation is generally not due to
some tech glitch right
um so yeah then um i want to continue
diagnosing
what could cause users to behave in the
two ways i described right which is one
they're creating replying liking more
comments so the aggregate score is going
up but two watching less video
so concerning the reduced watch time um
the most important question for me um
regarding watch time specifically is
like where is that reduction
of watch time coming from uh is the 10
percent loss
primarily coming from less engagement
with the recommended videos
that would have normally shown up above
the comments right
if so then i think there's a bunch of
like low hanging fruit that we can try
out right we can consider some
treatments
to reduce the size show less comments
and
more recommended more recommendations um
if at launch maybe we did three comments
um
and then a expansion and then the
recommended videos we could try showing
just two or just one comment to save
space to push up the recommendations
this could be something we just run
quickly to see if indeed
we we start to reduce the impact on
watch time like it doesn't have to be
the final solution i think
these are things that we can test if
users were also
like previously watching an average of
three recommendations
after their first video what what's that
change has that changed are they
like now watching zero new videos after
their first video
um i mentioned earlier that you know
users have
limited time and attention if now we're
only showing
one new recommendation above the fold so
above the point where the
user needs to scroll down um and maybe
that first video is like 15 minutes long
um maybe yeah i wouldn't want to click
on that after spending so much time
playing with these new comments right so
perhaps we can consider
a treatment where we adjust the ranking
for the new recommendations to be
shorter videos up top so that we can
kind of
you know make it make the barrier to
clicking another video lower
um so i think there's a lot of things we
can play with here in the category of
how do we make the the portion of
videos below the currently playing
videos so that all those recommendations
how can we make that more appealing how
can we uptick uh the engagement with
them because that we're seeing that
there's a lot of reduction of watch time
there right um there's another thing i
think that's worth bringing up like is
it an education problem like
do users not know you can scroll down to
see more um
more recommendations like previously
they're used to seeing their
recommendations right below the video
but we've put comments and then maybe
there's this assumption like oh it's
just the comments and that's the end
um so a potential solution here i think
is like trying to add a
tip or some education educational banner
of like
by the way like we added comments but
you can still scroll down to see all the
recommendations that you know and love
right so i think that's that's the for
me
if we answer the question of like the
reduction reduction of watch time is
coming entirely from
users clicking less recommendations um
below the fold like
let's try a couple of these ui
treatments let's see how users respond
and see if we can start to recover some
of that um that 10
percent loss and get it closer to the five
percent range right
the other thing if we go on a deeper
level is there a problem
um with the recommendation video
pipeline uh the type of video content
being shown
um our recommendations being generated
but not displayed on the screen
you know if this were the case it could
explain why users are not watching as
much
uh i mean i get that this this is a
one of those cases where we probably see
a drastic decline as opposed to a
progressive decline if we somehow
wiped the recommendations from the
screen entirely then
um we'd probably see a cliff in terms of
the engagement so the probability of
this being the cause is small
but yeah i think you know just keeping a
very tight eye on what's happening
everything uh everything that's
happening within the
uh the youtube recommendations below the
current video
um let's we can think about whether
there's been an increase in video
reports or
or user dissatisfaction on recommended
videos
you know poor recommendations could
cause users to leave the app and be a
possible cause of this progressive
decline in engagement
i'd definitely investigate this further um
i'd ask my engineering team to check
whether there are any
changes or signs that the algorithm is
performing abnormally
um or even you know the results of a
targeted attack where hackers
try to drive up malicious content in the
feed right we know that's happened
where people
will just do that and and all of a
sudden you're seeing this kind of bad
content being pushed to the top of your
recommendations
um so i think those are like if we if we
kind of pinpoint like
hey we we have this new ui where
instead of seeing my module of
recommended videos right below my
current video
it's been pushed below here like there's
a couple things i think we can try in
terms of
making that more prominent making that
first
tap into uh the the recommendations a
little bit more appealing by having a
shorter video
diving deeper to make sure that there's
nothing wrong in the pipeline overall
um and you know i also want to i think
do some due diligence and
even though we see overall that
comments is up 50 percent and we have some
aggregate score that is taking into
account abuse
um yeah like let's just make sure that
this new increase in comments is
primarily positive engagement and that
we can be sure that
it is the users that would have been
watching more video um that are
commenting right so we're making sure
that it's a one-to-one so those users
that
would have been tapping on more videos
they're the ones commenting and it's not
some
mismatch here where we're assuming that
that's the case when
maybe it's not right um so yeah i mean i
know we're roughly at time but
to summarize my approach to kind of
finding this this lower watch time and
we can go deeper is just
you know understanding that context um
being sure to that we can hone in
on the real problem to solve and discard
the things that are unrelated
um and and just continue to test each of
these probable causes to identify the
exact source
um and you know if we do find that that
thing that helps us lift
the the decline to the right level and
then i think we can safely launch the
feature
thanks cherry and before we do end the
mock i do have one follow-up question
and you you you mentioned some
interesting a b tests like reducing the
size of the comments or
changing the sort of videos that are
recommended how would you go about
testing one of these
Yeah. One example I brought up was: let's say at launch we showed three comments. To set up an experiment like this, we'd have multiple arms. You always have a control, and then you'd have an arm with three comments, an arm with two comments, and an arm with one comment, and we'd compare what the watch-time decline looks like across all four of those arms. The variable we're testing is the number of comments we show, and how that impacts the decline in watch time. We know that with our control there should be no impact on watch time, and we know that three comments, our initial launch, caused a ten percent decline. So does showing only two comments reduce that to maybe seven percent, and one comment to only five percent? That's the measure I'd be checking. We want to keep the experiment pretty clean and only test one thing. Another A/B test could be adding a new tooltip that says, "By the way, more recommendations below": a control with no tooltip, a test arm with the tooltip, and we'd compare the engagement and the impact on watch time.

And how would you choose which testing arm performed better? Let's say the two-comment arm showed a five percent decline and the one-comment arm a two percent decline. Which one would you choose?

We'd definitely choose the arm that's best, but I think it also depends on comment engagement. If comment engagement is the same for both, then let's choose the arm with the smaller watch-time decline. Where it gets a bit trickier is if, say, we only decreased watch time by two percent, but our comment engagement lift is now only 40%. Then we go back to our launch criteria, which we established way back at the beginning; that's what we need in order to launch this. The reason it's so important to keep those criteria in the back of our minds is so that we don't look at an experiment result and react with "this seems kind of bad" or "this seems kind of better." We always have what we decided at the beginning to guide us, and we know that 40% comment engagement is not what we're looking for. So even though that arm only impacted watch time by two percent, I wouldn't feel comfortable launching until we can get back to the 50%.
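The decision rule Cherry outlines can be sketched as: filter arms by the comment-engagement launch criterion first, then pick the eligible arm with the smallest watch-time decline. The threshold and arm figures below follow the numbers mentioned in the interview (a 50% engagement target, declines of 10%, 5%, and 2%); the data structure itself is a hypothetical illustration.

```python
# Sketch of the arm-selection rule: launch criteria act as a hard filter,
# and only then do we optimize for the smallest watch-time decline.
# The dict schema is an assumption for illustration.

LAUNCH_ENGAGEMENT_LIFT = 50.0  # required % lift in comment engagement

def choose_arm(arms):
    """Return the eligible arm with the smallest watch-time decline,
    or None if no arm meets the launch criterion."""
    eligible = [a for a in arms
                if a["engagement_lift_pct"] >= LAUNCH_ENGAGEMENT_LIFT]
    if not eligible:
        return None
    return min(eligible, key=lambda a: a["watch_time_decline_pct"])

arms = [
    {"name": "3 comments", "watch_time_decline_pct": 10.0, "engagement_lift_pct": 50.0},
    {"name": "2 comments", "watch_time_decline_pct": 5.0,  "engagement_lift_pct": 50.0},
    {"name": "1 comment",  "watch_time_decline_pct": 2.0,  "engagement_lift_pct": 40.0},
]
winner = choose_arm(arms)
print(winner["name"])  # the 1-comment arm is excluded: 40% lift misses the 50% bar
```

Note that the one-comment arm loses despite the smallest decline, which is exactly Cherry's point: pre-agreed launch criteria override a superficially better metric.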
Great. Well, thanks, Cherry, for the excellent mock interview. I honestly don't have much feedback to give; this was a very structured and thorough answer. I'm wondering if you have any self-feedback that you wanted to share.
Yeah. I think it's interesting to consider the implications of abuse for this question specifically. I know we decided to just assume we have some abuse system in place, but I do think it's important to consider, in real product environments as well as in interviews, that any time you have users writing comments, any time you have a free-form text box, that is a fire hose of abuse. More often than not, growth can translate into growth of abuse and growth of bad content. So when we see something like "engagement has gone up a ton," abuse should definitely be part of the question for any media site. I obviously didn't want our entire interview today to be about abuse, but if there's one thing I would have loved to go a little deeper on, it's my approach to handling that.
Got it. Cool. Well, thanks again for being on today's show; this was a great mock interview. And for the viewers: if you have a different approach to how you'd answer this, please leave a note in the comments. We'd love to hear what you have to say. Otherwise, good luck with your upcoming PM interview.

Thanks so much for watching. Don't forget to hit the like and subscribe buttons below to let us know that this video is valuable for you, and of course check out hundreds more videos just like this at tryexponent.com. Thanks for watching, and good luck on your upcoming interview.