Moral Models: Crucial Decisions in the Age of Computer Simulation
Summary
TL;DR: The video discusses the role of computer models in decision-making, particularly during the COVID-19 pandemic. It highlights how models, such as those from Neil Ferguson's team, influenced unprecedented interventions. The script explores the limitations and moral implications of relying on models that fail to account for societal impacts such as economic inequality and mental health. The speaker emphasizes the need for transparency, public involvement, and careful treatment of uncertainty to avoid eroding trust in science or letting it be used as a tool to silence differing perspectives.
Takeaways
- 🔍 People want certainty from science, especially during crises, but models provide predictions, not absolute truths.
- 💻 Computer models significantly influenced global decisions during the COVID-19 pandemic, particularly Neil Ferguson’s model from Imperial College London, released in March 2020.
- 📊 COVID models guided major decisions, such as lockdowns, school closures, and business shutdowns, using detailed projections about life-and-death outcomes.
- 💡 These models often failed to account for indirect effects, like the impact on economies, healthcare systems, mental health, and developing countries.
- 👨‍💻 The 'laptop class' benefitted more from pandemic policies than those working in essential jobs, exacerbating social inequalities.
- 🔄 Models are not morally neutral; they reflect the values and priorities of those who design them, leading to outcomes that favor certain groups over others.
- 🤔 Early pandemic models underestimated the uncertainties involved, and modellers often did not revisit or adjust their forecasts as more data became available.
- ⚖️ Science and models should inform decisions, but they cannot substitute for moral judgment or societal debate about values and priorities.
- 🔬 Trust in science is a valuable resource, and misuse of models as rhetorical tools can erode public confidence in scientific decision-making.
- 🗣️ Claims like 'following the science' can silence legitimate debates over values and uncertainties, making it crucial to consider broader perspectives when using models for public decisions.
Q & A
What is the primary reason people tend to trust scientific models, especially during crises?
- People often trust scientific models during crises because they seek certainty. They believe that advanced technological tools can predict the future with precision, offering virtuous, scientific, and morally neutral guidance.
How did computer models influence decision-making during the COVID-19 pandemic?
- During the COVID-19 pandemic, computer models, such as those developed by Neil Ferguson's team at Imperial College London, played a significant role in decision-making. These models predicted the impact of various interventions, guiding policies that affected millions of lives globally.
What was unprecedented about the use of models in the COVID-19 pandemic?
- The unprecedented aspect was how central computer models became in guiding interventions that affected nearly everyone. These models provided detailed predictions, down to the number of lives that could be saved or lost, influencing policies in the UK, US, and globally.
What important factors did early COVID-19 models fail to consider?
- Early COVID-19 models largely ignored broader social and economic costs, such as strain on healthcare systems, damage to the economy, knock-on effects for developing countries, the harms of social isolation, and the exacerbation of existing inequalities.
Why did the pandemic response benefit certain groups over others?
- The pandemic response benefited people similar to the modellers (those in 'laptop class' jobs) because the interventions they recommended (e.g., remote work) suited their circumstances, while people in roles like meatpacking or grocery delivery faced greater hardships.
What moral challenges arise when creating models for public decision-making?
- Creating models for public decision-making involves making choices about what to represent, and these choices are never morally neutral. They can favor certain values, often reflecting the interests of the modellers rather than those affected by the policies.
Why should modellers consider the broader social impact of their recommendations?
- Modellers should consider the broader social impact to ensure fairness, especially in how their recommendations might affect people unlike them. For example, school closures may disproportionately harm children from disadvantaged backgrounds.
How did the phrase 'following the science' influence public debate during the pandemic?
- The phrase 'following the science' was often used to shut down alternative viewpoints, suggesting that certain policies were scientifically justified without considering differing values or uncertainties. This silenced legitimate debate.
What is the role of uncertainty in scientific models, and how should it be handled?
- Uncertainty is a fundamental aspect of scientific models. Good models account for a range of possible outcomes by exploring different parameter values. During the pandemic, some models did not fully address these uncertainties, leading to overconfidence in their predictions.
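One common way to "account for various possible outcomes" is to sample the uncertain inputs and report a range rather than a single point forecast. Below is a minimal, hypothetical sketch of that idea; the growth model and the 3–7 day doubling-time range are illustrative assumptions, not values drawn from any real COVID model:

```python
import random

# Hypothetical example: an early-epidemic case projection where the only
# uncertain input is the doubling time. Instead of publishing one point
# forecast, we sample the parameter and report an interval.
random.seed(0)

def cases_after(days, doubling_time, initial_cases=100):
    """Deterministic exponential growth for one doubling-time guess."""
    return initial_cases * 2 ** (days / doubling_time)

# Suppose early estimates of the doubling time plausibly span 3 to 7 days.
samples = [cases_after(30, random.uniform(3.0, 7.0)) for _ in range(10_000)]
samples.sort()
low, median, high = samples[500], samples[5000], samples[9500]
print(f"30-day projection: {low:,.0f} to {high:,.0f} cases (median {median:,.0f})")
```

Even this toy version makes the speaker's point: a modest-looking uncertainty in one input (doubling time 3 vs. 7 days) spreads the 30-day projection across roughly two orders of magnitude, which a single headline number would hide.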
What is the long-term risk of using scientific models as a rhetorical tool in policymaking?
- The long-term risk is eroding public trust in science. If models are used to justify pre-existing decisions or as tools to silence debate, it can undermine the credibility of science and diminish its role in guiding future decisions.
Outlines
🤔 The Influence of Models on Decision-Making
The speaker reflects on how people turn to science, particularly computer models, for certainty, especially in crises. They discuss how models, like those used during the pandemic, gained unprecedented influence in guiding critical decisions, such as those made by the U.K. and U.S. governments. These models were seen as providing scientifically-backed choices that seemed morally neutral, but the speaker suggests that their role in decision-making was profound and not without complexity.
📉 Shortcomings of Early COVID Models
The speaker examines how early COVID models neglected to consider broader societal and economic costs, such as the effects on healthcare workers, the economy, and developing countries. They argue that models often overlooked social impacts like income inequality and the challenges faced by those outside the 'laptop class,' highlighting a failure to reflect on how interventions would affect marginalized communities. This led to policies that disproportionately benefited certain groups while harming others.
🔬 Parameter Sensitivity in Models
Here, the speaker explores the uncertainty inherent in models, particularly regarding parameter choices. They emphasize that many early COVID models, especially in March 2020, were highly sensitive to input values that weren't carefully examined. This led to overly confident predictions that should have accounted for a wider range of possible outcomes. The speaker suggests that modellers should have been more transparent about the uncertainties in their forecasts as the pandemic unfolded.
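The sensitivity the speaker describes can be illustrated with a toy compartmental model. The sketch below is not Ferguson's model; it is a minimal discrete-time SIR simulation with hypothetical parameter values of the kind that were genuinely uncertain in March 2020 (the reproduction number R0 and the infection fatality rate, IFR):

```python
# Illustrative only: a toy SIR (susceptible-infected-recovered) model.
# It shows how strongly a headline projection (total deaths) depends on
# two inputs whose true values nobody could pin down early in 2020.

def sir_deaths(r0, ifr, population=60_000_000, days=365,
               infectious_period=7, initial_infected=100):
    """Run a discrete-time SIR epidemic and return projected deaths."""
    gamma = 1 / infectious_period   # daily recovery rate
    beta = r0 * gamma               # daily transmission rate
    s, i, r = population - initial_infected, initial_infected, 0
    for _ in range(days):
        new_infections = beta * s * i / population
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return r * ifr                  # deaths among those ever infected

# Sweep a grid of equally plausible early-2020 guesses.
for r0 in (2.0, 2.5, 3.0):
    for ifr in (0.005, 0.009, 0.015):
        print(f"R0={r0:.1f}  IFR={ifr:.1%}  deaths ≈ {sir_deaths(r0, ifr):,.0f}")
```

Running the sweep shows projected deaths varying several-fold across parameter combinations that were all defensible at the time, which is precisely the exploration of "different possible values of that parameter" the speaker argues should have accompanied the forecasts.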
⚖️ Models and Moral Choices
This section addresses the idea that models cannot be morally neutral, because decisions about what to include and how to represent data are inherently value-laden. The speaker discusses how different modelling choices favor different moral frameworks, as illustrated by Nevada's decision to keep casinos open while closing houses of worship during the pandemic. They argue that models must reflect a range of public values to ensure fair decision-making and suggest that more public involvement in modelling is needed.
🧠 The Limits of 'Following the Science'
The speaker critiques the use of the phrase 'following the science,' arguing that it often oversimplifies complex decisions. They contend that policy choices are not purely scientific but also involve appraisals of different harms, benefits, and uncertainties. The speaker warns against using science as a rhetorical device to silence legitimate debate, emphasizing the importance of maintaining public trust in science by using models as genuine tools for reasoning, not as tools to justify predetermined outcomes.
Keywords
💡Computer modelling
💡Pandemic mitigation
💡Neil Ferguson model
💡Uncertainty in modelling
💡Moral neutrality
💡Socioeconomic inequality
💡Public involvement in modelling
💡Economic impact
💡Trust in science
💡Policy-making
Highlights
People tend to trust forecasts from advanced technological models, especially in crises, believing these models provide scientific and morally neutral guidance.
Models play a crucial role in policy-making during crises, such as the COVID-19 pandemic, where computer models significantly influenced decision-making.
In the early days of the COVID-19 pandemic, computer models, like those from Neil Ferguson’s team at Imperial College London, guided massive interventions to prevent widespread deaths.
COVID models often focused on infection and death rates but omitted key social, economic, and healthcare impacts, such as the consequences of school closures, economic disruption, and harms to mental health.
The pandemic showcased how certain societal groups, particularly the 'laptop class,' fared better due to their work-from-home abilities, while others faced harsher consequences from policies recommended by models.
Models often did not consider how interventions, such as school closures, would disproportionately affect marginalized or less privileged communities, exacerbating inequalities.
Decision-makers should critically assess model parameters and explore a range of outcomes rather than relying on single forecasts, particularly in unprecedented crises.
COVID models were highly sensitive to certain parameter values, which could dramatically change predictions, yet this uncertainty was often not communicated or explored thoroughly.
There was a lack of transparency and public involvement in COVID model development, leading to models that reflected the values and perspectives of a narrow group of experts.
Public trust in science is fragile, and using scientific models as rhetorical devices in political discourse risks eroding this trust.
Policies justified as 'following the science' often ignore that models are not morally neutral and depend on assumptions that align with particular value systems.
In the pandemic, Nevada's decision to keep casinos open while closing houses of worship illustrates how models and policies reflect different value judgments.
There are no universally 'correct' decisions about prioritizing public health versus economic or social needs, and models should account for differing societal values.
The phrase 'following the science' was used to silence debate, implying that alternative viewpoints were anti-science, which risks deepening political and social divides.
Future public decision-making should include broader involvement in model development, reflecting diverse values and perspectives to ensure fairer policy outcomes.
Transcripts
[♪♪♪]
It's understandable
that people want science to give them certainty.
They want to know, if I do this, what will happen?
We don't generally think
that people can predict the future,
but when we're told
that the forecast is being made
by this big, fancy technological device,
it's tempting, in a crisis,
to think that it's telling you what's going to happen
and to think that somehow
choices that are guided by that
are sort of virtuous,
and scientific,
and well-informed,
and, in some sense,
morally neutral.
In a wide variety
of applications of modelling in general,
in health economics,
in pandemic mitigation,
in climate science,
you are using a model
to evaluate an action that you might take
and to evaluate
what the likely harms and benefits of that action are.
And I think it's had a pretty profound influence.
In fact, I think this is
probably the episode in human history
in which computer modelling
has affected the course of human events
more than ever before.
And that's in part
because early on in the pandemic,
we started contemplating pretty massive interventions.
Many of these were argued for by using modelling,
particularly a model developed by Neil Ferguson
and his team
at the Imperial College of London,
which came out in the middle of March of 2020
and which tried to show
that certain interventions that we might engage in
and the absence of certain interventions
would lead to massive differences
in the number of people who would die from the pandemic.
The degree to which
it took centre stage in decision-making, I think,
is the main way in which it was unprecedented.
So, we all know now, right,
that this group
brought their model
directly to
the Boris Johnson administration in the U.K.,
and had a very large influence
on how the United States responded.
And it was, I think--
that was really quite unprecedented
that we were making decisions
that affected
virtually everybody's life in ways--
in deeply-profound ways.
That was mostly guided
by this kind of computer modelling.
It was mostly guided
by a kind of reasoning that went like this--
Look, we have this model,
and we can set the model to, you know, business as usual,
keep doing what you're doing.
We can set the model to some intermediate strategy,
maybe close schools for a few weeks,
close large events,
shut down maybe certain transportation networks.
Or we can set the lever to maximum suppression, right?
Close all non-essential industry,
close all restaurants,
close all gatherings,
keep schools closed.
And we can use the model
to tell us
in really rather fine-grained and precise terms, right?
This model was making
not broad, qualitative predictions,
but it was making very detailed projections
about what would happen in each of those scenarios,
right down to the number
of people that would live and die.
I don't think anything quite that dramatic
has ever happened to human beings.
The combination
of the dramatic nature
of the interventions being suggested,
the essential role
that the modelling was playing in that,
the seriousness
of the forecasts being made by the modelling.
And this was forecasts
about hundreds of thousands,
if not millions,
of human lives being saved or lost.
So I think it's hard to think of a case--
a past case in history--
where a model was sort of brought out
and said, "Look,
I can tell you in an extremely fine-grained way
what are going to be the consequences
of the different choices that you might make."
So people have been using models
to try to both influence policy-makers,
to do various things,
and also to convince the public
that the things that they were being asked to do
or to abide
were being effective.
Pretty clearly, early on, COVID models left out costs.
There's nothing in the model
about what will happen to hospital systems
if the healthcare workers' children
are forced to stay home.
There's nothing in the model
about what will happen to the economy
if some of these non-essential businesses
are forced to be closed.
There's nothing in the model
about what the downstream effect of that will be
on developing countries who are our trading partners.
There's nothing in the model
about what doing all of this
will do to efforts
to vaccinate children against preventable diseases.
There's nothing in the model about what this will do to...
if you make people be socially isolated,
what this will do to drug abuse,
suicide.
There's, of course,
nothing in the models about those likely effects.
One thing, I think, that's worth thinking about
is who was involved in making these models
and who benefitted from the policies.
So, it's...
it's tempting to be overly crude about this,
I think,
but it's also...
quite dramatic,
the extent to which
the people who did best during the pandemic,
how much they were like
the people that built the models.
To realize that those of us
who you might think of as being in the laptop class, right?
Those of us
who do our work at desks on the Internet,
how much better we fared in the pandemic
than people who work in meatpacking plants,
or deliver us our groceries,
or live in parts of the world
where the Internet's not available.
It's pretty clear
that the people who did best during the pandemic,
as a direct result
of the interventions
that were recommended by the modellers,
resembled the people who did the modelling.
So it might have behooved them
to think a little bit about
what it would mean for people unlike them
to have schools closed.
To think a little bit about
what this would do, for example, to income inequality.
It's not difficult, I think,
to predict
that if you close schools
and you limit learning opportunities
to children
who come from families
that have great facility with computers and the Internet
and who have the opportunity
to stay home with their children,
it's not hard to predict
that those people will fare better
than the people who, let's say,
come from single-parent households,
that are racialized,
or whose parents
keep the water running
for the people in the laptop classes.
It's not hard to predict
that some of these interventions
would exacerbate those kinds of inequalities.
So it might have been more reflective
for the people making those models
to have thought a little bit more
about how people impacted by those models
would be different from them,
and to then include in the models
predictable results
of the kinds of policies that they were recommending
and how those results would impact people unlike them.
We understand that the people who do modelling
are going to come from a particular segment of society.
That's unavoidable, right?
I work in a meatpacking plant,
I'm not the person
that's going to come in
and build your COVID model for you.
But please, right,
remember that we exist.
And please be attentive to the fact
that when you model
the impacts of the interventions that you're suggesting,
that you be a bit reflective
about what are considerations
that are going to matter to people who aren't like you.
Suppose you're facing the decision
of whether to close houses of worship
in a community
in which the pandemic is spreading.
You need to be able to make the factual forecast
of what biologically will happen
as a result of closing the houses of worship or not.
How much spread are you going to prevent
of the pandemic by doing this?
And what will be
the downstream effects of this on people's health,
on the healthcare system,
on all the different sort of downstream possible places
in which the spread could impact?
All of the predictive juices of a model like that
come from the choices one makes
about what those parameter values are.
So, in other words,
if you tell children they can no longer go to school
and they should stay home,
then sure, it's predictable
that the flow of the virus among children will be reduced.
But by how much?
And particularly with
a novel pathogen like SARS-CoV-2,
these were not values
that anybody could have claimed
to have grounds
for making precise estimates of them.
So one of the things
that I think people ought to have been asking early on,
when modellers came out and said,
"Here's what will happen
if you engage
in this intervention or that intervention,"
is they should have said,
"Do you know exactly what value you should have put in,
in that place in the model?
And did you--"
and this is the crucial point, right? I think--
"did you explore what happens in that model
if you look at
different possible values of that parameter?
All of which might be equally reasonable."
One thing we didn't know back in March of 2020,
but we do know now,
is that many of these models
were very sensitive to those values
in ways that weren't carefully explored.
Now, was it reasonable
to expect these modellers
to have that all at their fingertips,
you know, one week into the pandemic,
when crisis decisions had to be made?
Probably not.
But was it reasonable for them
to come back to us, slowly over time,
and say,
"Okay, we came out early with these forecasts,
but in retrospect,
we didn't explore
the full degree
of possible uncertainty for models in this,
and now we're back
to tell you that, actually,
we're less certain about the forecasts that we made
than we were before,
and, in fact, these ranges of possible values
are consistent
with what we know about how this pathogen will behave."
If we had a perfect model,
arguably, we would have a duty
to always trust it
and to always make our decisions
in accord with what the model said.
But we don't ever have perfect models,
and as a result of that, I think,
we have, in many respects,
the opposite duty.
We have the duty
to treat, always, models as tools--
tools that we can use
to help us reason
about what our degree of uncertainty is
about the world,
to help us reason about how we should act,
to help us reason
about what the likely impact of our choices are,
but never as pure surrogates
for our own judgment and reflective decision-making.
But we also have to remember
that, for most people,
the scientific process is relatively opaque.
It's not realistic
for even the most sophisticated laypeople,
or, really,
even for the most sophisticated,
scientifically-trained people from neighbouring fields,
to have a full grip
on what's going on
inside the scientific machinery
of a model that's being used to guide decision-making.
All models make forecasts, or predictions,
or design recommendations,
or forecast the impact of possible interventions
in ways that are uncertain.
Models are imperfect.
That is a fundamental feature of models,
that they are imperfect,
and that they...
appropriately ought to be thought of
as giving rise to uncertainties.
Good modelling always involves
some accounting
of what a reasonable degree of uncertainty
to have about a model output is.
I think it's pretty clear here, you know, in retrospect,
looking at the modelling
that was done early on in the pandemic,
that insufficient attention was paid
to what the uncertainties
around these model forecasts ought to have been.
It's important to remember
that models can't be morally neutral,
because when you build a model,
you're confronted with two kinds of choices.
You're confronted with, first of all,
choosing what's going to go into the model,
so you're choosing what to represent,
and you're choosing how to represent it.
And different choices about how to represent it
will inevitably, right,
produce forecasts
that are going to look
a little bit more optimistic or a little bit less optimistic.
And the more optimistic it is, right,
the more weighted it is
towards a certain moral framework
than another,
and vice versa.
So choices about what to put into a model
and how to put it in there
are always, inevitably,
going to be choices
that are better for one set of values
and less good for another set of values.
And being attentive to that, I think,
is an essential feature of being a good modeller,
if what you're doing
is building models for public decision-making.
At one stage in the pandemic, for example,
the state of Nevada
made the decision that casinos could stay open,
but houses of worship were being closed.
Now, there's obviously
a kind of set of values one can have
from the point of view of which that's a good decision,
and so a decision-making process
that is skewed towards keeping casinos open,
closing houses of worship
might accord with my values
better than someone else
who thinks being able
to go to their house of worship on the relevant day of the week
is the most important thing in their life.
There's no correct answer.
There's no morally-neutral, correct answer
about whether
economic development
or opportunities to worship
are more important.
That's something about which
people can have legitimately different values.
And any modelling enterprise
that doesn't pay attention
to the fact that there are
a range of possible values one can have on this question
is not going to be one
that can be used for public decision-making
in a fundamentally fair way.
COVID modelling has been
the most morally
and socially-significant modelling
in the history of human modelling.
We have made decisions based on COVID models
that have impacted lives all over the planet
in unprecedented ways.
It's not unreasonable
for the people who don't resemble the modellers
to want to stand up
and remind the modellers,
"Hey, I'm out here.
My interests matter as much as yours.
Please don't forget to incorporate into the model
aspects that might make an intervention
that hurts me quite a bit more than it hurts you
look neutral."
One way to think about how to mitigate that risk
is to have some degree
of public involvement in modelling.
And I think it's pretty clear that we haven't had that
in the modelling of the COVID pandemic.
Most of this modelling
has been coming out of isolated groups,
producing models
that were not even well understood
by the scientific community.
If the future is going to be like the last two years,
where the response to any oncoming crisis
is to employ models
to evaluate strategies
with enormous social significance,
I think we need to think a lot more
about how to make that reflective
of a wide variety of public values.
A lot of the ways
in which public debate evolved in the pandemic
was that many voices were told
that they were not following the science.
That was a phrase
that we heard a lot early in the pandemic.
"We're following the science."
"You're not following the science."
And I think, in many respects,
that phrase, "following the science,"
was used as a way
of silencing
other possible appeals to differing values
than those of the people who were setting the policy.
I think it's always, always, always important
to remember
that there's no such thing
as a policy that follows the science.
What there is, is there are--
there are desired outcomes and appraisals
of the differential value of different harms and benefits,
combined with
what the science says
about what different possible paths will result,
and an uncertainty envelope around that.
So, if someone is telling you this policy follows the science,
first and second questions you ought to ask them are,
given what desired outcomes,
and given what weightings to different harms and benefits,
and with respect
to what consideration of uncertainties.
And if those two other elements
are not added to what the science says,
then it's likely that
legitimate debate is being silenced.
Let's be clear,
there are many cases
where modelling can be used straightforwardly
to guide decision-making.
There are cases
where we have widespread agreement
about what we're trying to achieve,
what costs we're willing to bear,
and where the models themselves are--
The uncertainty that they give rise to
is sufficiently narrow
that the choices become obvious
given what you're trying to achieve.
When models get used by policy-makers
in ways that push them beyond their limits,
or when policy-makers use models
to...
simply justify decisions
that they antecedently want to make,
I think there are two costs that we bear.
One is that
the actual decisions that get made in the moment
could be bad for people.
But the other, I think,
that we always have to keep in mind
is that science plays
an incredibly important role in society.
We want science to be available as a tool
for guiding decisions,
but it can only do that
insofar as people have trust in it.
And trust is a precious resource.
Trust in science is a precious resource.
And we need to be careful not to squander it
by using scientific models as rhetorical devices
rather than as genuine tools
for exploring what we know and what our uncertainties are.
Political discussions in much of the world
have become quite a bit more divisive
than they've been in the recent past,
but I think we need to be careful
not to try
to always conceptualize that political divide
as one that's taking place
between those who are wise and who follow the science
and those who want to deny science.
I think, ultimately,
we will pay a price
in terms of the credibility that science will have
if we continue to try to use it as a bludgeon
for telling people we disagree with
that they are the science deniers.