NE499/515 - Lecture 10: Safety Culture and the Boeing 737 MAX Airplane Crashes

Nuclear Engineering Lectures
28 Sept 2021 · 16:13

Summary

TL;DR: This lecture addresses the importance of a healthy safety culture in preventing criticality accidents. It uses case studies, including the Space Shuttle Challenger disaster and the Boeing 737 MAX crashes, to illustrate how poor safety culture and management pressures can lead to fatal outcomes. The talk emphasizes the need for operators to feel empowered to raise safety concerns and for management to support a culture that prioritizes safety over production pressures.

Takeaways

  • 🚹 Safety culture is crucial in preventing accidents and is influenced by shared attitudes, values, goals, and practices within an organization.
  • 📈 The probability of failure can escalate rapidly, as illustrated by the hypothetical reactor test scenario, underscoring the importance of vigilance.
  • đŸ‘„ A healthy safety culture is not mandated or enforced but is cultivated through operators' attitudes and questioning of unsafe conditions.
  • 🛠 The Space Shuttle Challenger disaster was a result of poor safety culture, where management pressures overrode an engineer's critical safety warning.
  • ✈ The Boeing 737 MAX crashes were a consequence of a flawed safety culture, where cost-cutting measures and inadequate training led to tragic outcomes.
  • 🔍 A single point of failure, like the MCAS system in the Boeing 737 MAX, can have deadly consequences if not properly managed.
  • đŸ‘©â€đŸ« Training is vital; inadequate training on the MCAS system contributed to the Boeing crashes, highlighting the need for comprehensive safety education.
  • 🔄 Economic pressures can compromise safety, as seen in the Deepwater Horizon accident, where cost-saving decisions increased the risk of a blowout.
  • đŸ€ Operator involvement and management buy-in are essential for a strong safety culture, ensuring that rules are understood and followed.
  • 🔄 Routine self-assessments and audits can identify weaknesses in safety controls and are necessary for continuous improvement.
  • 🌟 Setting the right example and fostering a team mentality can encourage a proactive approach to safety and prevent accidents.
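
The factor-of-1,000 jump in the hypothetical reactor-test scenario, and the way risk compounds over repeated operations, can be sketched numerically. The two probabilities are the lecture's hypothetical figures; the 70-trial extrapolation is an added illustration, not something the lecture computes:

```python
# Hypothetical figures from the lecture's reactor-test scenario: the
# estimated failure probability jumps from 1 in 100,000 to 1 in 100.
p_initial = 1 / 100_000
p_revised = 1 / 100

print(round(p_revised / p_initial))  # 1000: the factor the lecture cites

def cumulative_failure_prob(p: float, n: int) -> float:
    """Probability of at least one failure in n independent trials."""
    return 1 - (1 - p) ** n

# Added illustration: a "not super likely" 1-in-100 risk, taken
# repeatedly, compounds quickly and crosses 50% after about 70 trials.
print(round(cumulative_failure_prob(p_revised, 70), 2))  # 0.51
```

The point of the second calculation is that accepting a "small" per-operation risk is really accepting a much larger cumulative risk if the operation becomes routine.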

Q & A

  • What is the main focus of the lecture series?

    -The lecture series focuses on nuclear criticality safety, discussing how an unhealthy safety culture can lead to criticality accidents.

  • What is the significance of the hypothetical case study presented in the lecture?

    -The hypothetical case study illustrates the dilemma of proceeding with a high-risk test under pressure, drawing parallels to real-world disasters like the Space Shuttle Challenger accident.

  • What does the term 'safety culture' refer to in the context of the lecture?

    -Safety culture refers to a set of shared attitudes, values, goals, and practices within an organization that prioritize safety in day-to-day operations.

  • Why is it crucial for operators to be comfortable raising safety concerns?

    -Operators must feel comfortable raising safety concerns to prevent accidents, as they are often the first to notice potential hazards in their work environment.

  • What role did a poor safety culture play in the Boeing 737 Max crashes?

    -A poor safety culture at Boeing led to the design of the Maneuvering Characteristics Augmentation System (MCAS) with a single point of failure, inadequate pilot training, and a failure to address warning signs, contributing to the crashes.

  • How did management pressures contribute to the Space Shuttle Challenger disaster?

    -Management pressures led to the decision to launch the Challenger despite safety concerns raised by an engineer about the cold temperatures affecting the shuttle's O-rings.

  • What are some ways to cultivate a healthy safety culture in an organization?

    -Cultivating a healthy safety culture involves getting operator involvement, management buy-in, routine self-assessments, audit closeout meetings, tracking corrective actions, and promoting a questioning attitude.

  • Why is it important for criticality safety engineers to network with each other?

    -Networking allows criticality safety engineers to share experiences, learn from mistakes, and find better ways to ensure safety, as illustrated by the case of an expert noticing an abnormal condition at Y-12.

  • What is the significance of the ANS standards mentioned in the lecture?

    -The ANSI/ANS-8 standards, specifically ANSI/ANS-8.19 and ANSI/ANS-8.20, provide guidelines for developing and maintaining a healthy safety culture in nuclear and other high-risk industries.

  • How can operators be encouraged to follow safety rules?

    -Operators are more likely to follow safety rules if they understand their purpose and see the rules as making everyone safer, rather than as inconvenient restrictions.

Outlines

00:00

🚀 The Impact of Safety Culture on Criticality Accidents

This paragraph introduces the concept of safety culture and its critical role in preventing accidents, particularly in the context of a hypothetical nuclear reactor test. It presents a scenario where a new reactor design is to be tested with a significant risk of failure. The dilemma of proceeding with the test despite the risk is highlighted, drawing parallels to the Space Shuttle Challenger disaster, where poor safety culture led to a tragic outcome. The importance of fostering a culture that values safety over operational pressures is emphasized.

05:02

✈ The Boeing 737 MAX Tragedy: A Case Study in Safety Culture

Paragraph 2 delves into the technical flaws and safety culture issues that led to the Boeing 737 MAX disasters. It explains how the Maneuvering Characteristics Augmentation System (MCAS) was designed with a single point of failure and was not properly communicated to pilots, leading to two fatal crashes. The narrative underscores the consequences of management pressures that override safety concerns, resulting in inadequate training and a lack of transparency about the MCAS system.

10:03

đŸ› ïž Cultivating a Strong Safety Culture in Operations

This section discusses strategies for developing a robust safety culture, particularly in the context of fissile material operations. It emphasizes the need for operator involvement, management support, and routine self-assessments. The paragraph also touches on the importance of addressing economic pressures that can compromise safety. It provides examples of how historical events, management decisions, and economic factors have influenced safety culture and led to accidents.

15:04

🌐 Networking and Learning for Enhanced Safety Culture

The final paragraph focuses on the value of networking and continuous learning in enhancing safety culture. It mentions the role of professional societies and standards in sharing best practices and learning from mistakes. The paragraph concludes with a call to action for attendees to engage with these resources and to consider the broader implications of safety culture in their work.

Keywords

💡Nuclear Criticality Safety

Nuclear criticality safety refers to the practices and measures taken to prevent an unintended, self-sustaining nuclear chain reaction, which could release a lethal burst of radiation. In the context of the video, it is central to the discussion on reactor operations and the importance of safety culture in avoiding accidents like the one described in the hypothetical case study.

💡Safety Culture

Safety culture encompasses the values, attitudes, goals, and practices that contribute to the safety of an organization's operations. It is not just about rules and regulations but also about the mindset and behavior of individuals within the organization. The video emphasizes that a healthy safety culture is crucial in preventing accidents like the Space Shuttle Challenger disaster.

💡Hypothetical Case Study

A hypothetical case study is a fictional scenario used to illustrate a point or explore a concept. In the video, it is used to discuss the ethical and safety dilemmas faced by engineers and managers when deciding whether to proceed with a risky test on a nuclear reactor prototype.

💡Feedback Mechanisms

Feedback mechanisms in a nuclear reactor are systems that provide information about the reactor's status and can help control its operation. The video mentions that cold temperatures in Idaho have made these mechanisms unstable, highlighting the importance of reliable feedback for safe reactor operation.

💡Risk Acceptance

Risk acceptance is the understanding and agreement to proceed with an action despite the potential for negative outcomes. The video discusses how the reactor operators voluntarily accepted the risk involved in the test, which is a critical aspect of safety culture and decision-making in high-stakes operations.

💡Space Shuttle Challenger

The Space Shuttle Challenger disaster is a historical event used in the video to illustrate the consequences of poor safety culture. The decision to launch despite warnings led to the deaths of all seven astronauts, underscoring the importance of listening to engineers' concerns.

💡Boeing 737 Max

The Boeing 737 Max is mentioned as an example of how poor safety culture can lead to catastrophic accidents. The video discusses how management pressures and a lack of proper training contributed to two fatal crashes, emphasizing the need for a strong safety culture in all aspects of aviation.

💡MCAS (Maneuvering Characteristics Augmentation System)

MCAS is an automated flight control system on the Boeing 737 Max that was implicated in two fatal crashes. The video uses MCAS to illustrate how a single point of failure and lack of proper understanding by pilots can lead to disaster, highlighting the need for comprehensive safety considerations in system design.
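
The single-point-of-failure argument can be made concrete with a toy probability comparison between acting on one sensor and requiring two independent sensors to agree. The per-sensor false-reading rate below is invented for illustration; it is not Boeing's actual figure:

```python
# Toy comparison: single-sensor trigger vs. two-sensor cross-check.
p_false = 0.001  # assumed false-reading probability per sensor (made up)

# Single-sensor design (as on the 737 MAX): one bad reading triggers MCAS.
p_single = p_false

# Cross-checked design: a false trigger needs both sensors wrong at once
# (assuming independent sensor failures).
p_cross_checked = p_false ** 2

print(round(p_single / p_cross_checked))  # 1000: a thousandfold reduction
```

Whatever the true sensor failure rate, requiring agreement between independent sensors multiplies small probabilities together, which is why voting logic is a standard defense against single points of failure.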

💡Economic Pressures

Economic pressures refer to the financial incentives or constraints that can influence decision-making. In the video, it is discussed how such pressures may have contributed to the Deepwater Horizon accident, where cost-saving decisions increased the risk of a blowout.

💡Self-Assessments

Self-assessments are internal evaluations conducted by an organization to identify strengths and weaknesses in their operations. The video suggests that these assessments can help develop a safety culture by providing opportunities for improvement and learning from mistakes.

💡Root Cause Analysis

Root cause analysis is a method of problem-solving that focuses on identifying the underlying causes of issues rather than just the symptoms. The video mentions that self-assessments should prompt corrective actions, which can involve root cause analysis to prevent recurring problems.
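
One simple way to act on this idea is to tally abnormal events from self-assessments and flag anything that recurs for root cause analysis. A minimal sketch, with hypothetical event names and an assumed recurrence threshold:

```python
from collections import Counter

# Hypothetical log of abnormal-event categories from self-assessments.
events = [
    "tank overfilled", "label missing", "tank overfilled",
    "door left open", "tank overfilled",
]

RECURRENCE_THRESHOLD = 3  # assumed policy: 3 repeats suggest a root cause

counts = Counter(events)
needs_root_cause = [e for e, n in counts.items() if n >= RECURRENCE_THRESHOLD]
print(needs_root_cause)  # ['tank overfilled']
```

A recurring event that keeps needing the same corrective action is exactly the "underlying root cause lurking about" that the lecture warns of.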

Highlights

Unhealthy safety culture can increase the likelihood of criticality accidents.

Hypothetical case study of a high visibility advanced reactor development program.

Risk assessment of a reactor test with a 1 in 100,000 chance of failure.

The dilemma of proceeding with a test despite potential funding loss.

Engineer's warning of unstable feedback mechanisms due to cold temperatures.

The ethical decision-making involved in conducting a risky test.

The real-life example of the Space Shuttle Challenger disaster.

The role of safety culture in the Challenger disaster and its implications.

Definition of safety culture as shared attitudes, values, goals, and practices.

Importance of operators questioning what could go wrong and refusing unsafe operations.

The impact of poor safety culture on multiple nuclear and non-nuclear accidents.

Case study of the Boeing 737 MAX and the Maneuvering Characteristics Augmentation System (MCAS).

The design flaws and single point of failure in the MCAS system.

The inadequate training and lack of awareness of the MCAS system among pilots.

The consequences of ignoring warning signs and the subsequent accidents.

The role of historical events and management pressures in shaping safety culture.

The influence of economic pressures on safety decisions and their impact on accidents.

Strategies to cultivate a strong safety culture in fissile material operations facilities.

The importance of operator involvement, management buy-in, and routine self-assessments.

The value of audit closeout meetings and tracking corrective actions.

Encouraging a safety goal setting and questioning attitude among operators.

The significance of setting the right example and fostering a collaborative environment.

The benefits of networking and continuous learning for criticality safety engineers.

Transcripts

[00:00] Hello everyone, and welcome back to the nuclear criticality safety lecture series. Today we're going to discuss how having an unhealthy safety culture can make criticality accidents more likely, and let's begin by discussing a hypothetical case study. Let's say that you're in charge of a very high visibility, 20 billion dollar advanced reactor development program, and your team has just finished developing a prototype for a new reactor design concept at the Idaho National Laboratory. You plan to begin operating and testing this prototype for the very first time, which will be a very high visibility event. The entire nuclear engineering world will be watching, and there is a 1 in 100,000 chance that this test will end in failure, killing the seven reactor operators. Would you perform this test?

[00:48] It will be viewed as a public failure if you decide to delay this test. If you delay this test for too long, then the funding agencies are likely to run out of patience and cancel your expensive reactor development program. You've spoken with the seven reactor operators, and they each understand the risk involved with this test and have voluntarily accepted this risk. So would you perform this test?

[01:10] Let's say that you do decide to perform this test, and that on the morning of the test, one of the engineers says that the cold Idaho temperatures have made the reactor's feedback mechanisms unstable. The engineer now says that there is a 1 in 100 chance that this test will fail and kill the seven operators. However, the other engineers disagree. Would you still perform this test? Seeing this probability of failure quickly increase by a factor of 1,000 is certainly worrying, but the operators are still willing to accept this risk of failure, and a 1 in 100 failure rate still isn't super likely. So, be honest: would you perform this test, or would you risk closing your reactor development program and suffering a potentially fatal blow to your otherwise bright career?

[01:57] It turns out that this hypothetical scenario isn't hypothetical at all, but instead of testing a reactor prototype, this test really was launching the Space Shuttle Challenger. One engineer thought that the cold temperatures during that morning would make the shuttle's O-rings fail to seal, allowing hot, high-pressure gas from the burning solid fuel to escape and destroy the shuttle, but management decided to overrule this one engineer's advice and to launch anyway. Because of its poor safety culture, NASA allowed management pressures to override the concerns of an experienced engineer and caused the deaths of all seven astronauts aboard the Challenger.

[02:36] So what is safety culture, and how do we make sure that fissile material operations do not fall into the same trap as NASA did? Safety culture is a set of shared attitudes, values, goals, and practices within an organization. Safety culture is not an enforcement issue, and it's not how many safety training videos you make your staff suffer through. It's the attitude that operators bring with them during normal day-to-day operations. You cannot cultivate a healthy safety culture by mandating it or by enforcing it. You must make sure that operators approach a task with safety in mind, and that they question what could go wrong and what doesn't feel right. They must be comfortable raising these concerns, and with refusing to perform an operation if things feel unsafe.

[03:24] So how important is it to have a healthy safety culture? Well, a poor safety culture has factored into multiple nuclear and non-nuclear accidents, including the Fukushima Daiichi accident, the Space Shuttle Challenger disaster, multiple criticality accidents (most notoriously the Russian criticality accidents), and the Boeing 737 MAX airplane crashes, which we will now discuss.

[03:48] In 2016, the new Airbus A320neo commercial airliners entered service. These airplanes were bigger, cleaner, and as much as 15 percent more fuel efficient than competing designs, and by October of 2019 the Airbus A320 had surpassed the Boeing 737 as the best-selling airliner. Boeing responded to this competition by upgrading their existing 737 design. They chose to simply upgrade the design to avoid the costly process of completely recertifying their design and retraining their pilots. The design upgrades caused the engines to be moved forward on the airplane, which changed the plane's center of gravity and its aerodynamics. This introduced some instability to the airliner, and so they introduced the Maneuvering Characteristics Augmentation System, or MCAS, to compensate.

[04:41] The MCAS system used one of two sensors on the airplane to detect instability. If it detected instability, then it would respond automatically to re-stabilize the airplane. Re-stabilizing the plane involved pushing the airplane's nose down, which would also cause the plane to lose altitude.

[04:59] Unfortunately, the MCAS system's design introduced some potentially deadly consequences. Because only one of its two total sensors was needed to trigger its response to an instability, the MCAS system was designed around a single point of failure, and it was likely to see false positives for unstable conditions. Furthermore, the MCAS system's actuator, which again would lower the airplane's nose, was set to respond automatically. Pilots would be flying the plane, and all of a sudden the plane's nose would lower and it would lose altitude. This might not be an issue if the pilots knew what was going on and how to respond to it, but unfortunately the MCAS was not mentioned in the flight crew operations manual, and moving the airplane's control yoke would not disengage the MCAS. So when the MCAS activated, some pilots didn't know how to turn it off.

[05:55] As it was designed, the MCAS system required proper installation of both sensors to be effective. Improperly installed sensors would falsely trip the MCAS and cause planes to lose altitude. Unfortunately, Boeing decided to reduce the number of MCAS sensors from two to only one, which made our single point of failure even worse. To make matters even worse, pilots were improperly trained on the MCAS system. Many pilots only learned about the MCAS from a two-hour iPad training video. Because of this, pilots began reporting MCAS-caused issues early in 2018. These conditions could not be replicated, most likely because they were caused by the MCAS sensor randomly failing or being iced over. Because they couldn't replicate the conditions, the problem was incorrectly assumed to be resolved; they just thought that it went away on its own.

[06:51] On October 29th of 2018, Lion Air Flight 610 took off from Jakarta, and the pilots quickly experienced difficulty controlling the airplane after takeoff. The airplane's MCAS system was falsely detecting an unstable condition and, as you can see in this plot, kept responding by trying to lower the airplane's nose multiple times. Because of this, the pilots were unable to gain much altitude, reaching only about 5,500 feet, until the MCAS lowered the nose one final time, causing the airplane to crash into the Java Sea, killing all 189 passengers and crew. As a result of this accident, the U.S. FAA and Boeing issued warnings and training advisories to all 737 MAX series operators, but these advisories were not fully implemented.

[07:42] Several months later, a similar accident took place on March 10th of 2019. The MCAS system activated in error during Ethiopian Airlines Flight 302, causing the airplane to continuously lose altitude and crash into the ground at nearly 700 miles per hour, which killed all 157 passengers and crew upon impact. Boeing tried to cover up the cause of these accidents but was later sued for fraud and settled for 2.5 billion dollars in damages. Additionally, all 737 MAX airliners across the globe were grounded following these accidents. In November of 2020, the FAA allowed the 737 MAX airliners to re-enter service, subject to a list of mandated design changes and training changes.

[08:30] This example shows how a poor safety culture at Boeing led to these accidents. Boeing engineers succumbed to management pressures and designed a workaround to avoid a costly 737 recertification process. This workaround was one single-point failure away from causing dangerous conditions. Some pilots weren't even aware that the MCAS was installed on their planes, and they certainly didn't know how to deactivate it in case it was triggered in error. The natural, intuitive way of deactivating the MCAS, which was to move the control yoke, didn't actually work; it didn't actually shut off the MCAS. Lastly, when the warning signs of an accident appeared, Boeing assumed that the problem had resolved itself, and they failed to investigate further.

[09:14] So, as we see, a facility's safety culture is influenced by, one, historical events. If the operators have always done it that way and didn't run into problems in the past, then they're likely to assume that it's safe to do things that way, even if operating procedures or the site license forbid doing things that way. We saw this during the Tokaimura accident, where operators multi-batched 18.8 percent enriched uranium, probably because they had already done it in the past for low-enrichment uranium with no consequences.

[09:46] Another factor affecting a site's safety culture is management changes or pressures. NASA managers allowed the Space Shuttle program to unduly pressure them to launch the Challenger. After the accident, the Rogers Commission recommended that NASA restructure the Space Shuttle program's management to prevent project managers from being pressured by the Space Shuttle organization to launch under unsafe conditions. We see this effect often in criticality safety: management pressure has led to multiple criticality accidents, which is why the ANSI/ANS standards state that a criticality safety program should remain independent of operations. The crit safety staff should not be subject to production pressures.

[10:28] Economic pressures can also affect a site's safety culture, and as we've seen, these pressures can also lead to criticality accidents. Economic pressures also played a role in the 2010 Deepwater Horizon accident, where decisions that BP, Halliburton, and Transocean made regarding the rig's blowout preventer made a blowout significantly more likely. On November 9th of 2010, a report by the oil spill commission criticized the rig's poor management decisions and stated that there had been a "rush to completion" on the well. These management decisions were made to save money and to get the oil rig up and running faster, and the co-chair of the oil spill report was quoted as saying that "there was not a culture of safety on that rig."

[11:12] So, in light of these accidents, how do we cultivate a strong safety culture in fissile material operations facilities? Some things that we can do include, first, getting operator involvement and buy-in. Operators are more likely to follow rules if they understand them, especially if the rules can be inconvenient. Operators need to understand why rules exist and to understand that these rules make everyone safer. Operations staff also probably know the facility better than the crit engineers, and often they can offer valuable insight on potential upset conditions or easy but effective ways to implement criticality safety controls. Management also needs buy-in to criticality safety. They're the ones who are likely to pressure operations to skirt the rules to save time, and they also have access to resources to help develop and sustain a criticality safety program.

[12:05] Routine self-assessments can also help to develop a culture of safety in fissile material operations facilities. These assessments can include walk-throughs, where operations staff show crit engineers how they usually perform their work, which provides an opportunity for crit engineers to identify strengths and weaknesses in their criticality safety controls. These walkthroughs can also help to identify the root causes of any current or likely abnormalities, since they give us a chance to see how things are really done in practice. It is worth noting that these self-assessments are only as good as the corrective actions that they prompt; identifying a problem and doing nothing about it won't make anything safer.

[12:45] Along these lines, having audit closeout meetings allows engineers and operators a chance to reflect on how well the criticality safety controls are operating under normal conditions and also after an abnormal condition arises. These meetings allow us to document compliance with criticality safety controls and to assess the adequacy of the posted warnings and controls, the availability and continued use of the controls, and the adequacy of the existing criticality safety evaluations.

[13:12] Tracking corrective actions also helps to develop a safety culture. By tracking these actions, we can ensure that they have been implemented, notice if they have been removed or are no longer functioning, and reflect on the root causes of an issue. If we continuously have to implement a corrective action in response to a seemingly random but recurring abnormal event, then chances are that there's some underlying root cause lurking about.

[13:39] Managers should also encourage safety goal setting and a questioning attitude in their facilities. If operators understand that going home safe each night is the goal of crit safety, and that they're welcome to ask questions about things that seem off, then they're more likely to notice potentially dangerous upset conditions, or, even better, to proactively suggest ways to make operations safer.

[14:02] We should also seek to instill the attitude of "we're in it together" rather than "it's us versus them." Criticality safety engineers should be seen as an ally to operations, not as an adversary. Operators who think that you're only there to give them a hard time aren't very likely to pay much attention to your suggestions. They're much more likely to cooperate and to proactively work with you when they understand that you're both on the same team.

[14:29] Lastly, we should also strive to set the right example. Saying that someone asked a stupid question discourages everyone in the room from ever asking a question in the future. Instead, we should all demonstrate our commitment to safety and show that we want to help operators do their jobs. We should also seek to hire knowledgeable instructors and make sure that management continuously demonstrates that it values safety.

[14:53] Additionally, it's very important to send our crit safety engineers to conferences and to support them in participating on the ANSI/ANS standards committees. These activities get crit engineers talking to one another, which allows them to share their stories and their experiences, to learn from each other's mistakes, and maybe to learn better ways to accomplish their jobs. One of our homework assignments will cover a case study where an expert criticality safety engineer from BWXT noticed an abnormal condition at Y-12. This engineer called another expert criticality safety engineer to bounce ideas off of and to review his calculations. He might not have known another expert safety engineer that he could reach out to with no notice had they not already met and become friends over many years at American Nuclear Society conferences. Networking isn't just about getting a job offer; having a healthy network of colleagues in the criticality safety field allows us to help each other and to learn from each other.

[15:51] This concludes our lecture on safety culture. If you're interested in learning more about growing a healthy safety culture, then I recommend reading the ANSI/ANS-8.19 and ANSI/ANS-8.20 standards. In the following lectures, we will continue looking into criticality safety from an operator's perspective and will discuss ways to facilitate positive interactions with operations staff.


Related Tags
Safety Culture · Nuclear Reactors · Space Shuttle · Boeing 737 MAX · Case Studies · Risk Management · Engineering Ethics · Disaster Analysis · Safety Protocols · Preventive Measures