by George Taniwaki

Which side of your face do you show when someone takes a photograph of you? Most people naturally turn right, which shows the left side of their face.

I usually turn left, showing the right side of my face, though that isn’t a natural behavior for me. I learned it as a child. My natural inclination is to face the camera directly. But in grade school, on picture day, a photographer told me not to look straight at the camera and to turn my head. When I do that, I naturally want to look left, which shows the right side of my face. This is true even though I part my hair on the left.

When asked to turn my head to the right and show the left side of my face, I feel like I am showing off.

The Feb 2014 issue of The Atlantic features a video by science writer Sam Kean that discusses this observation and a theory about why it occurs. It may add insight into the right-brain, left-brain debate.


Facing Left. Screenshot from The Atlantic


Nearly every state in the U.S. maintains a registry of people willing to become deceased organ donors. The intent of an individual to be a donor is stored as a Boolean value (meaning only yes or no responses are allowed) within the driver’s license database. Nearly all states use what is called an opt-in registration process. That is, the states start with the assumption that drivers do not want to participate in the registry (default=no) and require them to declare their desire (called explicit consent) to be a member of the registry either in person, via a website, or in writing.

One of the frequent proposals to increase the number of deceased organ donors is to switch the registration of donors from an opt-in system to an opt-out system. In an opt-out system, all drivers are presumed to want to participate (default=yes) and people who do not wish to participate must state their desire not to be listed.
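To make the mechanics concrete, here is a minimal sketch (illustrative Python with made-up names, not an actual DMV schema) showing that the only difference between the two schemes is the value recorded when a driver gives no answer.

```python
# Hypothetical sketch: opt-in and opt-out differ only in the default value
# stored when the driver makes no explicit choice.

def record_donor_flag(explicit_choice, scheme):
    """Return the Boolean stored in the license database.

    explicit_choice: True (yes), False (no), or None (driver said nothing)
    scheme: "opt-in" or "opt-out"
    """
    if explicit_choice is not None:
        return explicit_choice        # explicit consent or explicit refusal
    return scheme == "opt-out"        # default: no under opt-in, yes under opt-out

print(record_donor_flag(None, "opt-in"))   # False -> not on the registry
print(record_donor_flag(None, "opt-out"))  # True  -> presumed to be a donor
```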

Let’s look at the logical and ethical issues this change would present.

Not just a framing problem

Several well-known behavioral economists have stated that switching from opt-in to opt-out is simply a framing problem. For instance, see chapter 11 of Richard Thaler and Cass Sunstein’s book Nudge and a TED 2008 talk by Dan Ariely using data from papers by his colleagues Eric Johnson et al., in Transpl. Dec 2004 and Science Nov 2003 (subscription required).

The basic argument is that deciding whether to donate organs upon death is cognitively complex and emotionally difficult. When asked to choose between difficult options, most people will just take the default option. In the case of an opt-in donor registration, this means they will not be on the organ donor registry. By switching to an opt-out process, the default becomes being a donor. Thus, any person who refuses to make an active decision will automatically become a registered organ donor (this is called presumed consent). This will increase the number of people in the donor registry without causing undue hardship since drivers can easily state a preference when obtaining a driver’s license.

However, these authors overlook two important practical factors. First, switching from opt-in to opt-out doesn’t just reframe the decision the driver must make between two options. It will actually recategorize some drivers.

Second, it changes the certainty of the decision of those included in the organ registry, which affects the interaction between the organ recovery coordinators at the organ procurement organization (OPO) and the family member of a deceased patient.

There are more than two states for drivers regarding their decision to donate

Note that the status of a driver’s intent to be an organ donor is not just a simple two-state Boolean value (yes, no). There are actually at least three separate states related to the intention to be an organ donor. First, upon the driver’s death, if no other family members would be affected, would she like to be an organ donor (yes, no, undecided)? Second, has she expressed her decision to the DMV and had it recorded (yes, no)? Finally, would she like her family to be able to override her decision (yes, no, undecided)? The table below shows the various combinations of these variables.

Category | Driver would like to be organ donor | Driver tells DMV of decision | Driver would permit family to override decision | Comment
1a | Yes | Yes | No | Strong desire
1b | Yes | Yes | Yes or Undecided | Weak desire
2a | No | Yes | Yes or Undecided | Weak reject
2b | No | Yes | No | Strong reject
3a | Yes | No | Yes, No, or Undecided | Unrecorded desire
3b | No | No | Yes, No, or Undecided | Unrecorded reject
4 | Undecided | Yes or No | Yes* | Undecided

*No or Undecided options make no sense in this context
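To keep the bookkeeping in the next two sections straight, here is a small sketch (illustrative Python of my own, not part of any registry system) that encodes the three variables and the category labels from the table above.

```python
# Hypothetical encoding of the category table above.
# Each entry: category -> (would like to donate, told DMV, permits family override)
CATEGORIES = {
    "1a": ("yes", "yes", "no"),                # strong desire
    "1b": ("yes", "yes", "yes or undecided"),  # weak desire
    "2a": ("no",  "yes", "yes or undecided"),  # weak reject
    "2b": ("no",  "yes", "no"),                # strong reject
    "3a": ("yes", "no",  "any"),               # unrecorded desire
    "3b": ("no",  "no",  "any"),               # unrecorded reject
    "4":  ("undecided", "yes or no", "yes"),   # undecided
}

# Example: the categories whose wishes never reach the DMV database.
unrecorded = [c for c, (_, told_dmv, _) in CATEGORIES.items() if told_dmv == "no"]
print(unrecorded)   # ['3a', '3b']
```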

Opt-in incorrectly excludes some drivers from the donor registry

Now let’s sort these drivers into two groups: those on the organ donor registry and those not on the registry.

Under the opt-in process, only drivers in categories 1a and 1b are listed on the organ registry. These drivers have given explicit consent to being on the registry. Drivers in categories 2a, 2b, 3a, 3b, and 4 are excluded from the registry. Thus, we can be quite certain that everyone on the registry wants to be a donor. (There is always a small possibility that the driver accidentally selected the wrong box, changed their mind between the time they obtained their driver’s license and the time of death, or a computer error occurred.)

In most states the drivers not on the organ registry are treated as if they have not decided (i.e., as if they were in the fourth category). When drivers not on the registry die under conditions where the organs can be recovered, the families are asked to decide on behalf of the deceased.

Under an opt-in process, drivers in category 2b are miscategorized. They don’t want to be donors and don’t want their family to override that decision, but the family is still allowed to decide. The drivers in categories 3a and 3b are miscategorized as well. The ones who don’t want to be donors (3b) are likewise forced to let their families decide. The ones who want to be donors (3a) are left hoping their families decide to donate.

Opt-out incorrectly includes some drivers in the donor registry

Under an opt-out process, drivers in categories 1a, 1b, 3a, 3b, and 4 are grouped together and placed on the organ registry. If the donor registry is binding and the family is not allowed to stop the donation, then the process is called presumed consent. (Note that many authors use opt-out and presumed consent interchangeably. However, they are distinct ideas. Opt-out is a mechanical process for deciding which driver names are added to the registry. Presumed consent is a legal condition that avoids the need to ask the family for permission to recover the organs.)

Drivers in category 3a who wanted to be registered are now correctly placed on the registry. But any drivers in category 3b who don’t want to be on the registry are now assumed to want to be donors, a completely incorrect categorization. Similarly, all drivers in the fourth category who were undecided are now members of the definite donor group and the family no longer has a say.

Only drivers in category 2a and 2b are excluded from the registry. We can be quite certain these people do not want to be donors. But some (category 2a) were willing to let the family decide. Now they are combined with the group of drivers who explicitly do not want to donate.

The distribution of categories into the registry under the opt-in and opt-out processes, and how each group is treated, is shown in the table below.

Process | Categories added to donor registry | Categories not added to donor registry | Implications
Opt-in | 1a, 1b (both treated as if in category 1a, explicit consent) | 2a, 2b, 3a, 3b, 4 (all treated as if in category 4, family choice) | Drivers on the registry are nearly certain to want to be donors. The actual desire of drivers not on the registry is ambiguous.
Opt-out | 1a, 1b, 3a, 3b, 4 (all treated as if in category 1a, presumed consent, or 1b, family choice) | 2a, 2b (both treated as if in category 2b, explicit reject) | Drivers not on the registry are nearly certain not to want to be donors. The actual desire of drivers on the registry is ambiguous.
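As a worked check on the table above, here is a short sketch (again my own illustrative Python, using the same hypothetical category labels) that partitions the seven categories under each process.

```python
# Hypothetical sketch: how each registration process partitions the categories.
ALL_CATEGORIES = ["1a", "1b", "2a", "2b", "3a", "3b", "4"]

def registry_membership(scheme):
    """Return (on_registry, off_registry) lists of categories for a scheme."""
    if scheme == "opt-in":
        on = ["1a", "1b"]                    # only explicit consent
    elif scheme == "opt-out":
        on = ["1a", "1b", "3a", "3b", "4"]   # everyone who did not explicitly refuse
    else:
        raise ValueError("unknown scheme")
    off = [c for c in ALL_CATEGORIES if c not in on]
    return on, off

for scheme in ("opt-in", "opt-out"):
    on, off = registry_membership(scheme)
    print(f"{scheme}: on registry {on}, off registry {off}")
```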

 

Ethical implications of misclassification

If there are no drivers in categories 3a, 3b, and 4, then switching from opt-in to opt-out will have no impact on the size of the donor registry. However, if there are any drivers in these categories, then some will be incorrectly categorized regardless of whether opt-in or opt-out is used. This miscategorization will lead to some ethical problems.

Under opt-in, there may exist cases where the driver has made a decision to donate (category 3a) or not to donate (categories 2a, 2b, or 3b), but family members overrule it. These errors are hard to avoid because they are caused by a lack of agreement between the driver and other family members.

However, under opt-out combined with presumed consent, there may exist cases where neither the driver (category 3b) nor the family wants to donate, but they cannot stop it. Similarly, the driver may have wanted to let the family choose whether to donate (category 4), and the family does not want to donate but cannot stop it.

From an ethical perspective, then, opt-in appears less likely to create a situation where an individual’s right to decide how their body is treated is denied. For further discussion of the ethical issues, see J. Med. Ethics Jun 2011 and J. Med. Ethics Oct 2011 (subscription required).

Next we will look at the impact switching from opt-in to opt-out will have on the interaction between the organ recovery coordinator and the family. See Part 2 here.

[Update: This blog post was significantly modified to clarify the “decision framing” issue.]

Yesterday’s New York Times has an incredibly detailed and sad story about the final hours onboard the Deepwater Horizon drilling vessel.


Deepwater Horizon drilling vessel prior to capsizing and sinking. Photo from NY Times

I wrote about the disaster and the role poor risk management played in it in two Jun 2010 blog posts.

In a Dec 2009 blog post, I wrote that too many patients with end-stage renal disease (ESRD) are waiting for a deceased donor kidney. They would have a much shorter wait and experience better outcomes if they could find a live kidney donor. I am currently working with Harvey Mysel and the Living Kidney Donors Network to set up a program in Seattle to provide training to patients to give them the tools and the confidence to find a donor.

Part of my effort includes learning as much as I can about working with patients. I have plenty of experience in public speaking, having been a market research consultant. But in that case the audience consists of highly driven business executives. I have some experience working with disadvantaged populations, having been a volunteer tutor in an adult literacy program. But I do not have any experience working with medical patients. How does one instruct and motivate kidney patients who are quite ill? Even more concerning to me, can I effectively work with patients who have behavioral or emotional problems that make me uncomfortable? What about physical appearance? The leading causes of kidney failure are diabetes mellitus and hypertension, both of which are highly correlated with obesity. Will I consciously or even unconsciously blame overweight patients for their disease? Hopefully, just knowing I have a potential bias may help prevent me from allowing it to affect my ability to help.

While pondering this, my wife forwarded an article entitled “How clinicians make (or avoid) moral judgments of patients: implications of the evidence for relationships and research” that appeared in Philosophy, Ethics, and Humanities in Medicine Jul 2010. It is a review of 141 articles on how clinicians form moral judgments regarding patients and how those evaluations affect empathy, level of care, and the clinician’s own well-being. Just reading the list of references to the article is an eye opener. Below are some selected quotes from the article.

“The paucity of attention to moral judgment, despite its significance for patient-centered care, communication, empathy, professionalism, health care education, stereotyping, and outcome disparities, represents a blind spot that merits explanation and repair… Clinicians, educators, and researchers would do well to recognize both the legitimate and illegitimate moral appraisals that are apt to occur in health care settings.”

“[T]he treatment of medically unexplained symptoms… varied by patient ethnicity, physician specialty, the spatial layout of the clinic, and the path sequence of patient contact with physicians and ancillary personnel.”

“[N]urses judged dying patients by their perceived social loss, often giving ‘more than routine care’ to higher status patients and ‘less than routine care’ to the unworthy. People dying from a Friday night knife fight, or the adolescent on the verge of death who has killed others in a wild car drive, have their own social loss reinforced by an ‘it’s their own fault’ rationale.”

“The patients and physicians were able to gauge whether the other liked them, and that perception predicted whether they themselves liked the other. Physicians liked their healthier patients more than their sick patients, and healthier patients liked their physicians more. Physician liking predicted patient satisfaction a year later.”

“Poor patients belong to outgroups of particular interest in healthcare. Public hospitals serving these groups comprise only 2% of acute care hospitals in the United States but train 21% of doctors and 36% of allied health professionals. Primary care physicians serving poor communities are often troubled by what they perceive as their patients’ inadequate motivation and dysfunctional behavioral characteristics.”

“One of the factors that may prevent clinicians from triggering moral appraisals is interest, often equated with curiosity… Good teachers have stressed the value of curiosity for clinical care… ‘One of the essential qualities of the clinician is interest in humanity, for the secret of the care of the patient is in caring for the patient.’”

“Once a stimulus–or perhaps patient, for our purposes–appears beyond one’s comprehension and ability to manage, interest wanes. These appraisals mediate individual personality differences in curiosity and the experience of interest… [W]e can use interest to self-regulate our motivation. When intrinsic motivation lags, we can activate strategies to engage our interest and thereby remain motivated for the task.”

Most people are bad at thinking about low-probability events, and their eyes can glaze over when they think about very small or very large numbers. Further, how the data are framed has a big impact on how you react to them.

To take a personal example, I’m about to go in for surgery to donate a kidney next week. [Update: My surgery has been postponed, but that doesn’t affect this analysis.] The chances of me dying are very small, only about 0.02%. I guess that seems very safe. Now let’s frame it differently. There are about 6,500 live kidney transplants a year, which means on average one or two donors die each year. Now my surgery seems a lot more dangerous. By changing from a percentage to an actual count, the number seems more personal. I can imagine that the person who dies is me.
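Here is the arithmetic behind the two framings, as a quick sketch using the figures quoted above (the 0.02% mortality rate and roughly 6,500 live kidney transplants per year):

```python
# Same risk, two framings.
mortality_rate = 0.0002          # 0.02% chance of dying, per donor
transplants_per_year = 6500      # approximate live kidney transplants per year

expected_deaths = mortality_rate * transplants_per_year
print(f"Per patient: {mortality_rate:.2%} chance of dying")
print(f"Per population: about {expected_deaths:.1f} donor deaths per year")
# Roughly 1.3, i.e. "one or two donors" per year.
```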

Here’s another example. According to the United Network for Organ Sharing (UNOS), as of June 24, there are 85,512 kidney patients in the U.S. waiting for a kidney transplant. It seems like an impossible task to find enough donors to help them all. Assuming no other additions or removals in the next week (which isn’t quite true), that number will drop to 85,511 after I complete my donation. It seems my contribution is insignificant. But I can frame the problem in another way. The Univ. Washington Medical Center, where my surgery will take place, has 416 people on the waiting list. After my donation, it will be 415. This makes the impact of my one donation seem a lot bigger.

To make the task more general, there are 249 transplant centers in the U.S. that perform live donor transplants, meaning an average of about 343 patients per hospital. Finding about 343 more people (per year) to donate seems like a solvable problem. This isn’t an unachievable task. I just need to find a group of live donor champions at each hospital, probably previous nondirected donors. Then I need to convince them that they just have to help each of the roughly 343 patients (on average) at each hospital find a donor and the waiting list will go away. To make the task seem even smaller, I can state the problem as finding one donor for each patient on the waiting list. Now it really seems easy. Small integers have a concrete aspect to them. Very large numbers or very small fractions do not, because you just can’t picture them in your mind.
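The same trick, as another quick sketch with the counts quoted above (85,512 patients nationwide, 416 at one center, 249 centers performing live donor transplants):

```python
# The same waiting list at different scales.
waiting_list_us = 85512       # UNOS kidney waiting list (count quoted above)
uw_waiting_list = 416         # Univ. Washington Medical Center waiting list
live_donor_centers = 249      # U.S. centers performing live donor transplants

per_center = waiting_list_us / live_donor_centers
print(round(per_center))      # about 343 patients per center, on average
print(uw_waiting_list)        # 416 at one large center
# Finding one donor per patient, center by center, feels far more tractable
# than finding 85,000+ donors nationwide, even though the totals are identical.
```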

Keeping this smaller goal in my head should help keep me motivated as I prepare my outreach efforts to help kidney patients find live donors. (Yes, I’m fooling myself, which violates my Real Numeracy credo. So what?)

Question | Choice A | Choice B
Which option seems riskier? | A 0.02% chance of dying from surgery (outcome per patient) | 1 to 2 deaths per year (outcome per population)
Which task seems harder? | Find 85,510 donors for the kidney patients on the UNOS waiting list (count per population) | Find 1 donor for each kidney patient on the UNOS waiting list (count per patient)

This is a continuation of yesterday’s blog post on BP’s culture of risk.

The causes of the recent accident on the Deepwater Horizon and the resulting Macondo oil spill are still under investigation, but it appears there was no single failure. Instead, there was a chain of decisions and events like the one described in the previous blog post for the Ixtoc I oil spill. Some details have been revealed by congressional investigators. The Wall St. J. has reproduced the letter addressed to BP’s chairman from the House Committee on Energy and Commerce. Yesterday’s New York Times has an excellent long article on design weaknesses of blowout preventers.

I won’t speculate about the exact decisions that led to the accident on the Deepwater Horizon rig. I presume that a lot of work went into the design and specification of the equipment, materials, and processes. However, the main contributor to the accident may have been a culture at BP that encouraged engineers to engage in risk creep, to ignore the impact of low-probability, high-cost events, and that rewarded overconfidence. I will discuss these in detail in the next sections.

BP has a reputation of taking on expensive, high-risk engineering projects. It was a participant in the construction of the Trans-Alaska Pipeline, it invests in Russia and Kyrgyzstan, and it was the lead developer of the Thunder Horse PDQ platform, the world’s largest and most expensive offshore platform, which nearly sank after its commissioning in 2005. BP has an explicit strategy of seeking the biggest oil fields in the Gulf of Mexico, even if it means drilling in deep waters far from shore.


Thunder Horse platform. Photo from Wikipedia

Nothing attracts top engineering talent like big challenges and an opportunity to work on high-profile, big-budget projects. BP provided plenty of that with its Gulf Coast projects. The ability to handle the low temperatures and high pressures at the bottom of the gulf, combined with the ability to accurately guide the drill bit at extreme depths, is an amazing technical achievement. But pushing these limits can also lead to cost overruns and schedule slips. Combined with the pressure to meet budgets and deadlines, it can lead to accidents.

Allowing risk creep

Good engineering practice requires that designs outside the known limits (called the design envelope) be done as experiments, preferably in a laboratory setting, preferably by PhDs who have extensive knowledge of the phenomena being studied, and that lots of data be collected so that the design can be standardized and repeated with confidence. That is, you want to get to the point that the design is easy to replicate and if you don’t make any avoidable mistakes, it works. However, this doesn’t appear to be what happened in the evolution of deepwater oil drilling. Instead, engineers built deeper, more complex wells without testing their designs adequately prior to implementation.

There are four factors that lead to risk creep. First, long periods of “safe” operation reinforce the belief that the current practices and designs are sufficient. Guess how many wells have been drilled offshore in the Gulf of Mexico since the Ixtoc I accident in 1979? How about 50, or 200, or even 1,000? Not even close; try over 20,000. There have been 22 blowouts. But not all wells are the same; the newer wells are deeper, with colder temperatures and higher pressures. Overcoming the belief that long stretches with few accidents mean everything is well understood and under control is really hard, especially as firms compete with each other to meet production targets and minimize costs.

Second, very little time is spent reflecting on past failures. Failures don’t just mean accidents. For every well blowout, there are thousands of near-miss incidents where dangerous unexpected kicks or casing damage occurred. Most engineers consider it a burden to conduct safety reviews, file incident reports, and attend project post-mortems. Time spent doing this is less time spent on new projects. But reviews allow engineers to see trends. They can also help encourage more of the behaviors that led to good results and eliminate those that caused problems.

Third, engineers may believe that extrapolating current designs to new conditions doesn’t require peer review. Nobody likes to have their work reviewed by outsiders. And managers don’t want to spend the time and money to do it. Unless a deliberate effort is made, the practice never takes hold. Similarly, when time-sensitive decisions must be made, it is easier to forge ahead with the current plan (or a quickly improvised new plan) than to stop and consider alternatives.

Finally, the risk may be growing so slowly that nobody who works in the field day-to-day notices that the process is actually out of control.

Ignoring rare events

In his book The Black Swan: The Impact of the Improbable, Nassim Nicholas Taleb points out that humans are prone to two deceptions. First, we think that chaotic events have a pattern to them. That is, we believe that the best way to predict the future is to look at the recent past. Second, we underestimate the importance of rare events. In fact, we believe that rare events are not worth planning for since they are too infrequent to care about. Tony Hayward, the CEO of BP, called the Macondo oil spill a one-in-a-million event. (It wasn’t; it is closer to 1 in 1,000.) But even if it were, the enormous consequences mean that there is no excuse for not including it in planning at the top levels of the company.
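The “closer to 1 in 1,000” figure follows from the well and blowout counts cited in the risk creep section above; here is the back-of-the-envelope arithmetic (my own rough calculation using those rounded numbers):

```python
# Rough blowout frequency from the counts cited earlier.
wells_drilled = 20000   # offshore Gulf of Mexico wells since Ixtoc I (approximate)
blowouts = 22

rate = blowouts / wells_drilled
print(f"about 1 blowout per {1 / rate:,.0f} wells")   # ~1 in 909
# Far from "one in a million", and close to 1 in 1,000.
```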

The Black Swan. Image from Amazon

Rewarding overconfidence

As I mentioned earlier, engineers (and many other professionals) are rewarded for being confident in their projections. Managers select projects based on how confident they are about the chance of success. And they are influenced by the confidence of the engineer proposing the project. So everyone learns to speak with more confidence than is safe.

However, overconfidence doesn’t require an external reward. For example, I believe that I am a better-than-average driver. I believe I can navigate icy roads safely and can handle any emergency situation. Everyone believes this. When I first get on an icy road, I drive slowly until several drivers pass me. Then I speed up to match the speed of the other drivers and start passing other cars myself. I know I shouldn’t do this, but I do it anyway. I haven’t been in an accident, so that reinforces my behavior. Similarly, every time I get into my car I don’t explicitly consider the chance that I might kill someone. But I should. And I should be reminded of my fallibility and the dangers every few minutes, lest my attention wander. I should drive every second as if someone will, not just could, die every time I make a mistake.

Proposals for reducing risk

The solution to oil spills is not to stop drilling offshore because the technology is inherently unreliable and unsafe, as some writers recommend. Rather, it is to assume that equipment can fail, that hurricanes will strike, that unexpected rock formations exist, that mistakes in selecting the right mud will be made, and that pressure to meet schedules and budgets will always exist, and then to design a mitigation for each.

First, engineers need to admit that they are running experiments whenever they are designing and building something that is even slightly beyond the scope of an existing project. Once engineers admit that what they are doing is an experiment, not just following a recipe in a cookbook, they will be more cognizant of the need to consider the risks, examine alternative methods, take care when collecting data, and spend more time analyzing the data after the project ends. Managers also need to consider each project an experiment and remember that experiments can fail. They must be willing to nurture calculated risk taking. They must also be willing to accept the cost of mitigation (or the cost of the consequences). It appears that BP’s managers failed at this.

Second, engineers need to be more open about their work. In other fields like physical science and medicine, researchers are encouraged to disclose the results of their work and solicit peer review. Engineers rarely publish their findings, for two reasons. First, they are not paid to. Second, nearly all of their work is considered proprietary by management. Even work that would benefit the industry as a whole, like new safety ideas or techniques to protect the environment, is often hidden from competitors. The government needs to encourage or enforce the sharing of safety data, require public reporting of near-miss incidents, and set standards for best practices. Currently, the government relies too heavily on industry expertise. To adequately police the industry, the government needs to start hiring engineers as regulators, recruiting at top universities, paying competitive salaries, and conducting its own research.

Unfortunately, I don’t have high hopes that government regulators, investors, and managers will learn the correct lessons from the Macondo oil spill. Rather than looking at the systemic causes of accidents, we will ban offshore drilling for a few months to assuage the public. Then regulators will write new rules, like requiring acoustic transducers, to show they are getting tough and reforming the industry. But they won’t do anything that actually encourages critical thinking or creates processes that channel engineers to do the right thing. Then once the public outcry dies down, new technology, risk creep, and overconfidence will return. But they will be invisible until the next accident happens and we are all left wondering again how something awful like that could happen in America.

[Update1: On June 22, a federal judge issued an injunction that struck down the Obama administration’s six-month offshore drilling ban. The Justice Department is preparing an appeal.]

[Update2: I just noticed a really eerie coincidence. In the sixth paragraph, there is a hyperlink to a report that provides the counts of total offshore oil wells and blowouts. The report is dated April 20, 2010, the same day as the Deepwater Horizon accident.]

[Update3: There is a recent AP story that points to some of the same human errors as this blog post.]

The recent fatal accident on the Deepwater Horizon rig and the resulting oil spill from the Macondo well are horrible tragedies. In addition to the deaths of eleven workers, the pictures of the dead wildlife and the despoiled coast are heartbreaking. Since most of the oil is hidden below the surface of the water, the ill effects may last for decades and the long-term environmental, economic, and health consequences are still unknown.

The visible part of this disaster will cause immediate changes in offshore oil regulation, engineering decisions, and corporate behavior. Unfortunately, I don’t think the government or BP will address the underlying causes of the accident. This means that these root causes will not be corrected. Congress is good at holding hearings, finding scapegoats, and passing laws. Regulators are good at making rules, though they often have trouble enforcing them. But the real problems are harder to fix.

Those real problems are risk creep, overconfidence in one’s own abilities, and the inability of humans to base short-term, day-to-day decisions on the impact of low-probability, high-cost events that may occur in the future. These problems are not specific to offshore oil drilling and are endemic to many situations.

A typical public hearing places too much emphasis on affixing blame and putting in place onerous rules to avoid a repeat of the proximate cause of the accident. No effort is made to consider the incentives and psychology that lead people to rely on technology and past good luck to an extent that makes accidents almost inevitable.

I should reveal my own potential bias. I have a degree in chemical engineering. I’ve worked around oil and gas wells. I’ve seen fires and accidents and know of coworkers burned and sent to the hospital. I also have a personal interest in BP. I worked for several years at Amoco, which was acquired by BP. As a result of my employment, I owned a large chunk of BP stock and have a defined benefit pension funded by BP.

Lessons from Ixtoc I

When I was in college, there was a blowout on the Ixtoc I well in the Gulf of Mexico. The accident led to the largest oil spill in history (until now). About 10,000 barrels a day of oil leaked out and it took nine months to cap the well.


Ixtoc I oil spill. Photo from Wikipedia

During drilling, engineers working for Pemex, the well owner, asked the drilling contractor Sedco (now part of Transocean) to replace the heavy drilling mud with a lighter one to speed up drilling. This is risky. A region of soft rock was hit, and the lighter mud flowed into the porous rock faster than expected. The operators were unable to pump in more mud fast enough to maintain the hydrostatic pressure. This is a dangerous situation because loss of pressure can allow the oil and gas to flow into the borehole (you want this when the well is producing, but not while you are drilling), a problem known as a kick.
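The danger of lightening the mud comes straight from the hydrostatic relationship between mud weight, depth, and bottom-hole pressure. As a rough illustration, here is the standard oilfield rule of thumb (pressure in psi ≈ 0.052 × mud weight in lb/gal × vertical depth in ft); the mud weights and depth below are invented for illustration, not figures from the Ixtoc I well:

```python
# Hydrostatic pressure exerted by the mud column at the bottom of the hole.
# P [psi] ~= 0.052 * mud_weight [lb/gal] * true_vertical_depth [ft]

def hydrostatic_psi(mud_weight_ppg, depth_ft):
    return 0.052 * mud_weight_ppg * depth_ft

depth = 11000        # illustrative depth in feet (not the actual Ixtoc I depth)
heavy_mud = 14.0     # lb/gal, illustrative
light_mud = 11.0     # lb/gal, illustrative

print(hydrostatic_psi(heavy_mud, depth))   # ~8,008 psi
print(hydrostatic_psi(light_mud, depth))   # ~6,292 psi
# If the formation pressure lies between these two values, the lighter mud
# can no longer hold back the oil and gas, and the well kicks.
```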

Pemex then asked Sedco to stop drilling, pull out the drill pipe, change the drill bit, and then start drilling again with a new bit and heavier mud. As the drill pipe was being removed, oil and gas flooded into the borehole. At this point, the well must be shut in using hydraulic clamps called blowout preventers (BOP) that choke the pipe until heavier mud can be circulated and hydrostatic control is regained. The BOP was either not activated or did not work. Oil and gas started flowing up the riser. At the surface, the oil and gas should have been vented to a flare. That failed and gas fumes filled the air and exploded when they were ignited by electrical equipment. The drilling platform caught fire and sank.

As engineering students, we talked about the Ixtoc I accident and concluded that the problem was a result of poor decision-making by incompetent engineers. We took as proof the fact that all the top firms recruited at our college (Colorado School of Mines) but Pemex did not. Another factor in our minds was that Pemex is an arm of a third-world government, which must have interfered with the ability of engineers to correctly manage the project. Finally, we imagined that Pemex must have used equipment and drilling materials that were technologically inferior, that their work was sloppy, that their roughnecks were ill-trained, and that they didn’t care about safety.

I don’t recall any of us thinking there was a systemic problem with how we as engineers think or how business decisions are made. None of us said, “Geez, this could happen to me someday on a project I work on.” We were all infallible and smarter than the engineers who worked on the Ixtoc I well. Similarly, I don’t recall a single one of my professors discussing the accident in class or what we should learn from it.

I believe there were several key items missing in our training. First, not enough emphasis was placed on teaching the danger of extrapolating data. Engineering is all about taking successful existing designs and applying them to novel situations. But students have a hard enough time learning the basics of good design without having the professor throw in trick problems where following all the correct procedures leads to a non-functional solution. There simply isn’t enough time to cover all the latest engineering methods and then go back and discuss the limitations of each one and how to detect if a limit has been reached. This is especially difficult since the limits generally are not known. If they were, someone would develop a new method to replace the old one.

Second, we were never taught to question our abilities. In team projects and presentations, admitting to being uncertain is seen by other team members and faculty as a sign that you haven’t done your work. This results in them not trusting your ability. On the job, engineers who are certain their designs will work get their projects approved. Those without confidence have their designs rejected, which can damage their careers. So everyone learns to speak with more confidence than is safe.

Third, we were not taught qualitative risk assessment. Perhaps even worse, we were taught that risk assessment is a quantitative skill and that risk can be calculated with absolute certainty. Thinking you know all the risks and that you have accurately calculated their probabilities and impacts means you will be overconfident about your chances of success. As students we never learned how to make decisions under uncertainty. This is a problem, since every decision worth making involves handling unknowns and risks. There is design risk, nondiversifiable financial risk, risk of events that have not yet occurred, etc.

Finally, we were not taught the impact of incentives on decision-making. Managers often believe that setting tough goals and rewarding success leads to better performance. But the schedules and budgets are developed in the early phase of a project when uncertainty is high. Very few companies know when to reset schedules and budgets as new data becomes available on a project. Doing it too often leads to demoralized teams. But not doing it can lead to teams that cut corners, violate policies, or burn out.

The next blog post will look at these problems as they apply to the Deepwater Horizon accident and offer partial solutions.

****

If you are interested in seeing the technical data regarding the response to the Macondo well oil spill, check out the U.S. Dept. of Energy, NOAA, and BP websites.