The recent fatal accident on the Deepwater Horizon rig and the resulting oil spill from the Macondo well are horrible tragedies. In addition to the deaths of eleven workers, the pictures of the dead wildlife and the despoiled coast are heartbreaking. Since most of the oil is hidden below the surface of the water, the ill effects may last for decades and the long-term environmental, economic, and health consequences are still unknown.

The visible part of this disaster will cause immediate changes to offshore oil regulation, engineering decisions, and corporate behavior. Unfortunately, I don’t think the government or BP will address the underlying causes of the accident, which means those root causes will not be corrected. Congress is good at holding hearings, finding scapegoats, and passing laws. Regulators are good at making rules, though they often have trouble enforcing them. But the real problems are harder to fix.

Those real problems are risk creep, overconfidence in one’s own abilities, and the inability of humans to base short-term, day-to-day decisions on the impact of low-probability, high-cost events that may occur in the future. These problems are not specific to offshore oil drilling; they are endemic to many situations.

A typical public hearing places too much emphasis on affixing blame and putting in place onerous rules to avoid a repeat of the proximate cause of the accident. No effort is made to consider the incentives and psychology that lead people to rely on technology and past good luck to an extent that makes accidents almost inevitable.

I should reveal my own potential bias. I have a degree in chemical engineering. I’ve worked around oil and gas wells. I’ve seen fires and accidents and know of coworkers burned and sent to the hospital. I also have a personal interest in BP. I worked for several years at Amoco, which was acquired by BP. As a result of my employment, I came to own a large chunk of BP stock and have a defined-benefit pension funded by BP.

Lessons from Ixtoc I

When I was in college, there was a blowout on the Ixtoc I well in the Gulf of Mexico. The accident led to the largest accidental oil spill in history (until now). About 10,000 barrels of oil a day leaked out, and it took nine months to cap the well.

Ixtoc I oil spill. Photo from Wikipedia

During drilling, engineers working for Pemex, the well owner, asked the drilling contractor Sedco (now part of Transocean) to replace the heavy drilling mud with a lighter one to speed up drilling. This is risky. A region of soft rock was hit, and the lighter mud flowed into the porous rock more quickly than expected. The operators were unable to pump in more mud fast enough to maintain the hydrostatic pressure. This is a dangerous situation because loss of pressure can allow oil and gas to flow into the borehole (you want this when the well is producing, but not while you are drilling), a problem known as a kick.
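
To make the mud-weight trade-off concrete, here is a minimal sketch of the hydrostatic-balance arithmetic drillers rely on. The depth, pressures, and mud weights below are hypothetical illustrations, not the actual Ixtoc I figures.

```python
# Minimal sketch of the hydrostatic-balance check behind mud-weight decisions.
# All numbers are illustrative, not the actual Ixtoc I well data.

PSI_PER_PPG_FT = 0.052  # standard oilfield conversion constant

def hydrostatic_psi(mud_weight_ppg: float, tvd_ft: float) -> float:
    """Pressure exerted at depth by a column of drilling mud."""
    return PSI_PER_PPG_FT * mud_weight_ppg * tvd_ft

depth_ft = 11_800         # hypothetical true vertical depth
formation_psi = 6_200     # hypothetical formation (pore) pressure

for mud_ppg in (11.0, 9.5):  # heavy mud vs. the lighter, faster-drilling mud
    well_psi = hydrostatic_psi(mud_ppg, depth_ft)
    margin = well_psi - formation_psi
    status = "overbalanced (safe)" if margin > 0 else "underbalanced -- kick risk"
    print(f"{mud_ppg} ppg mud: {well_psi:,.0f} psi vs. formation {formation_psi:,} psi -> {status}")
```

With these made-up numbers, the heavy mud holds the formation back with a few hundred psi to spare, while the lighter mud leaves the borehole underbalanced, which is exactly the condition that invites a kick.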

Pemex then asked Sedco to stop drilling, pull out the drill pipe, change the drill bit, and start drilling again with the new bit and heavier mud. As the drill pipe was being removed, oil and gas flooded into the borehole. At this point, the well must be shut in using hydraulic clamps called blowout preventers (BOPs) that choke the pipe until heavier mud can be circulated and hydrostatic control is regained. The BOP was either not activated or did not work, and oil and gas started flowing up the riser. At the surface, the oil and gas should have been vented to a flare. That failed too; gas fumes filled the air and exploded when they were ignited by electrical equipment. The drilling platform caught fire and sank.

As engineering students, we talked about the Ixtoc I accident and concluded that the problem was the result of poor decision-making by incompetent engineers. As proof, we noted that all the top firms recruited at our college (Colorado School of Mines) but Pemex did not. Another factor in our minds was that Pemex is an arm of a third-world government, which we assumed must have interfered with the engineers’ ability to manage the project correctly. Finally, we imagined that Pemex must have used technologically inferior equipment and drilling materials, that their work was sloppy, their roughnecks ill-trained, and that they didn’t care about safety.

I don’t recall any of us thinking there was a systemic problem with how we as engineers think or how business decisions are made. None of us said, “Geez, this could happen to me someday on a project I work on.” We were all infallible and smarter than the engineers who worked on the Ixtoc I well. Similarly, I don’t recall a single one of my professors discussing the accident in class or what we should learn from it.

I believe there were several key items missing in our training. First, not enough emphasis was placed on teaching the danger of extrapolating data. Engineering is all about taking successful existing designs and applying them to novel situations. But students have a hard enough time learning the basics of good design without having the professor throw in trick problems where following all the correct procedures leads to a non-functional solution. There simply isn’t enough time to cover all the latest engineering methods and then go back and discuss the limitations of each one and how to detect if a limit has been reached. This is especially difficult since the limits generally are not known. If they were, someone would develop a new method to replace the old one.
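
A toy illustration of the extrapolation problem: a model calibrated on one operating regime can look excellent on its own data and still fail badly outside it. The data and the saturating “true” response below are entirely synthetic.

```python
# Toy illustration of extrapolation risk: a model calibrated in one
# operating regime can fail badly outside it. All data is synthetic.

import numpy as np

def true_response(x):
    # Hypothetical system that behaves linearly at first, then saturates.
    return 100 * np.tanh(x / 50)

# Calibrate a linear model only on the "easy" low range, where the
# system still looks linear and the fit appears excellent.
x_cal = np.linspace(0, 10, 20)
slope, intercept = np.polyfit(x_cal, true_response(x_cal), 1)

for x in (5, 10, 50, 150):  # move further and further past the data
    predicted = slope * x + intercept
    actual = true_response(x)
    print(f"x={x:>4}: predicted {predicted:7.1f}, actual {actual:6.1f}, "
          f"error {predicted - actual:+7.1f}")
```

Inside the calibration range the fit is nearly perfect; a few multiples beyond it, the prediction is off by a factor of three, with nothing in the model itself to warn you.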

Second, we were never taught to question our own abilities. In team projects and presentations, admitting uncertainty is seen by other team members and faculty as a sign that you haven’t done your work, and the result is that they don’t trust your ability. On the job, engineers who are certain their designs will work get their projects approved. Those without confidence have their designs rejected, which can damage their careers. So everyone learns to speak with more confidence than is safe.

Third, we were not taught qualitative risk assessment. Perhaps even worse, we were taught that risk assessment is a quantitative skill and that risk could be calculated with absolute certainty. Thinking you know all the risks, and that you have accurately calculated their probabilities and impacts, makes you overconfident about your chances of success. As students we never learned how to make decisions under uncertainty. This is a problem, since every decision worth making involves handling unknowns and risks: design risk, nondiversifiable financial risk, risk of events that have not yet occurred, and so on.
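
A small, made-up example shows why day-to-day experience is such a poor teacher about low-probability, high-cost risks. Every probability and dollar figure below is invented for illustration.

```python
# Toy Monte Carlo sketch of a low-probability, high-cost decision.
# All probabilities and dollar figures are invented for illustration.

import random

random.seed(42)

SAVINGS_PER_WELL = 2_000_000     # hypothetical saving from the faster method
P_BLOWOUT = 0.001                # hypothetical added blowout probability
BLOWOUT_COST = 10_000_000_000    # hypothetical cleanup + liability cost

# Point estimate: the "expected value" framing most of us were taught.
expected = SAVINGS_PER_WELL - P_BLOWOUT * BLOWOUT_COST
print(f"Expected value per well: ${expected:,.0f}")   # negative here

# But on any individual well, the shortcut almost always pays off,
# which is exactly what trains people to keep taking the risk.
trials, wins = 100_000, 0
for _ in range(trials):
    if random.random() >= P_BLOWOUT:   # no blowout this time
        wins += 1
print(f"Fraction of wells where the shortcut 'worked': {wins / trials:.1%}")
```

On this toy arithmetic the shortcut loses millions on average, yet it “works” on 99.9% of individual wells, which is exactly the feedback loop that breeds risk creep and overconfidence.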

Finally, we were not taught the impact of incentives on decision-making. Managers often believe that setting tough goals and rewarding success leads to better performance. But schedules and budgets are developed in the early phase of a project, when uncertainty is high. Very few companies know when to reset schedules and budgets as new data becomes available on a project. Resetting too often demoralizes teams, but never resetting can lead teams to cut corners, violate policies, or burn out.

The next blog post will look at these problems as they apply to the Deepwater Horizon accident and offer partial solutions.

****

If you are interested in seeing the technical data regarding the response to the Macondo well oil spill, check out the U.S. Dept. of Energy, NOAA, and BP websites.
