by George Taniwaki

Patients are often frustrated and confused when navigating the healthcare system. Part of the problem is that being sick or hurt reduces your cognitive abilities. But it is also because hospitals are busy places with little funding for improving the user experience. Often the layout of the rooms, the signage, the forms and instructions, and the language used by the staff are not tailored to the needs of patients who are unfamiliar with the system.

Design to reduce patient violence

A significant problem in hospital emergency medical departments (called A&E in Britain, ER in America) is abusive and violent patients. According to the National Audit Office, violence and aggression towards hospital staff costs the NHS at least £69 million a year in staff absence, loss of productivity and additional security.

Here are some other statistics from the Design Council report: more than 150 incidents of violence and aggression are reported each day within the NHS system. In 2010, the incidence rate of violence and aggression was about 1 per 1,000 patients. In 2009, 21% of staff reported bullying, harassment, and abuse by patients, and 11% reported physical attacks by patients.

Working with the National Health Service, a design firm called PearsonLloyd developed some low-cost methods to reduce the incidence of violence and aggression, increase patient satisfaction, improve staff morale, and reduce security costs. They call their program A Better A&E. The program was pilot tested at St. George’s Hospital in London and at Southampton General. For an introduction, see the video below.

BetterAE

Figure 1. Still from the video “A Better A&E.” Image from Vimeo

Signage and brochure

The program consisted of three parts. First, improved signage was installed that included estimated wait times, along with a brochure that explained why a patient who arrived after you could be seen by a doctor before you.

BetterAEbusyBetterAEWait

Figures 2a and 2b. Large screen monitor alternately shows how busy the A&E is and then how long the wait time is for different categories of patients. Images from Design Council report

BetterAEbrochure

BetterAESignage

Figures 3a and 3b. A page from the brochure explaining why wait times differ among patients and what to expect at each station, and signage posted at each patient area keyed to the brochure. Images from Dezeen.com

Root cause analysis

The second part of the redesign was the introduction of a program to capture information from doctors, nurses, and other staff about factors that led to violent and abusive behavior. The program included root cause analysis and a prominently posted Incident Tally Chart to record the “variables within the system that might hinder the ability of staff to deliver high quality care.”

BetterAEIncidentTally

Figure 4. Incident tally posted where staff can record any events during their shift. Image from Design Council report

Toolkit and patterns

The final part of the program was to design a toolkit that would take the lessons from the A&E departments of the two pilot hospitals and generalize them so that they could be adopted by any hospital within the NHS system. The toolkit is presented as an easy-to-use website, http://www.abetteraande.com

Results

Surveys of patients and staff taken after the redesign indicated that both groups saw benefits.

  • 88% of patients felt the guidance solution was clear
  • 75% of patients felt the signage reduced their frustration during waiting times
  • Staff reported a 50% drop in threatening body language and aggressive behavior
  • NHS calculated that each £1 spent on design solutions resulted in £3 in benefits

by George Taniwaki

The year 2013 marks the 100th anniversary of the invention of hemodialysis, a life-saving procedure for removing waste products from the blood of people with chronic kidney disease.

An experiment using animals is described in the May 1913 issue of Trans Assoc Amer Phys. The work was done by three doctors, John Abel, L.G. Rowntree, and B.B. Turner, all of Johns Hopkins Medical School.

TransAssocAmerPhys

A screenshot from the article describing the invention of hemodialysis. Image from Google books

From the archives of Scientific American Sept 1913 comes a description of the experiment. The article contains this quote from the Times of London:

A demonstration which excited great interest was that of Prof. [John Jacob] Abel of Baltimore. Prof. Abel presented a new and ingenious method of removing substances from the circulating blood, which can hardly fail to be of benefit in the study of some of the most complex problems. By means of a glass tube tied into the main artery of an anesthetized animal the blood is conducted through numerous celloidin tubes before being returned to the veins through a second glass tube. All diffusible substances circulating in the blood pass through the intervening layer of celloidin. In this way Prof. Abel has constructed what is practically an artificial kidney.

In their experiment, Dr. Abel and his colleagues used a dialysis membrane made of celloidin. Celloidin is an early plastic made from nitrocellulose (cotton or wood pulp reacted with nitric acid). It was translucent and water-repellent. Films or tubes made from celloidin were water permeable, which made them good osmotic filters. However, celloidin was highly flammable and dangerous to work with.

Today, dialysis membrane tubing is made from rayon fiber (cellulose reacted with carbon disulfide and mixed with glycerin, then extruded through a spinneret to form a thread) or cellophane film (chemically similar to rayon, except that it is extruded through a slit to form a thin sheet).

by George Taniwaki

You would think that something as basic as the periodic table wouldn’t make the news. But recently two articles caught my attention. The Jun 2013 issue of Sci Amer (subscription required) points out that with the discovery of element 117 in 2010 (elements 1 through 116 and 118 had already been discovered), the periodic table has no gaps in it for the first time since it was first proposed in the 1860s. I found that pretty surprising. Of course, future discoveries of elements with higher atomic number may create new holes.

The article says that over 1,000 versions of the periodic table have been published. The arrangement that is most familiar was developed by Horace Groves Deming in 1923. An example table is shown below. It is color coded to indicate the date of discovery of each element. Note that four elements, shown in purple, don’t have names yet.

PeriodicTable

Figure 1. Periodic table of elements showing era of discovery. Image courtesy of Wikipedia

The Deming chart starts with hydrogen (H) on the top left and helium (He) on the top right. The number of elements in each row tends to increase, culminating in a separate block at the bottom for elements that start with lanthanum (La) and actinium (Ac). The elements in each column have similar chemical characteristics. For instance, the elements in the last column of each row are known as noble gases since they have high ionization energies (the energy required to remove one electron). This makes it difficult, though not impossible, to make them react with other elements to form compounds.

IonizationEnergy

Figure 2. Ionization energy to remove one electron from each element. Image courtesy of Wikipedia

Another way to lay out the elements in a table is to group them by their quantum electron structure rather than by their chemical behavior. One example is the Janet left-step table, developed by the chemist Charles Janet in 1928. It moves helium to the column next to hydrogen and moves the first two columns to the end of the table. Each element falls into a block. The lanthanum and actinium rows of elements are given their place in the main table, rather than having to sit at the children’s table.

PeriodicTableJanetLeftStep

Figure 3. Janet left-step periodic table of elements. Image by George Taniwaki

Electron quantum numbers

Each row in the Janet left-step table indicates increasing electron energy level. This is represented by the integer n, called the principal quantum number. The first eight levels are named K, L, M, N, O, P, Q, and R respectively.

The number of blocks in each row is determined by the integer ℓ, which must have a value of ℓ ≤ (n + 1)/2 and is called the azimuthal quantum number. The first five blocks are named s, p, d, f, and g respectively.

My version of the Janet left-step table above color codes each element to show the block in which its new electrons are added: f (green), d (blue), p (yellow), and s (red). Note that there are exceptions to the block ordering. (An explanation is beyond the scope of this blog post.)

The number of orbitals in each new block is larger than in the block to the right of it. Specifically, each block contains m pairs of cells, where m = (2ℓ – 1) is an integer called the magnetic quantum number. For example, the d block has ℓ = 3, giving 2 × 3 – 1 = 5 pairs, or 10 cells. You can predict that the g block in the next row of the periodic table will contain 9 pairs of cells (18 total).

Besides n, ℓ, and m, electrons have a fourth quantum property called spin, which is represented by s and can have a value of either +½ or –½. No two electrons in an atom can have the same four quantum numbers.

The images below show the orbitals for a single electron in a hydrogen atom as energy increases. Note there is a single s orbital, 3 p orbitals, 5 d orbitals, and 7 f orbitals. Each orbital can hold two electrons, one with spin +½ and one with spin –½, which explains why the s, p, d, and f blocks hold 2, 6, 10, and 14 electrons respectively.

Single_electron_orbitals

Figure 4. Single electron orbitals. Image courtesy of Wikimedia Commons

Order in which electron orbitals fill

Each electron orbital has a different energy level. Orbitals with a larger principal quantum number and a larger azimuthal quantum number have higher energy than those with lower values. Electrons tend to fill the orbitals according to what is called the Madelung rule, which states that, on average, orbitals with a higher value of n + ℓ have higher energy. For orbitals with the same value of n + ℓ, those with a higher value of n have higher energy. Thus, the order in which orbitals fill forms a diagonal array, as shown in the table below. This describes the layout of the elements in the Janet left-step table.

|         | ℓ=1 (s) | ℓ=2 (p) | ℓ=3 (d) | ℓ=4 (f) | ℓ=5 (g) |
|---------|---------|---------|---------|---------|---------|
| n=1 (K) | 1 (1s)  |         |         |         |         |
| n=2 (L) | 2 (2s)  | 3 (2p)  |         |         |         |
| n=3 (M) | 4 (3s)  | 5 (3p)  | 7 (3d)  |         |         |
| n=4 (N) | 6 (4s)  | 8 (4p)  | 10 (4d) | 13 (4f) |         |
| n=5 (O) | 9 (5s)  | 11 (5p) | 14 (5d) | 17 (5f) |         |
| n=6 (P) | 12 (6s) | 15 (6p) | 18 (6d) |         |         |
| n=7 (Q) | 16 (7s) | 19 (7p) |         |         |         |
| n=8 (R) | 20 (8s) |         |         |         |         |
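
As a quick sanity check, here is a minimal Python sketch (my own, not part of the original post) that reproduces the fill order shown in the table above, where the number in each cell gives the order, by sorting orbitals on (n + ℓ, n). Note that the code uses the standard convention in which ℓ starts at 0 for the s block, while the table above numbers the blocks starting at ℓ = 1; the resulting order is the same.

```python
# Minimal sketch of the Madelung (n + l) rule. Uses the standard convention
# where l = 0, 1, 2, 3 maps to the s, p, d, f blocks (the table above
# numbers the blocks starting at l = 1 instead).
block_names = "spdfg"

# All orbitals with n up to 8, restricted to those needed for the first
# 120 elements (i.e., everything through the 8s orbital).
orbitals = [(n, l) for n in range(1, 9) for l in range(0, n) if n + l <= 8]

# Madelung rule: lower n + l fills first; ties are broken by lower n.
orbitals.sort(key=lambda nl: (nl[0] + nl[1], nl[0]))

for order, (n, l) in enumerate(orbitals, start=1):
    capacity = 2 * (2 * l + 1)  # 2, 6, 10, 14 electrons for the s, p, d, f blocks
    print(f"{order:2d}: {n}{block_names[l]}  ({capacity} electrons)")
```

Running it prints “1: 1s (2 electrons)”, “2: 2s (2 electrons)”, “3: 2p (6 electrons)”, and so on, matching the numbering in the table.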

A pretty periodic table

In other periodic table news, the Aug 2013 issue of Pop Sci features a periodic table drawn by Alison Haigh, a London-based graphic designer. The article calls it beautiful and easy-to-read. I agree that it is beautiful. I don’t agree that it is easy-to-read or useful.

PeriodicTableAlisonHaigh

Figure 5. Periodic table without text. Image courtesy of Alison Haigh

First, showing both the cells and the dots is redundant. Just showing one or the other would be sufficient to convey the meaning. That’s because a periodic table is laid out in atomic number order. Thus, to find the atomic number of an element you can just count the number of cells from the top left or count the number of dots in the selected cell. To find which orbitals are filled for an element, you can see which row and column the element is in, or you can inspect the dot pattern in the selected cell.

The dots in each cell are arranged in an unusual order. They are grouped in concentric circles in order of their principal quantum number. The innermost circle has 2 dots, followed by rings containing 8, 18, 32, 32, 18, and 2 dots respectively. This means the dots are not arranged in the order that the electron orbitals are filled. This is a bit confusing.

Further, this arrangement only allows for up to 112 electrons (2 + 8 + 18 + 32 + 32 + 18 + 2 = 112), which corresponds to the element copernicium (Cn). The outer rings do not have room for additional dots to represent electrons for heavier elements that have already been discovered or predicted by quantum theory.

Finally, one of the most important uses of the periodic table is to help recall the names, abbreviations, and atomic number of the elements. There are no labels in this table. And counting the dots, or counting the number of cells to figure out the atomic number is tedious.

A modified version of Ms. Haigh’s periodic table is shown below. The elements are laid out in a spiral that follows the Janet left-step periodic table. Cells are color coded to highlight the s, p, d, and f blocks. Each cell is labeled with the atomic number and abbreviation of the element. It’s pretty, I guess; it looks like one of those eye tests for color blindness. But the layout is still not as useful as a standard periodic table.

PeriodicTableHaigh2

Figure 6. Periodic table based on Alison Haigh design. Image by George Taniwaki

****

In 8th grade science class we were required to memorize the names of all the elements and their symbols. Do teachers make their students do that today? It seems rather pointless. How often do you use ruthenium (Ru)? There were only 98 named elements back when I was in school, so memorization was easier than it would be today, when there are 114 and counting. Incidentally, based on that statement, can you guess how old I am?

I recently watched two videos featuring Anthony Atala, a surgeon and researcher at Wake Forest University who works in the Institute for Regenerative Medicine. The first video is from his talk at TEDMed Oct 2009. In it, he talks about creating artificial tissue and organs. His talk also includes video clips showing working urethras and blood vessels made with biopolymers. He also shows a standard ink jet printer modified to print live endothelial cells to form 3D objects such as heart valves. Finally, he shows a functional liver created using a scaffold made from a decellularized cadaver liver.

TEDMed

Artificial organs. Video from TEDMed

The second video is from Dr. Atala’s talk at TED Mar 2011. In this newer video he describes the process of creating a scaffold for a kidney. Much of the content in the first eight minutes is a repeat of the previous talk. The exciting part starts at 10:04 into the video where he describes the process of using a 3D ink jet printer to create the kidney scaffold.

TED

Printing kidneys. Video from TED

The work of researchers at Wake Forest developing artificial organs was mentioned in an Aug 2010 blog post.

****

Also in March, I attended the Annual Faculty Lecture at Univ Washington. The speaker was Buddy Ratner, a professor in the Department of Bioengineering. Mr. Ratner holds the Michael L. & Myrna Darland Endowed Chair in Technology Commercialization, is the founder of Ratner BioMedical, and is a member of the scientific advisory board for Tengion, a firm that has licensed the Wake Forest technology.

His talk, entitled “Regenerate, Rebuild, Restore — Bioengineering Contributions to the Changing Paradigm in Medicine”, described the work he and his graduate students have done in creating biodegradable scaffolds made from biopolymers such as polyHEMA, a common material used commercially for soft contact lenses, using a novel process called 6S.

The 6S process gets its name from the six steps used to make the material. First, polystyrene pellets are sieved to isolate pellets of 35 to 40 microns in diameter. These pellets are shaken to create a close-packed arrangement. The packed material is sintered to create a porous solid. This solid acts as a mold. The desired biomaterial, such as polyHEMA, is poured into the mold and surrounds the sintered pellets. The biomaterial is allowed to solidify. Finally, a solvent is added to dissolve the polystyrene mold, leaving only the biomaterial which contains many pores of 35 to 40 micron diameter. (Pores of this size have been shown to reduce the immune reaction that leads to scarring and infection. The explanation of why is beyond the scope of this blog.)

A company named Healionics was formed to commercialize the 6S process. Mr. Ratner is the chairman of the firm’s scientific advisory board.

6S

Schematic of 6S process. Image from U.S. FDA

After the talk I spoke to Mr. Ratner about artificial kidneys. Unfortunately, he indicated that there are no researchers at Univ Washington working on producing artificial kidneys. I also asked him about the pros and cons of natural and synthetic substrates. He believes that using decellularized organs as the substrate for new artificial organs will prove too difficult except for certain uses and that he expects synthetic substrates, like those created using the ink jet process or the 6S process, to be more likely to lead to successful functional organs.

[Update: Corrected the affiliation of Mr. Ratner with Tengion. He is a member of its scientific advisory board.]

In an Aug 2010 blog post, I discussed the prospects for regenerative medicine to alleviate the shortage of transplantable organs. Regenerative medicine usually starts with an organ obtained from deceased donors. But the organ itself isn’t used. Instead the cells are removed and the remaining scaffold is seeded with stem cells to create a new organ. Near the end of that blog post I mentioned that there was work being performed by David Humes and others at the Univ. of Michigan to produce an external device that could perform some of the endocrine functions of a kidney. It would supplement an external dialyzer to provide complete kidney function for a patient with end-stage renal disease.

Recently, Univ. California, San Francisco issued a press release stating that Shuvo Roy and other researchers in the Department of Bioengineering and Therapeutic Sciences have reduced the size of both devices by using a combination of micro-electromechanical systems (MEMS) and human kidney cells. Their prototype is about the size of a coffee cup, or similar in size to a kidney. They hope the device will be implantable, leading to a portable, artificial kidney. Much work remains and they don’t expect clinical trials to begin for another five to seven years. Yet, the promise is great. Such a device could help improve the medical outcomes and quality of life of all patients with ESRD, meaning both those waiting for a transplant and those who would otherwise receive dialysis therapy.

UCSFdialyzer

Artificial kidney. Video from UCSF

[Update: Replaced the cutaway view with a video.]

by George Taniwaki

There is an extreme shortage of kidneys available for transplantation with over 85,000 people on the UNOS transplant waiting list and an additional 300,000 on dialysis who are not on the waiting list but who could still benefit from improvements in renal replacement therapy. Although it is possible for a patient with end-stage renal disease (ESRD) to live several years on dialysis, it is not ideal.

A May 2010 blog post discussed ways to extend the shelf life of organs donated for transplant. Today’s blog post describes technologies in an exciting area of research called regenerative medicine that may provide significantly better outcomes than dialysis and alleviate the shortage in transplantable organs. Regenerative medicine consists of therapies that use live cells, mostly grown from stem cells, to replace a patient’s nonfunctional organ.

Preparing a scaffold for solid organs

Every organ in the body consists of three primary parts. First is a protein scaffold, a framework that defines the shape, mechanical properties, and organization of the cells in the organ. Second is the network of blood vessels that feeds the organ. Finally, there are the various cells within the organ that interact with the blood.

In solid organs, like the heart, the cells do not interact very much with the blood. Thus the requirements for an artificial heart are more clearly defined, and are more mechanical than biochemical. In a paper published in Nature Medicine Jan 2008 and summarized in Tech. Rev. Jan 2008, Doris Taylor, a researcher at the Stem Cell Institute at the Univ Minnesota, and her colleagues describe a process to create a scaffold for a heart. In experiments with rats, they started with a cadaver heart and decellularized it using detergents. Then they seeded the acellular matrix with either neonatal cardiac cells or rat aortic endothelial cells (the cells that line the blood vessels). Afterwards, the muscles in these bioengineered hearts would beat when stimulated.

RatHearts

Decellularizing a heart. Image from Nature Medicine

Other organs have been created using a similar process. Working with rat livers, several researchers at Massachusetts General Hospital published a paper in Nature Medicine Jun 2010 (subscription required). They started with a matrix created by removing the cells from an adult cadaver liver and then seeded it with fetal liver cells and endothelial cells. The resulting organ survived and functioned in culture for 10 days. A good description of the work is provided in Tech. Rev. Jun 2010 and includes a video.

Liver

Decellularizing a liver. Image from Tech. Rev.

In another experiment using rats, researchers created a lung by adding fetal lung cells and blood vessel cells to a matrix created from a decellularized cadaver lung. The work was conducted by Laura Niklason and other researchers at Yale. It was reported in Science Jul 2010 and publicized in the Wall St. J. Jun 2010 (subscription required) and Tech. Rev. Jun 2010, which also has a video. Dr. Niklason has formed a company called Humacyte to commercialize human derived acellular matrices.

Artificial scaffolds

All of the artificial organs described above start with a scaffold made from an existing organ from a cadaver. There is also work underway to develop a man-made scaffold using polymers that mimic the behavior of natural proteins. One advance is reported in Nature Materials Nov 2008 (subscription required) for a honeycomb shaped scaffold that combines flexibility with strength. The polymer is made from poly(glycerol sebacate), a biodegradable elastomer. The work is described in Tech. Rev. Nov 2008.

Another scaffold material, this one made from the same fibronectin protein that serves as the framework for natural organs, is described in Nano Letters Jun 2010 (subscription required) and summarized in Tech. Rev. Aug 2010. The process, developed by Kevin Kit Parker of Harvard, starts by depositing fibronectin molecules on a chilled surface made of a hydrophobic polymer. This causes the protein to relax. Then the fibronectin is transferred to a sheet of glass coated with a water-soluble, hydrophilic polymer. Adding room temperature water causes the fibronectin to crosslink and also dissolves the hydrophilic polymer. This leaves the protein fabric, which is ready to use.

Nanofabric

Protein nanofabric. Image from Nano Letters

Organs without scaffolding

It may be possible to eliminate the need for an existing scaffolding by suspending cells in a hydrogel containing iron oxide particles and held in a magnetic field to create 3D shapes. The technique is described in Nature Nanotech. Apr 2010 and summarized in Tech. Rev. Mar 2010.

Finally, it may be possible to build up an organ without a scaffold by using a 3D printer. Tom Boland and other researchers at Clemson University reproduced a heart using an off-the-shelf ink jet printer filled with cells suspended in a hydrogel. Their results were reported at the Amer. Assoc. Advan. Sci. Conf. 2007.

An experiment involving mice shows the first steps in creating an artificial pancreas without the use of scaffolding. The work was done by a company called ViaCyte (formerly Novocell). First, stem cells are encapsulated in a membrane. The membrane is porous enough to allow blood and glucose to enter, but fine enough to prevent the cells from leaking into the body. The stem cells are induced to become insulin-producing pancreas cells. Finally, the encapsulated cells are implanted in the mouse. The work was publicized at the Int. Soc. Stem Cell Res. 2010 and reported in Tech. Rev. Jun 2010.

Bioengineered kidneys

Creating an artificial kidney is much more difficult than forming other organs because the kidneys have a complex internal structure that includes items like tubules and glomeruli. However, it may not be necessary to reproduce these features to make a useful therapy. In addition to their well-known filtering functions, the kidneys are also part of the endocrine system. They produce and regulate the level of various hormones, the best known of which is erythropoietin (EPO), which stimulates the production of red blood cells.

Currently, all dialysis patients get injections of EPO as part of their renal replacement therapy, to avoid anemia. But there may be other hormones that they are missing. David Humes at the Univ. of Michigan has shown that an external device filled with kidney cells can be used to regulate the hormone levels of dialysis patients. The work is described in Tech. Rev. Nov 2006. A company named RenaMed Biologics was formed to commercialize the product. The company partnered with Genzyme to perform clinical trials of this renal assist device, but testing was suspended, MassHighTech Oct 2006.

James Yoo and other researchers at Wake Forest University report in Tissue Eng. Feb 2009 (subscription required) that they were able to generate three-dimensional renal structures resembling tubules and glomeruli in vitro using primary kidney cells. These structures produced a liquid that resembled urine. A company called Tengion has licensed the technology and is working on a neo-kidney augment product. However, it is not yet in clinical development and is not commercially available.

Optimistically, all of these techniques for regenerative medicine will come to market within ten years. Bioengineered organs have the potential to reduce the need for live donor organs, allow more deceased donor organs to be used rather than discarded, and shorten the waiting list for transplants. Further, assuming that the patient’s own stem cells are used to seed the acellular matrix, they will ensure HLA compatibility and eliminate the need for the patients to take immunosuppressant medications, which should reduce the risk of side effects.

This is a continuation of yesterday’s blog post on BP’s culture of risk.

The causes of the recent accident on the Deepwater Horizon and the resulting Macondo oil spill are still under investigation, but it appears there was no single failure. Instead there was a chain of decisions and events like the one described in the previous blog post for the Ixtoc I oil spill. Some details have been revealed by congressional investigators. The Wall St. J. has reproduced the letter addressed to BP’s chairman from the House Committee on Energy and Commerce. Yesterday’s New York Times has an excellent long article on design weaknesses of blowout preventers.

I won’t speculate about the exact decisions that led to the accident on the Deepwater Horizon rig. I presume that a lot of work went into the design and specification of the equipment, materials, and processes. However, the main contributor to the accident may have been a culture at BP that encouraged engineers to engage in risk creep and to ignore the impact of low-probability, high-cost events, and that rewarded overconfidence. I will discuss these in detail in the next sections.

BP has a reputation of taking on expensive, high-risk engineering projects. It was a participant in the construction of the Trans-Alaska Pipeline, it invests in Russia and Kyrgyzstan, and it was the lead developer of the Thunder Horse PDQ platform, the world’s largest and most expensive offshore platform, which nearly sank after its commissioning in 2005. BP has an explicit strategy of seeking the biggest oil fields in the Gulf of Mexico, even if it means drilling in deep waters far from shore.

ThunderHorse

Thunder Horse platform. Photo from Wikipedia

Nothing attracts top engineering talent like big challenges and an opportunity to work on high-profile, big budget projects. BP provided plenty of that with its Gulf Coast projects. The ability to handle the low temperatures and high pressures at the bottom of the gulf, combined with the ability to accurately guide the drill bit at extreme depths, are amazing technical achievements. But it can also lead to cost overruns and schedule slips. When combined with the pressure to meet budgets and deadlines, it can lead to accidents.

Allowing risk creep

Good engineering practice requires that designs outside the known limits (called the design envelope) be done as experiments, preferably in a laboratory setting, preferably by PhDs who have extensive knowledge of the phenomena being studied, and that lots of data be collected so that the design can be standardized and repeated with confidence. That is, you want to get to the point that the design is easy to replicate and if you don’t make any avoidable mistakes, it works. However, this doesn’t appear to be what happened in the evolution of deepwater oil drilling. Instead, engineers built deeper, more complex wells without testing their designs adequately prior to implementation.

There are four factors that lead to risk creep. First, long periods of “safe” operation reinforce the belief that the current practices and designs are sufficient. Guess how many wells have been drilled offshore in the Gulf of Mexico since the Ixtoc I accident in 1979. How about 50, or 200, or even 1,000? Not even close; try over 20,000. There have been 22 blowouts. But not all wells are the same; the newer wells are deeper, with colder temperatures and higher pressures. Overcoming the belief that long stretches with few accidents mean everything is well understood and under control is really hard, especially as firms compete with each other to meet production targets and minimize costs.

Second, very little time is spent reflecting on past failures. Failures don’t just mean accidents. For every well blowout, there are thousands of near-miss incidents where dangerous unexpected kicks or casing damage occurred. Most engineers consider it a burden to conduct safety reviews, file incident reports, and attend project post-mortems. Time spent doing this is less time spent on new projects. But reviews allow engineers to see trends. They can also help encourage more of the behaviors that led to good results and eliminate those that caused problems.

Third, engineers may believe that extrapolating current designs to new conditions doesn’t require peer review. Nobody likes to have their work reviewed by outsiders. And managers don’t want to spend the time and money to do it. Unless a lot of effort is made, it becomes hard to get into the practice. Similarly, when time-sensitive decisions must be made, it is easier to forge ahead with the current plan (or a quickly improvised new plan) than to stop and consider alternatives.

Finally, the risk may be growing so slowly that nobody who works in the field day-to-day notices that the process is actually out of control.

Ignoring rare events

In his book The Black Swan: The Impact of the Improbable, Nassim Nicholas Taleb points out that humans are prone to two deceptions. First, we think that chaotic events have a pattern to them. That is, we believe that the best way to predict the future is to look at the recent past. Second, we underestimate the importance of rare events. In fact, we believe that rare events are not worth planning for since they are too infrequent to care about. Tony Hayward, the CEO of BP, called the Macondo oil spill a one-in-a-million event. (It wasn’t; it is closer to 1 in 1,000.) But even if it were, the enormous consequences mean that there is no excuse for not including it in planning at the top levels of the company.

BlackSwan

Image from Amazon
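
As a rough back-of-the-envelope check (my own sketch, using only the well and blowout counts cited earlier in this post), the gap between a one-in-a-million event and the observed record is easy to quantify:

```python
from math import exp, factorial

# Figures cited earlier in this post: roughly 20,000 offshore wells drilled
# in the Gulf of Mexico since Ixtoc I, and 22 blowouts among them.
wells, blowouts = 20_000, 22

# Observed blowout rate: about 1 in 900 wells, i.e., closer to 1 in 1,000.
print(f"Observed rate: 1 in {wells / blowouts:,.0f} wells")

# If blowouts really were one-in-a-million events, we would expect only
# 0.02 of them in 20,000 wells. The Poisson probability of seeing 22 is
# astronomically small (about 4e-59).
expected = wells * 1e-6
p_22 = exp(-expected) * expected**blowouts / factorial(blowouts)
print(f"P(22 blowouts at a 1-in-a-million rate) = {p_22:.1e}")
```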

Rewarding overconfidence

As I mentioned earlier, engineers (and many other professionals) are rewarded for being confident in their projections. Managers select projects based on how confident they are about the chance of success. And they are influenced by the confidence of the engineer proposing the project. So everyone learns to speak with more confidence than is safe.

However, overconfidence doesn’t require an external reward. For example, I believe that I am better than the average driver. I believe I can navigate icy roads safely, and can handle any emergency situation. Everyone believes this. When I first get on an icy road, I drive slowly until several drivers pass me. Then I speed up to match the speed of the other drivers and start passing other cars myself. I know I shouldn’t do this, but I do it anyway. I haven’t been in an accident, so that reinforces my behavior. Similarly, every time I get into my car I don’t explicitly consider the chance that I might kill someone. But I should. And I should be reminded of my fallibilities and the dangers every few minutes, lest my attention wander. I should drive every second as if someone will, not just could, die every time I make a mistake.

Proposals for reducing risk

The solution to oil spills is not to stop drilling offshore because the technology is inherently unreliable and unsafe, as some writers recommend. Rather, it is to assume that equipment can fail, that hurricanes will strike, that unexpected rock formations exist, that mistakes in selecting the right mud will be made, and that pressure to meet schedules and budgets will exist, and then to design the mitigation for each.

First, engineers need to admit that they are running experiments whenever they are designing and building something that is even slightly beyond the scope of an existing project. Once engineers admit that what they are doing is an experiment, not just following a recipe in a cookbook, they will be more cognizant of the need to consider the risk, examine alternative methods, take care when collecting data, and spend more time analyzing the data after the end of the project. Managers also need to consider each project an experiment and remember that experiments can fail. They must be willing to nurture calculated risk taking. They must also be willing to accept the cost of mitigation (or the cost of the consequences). It appears that BP’s managers failed at this.

Second, engineers need to be more open about their work. In other fields like physical science and medicine, researchers are encouraged to disclose the results of their work and solicit peer review. Engineers rarely publish their findings, for two reasons. First, they are not paid to. Second, nearly all of their work is considered proprietary by management. Even work that would benefit the industry as a whole, like new safety ideas or techniques to protect the environment, is often hidden from competitors. The government needs to encourage or enforce sharing of safety data, require public reporting of near-miss incidents, and set standards for best practices. Currently, the government relies too heavily on industry expertise. To adequately police industry, the government needs to start hiring engineers as regulators, recruiting at top universities, paying competitive salaries, and conducting its own research.

Unfortunately, I don’t have high hopes that government regulators, investors, and managers will learn the correct lessons from the Macondo oil spill. Rather than looking at the systemic causes of accidents, we will ban offshore drilling for a few months to assuage the public. Then regulators will write new rules, like requiring acoustic transducers, that show they are getting tough and reforming the industry. But they won’t do anything that actually encourages critical thinking or processes that channel engineers to do the right thing. Then, once the public outcry dies down, new technology, risk creep, and overconfidence will return. But it will be invisible until the next accident happens and we are all left wondering again how something awful like that could happen in America.

[Update1: On June 22, a federal judge issued an injunction that struck down the Obama administration’s six-month offshore drilling ban. The Justice Department is preparing an appeal.]

[Update2: I just noticed a really eerie coincidence. In the sixth paragraph, there is a hyperlink to a report that provides the counts of total offshore oil wells and blowouts. The report is dated April 20, 2010, the same day as the Deepwater Horizon accident.]

[Update3: There is a recent AP story that points to some of the same human errors as this blog post.]