June 2010


Until the mid-1990s, kidney transplants from live donors who were not genetically related to the recipient were rare. As mentioned in a May 2010 blog post, most hospitals had a policy of rejecting donation offers from spouses or other unrelated people. The thought was that the risk of any live donor surgery was too great unless the benefit of a better HLA match outweighed it. This meant the only allowed sources of transplant kidneys were close relatives and deceased donors.

Policies at hospitals changed as 1) the advantage to recipients of receiving a kidney from a relative fell with improvements in immunosuppressant medications, 2) the risk to donors of surgery declined with the development of laparoscopic techniques, and 3) public attitudes about donor informed consent shifted toward the belief that unrelated donors were making their offers for altruistic reasons rather than suspect ones (such as compensation, ego gratification, or coercion).

The figure below shows the number of kidney transplants performed each year in the U.S. from 1988 to 2009. As mentioned in an Apr 2010 blog post, the rise in transplants mirrors the rise in the incidence of end-stage renal disease in the U.S. The proportion of transplants from live donors rose from under 20% in 1988 to a peak of almost 43% in 2003. It then dipped for a few years and is now rising once again.

TotalTransplants

All transplants. Data from UNOS

As the figure below shows, the composition of live donors has been changing dramatically. In 1988, only 71 kidneys were transplanted from unrelated donors (the New York Times article reports a lower number), representing 4% of living donors. The proportion of unrelated donors has been rising steadily and now accounts for about 43% of all living donations. Notice that after a quick rise, the number of donations by spouses or life partners has leveled off. The most common sources of unrelated donors now are friends, neighbors, coworkers, church members, and even strangers met on the Internet.

TransplantsLiving

Live donor transplants. Data from UNOS

Even more interesting is the extremely rapid rise of alternative sources of living donors. The last three lines from the figure above are rescaled in the figure below. The fastest growing source of living donors is paired exchanges. As described in a Mar 2010 blog post, a kidney exchange involves the trade of kidneys between two incompatible pairs. Each pair consists of a recipient and a willing, medically suitable donor who is blood type or HLA incompatible with that recipient. Through an exchange, the pair can be matched with another pair in the same situation, where each donor is compatible with the recipient in the other pair. Kidney exchanges have the potential to become the leading source of live donor kidneys within a few years. (More about that and the potential of adding compatible pairs to exchanges in a future blog post.)
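
To make the matching mechanics concrete, here is a minimal sketch in Python of a two-way swap. It reduces compatibility to ABO blood type only and uses made-up pairs; real matching programs also screen for HLA antibodies, crossmatch results, and other factors.

```python
# Toy model of a two-way kidney paired exchange.
# Compatibility is reduced to ABO blood type only; the pairs are made up.

# Donor blood types that a recipient of a given type can accept.
ABO_COMPATIBLE = {
    "O":  {"O"},
    "A":  {"O", "A"},
    "B":  {"O", "B"},
    "AB": {"O", "A", "B", "AB"},
}

def compatible(donor_type, recipient_type):
    return donor_type in ABO_COMPATIBLE[recipient_type]

# Each pair is internally incompatible: the donor cannot give to her own recipient.
pairs = {
    "pair1": {"donor": "A", "recipient": "B"},
    "pair2": {"donor": "B", "recipient": "A"},
}

def find_two_way_swap(pairs):
    """Return the first two pair IDs whose donors can be exchanged."""
    ids = list(pairs)
    for i, p in enumerate(ids):
        for q in ids[i + 1:]:
            if (compatible(pairs[p]["donor"], pairs[q]["recipient"]) and
                    compatible(pairs[q]["donor"], pairs[p]["recipient"])):
                return p, q
    return None

print(find_two_way_swap(pairs))  # -> ('pair1', 'pair2')
```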

TransplantsAlternatives

Exchanges and anonymous transplants. Data from UNOS

A living/deceased exchange is similar to a paired exchange, except that the recipient in the pair receives a kidney from a deceased donor rather than from a live one. Under UNOS rules, people who have donated organs receive 4 points if they ever need to enter the transplant waiting list. This is often enough to move them to the top of the list. In a living/deceased exchange, the donor gives her organ to a patient on the UNOS list and transfers her 4 points to her intended but incompatible recipient. The recipient is then at or near the top of the list to receive an organ from a deceased donor.

Since receiving a kidney from a live donor generally produces better medical outcomes than receiving one from a deceased donor, a living/deceased exchange does not produce the best possible outcome for the recipient participating in the exchange. Yet despite the growing use of paired exchanges, the number of patients participating in living/deceased exchanges is also growing. Hopefully, paired exchanges will grow fast enough to soon make living/deceased exchanges unnecessary. (Incidentally, the process for managing living/deceased exchanges is covered by a patent application. My low opinion of business process software claims can be seen in this Mar 2010 blog post.)

The final fast-growing source of donors is people who donate without a specific recipient in mind. They are called nondirected or altruistic donors. Johns Hopkins Medicine claims to have performed the first nondirected live donor transplant in September 1999 (though UNOS data shows five other nondirected donors in 1999, three in 1998, and one in 1988, the first year data is available).

In addition to increasing the total number of donations, nondirected donors also play an important role in starting donor chains in kidney exchanges. Donor chains reduce the risk to recipients of their matched donors backing out of an exchange after the first transplant takes place. Thus, nondirected donors reduce the need to perform the transplant surgeries simultaneously, which simplifies scheduling personnel and operating rooms for kidney exchange transplants.

To learn more about becoming a nondirected donor in a kidney exchange, contact a transplant center and ask if it participates. Lists of some participating centers are available at the National Kidney Registry, Alliance for Paired Donation, Paired Donation Network, and New England Program for Kidney Exchange.

[Update: This data is examined at the transplant center level in a Jul 2010 blog post.]

Most people are bad at thinking about low probability events, and their eyes can glaze over when confronted with very small or very large numbers. Further, how the data is framed has a big impact on how you react to it.

To take a personal example, I’m about to go in for surgery to donate a kidney next week. [Update: My surgery has been postponed, but that doesn’t affect this analysis.] The chances of me dying are very small, only about 0.02%. I guess that seems very safe. Now let’s frame it differently. There are about 6,500 live kidney transplants a year, which means on average one or two donors die each year. Now my surgery seems a lot more dangerous. By changing from a percentage to an actual count, the number seems more personal. I can imagine that the person who dies is me.
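
Here is the arithmetic behind that reframing, as a quick sketch (the 0.02% mortality rate and the 6,500 live donations per year are the round figures quoted above):

```python
# Two framings of the same donor mortality risk.
mortality_rate = 0.0002          # about 0.02% chance of dying from donor surgery
live_donations_per_year = 6500   # approximate U.S. live kidney donations per year

expected_deaths = mortality_rate * live_donations_per_year
print(f"Per-patient framing: {mortality_rate:.2%} chance of dying")
print(f"Per-population framing: about {expected_deaths:.1f} donor deaths per year")
# -> about 1.3 deaths per year, i.e. one or two donors
```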

Here’s another example. According to the United Network for Organ Sharing (UNOS), as of June 24, there are 85,512 patients in the U.S. waiting for a kidney transplant. It seems like an impossible task to find enough donors to help them all. Assuming no other additions or removals in the next week (which isn’t quite true), that number will drop to 85,511 after I complete my donation. It seems my contribution is insignificant. But I can frame the problem another way. The Univ. Washington Medical Center, where my surgery will take place, has 416 people on the waiting list. After my donation, it will be 415. This makes the impact of my one donation seem a lot bigger.

To generalize the task, there are 249 transplant centers in the U.S. that perform live donor transplants, meaning an average of 341 patients per center. Finding 341 more donors per year at each center seems like a solvable problem, not an unachievable task. I just need to find a group of live donor champions at each hospital, probably previous nondirected donors. Then I need to convince them that they just have to help each of the 341 patients (on average) at their hospital find a donor and the waiting list will go away. To make the task seem even smaller, I can state the problem as finding one donor for each patient on the waiting list. Now it really seems easy. Small integers have a concrete aspect to them. Very large numbers or very small fractions do not, because you just can’t picture them in your mind.
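
The per-center arithmetic, as a rough sketch (using the 85,512 waiting-list total and the 249 centers quoted above, which works out to roughly 340 patients per center):

```python
# Reframing the national waiting list as a per-center problem.
waiting_list = 85_512      # UNOS kidney waiting list as of June 24, 2010
live_donor_centers = 249   # U.S. centers performing live donor transplants

per_center = waiting_list / live_donor_centers
print(f"about {per_center:.0f} patients per center")  # -> roughly 340 per center
```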

Keeping this smaller goal in my head should help keep me motivated as I prepare my outreach efforts to help kidney patients find live donors. (Yes, I’m fooling myself, which violates my Real Numeracy credo. So what?)

Which option seems riskier?
  Choice A: A 0.02% chance of dying from surgery (outcome per patient)
  Choice B: 1 to 2 deaths per year (outcome per population)

Which task seems harder?
  Choice A: Find 85,510 donors for the kidney patients on the UNOS waiting list (count per population)
  Choice B: Find 1 donor for each kidney patient on the UNOS waiting list (count per patient)

by George Taniwaki

The hematologist is concerned about my low white blood count (WBC) and has scheduled a bone marrow biopsy. A bone marrow biopsy will be invasive and painful, but certainly not as invasive or as painful as kidney donor surgery, which I’m already committed to. There is a possibility of complications, but again, the probabilities are much lower than for the surgery. (Yes, I know that the probabilities are cumulative, not comparative.) Plus, I’m a bit intrigued by the opportunity to observe this procedure, though watching it on YouTube may be enough. Also, since the sample is taken from the posterior of the pelvic bone, I won’t actually be able to see my own sample being taken.

What is it?        Bone marrow biopsy
Why is it needed?  Check for abnormalities (esp. cancer) in the blood or bone marrow
How is it done?    A needle is inserted into the pelvic bone to extract a sample of bone marrow and liquid
Preparation        None
Test time          One hour
Risks              Rarely, can cause bleeding, infection, or an allergic reaction to medications
Discomfort         You have to pull down your pants. You will be given local anesthetics and oral pain meds, so you cannot drive or travel alone after the sample is drawn. Even with meds, the test is rather painful

BoneMarrowBiopsyVideo

Bone marrow biopsy (not for the squeamish). Video from csmcd

The transplant coordinator at UWMC puts me in contact with a recent nondirected kidney donor who also had to undergo a bone marrow biopsy. She sends me an email with some excellent advice and ideas about what to expect, which I reproduce below.

“At first I was going to tough it out, just local and it shouldn’t be too bad. I mean, my son was 52 hours of labor with no meds – I managed that, right? But then I was thinking, ‘is there any reason to be in pain when there’s a relatively safe and easy way to not be?’ So I went for Local Plus, a combination of local surface and deep anesthesia plus fentanyl lollipops (really more like a giant Pez on a stick, but that’s what they call them).

“The person who did mine was Dr. Kelly Smith at SCCA. She did a really nice job, listened when I said I could feel what she was doing, gave me more anesthesia, as needed. I probably didn’t need all of the second lollipop, half would have been good, [but] one was not enough. Or maybe half to start, wait and the second half would have worked. I think I had to concentrate on breathing deeply for 2 or 3 minutes during the bone extraction, that’s about it. As it was, I pleasantly felt very little pain and I took a short nap when they were finished before getting up and going home. I did throw up on the way home, I think that was the second half of the second lollipop.

“The only lasting effect was hives from the dressing. My skin is really sensitive and I get hives at the drop of a hat, so I don’t know if you’ll get them too. [I took] lots of Benadryl for a few nights and they went away eventually. My hip was a little achy, but no activity limitations.

“It was nothing like Will Smith in 7 Pounds. Clearly he had no pain relief since he was punishing himself and it was a full on donation, not just a biopsy which is a much smaller extraction. I did find someone who posted [advice] on YouTube, which is how I decided to step up to the Local Plus pain relief.”

****

My bone marrow biopsy is scheduled for the morning. This will be my tenth appointment at UWMC. Actually, it will be at the Seattle Cancer Care Alliance, which is on the fifth floor of UWMC.

Based on the email message I received from the other donor, I’ve decided to try to avoid nausea by not eating any solid food for dinner (just soup) the previous night and not eating any breakfast (just juice) this morning. I will be sedated, so I’ll need to stay home for the rest of the day afterwards, meaning I’ll miss a day of work. (I’m a contractor, so I don’t get paid sick time.) Also because of the sedation, I can’t drive to my appointment. My wife is out of town, so a coworker has kindly volunteered to chauffeur me.

SCCA

Welcome to SCCA. Photo by George Taniwaki

The procedure goes smoothly. It starts with the doctor giving me the standard medical history interview. She takes my blood pressure, which reads 125/85. That’s really high for me; it’s normally 110/70. Perhaps I’m a bit anxious.

A nurse then has me suck on a 200 µg fentanyl lollipop. Once it is about one-third gone, he asks me to lie on my stomach with pants down. After covering me with a sheet and applying iodine disinfectant, the doctor injects lidocaine into the skin on my upper hip. After a couple of minutes she pushes the skin against my pelvic bone and injects lidocaine on the surface of the bone.

After a few more minutes, she inserts the biopsy needle. She asks me if I feel a dull pain or a sharp pain. My reply is, “Actually, I can’t feel anything.” She starts pushing hard and bores through the bone. I can hear the bone grinding away, but don’t feel anything. She removes the center of the needle, drives the needle deeper and takes a core sample and aspirate (liquid marrow). I feel a bit of a tingling pain, but not much else (and with the fentanyl, I don’t care). And then it is over. I still have some of the fentanyl lollipop left and spit it out. Overall, it wasn’t much worse than a trip to the dentist (except for the pants down thing).

BoneMarrowSamples

Core sample and aspirate. Photo by George Taniwaki

After a rest of about 30 minutes, they let me go. I could probably go to work, but decide it would be better to stay home. I don’t feel any pain for the rest of the day. However, as I lie down in bed for the night and the needle spot on my hip touches the mattress, I feel intense pain, as if I had just fallen on my hip on an icy sidewalk. I immediately sit up. I have to sleep on my left side. I expect my hip will be sore for the next few days.

It will take a few days to get the test results. Obviously, I hope that I don’t have cancer. But after all this testing, what I really hope is that I can complete my donation scheduled for next Wednesday.

For more information on becoming a kidney donor, see my Kidney donor guide.

[Update1: Today (June 25), I receive a phone call from Elizabeth Kendrick, the transplant nephrologist. The biopsy results are not available yet. There isn’t enough time for a review, so my donor surgery has been postponed. See Jul 7 blog post for more details.]

[Update2: Added summary table.]

This is a continuation of yesterday’s blog post on BP’s culture of risk.

The causes of the recent accident on the Deepwater Horizon and the resulting Macondo oil spill are still under investigation, but it appears there was no single failure. Instead there was a chain of decisions and events like the one described in the previous blog post for the Ixtoc I oil spill. Some details have been revealed by congressional investigators. The Wall St. J. has reproduced the letter addressed to BP’s chairman from the House Committee on Energy and Commerce. Yesterday’s New York Times has an excellent long article on design weaknesses of blowout preventers.

I won’t speculate about the exact decisions that led to the accident on the Deepwater Horizon rig. I presume that a lot of work went into the design and specification of the equipment, materials, and processes. However, the main contributor to the accident may have been a culture at BP that encouraged engineers to engage in risk creep, ignored the impact of low probability, high cost events, and rewarded overconfidence. I discuss each of these in detail in the sections below.

BP has a reputation of taking on expensive, high-risk engineering projects. It was a participant in the construction of the Trans-Alaska Pipeline, it invests in Russia and Kyrgyzstan, and it was the lead developer of the Thunder Horse PDQ platform, the world’s largest and most expensive offshore platform, which nearly sank after its commissioning in 2005. BP has an explicit strategy of seeking the biggest oil fields in the Gulf of Mexico, even if it means drilling in deep waters far from shore.

ThunderHorse

Thunder Horse platform. Photo from Wikipedia

Nothing attracts top engineering talent like big challenges and an opportunity to work on high-profile, big budget projects. BP provided plenty of that with its Gulf Coast projects. The ability to handle the low temperatures and high pressures at the bottom of the gulf, combined with the ability to accurately guide the drill bit at extreme depths, is an amazing technical achievement. But such ambitious projects can also lead to cost overruns and schedule slips. Combined with pressure to meet budgets and deadlines, they can lead to accidents.

Allowing risk creep

Good engineering practice requires that designs outside the known limits (called the design envelope) be done as experiments, preferably in a laboratory setting, preferably by PhDs who have extensive knowledge of the phenomena being studied, and that lots of data be collected so that the design can be standardized and repeated with confidence. That is, you want to get to the point where the design is easy to replicate and, if you don’t make any avoidable mistakes, it works. However, this doesn’t appear to be what happened in the evolution of deepwater oil drilling. Instead, engineers built deeper, more complex wells without testing their designs adequately prior to implementation.

There are four factors that lead to risk creep. First, long periods of “safe” operation reinforce the belief that the current practices and designs are sufficient. Guess how many wells have been drilled offshore in the Gulf of Mexico since the Ixtoc I accident in 1979? How about 50, or 200, or even 1,000? Not even close; try over 20,000. There have been 22 blowouts. But not all wells are the same; the newer wells are deeper, with colder temperatures and higher pressures. Overcoming the belief that long stretches with few accidents mean everything is well understood and under control is really hard, especially as firms compete with each other to meet production targets and minimize costs.
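
The implied blowout rate from those counts, as a quick sketch (round numbers from the paragraph above; the report linked in the post has the precise figures):

```python
# Rough blowout frequency from the counts quoted above.
wells_since_1979 = 20_000   # "over 20,000" offshore Gulf of Mexico wells
blowouts = 22

rate = blowouts / wells_since_1979
print(f"about 1 blowout per {1 / rate:.0f} wells ({rate:.2%})")
# -> roughly 1 in 900 wells, on the order of 1 in 1,000,
#    nowhere near one in a million.
```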

Second, very little time is spent reflecting on past failures. Failures don’t just mean accidents. For every well blowout, there are thousands of near-miss incidents where dangerous unexpected kicks or casing damage occurred. Most engineers consider it a burden to conduct safety reviews, file incident reports, and attend project post-mortems. Time spent doing this is less time spent on new projects. But reviews allow engineers to see trends. They can also help encourage more of the behaviors that led to good results and eliminate those that caused problems.

Third, engineers may believe that extrapolating current designs to new conditions doesn’t require peer review. Nobody likes to have their work reviewed by outsiders. And managers don’t want to spend the time and money to do it. Unless a deliberate effort is made, peer review never becomes routine practice. Similarly, when time sensitive decisions must be made, it is easier to forge ahead with the current plan (or a quickly improvised new plan) than to stop and consider alternatives.

Finally, the risk may be growing so slowly that nobody who works in the field day-to-day notices that the process is actually out of control.

Ignoring rare events

In his book, The Black Swan: The Impact of the Highly Improbable, Nassim Nicholas Taleb points out that humans are prone to two deceptions. First, we think that chaotic events have a pattern to them. That is, we believe that the best way to predict the future is to look at the recent past. Second, we underestimate the importance of rare events. In fact, we believe that rare events are not worth planning for since they are too infrequent to care about. Tony Hayward, the CEO of BP, called the Macondo oil spill a one-in-a-million event. (It wasn’t; it is closer to 1 in 1,000.) But even if it were, the enormous consequences mean that there is no excuse for not including it in planning at the top levels of the company.

BlackSwan

The Black Swan. Image from Amazon

Rewarding overconfidence

As I mentioned earlier, engineers (and many other professionals) are rewarded for being confident in their projections. Managers select projects based on how confident they are about the chance of success. And they are influenced by the confidence of the engineer proposing the project. So everyone learns to speak with more confidence than is safe.

However, overconfidence doesn’t require an external reward. For example, I believe that I am a better than average driver. I believe I can navigate icy roads safely and can handle any emergency situation. Everyone believes this. When I first get on an icy road, I drive slowly until several drivers pass me. Then I speed up to match the speed of the other drivers and start passing other cars myself. I know I shouldn’t do this, but I do it anyway. I haven’t been in an accident, so that reinforces my behavior. Similarly, every time I get into my car I don’t explicitly consider the chance that I might kill someone. But I should. And I should be reminded of my fallibilities and the dangers every few minutes, lest my attention wander. I should drive every second as if someone will, not just could, die every time I make a mistake.

Proposals for reducing risk

The solution to oil spills is not to stop drilling offshore on the grounds that the technology is inherently unreliable and unsafe, as some writers recommend. Rather, it is to assume that equipment can fail, that hurricanes will strike, that unexpected rock formations exist, that mistakes in selecting the right mud will be made, and that pressure to meet schedules and budgets exists, and then design the mitigation for each.

First, engineers need to admit that they are running experiments whenever they are designing and building something that is even slightly beyond the scope of an existing project. Once engineers admit that what they are doing is an experiment, not just following a recipe in a cookbook, they will be more cognizant of the need to consider the risk, examine alternative methods, take care when collecting data, and spend more time analyzing the data after the end of the project. Managers also need to consider each project an experiment and remember that experiments can fail. They must be willing to nurture calculated risk taking. They must also be willing to accept the cost of mitigation (or the cost of the consequences). It appears that BP’s managers failed at this.

Second, engineers need to be more open about their work. In other fields like physical science and medicine, researchers are encouraged to disclose the results of their work and solicit peer review. Engineers rarely publish their findings, for two reasons. First, they are not paid to. Second, nearly all of their work is considered proprietary by management. Even work that would benefit the industry as a whole, like new safety ideas or techniques to protect the environment, is often hidden from competitors. The government needs to encourage or enforce sharing of safety data, require public reporting of near-miss incidents, and set standards for best practices. Currently, the government relies too heavily on industry expertise. To adequately police the industry, the government needs to start hiring engineers as regulators, recruiting at top universities, paying competitive salaries, and conducting its own research.

Unfortunately, I don’t have high hopes that government regulators, investors, and managers will learn the correct lessons from the Macondo oil spill. Rather than looking at the systemic causes of accidents, we will ban offshore drilling for a few months to assuage the public. Then regulators will write new rules, like requiring acoustic transducers, that show they are getting tough and reforming the industry. But they won’t do anything that actually encourages critical thinking or processes that channel engineers to do the right thing. Then once the public outcry dies down, new technology, risk creep, and overconfidence will return. But it will be invisible until the next accident happens and we are all left wondering again how something awful like that could happen in America.

[Update1: On June 22, a federal judge issued an injunction that struck down the Obama administration’s six-month offshore drilling ban. The Justice Department is preparing an appeal.]

[Update2: I just noticed a really eerie coincidence. In the sixth paragraph, there is a hyperlink to a report that provides the counts of total offshore oil wells and blowouts. The report is dated April 20, 2010, the same day as the Deepwater Horizon accident.]

[Update3: There is a recent AP story that points to some of the same human errors as this blog post.]

The recent fatal accident on the Deepwater Horizon rig and the resulting oil spill from the Macondo well are horrible tragedies. In addition to the deaths of eleven workers, the pictures of the dead wildlife and the despoiled coast are heartbreaking. Since most of the oil is hidden below the surface of the water, the ill effects may last for decades and the long-term environmental, economic, and health consequences are still unknown.

The visible part of this disaster will cause immediate changes in offshore oil regulation, engineering decisions, and corporate behavior. Unfortunately, I don’t think the government or BP will address the underlying causes of the accident. This means that these root causes will not be corrected. Congress is good at holding hearings, finding scapegoats, and passing laws. Regulators are good at making rules, though they often have trouble enforcing them. But the real problems are harder to fix.

Those real problems are risk creep, overconfidence in one’s own abilities, and the inability of humans to base short-term day-to-day decisions on the impact of low probability, high cost events that may occur in the future. These problems are not specific to offshore oil drilling and are endemic to many situations.

A typical public hearing places too much emphasis on affixing blame and putting in place onerous rules to avoid a repeat of the proximate cause of the accident. No effort is made to consider the incentives and psychology that lead people to rely on technology and past good luck to an extent that makes accidents almost inevitable.

I should reveal my own potential bias. I have a degree in chemical engineering. I’ve worked around oil and gas wells. I’ve seen fires and accidents and know of coworkers burned and sent to the hospital. I also have a personal interest in BP. I worked for several years at Amoco, which was acquired by BP. As a result of my employment, I owned a large chunk of BP stock and have a defined benefit pension funded by BP.

Lessons from Ixtoc I

When I was in college, there was a blowout on the Ixtoc I well in the Gulf of Mexico. The accident led to the largest oil spill in history (until now). About 10,000 barrels a day of oil leaked out and it took nine months to cap the well.

Ixtoc_I

Ixtoc I oil spill. Photo from Wikipedia

During drilling, engineers working for Pemex, the well owner, asked the drilling contractor Sedco (now part of Transocean) to replace the heavy drilling mud with a lighter one to speed up drilling. This is risky. A region of soft rock was hit and the lighter mud flowed into the porous rock faster than expected. The operators were unable to pump in more mud fast enough to maintain the hydrostatic pressure. This is a dangerous situation because loss of pressure can allow oil and gas to flow into the borehole (you want this when the well is producing, but not while you are drilling), a problem known as a kick.
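
For intuition, here is a minimal sketch of the hydrostatic balance involved. The mud column must exert more pressure at the bottom of the hole than the formation does, or formation fluids flow in and you get a kick. All the numbers below are illustrative assumptions, not the actual Ixtoc I values.

```python
# Hydrostatic pressure of a mud column vs. formation pore pressure: p = rho * g * h.
# Illustrative numbers only, not Ixtoc I values.
G = 9.81         # m/s^2
DEPTH = 3000.0   # m, assumed depth of the borehole

def bottomhole_pressure(mud_density_kg_m3, depth_m=DEPTH):
    """Pressure (MPa) exerted by the mud column at the bottom of the hole."""
    return mud_density_kg_m3 * G * depth_m / 1e6

formation_pressure = 40.0              # MPa, assumed pore pressure
heavy_mud = bottomhole_pressure(1700)  # ~14.2 lb/gal mud
light_mud = bottomhole_pressure(1300)  # ~10.8 lb/gal mud

for name, p in [("heavy mud", heavy_mud), ("light mud", light_mud)]:
    status = "overbalanced (safe)" if p > formation_pressure else "underbalanced (kick risk)"
    print(f"{name}: {p:.1f} MPa vs formation {formation_pressure:.1f} MPa -> {status}")
```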

Pemex then asked Sedco to stop drilling, pull out the drill pipe, change the drill bit, and then start drilling again with a new bit and heavier mud. As the drill pipe was being removed, oil and gas flooded into the borehole. At this point, the well must be shut in using hydraulic clamps called blowout preventers (BOPs) that choke the pipe until heavier mud can be circulated and hydrostatic control is regained. The BOP was either not activated or did not work. Oil and gas started flowing up the riser. At the surface, the oil and gas should have been vented to a flare. That failed, gas fumes filled the air, and they exploded when ignited by electrical equipment. The drilling platform caught fire and sank.

As engineering students, we talked about the Ixtoc I accident and concluded that the problem was a result of poor decision-making by incompetent engineers. We took as proof the fact that all the top firms recruited at our college (Colorado School of Mines) but Pemex did not. Another factor in our minds was that Pemex is an arm of a third-world government, which must have interfered with the ability of engineers to correctly manage the project. Finally, we imagined that Pemex must have used equipment and drilling materials that were technologically inferior, that their work was sloppy, their roughnecks ill-trained, and that they didn’t care about safety.

I don’t recall any of us thinking there was a systemic problem with how we as engineers think or how business decisions are made. None of us said, “Geez, this could happen to me someday on a project I work on.” We were all infallible and smarter than the engineers who worked on the Ixtoc I well. Similarly, I don’t recall a single one of my professors discussing the accident in class or what we should learn from it.

I believe there were several key items missing in our training. First, not enough emphasis was placed on teaching the danger of extrapolating data. Engineering is all about taking successful existing designs and applying them to novel situations. But students have a hard enough time learning the basics of good design without having the professor throw in trick problems where following all the correct procedures leads to a non-functional solution. There simply isn’t enough time to cover all the latest engineering methods and then go back and discuss the limitations of each one and how to detect if a limit has been reached. This is especially difficult since the limits generally are not known. If they were, someone would develop a new method to replace the old one.

Second, we were never taught to question our abilities. In team projects and presentations, admitting to being uncertain is seen by other team members and faculty as a sign that you haven’t done your work. This results in them not trusting your ability. On the job, engineers who are certain their designs will work get their projects approved. Those without confidence have their designs rejected, which can damage their careers. So everyone learns to speak with more confidence than is safe.

Third, we were not taught qualitative risk assessment. Perhaps even worse, we were taught that risk assessment is a quantitative skill and that risk could be calculated with absolute certainty. Thinking you know all the risks and that you have accurately calculated their probabilities and impacts means you will be overconfident about your chances of success. As students we never learned how to make decisions under uncertainty. This is a problem, since every decision worth making involves handling unknowns and risks. There is design risk, nondiversifiable finance risk, risk of events that have not yet occurred, etc.

Finally, we were not taught the impact of incentives on decision-making. Managers often believe that setting tough goals and rewarding success leads to better performance. But the schedules and budgets are developed in the early phase of a project when uncertainty is high. Very few companies know when to reset schedules and budgets as new data becomes available on a project. Doing it too often leads to demoralized teams. But not doing it can lead to teams that cut corners, violate policies, or burn out.

The next blog post will look at these problems as they apply to the Deepwater Horizon accident and offer partial solutions.

****

If you are interested in seeing the technical data regarding the response to the Macondo well oil spill, check out the U.S. Dept. of Energy, NOAA, and BP websites.

by George Taniwaki

The results from my recent complete blood count (CBC) test are back and the transplant nephrologist at Univ. Washington Medical Center, Elizabeth Kendrick, doesn’t like my low white blood count. She has referred my case to a hematologist. This is a bit worrisome. A low blood count can be caused by many things. The most serious ones are infections that attack white blood cells, like HIV, and cancers of white blood cells, like leukemia or lymphoma. I reply by email with the following additional medical background that I hope will be helpful.

  1. I feel fine.
  2. I’m not on any medications.
  3. I haven’t had any recent infections or allergic reactions. I have had episodes of pityriasis rosea in the past two summers. I believe each was brought on by a spider bite, since each started about a day after a sudden painful stabbing sensation in my hand (2008) or leg (2009) while working in the garden. I never actually saw a spider though.
  4. It is unlikely that I can find any earlier CBC test results for comparison. Until I decided to become a kidney donor I hadn’t been to a doctor in several years. I think my latest checkup was in 2003.

The table below shows my white blood cell and lymphocyte counts since I volunteered to become a kidney donor.

Date        Laboratory   White blood count          Lymphocyte count
                         (normal range 4.3-10.0)    (normal range 1.0-4.8)
6/8/2010    UWMC         3.52                       0.65
9/25/2009   UWMC         3.43                       0.65
9/16/2009   UWMC         2.99                       not performed
3/10/2008   LabCorp      3.8                        0.8

As you can see, my white blood count is about 20% below the minimum for normal and the lymphocyte count is about 30% below the minimum for normal. But neither is close to zero and the numbers have not been bouncing around the past two years. (Leukemia and lymphomas are often characterized by a rise in leukocyte count as the disease spreads, followed by a fall as the cells are killed off.)
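
A quick check of the rough percentages above, using the most recent row of the table (the reference ranges are the ones in the table header):

```python
# How far below the lower limit of normal are the latest counts?
wbc, wbc_low = 3.52, 4.3       # white blood count and lower limit, 6/8/2010
lymph, lymph_low = 0.65, 1.0   # lymphocyte count and lower limit, 6/8/2010

for name, value, low in [("WBC", wbc, wbc_low), ("lymphocytes", lymph, lymph_low)]:
    pct_below = (low - value) / low
    print(f"{name}: {value} is {pct_below:.0%} below the lower limit of {low}")
# -> WBC about 18% below, lymphocytes about 35% below; roughly the
#    rounded figures quoted above.
```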

I contact various people in an attempt to find additional comparative CBC data.

  1. I call the Puget Sound Blood Center. I’ve been a donor since 2001, but discover they do not perform a CBC on donors. (That’s worrisome in itself. But it explains why they ask you to fill out a medical questionnaire each time you donate.)
  2. I call my GP from 1996. Her assistant says records for patients inactive for more than 7 years are destroyed.
  3. I call the clinic of my GP from 2003. She is no longer with the practice, but her records may still be in archive storage. I fill out a form to request them, but it may take a month. Too late for the UWMC, but might make an interesting blog post for later.
  4. I send an email to my brother to see if he has any CBC numbers to share. Perhaps his are low as well and it is a familial trait.
  5. However, I don’t contact my mother. I don’t want to alarm her, so I won’t tell her. (She’s not much of a computer user and I don’t expect her to read this blog post either.)

I soon get an email response from Kami Sneddon, the transplant coordinator at UWMC. The hematologist would like to perform a bone marrow biopsy. I schedule it as soon as I can, but the earliest appointment I can get is this Wednesday, June 23. We’re cutting it awfully close. I hope this doesn’t cause my donor surgery, currently scheduled for next Wednesday, June 30, to be cancelled.

****

After learning that I have a low white blood count (WBC), it dawns on me that I am not at high risk for cancer or blood-borne diseases. If I were a normal patient rather than a potential kidney donor and had these CBC results, a doctor would not order a bone marrow biopsy and the insurance company would likely refuse to pay for it. However, I am not the only one at risk; the recipient is too. Transplanting tissues or organs multiplies the risk of disease by providing a transmission path. A case has been documented in which three recipients were infected with rabies transmitted by a single deceased donor.

Even more important, transplanting an organ from a donor who has any unusual characteristics and later discovering the donor has an infection or cancer would be grounds for a malpractice suit, probably even if the recipient does not get sick (due to mental anguish and emotional distress). So the bone marrow biopsy is not really for my benefit. It is defensive medicine. However, given the high potential cost for this low probability event, it is an understandable precaution. Note that the cost to UWMC would not just be the lawsuit. They would lose business if patients did not trust them, and the public would be hurt by an overall reduction of trust in the medical system.

Here’s a funny email exchange between my wife and me. I reversed the thread so that you can read it from top to bottom.

From: George Taniwaki
Sent: Friday, June 11, 2010 10:48 AM
To: Susan Wolcott
Subject: Using search engines to pick stocks

Check out this story, http://www.technologyreview.com/blog/guest/25308/?nlid=3099
George

From: Susan Wolcott
Sent: Friday, June 11, 2010 10:58 AM
To: George Taniwaki
Subject: RE: Using search engines to pick stocks

And the posted comments are interesting, but in a completely different way…

From: George Taniwaki
Sent: Friday, June 11, 2010 11:40 AM
To: Susan Wolcott
Subject: RE: Using search engines to pick stocks

Yeah. Who are these people and how do they decide 1) to read Technology Review and 2) write political rants?

From: Susan Wolcott
Sent: Friday, June 11, 2010 1:44 PM
To: George Taniwaki
Subject: RE: Using search engines to pick stocks

It’s part of that valuable cognitive surplus.

****

Sue and I obviously have too much time, er… valuable cognitive surplus, on our hands. If you don’t get the reference to cognitive surplus, read this book.
