A March 2010 blog post explained how kidney exchanges can increase the number of patients who receive a live-donor kidney. It also described how computers can find better (in fact, optimal) matches compared to random or arbitrary assignment. This is all good, but we can do even better: we can add a temporal dimension to the optimization decision.
In the exchange models described in my previous blog post, it was implicitly assumed that the exchange would be run only once. In the real world, however, new traders (patients) enter the exchange daily (and, unfortunately, existing patients may be removed due to death), so the matching algorithm should be run repeatedly, not just once. Two questions arise when there are multiple rounds of matching. How should the matching algorithms described earlier be modified to maximize the number of matches over time? And how often should the matching algorithm be run?
If all matches found in a round are allowed to exchange, the pool of remaining traders will be depleted, leaving only the hardest-to-match patients. Any new pairs entering the exchange in the next round will then be unlikely to find a match. If our goal is to maximize the number of matches over time, we may need to hold back some pairs each round in order to seed the next round.
We don’t know a priori the blood type and HLA type of the new donors and recipients entering the exchange in any future round (that is, the composition of entrants is stochastic). However, we can model them using the historical medical data available on all previous donor/recipient pairs that have entered the exchange. Similarly, the pairs exiting the exchange between rounds are also not known in advance, but they can be modeled from historical data on the health and age of previous patients and donors who exited the exchanges.
Since the exact composition of traders in the next round is not known, our optimization technique needs to be probabilistic. A model in which each round’s outcome results from a combination of random inputs and the decision-maker’s choices, and in which today’s choices shape tomorrow’s options, is known as a Markov decision process. There is plenty of previous research on these models that we can build on.
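To make the Markov decision process framing concrete, here is a minimal sketch of one exchange round as an MDP transition. Everything specific in it is assumed for illustration: the pool is collapsed to a single count, only 2-way swaps are modeled, and the arrival and departure rates are invented, not calibrated to any real exchange.

```python
import random

def step(pool_size, matches_to_execute, rng):
    """One round of the exchange viewed as an MDP transition.

    state: the current pool of pairs (reduced here to a single count)
    action: how many of the available 2-way matches to execute now
    randomness: arrivals and departures between rounds
    reward: transplants performed this round
    """
    reward = 2 * matches_to_execute      # a 2-way swap = 2 transplants
    pool_size -= 2 * matches_to_execute  # matched pairs leave the pool

    arrivals = rng.randint(0, 5)         # stochastic new entrants
    departures = sum(1 for _ in range(pool_size)
                     if rng.random() < 0.02)  # stochastic exits
    return pool_size + arrivals - departures, reward
```

In this framing, a policy is a rule mapping the current pool to how many matches to execute now versus hold back, and the optimization problem is choosing the policy that maximizes expected cumulative reward (total transplants) over many rounds.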
In a 2009 paper at the International Joint Conference on Artificial Intelligence (IJCAI), Tuomas Sandholm, one of the developers of the exact matching algorithm described in my previous blog post, and his coauthor tested models for multi-round exchanges by varying three parameters: sample size, look-ahead depth, and batch size.
Sample size is the size of a subset of the current traders in the exchange to be examined. A sample is used rather than the entire pool to reduce computational effort. The smaller the sample size, the faster the match can run, but the less accurate the result. Look-ahead depth is the number of future rounds to look at. As the number of future rounds examined goes up, the number of possible outcomes grows exponentially, as does the computing effort needed to explore them.
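The way sampling tames that exponential growth can be sketched with a Monte Carlo look-ahead: instead of enumerating every possible sequence of future arrivals, we average the outcome over a fixed number of sampled trajectories. The arrival and match counts below are toy stand-ins (my own assumed distributions) for sampling realistic donor/recipient pairs from historical data and running the actual compatibility-graph matcher.

```python
import random

def lookahead_value(pool_size, depth, num_samples, rng):
    """Monte Carlo estimate of transplants achievable over future rounds.

    Enumerating all arrival sequences grows exponentially with depth;
    averaging over num_samples sampled trajectories keeps the cost
    linear in depth * num_samples, at the price of estimation noise.
    """
    total = 0
    for _ in range(num_samples):
        pool, matched = pool_size, 0
        for _ in range(depth):
            pool += rng.randint(0, 3)              # sampled arrivals
            m = min(pool // 2, rng.randint(0, 4))  # sampled matches found
            matched += 2 * m                       # 2 transplants per swap
            pool -= 2 * m
        total += matched
    return total / num_samples
```

Raising `num_samples` or `depth` improves the estimate at a proportional cost in compute, which is exactly the sample-size and look-ahead-depth tradeoff the paper explores.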
Batch size is the number of new pairs that enter the exchange before the matching algorithm is run. Ideally, a match could be run every time a new pair enters the exchange. However, this won’t be practical if many pairs enter every day, as could happen in a unified national exchange. Further, a larger batch size also increases the chance of a match, just as a larger pool does, so using a larger batch size reduces the need to use look-ahead to estimate matches for future rounds. For the paper, the authors used batch sizes between 10 and 20, which they translated to a matching round every 1 to 2 months.
(Currently, the various kidney exchanges run their matches on a fixed time schedule rather than when a certain batch size is reached. This helps the participating hospitals know when they can expect matching data to be delivered. Although recipient/donor pairs enter and exit the pool at indeterminate intervals, their numbers per fixed time period can be estimated, if necessary, using the Poisson distribution.)
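That Poisson estimate is easy to sketch. Arrivals in a Poisson process have exponentially distributed inter-arrival gaps, so we can sample the number of entrants in a fixed period with the standard library alone. The daily rate below is an assumed value, chosen only so the numbers line up with the paper's batch of 10–20 pairs per 1 to 2 months.

```python
import random

def arrivals_in_period(rate_per_day, days, rng):
    """Sample how many pairs enter the exchange over `days` days,
    modeling arrivals as a Poisson process with the given daily rate.
    Implemented via exponential inter-arrival gaps (stdlib only)."""
    t, count = 0.0, 0
    while True:
        t += rng.expovariate(rate_per_day)  # time until the next arrival
        if t > days:
            return count
        count += 1

rng = random.Random(42)
# At an assumed 0.4 pairs/day, a 30-day period averages about 12 arrivals,
# i.e. a batch of 10-20 pairs accumulates in roughly 1 to 2 months.
mean = sum(arrivals_in_period(0.4, 30, rng) for _ in range(10_000)) / 10_000
```

A fixed-schedule exchange can use the same model in reverse: given the historical arrival rate, it can predict the expected batch size at each scheduled run.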
The paper reports results using both real data and generated data. For generated data, it found that good results were obtained with sample sizes between 10 and 50 and look-ahead depths between 10 and 20 rounds. As can be seen in the figure below, in the early months the single-round algorithm (labeled offline-based) makes more matches than the multi-round algorithms (labeled 1 and 2), as would be expected, since the multi-round algorithms withhold some trades. But after about 5 months, the multi-round algorithms catch up. After about three years of simulated matches, the multi-round approaches are way ahead and save about 6.5% more lives (about 13 patients in this study) compared to the single-round method.
Image from Intl. Joint Conf. on Artif. Intel.
Thus, by holding back some matches today, we can improve matches tomorrow and save more lives. This is true even though the multi-round algorithm provides an approximate solution, rather than the exact optimal solution that the single-round algorithm does.
Note that none of the kidney exchanges is currently using this multi-round optimization model, and it is easy to see why. It isn’t ethical to run an experiment on real patients in which some are denied a chance at a match today in the hope of saving some other patient’s life at a future date. As noted in the prior blog post, the two major kidney exchanges and the UNOS pilot program all use the exact single-round algorithm.
On the other hand, even if we can’t justify multi-round optimization by saying it saves more lives, perhaps we can justify it by saying it shortens the average waiting time for a match. That’s because even though some people in the early rounds have to wait a bit longer for a match, most of the entrants in later rounds have a shorter wait. The paper doesn’t report differences in average wait times, but it would be easy to calculate.
Another way to encourage adoption of the multi-round algorithm would be to run a comparison between the results of the multi-round algorithm and the single round algorithm at each step and find the patients who are left out by the multi-round algorithm. We can give these traders a greater weight in the next round to reduce the chance that they end up waiting more than a few rounds.
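One simple way to sketch that reweighting: give each pair a priority weight that grows with the number of rounds it has been passed over, and select matches by total weight rather than by count alone. The greedy selection and the specific weight values below are my own illustrative assumptions; a real exchange would use an exact weighted-matching solver over the full compatibility graph.

```python
def choose_matches(candidates, rounds_waited, bump=0.5):
    """Greedily pick non-overlapping 2-way matches by priority weight.

    candidates: list of (pair_a, pair_b) compatible swaps.
    rounds_waited: pair id -> rounds the pair has already been passed over.
    A pair's weight grows each round it waits, so a long-waiting pair
    beats a newly arrived one for the same slot. The base weight of 2.0
    and the 0.5 bump per waited round are illustrative values.
    """
    def weight(match):
        a, b = match
        return 2.0 + bump * (rounds_waited.get(a, 0) + rounds_waited.get(b, 0))

    chosen, used = [], set()
    for match in sorted(candidates, key=weight, reverse=True):
        a, b = match
        if a not in used and b not in used:
            chosen.append(match)
            used.update((a, b))
    return chosen
```

For example, if pairs p1 and p2 are both compatible with p2 and p3 respectively, but p3 has already waited four rounds, the weighted selection executes the (p2, p3) swap first, bounding how long any one pair can be held back.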
In my next blog post, I will discuss the problem of O blood type patients. Then I will propose a way to modify kidney exchanges to help them.