Drug holidays and losing resistance with replicator dynamics
September 2, 2016
A couple of weeks ago, before we all left Tampa, Pranav Warman, David Basanta and I frantically worked on refinements of our model of prostate cancer in the bone. One of the things that David and Pranav hoped to see from the model was conditions under which adaptive therapy (or just treatment interrupted with non-treatment holidays) performs better than solid blocks of treatment. As we struggled to find parameters that might achieve this result, my frustration drove me to embrace the advice of George Pólya: “If you can’t solve a problem, then there is an easier problem you can solve: find it.”
In this case, I opted to remove all mentions of the bone and cancer. Instead, I asked a simpler but more abstract question: what qualitative features must a minimal model of the evolution of resistance have in order for drug holidays to be superior to a single treatment block? In this post, I want to set up this question precisely, show why drug holidays are difficult in evolutionary models, and propose a feature that makes drug holidays viable. If you find this topic exciting then you should consider registering for the 6th annual Integrated Mathematical Oncology workshop at the Moffitt Cancer Center.[1] This year’s theme is drug resistance.
Discontinuous treatment powered by holidays
Most discussions of treatment holidays focus on managing toxicity. Especially in the case of chemotherapy, the treatment is often devastating to the patient. For many therapies, the limiting factor is how long (or how strongly) a patient can be treated without being killed by the therapy. Thus, these holidays aren’t the active ingredient of the therapy; they are not what lets the therapy suppress cancer cells. These holidays are a way to manage side-effects. And although I think it can be very important to focus on the toxicity of treatment, in this case I want to think about the case where holidays are a central driver behind the effectiveness of therapy. Is there a class of therapies and evolutionary dynamics such that the drug fails to treat the tumour unless its application is interspersed with holidays of no treatment?
In particular, I want to consider two therapies of the same strength (per unit time) that have the same start and end time. The continuous case runs treatment uninterrupted from the start time to the end time. The discontinuous therapy punctuates blocks of treatment with holidays between the same start and end times. Thus, the discontinuous therapy delivers strictly less total drug. And yet, I want to find settings under which the tumour burden at the end time is lower for the discontinuous therapy.
Minimal model and the difficulty of discontinuous treatment
Of course, in a real clinical setting, we might not care if a discontinuous treatment works by allowing a different dosage, recovery of sensitivity, or booting up the patient’s immune system. We only really care that it works. But if we consider all three (or more) effects at once in a heuristic model — or in a big simulation — then we won’t really understand why our strategy succeeds. Thus, in this post, I want to isolate just the recovery-of-sensitivity-during-treatment-holidays aspect of the strategy.
I will start with one of the simplest possible models of resistance: a tumour (or pathogen) growing logistically towards its carrying capacity. There is a single therapy and two tumour subtypes: a sensitive type of density x_S that has fitness r_S in the absence of treatment and \hat{r}_S in the presence of treatment; and a resistant type of density x_R that has fitness r_R regardless of whether therapy is present or absent. This gives the dynamics in the presence of therapy:

\dot{x}_S = \hat{r}_S x_S (1 - x_T)
\dot{x}_R = r_R x_R (1 - x_T)
where x_T = x_S + x_R is the total tumour burden, and where we remove the hat over r_S for the times that treatment is off. On the one hand, it is easy to see that we have no hope at all if \hat{r}_S \geq 0 or if r_R \geq r_S. On the other hand, if r_R \leq 0 then the tumour is cured with continuous therapy. Thus, we will focus on \hat{r}_S < 0 < r_R < r_S.
Unfortunately, under these conditions, it is clear that \dot{x}_R \geq 0 for all time whenever x_R > 0 and x_T < 1. Thus, the density of resistant cells will always be increasing and will only slow down as the total tumour burden x_T approaches 1.
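This difficulty is easy to check numerically. Below is a minimal sketch of the treatment-on dynamics; all parameter values and initial conditions are illustrative assumptions, not values from the figure later in this post.

```python
# Treatment-on dynamics: dx/dt = r * x * (1 - x_T) for each subtype,
# integrated by a simple Euler scheme. Parameter values are illustrative.
def step(x_S, x_R, r_S, r_R, dt=0.01):
    x_T = x_S + x_R
    return (x_S + dt * r_S * x_S * (1 - x_T),
            x_R + dt * r_R * x_R * (1 - x_T))

x_S, x_R = 0.5, 0.05        # initial densities (illustrative)
r_S_hat, r_R = -0.5, 0.2    # sensitive cells die under therapy; resistant grow
history = [x_R]
for _ in range(10000):      # one hundred time units of continuous therapy
    x_S, x_R = step(x_S, x_R, r_S_hat, r_R)
    history.append(x_R)

# The resistant density never decreases under continuous therapy.
assert all(b >= a for a, b in zip(history, history[1:]))
```

Since every growth term is multiplied by the free space (1 - x_T), the resistant increment is non-negative whenever x_T < 1, exactly as argued above.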
Resensitization of the resistant population
To overcome this, we need a way for \dot{x}_R < 0. This can be done by allowing for a flow from resistant to sensitive cells in the absence of therapy. In other words, when therapy is absent, our equations will be:

\dot{x}_S = r_S x_S (1 - x_T) + \mu x_R
\dot{x}_R = r_R x_R (1 - x_T) - \mu x_R
There are several ways we could interpret these equations, but my favourite option is to think of them as individual cells losing resistance. When not under stress, each resistant cell re-evaluates its commitment to investing energy in resistance and gives up that strategy at a rate of \mu. It is important to note that, unlike mutation, this is not linked to reproduction. In particular, \mu does not necessarily need to depend on the difference r_S - r_R, and — more importantly — it is independent of the remaining free space 1 - x_T.[2] This means that we can have suppression of resistance (\dot{x}_R < 0) when therapy is off, even while x_T \approx 1. In other words, there is hope of controlling the resistant population in large tumours, even if we cannot eliminate the whole population.
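A quick numerical check of this point, under assumed illustrative parameter values, shows the resistant density falling during a holiday even in a nearly full tumour:

```python
# Off-therapy dynamics with re-sensitization at rate mu. The mu * x_R flow
# is NOT multiplied by the free space (1 - x_T), so resistance can shrink
# even when the tumour is near carrying capacity. Values are illustrative.
def step_off(x_S, x_R, r_S, r_R, mu, dt=0.01):
    x_T = x_S + x_R
    dS = r_S * x_S * (1 - x_T) + mu * x_R
    dR = r_R * x_R * (1 - x_T) - mu * x_R
    return x_S + dt * dS, x_R + dt * dR

x_S, x_R = 0.05, 0.93          # nearly full, mostly resistant tumour
r_S, r_R, mu = 0.5, 0.2, 0.3   # illustrative
for _ in range(1000):          # ten time units without therapy
    x_S, x_R = step_off(x_S, x_R, r_S, r_R, mu)

assert x_R < 0.5    # resistant density fell sharply despite x_T near 1
assert x_S > 0.05   # the lost resistance re-appeared as sensitive cells
```

Here the resistant density decays whenever \mu > r_R (1 - x_T), which a full tumour satisfies for any positive \mu.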
When treatment is on, the resistant proportion will grow back, but it is still possible for the total tumour burden to shrink by killing sensitive cells quickly enough. In order for the total tumour volume to shrink, we need \dot{x}_T = (\hat{r}_S x_S + r_R x_R)(1 - x_T) < 0, which is achieved only when f < f^* with f^* = \frac{-\hat{r}_S}{r_R - \hat{r}_S}, where f = x_R / x_T is the fraction of the tumour that is resistant. Thus, the tumour will shrink whenever the fraction of sensitive cells is high enough; i.e. when 1 - f > \frac{r_R}{r_R - \hat{r}_S}. Of course, this means that the fraction of sensitive cells will also decrease, and once it falls below the threshold we need to repeat the cycle of turning therapy on and off.
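The shrinkage threshold follows from setting the growth rate of the total burden to zero. A small sketch, with illustrative parameter values, makes the sign change explicit:

```python
# With sensitive fraction s = x_S / x_T, the burden x_T shrinks under
# treatment iff s > s* = r_R / (r_R - r_S_hat). Values are illustrative.
def critical_sensitive_fraction(r_S_hat, r_R):
    return r_R / (r_R - r_S_hat)

def burden_growth_rate(s, x_T, r_S_hat, r_R):
    # dx_T/dt under treatment, rewritten in terms of the sensitive fraction s
    return (r_S_hat * s + r_R * (1 - s)) * x_T * (1 - x_T)

r_S_hat, r_R = -0.5, 0.2
s_star = critical_sensitive_fraction(r_S_hat, r_R)

assert abs(s_star - 2 / 7) < 1e-12                               # 0.2 / 0.7
assert burden_growth_rate(s_star + 0.01, 0.5, r_S_hat, r_R) < 0  # shrinks
assert burden_growth_rate(s_star - 0.01, 0.5, r_S_hat, r_R) > 0  # grows
```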
Unfortunately, those conditions alone are not enough to guarantee that discontinuous therapies can control the tumour burden and outperform a single continuous treatment. The rate of loss of sensitivity under treatment and the rate of growth of tumour burden without treatment have to be low enough to be surpassed by the rate of loss of resistant density without treatment and the rate of loss of tumour density with treatment. It is probably possible to calculate these rates analytically and also solve for the average burden of the controlled tumour. However, this was meant to be a quick post, so I only present a proof of concept by simulation.[3]
Proof of concept and adaptive therapy
Below is a figure showing an example of when discontinuous therapy outperforms a continuous therapy of equal strength. All parameters and initial conditions — except the times that treatment is turned on or off — are the same across the two simulations of the ODEs, with \hat{r}_S < 0 < r_R < r_S and \mu > 0. This gives a critical proportion of sensitive cells s^* = \frac{r_R}{r_R - \hat{r}_S}; above this, tumours shrink under treatment and below it they grow. This critical proportion is shown in solid green; in dotted green is the actual proportion of sensitive cells x_S / x_T. In solid black is the total tumour burden x_T and in red is the resistant tumour burden x_R. For the times that therapy is off the background is white, and when therapy is on the background is gray.
In the top panel with a continuous therapy, as expected, the resistant population quickly takes over the whole tumour, grows the tumour to carrying capacity, and leaves the patient with a tumour burden of 1. With discontinuous treatment, it is possible to time treatment holidays such that the resistant tumour burden is kept oscillating around ~0.3 and the total tumour burden is kept oscillating around ~0.85 without ever surpassing ~0.93. This is not a huge reduction in tumour burden but it does show how holidays can help us control a tumour that is unresponsive to continuous therapy.
If you are curious, the discontinuous treatments in the second panel are from times 1 to 2, 3.5 to 4.5, 6 to 9, 10 to 13, 14 to 17.5, 18.5 to 22.5, 23.5 to 27, 28 to 32, and 33 to 35, while the continuous treatment is just a single block from 1 to 35. In other words, the discontinuous treatment includes a total of 9 time-steps of holidays distributed over eight holidays (the first two are 1.5 time-steps long while the rest are 1). Thus, the discontinuous treatment produces a better outcome even though it ends up applying less than 74% of the drug-hours of the continuous treatment.
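The schedule bookkeeping above is easy to verify:

```python
# Treatment blocks of the discontinuous arm, straight from the text,
# versus one continuous block from time 1 to 35.
blocks = [(1, 2), (3.5, 4.5), (6, 9), (10, 13), (14, 17.5),
          (18.5, 22.5), (23.5, 27), (28, 32), (33, 35)]
on_time = sum(b - a for a, b in blocks)   # drug-time actually applied
window = 35 - 1                           # length of the continuous block
holidays = window - on_time

assert holidays == 9.0          # nine time-steps of holiday in total
assert len(blocks) - 1 == 8     # spread over eight holidays
assert on_time / window < 0.74  # under 74% of the continuous drug-time
```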
In this example, I tuned the treatment timings by eye, but it is not hard to adapt my strategy into an adaptive therapy. A theoretical route might be to pick an upper sensitivity threshold \sigma > s^*. Whenever the proportion of sensitive cells reaches (or exceeds) \sigma, turn on the treatment; and when it reaches (or falls below) s^*, turn off the treatment. In practice, though, distinguishing sensitive from resistant cells to measure the proportion might be difficult. Instead, we could just track the tumour burden. We only care about crossing s^* because it changes the sign of \dot{x}_T under treatment. For a more practical route, we might turn on treatment whenever the tumour burden passes a threshold x^*, and if treatment is on, turn it off once the tumour burden stops decreasing.
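The practical route can be sketched as a simple switching rule layered on the earlier dynamics. All parameter values and the burden threshold below are illustrative assumptions, not fitted to anything:

```python
# Adaptive rule: treat while the burden is falling; pause when it stops
# falling; resume when the burden climbs back past a threshold x_star.
def simulate(x_S, x_R, steps=20000, dt=0.01,
             r_S=0.5, r_S_hat=-0.5, r_R=0.2, mu=0.3, x_star=0.85):
    on = False
    activations = 0
    prev_T = x_S + x_R
    burdens = []
    for _ in range(steps):
        x_T = x_S + x_R
        if not on and x_T >= x_star:
            on = True                 # burden too high: start treating
            activations += 1
        elif on and x_T > prev_T:
            on = False                # burden stopped falling: take a holiday
        growth_S = (r_S_hat if on else r_S) * x_S * (1 - x_T)
        resens = 0.0 if on else mu * x_R   # re-sensitization only off-therapy
        x_S += dt * (growth_S + resens)
        x_R += dt * (r_R * x_R * (1 - x_T) - resens)
        prev_T = x_T
        burdens.append(x_S + x_R)
    return burdens, activations

burdens, activations = simulate(0.5, 0.05)
assert activations >= 2      # treatment cycles on and off repeatedly
assert max(burdens) < 1.0    # burden stays below carrying capacity
```

The rule needs no knowledge of which cells are resistant: the burden ceasing to fall is the observable proxy for the sensitive fraction having dropped to the critical level.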
The practical route is a pretty obvious strategy for adaptive therapy, and I wouldn’t be surprised if doctors do something like this already. Although usually they switch to a new drug when a tumour stops responding to an existing one. Thus, the difficulty becomes characterising the sort of tumours (or evolutionary dynamics) for which this discontinuous strategy will outperform a continuous counterpart. Then we can know when to return to an old drug to achieve tumour control instead of switching to a new drug in the hopes of a cure.
- [1] Whether you work in mathematical oncology or in more general modeling of biological systems, I recommend this event. I have attended for the past three years and plan to attend this year. I greatly enjoyed each prior year: 2013, 2014, 2015. Here is a video of the 3rd workshop to whet your appetite:
- [2] Note that the structural feature of the fitness-based growth rate depending on the free space 1 - x_T but the phenotype-switching being independent of it is essential for the effect observed in this model. That means that logistic growth is not an accidental but an essential part of this model. If we want the same sort of results with exponentially (instead of logistically) growing populations then they will have to depend on a different sort of mechanism.
- [3] I would prefer not to succumb to the curse of computing and avoid relying on ad hoc examples of parameters from simulations. From playing with simulations, it looks like the result is robust. However, for the parameters I tried, the average burden of the controlled tumour is still uncomfortably high. It seems that each parameter setting has an associated region of control, but I cannot figure out how tweaks to the parameters exactly affect the average controlled burden. An analytic treatment would help me resolve this. Unfortunately, I can only come up with rough analytic approximations by discretizing the cycles into two steps. This is not unreasonable — especially since under log or logit transform the ‘waves’ look close to piecewise linear — but not perfect. If you have ideas on a more grounded analytic treatment, dear reader, then let me know. I will also play around on my own when I have time and report back with any successes.