Three mechanisms of dark selection for ruxolitinib resistance

Last week I returned from the 6th annual IMO Workshop at the Moffitt Cancer Center in Tampa, Florida. As I sketched in an earlier post, my team worked on understanding ruxolitinib resistance in chronic myelomonocytic leukemia (CMML). We developed a suite of integrated multi-scale models for uncovering how resistance arises in CMML despite no apparent strong selective pressures, no changes in tumour burden, and no genetic changes in the clonal architecture of the tumour. On the morning of Friday, November 11th, we were the last of the five groups to present. Eric Padron shared the clinical background, Andriy Marusyk set up our paradox of resistance, and I sketched six of our mathematical models, the experiments they define, and how we plan to go forward with the $50k pilot grant that was the prize of this competition.


You can look through our whole slide deck. But in this post, I will concentrate on the four models that make up the core of our approach: three at the level of cells, corresponding to different mechanisms of dark selection, and one at the level of receptors to justify them. The goal is to show that these models lead to qualitatively different dynamics, different enough that experiments with realistic levels of noise could distinguish between the models.

Dark selection and ruxolitinib resistance in myeloid neoplasms

I am weathering the US election in Tampa, Florida. For this week, I am back at the Moffitt Cancer Center to participate in the 6th annual IMO Workshop. The 2016 theme is one of the biggest challenges to current cancer treatment: therapy resistance. All five teams participating this year are comfortable with the evolutionary view of cancer as a highly heterogeneous disease. And up to four of the teams are ready to embrace and refine the classic model of resistance, which supposes that:

  • Treatment changes the selective pressure on the treatment-naive tumour.
  • This shifting pressure creates a proliferative or survival difference between sensitive cancer cells and either an existing or de novo mutant.
  • The resistant cells then outcompete the sensitive cells and — if further interventions (like drug holidays or new drugs or dosage changes) are not pursued — take over the tumour: returning it to a state dangerous to the patient.

Clinically, this process of response and relapse is typically characterised by a (usually rapid) decrease in tumour burden, a transient period of low tumour burden, and finally a quick return of the disease.
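
This U-shaped burden curve falls out of even the simplest version of the classic model. As a sketch (with purely hypothetical rates and population sizes, chosen only for illustration): sensitive cells decay exponentially under treatment while a tiny pre-existing resistant clone expands.

```python
import math

# Toy sketch of the classic resistance model. All rates and initial
# population sizes below are hypothetical, chosen only for illustration.
def tumour_burden(t, s0=1e9, r0=1e3, death=0.5, growth=0.3):
    """Total burden: sensitive cells dying under treatment plus a
    small pre-existing resistant clone growing unchecked."""
    sensitive = s0 * math.exp(-death * t)   # treatment kills sensitive cells
    resistant = r0 * math.exp(growth * t)   # resistant clone expands
    return sensitive + resistant

burden = [tumour_burden(t) for t in range(80)]
# The total first falls, passes through a trough, then rises again as
# the resistant clone takes over: the clinical U curve of relapse.
```

With these toy numbers the trough sits around t ≈ 18; changing the resistant clone's initial size or growth rate shifts when the relapse half of the U becomes visible.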

But what if your cancer isn’t very heterogeneous? What if therapy introduces no proliferative or survival differences among the tumour cells? And what if you don’t see the U curve of tumour burden, but resistance still emerges? This year, that is the paradox facing team orange as we look at chronic myelomonocytic leukemia (CMML) and other myeloid neoplasms.

CMML is a leukemia that usually occurs in the elderly and is the most frequent myeloproliferative neoplasm (Vardiman et al., 2009). It has a median survival of 30 months, with death coming from progression to AML in a third of cases and from cytopenias in the rest. In 2011, the dual JAK1/JAK2 inhibitor ruxolitinib was approved for treatment of the related cancer myelofibrosis based on its ability to relieve the symptoms of the disease. Recently, it has also started to see use for CMML.

When treating these cancers with ruxolitinib, Eric Padron — our clinical leader alongside David Basanta and Andriy Marusyk — sees the drastic reduction and then relapse in symptoms (most notably fatigue and spleen size) but none of the microdynamical signs of the classic model of resistance. We see the global properties of resistance, but not the evidence of selection. To make sense of this, our team has to illuminate the mechanism of an undetected — dark — selection. Once we classify this microdynamical mechanism, we can hope to refine existing therapies or design new therapies to adapt to it.


Cytokine storms during CAR T-cell therapy for lymphoblastic leukemia

For most of the last 70 years or so, treating cancer meant one of three things: surgery, radiation, or chemotherapy. In most cases, some combination of these remains the standard of care. But cancer research does not stand still. More recent developments have included a focus on immunotherapy: using, modifying, or augmenting the patient’s natural immune system to combat cancer. Last week, we pushed the boundaries of this approach forward at the 5th annual Integrated Mathematical Oncology Workshop. Divided into four teams of around 15 people each — mathematicians, biologists, and clinicians — we competed for a $50k start-up grant. This was my 3rd time participating,[1] and this year — under the leadership of Arturo Araujo, Marco Davila, and Sungjune Kim — we worked on chimeric antigen receptor T-cell therapy for acute lymphoblastic leukemia. CARs for ALL.

Team Red busy at work in the collaboratorium. Photo by team leader Arturo Araujo.

In this post I will describe the basics of acute lymphoblastic leukemia, CAR T-cell therapy, and one of its main side-effects: cytokine release syndrome. I will also provide a brief sketch of a machine learning approach to and justification for modeling the immune response during therapy. However, the mathematical details will come in future posts. This will serve as a gentle introduction.


Stem cells, branching processes and stochasticity in cancer

When you were born, you probably had 270 bones in your body. Unless you’ve experienced some very drastic traumas, and assuming that you are fully grown, you probably have 206 bones now. Much like the number and types of internal organs, we can call this question of science solved. Unfortunately, it isn’t always helpful to think of you as made of bones and other organs. For medical purposes, it is often better to think of you as made of cells. It becomes natural to ask how many cells you are made of, and then maybe classify them into cell types. Of course, you wouldn’t expect this number to be as static as the number of bones or organs, as individual cells constantly die and are replaced, but you’d expect the approximate number to be relatively constant. This number is surprisingly difficult to measure, and our best current estimate is around 3.72 \times 10^{13} (Bianconi et al., 2013).

Both 206 and 3.72 \times 10^{13} are just numbers, but to a modeler they suggest a very important distinction over which tools we should use. Suppose that my bones and cells randomly popped in and out of existence with about equal probability (thus keeping the average number constant). In that case I wouldn’t expect to see exactly 206 bones, or exactly 37200000000000 cells; if I do a quick back-of-the-envelope calculation then I’d expect to see somewhere between 191 and 220 bones, and between 37199994000000 and 37200006000000 cells. Unsurprisingly, the fluctuation in the number of bones is only around 29 bones, while the number of cells varies by around 12 million. However, in terms of percentages, that is a 14% fluctuation for the bones and only a 0.00003% fluctuation in the cell count. This means that in terms of dynamic models, I would be perfectly happy to model the cell population by its average, since the stochastic fluctuations are irrelevant; but for the bones, a 14% fluctuation is noticeable, so I would need to worry about the individual bones (and we do; we even give them names!) instead of approximating them by an average. The small-number regime is a case where results can depend heavily on whether one picks a discrete or continuous model.
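
The back-of-the-envelope above is just square-root scaling: for a count that fluctuates Poisson-like around its mean N, the typical spread is on the order of \sqrt{N} (I used a 2\sqrt{N} window), so the relative spread shrinks as 1/\sqrt{N}. A quick check of the numbers:

```python
import math

def fluctuation(n):
    """Approximate 2-sigma spread for a Poisson-like count of mean n."""
    spread = 2 * math.sqrt(n)   # typical +/- window around the mean
    relative = spread / n       # spread as a fraction of the mean
    return spread, relative

bones_spread, bones_rel = fluctuation(206)
cells_spread, cells_rel = fluctuation(3.72e13)
# ~29 bones (14% of the mean) versus ~12 million cells (0.00003% of the
# mean): absolute fluctuations grow with n, relative fluctuations vanish.
```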

In ecology, evolution, and cancer, we are often dealing with huge populations closer to the number of cells than the number of bones. In this case, it is common practice to keep track of the averages and not worry too much about the stochastic fluctuations. A standard example of this is replicator dynamics — a deterministic differential equation governing the dynamics of average population sizes. However, this is not always a reasonable assumption. Some special cell types, like stem cells, are often found in very low quantities in any given tissue but are of central importance to cancer progression. When we are modeling such low quantities — just like in the cartoon example of disappearing bones — it becomes necessary to track the stochastic effects explicitly, although we don’t have to name each stem cell. In these cases we switch to modeling techniques like branching processes. I want to use this post to highlight the many great examples of models based on branching processes that we saw at the MBI Workshop on the Ecology and Evolution of Cancer.
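
To make the bones-versus-cells point concrete for stem cells, here is a minimal Galton-Watson branching process; the offspring distribution (each cell either dies or divides in two) and its probability are hypothetical, purely for illustration. Even though the average behaviour is exponential growth, a lineage started from a single cell usually goes extinct by chance, which no average-based deterministic model can capture.

```python
import random

def branching_trial(p_divide=0.6, max_steps=50, cap=1000):
    """Simulate one Galton-Watson lineage from a single cell.

    Each generation, every cell independently divides into two
    (probability p_divide) or dies. Returns True if the lineage
    goes extinct within max_steps (or before hitting the cap)."""
    n = 1
    for _ in range(max_steps):
        if n == 0:
            return True           # extinct
        if n >= cap:
            return False          # effectively escaped extinction
        n = sum(2 for _ in range(n) if random.random() < p_divide)
    return n == 0

random.seed(1)
trials = 2000
extinction_fraction = sum(branching_trial() for _ in range(trials)) / trials
# With p_divide = 0.6 the mean offspring number is 1.2 (supercritical
# growth on average), yet branching-process theory gives an extinction
# probability of (1 - p)/p = 2/3: most single-cell lineages still die out.
```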

Experimental and comparative oncology: zebrafish, dogs, elephants

One of the exciting things about mathematical oncology is that thinking about cancer often forces me to leave my comfortable arm-chair and look at some actual data. No matter how much I advocate for the merits of heuristic modeling, when it comes to cancer, data-agnostic models take a back seat to data-rich modeling. This close relationship between theory and experiment is of great importance to the health of a discipline, and the MBI Workshop on the Ecology and Evolution of Cancer highlights the health of mathematical oncology: mathematicians are sitting side-by-side with clinicians, biologists with computer scientists, and physicists next to ecologists. This means that the most novel talks for me have been the ones highlighting the great variety of experiments that are being done and how they inform theory. In this post I want to highlight some of these talks, with a particular emphasis on using the study of cancer in non-humans to inform human medicine.

From heuristics to abductions in mathematical oncology

As Philip Gerlee pointed out, mathematical oncologists have contributed two main focuses to cancer research. Following Nowell (1976), they’ve stressed the importance of viewing cancer progression as an evolutionary process, and — of less clear-cut origin — of recognizing the heterogeneity of tumours. Hence, it would seem appropriate that mathematical oncologists might enjoy Feyerabend’s philosophy:

[S]cience is a complex and heterogeneous historical process which contains vague and incoherent anticipations of future ideologies side by side with highly sophisticated theoretical systems and ancient and petrified forms of thought. Some of its elements are available in the form of neatly written statements while others are submerged and become known only by contrast, by comparison with new and unusual views.

If you are a total troll or pronounced pessimist you might view this as even lending credence to some anti-science views of science as a cancer of society. This is not my reading.

For me, the important takeaway from Feyerabend is that there is no single scientific method or overarching theory underlying science. Science is a collection of various tribes and cultures, with their own methods, theories, and ontologies. Many of these theories are incommensurable.

Misleading models in mathematical oncology

I have an awkward relationship with mathematical oncology, mostly because oncology has an awkward relationship with math. Although I was vaguely aware that evolutionary game theory (EGT) could be used in cancer research, mostly through Axelrod et al. (2006), I never planned to work on cancer. I wasn’t eager to enter the field because I couldn’t see how heuristic models could be of use in medicine; I thought only insilications could be useful, but EGT was not at a level of sophistication where it could build predictive models. I worried that selling non-predictive models as advice for treatment would only cause harm. However, the internet being the place it is, I ended up running into David Basanta — one of the major advocates of EGT in oncology — and Jacob Scott on Twitter. After looking through some of the literature, I realized that most experimental cancer research was more piecemeal than I expected, and theory was based mostly on ad hoc mental models. This convinced me that there is room for clear mathematical (and maybe computational) reasoning to help formalize and explore these mental models. Now we have a paper applying the Ohtsuki-Nowak transform to studying edge effects in the go-grow game prepped (Kaznatcheev, Scott, & Basanta, 2013), and David and I have a project on chronic myeloid leukemia in the works. The first is a heuristic model building on top of previously developed tools (from my experience, it is rather uncommon to build directly on others’ work in evolutionary game theory and mathematical oncology), and the other is an abductive model using a combination of analytic and machine learning techniques to produce a predictive tool useful in the clinic.

Simplifying models of stem-cell dynamics in chronic myeloid leukemia

If I had to identify just one allergy, it would be bloated models. Although I am happy to play with complicated insilications, if we are looking at heuristics where the exact physical basis of the model is not established then I prefer to build the simplest possible model that is capable of producing the sort of results we need. In particular, I am often skeptical of agent-based models: they are simple to build, but it is also deceptively easy to have the results depend on an arbitrary, data-independent modeling decision — the curse of computing. Therefore, as soon as I saw the agent-based models for the effect of imatinib on stem cells in chronic myeloid leukemia (Roeder et al., 2002; 2006; Horn et al., 2013 — the basic model is pictured above), I was overcome with the urge to replace them with a simpler system of differential equations.
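
To illustrate the kind of replacement I have in mind (and emphatically not the actual Roeder model or its published reduction), here is a toy two-compartment system: quiescent stem cells exchange with cycling ones, and the drug kills only cycling cells. Every compartment, rate, and drug effect below is a hypothetical stand-in.

```python
# Toy two-compartment ODE in the spirit of replacing an agent-based
# stem-cell model with differential equations. All compartments, rates,
# and the drug effect are hypothetical illustrations, NOT the Roeder
# et al. model or any published reduction of it.

def simulate(quiescent=100.0, cycling=10.0, wake=0.05, sleep=0.1,
             divide=0.2, drug_kill=0.3, dt=0.1, steps=1000):
    """Euler-integrate quiescent <-> cycling stem cells under a drug
    that (by assumption) kills only cycling cells."""
    for _ in range(steps):
        dq = sleep * cycling - wake * quiescent
        dc = wake * quiescent - sleep * cycling + (divide - drug_kill) * cycling
        quiescent += dt * dq
        cycling += dt * dc
    return quiescent, cycling

q, c = simulate()
# The drug suppresses the cycling pool quickly, while the quiescent pool
# drains only slowly through the wake rate -- two time-scales from a
# model simple enough to analyse by hand (two linear eigenvalues).
```

The point of such a reduction is that the long-term behaviour is now transparent: it is governed by the slow eigenvalue of a 2x2 linear system, instead of being buried in agent-based simulation runs.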

Predicting the risk of relapse after stopping imatinib in chronic myeloid leukemia

To escape the Montreal cold, I am visiting the Sunshine State this week. I’m in Tampa for Moffitt’s 3rd annual integrated mathematical oncology workshop. The goal of the workshop is to lock clinicians, biologists, and mathematicians in the same room for a week to develop and implement mathematical models focused on personalizing treatment for a range of different cancers. The event is structured as a competition between four teams of ten to twelve people, each focused on a specific cancer type. I am on Javier Pinilla-Ibarz, Kendra Sweet, and David Basanta’s team working on chronic myeloid leukemia. We have a nice mix of three clinicians, one theoretical biologist, one machine learning scientist, and five mathematical modelers from different backgrounds. The first day was focused on getting modelers up to speed on the relevant biology and defining a question to tackle over the next three days.