Three mechanisms of dark selection for ruxolitinib resistance

Last week I returned from the 6th annual IMO Workshop at the Moffitt Cancer Center in Tampa, Florida. As I’ve sketched in an earlier post, my team worked on understanding ruxolitinib resistance in chronic myelomonocytic leukemia (CMML). We developed a suite of integrated multi-scale models for uncovering how resistance arises in CMML with no apparent strong selective pressures, no changes in tumour burden, and no genetic changes in the clonal architecture of the tumour. On the morning of Friday, November 11th, we were the last of the five groups to present. Eric Padron shared the clinical background, Andriy Marusyk set up our paradox of resistance, and I sketched six of our mathematical models, the experiments they define, and how we plan to go forward with the $50k pilot grant that was the prize of this competition.


You can look through our whole slide deck. But in this post, I will concentrate on the four models that make up the core of our approach: three models at the level of cells, corresponding to different mechanisms of dark selection, and one model at the level of receptors to justify them. The goal is to show that these models lead to dynamics different enough that experiments with realistic levels of noise could distinguish between them.

Dark selection and ruxolitinib resistance in myeloid neoplasms

I am weathering the US election in Tampa, Florida. For this week, I am back at the Moffitt Cancer Center to participate in the 6th annual IMO Workshop. The 2016 theme is one of the biggest challenges to current cancer treatment: therapy resistance. All five teams participating this year are comfortable with the evolutionary view of cancer as a highly heterogeneous disease. And up to four of the teams are ready to embrace and refine the classic model of resistance. That model supposes that:

  • Treatment changes the selective pressure on the treatment-naive tumour.
  • This shifting pressure creates a proliferative or survival difference between sensitive cancer cells and either an existing or de novo mutant.
  • The resistant cells then outcompete the sensitive cells and — if further interventions (like drug holidays or new drugs or dosage changes) are not pursued — take over the tumour: returning it to a state dangerous to the patient.

Clinically, this process of response and relapse is typically characterised by a (usually rapid) decrease in tumour burden, a transient period of low tumour burden, and finally a quick return of the disease.
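To make the classic model concrete, here is a minimal sketch of the story in code: a toy two-type logistic model with illustrative parameters of my own choosing, not anything fit to data.

```python
# Toy sketch of the classic model of resistance (all parameters illustrative).
# Sensitive cells S suffer an extra death rate d under treatment; resistant
# cells R escape the drug but pay a small growth cost.
def step(S, R, dt=0.01, r=1.0, cost=0.1, d=2.0, K=1.0, treated=True):
    total = S + R
    dS = S * (r * (1 - total / K) - (d if treated else 0.0))
    dR = R * (r * (1 - cost) * (1 - total / K))
    return S + dt * dS, R + dt * dR

S, R = 0.5, 1e-4  # mostly sensitive tumour with a rare resistant clone
burden = []
for _ in range(100000):
    S, R = step(S, R)
    burden.append(S + R)
# burden traces the clinical U-curve: a rapid decrease, a transient period of
# low tumour burden, and then relapse as the resistant clone takes over.
```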

But what if your cancer isn’t very heterogeneous? What if therapy introduces no proliferative or survival differences among the tumour cells? And what if you don’t see the U curve of tumour burden? But resistance still emerges. This year, that is the paradox facing team orange as we look at chronic myelomonocytic leukemia (CMML) and other myeloid neoplasms.

CMML is a leukemia that usually occurs in the elderly and is the most frequent myeloproliferative neoplasm (Vardiman et al., 2009). It has a median survival of 30 months, with death coming from progression to AML in a third of cases and from cytopenias in the rest. In 2011, the dual JAK1/JAK2 inhibitor ruxolitinib was approved for treatment of the related cancer myelofibrosis, based on its ability to relieve the symptoms of the disease. Recently, it has also started to see use for CMML.

When treating these cancers with ruxolitinib, Eric Padron — our clinical leader alongside David Basanta and Andriy Marusyk — sees a drastic reduction and then relapse in symptoms (most notably fatigue and spleen size) but none of the microdynamical signs of the classic model of resistance. We see the global properties of resistance, but not the evidence of selection. To make sense of this, our team has to illuminate the mechanism of an undetected — dark — selection. Once we classify this microdynamical mechanism, we can hope to refine existing therapies or design new ones to adapt to it.


Drug holidays and losing resistance with replicator dynamics

A couple of weeks ago, before we all left Tampa, Pranav Warman, David Basanta and I frantically worked on refinements of our model of prostate cancer in the bone. One of the things that David and Pranav hoped to get from the model was a set of conditions under which adaptive therapy (or just treatment interrupted with non-treatment holidays) performs better than solid blocks of treatment. As we struggled to find parameters that might achieve this result, my frustration drove me to embrace the advice of George Pólya: “If you can’t solve a problem, then there is an easier problem you can solve: find it.”

In this case, I opted to remove all mentions of the bone and cancer. Instead, I asked a simpler but more abstract question: what qualitative features must a minimal model of the evolution of resistance have in order for drug holidays to be superior to a single treatment block? In this post, I want to set up this question precisely, show why drug holidays are difficult in evolutionary models, and propose a feature that makes drug holidays viable. If you find this topic exciting then you should consider registering for the 6th annual Integrated Mathematical Oncology workshop at the Moffitt Cancer Center.[1] This year’s theme is drug resistance.
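To see why holidays are hard to justify in the simplest models, consider this hedged sketch (parameters and schedule lengths are made up) comparing the same total time-on-drug delivered as one block versus interrupted pulses, in a two-type replicator model with a constant cost of resistance:

```python
# Two-type replicator sketch of treatment scheduling (illustrative only).
# x is the proportion of resistant cells; resistance carries a constant cost,
# and treatment penalizes sensitive cells while it is switched on.
def simulate(schedule, x=0.01, dt=0.01, cost=0.1, drug=0.5):
    for treated in schedule:
        f_res = 1 - cost                      # resistant fitness
        f_sen = 1 - (drug if treated else 0)  # sensitive fitness
        x += dt * x * (1 - x) * (f_res - f_sen)  # replicator equation
    return x

steps = 100000  # same total time-on-drug in both schedules
block = [True] * (steps // 2) + [False] * (steps // 2)
pulsed = ([True] * 1000 + [False] * 1000) * (steps // 2000)
print(simulate(block), simulate(pulsed))
```

Because the logit of x simply integrates the fitness difference over time, only total time-on-drug matters here: both schedules land in (numerically almost exactly) the same place. The post asks what qualitative feature must be added before holidays can genuinely win.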

Evolutionary dynamics of acid and VEGF production in tumours

Today was my presentation day at ECMTB/SMB 2016. I spoke in David Basanta’s mini-symposium on the games that cancer cells play, and postered during the poster session. The mini-symposium started with a brief intro from David, and had 25-minute talks from Jacob Scott, myself, Alexander Anderson, and John Nagy. David, Jake, Sandy, and John are some of the top mathematical oncologists and really drew a crowd, so I felt privileged at the opportunity to address it. It was also just fun to see lots of familiar faces in the same place.

A crowded room by the end of Sandy’s presentation.

My talk was focused on two projects. The first part was the advertised “Evolutionary dynamics of acid and VEGF production in tumours” that I’ve been working on with Robert Vander Velde, Jake, and David. The second part — and my poster later in the day — was the additional “(+ measuring games in non-small cell lung cancer)” based on work with Jeffrey Peacock, Andriy Marusyk, and Jake. You can download my slides here (also the poster), but they are probably hard to make sense of without the accompanying talk. I had intended to have a preprint out on this prior to today, but it will follow next week instead. Since there are already many blog posts about the double goods project on TheEGG, in this post I will organize them into a single annotated linkdex.


Hamiltonian systems and closed orbits in replicator dynamics of cancer

Last month, I classified the possible dynamic regimes of our model of acidity and vasculature as linear goods in cancer. In one of those dynamic regimes, there is an internal fixed point, and I claimed that the orbits around that point are closed. However, I did not justify or illustrate this claim. In this post, I will sketch how to prove that those orbits are indeed closed, and show some examples. In the process, we’ll see how to transform our replicator dynamics into a Hamiltonian system and use standard tricks from classical mechanics to our advantage. As before, my tricks will draw heavily from Hauert et al.’s (2002) analysis of the optional public good game. Studying this classic paper closely is useful for us because of an analogy that Robert Vander Velde found between the linear version of our double goods model for the Warburg effect and the optional public good game.
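For readers who won't click through, here is the generic flavour of the trick (this is the textbook version; the post's actual change of variables follows Hauert et al. (2002)). Given replicator dynamics with an interior fixed point $x^*$, consider the candidate constant of motion

\[
H(x) = -\sum_i x_i^{*} \ln x_i, \qquad \dot{x}_i = x_i\left(f_i(x) - \bar{f}(x)\right).
\]

Along trajectories, $\dot{H} = -\sum_i x_i^{*}\left(f_i(x) - \bar{f}(x)\right)$, which vanishes for suitable games (zero-sum matrix games are the classic case), so each trajectory is confined to a level set of $H$: the closed orbits around $x^*$.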

The post will mostly be about the mathematics. However, at the end, I will consider an example of how this sort of cyclic dynamics can matter for treatment. In particular, I will consider what happens if we target aerobic glycolysis with a drug like lonidamine but stop the treatment too early.


Acidity and vascularization as linear goods in cancer

Last month, Robert Vander Velde discussed a striking similarity between the linear version of our model of two anti-correlated goods and the Hauert et al. (2002) optional public good game. Robert didn’t get a chance to go into the detailed math behind the scenes, so I wanted to do that today. The derivations here will be in the context of mathematical oncology, but will follow the earlier ecological work closely. There is only a small (and generally inconsequential) difference between the mathematics of the double anti-correlated goods and the optional public goods games. Keep your eye out for it, dear reader, and mention it in the comments if you catch it.[1]

In this post, I will remind you of the double goods game for acidity and vascularization, show you how to simplify the resulting fitness functions in the linear case — without using the approximations needed for the general case — and then classify the possible dynamics. From the classification, I will speculate on how to treat the game to take us from one regime to another. In particular, we will see the importance of treating anemia, why buffer therapy can be effective, and why the same cannot be said for bevacizumab.
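The classification itself rests on standard replicator machinery. Schematically (this is the generic form; the post derives the specific linear fitness functions $w_i$):

\[
\dot{x}_i = x_i \left( w_i(x) - \langle w \rangle \right), \qquad \langle w \rangle = \sum_j x_j\, w_j(x),
\]

and the possible regimes are read off from which fixed points (corners of the simplex, edges, or an interior point) exist and whether they attract or repel.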


Population dynamics from time-lapse microscopy

Half a month ago, I introduced you to automated time-lapse microscopy, but I showed the analysis of only a single static image. I didn’t take advantage of the rich time-series that the microscope provides for us. A richness that becomes clearest with video:

Above, you can see two types of non-small cell lung cancer cells growing in the presence of 512 nM of Alectinib. The cells fluorescing green are parental cells that are susceptible to the drug, and the ones in red have an evolved resistance. Over the 3 days of the video, you can see the cells growing and expanding. It is the size of these populations that we want to quantify.

In this post, I will remedy last week’s omission and share some empirical population dynamics. As before, I will include some of the Python code I built for these purposes. This time the code is specific to how our microscope exports its data, and so probably not as generalizable as one might want. But hopefully it will still give you some ideas on how to code analysis for your own experiments, dear reader. As always, the code is on my github.
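The real parsing code is tied to our microscope's export format, but the core of the pipeline can be sketched generically. A hedged sketch (assuming one grayscale image per time point, and using scikit-image, which need not match the libraries in my repository): threshold each frame and track the total fluorescent area over time.

```python
import numpy as np
from skimage import io, filters

def fluorescent_area(path):
    """Total above-threshold area of one frame, in pixels."""
    frame = io.imread(path, as_gray=True)
    return np.sum(frame > filters.threshold_otsu(frame))

# hypothetical file names; our real export format differs
paths = ['frame_{:03d}.tif'.format(t) for t in range(72)]
sizes = [fluorescent_area(p) for p in paths]

# frame-to-frame growth rates as a sanity check on roughly exponential growth
rates = np.diff(np.log(sizes))
```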

Although the opening video considers two types of cancer cells competing, for the rest of the post I will consider last week’s system: coculturing Alectinib-sensitive (parental) non-small cell lung cancer and fibroblasts in varying concentrations of Alectinib. Finally, this will be another tools post, so the only conclusions of interest are sanity checks. Next week I will move on to more interesting observations using this sort of pipeline.

Counting cancer cells with computer vision for time-lapse microscopy

Some people characterize TheEGG as a computer science blog. And although (theoretical) computer science almost always informs my thought, I feel like it has been a while since I have directly dealt with the programming aspects of computer science here. Today, I want to remedy that. In the process, I will share some Python code and discuss some new empirical data collected by Jeff Peacock and Andriy Marusyk.[1]

Together with David Basanta and Jacob Scott, the five of us are looking at the in vitro dynamics of resistance to Alectinib in non-small cell lung cancer. Alectinib is a new ALK-inhibitor developed by the Chugai Pharmaceutical Co. that was approved for clinical use in Japan in 2014, and in the USA at the end of 2015. Currently, it is intended for tough lung cancer cases that have failed to respond to crizotinib. Although we are primarily interested in how Alectinib resistance develops and unfolds, we realize the importance of the tumour’s microenvironment, so one of our first goals — and the focus here — is to see how the Alectinib-sensitive cancer cells interact with healthy fibroblasts. Since I’ve been wanting to learn basic computer vision skills and refresh my long-lapsed Python knowledge, I decided to hack together some cell counting algorithms to analyze our microscopy data.[2]
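The skeleton of such a counter is short. Here is a hedged sketch using scikit-image (not necessarily the libraries the post settles on): smooth, threshold, drop debris, and count connected components.

```python
from skimage import io, filters, measure, morphology

def count_cells(path, min_area=50):
    """Rough cell count: threshold a fluorescence image and count blobs.

    min_area is a made-up debris cutoff in pixels; real images need tuning,
    and touching cells need watershed-style splitting to count correctly.
    """
    img = io.imread(path, as_gray=True)
    smooth = filters.gaussian(img, sigma=2)          # suppress pixel noise
    mask = smooth > filters.threshold_otsu(smooth)   # foreground vs background
    mask = morphology.remove_small_objects(mask, min_area)
    labels = measure.label(mask)                     # connected components
    return labels.max()

print(count_cells('well_A1.tif'))  # hypothetical file name
```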

In this post, I want to discuss some of our preliminary work, although due to length constraints there won’t be any results of interest to clinical oncologists in this entry. Instead, I will introduce automated microscopy to computer science readers, so that they know another domain where their programming skills can come in useful; and discuss some basic computer vision, so that non-computational biologists know how (some of) their cell counters (might) work on the inside. Thus, the post will be methods heavy and part tutorial, part background, with a tiny sprinkle of experimental images.[3] I am also eager for some feedback and tips from readers who are more familiar than I am with these methods. So, dear reader, leave your insights in the comments.


Cancer metabolism and voluntary public goods games

When I first came to Tampa to do my Masters[1], my focus turned to explanations of the Warburg effect — especially a recent paper by Archetti (2014) — and the acid-mediated tumor invasion hypothesis (Gatenby, 1995; Basanta et al., 2008). In the course of our discussions about Archetti (2013, 2014), Artem proposed the idea of combining two public goods, such as acid and growth factors. In an earlier post, Artem described the model that came out of these discussions. This model uses two “anti-correlated” public goods in tumors: oxygen (from vasculature) and acid (from glycolytic metabolism).

The dynamics of our model have some interesting properties, such as an internal equilibrium and (as we showed later) cycles. When I saw these cycles, I started to think about “games” with similar dynamics to see if they held any insights. One such model was Hauert et al.’s (2002) voluntary public goods game.[2] As I looked closer at the two models, I realized that their properties and logic are much more similar than we initially thought. In this post, I will briefly explain Hauert et al.’s (2002) model and then discuss its potential application to cancer, and to our model.
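As a preview, here is my rough gloss of the payoffs (from memory, so check Hauert et al. (2002) for their exact conventions). Groups of $N$ are sampled from a population of cooperators, defectors, and loners. Loners opt out for a fixed payoff $\sigma$; the $S \geq 2$ participants play a public goods game in which each cooperator contributes a cost $c$, the pot is multiplied by $r$, and the result is split equally among participants:

\[
P_D = \frac{r c\, n_C}{S}, \qquad P_C = P_D - c, \qquad P_L = \sigma, \qquad 0 < \sigma < (r-1)c,
\]

where $n_C$ is the number of cooperators among the $S$ participants. The bound on $\sigma$ makes loners better off than all-defector groups but worse off than all-cooperator groups, and it is this rock-paper-scissors structure that drives the cycles.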

Choosing units of size for populations of cells

Recently, I have been interacting more and more closely with experiment. This has put me in the fortunate position of balancing the design and analysis of both theoretical and experimental models. It is tempting to think of theorists as people who come up with ideas to explain an existing body of facts, and of mathematical modelers as people who try to explain (or represent) an existing experiment. But in healthy collaboration, theory and experiment should walk hand in hand. If experiments pose our problems and our mathematical models are our tools, then my insistence on pairing tools and problems (instead of ‘picking the best tool for the problem’) means that we should be willing to deform both for better communication in the pair.

Evolutionary game theory — and many other mechanistic models in mathematical oncology and elsewhere — typically tracks population dynamics, and thus sets population size (or proportions within a population) as central variables. Most models think of the units of population as individual organisms; in this post, I’ll stick to the petri dish and focus on cells as the individual organisms. We then try to figure out properties of these individual cells and their interactions based on prior experiments or our biological intuitions. Experimentalists also often reason in terms of individual cells, making cells seem like a natural communication tool. Unfortunately, experiments and measurements themselves are usually not about cells. They measure either properties that are only meaningful at the population level — like fitness — or indirect proxies for counts of individual cells — like PSA or the intensity of fluorescence. This often makes the count of individual cells an inferred theoretical quantity rather than a direct observable. And if we are going to introduce an extra theoretical term, then parsimony begs for a justification.
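One concrete reason the inference step is often unnecessary: many quantities of interest do not depend on the unit of population size at all. A hedged sketch with made-up numbers, estimating a per-capita growth rate directly in units of fluorescent area:

```python
import numpy as np

# made-up time series: total fluorescent area (pixels) at hourly time points
hours = np.arange(24)
area = 5000 * np.exp(0.05 * hours) * np.random.lognormal(0, 0.02, size=24)

# The slope of log-size against time is a per-capita growth rate whose value
# is the same whether "size" is in pixels, cell counts, or micrograms: a change
# of units multiplies size by a constant, which only shifts the intercept.
growth_rate, _ = np.polyfit(hours, np.log(area), 1)
print(growth_rate)  # close to the 0.05 per hour used to generate the data
```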

But what is so special about the number of cells? In this post, I want to question the reasons to focus on individual cells (at the expense of other choices) as the basic atoms of our ontology.
