Cytokine storms during CAR T-cell therapy for lymphoblastic leukemia
November 19, 2015
For most of the last 70 years or so, treating cancer meant one of three things: surgery, radiation, or chemotherapy. In most cases, some combination of these remains the standard of care. But cancer research does not stand still. More recent developments have included a focus on immunotherapy: using, modifying, or augmenting the patient’s natural immune system to combat cancer. Last week, we pushed the boundaries of this approach forward at the 5th annual Integrated Mathematical Oncology Workshop. Divided into four teams of around 15 people each — mathematicians, biologists, and clinicians — we competed for a $50k start-up grant. This was my 3rd time participating,[1] and this year — under the leadership of Arturo Araujo, Marco Davila, and Sungjune Kim — we worked on chimeric antigen receptor T-cell therapy for acute lymphoblastic leukemia. CARs for ALL.
In this post I will describe the basics of acute lymphoblastic leukemia, CAR T-cell therapy, and one of its main side-effects: cytokine release syndrome. I will also provide a brief sketch of a machine learning approach to modeling the immune response during therapy, and a justification for that approach. However, the mathematical details will come in future posts. This will serve as a gentle introduction.
Cross-validation in finance, psychology, and political science
April 20, 2014 by Artem Kaznatcheev
A large chunk of machine learning (although not all of it) is concerned with predictive modeling, usually in the form of designing an algorithm that takes in some data set and returns an algorithm (or sometimes, a description of an algorithm) for making predictions based on future data. In terminology more friendly to the philosophy of science, we may say that we are defining a rule of induction that will tell us how to turn past observations into a hypothesis for making future predictions. Of course, Hume tells us that if we are completely skeptical then there is no justification for induction — in machine learning we usually know this as a no-free-lunch theorem. However, we still use induction all the time, usually with some confidence, because we assume that the world has regularities that we can extract. Unfortunately, this just shifts the problem, since there are countless possible regularities and we have to identify ‘the right one’.
Thankfully, this restatement of the problem is more approachable if we assume that our data set did not conspire against us. That being said, every data set, no matter how ‘typical’, has some idiosyncrasies, and if we tune in to these instead of the ‘true’ regularity then we say we are over-fitting. Being aware of and circumventing over-fitting is usually one of the first lessons of an introductory machine learning course. The general technique we learn is cross-validation or out-of-sample validation. One round of cross-validation consists of randomly partitioning our data into a training and a validating set, then running our induction algorithm on the training set to generate a hypothesis algorithm, which we test on the validating set. A ‘good’ machine learning algorithm (or rule for induction) is one where the performance in-sample (on the training set) is about the same as out-of-sample (on the validating set), and both performances are better than chance. The technique is so foundational that the only reliable way to earn zero on a machine learning assignment is by not doing cross-validation of your predictive models. The technique is so ubiquitous in machine learning and statistics that the StackExchange dedicated to statistics is named CrossValidated. The technique is so…
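To make the procedure concrete, here is a minimal sketch of one round of cross-validation in Python. Everything here — the `cross_validate` helper, the toy threshold-learning rule `fit_threshold`, and the synthetic noisy data — is illustrative and not from any particular library; the point is only the shape of the loop: split randomly, induce a hypothesis on the training set, then compare in-sample and out-of-sample performance.

```python
import random

def cross_validate(data, fit, score, train_frac=0.8, seed=0):
    """One round of cross-validation: randomly partition `data` into a
    training and a validating set, induce a hypothesis on the training
    set, and score it on both."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    train, validate = shuffled[:cut], shuffled[cut:]
    hypothesis = fit(train)  # the rule of induction: data -> predictor
    return score(hypothesis, train), score(hypothesis, validate)

def fit_threshold(train):
    """A toy induction rule: pick the threshold on x that best
    separates the two labels in the training data."""
    best_t, best_acc = 0.0, -1.0
    for t, _ in train:
        acc = sum((x > t) == y for x, y in train) / len(train)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return lambda x: x > best_t

def accuracy(hypothesis, dataset):
    return sum(hypothesis(x) == y for x, y in dataset) / len(dataset)

# Synthetic data: the label is True when x > 0.5, with labels flipped
# 10% of the time to play the role of idiosyncratic noise.
rng = random.Random(42)
data = [(x, (x > 0.5) != (rng.random() < 0.1))
        for x in (rng.random() for _ in range(200))]

in_sample, out_of_sample = cross_validate(data, fit_threshold, accuracy)
```

A ‘good’ outcome here is `in_sample` and `out_of_sample` both well above chance (0.5) and close to each other; a large gap between them is the signature of over-fitting — the threshold has tuned in to the noise rather than the regularity.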
You get the point.
If you are a regular reader, you can probably induce from past posts that my point is not to write an introductory lecture on cross-validation. Instead, I want to highlight some cases in science and society when cross-validation isn’t used, when it needn’t be used, and maybe even when it shouldn’t be used.