Where I think the fallacy of jumping to fixed points comes in is in not realizing that hard families of instances are possible in our models, and thus passing up this opportunity to improve our knowledge. I'm not accusing anybody of having committed this fallacy (well, maybe I am accusing THUNK? But in good fun); I am just using evolution and economics as two examples to illustrate this distinction (because these are the areas where I am aware of some fun work being done on this).

I think that a particularly interesting question to think about with regard to this is: given some suitable language of models (say the NK model for fitness landscapes, or convex, monotonic utilities for markets), can we cut up the space of models in a useful way, so that on one side we have the region where fixed points are useful information for making sense of the system, and on the other side the region where they are not very informative? Then maybe we can start to reason empirically about which of these two subspaces has more probability mass in the 'real world'.
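To make the NK side of this concrete, here is a minimal sketch of a random NK landscape with a greedy adaptive walk whose fixed points are exactly the local fitness peaks. The choice of neighbourhood structure (each locus interacting with its K successors) and the lazily drawn random contribution tables are my own illustrative choices, not a canonical implementation; the question in the comment is for which regions of (N, K) the fixed points such walks reach are actually informative.

```python
import random

def nk_landscape(N, K, seed=0):
    """Random NK fitness landscape: locus i's contribution depends on
    its own allele and the alleles of its K cyclic successors."""
    rng = random.Random(seed)
    tables = [{} for _ in range(N)]  # lazily filled contribution tables
    neighbours = [[(i + j) % N for j in range(K + 1)] for i in range(N)]

    def fitness(genotype):
        total = 0.0
        for i in range(N):
            key = tuple(genotype[j] for j in neighbours[i])
            if key not in tables[i]:
                tables[i][key] = rng.random()  # draw contribution once per state
            total += tables[i][key]
        return total / N

    return fitness

def adaptive_walk(fitness, genotype):
    """Greedy one-bit-flip hill climbing; stops at a local peak,
    i.e. a fixed point of the adaptive dynamics."""
    steps = 0
    while True:
        best, best_flip = fitness(genotype), None
        for i in range(len(genotype)):
            flipped = genotype[:i] + (1 - genotype[i],) + genotype[i + 1:]
            f = fitness(flipped)
            if f > best:
                best, best_flip = f, flipped
        if best_flip is None:
            return genotype, steps  # no fitter neighbour: local peak
        genotype = best_flip
        steps += 1
```

For K = 0 the landscape is smooth and the walk finds the unique peak quickly; as K grows the landscape gets rugged, and whether the reached fixed point tells you much about the system is exactly the kind of question the model-space carving above would have to answer.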

My usual issue with chaotic systems is whether we care about exact quantitative predictions (which become impossible given error in measuring initial conditions) or qualitative features of the system (which, at least according to Mark Braverman, are often easy to arrive at). It would be nice to have a notion of difficulty of prediction that can abstract over reasonable classes of things we might want to predict about the system.
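For a concrete version of this distinction, the logistic map at r = 4 is a standard chaotic toy: two orbits started 10^-9 apart soon disagree completely (so exact quantitative prediction fails), while their long-run time averages agree (a qualitative feature that survives the measurement error). A minimal sketch:

```python
def logistic(x, r=4.0):
    """One step of the logistic map; r = 4.0 is the classic chaotic regime."""
    return r * x * (1.0 - x)

def orbit(x0, n):
    """The first n + 1 points of the orbit starting at x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1]))
    return xs

# Quantitative prediction: orbits from nearly identical initial conditions
# become macroscopically different within a few dozen steps.
a = orbit(0.2, 60)
b = orbit(0.2 + 1e-9, 60)
divergence = max(abs(x - y) for x, y in zip(a, b))

# Qualitative prediction: long-run time averages are insensitive to the
# same perturbation (for r = 4 the invariant density has mean 1/2).
mean_a = sum(orbit(0.2, 10000)) / 10001
mean_b = sum(orbit(0.2 + 1e-9, 10000)) / 10001
```

The same measurement error that ruins the pointwise forecast leaves the statistical (qualitative) forecast essentially untouched, which is why a notion of prediction difficulty should be parameterized by what you are trying to predict.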

In other words, economists don't assume that there are no $20 bills on the ground because of the EMH; they argue for the EMH because of all the time people have spent looking for $20 bills and not reliably finding them. It's hardly a fallacy to use real-world data to choose between theories.

Of course, since the EMH became the hypothesis to beat, there's been a bunch of work finding anomalies and trying to disprove it. But any replacement theory for the EMH will also have to explain why stock prices follow something so close to a random walk, and why people so seldom pick up $20 bills.
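The random-walk point can be phrased operationally: if log prices follow a random walk, successive returns carry essentially no linear signal, so there is no cheap $20 bill to be picked up from past prices alone. A toy simulation on synthetic data (not a claim about real markets, and the Gaussian i.i.d. returns are an idealization):

```python
import random

def random_walk_prices(n, seed=1):
    """Toy log-price series: i.i.d. Gaussian 'returns', so the best
    forecast of tomorrow's log price is today's log price."""
    rng = random.Random(seed)
    log_price, prices = 0.0, []
    for _ in range(n):
        log_price += rng.gauss(0.0, 0.01)
        prices.append(log_price)
    return prices

def lag1_autocorr(xs):
    """Sample lag-1 autocorrelation: near zero means no linear
    pattern in the series for a trader to exploit."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var

prices = random_walk_prices(5000)
returns = [b - a for a, b in zip(prices, prices[1:])]
# prices themselves are highly autocorrelated, but returns are nearly not;
# a replacement theory for the EMH has to reproduce this pattern too
```

Real returns famously look close to this null model, which is exactly why anomaly-hunting is framed as looking for small, hard-to-exploit deviations from it.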

http://facultyoflanguage.blogspot.com/2013/11/computational-linguistics-too.html

It would also be very helpful for theoretical CS to join forces with those mathematical/algorithmic linguists, who often have mutual goals (speaking selfishly, as I do learnability and automata theory). One can get a good look at the state of the field in the recent issue of the flagship journal Language, where a discussion of neural nets in linguistics drew all sorts of commentary (mine included).
