(Note: I’m at Harvard, not MIT!) I found this statement very interesting:
I work in a field of experimental physics where simulation is not the primary means of reaching final conclusions. Unlike in computational biology and information theory (at least judging from the discussion in this thread), there is too much complexity in our systems to get any simulated result to better than 10-20% accuracy. More often, simulation only gets us to the right order of magnitude.
Thus, when we actually perform simulations, what we often do is check them against a simplified, analytically solvable model - one of the “paradigm cases” in the field. In atomic physics, an example would be a two-level or three-level system. (In reality, most atoms and molecules have numerous energy levels that all contribute to the evolution of the system.) Only after doing this do we gain confidence that our simulation of the full system is telling us anything useful.
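To make that check concrete, here is a minimal sketch (my own illustration, not code from the thread): it numerically integrates the Schrödinger equation for a driven two-level atom and compares the excited-state population against the analytic Rabi formula. The Rabi frequency and detuning values are arbitrary assumptions.

```python
# Minimal sketch: validate a numerical two-level-atom simulation against the
# analytic Rabi formula (the "paradigm case" check described above). hbar = 1.
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi * 1.0   # Rabi frequency (assumed value)
delta = 2 * np.pi * 0.3   # detuning (assumed value)

# Rotating-frame, rotating-wave-approximation Hamiltonian, basis (|g>, |e>).
H = 0.5 * np.array([[-delta, omega],
                    [omega,  delta]], dtype=complex)

def schrodinger(t, psi):
    # i d(psi)/dt = H psi  ->  d(psi)/dt = -i H psi
    return -1j * (H @ psi)

psi0 = np.array([1.0 + 0j, 0.0 + 0j])   # start in the ground state
t = np.linspace(0.0, 3.0, 400)
sol = solve_ivp(schrodinger, (t[0], t[-1]), psi0, t_eval=t,
                rtol=1e-10, atol=1e-12)

p_excited_numeric = np.abs(sol.y[1]) ** 2

# Analytic generalized Rabi oscillation for the same parameters.
omega_gen = np.sqrt(omega**2 + delta**2)
p_excited_analytic = (omega**2 / omega_gen**2) * np.sin(omega_gen * t / 2) ** 2

# Only once the code reproduces the paradigm case do we trust it on the full system.
print("max deviation:", np.max(np.abs(p_excited_numeric - p_excited_analytic)))
```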
This process is very important - like the control experiment you performed above using Eric’s model. The difference is that the control, in physics, is heavily intuitive (in contrast to your statement that one cannot trust intuition in computational biology), because physicists believe we have fully understood the behavior of these paradigmatic cases. I’m reminded of the joke that physicists like to model everything as a point particle plus some corrections.
But even after all of this, we often still distrust simulations. The final arbiter is always measurement and experiment. This is where an experimental physicist may differ from a theoretical or computational physicist, because we usually simulate systems that we have experimental access to.