@dga471 can you comment on this from your experience and training? You are a physicist studying at MIT. What would you add to this?
(Note: I’m at Harvard, not MIT!) I found this statement very interesting:
I work in a field within experimental physics where simulation is not the primary means we use to reach final conclusions. Unlike in computational biology and information theory (at least based on the discussion in this thread), there is too much complexity in the system to get any result with better than 10-20% accuracy. More often, simulation only gets us to the right order of magnitude.
Thus, when we actually perform simulations, what we often do is check them against a simplified, analytically solvable model - one of the “paradigm cases” in the field. For atomic physics, an example would be a two-level or three-level system. (In reality, most atoms and molecules have numerous energy levels that all contribute to the evolution of the system.) Only after doing this do we gain confidence that our simulation of the full system is telling us anything useful.
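To make that concrete, here is a minimal sketch (in Python with NumPy/SciPy; the resonant two-level Rabi problem and all parameter values are purely illustrative, not taken from any particular experiment) of what checking a numerical simulation against an analytically solvable paradigm case can look like:

```python
# Minimal sketch: validate a numerical integrator against an analytically
# solvable paradigm case -- resonant Rabi oscillations of a two-level atom
# (hbar = 1, rotating frame, on resonance). Parameter values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

omega_rabi = 2 * np.pi * 1.0  # Rabi frequency, illustrative value

# Rotating-frame Hamiltonian on resonance: H = (Omega/2) * sigma_x
H = 0.5 * omega_rabi * np.array([[0, 1], [1, 0]], dtype=complex)

def schrodinger(t, psi):
    # i d(psi)/dt = H psi  =>  d(psi)/dt = -i H psi
    return -1j * (H @ psi)

psi0 = np.array([1, 0], dtype=complex)  # start in the ground state
t_eval = np.linspace(0.0, 3.0, 200)
sol = solve_ivp(schrodinger, (0.0, 3.0), psi0, t_eval=t_eval,
                rtol=1e-9, atol=1e-9)

p_excited_numeric = np.abs(sol.y[1]) ** 2
p_excited_analytic = np.sin(0.5 * omega_rabi * t_eval) ** 2  # textbook Rabi formula

print("max |numeric - analytic| =",
      np.max(np.abs(p_excited_numeric - p_excited_analytic)))
```

If the numerical and analytic curves disagree, the problem is in the integrator or the implementation, not the physics. Only once this check passes do we start to trust the same solver on the full multi-level system.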
This process is very important - like the control experiment you performed above using Eric’s model. The difference is that the control, in physics, is heavily intuitive (in contrast to your statement that one cannot trust intuition in computational biology), because physicists believe that we have fully understood the behavior of these paradigmatic cases. I am reminded of the joke that physicists like to model everything as a point particle plus some corrections.
But even after all of this, we often still distrust simulations. The final arbiter is always measurement and experiment. This is where an experimental physicist may differ from a theoretical or computational physicist, because we usually simulate systems that we have experimental access to.
Same here. We always keep in mind there is a gap between the models and the real world. We can never model the whole of anything complex, and so there is always a possibility of dragons in the world, and gremlins in the machines.
We didn’t even get to that point in this conversation. There was a math error, so there was not even a point in dealing with those challenges.
That is exactly what I am talking about.
So @dga471 this intuition point is important to clarify. We have to make a distinction between refined and unrefined intuition. Even in our field, the intuition of highly experienced scientists is critically important. When I say it is “not intuitive”, I mean that our starting intuitions are off. They have to be refined by an immense amount of experience and apprenticeship.
Is that how it works for you in physics too? It is hard to imagine calling quantum physics “intuitive” before a long refinement process, for example…
I mostly agree with you, but I have two points in response to this:
First, I think quantum mechanics is so counterintuitive that most people have no preconceived notions about it - they know that they have to leave their unrefined intuitions at the door when first learning it. In addition, most of the situations described are far removed from ordinary life. How many people have even thought of conducting a Stern-Gerlach experiment, for example?
Second, physics has the beauty of the correspondence principle. Quantum mechanics must reproduce classical mechanics in the limit of large objects. Similarly, Einstein’s special relativity must reproduce Newton’s laws in the slow-speed limit. This means that even quantum mechanics can and must be checked against a paradigmatic case closer to our daily experience! This is why the physics curriculum is very structured and hierarchical - start with classical mechanics, then electromagnetism, then quantum mechanics, and so on. In other words, there is a very clear path for refining our intuition! It seems from your discussion with Eric that there is no parallel to this in other fields like computational biology (i.e. a hierarchical “canon” of accepted results), such that you are arguing even over basic terms such as “function.”
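As one standard textbook illustration of such a limit check (not tied to any particular experiment), the relativistic kinetic energy reduces to the Newtonian expression when $v \ll c$:

$$
E_k = (\gamma - 1)mc^2 = mc^2\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right) \approx \frac{1}{2}mv^2 + \frac{3}{8}\frac{mv^4}{c^2} + \cdots
$$

The leading term is exactly Newton’s $\tfrac{1}{2}mv^2$, and the corrections only matter as $v$ approaches $c$.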