An AI system uses computation.
The starting assumption is that there is already information, and it computes (or uses logic) with that already existing information.
We should instead look at a cognitive system as primarily a measuring system. I am thinking mainly of analog measurement. A ruler is an analog measuring device, but the readings we take from it are digital values.
An analog measuring system does not start with information. It starts with the world, and produces information. Measurement is a process for creating information from the world.
Philosophers make a similar mistake. They think that when a photon strikes the retina, that’s information and we can compute with it. However, William James thought that what you would get is “a blooming, buzzing confusion.” I agree with James. To get useful information, you have to use a somewhat systematic procedure to extract the information that you want.
Here’s an illustration that I sometimes use. To measure temperature, we can use the expansion of a column of mercury in a tube. Then we measure the height of that column. From the height and the shape of the column, we can compute how much the volume of mercury expanded. We then use the laws of thermal expansion to compute the temperature.
Or we can skip the entire computation, and just directly calibrate the mercury column in terms of temperature. No computation is actually needed, and we don’t need prior knowledge of the details of the law of thermal expansion. And that’s what I think the brain is doing – it is directly calibrating its measuring systems so that very little computation is needed.
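The contrast between the two routes can be sketched in code. This is only an illustration of the thermometer example above, not anything from the original text: every constant and calibration point below is made up, chosen so the two routes agree.

```python
# Illustrative sketch: two ways to turn a mercury-column height into a
# temperature reading. All numeric values here are assumed for illustration.

BETA = 1.8e-4   # volumetric expansion coefficient of mercury, per deg C (approx.)
V0 = 0.5        # mercury volume at 0 deg C, in cm^3 (assumed)
AREA = 0.001    # cross-sectional area of the tube, in cm^2 (assumed)
H0 = 10.0       # column height at 0 deg C, in cm (assumed)

def temperature_by_computation(height_cm):
    """Route 1: infer the volume change from the height and tube shape,
    then invert the law of thermal expansion (dV = BETA * V0 * dT)."""
    delta_v = (height_cm - H0) * AREA
    return delta_v / (BETA * V0)

# Route 2: skip the physics entirely. Calibrate the column against two
# known reference points (say, ice water and boiling water) and read the
# temperature off by linear interpolation between the calibration marks.
CAL_POINTS = [(10.0, 0.0), (19.0, 100.0)]  # (height_cm, temp_C), assumed

def temperature_by_calibration(height_cm):
    """No knowledge of thermal expansion needed, only calibration marks."""
    (h_lo, t_lo), (h_hi, t_hi) = CAL_POINTS
    return t_lo + (height_cm - h_lo) * (t_hi - t_lo) / (h_hi - h_lo)
```

Both functions return the same answer, but the second one never consults the law of thermal expansion; the physics is absorbed into the calibration marks, which is the point of the analogy.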
My tentative advice to neuroscientists: Stop looking at the brain as a computer, and start looking at it as a complex measuring system. When you see a neuron fire as it reaches a threshold, think of that as indicating a calibration mark in the measuring system. When you see “Hebbian learning”, think of that as the measuring system readjusting and recalibrating itself.