Along the same lines, unless you are referring to conditional mutual information, can we call it conditional entropy? That seems to be the accepted term and then we don’t have to worry about whether someone is an ID supporter.
If I didn’t know you and you came along and started using terms like joint information, conditional information, and relative information I would think you were an ID proponent.
To me H indicates entropy. I don’t think I can go wrong saying entropy, or at least I am less likely to go wrong. I’m also more likely to be seen as knowing what I am talking about if I use the proper terms, even if I don’t know what I am talking about.
Entropy and information are essentially equivalent terms from two different academic discourses. Is it wrong to use one or the other? It depends on what language you are speaking. If I’m speaking Spanish, I should say “perro.” If I am speaking English, I should say “dog.” If I’m translating between the two, I will need a very solid grasp of the fine, nuanced commonalities and distinctions between “perro” and “dog.”
I’m recommending that you use entropy because that’s what the texts use. I was struck by the absence of the word information in the Cover and Thomas text. It was all about entropy (not meant to be taken too literally please).
A simple Google search on relative information, joint information, and conditional information is revealing.
Whatever language people are speaking they seem to prefer the term entropy, and it has nothing to do with thermodynamics.
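In case it helps to see how the standard terms line up, here is a rough sketch with made-up numbers (my own toy example, not drawn from Cover and Thomas): joint entropy H(X,Y), conditional entropy H(X|Y) = H(X,Y) − H(Y), and relative entropy, i.e. the KL divergence D(p||q).

```python
import math

def entropy(p):
    """Shannon entropy H = -sum(p * log2(p)), in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Relative entropy D(p || q), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Made-up joint distribution over two binary variables X and Y
# (rows index x, columns index y).
joint = [[0.4, 0.1],
         [0.2, 0.3]]

p_x = [sum(row) for row in joint]                  # marginal distribution of X
p_y = [sum(col) for col in zip(*joint)]            # marginal distribution of Y

H_X = entropy(p_x)
H_Y = entropy(p_y)
H_XY = entropy([p for row in joint for p in row])  # joint entropy H(X,Y)
H_X_given_Y = H_XY - H_Y                           # conditional entropy H(X|Y)

print(f"H(X)   = {H_X:.3f} bits")
print(f"H(Y)   = {H_Y:.3f} bits")
print(f"H(X,Y) = {H_XY:.3f} bits")
print(f"H(X|Y) = {H_X_given_Y:.3f} bits")
print(f"D(p_x || p_y) = {kl_divergence(p_x, p_y):.3f} bits")
```

Every quantity there is an entropy, or a difference of entropies, which is part of why the H notation and the word entropy dominate the textbooks.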
Bill,
As Joshua noted, FI is not a concept in information theory, but there is the idea of Meaningful Information. This has to do with the Algorithmic Minimal Sufficient Statistic (AMSS), which attempts to separate the meaningful information from the meaningless random noise.
Minimal sufficient statistics are part of classical statistics too. As an example, the sample mean is a minimal sufficient statistic for finding the “middle” of many probability distributions. There is an infinite number of possible functions that would do the same, but the sample mean is the simplest one that does the job (a quick sketch below).
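To show what sufficiency means in practice, here is a toy normal model with known variance (my own made-up example): the part of the likelihood that depends on the unknown mean is a function of the sample mean alone, so once you have x-bar you can throw the raw data away without losing anything about the “middle.”

```python
import random
import statistics

random.seed(0)

# Toy data: 50 draws from a Normal(mu_true, sigma = 1)
mu_true, n = 3.0, 50
data = [random.gauss(mu_true, 1.0) for _ in range(n)]
xbar = statistics.mean(data)

def loglik_full(mu, xs):
    """Log-likelihood of Normal(mu, 1), using every data point."""
    return -0.5 * sum((x - mu) ** 2 for x in xs)

def loglik_via_mean(mu, xbar, n, xs):
    """Same quantity, re-expressed with the identity
    sum((x - mu)^2) = sum((x - xbar)^2) + n * (xbar - mu)^2,
    so the mu-dependent part involves only the sample mean."""
    ss = sum((x - xbar) ** 2 for x in xs)  # constant in mu
    return -0.5 * (ss + n * (xbar - mu) ** 2)

for mu in (2.5, 3.0, 3.5):
    print(f"mu = {mu}: full = {loglik_full(mu, data):.4f}, "
          f"via mean = {loglik_via_mean(mu, xbar, n, data):.4f}")
```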
Lots of hand-waving there, but I think I conveyed the basic interpretation correctly. The problem is that “meaningful” doesn’t necessarily imply “functional,” and I think you need a mathematical definition of what “function” means. If that is possible, then there is a way to identify an AMSS to measure it.
Here is a paper on the topic, but it’s not easy to read. I can’t follow much of it, but I recognize what it is saying based on classical statistics.
There is a deep implication here: if FI is a thing, then we are talking about measuring information from some distribution of Functional “Design” Information separately from non-functional distributions. There are methods for identifying source random distributions in classical statistics, but no analog to “Detecting Design” that I can think of, except for conducting algorithmic statistical analyses rather than classical statistical analyses. There will be no magic found there, I think, only another way to perform analyses we already know how to do.
For precise definitions, notation, and results see the text [14]. Informally, the Kolmogorov complexity, or algorithmic entropy, K(x) of a string x is the length (number of bits) of a shortest binary program (string) to compute x on a fixed reference universal computer (such as a particular universal Turing machine). Intuitively, K(x) represents the minimal amount of information required to generate x by any effective process.
The first problem I see is that Kolmogorov complexity, like other information theories, is based on analyzing binary data.
I don’t know how to convert it to biology based on the above description. In biology we are looking at the number of sequences that can perform the defined function.
Here you are looking at the minimum bit length that can compute the defined function X.
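One crude way to bridge part of that gap, offered only as a sketch and not as anything from the quoted paper: a DNA sequence can always be recoded as binary (two bits per base), and the length of a compressed encoding gives a computable upper bound that is often used as a stand-in for Kolmogorov complexity. With made-up sequences, a repetitive string compresses much further than a random one.

```python
import random
import zlib

random.seed(42)

def compressed_length_bits(seq):
    """Compressed size in bits: a rough, computable upper bound used here
    as a stand-in for the (uncomputable) Kolmogorov complexity."""
    # ASCII encoding is a crude stand-in for a binary encoding;
    # a tighter packing would use 2 bits per base.
    return 8 * len(zlib.compress(seq.encode("ascii"), 9))

repetitive = "ACGT" * 250                                         # 1000 bases, highly regular
random_seq = "".join(random.choice("ACGT") for _ in range(1000))  # 1000 bases, no pattern

print("repetitive sequence:", compressed_length_bits(repetitive), "bits")
print("random sequence:    ", compressed_length_bits(random_seq), "bits")
```

The function question is a separate and harder issue; this only addresses the encoding step.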
@Mung this has been a good exercise for me too. There is a lot of terminology floating around that is very easily conflated with the wrong thing. I’m harboring a deep suspicion that the whole darned argument over ID might just be a misunderstanding.
It has been good. I’ve been changing my mind about a few things. But one of them is not whether entropy and information are synonyms. More tomorrow I hope.
Today I read some things that E.T. Jaynes had to say and also did some reading in Ben-Naim’s Information Theory.
Do you not agree that they are mathematically the same formula?
I’m not saying they are, generally speaking, synonyms. Rather I am saying that in information theory they are exchangeable terms that mean the same thing.
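To be concrete about the formula I have in mind, I take this to be the standard textbook form (it is the one in Cover and Thomas):

```latex
H(X) = -\sum_{x} p(x)\,\log_2 p(x)
```

Whether you read H(X) as the entropy of X or as the average information (expected surprisal) of X, the right-hand side is the same expression; the only choices being made are the name and the base of the logarithm.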