Dembski: Building a Better Definition of Intelligent Design

Yes, the probability of an event, and so its SC, is always relative to a chance hypothesis. Dembski is perfectly aware of this. Here is what he wrote in the piece I link below:
With Ewert’s lead, specified complexity, as an information measure, became the difference between Shannon information and Kolmogorov information. In symbols, the specified complexity SC for an event E was thus defined as SC(E) = I(E) − K(E). The term I(E) in this equation is just, as we saw in my last article, Shannon information, namely, I(E) = −log(P(E)), where P(E) is the probability of E with respect to some underlying relevant chance hypothesis.
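
To make the definition concrete, here is a minimal Python sketch of that formula (the function names and the example values are mine, purely for illustration, not Dembski's):

```python
import math

def shannon_information(p: float) -> float:
    """I(E) = -log2(P(E)), in bits, where p is the probability of E
    under some assumed chance hypothesis H."""
    return -math.log2(p)

def specified_complexity(p: float, kolmogorov_bits: float) -> float:
    """SC(E) = I(E) - K(E): Shannon information minus an (estimated)
    description length for E, both measured in bits."""
    return shannon_information(p) - kolmogorov_bits

# Illustrative numbers only: an event of probability 2^-100 whose
# shortest available description is estimated at 60 bits.
print(specified_complexity(2**-100, 60))  # -> 40.0 bits
```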

And here is another passage from the second edition of The Design Inference (emphasis mine):
Note that it does not, strictly speaking, make sense to talk about the probability of E as such. Rather, we must always speak of the probability under some assumed hypothesis that might account for the event. Thus, when we see a probability like P(E), an underlying chance hypothesis has simply been left unexpressed.

Well, you could proceed like this:
Create a library of random proteins and find out what proportion bind to ATP.
It happens that Keefe and Szostak did just that. They created a library of 6 x 10^12 proteins, each containing 80 contiguous random amino acids, and then estimated that approximately 1 in 10^11 random proteins bind ATP.
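
Before getting to Dembski's calculation, note how that estimated proportion translates into Shannon information. A quick Python check, using only the figures quoted above:

```python
import math

# Keefe and Szostak's estimate, as quoted above: roughly 1 in 10^11
# random 80-aa proteins binds ATP.
p_binds_atp = 1 / 10**11

# Shannon information of the event "binds ATP" under the chance
# hypothesis of a random 80-aa sequence: I = -log2(P) bits.
print(round(-math.log2(p_binds_atp), 1))  # -> 36.5 bits
```
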
From these numbers, Dembski, in the second edition of The Design Inference, calculated the SC of a protein (within the class of 80-aa-long proteins) that binds ATP. Here is the passage:
For the description that specifies this function, we will take the phrase "binds ATP", which is two words long and thus we estimate would take around forty bits to describe (if we are generous in assigning 20 bits per word). Using the formula for specified complexity (see Section 6.4), we then calculate:
SC(X|H) = I(X|H) - D(X)
        ≈ -log2(1/10^11) - 40
        ≈ 36 - 40
        ≈ -4 bits
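
The arithmetic checks out. A short Python verification (the 40-bit figure is Dembski's generous two-words-at-20-bits estimate):

```python
import math

p = 1 / 10**11         # estimated P(X|H) of binding ATP
description_bits = 40  # "binds ATP": 2 words x 20 bits/word

shannon_bits = -math.log2(p)          # ~36.5 bits
sc = shannon_bits - description_bits  # SC(X|H) = I(X|H) - D(X)
print(round(shannon_bits, 1), round(sc, 1))  # -> 36.5 -3.5 (~ -4 bits)
```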