His CV lists 33 peer-reviewed publications (12 first-author) between 2005 and 2019, including papers in journals such as the Monthly Notices of the Royal Astronomical Society.
From the paper that I linked to you:
… what if the range is infinite? Assigning a constant, normalized probability measure over an infinite range is impossible: there is no probability distribution p such that a) p(x) is constant, and b) ∫₀^∞ p(x) dx = 1. So, how should we test a theory in such circumstances? … A physical theory, to be testable, must be sufficiently well-defined as to allow probabilities of data (likelihoods) to be calculated, at least in principle. Otherwise, the theory cannot tell us what data we should expect to observe, and so cannot connect with the physical universe. If the theory contains free parameters, then since the prior probability distribution of the free parameter is a necessary ingredient in calculating the likelihood of the data, the theory must justify a prior. In summary, a theory whose likelihoods are rendered undefined by untamed infinities simply fails to be testable. In essence, it fails to be a physical theory at all. … How, then, do current theories of physics avoid problematic infinities? … Is there anything in the relevant physical theories that limits their range of possible values?
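The impossibility claim in the quote can be made explicit in one line. As a sketch (this derivation is mine, not quoted from the paper):

```latex
% A constant density p(x) = c on [0, \infty) can never normalize:
\int_0^\infty c \, dx =
\begin{cases}
  \infty & \text{if } c > 0, \\
  0      & \text{if } c = 0,
\end{cases}
```

so no choice of c yields a total probability of 1, and conditions (a) and (b) cannot hold together.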
Yes — the Planck scale. … The Planck mass represents an upper boundary to any single-particle mass scale in our current theories. A lower boundary is provided by zero, since quantum field theory breaks down for negative masses; even if it didn’t, a lower bound would be given by −m_Planck. Thus, the theory itself constricts the value of v to the range [0, m_Planck), and ρ_Λ to the range [0, m_Planck⁴) (Wilson 1979; Weinberg 1989; Dine 2015). Outside of these ranges, our current theories cannot be trusted.
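To make the Planck-scale bound concrete, here is a quick numerical sketch of the upper boundary the quote refers to. The constants are standard CODATA values; the calculation itself is textbook, not taken from the paper:

```python
import math

# CODATA values in SI units
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c    = 2.99792458e8      # speed of light, m/s
G    = 6.67430e-11       # Newton's constant, m^3 kg^-1 s^-2

# Planck mass: m_P = sqrt(hbar * c / G)
m_planck_kg = math.sqrt(hbar * c / G)

# Express as an energy via E = m c^2; 1 GeV = 1.602176634e-10 J
GEV_IN_JOULES = 1.602176634e-10
m_planck_gev = m_planck_kg * c**2 / GEV_IN_JOULES

print(f"m_Planck ≈ {m_planck_kg:.3e} kg ≈ {m_planck_gev:.3e} GeV")
```

This gives roughly 2.18 × 10⁻⁸ kg, or about 1.22 × 10¹⁹ GeV, which is the finite upper edge of the range [0, m_Planck) that the quoted passage assigns to v.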
In short, normalization problems arise for a physical theory whose free parameters both vary over an infinite range and are uniformly distributed. The standard models of particle physics and cosmology avoid these problems as follows. Dimensional parameters do not vary over an infinite range; they are bounded by the Planck scale. Dimensionless parameters need not vary over an infinite range either, and common practice in the physical sciences treats parameters of order unity as more probable, so the principle of indifference does not force a uniform distribution on us.
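Once the range is bounded, the principle of indifference yields a perfectly well-defined prior, and probabilities of parameter values follow immediately. A minimal sketch (the function name and the numbers below are mine and purely illustrative, not the paper's):

```python
def uniform_prior_prob(a, b, lower, upper):
    """P(a <= x < b) under a uniform prior on the finite range [lower, upper).

    This is only well-defined because upper - lower is finite; on an
    infinite range no normalized constant density exists, which is
    exactly the normalization problem discussed above.
    """
    if not (lower <= a <= b <= upper):
        raise ValueError("interval must lie inside the prior's range")
    return (b - a) / (upper - lower)

# Illustrative only: the chance that a parameter bounded by the Planck
# scale (set to 1 in these units) lands in some narrow window.
p = uniform_prior_prob(0.0, 1e-17, 0.0, 1.0)
print(p)
```

The point of the sketch is structural: with a finite range, the denominator is finite, the prior normalizes, and likelihoods of data can be computed in principle.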
If you want more, you should read the paper. The section ‘Response to critics’ deals very much with the objections you’ve raised about setting probability distributions.
The author explicitly states that these “concerns are perfectly legitimate, but are not specifically a problem with fine-tuning”, but a problem that concerns how any “physical theory, scuppered by infinities, fails to produce likelihoods of data — any data.”