Aurelien and I finally had a chance to look over the paper by Müller and we found it interesting, though disappointing in that it is an entirely theoretical paper, with no data presented to evaluate any of the expressions. He does state: "only the outcome of several studies underway will tell whether the suggested expressions are indeed valid".
However, a Google Scholar search turns up no follow-up papers from him, though we did find this paper by two other authors (An experimental test of Müller statistics for counting systems with a non-extending dead time), which we have not yet had a chance to look over:

https://doi.org/10.1016/0029-554X(78)90544-X

So that might be worth a look.
To recap, Brian had written: "I've attached a paper by Jörg Müller, who wrote extensively on the subject of dead time correction... The Willis correction function is consistent with Müller's equation 5 (non-extendible model) truncated after the second order term..."
Unfortunately, equation 5 (truncated or not) is not related to the Willis expression, nor to our "extended Willis" multiple term expression. In our case the coefficients are 1/2, 1/3, 1/4... while in the Müller paper they are 1/2, 2/3, 9/8... In fact, Aurelien thinks the last term 9/8 is a typo and should actually be 3/8 to be mathematically consistent.
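As an aside, for anyone who wants to check the math: 1/2, 1/3, 1/4... are exactly the higher-order Taylor coefficients of -ln(1 - x), which is what allows a series with those coefficients to be summed into a closed-form logarithmic expression. Here is a minimal sympy check (where x is just my shorthand for the measured count rate times the dead time constant):

```python
import sympy as sp

x = sp.symbols('x')  # shorthand for (measured count rate) * (dead time constant)

# The Taylor series of -ln(1 - x) has coefficients 1, 1/2, 1/3, 1/4, ...
print(sp.series(-sp.log(1 - x), x, 0, 6))
# -> x + x**2/2 + x**3/3 + x**4/4 + x**5/5 + O(x**6)
```

Note that Müller's 1/2, 2/3, 9/8 (or 3/8) sequence does not match any truncation of this series, so his equation 5 really is a different expression.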
Brian goes on to claim: "The Willis correction function is consistent with Müller's equation 5 (non-extendible model) truncated after the second order term but is not sufficiently accurate."
When we first read this we thought Brian was referring to something stated in the Müller paper, but since it's an entirely theoretical paper, we could not understand why Brian would say that. And then we realized that he is just repeating his "insufficiently accurate" claim from his previous posts. And to be honest, though I (and the co-authors) have tried, we've never been able to make sense of his claim.
But then, a few hours after one of our recent Zoom meetings with the manuscript co-authors, a light bulb finally went on in my head: I think I now see where Brian went wrong in his data analysis. The explanation (I hope) will provide some useful information to all, and it seems quite fitting that this comes up in the Generalized Dead Time topic he created, because the mistake he made is related to how we interpret the various effects that all fall under the heading of the (generalized) "dead time" correction.
So let's start with Brian's "delta" plot that he keeps pointing to and expand it a little into the main area of interest:
We see the red circles, which are the traditional linear expression, and the green circles, which are the logarithmic expression, both using a dead time constant of 1.07 usec (I am assuming, since he doesn't specify that for the traditional expression). And we see clearly that the logarithmic expression provides identical results at low count rates (as expected) and more constant k-ratios at higher count rates than the traditional expression (as he has already agreed). So far so good.
But then he does something very strange: he proceeds to plot (green line) the logarithmic expression using a dead time constant of 1.19 usec! Why this value? And why did he not also plot the traditional expression using the same 1.19 usec constant? Because in both cases the result will be a severe over-correction of the data! Why would someone do that?
I'm just guessing here, but I think he thought: OK, at the really high count rates even the logarithmic expression isn't working perfectly, so I'll just start increasing the dead time constant to force those really high count rate values lower.
But, as we have stated numerous times, once you have adjusted your dead time constant using the traditional linear expression (or obtained it from the JEOL engineer), one should simply continue to use that value. In the case of very high count rates, where you might note a very small over-correction of the data using the log expression, one might slightly decrease the dead time constant by 0.02 or 0.03 usec. But it should never be increased, as that produces an over-correction of the data at lower count rates.
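To see concretely why an increased constant over-corrects everywhere, here is a minimal sketch (Python) comparing Brian's two dead time constants. I am writing the logarithmic expression in the closed form N = N'/(1 + ln(1 - N'*tau)), i.e., the infinite-sum version of the series above:

```python
import math

def n_log(n_meas, tau):
    # Logarithmic dead time correction, written as the closed form of the
    # series above: N = N' / (1 + ln(1 - N'*tau))
    return n_meas / (1.0 + math.log(1.0 - n_meas * tau))

tau_cal = 1.07e-6     # calibrated dead time constant (s)
tau_forced = 1.19e-6  # arbitrarily increased value (s)

for kcps in (10, 50, 100, 200, 300):
    n = kcps * 1e3  # measured count rate (cps)
    ratio = n_log(n, tau_forced) / n_log(n, tau_cal)
    print(f"{kcps:3d} kcps: forced/calibrated corrected intensity = {ratio:.4f}")
```

The forced 1.19 usec constant inflates the corrected intensities at every count rate (by about 0.1% at 10K cps, growing to roughly 10% at 300K cps), not just at the very highest rates, which is exactly why the data end up over-corrected at the lower count rates.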
Let's now discuss the underlying mechanisms. As both BJ and SG have noted, there are probably several underlying mechanisms that are described by the general term "dead time". We maintain that some of these effects (above 50K cps) are due to multiple photon coincidence, and that above 300K or 400K cps other hardware/electronic effects become dominant, as BJ and SG have been discussing. Why do I say this? Because at these extremely high count rates, adding further Poisson coincidence terms to the correction just doesn't make any difference. But again, for count rates under say 200K to 300K, or even 400K cps, the new expressions help enormously.
Here is a plot showing the traditional, Willis and log expressions for Anette's Ti PETL data (originally 1.32 usec from the JEOL engineer, but then adjusted down slightly to 1.29 usec):
You will note that the k-ratios are increasingly constant as we go from the traditional expression (which only deals with single photon coincidence), to the two-term Willis expression (which deals with two photons coincident with a single incident photon), to the log expression (which deals with any number of photons coincident with a single photon). However, and this is a key point, you will note that at some sufficiently high count rate even the logarithmic expression fails to correct properly for these dead time effects. If we then attempt to force the correction at these extremely high count rates by arbitrarily increasing the dead time constant, we are simply attempting to compensate for other (non-Poisson) dead time effects, which produces an over-correction just as Brian saw:
Note the over-correction after the dead time constant was arbitrarily increased from 1.29 usec to 1.34 usec (red symbols).
This should not be surprising. All three of these expressions are only attempts to model the dead time as mathematical (Poisson) probabilities. The traditional linear expression was a good approximation back when calculations were done on slide rules. Now that we have computers, I say let's also account for the additional Poisson probabilities from multi-photon coincidence.
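For anyone who wants to experiment with all three, here is a small sketch of the corrections as I have been describing them (the function names are mine, the Willis form is the two-term version, and the log form is again the closed-form sum of the series; Anette's 1.29 usec constant is used for illustration):

```python
import math

TAU = 1.29e-6  # Anette's Ti PETL dead time constant (s), per the plot above

def n_traditional(n_meas, tau=TAU):
    # Traditional linear form (single photon coincidence only):
    # N = N' / (1 - N'*tau)
    return n_meas / (1.0 - n_meas * tau)

def n_willis(n_meas, tau=TAU):
    # Two-term Willis form: N = N' / (1 - (N'*tau + (N'*tau)**2 / 2))
    x = n_meas * tau
    return n_meas / (1.0 - (x + x * x / 2.0))

def n_log(n_meas, tau=TAU):
    # Logarithmic form: summing (N'*tau)**n / n over all n gives -ln(1 - N'*tau),
    # so N = N' / (1 + ln(1 - N'*tau))
    return n_meas / (1.0 + math.log(1.0 - n_meas * tau))

for kcps in (50, 100, 200, 400):
    n = kcps * 1e3  # measured count rate (cps)
    print(f"{kcps:3d} kcps measured -> traditional {n_traditional(n) / 1e3:7.1f}, "
          f"Willis {n_willis(n) / 1e3:7.1f}, log {n_log(n) / 1e3:7.1f} kcps true")
```

Note, by the way, that with this closed form the log denominator reaches zero near 490K cps for a 1.29 usec dead time, so the expression has a built-in mathematical limit to its domain, quite apart from the non-Poisson hardware effects discussed below.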
I know nothing about WDS pulse processing hardware/electronics, but let me now speculate by showing Anette's data plot with some notations:
I am proposing that while these various non-linear dead time expressions have allowed us to perform quantitative analyses at count rates roughly 10x greater than were previously possible, at even higher count rates (>400K cps) we start to run into other non-Poisson effects (from limitations of the hardware/electronics) that may require additional terms in our dead time correction, as proposed by Müller and others. I suspect that these additional hardware-related terms may require careful, hardware-dependent calibration, or even, as SG has proposed, new detectors and/or pulse processing electronics.
I welcome any comments and discussion.