I get rather weary of your armchair criticisms. You pontificate on various subjects, yet you provide no data or models. If you have data or a model that sheds light on the correction for dead time or pulse pileup, then please present it. A meme is not an adequate substitute.
I had already shared my own Python code for Monte Carlo simulations, which modeled the pile-up events (counts lost in addition to those lost to the dead time) and reproduced quite closely the count rates we observed going to very high currents (>800 nA) on our SXFiveFE.
Here:
https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892
The Monte Carlo simulation is much simplified (only 1 µs resolution), and I plan to remake it in Julia with better time resolution and a better pulse model, so that it can also account for pulses being PHA-shifted out of the counting window. So yes, I don't currently have any ready equation to show off, and I could not fit the MC dataset back then, because I was stuck in the same rabbit hole you are currently in (trying to tackle the system as a single entity instead of subdividing it into simpler independent units or abstraction levels; I am still learning the engineer's way of "divide and conquer"), but I am working on it.
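For reference, below is a minimal sketch of what I mean by such a Monte Carlo. This is not my original code, just a toy rewrite: the 1 µs pulse width, the choice that a piled-up photon extends the previous pulse, and the 3 µs blanking are my simplifying assumptions here.

[code]
import numpy as np

rng = np.random.default_rng(1)

def simulate_measured_rate(true_rate, t_total=1.0,
                           pulse_width=1.0e-6,  # shaped pulse width (simple 1 us model)
                           blanking=3.0e-6):    # enforced dead time (e.g. Cameca integer 3)
    # Poisson photon arrivals over t_total seconds:
    n_photons = rng.poisson(true_rate * t_total)
    arrivals = np.sort(rng.uniform(0.0, t_total, n_photons))
    counted = 0
    pulse_end = -1.0  # end time of the current shaped pulse
    dead_end = -1.0   # end time of the enforced blanking window
    for t in arrivals:
        if t < pulse_end:
            # pile-up: the photon merges with (and extends) the previous pulse
            pulse_end = t + pulse_width
            continue
        pulse_end = t + pulse_width
        if t < dead_end:
            continue  # lost to the enforced dead time, not to pile-up
        counted += 1
        dead_end = t + blanking
    return counted / t_total

for r in (1e4, 3e4, 1e5, 3e5):
    print(f"true {r:8.0f} cps -> measured {simulate_measured_rate(r):8.0f} cps")
[/code]

Note that the two loss mechanisms are kept as separate, independent stages in the loop; that separation is the whole point.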
On the instrument that I operate (JEOL JXA-8230), the required correction to the measured count rate is in the vicinity of 5% at 30 kcps. Under these conditions, how significant is pulse pileup likely to be? I have never advocated for application of the correction equation of Ruark and Brammer (1937) outside the region in which calculated ratios plot as an ostensibly linear function of measured count rate.
That depends entirely on the pulse shaping time and on how the pulse is read (the sensitivity of the trigger sensing the rising-edge transition into the pulse top). In your case pile-ups could be <0.1% or the whole 5% of it. On the Cameca SX line the pulses are shaped inside the AMPTEK A203 (charge-sensitive preamplifier and shaping amplifier in a single package, so we have publicly available documentation for it); the shaping time is 500 ns (the full pulse width is 1 µs). According to my initial MC results, on a Cameca SX at 10 kcps that makes 0.5% of counted pulses, so at 30 kcps on a Cameca instrument that would be around 2% of pulses (I would have to look back at my MC simulation to be more precise). Were the shaping time larger, that number could be twice as big.

Unlike on the JEOL probe, on a Cameca SX you can set the dead time to an arbitrary (integer) value, and that has absolutely no impact on the percentage of pile-ups, because the shaping time is fixed (different from EDS, where the output of the charge-sensitive preamplifier can be piped to different pulse shaping amplifiers); pile-up occurrence depends directly only on the pulse density (the count rate). That is my main sticking point in all my critiques of you and probeman.

Also, I don't know how it is on JEOL probes, but on Cameca probes there are test pins left on the WDS signal distribution board for the signals coming out of the shaping amplifier, and these can be monitored with an oscilloscope; thus you can physically catch the pile-up events, not just talk theory, and I have spent some time at the oscilloscope at different count rates. It is a bit overwhelming to save and share those oscilloscope figures (my gear is feature-poor rather than high-end), but if there is demand I will prepare something to show (I plan to anyway, in another thread about the signal pipeline).
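For a quick sanity check without the full MC: assuming Poisson arrivals, the fraction of photons landing within one shaping time of the previous pulse is roughly 1 - exp(-rate * t_shaping). This crude estimate (my simplification, not a fitted result) lands in the same ballpark as my MC numbers above:

[code]
import math

t_shaping = 0.5e-6           # A203 shaping time, s
for rate in (1e4, 3e4):      # 10 kcps and 30 kcps
    p = 1.0 - math.exp(-rate * t_shaping)
    print(f"{rate / 1e3:4.0f} kcps -> ~{p * 100:.2f}% of pulses fall within the shaping window")
[/code]

That prints about 0.50% at 10 kcps and about 1.5% at 30 kcps.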
BTW, looking at the raw signal with the help of an oscilloscope was one of the best self-educational moments of my probe career. It instantly cleared up for me how the PHA works (and doesn't work), the role of bias and gain, and why the PHA shifts (none of those fairy tales about positive ions crowding around the anode, or anode voltage drop; there is a much simpler, signal-processing-based explanation), and it made me instantly aware that pile-ups exist and that they are a pretty big problem even at very low count rates. The only count rate at which you can be sure there was physically no pile-up is 0 cps.
And so, after seeing quintuple (x5) pile-ups (I am not joking), I wrote the Monte Carlo simulation, as it became clear to me that all hitherto proposed equations completely miss the point of pile-ups and fuse these two independent constants into a single constant (the same is said in https://doi.org/10.1016/j.net.2018.06.014). I was also initially led to the wrong belief that proportional counters can have dead time in the counter itself, which I found out is not the case; that simplifies the system (and thus the equation I am working on).
The meme was my answer to the (to me) ridiculous claim of "photon coincidence", where it should say "pulse pile-up" to make sense: their equation and method can't detect any such thing at the photon level, as a photon event is a few orders of magnitude shorter than the shaped pulses (and it is pulses which are counted, not photons directly). Yes, Brian, I probably would have done better to make a chart of pulses on a time scale illustrating the differences, but seeing how probeman just ignores all your plots (and units), I went for the meme.
I am starting to understand probeman's stubbornness about nomenclature.
From that shared publication it becomes clear that historically two independent processes, the physical dead time (a real, electronic blocking of the pipeline) and pile-up events, were very often (and wrongly) fused into a single "tau", and probeman et al. are going to keep up that same tradition.
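To illustrate what that fusion does, here is a toy calculation (my assumptions: a paralyzable pile-up stage with a 0.5 µs shaping time feeding a non-paralyzable 3 µs blanking stage, the two stages treated as independent). If you force the classical single-tau equation onto such a system, the fitted "tau" comes out larger than the real blanking time and drifts with count rate:

[code]
import numpy as np

t_shaping = 0.5e-6   # pile-up window (shaping time), s  -- assumed
tau_blank = 3.0e-6   # enforced electronic blanking, s   -- e.g. Cameca integer 3

true = np.array([1e4, 3e4, 1e5, 3e5, 1e6])                  # true pulse rates, cps
after_pileup = true * np.exp(-true * t_shaping)             # paralyzable pile-up stage
measured = after_pileup / (1.0 + after_pileup * tau_blank)  # non-paralyzable blanking stage

# Solve the classical equation N = n / (1 - n*tau) for tau at each rate:
tau_eff = 1.0 / measured - 1.0 / true
for N, t in zip(true, tau_eff):
    print(f"true {N:9.0f} cps -> effective single tau {t * 1e6:5.2f} us")
[/code]

With these numbers the "effective tau" starts around 3.5 µs and creeps upward with rate: the classically "calibrated" constant silently absorbs the shaping time and is no longer the hardware blanking time.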
There is another problem I have with this log method: it would need "calibrations" for every other hardware integer time set on Cameca SX spectrometers (the default is integer 3, but it can be set anywhere from 1 to 255).
"Whatever" - would say probeman, as he does not change hardware dead times and thus sees no problem.
But wait a minute, what about this?:
I think we should have choices in our scientific models. We have 10 different matrix corrections in Probe for EPMA, so why not 4 dead time correction models? No scientific model is perfect, but some are better and some are worse. The data shows us which ones are which. At least to those who aren't blind to progress.
Is it fair to compare 4 dead time correction models with 10 matrix corrections? With matrix corrections we can get different results from exactly the same input (some would even argue that the MACs should differ, fit particularly to one or another matrix correction model). With dead time corrections that is not the case, as the methods now included in PfS require "calibrating" the """dead time constant""" separately for each method, since these "constants" come out at different values depending on the dead time correction method used (i.e. probably more than 3 µs with the classical method, less than 3 µs with the probeman et al. log method, and somewhere in between with the Willis and six-term expressions). <sarcasm on>So probably the PfS configuration files will address this need and will be enlarged a tiny bit. Is it going to have a matrix of dead time "constants" for 4 methods, and different XTALS, and a few per XTAL for low and high angles...? Just something like 80 to 160 slots to store "calibrated dead time constants" (let's count: 5 spectrometers * 4 XTALS * 4 methods * 2 high/low XTAL positions = 160) - how simple is that?<sarcasm off>
That is the main weakness of all of these dead time corrections: the pile-up correction is not understood and is fused together with the deterministic, pre-set signal-blanking dead time.
I do, however, tend to see this log equation as a less wrong kind of thing, at least in demonstrating that those "matrix matched" standards are pointless in most cases. I wish the nomenclature were given a second thought, and that probeman et al. would try undoing some of the historical nomenclature confusion rather than repeating and continuing the inherited mess from the past. I know it can take a lot of effort and nerves to go against that momentum.
On the other hand, if Brian is staying (and will stay) in the low count rate ranges, I see no reason the classical equation would not work for him. I, however, would not complain if count rates could be increased even to a few Mcps, and I am making some feasible plans (a small hardware project) to get there one day with proportional counters.
It was already mentioned somewhere here that the ability to measure correctly at high currents brings a huge advantage for mapping, where condition switching is not so practical.