Author Topic: An alternate means of calculating detector dead time  (Read 4254 times)

sem-geologist

• Professor
• Posts: 300
Re: An alternate means of calculating detector dead time
« Reply #30 on: August 15, 2022, 09:35:13 PM »

I get rather weary of your armchair criticisms.  You pontificate on various subjects, yet you provide no data or models.  If you have data or a model that sheds light on the correction for dead time or pulse pileup, then please present it.  A meme is not an adequate substitute.

I had shared my own Python code for Monte Carlo simulations, which modeled the pile-up events (which cause additional count losses on top of those from dead time) and reproduced quite closely the count rates observed at very high currents (like >800 nA) on our SXFiveFE.
Here: https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892
The Monte Carlo simulation is much simplified (only 1 µs resolution), and I plan to remake it in Julia with better resolution and a better pulse model, so that it can also take into account pulses that the PHA shift pushes out of the counting system. So no, I currently have no ready equation to show off, and I could not fit the MC dataset at the time, because I was stuck in the same rabbit hole you are currently in: trying to tackle the system as a single entity instead of subdividing it into simpler independent units or abstraction levels. But I am working on it; I am still learning the engineer's way of "divide and conquer".
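For anyone who doesn't want to dig through the linked post, the gist of such a simulation fits in a few lines of Python. This is a toy re-sketch, not the linked code; the 500 ns shaping window, the 3 µs enforced dead time, and the busy-period bookkeeping are illustrative assumptions:

```python
import random

def simulate_counts(true_rate, shaping_time=0.5e-6, dead_time=3e-6,
                    duration=1.0, seed=42):
    """Toy Monte Carlo: photons arrive as a Poisson process; an arrival is
    counted only if the system is idle, and an arrival during a shaped pulse
    piles up (lost) and extends the busy period."""
    rng = random.Random(seed)
    t, busy_until, counted = 0.0, -1.0, 0
    while True:
        t += rng.expovariate(true_rate)  # exponential gaps -> Poisson arrivals
        if t > duration:
            return counted
        if t < busy_until:
            busy_until = t + shaping_time  # pile-up: lost, extends busy period
        else:
            counted += 1
            busy_until = t + max(shaping_time, dead_time)
```

At 1 kcps this loses only a fraction of a percent of the counts, while at hundreds of kcps the losses become dramatic, which is the behaviour the dead time corrections argue about.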

On the instrument that I operate (JEOL JXA-8230), the required correction to the measured count rate is in the vicinity of 5% at 30 kcps.  Under these conditions, how significant is pulse pileup likely to be?  I have never advocated for application of the correction equation of Ruark and Brammer (1937) outside the region in which calculated ratios plot as an ostensibly linear function of measured count rate.

That depends entirely on the pulse shaping time and on how the pulse is read (the sensitivity of the trigger sensing the rising-edge transition into the pulse top). In your case, pulse pile-up could be <0.1% or the whole 5% of that correction. On the Cameca SX line, pulses are shaped inside the AMPTEK A203 (a charge-sensitive preamplifier and shaping amplifier in a single package, so we have publicly available documentation for it): the shaping time is 500 ns (the full pulse width is 1 µs). According to my initial MC results, on a Cameca SX at 10 kcps that makes 0.5% of counted pulses, so at 30 kcps on a Cameca instrument it would be around 2% of pulses (I would need to look back at my MC simulation to be more precise). Were the shaping time longer, that number could be twice as large. Unlike on the JEOL probe, on the Cameca SX you can set the dead time to an arbitrary (integer) value, and that does not affect the percentage of pile-ups at all, since the shaping time is fixed (different from EDS, where the output of the charge-sensitive preamplifier can be piped to different pulse shaping amplifiers); pile-up occurrence depends directly on pulse density (the count rate) alone. That is the main point of my criticism of you and probeman. Also, I don't know how it is on JEOL probes, but on Cameca probes there are test pins on the WDS signal distribution board for the signals coming out of the shaping amplifier, and they can be monitored with an oscilloscope: you can physically catch the pile-up events instead of just talking theory, and so I have spent some time with an oscilloscope at different count rates. It is a bit overwhelming to save and share those oscilloscope figures (my gear is feature-poor rather than high-end), but if there is demand I will prepare something to show (I plan a separate thread about the signal pipeline anyway).
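The scaling claimed above can be sanity-checked with the standard Poisson estimate: the chance that another photon arrives within the resolving window behind a given photon is 1 − exp(−rate × window). A sketch assuming the 500 ns shaping window (the ~2% figure at 30 kcps would correspond to a somewhat wider effective window):

```python
import math

def pileup_fraction(rate_cps, window_s=0.5e-6):
    # Poisson arrivals: probability that at least one more photon lands
    # within the resolving window behind a given photon
    return 1.0 - math.exp(-rate_cps * window_s)

for r in (10_000, 30_000):
    print(f"{r} cps -> {100 * pileup_fraction(r):.2f}% piled up")
```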

BTW, looking at the raw signal with the help of an oscilloscope was one of the best self-educational moments of my probe career. It instantly cleared up for me how the PHA works (and doesn't work), the role of bias and gain, and why the PHA shifts (none of those fairy stories about positive ions crowding around the anode, or anode voltage drop; there is a far simpler explanation based purely on signal processing). It also made me instantly aware that pile-ups exist and that they are a pretty big problem even at very low count rates. The only count rate at which you can be sure there was physically no pile-up is 0 cps.

And so, after seeing quintuple (×5) pile-ups (I am not joking), I made the Monte Carlo simulation, as it became clear to me that all hitherto proposed equations completely miss the point of pile-ups and fuse these two independent constants into a single constant (the same is said at https://doi.org/10.1016/j.net.2018.06.014). I was also initially led to the wrong belief that proportional counters can have dead time in the counter itself, which I found out is not the case; that simplifies the system (and thus the equation I am working on).

The meme was my answer to the (to me) ridiculous claim of "photon coincidence", where it should be "pulse pile-up" to make sense: their equation and method cannot detect any such thing at the photon level, as photon coincidence is a few orders of magnitude shorter than the shaped pulses (which are what is counted, not the photons directly). Yes, Brian, I probably would have done better to make a chart of pulses on a time scale illustrating the differences, but seeing how probeman just ignores all your plots (and units), I went for the meme.

I am starting to understand probeman's stubbornness about nomenclature.

From that shared publication it becomes clear that historically two independent processes, the physical dead time (real, electronic blocking of the pipeline) and pile-up events, were very often (wrongly) fused into a single "tau", and probeman et al. are carrying on the same tradition.

There is another problem I have with this log method: it would need "calibrations" for every other hardware integer dead time set on Cameca SX spectrometers (the default is the integer 3, but it can be set from 1 to 255).
"Whatever" - would say probeman, as he does not change hardware dead times and thus sees no problem.
I think we should have choices in our scientific models. We have 10 different matrix corrections in Probe for EPMA, so why not 4 dead time correction models?  No scientific model is perfect, but some are better and some are worse.  The data shows us which ones are which.  At least to those who aren't blind to progress.
Is it fair to compare 4 dead time correction models with 10 matrix corrections? With matrix corrections we can get different results from exactly the same input (some would even argue that the MACs should differ, fitted specifically to one or another matrix correction model). With dead time corrections that is not the case: the methods now included in PfS require "calibrating" the """dead time constant""" separately for each method, since these "constants" take different values depending on the correction method used (i.e., with the classical method probably more than 3 µs, with the probeman et al. log method less than 3 µs, and with the Willis and 6th-term expressions somewhere in between). <sarcasm on>So probably the PfS configuration files will address this need and will be enlarged a tiny bit. Will there be a matrix of dead time "constants" for 4 methods, different XTALS, and a few per XTAL for low and high angles? Just something like 80 to 160 slots to store "calibrated dead time constants" (let's count: 5 spectrometers × 4 XTALS × 4 methods × 2 high/low XTAL positions = 160). How simple is that?<sarcasm off>
That is the main weakness of all of these dead time corrections: the pile-up correction is not understood and is fused together with the deterministic, pre-set signal-blanking dead time.

I do, however, tend to see this log equation as possibly a less wrong thing, at least demonstrating that those "matrix matched" standards are pointless in most cases. I wish the nomenclature were given a second thought, and that probeman et al. would try to undo some of the historical nomenclature confusion rather than repeat and continue the inherited mess from the past. I know it takes a lot of effort and nerve to go against momentum.

On the other hand, if Brian is staying, and will stay, in the low count rate range, I see no problem: the classical equation will work for him. I, however, would not complain if count rates could be increased even to a few Mcps, and I am making feasible plans (a small hardware project) to get there one day with proportional counters.

It was already mentioned somewhere here that the ability to measure correctly at high currents brings a huge advantage for mapping, where condition switching is not so practical.

« Last Edit: August 16, 2022, 05:48:09 AM by sem-geologist »

Brian Joy

• Professor
• Posts: 296
Re: An alternate means of calculating detector dead time
« Reply #31 on: August 16, 2022, 04:03:49 PM »
I had shared my own Python code for Monte Carlo simulations, which modeled the pile-up events (which cause additional count losses on top of those from dead time) and reproduced quite closely the count rates observed at very high currents (like >800 nA) on our SXFiveFE.
Here: https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892
The Monte Carlo simulation is much simplified (only 1 µs resolution), and I plan to remake it in Julia with better resolution and a better pulse model, so that it can also take into account pulses that the PHA shift pushes out of the counting system. So no, I currently have no ready equation to show off, and I could not fit the MC dataset at the time, because I was stuck in the same rabbit hole you are currently in: trying to tackle the system as a single entity instead of subdividing it into simpler independent units or abstraction levels. But I am working on it; I am still learning the engineer's way of "divide and conquer".

I apologize for criticizing you unduly.  I simply did not remember your post from last year.  It would be helpful if you could expand on your treatment and show tests of your model (by fitting or comparing to data) with in-line illustrations.

As I’ve noted on a variety of occasions, my goal is to find the simplest possible model and method for determination of dead time at relatively low count rates.  It’s clear at this point that our picoammeters are not as accurate as we’d like them to be.  The ratio method of Heinrich et al. relies on as few measurements as possible for each calculated ratio (measurement is required only at each peak position simultaneously), which reduces the impact of counting error; it eliminates the picoammeter as a source of systematic error.  Further, inhomogeneities in the analyzed material do not affect the quality of the data, as both spectrometers (used simultaneously) are used to analyze the same spot.  Even if the equation of Ruark and Brammer (1937) is not fully physically realistic, it still constitutes a useful empirical model, as deviation from linearity in my ratio plots is only visible at count rates in excess of 50 kcps.  Like I noted, the equation is pragmatically simple, and this is nothing to be scoffed at (not that you’ve done this).  Donovan et al. have attempted to create a model that they say is applicable at high count rates, yet, as I’ve shown clearly, the model sacrifices physical reality as well as accuracy at low count rates.  This is simply not acceptable.  So far, I’ve mostly been ridiculed for pointing this out.
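For reference, the Ruark and Brammer (1937) correction referred to throughout this thread is the standard non-paralyzable form, N = N′/(1 − N′τ). A minimal sketch; the τ = 1.5 µs and the rates are illustrative, not calibrated values for any particular spectrometer:

```python
def true_rate(measured_cps, tau_s):
    # Ruark-Brammer / non-paralyzable dead time correction: N = N'/(1 - N'*tau)
    return measured_cps / (1.0 - measured_cps * tau_s)

# at 30 kcps measured with tau = 1.5 us, the correction is in the ~5% vicinity
print(true_rate(30_000, 1.5e-6))   # ~31414 cps
```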

If pulse pileup is a serious problem, then this should be revealed at least qualitatively as a broadening of the pulse amplitude distribution with increasing count rate.  Here are some PHA scans collected at the Si Ka peak position using uncoated elemental Si and TAPJ at various measured (uncorrected) count rates.  While the distribution does broaden noticeably progressing from 5 kcps to 50 kcps, deterioration in resolution only becomes visibly severe at higher count rates.  Although this assessment is merely qualitative, the behavior of the distribution is so poor above 100 kcps that I question whether quantitative work is even possible when calibrating at lower count rates.  Further, note the extent to which I had to adjust the anode bias just to keep the distribution centered in the window, let alone the fact that the distribution is clearly truncated at high count rates.

« Last Edit: August 17, 2022, 12:35:45 AM by Brian Joy »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

• Emeritus
• Posts: 2823
• Never sleeps...
Re: An alternate means of calculating detector dead time
« Reply #32 on: August 16, 2022, 06:39:54 PM »
The ratio method of Heinrich et al. relies on as few measurements as possible, which reduces the impact of counting error; it eliminates the picoammeter as a source of error.

You have got to be kidding! Fewer data points means lower counting errors? Sorry, but you're going to have to explain that one to me...

As for the picoammeter, the constant k-ratio method also eliminates the issue of picoammeter accuracy  as you have already admitted in previous posts (because we measure each k-ratio at the same beam current).  So just stop, will you?

Even if the equation of Ruark and Brammer (1937) is not fully physically realistic, it still constitutes a useful empirical model, as deviation from linearity in my ratio plots is only visible at count rates in excess of 50 kcps.

Yes, exactly. The traditional/Heinrich (linear) model is "not fully physically realistic", and because of this the model is only useful at count rates under 50K cps.  The fact that the raw data (even at relatively low count rates) start to demonstrate a non-linear response of the counting system means that assuming a linear model for dead time is simply wrong.  It is, in a nutshell, *non-physical*.

You're a smart guy (though just a tiny bit stubborn!), so if you just thought about the probabilities of photon coincidence for a few minutes, this should become totally clear to you.

Donovan et al. have attempted to create a model that they say is applicable at high count rates, yet, as I’ve shown clearly, the model sacrifices physical reality as well as accuracy at low count rates.  This is simply not acceptable.  So far, I’ve mostly been ridiculed for pointing this out.

Indeed you should be ridiculed because you have shown no such thing. Rather, we have shown repeatedly that the accuracy of the traditional expression and the logarithmic expression are statistically identical at low count rates.

As for "physical reality", how is including the probability of multiple photon coincidence somehow not physical?  I wish you would explain this point instead of just asserting it. Do you agree that multiple photons can be coincident within a specified dead time period?  Please answer this question...

And see this plot for actual data at 10 nA and 20 nA (you know, low count rates):

How exactly is this data at 10 and 20 nA "inaccurate"? The data for the linear and non-linear models at 10 nA are almost exactly the same. But at 20 nA we are seeing a slightly larger deviation, which just increases as the count rate increases.  You need to address these observations before you keep on making a fool of yourself.

And see here for the equations themselves:

It sure looks like all the equations converge at low count rates to me.  What do you think?  I'd like to hear your answer on this...
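That convergence (and the high-rate divergence) can also be checked numerically rather than by eye. Below is a sketch comparing the traditional expression with a logarithmic form; the particular log form used here, N = N′/(1 + ln(1 − N′τ)), whose denominator sums the coincidence series N′τ + (N′τ)²/2 + (N′τ)³/3 + ..., is my shorthand for the family of expressions under discussion, and τ is illustrative:

```python
import math

def linear(nm, tau):
    # traditional (Ruark-Brammer) expression: N = N'/(1 - N'*tau)
    return nm / (1.0 - nm * tau)

def logarithmic(nm, tau):
    # log-form expression: denominator 1 - (N'*tau + (N'*tau)**2/2 + ...)
    return nm / (1.0 + math.log(1.0 - nm * tau))

tau = 1.5e-6
for nm in (1_000, 4_000, 30_000, 100_000):
    rel = (logarithmic(nm, tau) - linear(nm, tau)) / linear(nm, tau)
    print(f"{nm:>7} cps measured: relative difference {100 * rel:.4f}%")
```

The relative difference is on the order of 0.0001% at 1 kcps and only reaches the percent level above ~100 kcps: the whole argument in one loop.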

So if your data disagree, then you've obviously done something wrong in your coding or processing, but I'm not going to waste my time trying to figure it out for you.  I humbly suggest that you give the constant k-ratio method a try, as it also does not depend on picoammeter accuracy, *and* it is much simpler and more intuitive than the Heinrich method.

After all, k-ratios are what we do in EPMA.
« Last Edit: August 16, 2022, 09:54:37 PM by Probeman »
The only stupid question is the one not asked!

sem-geologist

• Professor
• Posts: 300
Re: An alternate means of calculating detector dead time
« Reply #33 on: August 16, 2022, 10:54:48 PM »

And see here for the equations themselves:

It sure looks like all the equations converge at low count rates to me.  What do you think?  I'd like to hear your answer on this...

In all seriousness, I at least have no idea from that plot how big the relative error is at low count rates. Please re-plot it with x and y on log10 scales (with major and minor gridlines on), as that will improve the plot's readability and prove or disprove your claim about perfect convergence.

It looks like you don't get the point of the phrase "as few measurements"; maybe the wording is unclear or badly chosen. I think I am starting to understand what Brian is trying to convey to us, and I actually partially agree with it. The problem is this:
After all, k-ratios are what we do in EPMA.
But we don't! We do k-ratios in the software! The software calculates them, and we need reference measurements for that to work. But wait a moment: k-ratios are not simply pulses vs. pulses; we need to interpolate the background count rate (again, that depends on our interpretation of how to do it correctly) and remove it from the counts before building k-ratios. Dead time, pulse pile-up, picoammeter non-linearity, Faraday cup contamination, spectrometer problems, beam charging: those problems won't go away, and they are present in every measurement, including those not referenced to anything (i.e. a WDS wavescan). The EPMA continuously measures two quantities, pulses and time, and, on demand, beam current (the machine itself monitors and regulates other parameters such as HV). So k-ratios are calculated mathematical constructs, not direct measurements. We assume the time is measured precisely; the problem arises from the system's inability to count pulses arriving at too high a rate, or too close together, and that is what dead time correction models address.
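To make that "calculated construct" chain concrete, here is a sketch of roughly what the software does; the two-point linear background interpolation and all the numbers are illustrative assumptions:

```python
def net_rate(peak_cps, bg_low_cps, bg_high_cps):
    # interpolate the background under the peak from the two off-peak
    # measurements (simple midpoint average here) and subtract it
    return peak_cps - (bg_low_cps + bg_high_cps) / 2.0

def k_ratio(unknown, standard):
    # k-ratio: background-corrected rate on the unknown over the same on
    # the standard, both assumed already dead-time corrected
    return net_rate(*unknown) / net_rate(*standard)

# hypothetical (peak, bg-, bg+) rates in cps for unknown and standard
print(k_ratio((12_000, 150, 130), (30_000, 160, 140)))   # ~0.397
```

Every stage feeding those inputs (dead time, pile-up, picoammeter) leaves its fingerprint on the final ratio, which is the point being made.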

Now don't get me wrong: I agree that k-ratios ideally should be the same for low, low-middle, middle, middle-high, high, and ultra-high count rates. What I disagree with is using k-ratios as the starting (and only) point for calibrating dead time, effectively hiding problems in the lower-level systems within the standard deviation of such an approach. probeman, we have not seen how your log model, calibrated over this high range of currents, performs at the low currents Brian addresses here; I mean at 1–10 kcps, or at currents from 1 to 10 nA. I know it is going to be a pain to collect a meaningful number of counts at such low count rates. It should not sacrifice accuracy at low currents, as there are plenty of minerals which are small (no defocusing trick) and beam-sensitive. It could be that your log equation takes care of that. In particular, I am absolutely not convinced that what you call an anomaly at 40 nA in your graphs is not actually the correct measurement, with your 50–500 nA range being wrong (picoammeter). Also, in most of your graphs you still get not a straight line but distributions clearly bent one way or the other (visible with the bare eye).

While k-ratio vs. count rate plots have their testing merits, I would rather go systematically, identifying and fixing problems with the smaller pieces of the device separately, or at least independently testably. I would start with the picoammeter: collect readings while sweeping through the range of the C1+C2 coils (or only C2 on a field-emission tip), then plot raw counts vs. coil strength and look for steps at 0.5, 5, 50, 500 nA (yes, you need some knowledge of the hardware to know where to look). Only if that is correct is it worth testing linearity (e.g., to some degree against the EDS total pulse count, since EDS has much more advanced dead time corrections and pile-up compensation). Even better, linearity could be measured by injecting precise artificial currents into the picoammeter.
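The range-switch hunt in such a sweep could be automated along these lines; flagging jumps in the counts-per-current ratio, and the 1% tolerance, are my own illustrative choices rather than an established procedure:

```python
def find_steps(currents_na, ratios, tol=0.01):
    """Scan an increasing current sweep and flag adjacent readings whose
    counts-per-current ratio jumps by more than `tol` (relative), as would
    happen at picoammeter range switches near 0.5, 5, 50, 500 nA."""
    steps = []
    for (i1, r1), (i2, r2) in zip(zip(currents_na, ratios),
                                  zip(currents_na[1:], ratios[1:])):
        if abs(r2 - r1) / r1 > tol:
            steps.append((i1, i2))
    return steps
```

Feeding it a sweep where the ratio jumps while crossing 5 nA would return the pair of currents straddling the faulty range boundary.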

Then, and only then, does it make sense to develop new curves which take into account collisions of piled-up pulses, or distant galaxies for that matter (well, if you can measure photons directly with these devices, why couldn't I measure distant galaxies?). To illustrate the absurdity of keeping the name "photon coincidence" where the well-established term in the literature, "pulse pile-up", is what actually takes place on the real line: consider a report of two cars crashing into each other in which someone states that on such-and-such street two safety belts crashed into one another (while omitting any word about the cars). You are excited about the "smallish" final deviation across a huge range of count rates (which indeed is an achievement), but you fail to admit, or to prove otherwise, that it can have drawbacks at particular count rate spots (which from your point of view may look like insignificant corner cases or anomalies not worth attention, but which for someone else can be their daily bread).

Brian, thanks for these PHA scans; they shed a lot of light on how the counting system is doing on the JEOL side compared to Cameca probes. Are the count rates given raw, or dead time corrected?

I think it is constructed very similarly, with (unfortunately for you) some clear corner-cutting. The bias is quite low, and I am very surprised you see this much PHA shifting. Well, actually, I am not. You can replicate such severe PHA shifting at relatively low-to-moderate count rates by forcing the hardware dead time to 1 µs. (That is why JEOL probes have shorter dead times, at the cost of severe PHA shifting, and why the default (but user-changeable) dead time on Cameca is 3 µs: to postpone the PHA shifting to higher count rates.) The gain on the Cameca SX can be tweaked with a granularity of 4095 values (12 bits), whereas I hear on JEOL you can only set it to round numbers like 32, 64, 128 (correct me if I am wrong). The PHA shifting can look unrelated to this problem, but it is actually hiding your pile-ups. You see, 0 V is the baseline of the pulse only at low count rates. At higher count rates the average baseline shifts below 0 V, and by increasing the bias you actually get much larger pulse amplitudes, which, measured from 0 V, look the same. At higher count rates (and higher bias voltages) the real baseline of the pulse is far to the left of 0 V and the pile-up peaks are far above 10 V; that is why you see only the broadening (with the baseline moving away and the peak center held constant by increasing the bias, it makes a kind of "zooming" move into the distribution). If you want to see the pile-ups in the PHA plots you need to: 1) set your PHA peak at something like 3 V, so that double that value is still exposed within the 0–10 V range of the PHA; 2) not compensate the PHA shifting by increasing the bias, as that moves the pile-up peak to the right and can push it past 10 V. It is important to understand that what goes above 10 V is not magically annihilated: such >10 V pulses still need to be digitized before being discarded, and so they block the counting system just the same as pulses within the range.
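The advice to park the PHA peak low enough that the sum peak stays on scale can be illustrated with a toy amplitude histogram; the Gaussian single-pulse distribution, the 5% pile-up fraction, and the 3.2 V peak position are assumed values, not measurements:

```python
import random

def pha_histogram(n_pulses=50_000, peak_v=3.2, sigma_v=0.35,
                  pileup_frac=0.05, max_v=10.0, bins=20, seed=1):
    """Toy PHA: single pulses are Gaussian around peak_v; a fraction of
    events are two pulses summed, landing near 2*peak_v. Amplitudes beyond
    max_v fall off scale, as in a real 0-10 V PHA."""
    rng = random.Random(seed)
    hist = [0] * bins
    for _ in range(n_pulses):
        amp = rng.gauss(peak_v, sigma_v)
        if rng.random() < pileup_frac:       # pile-up: two pulses sum
            amp += rng.gauss(peak_v, sigma_v)
        if 0.0 <= amp < max_v:
            hist[int(amp / max_v * bins)] += 1
    return hist

h = pha_histogram()
# main peak sits in the 3.0-3.5 V bin; a clear secondary bump near 6.4 V
```

With the peak parked at ~8 V instead, the sum events would land beyond 10 V and simply vanish from the visible distribution, which is exactly the hiding effect described above.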

Does your JEOL probe have the ability to pass all pulses (aka integral mode on the Cameca SX), including those above 10 V?
I can see the pulse pile-up in the PHA distribution on the Cameca SX; look below at how the PHA peak at ~4 V grows with increasing count rate:

The picture above uses a custom bias/gain setup to diminish the PHA shifting. The next picture will give you an idea of how the PHA normally shifts on a Cameca probe. Count rates are raw (not corrected).

BTW, the broadening will be more severe when Ar escape peaks are present, as there are then many possible pile-up combinations: main peak + Ar esc, Ar esc + Ar esc, 2× Ar esc, 2× main, ...

There is another image, with normal counting using automatic bias and gain settings, where pile-ups are also visible in the PHA:
« Last Edit: August 17, 2022, 07:34:35 AM by sem-geologist »

Probeman

• Emeritus
• Posts: 2823
• Never sleeps...
Re: An alternate means of calculating detector dead time
« Reply #34 on: August 17, 2022, 07:31:23 AM »

And see here for the equations themselves:

It sure looks like all the equations converge at low count rates to me.  What do you think?  I'd like to hear your answer on this...

In all seriousness, I at least have no idea from that plot how big the relative error is at low count rates. Please re-plot it with x and y on log10 scales (with major and minor gridlines on), as that will improve the plot's readability and prove or disprove your claim about perfect convergence.

You have got to be kidding.

Does that help at all?

It looks like you don't get the point of the phrase "as few measurements"; maybe the wording is unclear or badly chosen. I think I am starting to understand what Brian is trying to convey to us, and I actually partially agree with it. The problem is this:
After all, k-ratios are what we do in EPMA.
But we don't! We do k-ratios in the software! The software calculates them, and we need reference measurements for that to work. Dead time, pulse pile-up, picoammeter non-linearity, Faraday cup contamination, spectrometer problems, beam charging: those problems won't go away, and they are present in every measurement, including those not referenced to anything (i.e. a WDS wavescan). The EPMA continuously measures two quantities, pulses and time, and, on demand, beam current (the machine itself monitors and regulates other parameters such as HV). So k-ratios are calculated mathematical constructs, not direct measurements.

What?  I never said we don't do k-ratios in software. I said we do them in EPMA!

And EPMA includes both instrumental measurements of raw intensities and software corrections of those intensities. I really don't get your point here.

But let me point out that even if we just plotted these raw measured intensities with no software corrections at all, the non-linear dead time effects at high count rates would be completely obvious.

The reason we plot them up as background corrected k-ratios is simply to make any dead time mis-calibration more obvious, since as you have already stated, the k-ratio should remain constant as a function of beam current/count rate!

I wonder if Brian agrees with this statement...  I sure hope so.

We assume the time is measured precisely; the problem arises from the system's inability to count pulses arriving at too high a rate, or too close together, and that is what dead time correction models address.

And that is exactly the problem that the logarithmic expression is intended to address!

Now don't get me wrong: I agree that k-ratios ideally should be the same for low, low-middle, middle, middle-high, high, and ultra-high count rates. What I disagree with is using k-ratios as the starting (and only) point for calibrating dead time, effectively hiding problems in the lower-level systems within the standard deviation of such an approach. probeman, we have not seen how your log model, calibrated over this high range of currents, performs at the low currents Brian addresses here; I mean at 1–10 kcps, or at currents from 1 to 10 nA. I know it is going to be a pain to collect a meaningful number of counts at such low count rates. It should not sacrifice accuracy at low currents, as there are plenty of minerals which are small (no defocusing trick) and beam-sensitive. It could be that your log equation takes care of that. In particular, I am absolutely not convinced that what you call an anomaly at 40 nA in your graphs is not actually the correct measurement, with your 50–500 nA range being wrong (picoammeter).

First of all, I have not shown any plots of Cameca k-ratios in this topic, only JEOL.  The 40 nA anomalies were only visible in the SX100 k-ratio data, which were shown only at the beginning of the other topic.  Here I am sticking to plotting the JEOL data from Anette's instrument, because it does not show these Cameca anomalies.

Also, in most of your graphs you still get not a straight line but distributions clearly bent one way or the other (visible with the bare eye).

Yeah, guess what: these instruments are not perfect. But the "bent this way or that way" distributions you describe are very much within the measurement noise.  Try fitting that data with a regression and you won't see anything statistically significant.

I have been planning to discuss these very subtle effects in the main topic (and have several plots waiting in the wings), but I want to clear up your and Brian's misunderstandings first. If that is possible!
The only stupid question is the one not asked!

sem-geologist

• Professor
• Posts: 300
Re: An alternate means of calculating detector dead time
« Reply #35 on: August 17, 2022, 07:48:39 AM »
You have got to be kidding.

...

Does that help at all?

Yes, that helps a lot. And it nails the point of being OK at low count rates! I was absolutely not kidding about that: consider using log scales for publication, as they show very clearly that it is on par with the classical equation at low count rates. It also would not hurt to enable minor grid lines in the plot (they expose where the linear-like behaviour changes into a curve), in case you are going to publish something like that. Also, I would swap x and y, as that would more closely resemble the classical detector efficiency plots in the SEM/EPMA and detection-systems literature, or the one I linked a few posts ago.

Probeman

• Emeritus
• Posts: 2823
• Never sleeps...
Re: An alternate means of calculating detector dead time
« Reply #36 on: August 17, 2022, 07:56:03 AM »
You have got to be kidding.

...

Does that help at all?

Yes, that helps a lot. And it nails the point of being OK at low count rates!

Well, thank goodness for that!

Now if only Brian would "see the light" too.
The only stupid question is the one not asked!

Probeman

• Emeritus
• Posts: 2823
• Never sleeps...
Re: An alternate means of calculating detector dead time
« Reply #37 on: August 17, 2022, 08:13:23 AM »
Since Brian continues to insist that these expressions yield significantly different results when working with low count rate data, even though the mathematics clearly shows that the corrections of both expressions approach unity at low count rates, let's run through some data for him.

Instead of looking at Anette's Spc3 PETL spectrometer, we'll switch to her Spc2 LIFL spectrometer, which produces 1/5 the count rate of the PETL spectrometer.  So, five times less count rate. We then plot that up with both the traditional linear expression and the new logarithmic expression:

Note that the 10 nA data starts at 4K cps, while the 200 nA data finishes at 80K cps, as measured on the pure Ti metal standard. The TiO2 count rates will of course be lower, as that is the whole point of the constant k-ratio dead time calibration method!

Please note that at 10 nA (4K cps on Ti metal), the points using the traditional linear expression and those using the logarithmic expression produce essentially identical results.  At lower count rates they will, of course, be closer still.

Could it be any more clear?  OK, I'll make it more clear.  Here are quantitative results for our MgO-Al2O3-MgAl2O4 FIGMAS system, measured at 15 nA, so at very typical (moderately low) count rates (9K cps and 12K cps respectively), starting with the traditional linear dead time correction expression:

St 3100 Set   2 MgAl2O4 FIGMAS
TakeOff = 40.0  KiloVolt = 15.0  Beam Current = 15.0  Beam Size =   10
St 3100 Set   2 MgAl2O4 FIGMAS, Results in Elemental Weight Percents

ELEM:       Mg      Al       O
TYPE:     ANAL    ANAL    SPEC
BGDS:      EXP     EXP
TIME:    60.00   60.00     ---
BEAM:    14.98   14.98     ---

ELEM:       Mg      Al       O   SUM
19  16.866  37.731  44.985  99.582
20  16.793  37.738  44.985  99.517
21  16.824  37.936  44.985  99.745

AVER:   16.828  37.802  44.985  99.615
SDEV:     .036    .116    .000    .118
SERR:     .021    .067    .000
%RSD:      .22     .31     .00

And now the same data, but using the new logarithmic dead time correction expression:

St 3100 Set   2 MgAl2O4 FIGMAS
TakeOff = 40.0  KiloVolt = 15.0  Beam Current = 15.0  Beam Size =   10
St 3100 Set   2 MgAl2O4 FIGMAS, Results in Elemental Weight Percents

ELEM:       Mg      Al       O
TYPE:     ANAL    ANAL    SPEC
BGDS:      EXP     EXP
TIME:    60.00   60.00     ---
BEAM:    14.98   14.98     ---

ELEM:       Mg      Al       O   SUM
19  16.853  37.698  44.985  99.536
20  16.779  37.697  44.985  99.461
21  16.808  37.887  44.985  99.681

AVER:   16.813  37.761  44.985  99.559

Is that close enough?   And again, at lower count rates, the results will be even closer together for the two expressions.

The only reason there is any difference at all (in the 3rd or 4th significant digit!) is that these are at 9K and 12K cps, and we still have some small amount of multiple photon coincidence even at these relatively low count rates, which the linear model does not account for!

Now let's go back to the plot above and add some regressions and see where they start at the lowest count rates to make it even more clear:

Is it clear now?
« Last Edit: August 17, 2022, 11:01:56 AM by Probeman »
The only stupid question is the one not asked!

Brian Joy

• Professor
• Posts: 296
Re: An alternate means of calculating detector dead time
« Reply #38 on: August 17, 2022, 11:23:32 AM »
You have got to be kidding.

...

Does that help at all?

Yes, that helps a lot, and it nails the point of being OK at low count rates! I was absolutely not kidding about that. Consider using log scales for publication, as they show very clearly that it is on par with the classical equation at low count rates. It also would not hurt to enable the minor grid lines in the plot (they expose where the linear-like behaviour turns into a curve), in case you are going to publish something like this. Also, I would swap x and y, as the plot would then more closely resemble the classical efficiency plots of other detectors in the SEM/EPMA and detection-system literature, or the one I linked a few posts back.

The different models (linear versus 2-term or 6-term/log-term) produce slightly different results at low count rates.  How could they not?  These small differences create problems when calculating ratios.  Further, as I’ve noted, the dead time constant cannot be adjusted arbitrarily without producing results that are physically unrealistic.  Please look very closely at my plots and commentary for Si, especially for the more subtle case of measurement set 1.  The “delta” plot is critically important.

I am not posting further on this subject.  I have been belittled repeatedly, and I am sick of it.  If you want to examine my spreadsheets in detail, then e-mail me at brian.r.joy@gmail.com or brian.joy@queensu.ca.
« Last Edit: August 17, 2022, 11:44:55 AM by Brian Joy »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

• Emeritus
• Posts: 2823
• Never sleeps...
Re: An alternate means of calculating detector dead time
« Reply #39 on: August 17, 2022, 12:12:33 PM »
You have got to be kidding.

...

Does that help at all?

Yes, that helps a lot, and it nails the point of being OK at low count rates! I was absolutely not kidding about that. Consider using log scales for publication, as they show very clearly that it is on par with the classical equation at low count rates. It also would not hurt to enable the minor grid lines in the plot (they expose where the linear-like behaviour turns into a curve), in case you are going to publish something like this. Also, I would swap x and y, as the plot would then more closely resemble the classical efficiency plots of other detectors in the SEM/EPMA and detection-system literature, or the one I linked a few posts back.

The different models (linear versus 2-term or 6-term/log-term) produce slightly different results at low count rates.  How could they not?  These small differences create problems when calculating ratios.  Further, as I’ve noted, the dead time constant cannot be adjusted arbitrarily without producing results that are physically unrealistic.  Please look very closely at my plots and commentary for Si, especially for the more subtle case of measurement set 1.  The “delta” plot is critically important.

Yes, they do produce statistically insignificant differences at low count rates, as they should, since the traditional linear expression cannot correct for multiple photon coincidence, and these multiple photon events do occur even at low count rates, though again, insignificantly. At the lowest count rates, all 4 expressions will produce essentially identical results, as even SEM Geologist now accepts.

The point is, that it is the traditional linear expression which is "physically unrealistic" (as you like to say), because it can only correct for single photon coincidence. Why is it so hard for you to understand this?

You do realize (I hope) that you are fitting your dead time constant to a linear model that doesn't account for multiple photon coincidence, so it is you that is adjusting your dead time to a physically unrealistic model.  Of course the dead time constant can be adjusted to fit a better (more physically realistic) model!

But the important point of all this, is not that the various expressions all produce similar results at low beam currents, but that the newer expressions (six term and logarithmic) produce much more accurate data at count rates that exceed 50K, 100K and even 300K cps.  As you have already admitted.  Yet you prefer to sulk in the 1930s and stubbornly limit your count rates to 30K or 40K cps.

That is your choice I guess.

I am not posting further on this subject.  I have been belittled repeatedly, and I am sick of it.  If you want to examine my spreadsheets in detail, then e-mail me at brian.r.joy@gmail.com or brian.joy@queensu.ca.

Good, because you need some time to think over where you are going wrong.  I'm not going to fix your mistakes for you!
The only stupid question is the one not asked!

sem-geologist

• Professor
• Posts: 300
Re: An alternate means of calculating detector dead time
« Reply #40 on: August 18, 2022, 04:43:22 AM »
Further, as I’ve noted, the dead time constant cannot be adjusted arbitrarily without producing results that are physically unrealistic.
This is where I disagree with both of you. First of all, the dead time constant should not be touched or adjusted at all; there should be some other adjustable variable that makes the models fit. The dead time is introduced deterministically (by physical, digitally controlled clocks, which can't skip a beat with age or drift in frequency; that is not an option in this world, only in fantasies), and the "tuning" of dead time constants arose historically from (far from) perfect models. Both of you still take an approach from the start of the last century by insisting on "tuning" the dead time constants, so I find it funny when one of you blames the other for ignoring progress and choosing to stay in the last century.

However, I think the model of probeman et al. is not physically realistic enough, as it accounts for pulse pile-up too weakly (not too strongly, as Brian's argumentation suggests), and that becomes obvious at higher currents and higher count rates (I don't see high count rates as an anomaly, but as one of the pivotal tests of a model's correctness). The classical "linear" model does not account for pile-up at all, so in that sense the new log function is much better: it does so at least partially, and while still not perfect, it is a movement in the right direction.
I also disagree on some minor points of nomenclature, such as "photon coincidence". If this were published in a physics/instrumentation/signal-processing journal, such a coined term would be ridiculed, since these events are hidden behind electronic signals that are wider in time than true photon-coincidence events by well over an order of magnitude (250 ns vs 5-10 ns). Unless probeman has discovered that signals in metal wires are photon- rather than electron-based...

What?  I never said we don't do k-ratios in software. I said we do them in EPMA!

And EPMA includes both instrumental measurements of raw intensities and software corrections of those intensities. I really don't get your point here.
I would say it depends on how we use the abbreviation "EPMA"; it depends on context. If the "A" stands for Analysis, then OK, yes, the software is part of that. However, if the "A" stands for Analyzer, hell no! I don't think my personal computer can be part of the instrument; that is ridiculous. And this is not some far-fetched situation, but exactly the situation with EDS and DTSA-II, where I take an EDS spectrum on the SEM and process it on another (personal) computer, which also holds my database of standard measurements. Generally the same could be done by taking raw count measurements and recalculating them with CalcZAF (am I wrong?).
Anyway, it is the "analyzer" which introduces dead time, not the "analysis", and k-ratios are a concept of the "analysis"; they are not inside the "analyzer".

Now, because Brian has some doubts about the physical realism of signal pile-up, I share here a few oscilloscope snapshots with some explanations.
To better understand what is going on in the more complicated snapshots, let's start with the simplest situation: a single lone pulse with a long stretch of nothing before and after it (the typical situation at 1-10 kcps). Please forgive the highly pixelated images, as that is the resolution this low-cost equipment spits out:

Things to note in this picture: the width of these pulses is the same for any measured X-ray energy or wavelength; it depends only on the time constant set on the shaping amplifier. Also, it is a bipolar pulse, since it is taken after the second RC differentiation (RC standing for resistor-capacitor, here replaced by op-amp equivalents) of a monopolar pulse. This is a Cameca SX instrument, and having opened the cover I noted that the charge-sensitive preamplifier and shaping amplifier is an AMPTEK A203 chip; its datasheet states that the shaping time is 250 ns (equal to 1 sigma for a Gaussian pulse shape, albeit pulses in spectroscopy are not symmetric, and what we see here is the pulse after the 2nd differentiation). That is clearly visible, as it takes about 450 ns from nothing (0 V) to the pulse peak, or ~300 ns FWHM (the math is as follows: a 250 ns shaping time gives 2.4 × 250 ns = 600 ns FWHM for the monopolar pulse after the first differentiation; the bipolar pulse after the 2nd differentiation has half of that, thus 300 ns). However, I will ignore the FWHM, as what matters most is the rising edge of the pulse, since that is where pulse detection and the trigger for pulse-amplitude capture happen. What is important to understand is that the default 3 µs (integer) dead time setting makes the system ignore random pulses generated after the counted peak, so that we get only pulses with correct amplitudes. Sounds right? If we are not interested in precise amplitudes, but only in the number of detected peaks, we can abandon the default 3 µs, set it to 1 µs, and enjoy severely increased throughput in integral mode. As you can notice, the 3 µs does not blank the negative "after-pulse" completely, and if we wanted a better PHA shape (less broadening, for a better differential PHA mode) we could increase the dead time to e.g. 5 µs.
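The arithmetic in the parenthetical above can be written out as a quick check (the 2.4× factor is the usual shaping-time-to-FWHM rule of thumb for a roughly Gaussian pulse, as used in the text; exact pulse shapes will differ):

```python
shaping_time = 250e-9                  # AMPTEK A203 shaping time, from its datasheet
fwhm_monopolar = 2.4 * shaping_time    # FWHM after the first differentiation
fwhm_bipolar = fwhm_monopolar / 2      # halved after the second differentiation
print(f"{fwhm_monopolar * 1e9:.0f} ns monopolar, {fwhm_bipolar * 1e9:.0f} ns bipolar")
```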

Then let's look at a situation with higher count rates (a random capture at ~20 kcps input). This shows a more realistic scenario of how the hardware dead time works (also shedding light on one of the causes of PHA broadening and shift; there are additional mechanisms as well). Please note I have added a purple line to highlight how the negative "after-tail" of the bipolar pulses influences the amplitude (relative to 0 V) of the following pulses; the red vertical lines are a manual deconvolution showing where the baseline has shifted, demonstrating that there is no physical loss of amplitude in the gas proportional counter, but rather that amplitude is lost through how it is recorded (as absolute voltages):

So let's look at 3 cases, with the dead time set to 1, 2, or 3 µs:
• Case 1 with 3 µs (default):
• pulse 1: counted in integral mode, with its amplitude correctly passed to the ADC
• pulses 2 and 3: ignored by dead time
• pulse 4: counted in integral mode, with the amplitude passed to the ADC low by ~200 mV
• Case 2 with 2 µs:
• pulse 1: same as in Case 1
• pulse 2: ignored by dead time
• pulse 3: counted in integral mode, but rejected in differential mode, as the peak value passed to the ADC would be near the background noise
• pulse 4: ignored by dead time (while the pulse 3 to pulse 4 peak-to-peak separation is more than 2 µs, the distance from the top of pulse 3 to the left bottom of pulse 4 is less than 2 µs); the comparator/sample-and-hold tandem would report the identified pulse to the system before the system was ready to listen
• Case 3 with 1 µs:
• pulse 1: same as in Case 1
• pulse 2: ignored by dead time
• pulse 3: same as in Case 2
• pulse 4: same as in Case 1
P.S. With FPGA-based DSP, all 4 pulses could be correctly recognized and their correct amplitudes collected. The problem of missing pulses arises from the simple (old-school) way of acquiring pulse information (the tandem of comparator and sample-and-hold chips).
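The effect of the three hold-off settings on throughput can be sketched with a toy Monte Carlo of a non-extending dead time applied to Poisson pulse arrival times (a simplification only: it counts triggers and ignores pulse shape, baseline shift, and edge cases like pulse 4 above):

```python
import random

def counted_fraction(rate_cps, dead_time_s, duration_s=5.0, seed=1):
    """Fraction of arriving pulses counted under a non-extending dead time."""
    rng = random.Random(seed)
    t, ready_at, arrived, counted = 0.0, 0.0, 0, 0
    while True:
        t += rng.expovariate(rate_cps)  # exponential inter-arrival time
        if t >= duration_s:
            break
        arrived += 1
        if t >= ready_at:               # the system is listening
            counted += 1
            ready_at = t + dead_time_s  # blind until the hold-off expires
    return counted / arrived

for dt_us in (1, 2, 3):
    f = counted_fraction(20_000, dt_us * 1e-6)
    print(f"{dt_us} us hold-off @ 20 kcps input: {f:.3f} of pulses counted")
```

The fractions come out near the non-paralyzable prediction 1/(1 + r·τ), i.e. roughly 0.980, 0.962, and 0.943 at 20 kcps for the three settings.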

But we are discussing pile-ups here, so let's look at the pile-up situation below:

There are 3 numbers on the plot, but actually there are 4 pulses. Pulses 1 and 2 are still possible to spot visually, as the separation between them is 440 ns, which is more than the shaping time (250 ns). Pulse 3 is a pile-up of two pulses whose time difference is too small to distinguish them. With the default 3 µs (integer) dead time, only the first pulse would be counted, and the 2nd and 3rd ignored. With a 1 or 2 µs dead time the 3rd pulse would be registered; however, because it starts at a negative voltage, its amplitude relative to 0 V would be badly underestimated, and thus the pile-up would not land on the PHA plot at 2× the value of the primary peak but somewhere in between. That is why, in PHA plots where pile-ups are observed (my two examples in the previous post), they do not form a nice Gaussian-shaped distribution at 2× the primary PHA peak position, but rather a washed-out, wide, irregular distribution with lots of smearing between the primary peak position and 2× that value. Why is it so? There is a much lower probability of a second pulse landing on top of a clean pulse (one with enough random silence before it), a window of roughly 450 ns, than of it landing in the negative-voltage after-tail, a window of roughly 2.5 µs, and with increasing count rates the second case is favored even more.
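The competing probabilities in the last comparison can be estimated with a Poisson argument: for random arrivals at rate r, the probability that another pulse lands within a window w of a given pulse is 1 − exp(−r·w). A rough sketch using the ~450 ns rising-edge window and ~2.5 µs negative-tail window quoted above (an approximation only; it ignores dead-time gating and the actual pulse shape):

```python
import math

def overlap_probability(rate_cps, window_s):
    # Poisson arrivals: P(at least one other pulse within the window)
    return 1.0 - math.exp(-rate_cps * window_s)

for rate in (10_000, 20_000, 30_000):
    on_top = overlap_probability(rate, 450e-9)   # lands on the rising edge/top
    in_tail = overlap_probability(rate, 2.5e-6)  # lands in the negative after-tail
    print(f"{rate:6d} cps: on-top {on_top:.2%}, in-tail {in_tail:.2%}")
```

At 10 kcps this gives roughly 0.4% for the on-top case versus about 2.5% for the in-tail case, and the gap widens with count rate, consistent with the washed-out pile-up distribution described above.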
« Last Edit: August 18, 2022, 09:26:21 AM by sem-geologist »

Brian Joy

• Professor
• Posts: 296
Re: An alternate means of calculating detector dead time
« Reply #41 on: August 19, 2022, 08:00:34 PM »
Actually, I do have a few more comments to make on this subject, and so I’ll go ahead and do so.  Feel free not to read if you don’t want to.

Let me list some advantages of use of the Heinrich et al. (1966) count rate correction method ("ratio method").  Obviously, the treatment could be extended to higher count rates with use of a different correction model.  The method of Heinrich et al. is very well thought out and eliminates some of the problems that can arise when k-ratios are measured/calculated; the method is actually quite elegant.  Note that I've included the reference and a template in the first post in this topic.

1) Only two measurements are required to form each ratio, as no background measurement is required.  The small number of measurements per ratio keeps counting error low.

2) The method is actually very easy to apply.  Each spectrometer is tuned to a particular peak position, and this is where it sits throughout an entire measurement set.  As long as conditions in the lab are stable (no significant temperature change, for instance), reproducibility of peak positions is eliminated as a source of error.  Differences in effective takeoff angle between spectrometers do not impact the quality of the data, as only the count rates are important.

3) Because measurements are made simultaneously on two spectrometers, beam current is completely eliminated as a source of error.  If the current varies while the measurement is made, then this variation is of no consequence.  This is not the case when measuring/calculating k-ratios.

4) While I’ve used materials of reasonably high purity for my measurements, the analyzed material does not need to be homogeneous or free of surface oxidation.  Inhomogeneity or the presence of a thin oxide film of possibly variable thickness will only shift position of the ratio along the same line/curve when plotted against measured count rate, and so variation in composition does not contribute error.  As I showed in plots of my first and second measurement sets for Si, counting error alone can easily explain most of the scatter in the ratios (i.e., other sources of error have been minimized effectively).

Ideally, in evaluation of the ratio data, the count rate for one X-ray line should be much greater than the other (as is typically seen in Kα-Kβ measurement pairs, especially at relatively low atomic number).  By this means, essentially all deviation from linearity on a given plot of a ratio versus count rate is accounted for by the Kα line.  This makes the plot easier to interpret, as it facilitates simple visual assessment of the magnitude of under- or over-correction.

I’d also like to note that measurements for the purpose of count rate correction are generally made on uncoated, conductive materials mounted in a conductive medium, and I have adhered to this practice.  Metals and semi-metals are not subject to beam damage that could affect X-ray count rates.  Carbon coats of unknown and/or possibly variable thickness will also affect X-ray count rates (even for transition metal K lines due to absorption of electron energy) and contribute error, as would ablation of that coat at high current.  Accumulation of static charge should not be discounted as a potential issue when analyzing insulators.  Variably defocusing the beam could affect X-ray count rates as well.

SEM Geologist has pointed out that pulse pileup can be a serious issue.  In using a one-parameter model, the dead time and pulse pileup cannot be distinguished.  Perhaps it would be better to call this single parameter a “count rate correction constant.”  In a sense, at least at relatively low count rate, the distinction is immaterial(?), as I’ve shown in my various plots of calculated ratios versus a given measured count rate that behavior at low count rate is ostensibly linear.  As such, it presents a limiting case.  What I mean by this is that, if the correction constant is changed to suit a particular model, then the slope of that line (determined by linear regression) must change such that it no longer fits the data at those relatively low count rates.  This is not acceptable, and I'll expand on this below with reference to some of my plots.  Maybe I’m wrong, but it appears that adjustment of the constant by Donovan et al. is being done by eye rather than by minimization of an objective function.  If this is true, then this is also not acceptable.

No mathematical errors are present in my plots; I do not do sloppy work, and the math is not particularly challenging.  Use of the Donovan et al. model results in significant errors in corrected ratios at relatively low count rate, as I’ve shown in my “delta” plots for Si.  Further, keep in mind that counting error is only applicable to the data and not to the model.  As I’ve pointed out, arbitrary adjustment of the correction constant will generally either lead to over-correction or under-correction of the ratio.  In the latter case, which relates to my measurement set 1 for Si, for instance, the Donovan et al. model predicts a decrease in interaction between electronic pulses as count rate increases (as indicated by a negative slope).  Obviously, this is a physical impossibility and indicates that a flaw is present in the model.  Arbitrary adjustment of the constant can produce minima, maxima, and points of inflection in the calculated ratio as a function of corrected count rate, and none of these features is physically realistic.  A one-parameter model can only produce fully physically realistic behavior if the correction constant is equal to that determined by linear regression of data in the low-count-rate region.  (This is simply a restatement of the limit that I mentioned in the paragraph above.)  SEM Geologist has suggested that the Donovan et al. model is a “step in the right direction,” but I respectfully disagree.  This claim is impossible to evaluate considering the model in its present state.  Perhaps a two-parameter model would be better suited to the problem -- I can only guess.
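The limiting-case argument above can be illustrated numerically: generate synthetic Kα and Kβ rates losing counts to a known non-extending dead time, then correct them with the linear expression using trial constants. Only the true constant keeps the corrected Kα/Kβ ratio flat against count rate; an adjusted constant tilts the ratio even at moderate rates. A sketch only — all numbers here are invented for illustration:

```python
def linear_correct(n_obs, tau):
    """Traditional linear dead time correction: N = N'/(1 - N'*tau)."""
    return n_obs / (1.0 - n_obs * tau)

TAU_TRUE = 1.5e-6  # assumed "hardware" dead time used to synthesize the data

# True Ka rates swept upward; Kb fixed at 1/10 of Ka (as in a Ka-Kb pair);
# both passed through a non-extending dead time to mimic measured rates.
for true_ka in (5e3, 10e3, 20e3, 40e3):
    meas_ka = true_ka / (1 + true_ka * TAU_TRUE)
    meas_kb = 0.1 * true_ka / (1 + 0.1 * true_ka * TAU_TRUE)
    ratios = [linear_correct(meas_ka, tau) / linear_correct(meas_kb, tau)
              for tau in (1.3e-6, 1.5e-6, 1.7e-6)]
    print(f"Ka={true_ka:6.0f} cps  ratio(tau=1.3/1.5/1.7 us):",
          " ".join(f"{r:.4f}" for r in ratios))
```

With tau = 1.5 µs the corrected ratio stays at 10.0 across all rates; with 1.3 or 1.7 µs it drifts systematically with count rate, which is the physically unrealistic slope described above.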

As I’ve described in words and illustrated in a plot, the JEOL pulse amplitude distribution is exceedingly difficult to work with at high measured count rates (say, greater than 100 or 150 kcps).  There is no way to ensure that the distribution will not be truncated by the baseline, and thus some X-ray counts will simply be lost for good.  The situation is complicated by the fact that electronic gain can only be set at one of four values:  16, 32, 64, or 128.  Fine adjustment must be made by varying the anode bias; this situation is not ideal, as increasing the bias exacerbates shifts in the distribution as count rate changes.  Further, operating at high count rates shortens the useful lifespan of a sealed Xe counter, and these are not cheap to replace (~5+ kiloloonies apiece).

At some point – when I get a chance – I am going to bring my new digital oscilloscope into the lab and do the same kind of testing that SEM Geologist has done.  I need to talk to an engineer first, though, as the JEOL schematics provided to me are not particularly easy to work with (certainly by design).  I thank SEM Geologist for leading the way on this.

Finally, notice that, in my criticism above, I have not leveled any personal insults.  I have not told anyone that he/she “deserves to be ridiculed” or “should be ridiculed” or whatever it was that John wrote in bold type and then deleted.  I am not an idiot, nor was I born yesterday, and I am being very pragmatic in my approach to the problem of correction of X-ray count rates.
« Last Edit: August 26, 2022, 11:35:16 PM by Brian Joy »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

• Emeritus
• Posts: 2823
• Never sleeps...
Re: An alternate means of calculating detector dead time
« Reply #42 on: August 20, 2022, 09:11:25 AM »
Use of the Donovan et al. model results in significant errors in corrected ratios at relatively low count rate

There is no shame in being wrong.  The shame is in stubbornly refusing to admit when one is wrong.

If you don't want to be laughed at, then agree that all four dead time correction equations produce the same results at low count rates as demonstrated in this graph:

Within a fraction of a photon count!

User specified dead time constant in usec is: 1.5
Column headings indicates number of Taylor expansion series terms (nt=log)
obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre
0          0          0          0          0          0          0          0          0
1000   1001.502     0.9985   1001.503   0.9984989   1001.503   0.9984989   1001.503   0.9984989
2000   2006.018      0.997   2006.027   0.9969955   2006.027   0.9969955   2006.027   0.9969955
3000   3013.561     0.9955   3013.592   0.9954898   3013.592   0.9954898   3013.592   0.9954898
4000   4024.145      0.994   4024.218   0.993982    4024.218   0.993982   4024.218    0.993982
5000   5037.783     0.9925   5037.926   0.9924719   5037.927   0.9924718   5037.927   0.9924718
6000    6054.49    0.99100   6054.738   0.9909595   6054.739   0.9909593   6054.739   0.9909593
7000    7074.28     0.9895   7074.674   0.9894449   7074.677   0.9894445   7074.677   0.9894445
8000   8097.166      0.988   8097.756   0.987928   8097.761   0.9879274    8097.761   0.9879274
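The table above can be reproduced to within its printed rounding. A minimal sketch, assuming the four expressions take the forms inferred from the table values (not copied from the Donovan et al. implementation): linear N = N'/(1 − N'τ), truncated series N = N'/(1 − Σᵢ(N'τ)ⁱ/i) with 2 or 6 terms, and the logarithmic limit N = N'/(1 + ln(1 − N'τ)):

```python
import math

TAU = 1.5e-6  # user-specified dead time constant, 1.5 us (as in the table)

def corrected_rate(n_obs, terms=None):
    """Dead-time-corrected (predicted true) rate from an observed rate n_obs.

    terms=1 is the traditional linear expression; terms=k truncates the
    series 1 - sum((n_obs*TAU)**i / i for i in 1..k); terms=None is the
    logarithmic (infinite-series) form.
    """
    x = n_obs * TAU
    if terms is None:
        denom = 1.0 + math.log(1.0 - x)
    else:
        denom = 1.0 - sum(x**i / i for i in range(1, terms + 1))
    return n_obs / denom

for n_obs in range(1000, 9000, 1000):
    preds = [corrected_rate(n_obs, t) for t in (1, 2, 6)] + [corrected_rate(n_obs)]
    print(n_obs, "  ".join(f"{p:9.3f} {n_obs / p:.7f}" for p in preds))
```

At 1000 cps all four columns agree to three decimals (1001.502 vs 1001.503), and the 6-term and logarithmic columns remain indistinguishable over the whole table, which is exactly the low-count-rate agreement being argued for.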
« Last Edit: August 20, 2022, 11:17:13 AM by Probeman »
The only stupid question is the one not asked!

Brian Joy

• Professor
• Posts: 296
Re: An alternate means of calculating detector dead time
« Reply #43 on: August 20, 2022, 03:57:22 PM »
There is no shame in being wrong.  The shame is in stubbornly refusing to admit when one is wrong.

If you don't want to be laughed at, then agree that all four dead time correction equations produce the same results at low count rates as demonstrated in this graph:

Please reread my previous post and please look at my plots.  I am not going to restate my argument.
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

• Emeritus
• Posts: 2823
• Never sleeps...
Re: An alternate means of calculating detector dead time
« Reply #44 on: August 20, 2022, 07:05:39 PM »
There is no shame in being wrong.  The shame is in stubbornly refusing to admit when one is wrong.

If you don't want to be laughed at, then agree that all four dead time correction equations produce the same results at low count rates as demonstrated in this graph:

Please reread my previous post and please look at my plots.  I am not going to restate my argument.

I am asking you to answer the question:  do all four dead time correction equations produce the same results at low count rates, as demonstrated in this graph?

Within a fraction of a photon count!

User specified dead time constant in usec is: 1.5
Column headings indicates number of Taylor expansion series terms (nt=log)
obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre
0          0          0          0          0          0          0          0          0
1000   1001.502     0.9985   1001.503   0.9984989   1001.503   0.9984989   1001.503   0.9984989
2000   2006.018      0.997   2006.027   0.9969955   2006.027   0.9969955   2006.027   0.9969955
3000   3013.561     0.9955   3013.592   0.9954898   3013.592   0.9954898   3013.592   0.9954898
4000   4024.145      0.994   4024.218   0.993982    4024.218   0.993982   4024.218    0.993982
5000   5037.783     0.9925   5037.926   0.9924719   5037.927   0.9924718   5037.927   0.9924718
6000    6054.49    0.99100   6054.738   0.9909595   6054.739   0.9909593   6054.739   0.9909593
7000    7074.28     0.9895   7074.674   0.9894449   7074.677   0.9894445   7074.677   0.9894445
8000   8097.166      0.988   8097.756   0.987928   8097.761   0.9879274    8097.761   0.9879274
« Last Edit: August 20, 2022, 09:11:30 PM by Probeman »
The only stupid question is the one not asked!