Author Topic: New method for calibration of dead times (and picoammeter)  (Read 28677 times)

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #120 on: November 29, 2022, 08:37:32 AM »
First note that the count rates are almost the same (at 20 keV) as Anette's JEOL instrument at 15 keV. Next note that the k-ratio variation in the Cameca Y-axis range is larger than Anette's instrument though still within a percent or so. But that's still a pretty significant variation in the k-ratios as a function of count rate. So the question is, why is it so "squiggly" on the Cameca instrument? Though I should add that if we look really closely at Anette's JEOL data, there is an almost imperceptible "squiggle" to her data as well...  though seemingly smaller by about a factor of 10.  So what is causing these "squiggles" in the constant k-ratios?

Also note that the reason the k-ratios are starting to "head north" at 140 nA is simply that at that beam current the count rate on the Ti metal is approaching 600 kcps!  And on the Cameca, with a 2.6 µsec dead time constant, the logarithmic dead time correction is around 200% and really just can't keep up any more!
Since you have estimated 2.6 µs with the logarithmic (I guess) equation, that means the hardware is set to 3 µs, correct? I thought I had already convinced you of the benefits of reducing it to at least 2 µs (which should give you an estimated "dead time constant" somewhere between 1.5 and 1.8 µs), so why are you still using 3 µs? You can reduce it safely in integral mode without any drawbacks (but in diff mode it is better to increase it to at least 4 µs, if you use diff mode for anything at all). I have gathered some limited measurements on Ti/TiO2 with the hardware DT set to 1 µs; I need to pull that data together and organize it before I can show anything here.

Yes, this is using the logarithmic expression with an integer DT of 3 usec. 

I have presented this data to the lab manager at UofO and mentioned to her that we could be utilizing the 2 usec integer DT, but since I retired earlier this year (now a courtesy faculty), I can only make suggestions.   :'(

I am aware of these "squiggles", as you call them, and I pointed them out previously (bold part in quote):
Now don't get me wrong: I agree that k-ratios ideally should be the same for low, low-middle, middle, middle-high, high and ultra-high count rates. What I disagree with is using k-ratios as the starting (and only) point for the calibration of dead time, effectively hiding problems in some of the lower-level systems within the std dev of such an approach. Probeman, we have not yet seen how your log model, calibrated over this high range of currents, performs at the low currents which Brian addresses here, I mean at 1-10 kcps, or at currents from 1 to 10 nA. I know it is going to be a pain to collect a meaningful number of counts at such low count rates, but the method should not sacrifice accuracy at low currents, as there are plenty of minerals which are small (no defocusing trick) and beam sensitive. It could be that your log equation takes care of that. In particular, I am absolutely not convinced that what you call the anomaly at 40 nA in your graphs is not actually the correct measurement, with your 50-500 nA range being the one that is wrong (picoammeter). Also, in most of your graphs you still get not a straight line but distributions that are clearly bent one way or the other (visible with the bare eye).
There are no pulses missing; it is just that this equation is not perfect. Think of it like the numerous matrix correction models, which work OK and comparably at common acceleration voltages (7-25 kV), but some of which give very large biases for (very) low voltage analyses, because some of them mathematically describe a very oversimplified physical reality. As I said, I have already made an MC simulation and there is no visible discrepancy between the modeled input and observable output count rates, although I could not find the equation, as my greed for the whole possible range of counts (let's say up to 10 Mcps) stalled me. At least your method extends the usable range to 150-200 kcps, and you can minimize the effect of the first "bump" by calibrating the dead time only up to 100 kcps. Your log equation in its current form is already a nice improvement, as there is no need to be limited to 10-15 kcps anymore, or to require separate calibrations for high currents, or to use matrix-matched standards (where in reality it was the count-rate-matched intensities that provided the better results, misinterpreted as having anything to do with the matrix).

Thank you. But of course the logarithmic equation is not perfect!  We are merely trying to find a better mathematical model for what we observe - you know, science!   :D

As for lower count rates, we have already demonstrated that at low count rates the performance of the traditional and logarithmic models are essentially identical. Here is a sentence from our recent dead time paper: " In fact, at 1.5 us dead times, the traditional and logarithmic expressions produce results that are the same within 1 part in 10,000,000 at 1000 cps, 1 part in 100,000 at 10 kcps and 1 part in 10,000 at 20 kcps".

As for problems with the picoammeter, please remember that because both the primary and secondary standards for each k-ratio are measured at the *same* beam current, the accuracy or linearity of the picoammeter should not be an issue.  That's one of the advantages of the constant k-ratio method.
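To see why, with generic symbols (just an illustration): if the true current delivered at a given nominal setting is I, then

k = N_TiO2(I) / N_Ti(I)

and the value the picoammeter *reports* for I never enters the ratio. A miscalibrated reading only mislabels the nominal beam current; it does not change the measured k-ratio.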

Seriously, I am totally happy with the performance of the log expression because as you say, it extends our quantitative accuracy by roughly a factor of 8x or so in count rates.  But I am also sure that it could be further improved upon.

At this point, I just have some intellectual curiosity as to what these "squiggles" are caused by in the Cameca, and why the JEOL instrument appears to not show similar artifacts.  Do you see anything like this in constant k-ratio measurements on your instrument?

I will try to redo the Monte Carlo simulation using a real pulse shape, with a more detailed simulation of the detection - that should clear things up a bit, I think. The point is actually not how and where the coincidences happen (inside the GPC detector - photon coincidence - vs at the shaping amplifier signal - pulse pile-up), but how they are ignored. This is what I think your log equation starts to fail to account for correctly at higher count rates (>150 kcps).

Yes, depending on the dead time constants.  For Cameca instruments (~2 to 3 usec), the log expression begins to fail around 200 to 300 kcps, while for the JEOL (~1 to 2 usec) the log expression seems to be good up to 300 to 400 kcps.

This last weekend I measured constant k-ratios for Si Ka in SiO2 and benitoite (using the same bias voltages) and calculated the optimum dead times and find that they are very similar to the results from Ti Ka.  Interestingly the PHA settings were much easier to deal with since there is no escape peak!  Next I need to measure some more emission lines to see if there are any systematic differences.
« Last Edit: November 30, 2022, 09:17:48 AM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 304
Re: New method for calibration of dead times (and picoammeter)
« Reply #121 on: December 05, 2022, 09:19:08 AM »
I revisited my MC simulations and re-investigated the pulse sensing electronics of the Cameca. I got a much clearer view and found out that it works a bit differently than I had thought initially, and after sorting it out it becomes absolutely clear why the k-ratio shoots up after 200 kcps (or more if using shorter hardware dead time constants). The answer is that integral mode on the Cameca is not all that "integral" - there is a physical barrier stopping pulses near 0 V and below from being counted in that mode. The left-most part of the PHA distribution always looks like a natural decay (going toward 0 V from the left-most "peak" of the distribution) and it hides part of the missed/ignored pulses.
The key finding is that the comparator compares the inverted pulse signal (inverted by the gain amplification electronics) with the upright pulse signal (negative, re-inverted to positive and scaled 0 to 5 V by the sample-and-hold chip). A sensed-pulse event is triggered by the rising edge of the comparator output signal, which is used as the clock of a D flip-flop. That design does not work so well with pulse pile-ups: even if the D flip-flop is reset (after the hardware-set dead time times out) by triggering its RESET and D pins, for the clock input to work it first needs to go from the high state back to the low state, and with a dense pulse train this can be delayed a lot.

I had always thought that it used the classical comparator/S&H-chip tandem, comparing the original (upright) signal with the non-inverted signal delayed by the S&H chip, which would put the comparator output into the low state when the pulse voltage drops after its peak. Such a design would be able to sense pulses shifted up or down (shifted by pile-up), even pulses which start well below 0 V and whose tops never rise above 0 V. Anyway, I think I should illustrate these phenomena better so they can be properly understood (probably in another thread).
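In the meantime, here is a toy numeric sketch of the rising-edge issue (the triangular pulse shape and the threshold are invented purely for illustration - this is not the real Cameca electronics):

Code:
import numpy as np

# toy illustration: counting rising edges misses pulses whenever pile-up keeps the
# summed signal from ever returning below the comparator threshold
t = np.arange(0.0, 20e-6, 40e-9)                                      # 20 us at 40 ns steps
pulse = lambda t0: np.clip(1.0 - np.abs(t - t0) / 0.5e-6, 0.0, None)  # ~1 us triangular pulse
signal = pulse(3e-6) + pulse(8e-6) + pulse(8.6e-6) + pulse(9.2e-6)    # 4 photons, 3 overlapping
above = signal > 0.05                                                 # comparator output vs ~0 V
rising_edges = int(np.count_nonzero(above[1:] & ~above[:-1]))
print("photons:", 4, " rising edges counted:", rising_edges)          # prints 2, not 4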

Finally, I was also interested in what a more detailed MC simulation of the pulses would reveal.
So, to remind everyone, I was quite against the "photon coincidence" terminology used by probeman. The detailed MC allowed me to look into this more closely, and the numbers show that I was partly wrong.
So first of all some constants: other work shows that the typical primary GFPC pulse (of random shape) is about 200 ns long, so if two photons arrive at the detector within such a window we can say that there is photon coincidence. My simulation runs with 40 ns granularity; if the coincidence falls within such a window then, at least in this simulation, it is unresolvable (it looks like a single event with the sum of the collected energies).
So here are the numbers: the fraction of initial GFPC pulses which represent more than a single photon hit to the detector:
count rate | 40 ns wind. | 200 ns wind.
10 kcps    | 0.02%       | 0.08%
100 kcps   | 0.2%        | 0.81%
1 Mcps     | 1.77%       | 7.43%
10 Mcps    | 16%         | 50%

So what does this mean? Even with a better counting design (i.e. FPGA-based deconvolution) there would still be a pretty significant limitation from perfectly coincident (within a 40 ns window) photons. This is not such a huge problem for low energy X-rays (below the Ar escape peak energy), as the number of counts in a piled-up pulse can be found by dividing its amplitude by the average single pulse amplitude. With Ar escape pulses present, this gets challenging: is the piled-up pulse composed of 2 normal pulses, or is it 1 normal + 2 Ar escape pulses? And even a pulse of average amplitude: does it represent a single pulse, or rather 2 or 3 piled-up (coincident) Ar escape pulses? So the current way of counting, even if the pulse sensing were significantly improved with newer electronics, would be limited to ~500 kcps, as going above that would start increasing the uncertainty of the measurement.
As probeman already noticed, those high dead time correction factors mean a lot of uncertainty introduced into the measurement; this could be lowered with better pulse sensing, but the limit would just be pushed to new boundaries.
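As a very rough cross-check of the scale of these numbers, one can ask, under plain Poisson assumptions, what fraction of occupied windows contain more than one photon. This back-of-envelope estimate will not reproduce the simulated figures exactly (the simulation merges pulses differently), but it gives the same order of magnitude:

Code:
import math

def coincidence_fraction(rate_cps, window_s):
    """Of the windows containing at least one photon, the fraction containing two
    or more, for Poisson arrivals at rate_cps and a window of length window_s."""
    lam = rate_cps * window_s                  # mean photons per window
    p_ge1 = 1.0 - math.exp(-lam)               # P(>=1 photon)
    p_ge2 = p_ge1 - lam * math.exp(-lam)       # P(>=2 photons)
    return p_ge2 / p_ge1

for rate in (1e4, 1e5, 1e6, 1e7):
    print(f"{rate:>10,.0f} cps   40 ns: {coincidence_fraction(rate, 40e-9):.2%}"
          f"   200 ns: {coincidence_fraction(rate, 200e-9):.2%}")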

Thanks to playing around with the MC simulation I think I have found an absolutely insane alternative way to deal with this problem, one which would ditch the dead time completely and absolutely, but I would rather present it after hardware testing.

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #122 on: December 05, 2022, 09:45:47 AM »
Finally, I was also interested in what a more detailed MC simulation of the pulses would reveal.
So, to remind everyone, I was quite against the "photon coincidence" terminology used by probeman. The detailed MC allowed me to look into this more closely, and the numbers show that I was partly wrong.

So first of all some constants: other work shows that the typical primary GFPC pulse (of random shape) is about 200 ns long, so if two photons arrive at the detector within such a window we can say that there is photon coincidence. My simulation runs with 40 ns granularity; if the coincidence falls within such a window then, at least in this simulation, it is unresolvable (it looks like a single event with the sum of the collected energies).
So here are the numbers: the fraction of initial GFPC pulses which represent more than a single photon hit to the detector:
count rate | 40 ns wind. | 200 ns wind.
10 kcps    | 0.02%       | 0.08%
100 kcps   | 0.2%        | 0.81%
1 Mcps     | 1.77%       | 7.43%
10 Mcps    | 16%         | 50%

So what does this mean? Even with a better counting design (i.e. FPGA-based deconvolution) there would still be a pretty significant limitation from perfectly coincident (within a 40 ns window) photons. This is not such a huge problem for low energy X-rays (below the Ar escape peak energy), as the number of counts in a piled-up pulse can be found by dividing its amplitude by the average single pulse amplitude. With Ar escape pulses present, this gets challenging: is the piled-up pulse composed of 2 normal pulses, or is it 1 normal + 2 Ar escape pulses? And even a pulse of average amplitude: does it represent a single pulse, or rather 2 or 3 piled-up (coincident) Ar escape pulses? So the current way of counting, even if the pulse sensing were significantly improved with newer electronics, would be limited to ~500 kcps, as going above that would start increasing the uncertainty of the measurement.
As probeman already noticed, those high dead time correction factors mean a lot of uncertainty introduced into the measurement; this could be lowered with better pulse sensing, but the limit would just be pushed to new boundaries.

Thanks to playing around with the MC simulation I think I have found an absolutely insane alternative way to deal with this problem, one which would ditch the dead time completely and absolutely, but I would rather present it after hardware testing.

Hi SG,
This is a really interesting post, and most excellent work.  Of course we would all be very interested in any hardware breakthroughs you can come up with. One question: do you think these same hardware limitations also apply to the JEOL electronics?  I am asking because I am seeing some evidence that this is indeed the case and I was just posting about this when you posted this morning.

But I also wanted to say to you that we also recently discovered that my co-authors and I were also partly wrong based on further MC simulations that we performed with Aurelien Moy for our paper.  Basically we found that the traditional dead time expression  corrects for even multiple photon coincidence and that the non-linear trends that we are observing at these excessively high count rates are due to some other hardware limitations in the instrument.  So these non-linear dead time expressions (Willis, six term, logarithmic and exponential) are correcting for effects other than simple photon coincidence. See the attached Excel spreadsheet by Aurelien Moy which compares several of these dead time expressions with his Monte Carlo modeling.

Based on this new information (we were a little stunned to say the least!) we ended up making some significant changes to the paper and in fact we have added you (Petras) in the acknowledgments section of our paper, if that is OK with you.  Your discussions have been very helpful and we very much appreciate your contributions to the topic. 

I still think that if you get a chance to perform some constant k-ratio measurements on your own instrument, you would find the data very interesting.  OK, back to the post I started on this morning...
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #123 on: December 05, 2022, 10:01:47 AM »
So I previously showed some strange "squiggles" in the constant k-ratio data from one of our large area crystal Cameca spectrometers for Ti Ka using TiO2 and Ti metal standards:

https://probesoftware.com/smf/index.php?topic=1466.msg11416#msg11416

As described in that previous post, at the time I was under the impression that these squiggles were only showing up in the Cameca instrument, but I wanted to perform additional tests to compare with a different emission line. So here are constant k-ratio measurements on Benitoite and SiO2 on our LTAP crystal up to 200 nA, first with the traditional dead time expression:



Clearly the traditional dead time expression is not very useful at these high count rates, giving us a total variance of around 29%!  But just for fun, let's increase the dead time constant to an arbitrarily large value to try and "force" the k-ratios to be more constant:



Unfortunately, as we can see, with an arbitrarily large dead time constant we start to over-correct the lower intensities while still under-correcting the higher intensities, giving us a total variance of around 7%, which is better but still not sufficient for quantitative work.  So let's try the logarithmic dead time expression:



This gives us a total variance of around 0.6% which is pretty darn good, but lo and behold, there are those darn "squiggles" again.  Again, it is worth mentioning that unless one is utilizing the constant k-ratio method, these subtle variations would never be noticeable.  Also worth mentioning is the fact that at lower count rates, these "squiggles" are not nearly as pronounced as seen here on a normal TAP crystal from this same run:



So about ~ 1.5% variance.

OK so what about the "squiggles" in the JEOL constant k-ratios I mentioned? Well I decided to look more closely at some of Anette's constant k-ratio measurements also using Si Ka and darn if I didn't find very similar "squiggles" when looking at k-ratios with the highest intensities. So here are the Si Ka k-ratios on the JEOL TAP crystal:



Please ignore the k-ratios at the highest count rates. These are due to difficulties with getting a proper tuning of the PHA bias/gain settings, as this data is from back in August when we were still trying to figure out how to deal with the extreme pulse height depression at these crazy count rates.
 
The point is that these subtle "squiggles" are also visible in the constant k-ratio data on the JEOL instrument.
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 304
Re: New method for calibration of dead times (and picoammeter)
« Reply #124 on: December 05, 2022, 12:31:44 PM »
One question: do you think these same hardware limitations also apply to the JEOL electronics?  I am asking because I am seeing some evidence that this is indeed the case and I was just posting about this when you posted this morning.
JEOL clearly has a somewhat different approach and a different problem: their pulse sensing catches the background, and that can make it behave more like an extendable dead time, whereas the Cameca PHA and pulse sensing ignore anything near and below 0 V. Hopefully Brian will get to that at some point, as he is already investigating the pulse pre-amplifier and shaping stages on the JEOL probe.

But I also wanted to say to you that we also recently discovered that my co-authors and I were also partly wrong based on further MC simulations that we performed with Aurelien Moy for our paper.  Basically we found that the traditional dead time expression  corrects for even multiple photon coincidence and that the non-linear trends that we are observing at these excessively high count rates are due to some other hardware limitations in the instrument.  So these non-linear dead time expressions (Willis, six term, logarithmic and exponential) are correcting for effects other than simple photon coincidence. See the attached Excel spreadsheet by Aurelien Moy which compares several of these dead time expressions with his Monte Carlo modeling.
Now this surprises me and I would disagree. My first MC simulation, much oversimplified and modeling strictly only pulse pile-ups (no PHA shift, actually no amplitude data, no Ar escape pulses, ideal deterministic pulse sensing with a strict forced dead time), clearly demonstrated that neither the classical nor the Willis equation could be fitted to the "ideally", deterministically counted pulses (imposing only the hardware dead time). Thus I am very surprised and curious how your MC could give the opposite conclusion. Maybe your model misses some crucial piece? Has Aurelien looked into my first MC? Periodic pulses keeping the pulse train above 0 V (preventing the comparator from setting its output low, so that the flip-flop can't be triggered by a rising edge from the low to the high state and can't signal that there was a pulse), and random pulses sitting at negative voltage after a tail/depression with their tops never getting above 0 V - those are the culprits behind the increasing fraction of pulses not being sensed. They do arrive from the detector at the pulse sensing and PHA system (heck, I even spent an afternoon counting pulses by hand (by eye) on the oscilloscope at something like 1 Mcps to prove to myself that they are there and nothing is missing), but the damn pulse sensing system has its "eyes closed" and does not sense them properly.
 

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #125 on: December 05, 2022, 02:12:43 PM »
But I also wanted to say to you that we also recently discovered that my co-authors and I were also partly wrong based on further MC simulations that we performed with Aurelien Moy for our paper.  Basically we found that the traditional dead time expression  corrects for even multiple photon coincidence and that the non-linear trends that we are observing at these excessively high count rates are due to some other hardware limitations in the instrument.  So these non-linear dead time expressions (Willis, six term, logarithmic and exponential) are correcting for effects other than simple photon coincidence. See the attached Excel spreadsheet by Aurelien Moy which compares several of these dead time expressions with his Monte Carlo modeling.
Now this surprises me and I would disagree. My first MC simulation, much oversimplified and modeling strictly only pulse pile-ups (no PHA shift, actually no amplitude data, no Ar escape pulses, ideal deterministic pulse sensing with a strict forced dead time), clearly demonstrated that neither the classical nor the Willis equation could be fitted to the "ideally", deterministically counted pulses (imposing only the hardware dead time). Thus I am very surprised and curious how your MC could give the opposite conclusion. Maybe your model misses some crucial piece? Has Aurelien looked into my first MC? Periodic pulses keeping the pulse train above 0 V (preventing the comparator from setting its output low, so that the flip-flop can't be triggered by a rising edge from the low to the high state and can't signal that there was a pulse), and random pulses sitting at negative voltage after a tail/depression with their tops never getting above 0 V - those are the culprits behind the increasing fraction of pulses not being sensed. They do arrive from the detector at the pulse sensing and PHA system (heck, I even spent an afternoon counting pulses by hand (by eye) on the oscilloscope at something like 1 Mcps to prove to myself that they are there and nothing is missing), but the damn pulse sensing system has its "eyes closed" and does not sense them properly.

I know, we were totally surprised too.  As I said, we were stunned!    :o

But Aurelien said he looked at his code very carefully several times and is convinced that the traditional expression does deal properly with all (ideal) photon coincidence.  I add "ideal" as a qualifier because once the pulses start to overlap at very high count rates we think there are some non-linear effects (maybe due to the non-rectilinear shape of the pulses?) that start creeping in, hence the need for a logarithmic dead time correction at high count rates.
« Last Edit: December 05, 2022, 02:16:31 PM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #126 on: December 08, 2022, 10:14:47 AM »
I wanted to share the PHA scans for a couple of spectrometers on Si Ka, whose constant k-ratios were plotted in the above post:

https://probesoftware.com/smf/index.php?topic=1466.msg11432#msg11432

Here's spec 1 TAP at 200 nA:



Remember, we should always tune our PHA settings at the highest beam current we anticipate using, on the highest concentration that we will be utilizing, in a specific probe session. This is to ensure that the PHA peak will always stay above the baseline level, even with pulse height depression effects at the highest count rates.

Now the same spectrometer at 10 nA:



Interestingly the PHA shift for Si Ka at lower count rates is much more subdued than for Ti Ka. Also, the lack of an escape peak makes things much easier!

Now for spectrometer 2 using a LTAP crystal (~370 kcps on SiO2) at 200 nA:



Again, we do not care that the peak is being "cut off" on the right side of the plot, because in INTEGRAL mode the PHA system still counts those pulses, as previously demonstrated using a gain test acquisition of Ti Ka on Ti metal:



See here (and subsequent posts) for more details on PHA tuning:

https://probesoftware.com/smf/index.php?topic=1475.msg11330#msg11330

And finally spectrometer 2 LTAP again, but at 10 nA:
 


Note the shift to the right at these lower count rates. But the important point is that the PHA peak is always *above* the baseline level from 10 nA to 200 nA, so we have a nice linear response in our electronics!
The only stupid question is the one not asked!

Brian Joy

  • Professor
  • ****
  • Posts: 296
Re: New method for calibration of dead times (and picoammeter)
« Reply #127 on: December 11, 2022, 12:14:16 AM »
This gives us a total variance of around 0.6% which is pretty darn good, but lo and behold, there are those darn "squiggles" again. 

The ”squiggles” are mostly the result of inappropriate application of your model, as τ may not be adjusted arbitrarily.  Its value may only be revealed by regression of appropriate data, but it may not be varied at will without violating the constraints imposed by those data; this ought to be pretty obvious.  If the "log" function is expanded as a power series and then truncated after its first-order term, it gives a linear expression that may be applied in the region of relatively low count rates (< 50 kcps).  The value of τ determined within that linear region must be consistent with that used in the converged series (the “log” equation).  In the squiggly plots, on one side or the other of a maximum or minimum, you are likely illustrating that the fractional correction to the count rate for at least one material used to construct the ratio decreases with increasing count rate.  Obviously, this is not physically realistic.
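(For reference, with x = N′τ, the power series of the log function is the standard expansion −ln(1 − x) = x + x²/2 + x³/3 + … , valid for |x| < 1; truncating after the first-order term keeps only the term linear in N′τ. The exact form of the published “log” expression itself is not repeated here.)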

I’m not going into any further detail because I’ve already done that in my discussion of “delta” plots, in which it is revealed that the log expression in conjunction with arbitrary variation of τ produces physically unrealistic behavior.

Once again, I warn anyone in the strongest possible terms not to use the “log” equation.  It departs in form from all other expressions used to correct for dead time and pulse pileup.  The forms of the expressions for extending and non-extending dead times have been known since the 1930s and are well supported experimentally; they are limiting expressions.  I tried to bring this to the forefront in my “generalized dead times” topic.  Has this discussion been forgotten? 
« Last Edit: December 11, 2022, 02:12:32 AM by Brian Joy »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #128 on: December 11, 2022, 10:08:02 AM »
This gives us a total variance of around 0.6% which is pretty darn good, but lo and behold, there are those darn "squiggles" again. 

The ”squiggles” are mostly the result of inappropriate application of your model, as τ may not be adjusted arbitrarily.  Its value may only be revealed by regression of appropriate data, but it may not be varied at will without violating the constraints imposed by those data; this ought to be pretty obvious.  If the "log" function is expanded as a power series and then truncated after its first-order term, it gives a linear expression that may be applied in the region of relatively low count rates (< 50 kcps).  The value of τ determined within that linear region must be consistent with that used in the converged series (the “log” equation).  In the squiggly plots, on one side or the other of a maximum or minimum, you are likely illustrating that the fractional correction to the count rate for at least one material used to construct the ratio decreases with increasing count rate.  Obviously, this is not physically realistic.

I’m not going into any further detail because I’ve already done that in my discussion of “delta” plots, in which it is revealed that the log expression in conjunction with arbitrary variation of τ produces physically unrealistic behavior.

Once again, I warn anyone in the strongest possible terms not to use the “log” equation.  It departs in form from all other expressions used to correct for dead time and pulse pileup.  The forms of the expressions for extending and non-extending dead times have been known since the 1930s and are well supported experimentally; they are limiting expressions.  I tried to bring this to the forefront in my “generalized dead times” topic.  Has this discussion been forgotten?

There you go again. So, "in the strongest possible terms", hey?  Yup, no one has forgotten that you are both stubborn and wrong.  Well, you are free to restrict your quantitative analyses to less than 50 kcps, if that is your choice.  But for those of us who enjoy scientific progress beyond the 1930s, we will continue to investigate our spectrometer response at these high count rates for trace element and high speed WDS quant mapping. 

By the way, similar "squiggles" are also seen in EDS at high count rates so these artifacts are not unique to software corrections for dead time:


 
In the meantime, as we have previously pointed out, dead time is not a constant.  If it were a constant, every detector would have the same value!   ;D  It is rather a "parametric" constant, which is defined as "a. A constant in an equation that varies in other equations of the same general form, especially such a constant in the equation of a curve or surface that can be varied to represent a family of curves or surfaces."

Therefore, depending on the form of the equation (and the detector electronics), we might obtain slightly different constants, e.g., 1.32 usec using the traditional expression or 1.28 usec using the logarithmic expression. These slight differences are of course not visible except at count rates exceeding 100 to 200 kcps and only with the constant k-ratio method with its amazing sensitivity.

We already know from Monte Carlo modeling that the traditional expression correctly handles single and multiple photon coincidence (I was mistaken on that point originally and SEM geologist was right). See attached Excel spreadsheet in this post:

https://probesoftware.com/smf/index.php?topic=1466.msg11431#msg11431

However, the traditional expression clearly fails at count rates above 50K cps, so we must infer that various non-linear behaviors (probably more than one) are introduced at these high count rates, ostensibly by the pulse processing electronics.  That is the subject of the discussion we are having and if you cannot respond to the topic on hand, please go away. 

What you call "physically unrealistic", we call empirical observation and the development of scientific models. We personally prefer to avoid large errors in our quantitative measurements while increasing our sensitivity and throughput, so we will continue to develop improved dead time models beyond what you learned in grad school:


 
And by the way, the only person who has been making "arbitrary adjustments" to the dead time constants has been you, in a blatant and pathetic attempt to discredit our efforts.  You were already called out on this, but apparently you need to be reminded again. Please stop misrepresenting our work! Instead, we are carefully adjusting the dead time constant to yield a *zero slope* in our constant k-ratio plots as a function of count rate, which I'm sure even you will agree is the analytical result we should observe in an ideal detector system.
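Here is a minimal sketch of that zero-slope scan with made-up numbers (I use the traditional expression below only because its form is standard; the logarithmic expression from our paper slots into exactly the same place):

Code:
import numpy as np

def correct(n_obs, tau):
    """Traditional (non-extending) dead time correction: N = N' / (1 - N'*tau)."""
    return n_obs / (1.0 - n_obs * tau)

# synthetic observed rates (hypothetical numbers, only to illustrate the scan)
tau_true = 1.3e-6                              # the "unknown" dead time to recover, s
true_std = np.linspace(2e4, 4e5, 10)           # true count rates on the primary standard, cps
true_unk = 0.55 * true_std                     # a constant true k-ratio of 0.55
observe = lambda n: n / (1.0 + n * tau_true)   # forward non-extending counting model
obs_std, obs_unk = observe(true_std), observe(true_unk)

# scan candidate dead times; the correct one yields zero slope of k-ratio vs count rate
for tau in np.linspace(0.9e-6, 1.7e-6, 9):
    k = correct(obs_unk, tau) / correct(obs_std, tau)
    slope = np.polyfit(correct(obs_std, tau), k, 1)[0]
    print(f"tau = {tau*1e6:4.2f} us   slope of k vs cps = {slope:+.3e}")
# the slope changes sign at tau = 1.30 us, the value that makes the k-ratios constant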

The point being that our detection systems are not perfect (and neither are our models), hence "squiggles".   :)
« Last Edit: December 12, 2022, 09:06:08 AM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 304
Re: New method for calibration of dead times (and picoammeter)
« Reply #129 on: December 12, 2022, 09:36:27 AM »
Once again, I warn anyone in the strongest possible terms not to use the “log” equation.  It departs in form from all other expressions used to correct for dead time and pulse pileup.  The forms of the expressions for extending and non-extending dead times have been known since the 1930s and are well supported experimentally; they are limiting expressions.  I tried to bring this to the forefront in my “generalized dead times” topic.  Has this discussion been forgotten?

From the 1930s? Seriously? You know that gas quenching (required for a proportional counter) was invented nearly a decade later https://link.springer.com/article/10.1007/BF01333374 and the proportional counter itself, as we know it today, appeared a whole decade after the '30s. Equations made for the G-M tube are not relevant to the proportional counter, because a proportional counter is not a G-M tube. It is a different kind of regime (gas, gas pressure, geometry, presence of quenching, wire voltage, sensing, electronics). Furthermore, the observed missing-count problem on the counter is due not to the detector but to the counting electronics - the problem of crossing the analog-digital domain - a problem originating decades later than people could have grasped or imagined in the 1930s, and that 1930s treatment is not directly used for EDS (which, btw, is extendable) or WDS, because it is not relevant. If dead times had already been sorted out in 1930, there would have been no ongoing attempts to improve on them later, and yet it is still an ongoing effort.

In my opinion the log model is not good enough, as it bends the count rate too little, but it bends it more than the simple linear equation does and is able to fit closer to the real input vs output count rates. And this is where I disagree with probeman:

We already know from Monte Carlo modeling that the traditional expression correctly handles single and multiple photon coincidence (I was mistaken on that point originally and SEM geologist was right). See attached Excel spreadsheet in this post:

https://probesoftware.com/smf/index.php?topic=1466.msg11431#msg11431

However, the traditional expression clearly fails at count rates above 50K cps, so we must infer that various non-linear behaviors (probably more than one) are introduced at these high count rates, ostensibly by the pulse processing electronics.  That is the subject of the discussion we are having and if you cannot respond to the topic on hand, please go away. 

What you call physically unrealistic, we call empirical observation and scientific models. We personally prefer to avoid large errors in our quantitative measurements, so we will continue to develop improved dead time models beyond what you learned in grad school:

But I was not right - I was wrong (about what?). I was wrong in thinking that this is dominated by pulse pile-up (that is, in the middle of the pipeline, by electronic pulse pile-ups). My most recent MC simulation shows that what probeman calls "photon coincidence" is indeed present already at low count rates and is not as exotic as it had looked to me. Still, the probability of a 200 ns pulse (Townsend avalanche) overlapping at least partly with another in a finite time domain is smaller than that of 3.5 µs pulses overlapping at least partly in the same time domain. As I understand it, Aurelien did some kind of MC simulation of his own. However, the attached xlsx mentioned does not show the simulation itself, only its results being compared, and I am very skeptical as to whether that simulation was done right. To make a correct simulation it is important to understand how the detection works and does not work. My initial oversimplified simulation, with its oversimplified timing resolution steps of 1 µs (thus working a bit more poorly, but faster), already showed that coincidences play the most important role in the non-linear response of the counting.

The other technical effects influence the final counts, but they are not dominant. The key concept to understand is that in counting we have a finite resource - time. We normally consider that we count the pulses (the representation of single X-ray events) and then get the count rate by dividing the counted pulses by the time during which the detector was not blind (dead). Let's do a thought experiment: say we have 1 second as the time domain in which pulses can appear. If we get 100 pseudo-random counts (which, in this particular example, do not overlap at all in the time domain - thus "pseudo") and the counting system is blind for 3 µs per pulse, then using the simple formula the "live time" is 1,000,000 µs - 100 × 3 µs = 999,700 µs, so the rate is 100 ct / 0.9997 s. Let's push this thought experiment further and this time "pseudo"-randomly place 200,000 pulses in that 1 s time domain. The live time in that case is 1,000,000 µs - 200,000 × 3 µs = 400,000 µs, so the rate is 200,000 counts / 0.4 s = 500 kcps. In this thought experiment it should already be obvious that, in the case where the pulses do not overlap at all in the time domain, this equation is absolutely, completely broken: we put 200 kilocounts into a 1 second domain and got 500 kcps out. But we have been using this formula for many years because it "kind of" works: its imperfection was hidden behind the numbers over a limited count-rate range, or the situation was nudged toward the expected results by real photon/pulse coincidences and by "calibrating" - actually scaling - the dead time constant (in effect bending the equation a bit to give the expected results, at least over a controlled range of count rates). The value of that "constant" needs to be calibrated so that the equation works - it actually has no relation to the real-world hardware dead time.
If we go even further with the thought experiment, inputting 333,333 non-overlapping counts leaves us with 1 µs of live time and gives a bizarre 333.333 Gcps!  :o  And trying to push in 333,334 non-overlapping pulses would overflow the equation and tear the universe apart :P.
In real life, however, the live time can't get anywhere close to 1 µs - due to the true randomness of the pulses, many of those 333,333 pulses would be piled up/overlapping in the time domain, and there would be much more than 1 µs of time-without-any-pulse, i.e. live time, left.
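The same arithmetic in a few lines, for anyone who wants to play with the numbers:

Code:
def naive_rate(counts, tau_us=3.0, total_us=1_000_000):
    """Naive 'counts / live time' with a fixed 3 us blind window charged per count."""
    live_us = total_us - counts * tau_us
    return counts / (live_us * 1e-6)           # counts per second

for n in (100, 200_000, 333_333):
    live = 1_000_000 - n * 3
    print(f"{n:>7,} counts -> live time {live:>9,} us -> {naive_rate(n):,.0f} cps")
#     100 counts -> live time   999,700 us ->             100 cps
# 200,000 counts -> live time   400,000 us ->         500,000 cps
# 333,333 counts -> live time         1 us -> 333,333,000,000 cps  (the absurdity above)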
Anyway, the illustration of that thought experiment:


Let's look at this problem from a slightly different perspective. Do we really need to measure pulses? We can barely do that, as pulse pile-ups blur the reality. However, what we can measure well is the time without any pulses! That is the live time! The live time at 0 kcps equals the real time. With increasing count rate the live time decreases proportionally, but not linearly. The live time decreases in a non-linear fashion thanks to the pulse pile-ups or photon coincidences. Think about it like this: say we have a 1 s time domain populated with random pulses occupying 0.2 s, so the free time left is 0.8 s. Say we want to add one more pulse at random: is it more probable for that pulse to overlap those 0.2 s, or to land in the free 0.8 s region? The answer is an 8:2 probability of landing in the still unoccupied timespace. Now let's reverse the situation: say we have hundreds of thousands of counts covering 0.8 s out of the 1 s, and the remaining free time domain (which we would call the live time) is only 0.2 s. What is the probability for another such random pulse to land in that still unoccupied timespace? 2:8 - much smaller! Every additional count which diminishes the pulse-free timespace (or eats the live time away) makes it less probable for the next pulse to land in that smaller free timespace (and not every pulse eats live time away!). Thus the real-life live time diminishes in a 1-log-like fashion and approaches 0 s only as the count rate approaches ∞. This is why the log equation is much closer to how the counting actually works.

Of course there are other technical considerations and reasons why some pulses are skipped from counting, but those are minor causes. So I can't understand how Aurelien's MC simulation could lead to those conclusions... I am disturbed, as I am convinced that coming up with the log equation in the first place was the correct step, and that it takes care of pulse/photon coincidences much better than the older, broken equation... and this step back, saying that the old equation (surprisingly) takes care of it - I can't understand it at all. I think your MC missed something very important.
« Last Edit: December 12, 2022, 12:13:39 PM by John Donovan »

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #130 on: December 12, 2022, 11:48:38 AM »
In my opinion the log model is not good enough, as it bends the count rate too little, but it bends it more than the simple linear equation does and is able to fit closer to the real input vs output count rates. And this is where I disagree with probeman:

We already know from Monte Carlo modeling that the traditional expression correctly handles single and multiple photon coincidence (I was mistaken on that point originally and SEM geologist was right). See attached Excel spreadsheet in this post:

https://probesoftware.com/smf/index.php?topic=1466.msg11431#msg11431

However, the traditional expression clearly fails at count rates above 50K cps, so we must infer that various non-linear behaviors (probably more than one) are introduced at these high count rates, ostensibly by the pulse processing electronics.  That is the subject of the discussion we are having and if you cannot respond to the topic on hand, please go away. 

What you call physically unrealistic, we call empirical observation and scientific models. We personally prefer to avoid large errors in our quantitative measurements, so we will continue to develop improved dead time models beyond what you learned in grad school:

But I was not right - I was wrong (about what?). I was wrong in thinking that this is dominated by pulse pile-up (that is, in the middle of the pipeline, by electronic pulse pile-ups). My most recent MC simulation shows that what probeman calls "photon coincidence" is indeed present already at low count rates and is not as exotic as it had looked to me.
...
As I understand it, Aurelien did some kind of MC simulation of his own. However, the attached xlsx mentioned does not show the simulation itself, only its results being compared, and I am very skeptical as to whether that simulation was done right. To make a correct simulation it is important to understand how the detection works and does not work. My initial oversimplified simulation, with its oversimplified timing resolution (thus working a bit more poorly), already showed that coincidences play the most important role in the non-linear response of the counting.

It's funny how we seem to have swapped our positions!   :)

Originally you felt that "photon coincidence" (which by the way is a term I did not coin!), was not an issue at moderate count rates and I thought it was. I had thought that the traditional dead time expression only dealt with single photon coincidence, and when I tried the Willis two term expression and it worked better, my co-authors thought the 2nd term in that expression might be dealing with double photon coincidences which is the next most common coincidence type. That led eventually to the log expression.

But then Aurelien's Monte Carlo modeling (based on Poisson statistics) showed us that the traditional expression did indeed properly handle both single and multiple photon coincidence.  Yes, the Excel spreadsheet only reveals the results of his calculations and yes, we were stunned by this result.   :o

So we accepted these new results and now attempt to explain the improved performance of the log expression through non-linear behaviors of the pulse processing system. But you claim your Monte Carlo results show a different result!  Isn't this fun?   :)

Thus the real-life live time diminishes in a 1-log-like fashion and approaches 0 s only as the count rate approaches ∞. This is why the log equation is much closer to how the counting actually works.

Of course there are other technical considerations and reasons why some pulses are skipped from counting, but those are minor causes. So I can't understand how Aurelien's MC simulation could lead to those conclusions... I am disturbed, as I am convinced that coming up with the log equation in the first place was the correct step, and that it takes care of pulse/photon coincidences much better than the older, broken equation... and this step back, saying that the old equation (surprisingly) takes care of it - I can't understand it at all. I think your MC missed something very important.

So perhaps we can perform a "code exchange" and try to resolve our differences?  If you are willing to zip up your code and attach it to a post, Aurelien has said he will do the same for his code.

How does that sound?
« Last Edit: December 12, 2022, 12:12:32 PM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 304
Re: New method for calibration of dead times (and picoammeter)
« Reply #131 on: December 13, 2022, 08:49:45 AM »
Sounds really good.

My old MC simulation has been sitting in this forum post all along, as the last attachment (it is a Jupyter Python notebook):
https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892

I am, however, cleaning it up a bit and will soon upload a better version of that simplified simulation.
It is very simplified, with a time step of 1 µs, no argon escape peaks, no amplitude variations, and no pulse shape - a whole pulse fits into 1 µs, and if two pulses overlap in the same time step it is treated as a full pile-up, while pulses landing in consecutive time steps are treated as not piled up. I initially found that to get the model to reproduce the observed count rate I needed to add +1 µs to the hardware-set dead time. This old code contains my old, wrong superstitions: I thought this was due to the 700 ns hold time from the sample-and-hold chip datasheet + 0.3 µs, the default additional dead time in Peaksight. I now have a different explanation for why it works: because there is no pulse shape in that model, the additional 1 µs allows it to skip pulses which would be missed by the hardware, as the sample-and-hold output signal needs to drop below 0 V to reset the trigger for pulse sensing (it basically senses only the rising edge of a pulse, and only if it rises from 0 V - not the most brilliant hardware design, to be honest). So if the sensing is armed and the counting electronics starts looking for a pulse - it "opens its eyes" - and at that moment a pulse is rising but is already partway up, that pulse will be lost, and so roughly a whole 1 µs can be shaved off. That is an oversimplification, but surprisingly it gives results matching what we observe (at least on our SXFiveFE).
 
The strong side of this simplified MC is its low memory footprint and high modeling speed. Some parts I am reusing in the next generation of the MC simulation (the new generation is much slower because it simulates pulse shapes with 40 ns resolution - thus instead of 1M points it needs 25M points to cover 1 s - which makes it possible to simulate the pulse sensing trigger, PHA shift and count losses, noise, etc.; it is memory hungry and terribly slow.)

It has two steps:
1) modeling a 1 s time frame as 1M segments of 1 µs with random pulses (a 1D array)
2) simulating the counting by moving through that array.

The signal modeling uses Python's numpy random number generator (an array full of random numbers). The generated array is then checked against a criterion (elements smaller than n, where n is the number of times the random generator should trigger), which yields single pulses at random array positions. A number of such arrays are summed efficiently element-wise, finally generating an array with 0 (no pulse), single pulses (1), double pile-ups (2), triples (3) and so on.

The counting is based on iterating through the generated array from its first to its last index (this is not so fast). When a 0 is encountered no pulse sensing is triggered; the array pointer is simply incremented and the next value is checked. When a non-zero value is found, it adds that number to the main counter, arms a small counter for the dead time timeout, and during the following array-pointer iterations puts any encountered pulses into a separate blanked-pulse counter.
It refrains from adding pulses to the main counter until that separate small hardware dead time counter times out. The counting simulation consolidates the data: how many pulses of which kind were counted, how many were missed, how many and what kind of pile-ups were encountered, and what raw count rates we would see on the machine. Then, changing the hardware dead time (1 µs, 2 µs, 3 µs, 4 µs...), it correctly predicts the plateau where the count rate nearly stops rising as the current is increased.
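For anyone who does not want to open the notebook, here is a condensed sketch of those two steps (an illustration written from the description above, not the attached code itself):

Code:
import numpy as np

def generate_pulse_train(total_counts=200_000, n_bins=1_000_000, n_passes=100, seed=0):
    """Step 1: one second as 1M bins of 1 us; each bin holds the number of photons that
    landed in it (0 = nothing, 2+ = an unresolvable same-bin pile-up).  Pulses are placed
    by repeated flat-random thresholding, summed element-wise."""
    rng = np.random.default_rng(seed)
    per_pass = total_counts // n_passes
    train = np.zeros(n_bins, dtype=np.int64)
    for _ in range(n_passes):
        train += (rng.integers(0, n_bins, n_bins) < per_pass).astype(np.int64)
    return train

def count_pulses(train, dead_bins=3):
    """Step 2: walk the bins; after each sensed pulse the counter is blind for
    dead_bins microseconds (a non-extending hardware dead time)."""
    sensed = missed = 0
    i = 0
    while i < train.size:
        if train[i] > 0:
            sensed += 1                                            # same-bin pile-up = one count
            missed += int(train[i]) - 1
            missed += int(train[i + 1:i + 1 + dead_bins].sum())    # pulses arriving while blind
            i += dead_bins + 1
        else:
            i += 1
    return sensed, missed

train = generate_pulse_train()
print("photons generated:", int(train.sum()), " sensed/missed:", count_pulses(train, dead_bins=3))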

Aurelien Moy

  • Administrator
  • Graduate
  • *****
  • Posts: 9
Re: New method for calibration of dead times (and picoammeter)
« Reply #132 on: December 13, 2022, 10:19:56 AM »
Lots of great posts here. I really enjoyed reading this topic.

I looked at Sem-geologist’s MC code some time ago. I am not an expert in Python so I may have misunderstood some of it. In the function “detector_raw_model” an array of 1 million values is returned, each value representing the number of photons hitting the detector in a 1 µs interval. However, these values seem to be generated using purely random numbers (a flat distribution), not following a Poisson distribution. Is that correct? I would have expected the number of photons hitting the detector to follow a Poisson distribution and to be a function of the aimed count rate and the timing interval.

Below I will try to present the Monte Carlo code I wrote, as best as I can. I would love any feedback on it or any correction if I made a mistake.

The algorithm I wrote is very basic and does not attempt to model a physical detector. It assumes the detector is either available to detect a photon or is dead. The detector can switch from one of these two states to the other without any time delay. Once the detector is dead, it stays dead for a time equal to the dead time. This dead time is non-extending, i.e., if a new photon hits the detector while it is dead, this does not extend the dead time. I assumed the emission of photons to follow a Poisson distribution. It is generally well accepted that photon emission can be described by such a distribution: the probability that k photons reach the detector in the time interval ∆t (in sec) is given by:

P(k) = (λ^k / k!) × e^(−λ)

where λ=N×∆t is the average number of photons reaching the detector in the time interval ∆t and N is the emitted (real) count rate in c/s.

The Monte Carlo algorithm works with four parameters: the number of steps simulated, a time interval (∆t) corresponding to the time length of each simulated step, in second, the count rate (N) of emitted photons reaching the detector, in count per second, and the detector dead time (τ), in second.  Here is the logigram (flowchart) of the code:



For each step of the simulation, a time interval ∆t is considered. Based on the count rate N and the time interval ∆t, the program simulates how many photons (k) are reaching the detector in this time interval using the Poisson distribution and random numbers. What I called photon coincidence is when more than 1 photon reached the detector in the time interval ∆t. Obviously, ∆t needs to be small enough compared to the detector dead time. In my simulations, I used ∆t=10 ns.

When at least one photon reaches the detector, the total number of detected photons is increased by one (only one photon at a time is detected) and the total number of emitted photons increases by k. As a result of the detection of a photon, the detector becomes dead for a period corresponding to the deadtime τ. The detector is then staying dead for a number of steps j=τ/∆t. During each of these j next steps, the program simulates how many new photons are reaching the detector. If any, these photons are not detected (because the detector is dead) but they are accounted for by the program in a variable tracking the total number of emitted photons. After j steps have passed, the detector is ready to detect a new photon. The process is repeated until the specified number of steps has been simulated.

Note that for the simulation to give realistic results, the time interval ∆t must be much smaller than the detector dead time τ (ideally 100 to 1000 times smaller) and τ must be an integer multiple of ∆t.
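For those who prefer code to a flowchart, here is a minimal Python sketch of the algorithm described above (a sketch only, not the VBA code in the attached spreadsheet):

Code:
import numpy as np

def simulate(true_rate=200_000, tau=2e-6, dt=10e-9, n_steps=5_000_000, seed=0):
    """Non-extending dead time Monte Carlo: each dt step draws a Poisson number of
    arriving photons; one count is recorded per trigger, and the detector is then
    dead for j = tau/dt steps (photons arriving meanwhile are emitted but lost)."""
    rng = np.random.default_rng(seed)
    arrivals = rng.poisson(true_rate * dt, n_steps)   # photons hitting the detector per step
    dead_steps = int(round(tau / dt))
    detected, i = 0, 0
    while i < n_steps:
        if arrivals[i] > 0:            # detector is live and at least one photon arrives
            detected += 1              # only one count is recorded
            i += dead_steps + 1        # stay dead for tau; those arrivals remain uncounted
        else:
            i += 1
    t = n_steps * dt                   # total simulated time, s
    return arrivals.sum() / t, detected / t            # emitted rate, observed rate

emitted, observed = simulate()
print(f"emitted ~{emitted:,.0f} cps   observed ~{observed:,.0f} cps")
# compare with the traditional prediction: 200000 / (1 + 200000 * 2e-6) = ~142,857 cps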

I coded this algorithm in VBA and included it in the attached Excel spreadsheet. For a given number of simulation steps, targeted count rate, detector dead time and simulation time interval (∆t), the first spreadsheet will calculate the number of emitted photons (photons hitting the detector), the number of detected photons as well as the corresponding count rates.

The second spreadsheet will do the same but for targeted count rates of 100, 1000, 10000, 25000, 50000, 100000, 200000 and 500000 cps. The results will also be compared to the traditional dead time correction formula and plotted together. It takes about 8 seconds on my computer to simulate 30,000,000 steps and so about 1 minute to simulate the 8 targeted count rates above (the spreadsheet may seem frozen while calculating).

I was very surprised to see that this Monte Carlo code, which deals with multiple photon coincidence, gives the same results as the traditional dead time correction.
« Last Edit: December 13, 2022, 12:48:11 PM by John Donovan »

sem-geologist

  • Professor
  • ****
  • Posts: 304
Re: New method for calibration of dead times (and picoammeter)
« Reply #133 on: December 13, 2022, 12:53:25 PM »
I looked at Sem-geologist’s MC code some time ago. I am not an expert in Python so I may have misunderstood some of it. In the function “detector_raw_model” an array of 1 million values is returned, each value representing the number of photons hitting the detector in a 1 µs interval. However, these values seem to be generated using purely random numbers (a flat distribution), not following a Poisson distribution. Is that correct? I would have expected the number of photons hitting the detector to follow a Poisson distribution and to be a function of the aimed count rate and the timing interval.

It seems we come from different backgrounds and so have somewhat different approaches, although I think we get to a very similar result, and we see this result a bit differently because we look at it with a different zoom level. Excel does not have the robust interactive plotting that Python has, which makes it easy to inspect small details and biases by plotting.
Poisson or not Poisson - to make it clear, I didn't care. If I were good enough at math to understand all those notations without my head overheating, I probably would not have attempted a Monte Carlo simulation in the first place. What I cared about was that the model behave similarly enough to what can be seen on the oscilloscope in the raw detector signal (after the shaping amplifier) - that is, any photon hit on the detector in the finite time space should be random and independent of the other photons which hit the detector, while the total number of photons should be controllable (with some randomization) without influencing the placement. So I guess the produced distribution of pulses does follow a Poisson distribution. My code probably looks like it uses a flat distribution because flat random distributions are indeed used for efficient vectorised computing (I know how to write highly vectorised Python numpy code); it builds the Poisson distribution from many runs of flat random distributions. The initial flat random distribution fills an array of 1M length with random values ranging from 0 to 1M, so if I want roughly 100 counts I generate another array from it with a vectorised function checking for all elements which are less than 100 - thus in the end I get about 100 (+/- some random number) events randomly placed at different indexes of the 1M array. A single such pass can't produce overlaps, so if I want a final distribution of 1000 counts I can, e.g., run the check for <100 ten times and sum the arrays. Interestingly, subdividing the distribution is beneficial only to some extent (it is what produces the nth-order pulse pile-ups). Instead of going step by step through every finite time step and rolling the dice at each one (which, by the way, older hardware is terrible at, as it can produce only pseudo-random numbers), I overuse the random number generator in an efficient way and use the randomness of those flat distributions, reshaped into a Poisson distribution. So the code can look a bit convoluted, but it is very efficient at generating millions of random events at random timestamps.
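In code, the placement trick is essentially just this (an illustration of the idea, not a quote from the notebook):

Code:
import numpy as np

rng = np.random.default_rng()
# one pass: ~100 events dropped at random positions among 1_000_000 one-microsecond bins
hits = rng.integers(0, 1_000_000, 1_000_000) < 100
# summing ten such passes element-wise gives ~1000 events and lets bins pile up (2, 3, ...)
train = sum((rng.integers(0, 1_000_000, 1_000_000) < 100).astype(int) for _ in range(10))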

The important thing in my MC simulation is that the pulse generation and pulse sensing models are completely separate. I could generate a pulse train and visualize it (with a huge time interval where the pulse shape does not matter) and compare it with what I had seen on the oscilloscope (and I saw double, triple, quadruple and even quintuple pile-ups). That is also why I chose a 1 µs step for that simplified MC: it is the effective length of the pulses, and a simulation with that step could reproduce the appearance and increase of every mentioned kind of pile-up that I could observe on the oscilloscope.

It is not so hard to modify my code to go with 10 ns resolution. I will include that in the renewed version of the MC.
Finally, push the simulation to higher count rates (at least to 1 Mcps) - it will then show much better how the conventional correction equation and model actually derail.
« Last Edit: December 13, 2022, 01:13:42 PM by sem-geologist »

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #134 on: December 13, 2022, 03:58:47 PM »
I looked at Sem-geologist’s MC code some time ago. I am not an expert in Python so I may have misunderstood some of it. In the function “detector_raw_model” an array of 1 million values is returned, each value representing the number of photons hitting the detector in a 1 µs interval. However, these values seem to be generated using purely random numbers (a flat distribution), not following a Poisson distribution. Is that correct? I would have expected the number of photons hitting the detector to follow a Poisson distribution and to be a function of the aimed count rate and the timing interval.

It seems we come from different backgrounds and so have somewhat different approaches, although I think we get to a very similar result...

I am not sure about a "similar result".

As I understand it, Aurelien did some kind of MC simulation of his own. However, the attached xlsx mentioned does not show the simulation itself, only its results being compared, and I am very skeptical as to whether that simulation was done right.
...
Of course there are other technical considerations and reasons why some pulses are skipped from counting, but those are minor causes. So I can't understand how Aurelien's MC simulation could lead to those conclusions... I am disturbed, as I am convinced that coming up with the log equation in the first place was the correct step, and that it takes care of pulse/photon coincidences much better than the older, broken equation... and this step back, saying that the old equation (surprisingly) takes care of it - I can't understand it at all. I think your MC missed something very important.

Aurelien's Monte Carlo code found (much to our surprise) that the traditional expression *does* account for multiple photon coincidence, while it is my understanding that you claimed your code found that the traditional expression only accounts for single photon coincidence. Are we misunderstanding you? If not, which conclusion is correct? 

Traditional (2 usec)            Monte Carlo (2 usec)
Predicted       Observed        Predicted       Observed
10              10              10              10
100             100             97              97
1000            998             1008            1008
10000           9804            9978            9782
100000          83333           99965           83347
200000          142857          199897          142787
400000          222222          400318          221948


We agree that this "step back" is very surprising.  But we also agree that the log expression is a step forward because it performs better with empirical data, though we think that is because of various non-linear behaviors of the pulse processing electronics at moderate to high count rates.

To eliminate issues of graphical display, can you provide numerical values from your Monte Carlo calculations at these predicted and observed count rates?
« Last Edit: December 13, 2022, 04:13:25 PM by Probeman »
The only stupid question is the one not asked!