Probe Software Users Forum

General EPMA => Discussion of General EPMA Issues => Topic started by: Probeman on May 26, 2022, 09:50:09 AM

Title: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 26, 2022, 09:50:09 AM
John Fournelle and I were chatting a while back about dead time and how to calibrate our detectors and electronics, and we realized that this also depends on the linearity of our picoammeter.
 
Normally when performing a dead time calibration we use a single material such as Ti metal (for LiF and PET) or Si metal (for PET and TAP), because these materials will yield a high count rate and also are conductive, so hopefully less chance of sample damage and/or charging.

We then repeatedly increment the beam current and measure the count rate as a function of beam current. The idea being that without dead time effects our count rate vs. beam current should be exactly proportional, that is a doubling of beam current should produce a doubling of count rate.

But because of the dead time characteristics of all detection systems (the interval during which the detector is busy processing a photon pulse), the system will be unavailable for photon detection sometimes, and that unavailability is simply a probability based on the length of the system (pulse processing) dead time and the count rate. 
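
To make that probability argument concrete, here is a minimal sketch in Python (an illustration only, assuming an ideal non-extendable dead time and made-up count rates, not Probe for EPMA code) showing how the observed count rate falls further behind the true count rate as the beam current is raised:

Code: [Select]
# Sketch: observed vs. true count rate for an ideal non-extendable (non-paralyzable) dead time.
# The true count rate is assumed proportional to beam current.
tau = 3.0e-6          # dead time in seconds (~3 usec, a typical Cameca value)
cps_per_na = 700.0    # hypothetical true count rate per nA

for current_na in (10, 50, 100, 200):
    true_cps = cps_per_na * current_na
    observed_cps = true_cps / (1.0 + true_cps * tau)   # non-extendable dead time model
    loss_pct = 100.0 * (1.0 - observed_cps / true_cps)
    print(f"{current_na:>4} nA: true {true_cps:>7.0f} cps, observed {observed_cps:>7.0f} cps"
          f" ({loss_pct:.1f}% lost to dead time)")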

Note that EDS systems automatically "extend" the live time while processing photons, so the dead time correction is part of the EDS hardware, while WDS systems must have the dead time correction applied in software after the measurements have been completed.

So this simple trend of count rate vs. beam current is utilized to calibrate our WDS spectrometers. However John Fournelle and I realized that if the picoammeter response is not accurate, the resulting dead time calibration will also not be accurate.

Even more to the point, what exactly is it we are doing with our microprobe instruments? We are simply measuring k-ratios. That is all we do. Everything else we do after that is physics modeling.  The electron microprobe is a k-ratio machine, so perhaps that should be our focus. And that is exactly the point of the "consensus k-ratio" project as originally suggested by Nicholas Ritchie:

https://probesoftware.com/smf/index.php?topic=1239.0

If we cannot accurately compare our k-ratio measurements to the k-ratio measurements from another lab, we do not have a science.  See the consensus k-ratio project topic:

https://probesoftware.com/smf/index.php?topic=1442.0

That is to say, using the same *two* materials (in order to obtain a k-ratio), and at a given detector takeoff angle and electron beam energy, we should obtain the same k-ratio, not only on all of our spectrometers, but also on all instruments. See topic on simultaneous k-ratios:
 
https://probesoftware.com/smf/index.php?topic=369.msg1948#msg1948

Now, if we are in agreement so far, let's ask another question: at a given takeoff angle and electron beam energy, and two materials containing the same element (and no beam damage/sample charging!), should the instrument (ideally) produce the same k-ratio at all beam currents?

John Fournelle and I believe the answer to this question is "yes".  Now if the two materials have significantly different concentrations of an element, the count rates on these two materials will be significantly different, and therefore the dead time calibration (and picoammeter!) accuracy are critical in order to obtain accurate (the same) k-ratios at different beam currents.
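
Here is a minimal sketch in Python (hypothetical count rates, not our actual data) of why a mis-calibrated dead time constant shows up as a k-ratio that drifts with beam current when the two materials have very different count rates:

Code: [Select]
# Sketch: effect of a dead time error on a k-ratio measured at several beam currents.
std_cps_per_na = 1000.0   # hypothetical true count rate per nA on a pure element standard
unk_cps_per_na = 350.0    # hypothetical true count rate per nA on the secondary standard

tau_true = 3.0e-6         # the detector's actual dead time (s)
tau_used = 2.5e-6         # a slightly-too-small value used in the software correction

def observe(true_cps, tau):
    # observed rate for an ideal non-extendable dead time
    return true_cps / (1.0 + true_cps * tau)

def correct(obs_cps, tau):
    # the "normal" single-term correction discussed in this thread
    return obs_cps / (1.0 - obs_cps * tau)

for current_na in (10, 50, 100, 200):
    std = correct(observe(std_cps_per_na * current_na, tau_true), tau_used)
    unk = correct(observe(unk_cps_per_na * current_na, tau_true), tau_used)
    print(f"{current_na:>4} nA: k-ratio = {unk / std:.4f}")

# With tau_used equal to tau_true the k-ratio stays at exactly 0.3500 at every current;
# with tau_used too small it creeps upward with increasing beam current.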

So first we looked at the k-ratio measurements from the MgO, Al2O3, MgAl2O4 round robin organized by Will Nachlas, where I measured only a few different beam currents, starting at 30 nA and then at lower beam currents to reduce the effects of any mis-calibration of the dead time constants.  Note: these are just the first thing we looked at; these measurements are by no means enough data, and we need many more measurements at higher beam currents!

Here are two results, the first using the original dead time calibration from 2015:

(https://probesoftware.com/smf/gallery/395_26_05_22_9_35_55.png)

where one can see that the higher beam current measurements yield a larger k-ratio. This (positive slope) "trend" should mean that the dead time constant is too small. And now using the new dead time calibration from this year on the same data:

(https://probesoftware.com/smf/gallery/395_26_05_22_9_36_27.png)

So the slope has decreased as expected, but the dead time constant may still need to be increased. How can this be? Well maybe the picoammeter is not accurate...  we need better (and more) measurements!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 26, 2022, 10:06:54 AM
The next weekend I went into the lab and ran some different materials that should be more suitable (more electrically and thermally conductive). I chose Zn, Te, Se, ZnTe and ZnSe in order to measure k-ratios for Zn Ka, Se La and Te La on LiF, TAP and PET, with emission energies of 8.63, 1.38 and 3.77 keV respectively.

These are still not enough measurements because they were run manually (more on that later), but here are some Zn Ka measurements using our latest dead time calibration from the traditional method:

(https://probesoftware.com/smf/gallery/395_26_05_22_9_55_04.png)

By the way, the above plot is from using the Output | Output Standard and Unknown XY Plots menu in Probe for EPMA, and selecting "On Beam Current" for the X axis and one of the element "Raw K-ratios" for the Y axis. 

Then we *manually* adjusted the dead time using the Update Dead Time Constants dialog in Probe for EPMA (from the Analytical menu):

https://probesoftware.com/smf/index.php?topic=1442.0

in order to obtain a more constant k-ratio as a function of beam current as seen here:

(https://probesoftware.com/smf/gallery/395_26_05_22_9_55_30.png)

Now that seems to be an improvement, but it is still not perfect. We need many more measurements and I hope to get to that this weekend.  In the meantime, please make your own measurements and post what you find from your instruments.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on May 27, 2022, 10:12:55 AM
In order to acquire these "constant k-ratio" datasets at different beam currents, and to ensure that the (primary) standard utilized for each beam current acquisition is at the same beam current as the secondary standard, one must acquire the datasets one beam current at a time. That is, all primary and secondary standards (or unknowns) must be acquired together at each beam current.

Until just now this had to be done semi-manually in Probe for EPMA. It might seem reasonable that one could utilize the "multiple setups" feature in the Automate! window, but unfortunately this feature was originally designed for the acquisition of thin film calibrations, where each standard and unknown is acquired at multiple beam energies, e.g., 10 keV, 15 keV, 20 keV.

Therefore the program would acquire each sample for *all* the (multiple) sample setups assigned to it. In other words the acquired samples might look like this acquisition, from a thin film run:

(https://probesoftware.com/smf/gallery/1_27_05_22_10_05_23.png)

But for the constant k-ratio method we need the samples acquired one (beam current) condition at a time, for *all* samples, as shown here from the Zn, Te, Se semi-manually acquired data shown in the previous posts:

(https://probesoftware.com/smf/gallery/1_27_05_22_10_08_55.png)

The reason of course is that samples acquired at different accelerating voltages do not get utilized together for quantification, because the k-ratios will be different. But that is not true for samples acquired at different beam currents!  These k-ratios should be the same.  But since that is exactly what we are trying to test, it is best to have each set of beam current measurements grouped together in time.

However, we recently thought of a way to modify the automation code to handle this constant k-ratio vs. beam current acquisition so that it can be fully automated. We added a new checkbox to the multiple sample setups dialog, as shown here (accessed as usual from the Automate! window):

(https://probesoftware.com/smf/gallery/1_27_05_22_9_52_05.png)

 8)

The only caveat is that all the selected samples should have the same number of (multiple) sample setups assigned, which is of course exactly what we want.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 31, 2022, 09:44:19 AM
OK, so Probe Software was able to implement a method to automate the acquisition of the "constant k-ratio" test, that is, acquiring k-ratios at multiple beam currents as seen here:

https://probesoftware.com/smf/index.php?topic=40.msg10899#msg10899

To remind everyone, the traditional dead time calibration method relies on comparing count rates on a pure material (usually a pure metal such as Ti for LiF and PET, or Si metal for PET and TAP) as a function of beam current.  This new "constant k-ratio" method, on the other hand, attempts to calibrate both the dead time *and* any picoammeter non-linearity by measuring k-ratios of a primary standard and a secondary standard as a function of beam current.

The idea being that the k-ratio should remain constant as a function of beam current (at a given beam energy and takeoff angle). And while recognizing that this method is not a replacement for having a well calibrated picoammeter, it can reveal problems in one's picoammeter calibration.

I was able to acquire a pretty dense set of k-ratios for Zn Ka, Te La and Se La using pure metal primary standards and ZnTe and ZnSe secondary standards, at the following beam currents:  6, 8, 10, 15, 20, 40, 60, 80, 100, 120, 140, 160, 180 and 200 nA. This was 60 sec on-peak, 10 sec off-peak and 6 points per standard, so it took about 13 hours.

So let's start with an example of Zn Ka on LLIF, which had last been dead time calibrated (using the traditional dead time calibration method on Ti metal) at 3.5 usec.  Here is what we see using Zn as the primary standard and ZnTe as the secondary standard:

(https://probesoftware.com/smf/gallery/395_31_05_22_9_34_01.png)

So we see three things: first, there is a very large variance in the k-ratio! Second, there is an odd anomaly at 40 nA. And third, the dead time constant is too small, as the slope of the k-ratios is generally positive. 

Note the new "string selection" control in the Output | Output Standard Unknown XY Plots menu window in Probe for EPMA. Now let's use the Update Dead Time Constants dialog in Probe for EPMA as described here:

https://probesoftware.com/smf/index.php?topic=1442.msg10641#msg10641

and change the dead time constant in an attempt to obtain a more constant k-ratio, trying 3.8 usec first:

(https://probesoftware.com/smf/gallery/395_31_05_22_9_38_58.png)

So that is a bit improved as one can see from the y axis k-ratio range. But there is still a large range of k-ratio as a function of beam current, and I suspect it is related to the picoammeter (mis-calibration). Remember, on a Cameca instrument, the beam current ranges are 0 to 5 nA, 5 to 50 nA and 50 nA to 500 nA (I think, but please correct me if that is wrong!).

I will provide another example soon, but please let me know what you think and/or if you have any "constant k-ratio" data to share on your JEOL or Cameca instrument.

By the way, what are the beam current ranges for the JEOL picoammeters?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 31, 2022, 11:40:15 AM
Then again, maybe not!

So considering this is a large area crystal, we might expect that such high count rates would require the use of the high precision dead time correction as seen here:

(https://probesoftware.com/smf/gallery/395_31_05_22_11_09_36.png)

and documented here:

(https://probesoftware.com/smf/gallery/1_21_04_21_9_56_07.png)

So using this high precision dead time expression with the original dead time constant of 3.5 usec we get a much different plot:

(https://probesoftware.com/smf/gallery/395_31_05_22_11_13_15.png)

So now the dead time constant is too large!  What would it take to get a more constant k-ratio as a function of beam current? How about 2.9 usec?

(https://probesoftware.com/smf/gallery/395_31_05_22_11_33_30.png)

OK, so that is better, though there is still an anomaly at 40 nA, and the high precision equation starts to break down at beam currents over 100 nA, but it's pretty constant (except for 40 nA) up to around 100 nA. 

So several conclusions.

1. I still think my picoammeter needs adjustment with a high precision current source (we're working on that), particularly given the issue at 40 nA.

2. I think we might try a "super" high precision dead time correction with a 3rd factorial term.   :o

Finally, given these results I agree with Owen Neill who said recently that we all should be using the high precision dead time equation option in Probe for EPMA for best accuracy.

More to come, but in the meantime here's Te La on a PET crystal (about half the X-ray count rate of Zn Ka on LLiF), which is actually quite good except for the "glitch" at 40 nA:

(https://probesoftware.com/smf/gallery/395_31_05_22_11_37_37.png)

Because the count rate was lower than on the LLiF, we don't see the need for a "super" high precision dead time correction, but I think we will see if Donovan will implement that for us...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on May 31, 2022, 02:11:46 PM
So that is a bit improved as one can see from the y axis k-ratio range. But there is still a large range of k-ratio as a function of beam current, and I suspect it is related to the picoammeter (mis-calibration). Remember, on a Cameca instrument, the beam current ranges are 0 to 5 nA, 5 to 50 nA and 50 nA to 500 nA (I think, but please correct me if that is wrong!).

Yes.
Cameca has 5 ranges:
up to 0.5 nA, 0.5-5 nA, 5-50 nA, 50-500 nA, and 500 nA-10 µA (yes, you read that right, the last range goes up to 10 microamperes, and it is possible to get beam currents of a few µA on the SX100 and SXFiveFE).
Now I see your 5 to 50 nA range is probably misaligned. Probably, as the data covers only one and a half of the five picoammeter ranges.
In my humble opinion this whole endeavour is the wrong way to find, identify and fix the problems where they originate, and it completely mingles two unrelated issues, or shuffles the weight of one onto the other and back. Which of your current measurement ranges is correct: 5-50 or 50-500 nA? Because I see that in the end you settled on 2.9 µs, which somehow "flattens" the k-ratios at 50-500 nA, but I see clearly that 2.9 µs is wrong for the 5-50 nA range. The measurements at 40 nA are probably not an anomaly at all; rather, the 50-500 nA range is wrong. It would be interesting to see the intensity changes at 480 nA, 498 nA, 502 nA and 520 nA; if there were a step between 498 and 502 nA, it would tell us that the 50-500 nA range is wrong (of course, only if the 500 nA-10 µA range is closer to the correct measurement). Also, the 5-50 nA range is a bit tricky, as some beam-crossover funkiness happens in that range. Is your beam well aligned? Try using a different I-emission; that moves the crossover point to different C1 and C2 settings (and also a different nA value) and could move the possible current anomaly to a different spot, which could help identify whether part of the beam is missing the Faraday cup.

As all ranges are effortlessly available on the SXFiveFE, I had done such tests to make sure that the beam current measurement is continuous across the ranges, and it was a perfect curved line of beam current vs. count rate with no discontinuities or visible steps at 500, 50, 5 or 0.5 nA (the critical part is to include measurements from both sides close to each boundary, i.e. 505 and 495 nA, or 0.55 and 0.45 nA, and so on). I have not seen such discontinuities on the SXFiveFE (the column is different from the tungsten/LaB6 one); I am going to check the SX100 as soon as possible.
And that is the correct procedure to check the picoammeter continuity without getting into dead time, which is a counting issue; k-ratios just sum all the issues into a single lump, hiding their precise origin.
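
For illustration, here is a minimal sketch in Python of such a range-crossing check (the currents and count rates are made up; only the range boundaries listed above are real): compare the dead-time-corrected cps/nA just below and just above each picoammeter range boundary and flag any step.

Code: [Select]
# Sketch: check cps/nA continuity across the Cameca picoammeter range boundaries.
boundaries_na = (0.5, 5.0, 50.0, 500.0)

# (beam current in nA, dead-time-corrected cps) just below and above each boundary -- hypothetical
measurements = [
    (0.45, 315.0),     (0.55, 385.0),
    (4.5, 3150.0),     (5.5, 3850.0),
    (45.0, 31500.0),   (55.0, 38400.0),
    (495.0, 346500.0), (505.0, 353500.0),
]

for boundary, (i_lo, cps_lo), (i_hi, cps_hi) in zip(boundaries_na, measurements[0::2], measurements[1::2]):
    eff_lo, eff_hi = cps_lo / i_lo, cps_hi / i_hi
    step_pct = 100.0 * (eff_hi - eff_lo) / eff_lo
    print(f"{boundary:>6} nA boundary: cps/nA {eff_lo:.1f} -> {eff_hi:.1f} ({step_pct:+.2f}% step)")

# A step at one boundary (here the simulated -0.26% at 50 nA) points at that range's gain,
# with no dead time model involved at all.
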
For checking picoammeter linearity I would skip the WDS and its gas counting electronics altogether. Or at least I would choose very weak lines and moderate concentrations, i.e. 2nd order lines, so that dead time non-linearity would not affect the measurement up to those 200 nA. As for the measurement itself, a fixed 60 s at 200 nA and the same at 5 nA is unfair; it would be better to count to some fixed total number of counts, so that 5 nA would be counted much longer and 200 nA much shorter (or normally, as for weak 2nd order lines).
But even better, if your probe is equipped with an SDD EDS detector, why not use its total counts for the current vs. X-rays plot? EDS has very sophisticated electronic hardware for dealing with pulse pileups (none on WDS), and with the highest throughput (or shortest shaping time) setting and (if fitted) a medium or small aperture, that should give much better insight into the picoammeter and its linearity, with no problems up to 200 nA.

Only after identifying the picoammeter (beam/Faraday cup) issues and artefacts, and applying workarounds or fixes, is it sensible to move on to dead time estimation and calibration.

BTW, you came to a value of 2.9 µs. What dead time is set in your PeakSight? 3 µs?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 31, 2022, 03:51:36 PM
I agree that this measurement mingles both the dead time calibration and the picoammeter calibrations. This point was clearly stated in the opening post.

However, to my mind the value of this method is that it gives one a quantitative understanding of the total mis-calibration of the instrument.  These are instruments that merely generate k-ratios after all!

If all is good, then one is good. If not, then how good or how bad?  This can be ascertained by looking at the Y axis in k-ratio units, which for major elements is close to the concentration (assuming the primary standard is a pure element; and if not, it is a simple calculation).
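
For what it's worth, the "simple calculation" is just that the concentration is approximately the raw k-ratio times the matrix (ZAF) correction factor times the standard concentration, so the k-ratio axis really can be read almost directly in concentration units. A tiny sketch in Python, with illustrative numbers of roughly the Zn-in-ZnTe magnitude:

Code: [Select]
# Sketch: reading the k-ratio axis as an approximate concentration.
# conc (wt%) ~= raw k-ratio * ZAF correction factor * standard concentration (wt%)
k_raw = 0.363   # raw k-ratio of the unknown relative to a pure element standard (illustrative)
zaf   = 0.932   # matrix (ZAF) correction factor for this element in this matrix (illustrative)
c_std = 100.0   # pure element standard, so 100 wt%

print(f"approximate concentration: {k_raw * zaf * c_std:.1f} wt%")   # about 33.8 wt%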

I find this helpful, at least.  However, this method does reiterate the need for a better dead time correction in software *and* an honest-to-god picoammeter calibration, which we are working on.

That said, it was pleasing to see the accuracy of the Te La line up to even 200 nA.  And as promised here is a closer look at the picoammeter (mis)calibration on Te La up to 100 nA (run last night).

(https://probesoftware.com/smf/gallery/395_31_05_22_3_50_08.png)

Not terrible at least, actually a sub percent level of variance.  But we are proceeding with obtaining a high accuracy current source nonetheless...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 01, 2022, 01:26:58 PM
So you all will remember Probeman showed this plot above using the high precision dead time equation for the Zn Ka line on LLiF with a dead time of 2.9 usec:

(https://probesoftware.com/smf/gallery/395_31_05_22_11_33_30.png)

Well, just for fun we've implemented a three term factorial dead time expression which we call the "super high precision" dead time expression.  It only really affects count rates above 100K cps.  But in the above Zn Ka plot the zinc standard is producing 140K cps at 200 nA on a LLIF crystal!

Even setting the 40 nA k-ratios issue aside, we still have some picoammeter calibration issues, but the high current k-ratio values are a bit more consistent:

(https://probesoftware.com/smf/gallery/1_01_06_22_1_24_55.png)

What's amazing is how sensitive the dead time constant is when one is at such high count rates!  Just a difference of 0.01 usec makes a visible difference.
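
To put a number on that sensitivity, here is a small sketch in Python using the two-term "high precision" expression from this thread, at roughly the 140K cps (observed) that the Zn standard produces at 200 nA:

Code: [Select]
# Sketch: sensitivity of the corrected count rate to a 0.01 usec change in the dead time constant,
# using the two-term ("high precision") expression discussed in this thread.
def corrected(obs_cps, tau_s):
    denom = 1.0 - (obs_cps * tau_s + (obs_cps * tau_s) ** 2 / 2.0)
    return obs_cps / denom

obs = 140000.0   # observed cps, roughly the Zn Ka rate at 200 nA on the LLIF crystal
for tau_us in (2.90, 2.91):
    print(f"tau = {tau_us:.2f} usec -> corrected rate = {corrected(obs, tau_us * 1e-6):,.0f} cps")

# The two corrected rates differ by a few tenths of a percent, which is easily visible
# on a constant k-ratio plot when the standard and unknown count rates differ a lot.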
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 01, 2022, 05:49:42 PM
Once one's dead time constants are properly adjusted, it's a bit amazing how accurate things can get.

Here is an analysis of ZnTe using Zn, Se and Te pure metal standards at 6 nA:

St  658 Set   1 ZnTe (synthetic), Results in Elemental Weight Percents
 
ELEM:       Zn      Se      Te      Te      Zn
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL
BGDS:      EXP     EXP     LIN     LIN     LIN
TIME:    60.00   60.00   60.00     .00     .00
BEAM:     6.81    6.81    6.81     .00     .00
AGGR:        2               2               

ELEM:       Zn      Se      Te      Te      Zn   SUM 
XRAY:     (ka)    (la)    (la)    (la)    (ka)
    19  33.714   -.047  66.562    .000    .000 100.229
    20  33.838   -.110  66.537    .000    .000 100.265
    21  33.835   -.084  66.782    .000    .000 100.533
    22  33.781   -.050  66.650    .000    .000 100.381
    23  33.821   -.063  66.776    .000    .000 100.533
    24  33.860   -.030  67.059    .000    .000 100.890

AVER:   33.808   -.064  66.728    .000    .000 100.472
SDEV:     .053    .029    .192    .000    .000    .242
SERR:     .022    .012    .078    .000    .000
%RSD:      .16  -45.42     .29   .0000   .0000

PUBL:   33.880    n.a.  66.120    n.a.    n.a. 100.000
%VAR:     -.21     ---     .92     .00     .00
DIFF:    -.072     ---    .608     ---     ---
STDS:      530     534     552       0       0

STKF:   1.0000  1.0000  1.0000   .0000   .0000
STCT:  1841.76 2019.25  749.16     .00     .00

UNKF:    .3628  -.0002   .6340   .0000   .0000
UNCT:   668.12    -.44  474.98     .00     .00
UNBG:    13.08    3.90    4.89     .00     .00

ZCOR:    .9320  2.9673  1.0524   .0000   .0000
KRAW:    .3628  -.0002   .6340   .0000   .0000
PKBG:    52.09     .89   98.07     .00     .00
INT%:     ---- -117.13    ----    ----    ----

And here at 200 nA:

St  658 Set  14 ZnTe (synthetic), Results in Elemental Weight Percents
 
ELEM:       Zn      Se      Te      Te      Zn
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL
BGDS:      EXP     EXP     LIN     LIN     LIN
TIME:    60.00   60.00   60.00     .00     .00
BEAM:   200.61  200.61  200.61     .00     .00
AGGR:        2               2               

ELEM:       Zn      Se      Te      Te      Zn   SUM 
XRAY:     (ka)    (la)    (la)    (la)    (ka)
   409  33.831   -.075  66.756    .000    .000 100.513
   410  33.847   -.065  66.751    .000    .000 100.533
   411  33.858   -.073  66.759    .000    .000 100.544
   412  33.861   -.063  66.778    .000    .000 100.575
   413  33.877   -.060  66.864    .000    .000 100.681
   414  33.890   -.066  66.870    .000    .000 100.694

AVER:   33.861   -.067  66.796    .000    .000 100.590
SDEV:     .021    .006    .055    .000    .000    .078
SERR:     .009    .002    .023    .000    .000
%RSD:      .06   -8.47     .08   .0000   .0000

PUBL:   33.880    n.a.  66.120    n.a.    n.a. 100.000
%VAR:     -.06     ---    1.02     .00     .00
DIFF:    -.019     ---    .676     ---     ---
STDS:      530     534     552       0       0

STKF:   1.0000  1.0000  1.0000   .0000   .0000
STCT:  1809.87 1841.21  749.56     .00     .00

UNKF:    .3633  -.0002   .6347   .0000   .0000
UNCT:   657.56    -.42  475.71     .00     .00
UNBG:    13.24    3.94    4.84     .00     .00

ZCOR:    .9320  2.9675  1.0524   .0000   .0000
KRAW:    .3633  -.0002   .6347   .0000   .0000
PKBG:    50.68     .89   99.20     .00     .00
INT%:     ---- -115.83    ----    ----    ----
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on June 02, 2022, 07:17:39 AM
probeman,
I want to ask again: what is the pulse blanking (integer) value set for the SX100 spectrometers (the integer dtime which is sent to the Cameca hardware when the spectrometer is set up prior to counting) for which you found the dead time to be 2.9 µs?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 02, 2022, 08:37:29 AM
probeman,
I want to ask again: what is the pulse blanking (integer) value set for the SX100 spectrometers (the integer dtime which is sent to the Cameca hardware when the spectrometer is set up prior to counting) for which you found the dead time to be 2.9 µs?

Sorry, I saw your question and meant to reply, but wanted to get all 5 spectrometers calibrated. These are using the following emission lines:

SPEC:      1        2        3        4        5
XTAL:     PET      LTAP     LLIF     PET      LiF
LINE:    Te La    Se La    Zn Ka    Te La    Zn Ka

The "enforced" (integer) dead time for all the spectrometers is 3 usec. For my spectrometers I'm getting calibrated dead times of 2.85, 2.80, 2.80, 3.00 and 3.00 usec, respectively. The 3rd digit actually matters at high beam currents!   :o

But (to everyone), what I'm finding really interesting in all this is that based on these k-ratio versus beam current plots, the software dead time correction needs to be expanded to include more factorial terms for accuracy at high beam currents.

So, the "normal" dead time expression is:

Code: [Select]
' Normal deadtime correction
If DeadTimeCorrectionType% = 1 Then
    temp# = 1# - cps! * dtime!
    If temp# <> 0# Then cps! = cps! / temp#
End If

This is what I've had as the default since forever.  In fact, as seen below, this expression starts failing even at 20 to 30 nA on large area Bragg crystals!  So seeing as we are routinely getting close to 50K cps on many modern spectrometers, we really should, as Owen Neill has mentioned, be using (at least) the high precision form of the equation, which is here:

Code: [Select]
' Precision deadtime correction
If DeadTimeCorrectionType% = 2 Then
    temp# = 1# - (cps! * dtime! + cps! ^ 2 * (dtime! ^ 2) / 2#)
    If temp# <> 0# Then cps! = cps! / temp#
End If

This "high precision" expression doesn't start failing until around 100 nA on large area Bragg crystals. So, what is clear to me now is, if we want to have excellent accuracy at even higher beam currents, we really need to utilize a more extended version of the dead time equation, which I have attempted to implement here:

Code: [Select]
' Super precision deadtime correction
If DeadTimeCorrectionType% = 3 Then
    temp2# = 0#
    For n& = 2 To 6
        temp2# = temp2# + cps! ^ n& * (dtime! ^ n&) / n&
    Next n&
    temp# = 1# - (cps! * dtime! + temp2#)
    If temp# <> 0# Then cps! = cps! / temp#
End If

So this uses exponents up to ^6!
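
To see how much the extra terms matter, here is a quick Python transcription of the three expressions above (same math, just for illustration), evaluated at a few observed count rates with a 3 usec dead time:

Code: [Select]
# Python transcription of the three dead time expressions posted above (illustration only).
def normal(obs, tau):
    return obs / (1.0 - obs * tau)

def precision(obs, tau):
    return obs / (1.0 - (obs * tau + (obs * tau) ** 2 / 2.0))

def super_precision(obs, tau, nterms=6):
    extra = sum((obs * tau) ** n / n for n in range(2, nterms + 1))
    return obs / (1.0 - (obs * tau + extra))

tau = 3.0e-6   # 3 usec
for obs in (10000.0, 50000.0, 140000.0):
    print(f"{obs:>9,.0f} obs cps:  normal {normal(obs, tau):>9,.0f}   "
          f"precision {precision(obs, tau):>9,.0f}   super {super_precision(obs, tau):>9,.0f}")

# The three expressions agree closely at 10K cps, but diverge progressively at 50K
# and 140K cps, which is why the choice matters on large area crystals.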

Honestly I had never previously appreciated the importance of the dead time expression having enough factorial terms, until I started plotting up these k-ratio versus beam current plots. What a revelation I have to say.  ;D

Here is what I mean. Using the normal dead time expression we obtain this on ZnTe/Zn on my LLiF spectrometer:

(https://probesoftware.com/smf/gallery/395_02_06_22_8_20_40.png)

As one can see it begins to fail at around 20 to 30 nA (ignoring the "glitch" at 40 nA). Now let's try the "high precision" version of the dead time expression with the extra factorial term:

(https://probesoftware.com/smf/gallery/395_02_06_22_8_21_16.png)

And here is the Zn Ka data using the dead time expression with 6(!) factorial terms:

(https://probesoftware.com/smf/gallery/395_02_06_22_8_22_05.png)

So there's still something not quite right with my picoammeter (which I will discuss in my next post), but I have to say this has been a real learning experience for me.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 02, 2022, 08:42:36 AM
This "super high precision" dead time correction is now available in the latest version 13.1.5 Probe for EPMA.

(https://probesoftware.com/smf/gallery/1_02_06_22_8_41_21.png)

We're calling it the "three factorial expression", but as Probeman mentioned above it's actually 6 factorials!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 02, 2022, 12:35:51 PM
And here is the Zn Ka data using the dead time expression with 6(!) factorial terms:

(https://probesoftware.com/smf/gallery/395_02_06_22_8_22_05.png)

So there's still something not quite right with my picoammeter (which I will discuss in my next post), but I have to say this has been a real learning experience for me.

So here is why I think the k-ratios take a small dip in the quoted plot above:

(https://probesoftware.com/smf/gallery/395_02_06_22_12_22_37.png)

Note that this is a plot of the Zn on-peak counts (not k-ratio) and notice also that the dip in the k-ratio plot seems to correspond with the bump in the on-peak counts.

I suspect this indicates that my picoammeter needs adjustment. And as Mike Jercinovic has pointed out, if the problem is in the picoammeter, the mis-calibration should show up in all spectrometers, and these plots would seem to confirm that:

(https://probesoftware.com/smf/gallery/395_02_06_22_12_32_43.png)

(https://probesoftware.com/smf/gallery/395_02_06_22_12_32_57.png)

(https://probesoftware.com/smf/gallery/395_02_06_22_12_33_09.png)

(https://probesoftware.com/smf/gallery/395_02_06_22_12_33_21.png)

I suspect the "break" in the 40 nA beam current setting in the k-ratio plots (as seen in previous posts) may only be a beam current regulation issue for Cameca instruments at that "crossover").
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 03, 2022, 09:38:16 AM
To summarize:

1. We should all be using the "high precision" dead time expression (or even better the "super high precision" dead time expression!) for correction of measured intensities in software.

2. One can test the overall accuracy of the dead time calibration and the picoammeter calibration using the "constant k-ratio" test, where one measures k-ratios over a range of beam currents.  These measured k-ratios should (ideally) be constant (within counting precision) as a function of beam current (for a given beam energy and takeoff angle).

The constant k-ratio test is useful because it yields a plot that is easily interpreted in order to evaluate the overall accuracy of the k-ratios produced by the instrument.

3. Once the dead time constants in software are adjusted until the resulting k-ratios are as constant as possible (see the sketch following this summary), then any remaining inaccuracy is due to the picoammeter (mis)calibration.

4. The picoammeter calibration accuracy can be seen by a simple plot of cps/nA (dead time corrected) as a function of beam current. The on-peak intensities should ideally be constant as a function of beam current.

5. The dead time calibration of each spectrometer is easily performed using the constant k-ratio test, but you may need to consult with your instrument engineer to perform a calibration of your picoammeter.
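
To make points 2 and 3 concrete, here is a small sketch in Python (not the Probe for EPMA code; hypothetical raw count rates generated with a 3 usec dead time, and the single-term expression used for brevity) of how one can scan trial dead time constants and keep the value that makes the k-ratio vs. beam current trend flattest:

Code: [Select]
# Sketch: scan trial dead times and pick the one that flattens k-ratio vs. beam current.
# Raw (uncorrected) on-peak count rates -- hypothetical data generated with a 3 usec dead time.
currents_na = [10, 50, 100, 200]
std_raw_cps = [9709, 43478, 76923, 125000]   # primary standard (e.g. a pure metal)
unk_raw_cps = [3464, 16627, 31674, 57851]    # secondary standard

def corrected(obs, tau):
    return obs / (1.0 - obs * tau)           # single-term expression, for brevity

def kratio_spread(tau):
    k = [corrected(u, tau) / corrected(s, tau) for u, s in zip(unk_raw_cps, std_raw_cps)]
    return max(k) - min(k)

# try dead times from 1.0 to 5.0 usec in 0.1 usec steps
best_tau = min((t * 1e-7 for t in range(10, 51)), key=kratio_spread)
print(f"flattest k-ratio at tau = {best_tau * 1e6:.1f} usec")   # recovers ~3.0 usec here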
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 03, 2022, 12:35:25 PM
OK, so this may help.

Here are some analyses using pure metal standards acquired at 10 nA, and the secondary standards acquired at 200 nA!   First ZnTe at 200 nA:

St  658 Set  14 ZnTe (synthetic), Results in Elemental Weight Percents
 
ELEM:       Zn      Se      Te      Te      Zn
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL
BGDS:      EXP     EXP     LIN     LIN     LIN
TIME:    60.00   60.00   60.00     .00     .00
BEAM:   200.61  200.61  200.61     .00     .00
AGGR:        2               2               

ELEM:       Zn      Se      Te      Te      Zn   SUM 
XRAY:     (ka)    (la)    (la)    (la)    (ka)
   409  33.512   -.063  66.857    .000    .000 100.306
   410  33.528   -.054  66.851    .000    .000 100.325
   411  33.539   -.061  66.860    .000    .000 100.337
   412  33.542   -.053  66.878    .000    .000 100.367
   413  33.558   -.050  66.964    .000    .000 100.473
   414  33.571   -.056  66.971    .000    .000 100.486

AVER:   33.542   -.056  66.897    .000    .000 100.382
SDEV:     .021    .005    .055    .000    .000    .078
SERR:     .009    .002    .023    .000    .000
%RSD:      .06   -9.22     .08   .0000   .0000

PUBL:   33.880    n.a.  66.120    n.a.    n.a. 100.000
%VAR:    -1.00     ---    1.17     .00     .00
DIFF:    -.338     ---    .777     ---     ---
STDS:      530     534     552       0       0

STKF:   1.0000  1.0000  1.0000   .0000   .0000
STCT:  1837.10 2016.76  748.16     .00     .00

UNKF:    .3600  -.0002   .6359   .0000   .0000
UNCT:   661.35    -.38  475.71     .00     .00
UNBG:    13.24    3.94    4.84     .00     .00

ZCOR:    .9317  2.9644  1.0521   .0000   .0000
KRAW:    .3600  -.0002   .6358   .0000   .0000
PKBG:    50.97     .90   99.20     .00     .00
INT%:     ---- -114.56    ----    ----    ----

And now ZnSe at 200 nA:

St  660 Set  14 ZnSe (synthetic), Results in Elemental Weight Percents
 
ELEM:       Zn      Se      Te      Te      Zn
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL
BGDS:      EXP     EXP     LIN     LIN     LIN
TIME:    60.00   60.00   60.00     .00     .00
BEAM:   200.65  200.65  200.65     .00     .00
AGGR:        2               2               

ELEM:       Zn      Se      Te      Te      Zn   SUM 
XRAY:     (ka)    (la)    (la)    (la)    (ka)
   415  45.476  53.333   -.002    .000    .000  98.807
   416  45.427  53.872    .005    .000    .000  99.304
   417  45.356  54.034    .005    .000    .000  99.395
   418  45.457  53.843   -.001    .000    .000  99.299
   419  45.383  53.666   -.002    .000    .000  99.046
   420  45.181  53.264    .000    .000    .000  98.444

AVER:   45.380  53.669    .001    .000    .000  99.049
SDEV:     .107    .310    .003    .000    .000    .366
SERR:     .044    .127    .001    .000    .000
%RSD:      .24     .58  475.68   .0000   .0000

PUBL:   45.290  54.710    .000    n.a.    n.a. 100.000
%VAR:      .20   -1.90     .00     .00     .00
DIFF:     .090  -1.041    .000     ---     ---
STDS:      530     534     552       0       0

STKF:   1.0000  1.0000  1.0000   .0000   .0000
STCT:  1837.10 2016.76  748.16     .00     .00

UNKF:    .5029   .2512   .0000   .0000   .0000
UNCT:   923.85  506.53     .00     .00     .00
UNBG:    10.69    4.80    3.25     .00     .00

ZCOR:    .9024  2.1369  1.1714   .0000   .0000
KRAW:    .5029   .2512   .0000   .0000   .0000
PKBG:    87.42  106.59    1.00     .00     .00
INT%:     ----     .00    ----    ----    ----


And remember, this is with the picoammeter still not calibrated properly!    :o

I would very much welcome seeing constant k-ratio data from other instruments... if you want feel free to call me and I can talk you through the procedure in Probe for EPMA. It's completely automated now!   ;D
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 03, 2022, 03:00:18 PM
So there remains the question of emission line energies and dead time calibration.

I will run some more measurements this weekend, but it may simply be the case that Cameca instruments, with their "enforced" integer dead time electronics, do not experience variable pulse widths as a function of emission line energy.

In the meantime it would be most helpful if we could obtain additional constant k-ratio measurements from other instruments, particularly JEOL instruments.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 04, 2022, 10:57:48 AM
So I used the same dead time constants from the last Sunday run and applied them to the Monday run, where I acquired more beam currents but only up to 100 nA, and everything looked very stable and consistent using the "super high precision" dead time correction expression (with six terms).

(https://probesoftware.com/smf/gallery/395_04_06_22_1_24_32.png)

(https://probesoftware.com/smf/gallery/395_04_06_22_10_55_25.png)

(https://probesoftware.com/smf/gallery/395_04_06_22_10_55_54.png)

You get the picture...  again we see the "glitch" at around 40 nA, but the k-ratios are quite constant from 6 to 100 nA.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 04, 2022, 11:01:16 AM
Does anyone know what dead time expressions Cameca and JEOL are using for their WDS intensities?  Or Bruker and Thermo WDS?

By the way, we wrote up the complete procedure for running the constant k-ratio test and re-processing the data, and it is attached below (login to see attachments as usual).

Let us know if the document is unclear at any point.

Edit by John: update pdf attachment for standard intensity drift correction notes.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 06, 2022, 12:53:16 PM
I ran different elements (emission energies) on the instrument yesterday to see if I could tease out any trends (or not) in the dead time calibrations, using the new "super high precision" dead time correction expression. Unfortunately I didn't think to keep the bias voltages exactly the same on all spectrometers, so that is a possible variable not controlled for.  But the initial data are still worth examining, I think.

Here is the run I did on 05/29/2022 up to 200 nA using Zn Ka, Se La and Te La at 8.64, 1.38 and 3.77 keV respectively:
SPEC:      1         2         3         4         5
LINE:    Te La     Se La     Zn Ka     Te La     Zn Ka
XTAL:     PET       LTAP      LLIF      PET       LIF
BIAS:    1320v     1330v     1850v     1340v     1840v
DT:      2.85us    2.80us    2.80us    3.00us    3.00us

When I plotted up the new data from 06/04/2022 using the same DT constants from 05/29/2022, I saw some significant differences. For example, on Sp 1 when going from Te La (PET) to Se La (TAP) and using the same bias voltage, the k-ratio plot looks like this:

(https://probesoftware.com/smf/gallery/395_06_06_22_12_48_26.png)

After the DT is adjusted to 3.30 usec, in order to produce a more constant k-ratio, we obtain this:

(https://probesoftware.com/smf/gallery/395_06_06_22_12_48_40.png)

So here is a summary of the run from yesterday using different emission lines on the spectrometers, with the DT constants adjusted to obtain a constant k-ratio as a function of beam current:

SPEC:      1         2         3         4         5
LINE:    Se La     Te La     Te La     Se La     Te La
XTAL:     TAP       LPET      LPET      TAP       PET
BIAS:    1320v     1320v     1850v     1313v     1850v
DT:      3.30us    2.60us    2.70us    3.20us    2.90us

Some of the bias voltages were modified from the previous run (note that Sp 3 and Sp 5 are 2 atm detectors). So, you can see that going from Te La (PET) to Se La (TAP) on Sp 1 and 4, the emission energy went down, but the DT required for a constant k-ratio went up (both are low pressure detectors).

However, on Sp 3 and 5, going from Zn Ka (LIF) to Te La (PET), the emission energies also went down, but the DT had to be adjusted down slightly (by 0.1 usec), to obtain a constant k-ratio. But both of these were 2 atm detectors, so that is another variable.

Meanwhile on Sp 2 going from Se La (TAP) to Te La (PET) the emission energy went up, but the DT had to be adjusted down slightly to obtain a constant k-ratio.

A bit of a mixed bag to say the least, so I am going to try some other emission lines this weekend.  By the way, I heard back from Cameca and they only utilize the "normal" or classic dead time expression, which we now know will not work above 50K cps.

In any case, one can specify different dead time constants for different crystals in Probe for EPMA, so maybe this variation in DT is something that can be dealt with.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 08, 2022, 04:22:08 PM
Does anyone know what dead time expressions Cameca and JEOL are using for their WDS intensities?  Or Bruker and Thermo WDS?

By the way, we wrote up the complete procedure for running the constant k-ratio test and re-processing the data, and it is attached below (login to see attachments as usual).

Let us know if the document is unclear at any point.

We added a final section to the above pdf document attached to this message:

https://probesoftware.com/smf/index.php?topic=1466.msg10920#msg10920

Describing how to edit your SCALERS.DAT file once you have determined your new dead time constants using the "super high precision" expression.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 10, 2022, 12:46:57 PM
OK, so this is pretty cool.

It just occurred to me last night (yes, I was dreaming about WDS!) that these "constant k-ratio" measurements can characterize not only our dead time constants and our picoammeter calibrations, but also our "effective" takeoff angles! The effective takeoff angle is the actual angle of X-ray measurement, defined by our Bragg crystal (is it symmetrically diffracting?), the spectrometer alignment and the surface of our sample holder. Of course, this requires that one measures the same element and X-ray line on more than one spectrometer!

So the reason this "constant k-ratio" method is interesting is not only that we should get the same k-ratio at any beam current, but that we should also get the same k-ratios (within precision) for *all* the spectrometers on our instrument, assuming of course the same element, X-ray line, beam energy and takeoff angle are utilized in the k-ratio measurement.

This is exactly the "simultaneous k-ratio" test that is often utilized in initial instrument acceptance testing:

https://probesoftware.com/smf/index.php?topic=369.msg1948#msg1948

So here is a "constant k-ratio" plot of the two spectrometers using the same (Se La) emission line measured on two spectrometers using TAP crystals:

(https://probesoftware.com/smf/gallery/395_10_06_22_2_03_20.png)

As you can see spectrometers 1 and 4 agree pretty well with each other, which is impressive because the Se La line is only 1.38 keV, so fairly low energy and therefore more affected by variations in the effective takeoff angle.  Now how about Te La on three spectrometers using PET crystals:

(https://probesoftware.com/smf/gallery/395_10_06_22_12_38_59.png)

Hmmm, it seems we might have a small difference between the two LPET crystals and the normal PET crystal.  The cool thing about using the constant k-ratio method for this simultaneous k-ratio evaluation is that one can obtain an immediate sense of the magnitude of the error. Our investigations continue...

I guess the point is that we need to make sure we have consistent k-ratios not only for different beam currents (dead times and picoammeter) but also between our spectrometers, before we start comparing our k-ratios to other instruments (which are hopefully equally well calibrated in these parameters!).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 11, 2022, 10:01:51 AM
Does anyone know what dead time expressions Cameca and JEOL are using for their WDS intensities?  Or Bruker and Thermo WDS?

By the way, we wrote up the complete procedure for running the constant k-ratio test and re-processing the data, and it is attached below (login to see attachments as usual).

Let us know if the document is unclear at any point.

We added a final section to the above pdf document attached to this message:

https://probesoftware.com/smf/index.php?topic=1466.msg10920#msg10920

Describing how to edit your SCALERS.DAT file once you have determined your new dead time constants using the "super high precision" expression.

We added yet another section to the constant k-ratio method procedure on simultaneous k-ratios in the pdf attached here.

Edit by John: update pdf attachment
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on June 12, 2022, 06:27:52 PM
I'd like to point out that the (simple) expression for deadtime commonly in use, N’/I = k(1-N’τ), and lending itself to illustration on plots of cps/nA versus cps, is not the only means of calculating deadtime (simply).  Heinrich et al. (1966; attached) applied the so-called “ratio method,” in which the ratios of the observed count rates (N1’ and N2’) of two X-ray lines (they used Cu Ka and Cu Kb on Cu metal) measured simultaneously on two spectrometers at varying beam current (to produce two datasets in which N1’ alternately represents Cu Ka or Cu Kb) are used to determine the deadtimes for both spectrometers.  Although the expressions are linear and only applicable at relatively low count rates, since evaluation of the deadtime by this means only involves consideration of slopes and intercepts on plots of N1’/N2’ versus N1’ (Figs. 7 and 8 ), inaccuracy in the beam current measurement is irrelevant.
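
For anyone who wants to try it, here is a rough sketch in Python of the ratio method using simulated data (for the ideal non-extendable dead time model used here, the observed ratio vs. the observed Ka rate is exactly linear, so the input dead times are recovered exactly; real data will only follow this at relatively low count rates, as Heinrich et al. note):

Code: [Select]
# Sketch of the Heinrich et al. (1966) ratio method on simulated data.
import numpy as np

tau1, tau2 = 1.5e-6, 2.5e-6            # "true" spectrometer dead times to be recovered (s)
R = 5.0                                # true Cu Ka / Cu Kb intensity ratio (arbitrary value)
ka_true = np.linspace(2e3, 2e4, 15)    # true Cu Ka rates over a range of beam currents

def observe(true_cps, tau):            # ideal non-extendable dead time
    return true_cps / (1.0 + true_cps * tau)

# Dataset A: Ka on spectrometer 1, Kb on spectrometer 2; fit (Ka'/Kb') vs Ka'
ka_a, kb_a = observe(ka_true, tau1), observe(ka_true / R, tau2)
slope_a = np.polyfit(ka_a, ka_a / kb_a, 1)[0]      # = tau2 - R*tau1

# Dataset B: spectrometers swapped; again fit (Ka'/Kb') vs Ka'
ka_b, kb_b = observe(ka_true, tau2), observe(ka_true / R, tau1)
slope_b = np.polyfit(ka_b, ka_b / kb_b, 1)[0]      # = tau1 - R*tau2

# Solve the two linear relations for tau1 and tau2 -- no beam current measurement needed
t1, t2 = np.linalg.solve([[-R, 1.0], [1.0, -R]], [slope_a, slope_b])
print(f"recovered tau1 = {t1 * 1e6:.2f} usec, tau2 = {t2 * 1e6:.2f} usec")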
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on June 13, 2022, 12:21:26 AM
This thread made me sit for a few days on the SX100 and do some checking.
Producing the plots and consolidating the data will take some time.

However, at this moment I can point with 100% certainty to a few problems of non-linearity:
1. The differential PHA mode with a wide window, widely used and evangelized on these forums (as opposed to the integral method), will introduce non-linearity at high count rates, as the PHA "peaks" of double and triple pulse pileups cross into (or move into) the PHA window. That makes the counting particularly prone to being affected by random fluctuations of temperature and pressure. It would be better to use integral mode (simpler), or a narrow window that moves with the peak; the latter would have pseudo-extendable dead time behavior. The count rate with a wide-window PHA drops to 95% of the integral count rate in the worst case. I see absolutely no advantage of a wide window over integral mode, as integral mode gives a simple parabola shape in the beam current vs. intensity plot, whereas wide-window PHA gives a similar parabola with distortions (waves) at high current. The plots in this method thread do not catch that, as they jump from 140 to 200 nA without smaller steps in between.
2. This proposed factorial math model does not work well. If the higher count rates are fitted correctly, then the lower count rates are overestimated. In particular, if point 1 is ignored, it can produce wrong fitting for both high and low currents.
3. Is the 2nd point a baseless claim? How else to explain a dead time of 2.9 µs while the hardware blanks pulses for 3 µs? Unless this SX100 is accelerated to relativistic speeds, or it has a black hole under it, there is no physical way for pulses to be passed through before unblanking. It rather evidences overfitting of this method at low currents (actually at low count rates; we should not care about beam current at all), where the count rates are overestimated. I already shared the Jupyter notebooks with a Monte Carlo simulation in another thread, where it was clear that this formula overestimates the rate at low count rates.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 13, 2022, 10:14:19 AM
I'd like to point out that the (simple) expression for deadtime commonly in use, N’/I = k(1-N’τ), and lending itself to illustration on plots of cps/nA versus cps, is not the only means of calculating deadtime (simply).  Heinrich et al. (1966; attached) applied the so-called “ratio method,” in which the ratios of the observed count rates (N1’ and N2’) of two X-ray lines (they used Cu Ka and Cu Kb on Cu metal) measured simultaneously on two spectrometers at varying beam current (to produce two datasets in which N1’ alternately represents Cu Ka or Cu Kb) are used to determine the deadtimes for both spectrometers.  Although the expressions are linear and only applicable at relatively low count rates, since evaluation of the deadtime by this means only involves consideration of slopes and intercepts on plots of N1’/N2’ versus N1’ (Figs. 7 and 8 ), inaccuracy in the beam current measurement is irrelevant.

Hi Brian,
I saw your post last night and was planning on responding this morning, but when I got up to do so, your post had been removed and replaced with the above post.  I was so looking forward to responding to your previous comments.  Your feedback is always appreciated, even when we're not in complete agreement!

Working from memory, I would just explain that with regard to your comment on simultaneous k-ratio measurements, you are correct: one should measure k-ratios on all 5 spectrometers, and we did so, just not using the same lines. The reason is that this topic started out looking at a new method to calibrate dead times using soft X-rays (Al Ka and Mg Ka), and because of issues with beam damage, and subsequent curiosity in evaluating the effects of different emission energies, we quickly moved to looking at Zn Ka, Se La and Te La on more electrically conductive materials.

However, now that the software has been improved to completely automate the acquisition of these "constant k-ratio" datasets (with a y-axis stage increment for each beam current sample setup), yesterday we acquired some additional data sets, specifically Ti Ka on all 5 spectrometers.  Here we are using Ti metal as the primary standard and TiO2 as the secondary standard over a range of beam currents:

(https://probesoftware.com/smf/gallery/1_13_06_22_9_03_32.png)

These k-ratios were calculated using the *same* dead time constants from the Zn, Se and Te calibration runs, which is pretty good confirmation that emission energy doesn't seem to be a big factor in dead time, at least for Cameca instruments.  Unfortunately we still have no data from any instrument other than the Oregon instrument, but I am very much looking forward to seeing data from other instruments, especially JEOL instruments. 

The reason I think that different emission energies *might* affect JEOL instruments more (mainly based on reports years ago from Paul Carpenter on his 8200 instrument) is that Cameca uses an "enforced" dead time circuit that forces all pulses to some integer duration, say 3 usec. This circuit does not force the pulse width to exactly that value, hence the reason the Cameca software includes a non-integer tweak to the software dead time correction.  In any case this electronic feature might help keep the pulse widths more consistent as a function of emission line energy.

Please note that one can see several artifacts in the above constant k-ratio plot.  The first is the anomaly at 60 nA.  It's interesting, as we avoided performing any measurements around 40 nA because we had been seeing a similar anomaly there. However, it seems to also appear at 60 nA, perhaps when the picoammeter switches from the 5 to 50 nA range to the 50 to 500 nA range?  We should perhaps try some measurements going from high beam currents to low beam currents.

Note also that spectrometer 3, using an LLIF Bragg crystal, seems to yield significantly different k-ratios (by a couple of percent) than the other spectrometers, including the normal LiF Bragg crystal on spectrometer 5. I suspect that spectrometer 3 has some alignment issues, which is interesting since we just had maintenance performed by Cameca, but perhaps the problem is asymmetrical Bragg diffraction. The large area crystals do seem to be more susceptible to these sorts of artifacts.

On the Heinrich paper: I had not seen this method before, thanks for sharing it.  I will definitely give it a try. With these recent Probe for EPMA software features (running multiple setups automatically one at a time and implementing a Y stage axis bump for each sample setup) this is now a very easy thing to do.  I hope you also will "fire up" PFE with this new "super high precision" dead time expression and see what you obtain on your instrument for these constant k-ratio measurements.

In your previous comment you also mentioned your concerns with making one adjustment for two separate calibration issues, and I agree completely. Maybe you missed my earlier discussion of that very point, where I said that I have concerns about making one adjustment for both the dead time calibration and the picoammeter linearity. But it soon became clear after some experimentation that adjusting the dead time constant (to improve the consistency of k-ratios over a large range of beam currents) did not actually remove the picoammeter miscalibrations; it just made them much more clearly visible.  See this post for that data:

https://probesoftware.com/smf/index.php?topic=1466.msg10912#msg10912

So in the above post, the first plot (in the quotation area) is the constant k-ratio plot showing some small anomalies after the dead time has been adjusted to yield the most consistent k-ratios over the range of beam current for each spectrometer.

What is interesting is that the *on-peak* intensity plots which follow (also DT corrected) for the different spectrometers all show the same variation, which seems to be related to the different picoammeter ranges (the cps/nA intensity offset occurring on all spectrometers at around 40 nA).  I find that very interesting, and it suggests to me that our picoammeter ranges require some adjustment.  The only way one might be compensating for picoammeter miscalibration using this dead time adjustment is if the picoammeter was non-linear in a very linear manner!   But that would also affect the traditional dead time calibration method using a single material (and single emission line).

As for the more recent simultaneous k-ratio observations, those are simply a nice side benefit of these constant k-ratio measurements. And unsurprisingly, these simultaneous k-ratio offsets seem to be very consistent over the range of beam currents, just as one would expect from spectrometer/crystal alignment/effective takeoff angle issue(s).

I am really stoked at how useful these constant k-ratio measurements seem to be and I really love how by using k-ratio units we obtain very intuitive plots of the thing we actually care about in our instrument performance, that is: k-ratios!  I look forward to measurements from your JEOL instrument.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 13, 2022, 11:09:54 AM
This thread made me sit for a few days on the SX100 and do some checking.
Producing the plots and consolidating the data will take some time.

However, at this moment I can point with 100% certainty to a few problems of non-linearity:
1. The differential PHA mode with a wide window, widely used and evangelized on these forums (as opposed to the integral method), will introduce non-linearity at high count rates, as the PHA "peaks" of double and triple pulse pileups cross into (or move into) the PHA window. That makes the counting particularly prone to being affected by random fluctuations of temperature and pressure. It would be better to use integral mode (simpler), or a narrow window that moves with the peak; the latter would have pseudo-extendable dead time behavior. The count rate with a wide-window PHA drops to 95% of the integral count rate in the worst case. I see absolutely no advantage of a wide window over integral mode, as integral mode gives a simple parabola shape in the beam current vs. intensity plot, whereas wide-window PHA gives a similar parabola with distortions (waves) at high current. The plots in this method thread do not catch that, as they jump from 140 to 200 nA without smaller steps in between.

Hi SG,
Looking forward to your data!   Hopefully you can also utilize this new "super high precision" dead time expression. I found that the traditional expression rapidly fails above 50K cps. See here for an example:

https://probesoftware.com/smf/index.php?topic=1466.msg10909#msg10909

I actually agree with your admonition not to use differential mode for these high current k-ratio measurements.  All the constant k-ratio measurements I have done over the last few weeks have used integral PHA mode.

2. This proposed factorial math model does not work well. If the higher count rates are fitted correctly, then the lower count rates are overestimated. In particular, if point 1 is ignored, it can produce wrong fitting for both high and low currents.

OK, here we can disagree, and the data I have support my side.  As for the math, you must have made a mistake in your calculations, because the dead time correction is a simple probability calculation, and the Taylor expansion series rigorously describes these probabilities.  As you can see from the most recent data in the plot above in my response to Brian, the lower beam current k-ratios seem to be very much in agreement with each other.  What sort of issues are you seeing on your instrument? 

And here is another plot from yesterday, showing the k-ratio using Ti metal as the primary standard and SrTiO3 as a secondary standard, again showing the consistency of the k-ratios at lower beam currents, using the "super high precision" expression:

(https://probesoftware.com/smf/gallery/395_13_06_22_10_27_05.png)

Doesn't seem to be hurting the lower count rates to my eye. The traditional dead time expression seems to start failing even at moderate beam currents on my LPET using Ti Ka for example.

3. Is my 2nd point a baseless claim? Then how do you explain dead times of 2.9 us while the hardware blanks pulses for 3 us? Unless this SX100 is accelerated to relativistic speeds or has a black hole underneath it, there is no physical way for pulses to be passed before unblanking. It rather evidences over-fitting of that method at low currents (actually at low count rates; we should not care about beam current at all), where count rates are overestimated. I already shared the jupyter notebooks with MC simulations in another thread, and there it was clear that that formula overestimates the rate at low count rates.

Well there must be a black hole underneath my instrument as it's not at all clear to me.   ;D

I would simply attribute these values being slightly less than exactly 3 usec to the fact that the electronics themselves can be miscalibrated.  Simply put: how do we know these "blanking" pulses are *exactly* 3 usec?  Knowing nothing about the electronic details, I might ask: exactly how good are those resistor values?  I suspect they might be a little more or a little less than the specified integer dead times.  The dead time calibration simply measures this nominal enforced pulse width empirically.

Let's do an experiment. Here are the k-ratios for spec 1 PET looking at Ti Ka using the empirically found dead time of 2.85 usec:

(https://probesoftware.com/smf/gallery/395_13_06_22_10_49_03.png)

Looks OK, but clearly as pointed out previously there may be some picoammeter adjustments necessary based on the simple count rate plots in previous posts.  Now let's change it to 3.0 usec as you suggest:

(https://probesoftware.com/smf/gallery/395_13_06_22_10_50_16.png)

Well that definitely looks worse to my eye.  Forgive me but I guess my instrument has a black hole underneath it!   And now let's try the traditional dead time expression with 3 usec:

(https://probesoftware.com/smf/gallery/395_13_06_22_10_55_45.png)

Now that's even worse than before.  I'm not saying this is all figured out, that's why more data from more instruments would be helpful. Let's see some constant k-ratio data from your instrument.  Here's mine again using the "super high precision" Taylor Series expansion expression for DT correction:

(https://probesoftware.com/smf/gallery/395_13_06_22_11_03_11.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 17, 2022, 11:06:43 AM
So here's a very different use case of the constant k-ratio method acquired by Ying Yu at University of Queensland.

She has an old JEOL 8200 which doesn't have any large area crystals, and of course JEOL dead time constants tend to be around half of Cameca's, so that's another advantage.

So here is a data set using Cu ka on CuFeS2, and Cu metal as the primary standard, on LIF going up to 120 nA using the traditional dead time expression:

(https://probesoftware.com/smf/gallery/395_17_06_22_10_57_21.png)

Pretty constant I'd say. It helps that her DT constants are around only 1.5 usec.  And here is the same data but plotted using the super high precision dead time expression:

(https://probesoftware.com/smf/gallery/395_17_06_22_10_57_35.png)

If you look very closely you can see that the data points on the right, at the highest beam currents, are very slightly lower.  How is this possible?  Well, even at 120 nA on pure Cu, she's only getting around 30K cps of Cu Ka!

So in this case of an old JEOL instrument with very low count rates, the normal (traditional) dead time expression is good enough. 

To reiterate: at dead times from 1 to 2 usec I would expect the traditional (normal) single term expression to be good to around 50K cps, though Cameca instruments with dead times around 3 usec might benefit from the two term high precision expression.

However, over 50K cps the high precision (two term) expression should perform better, and at over 100K cps, the super high precision (multi-term) expression will probably be necessary.  I guess the bottom line is that no matter what your count rates are, the multi-term "super high precision" dead time expression won't hurt, and in many cases (large area crystals and/or higher beam currents and/or maybe Cameca instruments in general), it will definitely help!
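If you want a quick feel for where these expressions start to diverge for your own dead times and count rates, here is a minimal Python sketch (this is not the Probe for EPMA code; it simply evaluates the truncated series form of the correction discussed in this topic, and the dead times and count rates in it are just example values):

Code: [Select]
# Sketch: dead time corrections truncated at 1, 2 and 6 terms of the series
# N = N' / (1 - sum_{i=1..n} (N'*tau)^i / i), where N' is the observed cps.

def corrected_cps(observed_cps, tau_s, n_terms):
    x = observed_cps * tau_s
    denom = 1.0 - sum(x**i / i for i in range(1, n_terms + 1))
    return observed_cps / denom

for tau_us in (1.5, 3.0):                     # example dead times (usec)
    for obs in (20e3, 50e3, 100e3, 200e3):    # example observed count rates (cps)
        n1, n2, n6 = (corrected_cps(obs, tau_us * 1e-6, n) for n in (1, 2, 6))
        print(f"tau = {tau_us} us, observed = {obs / 1e3:.0f}K cps: "
              f"1 term = {n1:,.0f}, 2 terms = {n2:,.0f}, 6 terms = {n6:,.0f}, "
              f"1 vs 6 term difference = {100 * (n6 - n1) / n6:.2f}%")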

I'd be very interested in additional constant k-ratio measurements from anyone willing to do some of these measurements.  The latest instructions for acquiring constant k-ratios are attached below.

Edit by John: updated pdf attachment
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 18, 2022, 09:49:55 AM
In the above post, we showed data from Ying Yu's lab which demonstrated no change in intensities between using the normal (traditional) dead time expression and the "super high precision" dead time expression at low to moderate beam currents, and only the slightest intensity differences at high beam currents.

This is due to the fact that her instrument is only producing ~30k cps on pure Cu and pure Fe metal even at 120 nA beam current!  So for her instrument with its 1.45 usec dead times, the traditional dead time expression is more than sufficient. Though of course it doesn't hurt to utilize the "super high precision" dead time expression as the default (maybe they will utilize beam currents of 200 nA at some point).

Meanwhile, on our SX100 instrument we remeasured Ti on Ti metal, TiO2 and SrTiO3 up to 200 nA, *and* we also acquired an EDS spectrum with each data point using our Thermo Pathfinder EDS spectrometer (10 sq. mm). At 200 nA this results in ~220K cps on our PET crystals, ~600K cps (!) on our LPET crystal and ~360K cps on our EDS detector.   And please note, for Ti Ka by EDS, the ~360K cps is not the whole spectrum count rate, it's merely the Ti Ka *net intensity* count rate!   :o

The results for all 5 WDS spectrometers using the "super high precision" dead time expression, and also the EDS detector (of course the EDS detector is correcting for dead time losses using hardware), can be seen here:

(https://probesoftware.com/smf/gallery/395_18_06_22_9_41_46.png)

The WDS spectrometers all look good (though with a possible asymmetrical diffraction outlier on the LLIF crystal on spec 3), and most impressively, the EDS detector did quite well up until around 200 nA, when the "wheels start to come off" at around 85% dead time.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 20, 2022, 11:24:31 AM
This is insane.  Here are quant calculations using the "super high precision" dead time expression on the most recent data set where I measured Ti Ka on Ti metal, TiO2 and SrTiO3.

Note that the absolute value of the k-ratio does not matter for this "constant k-ratio" dead time calibration method. The only thing we care about is that the k-ratio remains constant as a function of beam current.

Also, for quantification I've utilized the "aggregate" feature in Probe for EPMA to combine the Ti Ka intensities from all 5 spectrometers, because the matrix correction would be non-physical if Ti intensities from 5 spectrometers were each added separately to the specified strontium and oxygen concentrations during the matrix iteration.

So here is Ti Ka measured on 5 spectrometers, using Ti metal as the primary standard measured at 12 nA, and TiO2 as a secondary standard measured at 200 nA:

St   22 Set   9 TiO2 synthetic, Results in Elemental Weight Percents
 
ELEM:       Ti      Ti      Ti      Ti      Ti       O
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL    SPEC
BGDS:      EXP     EXP     LIN     EXP     LIN
TIME:    60.00     .00     .00     .00     .00     ---
BEAM:   201.36     .00     .00     .00     .00     ---
AGGR:        5                                     ---

ELEM:       Ti      Ti      Ti      Ti      Ti       O   SUM 
XRAY:     (ka)    (ka)    (ka)    (ka)    (ka)      ()
   247  60.140    .000    .000    .000    .000  40.050 100.190
   248  60.148    .000    .000    .000    .000  40.050 100.198
   249  60.121    .000    .000    .000    .000  40.050 100.171
   250  60.137    .000    .000    .000    .000  40.050 100.187
   251  60.084    .000    .000    .000    .000  40.050 100.134
   252  60.088    .000    .000    .000    .000  40.050 100.138

AVER:   60.120    .000    .000    .000    .000  40.050 100.170
SDEV:     .027    .000    .000    .000    .000    .000    .027
SERR:     .011    .000    .000    .000    .000    .000
%RSD:      .05   .0000   .0000   .0000   .0000     .00

PUBL:   59.939    n.a.    n.a.    n.a.    n.a.  40.050  99.989
%VAR:      .30     .00     .00     .00     .00     .00
DIFF:     .181     ---     ---     ---     ---    .000
STDS:      522       0       0       0       0     ---


and here is SrTiO3, again using Ti metal as the primary standard measured at 12 nA, with SrTiO3 as a secondary standard measured at 200 nA:

St  251 Set   9 Strontium titanate (SrTiO3), Results in Elemental Weight Percents
 
ELEM:       Ti      Ti      Ti      Ti      Ti      Sr       O
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL    SPEC    SPEC
BGDS:      EXP     EXP     LIN     EXP     LIN
TIME:    60.00     .00     .00     .00     .00     ---     ---
BEAM:   200.42     .00     .00     .00     .00     ---     ---
AGGR:        5                                     ---     ---

ELEM:       Ti      Ti      Ti      Ti      Ti      Sr       O   SUM 
XRAY:     (ka)    (ka)    (ka)    (ka)    (ka)      ()      ()
   253  26.226    .000    .000    .000    .000  47.742  26.154 100.122
   254  26.244    .000    .000    .000    .000  47.742  26.154 100.140
   255  26.228    .000    .000    .000    .000  47.742  26.154 100.124
   256  26.218    .000    .000    .000    .000  47.742  26.154 100.114
   257  26.209    .000    .000    .000    .000  47.742  26.154 100.105
   258  26.209    .000    .000    .000    .000  47.742  26.154 100.105

AVER:   26.222    .000    .000    .000    .000  47.742  26.154 100.118
SDEV:     .013    .000    .000    .000    .000    .000    .000    .013
SERR:     .005    .000    .000    .000    .000    .000    .000
%RSD:      .05   .0000   .0000   .0000   .0000     .00     .00

PUBL:   26.103    n.a.    n.a.    n.a.    n.a.  47.742  26.154  99.999
%VAR:      .46     .00     .00     .00     .00     .00     .00
DIFF:     .119     ---     ---     ---     ---    .000    .000
STDS:      522       0       0       0       0     ---     ---


I am attempting to measure these different emission lines at the same detector bias, only adjusting the gain to place the PHA peak a little to the right of center at a moderate beam current.  The idea is that as the count rate increases and the PHA experiences pulse height depression, the PHA peak will shift to the left, but still remain within the range of the counting electronics.  All measurements are also done using "integral" mode.

I am also examining the data for trends in the dead time constant as a function of emission energy, and I think I may be seeing something, but only between the 1 atm and 2 atm flow detectors.

It would be great to get some constant k-ratio measurements on a modern JEOL instrument with large area crystals with count rates exceeding 100K cps to compare...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 21, 2022, 09:16:06 AM
Now that our dead times are pretty well adjusted using the constant k-ratio method, we might be able to observe more subtle miscalibration issues such as the picoammeter calibration.

If one's picoammeter is miscalibrated, then the effect should be seen in all 5 spectrometers. Here are some plots where the intensities for all 5 spectrometers were aggregated using the aggregate feature in Probe for EPMA and the weight percents quantified. First for TiO2, using Ti metal as the primary standard (as a function of beam current):

(https://probesoftware.com/smf/gallery/395_21_06_22_9_09_45.png)

and here for SrTiO3 again using Ti metal as a primary standard:

(https://probesoftware.com/smf/gallery/395_21_06_22_9_10_02.png)

Although the effect is rather small, we can see the offset between the 5 to 50 nA and the 50 to 500 nA picoammeter ranges. We are attempting to obtain a high accuracy current source to calibrate our picoammeter and will let you know how it goes.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 25, 2022, 08:21:43 AM
I was able to acquire a data set on all 5 spectrometers this time for Si Ka using PET and TAP Bragg crystals up to 200 nA.

Here are k-ratios for all 5 spectrometers using SiO2 as the primary standard and benitoite as the secondary standard, and again we can see that our spectrometer 3 with a large area crystal is offset from the other spectrometers as it was for Ti Ka:

(https://probesoftware.com/smf/gallery/395_25_06_22_8_16_12.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 28, 2022, 09:26:55 AM
Again, this time combining the Si Ka intensities from all 5 spectrometers (PET and TAP) using the "aggregate" feature in Probe for EPMA (to check for picoammeter calibration issues), we can see that the quantification is fairly reasonable, but it appears there is a small picoammeter mis-calibration between the 5 to 50 nA and the 50 to 500 nA ranges:
 
(https://probesoftware.com/smf/gallery/395_25_06_22_8_16_28.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 03, 2022, 12:01:35 PM
Here are some k-ratio plots using SiO2 as a primary standard and benitoite as a secondary standard using the three different dead time expressions in Probe for EPMA.

Regardless of the k-ratios we obtain, the essential point is that these k-ratios should remain constant, no matter what the count rates (beam currents) are. The plots shown below, as mentioned in my reply to Brian Joy in the Heinrich Ka/Kb ratio dead time method topic, seen here:

https://probesoftware.com/smf/index.php?topic=1470.msg10971#msg10971

are relatively immune to the accuracy of the picoammeter calibration, because each pair of primary and secondary standards is measured together at the same beam current.  This is a point I had not emphasized enough in previous posts on this constant k-ratio method for the determination of dead time constants: as long as the beam current is stable during each pair of measurements, the constancy of the k-ratios measured across beam currents reveals the correct dead time constant.
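To see why the k-ratios measured with both materials at the same beam current are insensitive to the picoammeter, while the single-primary-standard plots are not, here is a minimal synthetic sketch in Python (every number in it is invented for illustration, and the detector is assumed to follow the simple non-extending dead time model exactly, so that the dead time correction is exact and only the picoammeter behavior matters):

Code: [Select]
# Synthetic illustration: a picoammeter range error cancels in matched-current
# k-ratios, but shows up directly when a single low-current primary standard is used.

TAU = 3.0e-6                     # assumed dead time (s)
R_STD, R_UNK = 3000.0, 1600.0    # assumed true cps/nA on the primary and secondary standards

def observe(true_cps):           # ideal non-extending counter: N' = N/(1 + N*tau)
    return true_cps / (1.0 + true_cps * TAU)

def correct(obs_cps):            # exact inverse: N = N'/(1 - N'*tau)
    return obs_cps / (1.0 - obs_cps * TAU)

def reported(current_na):        # hypothetical picoammeter that reads 1% high above 50 nA
    return current_na * (1.01 if current_na > 50 else 1.00)

std_10 = correct(observe(R_STD * 10)) / reported(10)    # single primary standard at 10 nA (cps/nA)

for i_na in (10, 20, 40, 80, 160, 200):
    unk = correct(observe(R_UNK * i_na))
    std = correct(observe(R_STD * i_na))
    k_matched = unk / std                                # same current: the current reading never enters
    k_single = (unk / reported(i_na)) / std_10           # normalized to the (mis)read current
    print(f"{i_na:3d} nA   matched k = {k_matched:.4f}   single-standard k = {k_single:.4f}")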

Here is the traditional dead time expression using SiO2 as a primary standard and benitoite as the secondary standard, where each k-ratio (pair of materials) is measured at the same beam current:

(https://probesoftware.com/smf/gallery/395_03_07_22_11_54_38.png)

and here for the two term (Willis, 1993) dead time expression:

(https://probesoftware.com/smf/gallery/395_03_07_22_11_54_57.png)

Note that the low beam current k-ratios are unchanged, but the high beam current k-ratios are much improved (more constant).

And here is the six term (Taylor series) expansion of the dead time expression:

(https://probesoftware.com/smf/gallery/395_03_07_22_11_55_21.png)

Not bad at all considering our 200 nA measurements yield ~120K cps on SiO2!

The problems with the picoammeter calibration will not become apparent until one plots the k-ratios of the benitoite secondary standard using a single primary standardization at a low beam current as seen here:

(https://probesoftware.com/smf/gallery/395_03_07_22_11_55_43.png)

Here we can see the approximately 1 % difference in the picoammeter calibration between the 5 to 50 nA range and the 50 to 500 nA range. We are hoping to obtain a high accuracy current source in the next few weeks and will let you know if we can improve this miscalibration between the picoammeter ranges.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 05, 2022, 10:23:30 AM
OK, this is a little bit insane, but I decided to run the benitoite and SiO2 k-ratios up to 400 nA of beam current. Just to see where the "wheels come off"!   ;D

(https://probesoftware.com/smf/gallery/395_05_07_22_10_13_21.png)

As you can see, things are pretty darn good up to 250 nA. After that the instrument automatically switches from the 150 um beam-regulated aperture to the 200 um unregulated aperture, and things aren't quite as good, but they are still only off by about 5%, which is probably fine for ultra high sensitivity trace element work.

Please keep in mind that even at 250 nA on the LTAP Bragg crystal we are getting over 400K cps coming into the detector!  And the k-ratios are essentially constant from 10 nA to 250 nA!    :o

Though maybe some aperture alignment or calibration work on our picoammeter would take care of this 5% variance with the unregulated aperture. I will let you know what we find out.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Anette von der Handt on July 07, 2022, 07:05:18 PM
Here is some data from a JEOL probe: Newly installed JEOL JXA-iHP200F at University of British Columbia.

Ti Ka on LIFL (Spec 2 & 5) and PETL (Spec 3) at 15 kV. K-ratios on synthetic TiO2 and Ti metal.

Normal Deadtime Correction
(https://probesoftware.com/smf/gallery/17_07_07_22_6_51_04.png)

Precision Deadtime Correction:
(https://probesoftware.com/smf/gallery/17_07_07_22_6_52_03.png)

Super Precision Deadtime Correction:
(https://probesoftware.com/smf/gallery/17_07_07_22_6_52_37.png)

All plots are scaled the same. Count rates at 200 nA are: Spec 2 LIFL 46,700 cps; Spec 3 PETL 288,800 cps; Spec 5 LIFL 32,300 cps.

Very convincing win for using the Super Precision Deadtime correction. I almost want to turn it into an animated gif.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 08, 2022, 10:25:35 AM
Cool data!   Spectrometer 3 with the large PET really shows the benefits of the six term dead time expression very nicely!   With the traditional expression the k-ratios on spec 3 start to "head south" around 50 nA. 

A couple of other observations that I'm sure you also see:

It also demonstrates the "simultaneous k-ratio" test using the same data set!  That is to say, spectrometer 2 large LIF either has an alignment problem or perhaps an asymmetrical diffraction issue (just as I see on my spec 3 with a large LiF).  Of course it could be that the other two spectrometers are off and spec 2 is fine, but if we take a look at a quick calculation in CalcZAF for TiO2 (because you used a pure element as the primary standard), we see a calculated k-ratio of around 0.55:

SAMPLE: 32767, TOA: 40, ITERATIONS: 0, Z-BAR: 16.39299

 ELEMENT  ABSCOR  FLUCOR  ZEDCOR  ZAFCOR STP-POW BKS-COR   F(x)u      Ec   Eo/Ec    MACs
   Ti ka   .9950  1.0000  1.0861  1.0806  1.1251   .9653   .9770  4.9670  3.0199 91.5617
   O  ka  6.6118  1.0000   .8910  5.8910   .8469  1.0521   .1060   .5317 28.2114 13655.4

 ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA KILOVOL                                       
   Ti ka  .00000  .55477  59.950   -----  33.333   1.000   15.00                                       
   O  ka  .00000  .06798  40.050   -----  66.667   2.000   15.00                                       
   TOTAL:                100.000   ----- 100.000   3.000

So it appears to me that it must be an "effective takeoff angle" issue of some kind for spec 2.  This is a good example of why we need consensus k-ratios, as Nicholas Ritchie has suggested.

I'm also very pleased to see that apparently the new JEOL instrument does not show any beam current "glitches" within this range.  It would be worth seeing a plot of the "picoammeter test" using the same data, but where you disable all the Ti standards except for one at 10 nA, and then plot the k-ratios for TiO2 using that single standard.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 08, 2022, 06:47:08 PM
So Anette sent me her MDB files and I plotted the one remaining test on her constant k-ratio data set using Ti ka, which is the test where one disables all primary standards except one, say at 10 nA, and then analyzes all the secondary standards using that single primary standard.

This is essentially a test of the picoammeter accuracy (once the dead time constant is properly determined).  Here is the data using a single Ti metal standard at 10 nA and all the TiO2 secondary standards from 10 nA to 200 nA. Remember, on the spectrometer 3 PETL crystal this is over 110K cps at 200 nA!

(https://probesoftware.com/smf/gallery/395_08_07_22_6_41_24.png)

Not too bad I'd say!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 10, 2022, 12:53:14 PM
Looking through her data, I note that Anette now has the EPMA record for highest count rate with a constant k-ratio:

(https://probesoftware.com/smf/gallery/395_10_07_22_12_51_55.png)

Spectrometer 3 with a PETL crystal with 540K cps on Ti metal with ~1% accuracy!

"Super high" precision dead time correction expression rules!

 8)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 13, 2022, 09:43:18 AM
Looking through her data, I note that Anette now has the EPMA record for highest count rate with a constant k-ratio:

(https://probesoftware.com/smf/gallery/395_10_07_22_12_51_55.png)

Spectrometer 3 with a PETL crystal with 540K cps on Ti metal with ~1% accuracy!

"Super high" precision dead time correction expression rules!

 8)

OK, so Anette and I went over this Ti data from her JEOL instrument again and we found a small mystery regarding the dead time losses at very high beam currents that maybe someone (SEM Geologist? Brian Joy?) can help us with.

The graph quoted above (from the previous post) isn't quite correct because that plot of k-ratios is not based on the standards (primary and secondary) being measured at the same beam currents, but rather it's the k-ratios using a primary standard measured at one beam current, and all the secondary standards (TiO2) being measured from 10 to 200 nA.  So it's really a plot of the picoammeter accuracy, which does look very good actually.    :)

But the claim of 540K cps on the Ti standard at 200 nA is not correct, because the Ti metal standard used in the graph was measured at a lower beam current.  The secondary TiO2 standards, however, were measured at all the different beam currents, and the count rate on the TiO2 secondary standard at 200 nA would be around half that of the metal, so ~250K cps, which of course is still pretty impressive.

However, a plot of the constant k-ratios using primary and secondary standards (Ti and TiO2) measured at the same beam currents looks like this:

(https://probesoftware.com/smf/gallery/395_13_07_22_9_21_54.png)

It is still quite constant over the range of beam currents, but there is a small uptick in the k-ratios on Sp 3 using a PETL crystal at the highest beam currents.  So what is that uptick from?  Note for the Ti metal standard at 180 and 200 nA, the count rate is indeed over 500K cps!

Well at first we thought maybe the expanded dead time correction needed even more terms of the Taylor expansion series, so we increased them from 6 to 12, and it actually did slightly help the k-ratios, but just barely.  In fact we can see the problem is in the primary standard counts as seen here:

(https://probesoftware.com/smf/gallery/395_13_07_22_9_22_15.png)

The last standard intensity was measured at 200 nA, the one above that at 180 nA, etc.

So even the expanded dead time correction starts to fail at count rates above 500K cps, but only by a percent or so (k-ratio 0.55 to 0.56), which is not even as much as the offset visible in Sp 2 (red circles), probably from an effective takeoff angle problem on that spectrometer.

So we have to wonder what mechanism is causing the dead time to increase at counting rates over 500K cps on Sp 3 (PETL).  Any ideas?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 13, 2022, 02:45:47 PM
However, a plot of the constant k-ratios using primary and secondary standards (Ti and TiO2) measured at the same beam currents looks like this:

(https://probesoftware.com/smf/gallery/395_13_07_22_9_21_54.png)

It is still quite constant over the range of beam currents, but there is a small uptick in the k-ratios on Sp 3 using a PETL crystal at the highest beam currents.  So what is that uptick from?  Note for the Ti metal standard at 180 and 200 nA, the count rate is indeed over 500K cps!

I don’t necessarily have an answer, but I’ve modified my plot of N'₁₂/N'₃₂ versus N'₁₂ for Ti to show both the uncorrected data and corrections based on N = N'/(1 - N'τ) and N = N'/(1 - (N'τ + N'²τ²/2)).  (The measured count rate for Ti Kβ on channel 2/LiFL is represented by N'₁₂, and the measured count rate for Ti Kα on channel 5/LiFH is represented by N'₃₂.)  Note that the non-linear dead time correction introduces systematic error beginning at relatively low count rate, with the fixed ratio (0.0957) under-predicted.  Keep in mind that essentially all non-linear behavior is accounted for by the Ti Kα measurement on channel 5/LiFH (N'₃₂).

I need to see just the right kind of plot in order to approach a problem like this.  I like to see the uncorrected ratios plotted along with the corrected values for the different models for one spectrometer or spectrometer pair at a time, and I like to see a lot of data.  I would have collected more than 55 ratios, but I didn’t want to spend all night in the lab.

What are the actual measured values of Ti Kα cps on Anette’s channel 3/PETL?  Is it possible that your plot illustrates the approach to X-ray counter paralysis?  I find that I reach this point somewhere in the vicinity of 300 kcps (uncorrected), but I haven’t explored this limit in detail.

Do you happen to have the Willis (1993) reference?  It’s pretty obscure.

(https://probesoftware.com/smf/gallery/381_13_07_22_2_26_52.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 13, 2022, 03:24:57 PM
I think you could be correct that the detector itself is getting saturated above 500K cps.

I compared the traditional correction with the expanded correction and I'm getting a smaller difference at low count rates.  Here is the traditional expression on pure Ti metal at 10 nA:

ELEM:       Ti      Ti      Ti
STKF:   1.0000  1.0000  1.0000     ---
STCT:   445.90 2819.27  293.08     ---

And here with the six term expanded expression:

ELEM:       Ti      Ti      Ti
STKF:   1.0000  1.0000  1.0000     ---
STCT:   445.91 2821.42  293.08     ---

That's a difference of 0.0007 or 0.07% on the PETL spectrometer.  On the lower count rate channels the difference is barely even visible at 5 significant figures. Was your 0.09 number the percent difference? 

I attribute this slight difference on the PETL crystal at 2800 cps/nA to the fact that even at relatively reasonable count rates (~28K cps) the traditional expression is already failing in precision.

The Willis paper has been hard to track down.  I've attached what we found below.  The phrase "dead time" does actually appear in the paper but it's on optimizing neural nets!

As requested, I turned off the dead time correction in Probe for EPMA completely (it's a checkbox under Analytical Options) and we obtain the following very *non-constant* k-ratios:

(https://probesoftware.com/smf/gallery/395_13_07_22_3_19_15.png)

 :o
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 13, 2022, 10:55:25 PM
Hi John,

The uncorrected data from Anette appear to indicate that nothing unusual is happening in the channel 3 X-ray counter; the measured k-ratio trends upward monotonically with beam current, as is expected.  This means that the strange upward swing at high count rate (not current) in your plot of corrected k-ratio versus current is likely due to your model for N.  This is exactly why I advocated for plotting in the manner that I did two posts above.  I was even able to point out unphysical behavior in the 2nd order expression for N, manifested as clear negative deviation from the ratio, N₁₂/N₃₂, established in my linear fit.

Brian
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 14, 2022, 07:59:50 AM
Something is happening above 500K cps. Try the Heinrich linear method at count rates over 500K cps and let us know what you see.

It's clear to me at least that the additional terms of the Taylor expansion series in the dead time correction have an enormous benefit in allowing us to maintain constant k-ratios over a much larger range of count rates (beam currents) than before.  This is particularly important for new instruments with large area Bragg crystals that can easily attain these 100K cps count rates at moderate conditions.

(https://probesoftware.com/smf/gallery/395_14_07_22_7_50_07.png)

(https://probesoftware.com/smf/gallery/395_14_07_22_7_56_21.png)
 
(https://probesoftware.com/smf/gallery/395_14_07_22_7_56_34.png)

You'll notice that the extra terms do not affect the lower count rate channels, but they do help enormously with the very high count rates on spectrometer 3. In fact, note in the last (six term expression) plot that the spec 3 k-ratios follow spec 5 wonderfully closely, at least at count rates under 500K cps.

I do think you're right about the paralyzing behavior of the detector at these very high count rates.  You said you saw this occur yourself at count rates over 300K cps.  Why then do you not think it happens at count rates above 500K cps?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 14, 2022, 07:20:19 PM
Something is happening above 500K cps. Try the Heinrich linear method at count rates over 500K cps and let us know what you see.

It's clear to me at least that the additional terms of the Taylor expansion series in the dead time correction have an enormous benefit in allowing us to maintain constant k-ratios over a much larger range of count rates (beam currents) than before.  This is particularly important for new instruments with large area Bragg crystals that can easily attain these 100K cps count rates at moderate conditions.

Yes, I’m aware that the linear model will not produce useful results at high count rate (> several tens kcps).

Your uncorrected data are difficult to interpret in part because both the primary and secondary standards require large, non-linear dead time corrections that lead to a roughly linear appearance of the uncorrected plot of k-ratio versus current.  You also haven’t presented peak searches or wavelength scans so that peak shapes can be compared at increasing count rates.

If the counter is nearing paralysis, then, obviously, the Ti Ka count rate on Ti metal will produce this effect at lower current than TiO2.  This would be manifested as increasing positive deviation from rough linearity at high current on the plot of k-ratio versus current.  If I put a ruler up to your plot, then I can in fact see the apparent k-ratio deviating in this manner (but I need a ruler to see it).

When dealing with these high count rates, it really is necessary to specify whether the stated count rate is corrected or not (like the 506 kcps on Ti metal at 180 nA); this is one advantage of plotting against specified measured or corrected count rate rather than current.  On my channel 2/PETL, I see no obvious evidence for paralysis at 200 nA when measuring Ti Ka on high-purity TiO2 (with measured count rate between 250 and 300 kcps).  When I do a peak search at 400 nA to simulate the count rate on Ti metal, I get a peak with a distinctly flat top, indicating onset of paralysis.

Considering the above, it appears likely that your k-ratios collected above 140 nA are in fact affected by abnormal counter behavior, and so my first impression of the uncorrected ratios was wrong.  (But who could blame me considering that your plot contains no explicit information on measured count rate?)  What bothers me about your k-ratio versus current plots, though, is the fact that I can see patterns in the corrected values.  For instance, why do the corrected ratios for Anette’s channels 2 and 3 decrease in similar fashion when progressing from about 40 to 100 nA?  Why does a maximum appear to occur at 40 nA for the corrected channel 5 ratios?

I think that you need to investigate your model further to see if it is producing unphysical behavior.  I’ve already pointed out a potential problem on my N12/N32 versus N12 plot for Ti (shown again below).  It is absolutely physically impossible for N12/N32 to fall below the ratio determined in my linear fit, as this fit gives the extrapolation to zero count rate (noting that I collected abundant data in the linear region).  You or somebody else absolutely needs to test the higher order models in the same fashion.  Forming a ratio of Ti Ka and Ti Kb (with Kb measured on a spectrometer that produces relatively low count rate) is especially useful because the Kb count rate can be corrected reasonably with the linear model.  If you want to stick with k-ratios, then use a secondary standard that doesn’t contain much of the element under consideration (like Fe in bornite, Cu5FeS4, while using Fe metal as the primary standard).  On my plot of the uncorrected or linearly corrected data, note that no obvious deviation from linearity occurs below 85 kcps.

(https://probesoftware.com/smf/gallery/381_13_07_22_2_26_52.png)
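For anyone who wants to experiment with this style of ratio plot, here is a minimal synthetic-data sketch in Python (the dead time, count rates and the ideal non-extending counter model are all assumptions chosen for illustration; this is not the actual data or fitting procedure used above):

Code: [Select]
import numpy as np

TAU = 1.5e-6                  # assumed dead time (s), same for both channels
TRUE_RATIO = 0.10             # assumed true Kb/Ka intensity ratio
KA_PER_NA = 2500.0            # assumed true Ka cps per nA

def observe(true_cps):        # assumed ideal non-extending counter: N' = N/(1 + N*tau)
    return true_cps / (1.0 + true_cps * TAU)

currents = np.linspace(5, 200, 20)                    # beam currents (nA)
ka_obs = observe(KA_PER_NA * currents)                # high count rate channel (Ka)
kb_obs = observe(TRUE_RATIO * KA_PER_NA * currents)   # low count rate channel (Kb)

ratio_uncorr = kb_obs / ka_obs                        # drifts upward as the Ka channel saturates
ratio_linear = (kb_obs / (1 - kb_obs * TAU)) / (ka_obs / (1 - ka_obs * TAU))

# Fit the (roughly linear) low count rate region of the uncorrected ratio and
# extrapolate back to zero count rate; the intercept should recover the true ratio.
low = kb_obs < 10e3
slope, intercept = np.polyfit(kb_obs[low], ratio_uncorr[low], 1)
print(f"uncorrected ratio at 200 nA:   {ratio_uncorr[-1]:.4f}")
print(f"zero count rate intercept:     {intercept:.4f}  (true ratio was {TRUE_RATIO})")
print(f"max |linear-corrected - true|: {abs(ratio_linear - TRUE_RATIO).max():.2e}")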
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 15, 2022, 09:22:25 AM
Something is happening above 500K cps. Try the Heinrich linear method at count rates over 500K cps and let us know what you see.

It's clear to me at least that the additional terms of the Taylor expansion series in the dead time correction have an enormous benefit in allowing us to maintain constant k-ratios over a much larger range of count rates (beam currents) than before.  This is particularly important for new instruments with large area Bragg crystals that can easily attain these 100K cps count rates at moderate conditions.

Yes, I’m aware that the linear model will not produce useful results at high count rate (> several tens kcps).

Well that's sort of the point of this topic!  To reiterate:

1. Using the constant k-ratio method we can acquire k-ratios that allow us to determine the dead time constants for each spectrometer (and each crystal energy range if desired).

2. We can display the same k-ratio data using a primary standard measured at a single beam current to determine the accuracy of our picoammeter calibration.

3. We can plot k-ratios from multiple spectrometers so we can compare the effective takeoff angles of each of our spectrometers/crystals to determine our ultimate quantitative accuracy.

4. And finally, using the expanded dead time correction expression, we can correct WDS intensities at count rates up to around 500k cps with accuracy not previously possible.

Your uncorrected data are difficult to interpret in part because both the primary and secondary standards require large, non-linear dead time corrections that lead to a roughly linear appearance of the uncorrected plot of k-ratio versus current.  You also haven’t presented peak searches or wavelength scans so that peak shapes can be compared at increasing count rates.

This is what you keep saying, but I really don't think you have thought this through.   There is nothing non-linear about the expanded dead time correction. The dead time expressions (all of them) are merely a logical mathematical description of the probability of two (or more) photons entering the detector within a certain period of time.

The traditional dead time expression, by utilizing only a single term of this Taylor expansion series, is simply a rather crude approximation of this probability, and is therefore only accurate up to limited count rates. The limit depends on the actual dead time: Cameca instruments with roughly 3 usec dead times using the traditional expression are probably only accurate up to ~50K cps, while JEOL instruments with dead times around 1.5 usec may be able to get up to ~80K cps with the traditional expression, as you have shown.

As Willis pointed out in 1993, by utilizing a second term in the dead time expression one can obtain better precision in this probability estimate, and we find that one can get up to count rates around 100K cps or so before the wheels come off.  Maybe a little higher on a JEOL with shorter dead times.

But by utilizing an additional four terms of this probability series, we can now get high accuracy k-ratios at count rates approaching 500K cps.  It's just math.
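For concreteness, and matching the series code posted later in this topic, the six term form written out is:

N = N' / (1 - (N'τ + (N'τ)²/2 + (N'τ)³/3 + (N'τ)⁴/4 + (N'τ)⁵/5 + (N'τ)⁶/6))

where N' is the observed count rate, N is the corrected (true) count rate, and τ is the dead time. The traditional expression keeps only the first term in the denominator sum, and the Willis (1993) expression keeps the first two.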

As far as the effects of peak shapes go at these high currents, I would have thought that the k-ratio data speaks for itself!  But I remember now that I did do a screen capture of the PHA peak shapes looking at Mn Ka on Mn metal at 200 nA last week:

(https://probesoftware.com/smf/gallery/395_15_07_22_8_46_55.png)

The LPET count rates were over 240K cps at 200 nA. Surprisingly good I think for an instrument with 3 usec dead times!  I'll try and remember to do a wavescan at 200 nA next time I'm in the lab, but again, the accuracy of the k-ratio data tells me that we are able to perform quantitative analysis at count rates never before attainable. 

If the counter is nearing paralysis, then, obviously, the Ti Ka count rate on Ti metal will produce this effect at lower current than TiO2.  This would be manifested as increasing positive deviation from rough linearity at high current on the plot of k-ratio versus current.  If I put a ruler up to your plot, then I can in fact see the apparent k-ratio deviating in this manner (but I need a ruler to see it).

When dealing with these high count rates, it really is necessary to specify whether the stated count rate is corrected or not (like the 506 kcps on Ti metal at 180 nA); this is one advantage of plotting against specified measured or corrected count rate rather than current.  On my channel 2/PETL, I see no obvious evidence for paralysis at 200 nA when measuring Ti Ka on high-purity TiO2 (with measured count rate between 250 and 300 kcps).  When I do a peak search at 400 nA to simulate the count rate on Ti metal, I get a peak with a distinctly flat top, indicating onset of paralysis.

Considering the above, it appears likely that your k-ratios collected above 140 nA are in fact affected by abnormal counter behavior, and so my first impression of the uncorrected ratios was wrong.  (But who could blame me considering that your plot contains no explicit information on measured count rate?)  What bothers me about your k-ratio versus current plots, though, is the fact that I can see patterns in the corrected values.  For instance, why do the corrected ratios for Anette’s channels 2 and 3 decrease in similar fashion when progressing from about 40 to 100 nA?  Why does a maximum appear to occur at 40 nA for the corrected channel 5 ratios?

I think that you need to investigate your model further to see if it is producing unphysical behavior.  I’ve already pointed out a potential problem on my N12/N32 versus N12 plot for Ti (shown again below).  It is absolutely physically impossible for N12/N32 to fall below the ratio determined in my linear fit, as this fit gives the extrapolation to zero count rate (noting that I collected abundant data in the linear region).  You or somebody else absolutely needs to test the higher order models in the same fashion.  Forming a ratio of Ti Ka and Ti Kb (with Kb measured on a spectrometer that produces relatively low count rate) is especially useful because the Kb count rate can be corrected reasonably with the linear model.  If you want to stick with k-ratios, then use a secondary standard that doesn’t contain much of the element under consideration (like Fe in bornite, Cu5FeS4, while using Fe metal as the primary standard).  On my plot of the uncorrected or linearly corrected data, note that no obvious deviation from linearity occurs below 85 kcps.

Well, I'm glad you now see it.  And thank you for taking the time to debate this with me. I have to say, all of this argument has actually helped me appreciate exactly how good this new method and expression are.

The small deviations you point out are interesting and perhaps will provide additional insight into the inner workings of our instruments, but it should be noted that they are at the sub-1% level and significantly smaller than the k-ratio variations from one spectrometer to another.  The fact that we can attain 1% k-ratio accuracy up to 500K cps is, to Anette and me at least, the take-home message.

Here's an idea: I can't send you Anette's data until I ask her, but perhaps your best bet for understanding this constant k-ratio method and the new dead time expression is to perform a constant k-ratio run yourself. 

You already own Probe for EPMA, so why don't you just fire it up, go to the Help menu and update it to the latest version so you have the new dead time expression.  Then using Ti metal and TiO2, or any two materials with a large difference in count rates, try it out on your PET and LIF crystals.  Do you have any large area crystals? That is where these effects will be most pronounced. The procedure has been fully documented and is attached below.

Remember, Probe for EPMA has the traditional (single term) expression, the Willis (two term) expression, and the new six term expression, all available with a click of the mouse.  Each is simply a successively more precise formulation of the probability calculation for randomly overlapping time intervals.

(https://probesoftware.com/smf/gallery/395_15_07_22_9_00_30.png)

And by unchecking the Use Dead Time Correction checkbox you can even turn off the dead time correction completely!

With the latest version of Probe for EPMA, it just takes a few minutes to set up a completely automated overnight run using the multiple sample setups feature with different beam currents.  See the attached PDF document for complete details.

Edit by John: updated pdf attachment
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 15, 2022, 09:41:04 PM
Your uncorrected data are difficult to interpret in part because both the primary and secondary standards require large, non-linear dead time corrections that lead to a roughly linear appearance of the uncorrected plot of k-ratio versus current.  You also haven’t presented peak searches or wavelength scans so that peak shapes can be compared at increasing count rates.

This is what you keep saying, but I really don't think you have thought this through.   There is nothing non-linear about the expanded dead time correction. The dead time expressions (all of them) are merely a logical mathematical description of the probability of two (or more) photons entering the detector within a certain period of time.

Linear:  N'/N = 1 - N'τ  (slope = -τ)
Non-linear:  N'/N = 1 - (N'τ + N'²τ²/2)

(https://probesoftware.com/smf/gallery/381_15_07_22_9_35_39.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 15, 2022, 10:29:53 PM
Your uncorrected data are difficult to interpret in part because both the primary and secondary standards require large, non-linear dead time corrections that lead to a roughly linear appearance of the uncorrected plot of k-ratio versus current.  You also haven’t presented peak searches or wavelength scans so that peak shapes can be compared at increasing count rates.

This is what you keep saying, but I really don't think you have thought this through.   There is nothing non-linear about the expanded dead time correction. The dead time expressions (all of them) are merely a logical mathematical description of the probability of two (or more) photons entering the detector within a certain period of time.

Linear:  N'/N = 1 - N'τ  (slope = -τ)
Non-linear:  N'/N = 1 - (N'τ + N'²τ²/2)

(https://probesoftware.com/smf/gallery/381_15_07_22_9_35_39.png)

Now you're just arguing semantics. Yes, the expanded expression is not a straight line the way you're plotting it here (nice plot by the way!), but why should it be? It's a probability series that produces a linear response in the k-ratios. Hence "constant" k-ratios.   8)

It's the additional terms of the Taylor expansion series that allow it to work at high count rates, whereas the single term expression fails miserably, as you have already acknowledged.

And as your plot nicely demonstrates, and I have noted previously, there is little to no difference at low count rates, and hence no "non-linearities" that you seem to be so concerned with. This is good news for everyone because the more precise dead time expressions can be utilized under all beam current conditions.

Think about it: plotting these expanded expressions on a probability scale would produce a curve that approaches a straight line. That's the point about "linear" that I've been trying to make.

And if you plot up the six term dead time expression, you will find that it approaches the actual probability of a dead time event more accurately at even higher count rates, as demonstrated by the constant k-ratio data from both JEOL and Cameca instruments in this topic.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 16, 2022, 09:04:42 AM
Here's something interesting that I just noticed.

When I compare the dead times Anette had in her SCALERS.DAT file with the dead times after she optimized them with her Te, Se and Ti k-ratio data, there is an obvious shift from higher to lower dead time constants:

Ti Ka dead times, JEOL iHP200F, UBC, von der Handt, 07/01/2022
Sp1     Sp2     Sp3     Sp4     Sp5
PETJ    LIFL    PETL    TAPL    LIFL
1.26    1.26    1.27    1.10    1.25    (usec) optimized using constant k-ratio method
1.52    1.36    1.32    1.69    1.36    (usec) JEOL engineer using traditional method


I suspect the reason she found smaller dead time constants using the constant k-ratio method is that her instrument produces quite high count rates, so when the JEOL engineer tried to compensate for those higher count rates (using the traditional dead time expression), he had to increase the apparent dead time constants to get something that looked reasonable. The reason, of course, is that the traditional dead time expression just doesn't cut it at the count rates attained at even moderate beam currents on these new instruments.

In fact I found exactly the same thing on my SX100. Using the traditional single term expression I found I was having to increase my dead time constants to around 4 usec!  That was when I decided to try the existing "high" precision (two term) expression from Willis (1993).  And that helped but it still was showing problems at count rates exceeding 100K cps.

So that is when John Fournelle and I came up with the expanded dead time expression with 6 terms. Once that was implemented everything fell nicely into place and now we can get consistently accurate k-ratios at count rates up to 500K cps or so with dead time constants at or even under 3 usec!  Of course above that 500 K cps count rate we start seeing the WDS detector showing a little of the "paralyzing" behavior discussed earlier. 

I'm hoping that Mike Jercinovic will perform some of these constant k-ratio measurements on his VLPET (very large) Bragg crystals on his UltraChron instrument and see what his count rates are at 200 nA!    :o

It's certainly worth considering how much of a benefit this new expression is for new EPMA instruments with large area crystals. For example, on Anette's PETL Bragg crystals she is seeing count rates on Ti Ka of around 2800 cps/nA. So at 10 nA she's getting 28K cps and at 30 nA she's getting around 84K cps!

That means that using the traditional dead time expression she's already seeing accuracy issues at 30 nA!  And wouldn't we all like to go to beam currents of more than 30 nA and still be able to perform accurate quantitative analyses?

 ;D
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 17, 2022, 09:26:58 AM
Brian Joy's plots of the traditional and Willis (1993) (two term) dead time expressions are really cool, so I added the six term expanded dead time expression to this plot. 

First I wrote code to calculate the dead time corrections for all of the Taylor series expressions from the traditional (single term) to the six term expression that we call the "super high" precision expression. It's extremely simple to generate the Taylor series to as many terms as one wants, as seen here:

Code: [Select]
' For each of the number of Taylor expansion terms
For j& = 0 To 5

    temp2# = 0#
    For i& = 2 To j& + 1
        temp2# = temp2# + cps& ^ i& * (dtime! ^ i&) / i&
    Next i&
    temp# = 1# - (cps& * dtime! + temp2#)
    If temp# <> 0# Then corrcps! = cps& / temp#

    ' Add to output string observed cps divided by corrected (true)
    astring$ = astring$ & Format$(CSng(cps& / corrcps!), a80$) & vbTab
Next j&

The output from this calculation is seen here:

(https://probesoftware.com/smf/gallery/395_17_07_22_9_10_30.png)

The column headings indicate the number of Taylor probability terms in each of the columns (1 = the traditional single term expression). This code is embedded in the TestEDS app under the Output menu, but the modified app has not yet been released by Probe Software.

Plotting up the traditional, Willis and six term expressions we get this plot:

(https://probesoftware.com/smf/gallery/395_17_07_22_9_19_47.png)

The 3, 4 and 5 term expressions plot pretty much on top of each other since this plot only goes up to 200K cps, so they are not shown, but you can see the values in the text output.  On a JEOL instrument with ~1.5 usec dead times, up to about 50K to 80K cps, the traditional expression does a pretty good job, but above that the Willis (1993) expression does better, and above 160K cps we need the six term expression for best accuracy.

As we saw from our constant k-ratio data plots, the six term expression really gets going at 200K cps and higher.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 19, 2022, 01:45:48 PM
We modified the TestEDS app a bit to display results for only the traditional (single term), Willis (two term) and the new six term dead time expressions. We also increased the observed count rates to 300K cps and added output of the predicted (dead time corrected) count rates, as seen here:

(https://probesoftware.com/smf/gallery/395_19_07_22_1_20_11.png)

Note that this version of TestEDS.exe is available in the latest release of Probe for EPMA (using the Help menu).

Now if, instead of plotting the ratio of observed to predicted count rates on the Y axis, we plot the predicted count rates themselves, we obtain this plot:

(https://probesoftware.com/smf/gallery/395_19_07_22_1_20_39.png)

Note that unlike the ratio plot, all three of the dead time correction expressions show curved lines.  This is what I meant when I stated earlier that it depends on how the data is plotted.

Note also that at true (corrected) count rates around 400 to 500K cps we are seeing differences in the predicted intensities between the traditional expression and the new six term expression of around 10 to 20%!

To test the six term expression we might for example measure our primary Ti metal standard at say 30 nA, and then a number of secondary standards (TiO2 in this case) at different beam currents, and then plot the k-ratios for a number of spectrometers first using the traditional dead time correction expression:

(https://probesoftware.com/smf/gallery/395_19_07_22_1_30_06.png)

Note spectrometer 3 (green symbols) using a PETL crystal.  Next we plot the same data, but this time using the new six term dead time correction expression:

(https://probesoftware.com/smf/gallery/395_19_07_22_1_30_23.png)

The low count rate spectrometers are unaffected, but the high intensities from the large area Bragg crystal benefit significantly in accuracy. 

I can imagine a scenario where one is measuring one or two major elements and 3 or 4 minor or trace elements, using 5 spectrometers. The analyst measures all the primary standards at moderate beam currents, but in order to get decent sensitivity on the minor/trace elements the analyst selects a higher beam current for the unknowns. 

Of course one can use two different beam conditions for each set of elements, but that would take considerably longer.  Now we can do our major and minor elements together using higher beam currents and not lose accuracy.    8)

I've always mentioned that for trace elements, background accuracy is more important than matrix corrections, but that's only because our matrix corrections are usually accurate to 2% relative or so.  Now that we know we can see 10 or 20% relative errors in our dead time corrections at high beam currents, I'm going to modify that and say that the dead time correction might be another important source of error for trace and minor elements if one is using the traditional (single term) dead time expression at high beam currents.

The good news is that the six term expression has essentially no effect at low beam currents, so simply select the "super high" precision expression as your default and you're good to go!

(https://probesoftware.com/smf/gallery/395_15_07_22_9_00_30.png)

Test it for yourself using the constant k-ratio method and feel free to share your results here. 
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 20, 2022, 11:20:41 PM
First I wrote code to calculate the dead time corrections for all of the Taylor series expressions from the traditional (single term) to the six term expression that we call the "super high" precision expression. It's extremely simple to generate the Taylor series to as many terms as one wants, as seen here:

Code: [Select]
' For each of the number of Taylor expansion terms
For j& = 0 To 5

    temp2# = 0#
    For i& = 2 To j& + 1
        temp2# = temp2# + cps& ^ i& * (dtime! ^ i&) / i&
    Next i&
    temp# = 1# - (cps& * dtime! + temp2#)
    If temp# <> 0# Then corrcps! = cps& / temp#

    ' Add to output string observed cps divided by corrected (true)
    astring$ = astring$ & Format$(CSng(cps& / corrcps!), a80$) & vbTab
Next j&

For a given function, f(x), the Taylor series generated by f(x) at x = a is typically written as

f(a) + (x - a)f'(a) + (x - a)²f''(a)/2! + … + (x - a)ⁿf⁽ⁿ⁾(a)/n! + …

If a = 0, it is often called the Maclaurin series.  For example, the Maclaurin series for f(x) = exp(-x) looks like this:

exp(-x) = 1 - x + x²/2 - x³/6 + x⁴/24 - x⁵/120 + x⁶/720 - … + xⁿf⁽ⁿ⁾(0)/n! + …

Just for the sake of absolute clarity, could you identify the function, N(N’), that you’re differentiating to produce the Taylor or Maclaurin polynomial (where N’ is the measured count rate and N is the corrected count rate)?  I’m specifically using the term “polynomial” because the series, which is infinite, has been truncated.

The function, N(N’), presented by J. Willis is

N = N'/(1 - (τN' + τ²N'²/2))

where τ is a constant.  How exactly does the polynomial generated from the identified function relate to the expression presented by Willis?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 12:18:17 PM
I've been writing up this constant k-ratio method and the new dead time expressions with several of our colleagues (Aurelien Moy, Zack Gainsforth, John Fournelle, Mike Jercinovic and Anette von der Handt) and hope to have a manuscript ready soon.  So if it seems I've been holding my cards close to my chest, that's the reason why!   :)

Using your notation the expanded expression is:

(https://probesoftware.com/smf/gallery/395_21_07_22_3_32_19.png)

or

(https://probesoftware.com/smf/gallery/395_21_07_22_12_07_43.png)
 
I've been calling it a Taylor series because that is how it is described on the Taylor series Wiki page, but yes, a Maclaurin-type form of the series (i.e., a Taylor series expanded about zero, like your exp(-x) example) is what we are approximating:

https://en.wikipedia.org/wiki/Taylor_series

We are actually using the logarithmic equation now as it works at even higher input count rates (4000K cps anyone?).
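(For what it's worth, if that series is simply carried to infinite order, the sum N'τ + (N'τ)²/2 + (N'τ)³/3 + … converges to -ln(1 - N'τ), so the logarithmic form of the correction would be N = N'/(1 + ln(1 - N'τ)), with the six term expression being a truncation of it.)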

Your comments have been very helpful in getting me to think through these issues, and we will be sure to acknowledge this in the final manuscript.

I'll share one plot from the manuscript which pretty much sums things up:

(https://probesoftware.com/smf/gallery/395_21_07_22_12_22_02.png)

Note that the Y axis (predicted count rate) is in *millions* of cps!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 12:49:20 PM
Here's another way to think of these dead time expressions that Zack, Aurelien and I have come up with:

The traditional (single term) dead time correction expression does describe the probability of a single photon being coincident with another photon, but it doesn't handle the case where two photons are coincident with another photon.
 
That's what the Willis (two term) expression does.

The expanded expression handles three photons coincident with another photon, etc., etc.   The log expression will handle this even more accurately.  Of course at some point the detector physics comes into play when the bias voltage doesn't have time to clear the ionization from the previous event.  Then the detector starts showing paralyzing behavior as has been pointed out.

What's amazing is that we are seeing these multiple coincident photon events even at relatively moderate count rates and reasonable dead times, e.g., >100K cps and 1.5 usec.

But it makes some sense because if you think about it, a 1.5 usec dead time corresponds to a count rate of 1/1.5 usec, or roughly 670K cps, assuming no coincident events.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 21, 2022, 03:46:14 PM
Using your notation the expanded expression is:

(https://probesoftware.com/smf/gallery/395_21_07_22_12_07_43.png)
 
I've been calling it a Taylor series because it is mentioned in the Taylor series Wiki page, but yes, a Maclaurin series is just a Taylor series expanded about zero, and that is the form of the series we are approximating:

https://en.wikipedia.org/wiki/Taylor_series

We are actually using the logarithmic equation now as it works at even higher input count rates (4000K cps anyone?).

A human-readable equation is nice.  Picking through someone else’s code is never fun.

This is the Taylor series generated by ln(x) at x = a (for a > 0):

ln(x) = ln(a) + (1/a)(x-a) – (1/a^2)(x-a)^2/2 + (1/a^3)(x-a)^3/3 – (1/a^4)(x-a)^4/4 + … + (-1)^(n-1)(x-a)^n/(n·a^n) + …

Note that the sign alternates from one term to the next.  A Maclaurin series cannot be generated because ln(0) is undefined.

I don’t see how this Taylor series relates to the equation you’ve written.  I also don’t see how it’s physically tenable, as it can’t be evaluated at N’ = N = 0 (but your equation can be).  When working with k-ratios, N’ = N = 0 is the point at which the true k-ratio for a given effective takeoff angle is found.

I'm confused, but maybe I'm just being dense.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 04:00:24 PM
You're welcome!

You seem to be focusing on nomenclature. If you need to call it something, call it a Maclaurin-like form of the series. I don't see a problem with a zero count rate. If the count rate is zero, we just return a zero for the corrected count rate.

I'm not confused, probably I'm just being dense.    :)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 21, 2022, 04:09:31 PM
You're welcome!

You seem to be focusing on nomenclature. If you need to call it something, call it a Maclaurin-like form of the series. I don't see a problem with a zero count rate. If the count rate is zero, we just return a zero for the corrected count rate.

I'm not confused, probably I'm just being dense.    :)

It's not an issue of nomenclature.  The Taylor series generated by ln(x) blows up when x = 0.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 04:17:07 PM
You're welcome!

You seem to be focusing on nomenclature. If you need to call it something, call it a Maclaurin-like form of the series. I don't see a problem with a zero count rate. If the count rate is zero, we just return a zero for the corrected count rate.

I'm not confused, probably I'm just being dense.    :)

It's not an issue of nomenclature.  The Taylor series generated by ln(x) blows up when x = 0.

As you pointed out, it's not exactly a Taylor series.

I could make a joke here about "natural" numbers, but instead I'll ask: what is the meaning of zero incident photons?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 21, 2022, 05:05:46 PM
You're welcome!

You seem to be focusing on nomenclature. If you need to call it something, call it a Maclaurin-like form of the series. I don't see a problem with a zero count rate. If the count rate is zero, we just return a zero for the corrected count rate.

I'm not confused, probably I'm just being dense.    :)

It's not an issue of nomenclature.  The Taylor series generated by ln(x) blows up when x = 0.

As you pointed out, it's not exactly a Taylor series.

I could make a joke here about "natural" numbers, but instead I'll ask: what is the meaning of zero incident photons?

This is the answer I was looking for.  If it’s not a Taylor series, then you shouldn’t call it by that name.  What is the physical justification for the math?  If the equation is empirical, then how do you know that it will work at astronomical count rates (Mcps) that you can’t actually measure?

When you perform a regression (linear or not), the intercept on the vertical axis (i.e., the point at which N’ = N = 0) gives the ratio for the case of zero dead time, and it is the only point at which N’ = N.  This ratio could be a k-ratio or it could be a ratio of count rates on different spectrometers, or it could be the ratio, N'/I = N/I (if you trust your picoammeter).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 05:46:29 PM
This is the answer I was looking for.  If it’s not a Taylor series, then you shouldn’t call it by that name.

Says the man who claims it's not about nomenclature.      ::)

Yes, it's not exactly Taylor and not exactly Maclaurin.  It's something new based on our modeling and empirical testing.  Call it whatever you want, we're going to call it Taylor/Maclaurin-like.  That drives you crazy doesn't it?   :)

What is the physical justification for the math?  If the equation is empirical, then how do you know that it will work at astronomical count rates (Mcps) that you can’t actually measure?

When you perform a regression (linear or not), the intercept on the vertical axis (i.e., the point at which N’ = N = 0) gives the ratio for the case of zero dead time, and it is the only point at which N’ = N.  This ratio could be a k-ratio or it could be a ratio of count rates on different spectrometers, or it could be the ratio, N'/I = N/I (if you trust your picoammeter).

The physical justification will be provided in the paper.  I think you will be pleased (then again, maybe not!).  The log expression makes this all pretty clear.

The empirical justification is that even these expanded expressions work surprisingly well at over 400K cps as demonstrated in the copious examples shown in this topic. But as we get above these sorts of count rates, the physical limitations (paralyzing behavior) of the detectors start to dominate.

The bottom line is that these expanded expressions work much better than the traditional expression, certainly at the moderate to high beam currents routinely utilized in trace/minor element analyses and high speed quant mapping. And the log expression gives almost exactly the same results as the expanded expressions, so we are quite confident!

I'll just say that the expression we are using does indeed work at a zero count rate, but the expression you are thinking of does not.  It will all be in the paper.

obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre   
       0          0          0          0          0          0          0          0          0   
    1000   1001.502     0.9985   1001.503   0.9984989   1001.503   0.9984989   1001.503   0.9984989   
    2000   2006.018      0.997   2006.027   0.9969955   2006.027   0.9969955   2006.027   0.9969955   
    3000   3013.561     0.9955   3013.592   0.9954898   3013.592   0.9954898   3013.592   0.9954898   
    4000   4024.145      0.994   4024.218   0.993982   4024.218   0.993982   4024.218   0.993982   
    5000   5037.783     0.9925   5037.926   0.9924719   5037.927   0.9924718   5037.927   0.9924718   
    6000    6054.49   0.9910001   6054.738   0.9909595   6054.739   0.9909593   6054.739   0.9909593   
    7000    7074.28     0.9895   7074.674   0.9894449   7074.677   0.9894445   7074.677   0.9894445   

Once again the co-authors and I thank you for all your criticisms and comments, as they have significantly improved our understanding of these processes.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 22, 2022, 11:06:02 AM
Here's another way to think of these dead time expressions that Zack, Aurelien and I have come up with:

The traditional (single term) dead time correction expression does describe the probability of a single photon being coincident with another photon, but it doesn't handle the case where two photons are coincident with another photon.
 
That's what the Willis (two term) expression does.

The expanded expression handles three photons coincident with another photon, etc., etc.   The log expression will handle this even more accurately.  Of course at some point the detector physics comes into play when the bias voltage doesn't have time to clear the ionization from the previous event.  Then the detector starts showing paralyzing behavior as has been pointed out.

What's amazing is that we are seeing these multiple coincident photon events even at relatively moderate count rates and reasonable dead times, e.g., >100K cps and 1.5 usec.

But it makes some sense because if you think about it, a 1.5 usec dead time corresponds to a count rate of 1/1.5 usec, or roughly 670K cps, assuming no coincident events.

A few posts ago I made a prediction that the traditional (single photon coincidence) dead time expression should fail at around 1/1.5 usec assuming a non-random distribution.  That would correspond to about 666K cps (1/1.5 usec).

I just realized that I forgot to do that calculation (though it really should be modeled using Monte Carlo for confirmation), so here is that calculation using 1.5 usec and going to over 600K cps:

(https://probesoftware.com/smf/gallery/395_22_07_22_11_01_08.png)

 :o

The traditional dead time expression fails right at an observed count rate of 666K cps!   Realize that this corresponds to a "true" count rate of over 10^8 cps, so nothing we need to worry about with our current detectors!   Now maybe this is just a coincidence (no pun intended) as I haven't even had time to run this past my co-authors...

The expanded dead time  expressions fail at somewhat lower count rates of course, but still in the 10^6 cps realm of "true" count rates.  The advantage of the expanded dead time expressions is that they are much more accurate than the traditional expression, at count rates we often see in WDS EPMA.
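
Here is a quick numeric check of that claim (a sketch only, the Monte Carlo confirmation is still to come): plug observed rates approaching 1/1.5 usec into the traditional expression and watch the predicted rate blow up.

Code: [Select]
tau = 1.5e-6                               # 1.5 usec dead time
for obs in (400e3, 600e3, 650e3, 666e3):   # observed count rates (cps)
    denom = 1.0 - tau * obs                # traditional expression: N = N'/(1 - tau*N')
    print("observed %8.0f cps -> predicted %12.3e cps" % (obs, obs / denom))
# at exactly 1/tau (about 666.7K cps observed) the denominator reaches zero
# and the predicted (true) count rate becomes infinite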
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 23, 2022, 09:26:05 AM
Now that we have implemented Aurelien Moy's logarithmic expression let's see how it performs.

Here is a plot of observed vs. predicted count rates at 1.5 usec up to 300K observed count rates:

(https://probesoftware.com/smf/gallery/395_23_07_22_9_10_02.png)

Wait a minute, where did the six term (red line) expression go?  Oh, that's right, it's underneath the log (cyan) expression. At under 300K cps observed count rates, these two expressions give almost identical results.  Meanwhile the traditional expression gives a predicted count rate that is about 30% to 40% too low!    :o

OK, let's take it up to 400K cps observed count rates:

(https://probesoftware.com/smf/gallery/395_23_07_22_9_17_47.png)

Now we are just barely seeing a slight divergence between the two expressions which makes sense since the six term Maclaurin-like expression is only an approximation of the probabilities of multiple photon coincidences.

Note that at 1.5 usec dead times this 400K cps observed count rate corresponds to a predicted (true) count rate of over 4000K cps. Yes, you read that right, 4M cps. 

Of course our gas detectors will be paralyzed long before we get to such count rates.  From Anette's WDS data we think this is starting to occur at predicted (true) count rates of over 500K cps, which at 1.5 usec corresponds to an observed count rate of around 250K cps.

But even at these easily attainable count rates, the traditional expression is still off by around 25% relative.   It's all a question of photon coincidence.    :)
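
To put approximate numbers on that divergence, here is a sketch comparing the six term and logarithmic forms at an assumed 1.5 usec dead time (the values on the plots above may differ slightly, but the trend is the same):

Code: [Select]
import math

tau = 1.5e-6

def six_term(obs):
    x = tau * obs
    return obs / (1.0 - sum(x ** i / i for i in range(1, 7)))

def log_form(obs):
    return obs / (1.0 + math.log(1.0 - tau * obs))

for obs in (100e3, 200e3, 300e3, 400e3):
    n6, nl = six_term(obs), log_form(obs)
    print("%8.0f cps observed: six-term %10.0f, log %10.0f, difference %5.2f %%"
          % (obs, n6, nl, 100.0 * (nl - n6) / nl))

Below about 300K cps observed the two agree to a fraction of a percent; by 400K cps observed (several million true cps) the truncated series visibly lags the closed-form sum.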
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 08, 2022, 10:22:07 AM
Over the weekend I went over my MgO/Al2O3/MgAl2O4 consensus k-ratios from May, now that we finally figured out the issues with calibrating our WDS spectrometer dead times using the "constant k-ratios" procedure developed by John Donovan and John Fournelle and also attached to this post as a short pdf file:

https://probesoftware.com/smf/index.php?topic=1466.msg11008#msg11008

and now that we have an accurate expression for dead time correction (see Moy's logarithmic dead time correction expression in Probe for EPMA), we can re-open that old probe data file from May and re-calculate our k-ratios!

So, using the new logarithmic expression we obtain these k-ratios for MgAl2O4 from 5 nA to 120 nA:

(https://probesoftware.com/smf/gallery/395_08_08_22_10_09_04.png)

Note that I blanked out the y-axis values so as not to influence anyone (these will be revealed later once Will Nachlas has a better response rate from the first FIGMAS round robin!) but the point here is to note how *constant* these consensus k-ratios are over a large range of beam currents.
 
What does this mean?  It means we can quantitatively analyze for major elements, minor elements, and trace elements at high beam currents at the same time!

This is particularly important for high sensitivity quantitative X-ray mapping...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 08, 2022, 10:50:53 AM
I almost forgot to post this.  Here's the analysis of the above MgAl2O4 using MgO and Al2O3 as primary standards, first at 30 nA:

St 3100 Set   3 MgAl2O4 FIGMAS
TakeOff = 40.0  KiloVolt = 15.0  Beam Current = 30.0  Beam Size =   10

St 3100 Set   3 MgAl2O4 FIGMAS, Results in Elemental Weight Percents
 
ELEM:       Mg      Al       O
TYPE:     ANAL    ANAL    SPEC
BGDS:      EXP     EXP
TIME:    60.00   60.00     ---
BEAM:    29.85   29.85     ---

ELEM:       Mg      Al       O   SUM 
    28  17.063  37.971  44.985 100.020
    29  17.162  38.227  44.985 100.374
    30  17.234  38.389  44.985 100.608

AVER:   17.153  38.196  44.985 100.334
SDEV:     .086    .210    .000    .296
SERR:     .050    .121    .000
%RSD:      .50     .55     .00

PUBL:   17.084  37.931  44.985 100.000
%VAR:      .41     .70     .00
DIFF:     .069    .265    .000
STDS:     3012    3013     ---

And here at 120 nA:

St 3100 Set   6 MgAl2O4 FIGMAS
TakeOff = 40.0  KiloVolt = 15.0  Beam Current = 120.  Beam Size =   10

St 3100 Set   6 MgAl2O4 FIGMAS, Results in Elemental Weight Percents
 
ELEM:       Mg      Al       O
TYPE:     ANAL    ANAL    SPEC
BGDS:      EXP     EXP
TIME:    60.00   60.00     ---
BEAM:   119.71  119.71     ---

ELEM:       Mg      Al       O   SUM 
    55  17.052  37.617  44.985  99.654
    56  17.064  37.554  44.985  99.603
    57  17.083  37.636  44.985  99.704

AVER:   17.066  37.602  44.985  99.654
SDEV:     .016    .043    .000    .051
SERR:     .009    .025    .000
%RSD:      .09     .11     .00

PUBL:   17.084  37.931  44.985 100.000
%VAR:     -.10    -.87     .00
DIFF:    -.018   -.329    .000
STDS:     3012    3013     ---

This is using the default Armstrong phi/rho-z matrix corrections, but all the matrix expressions give similar results as seen here for the 30 nA analysis:

Summary of All Calculated (averaged) Matrix Corrections:
St 3100 Set   3 MgAl2O4 FIGMAS
LINEMU   Henke (LBL, 1985) < 10KeV / CITZMU > 10KeV

Elemental Weight Percents:
ELEM:       Mg      Al       O   TOTAL
     1  17.153  38.196  44.985 100.334   Armstrong/Love Scott (default)
     2  17.062  38.510  44.985 100.558   Conventional Philibert/Duncumb-Reed
     3  17.126  38.468  44.985 100.580   Heinrich/Duncumb-Reed
     4  17.157  38.369  44.985 100.511   Love-Scott I
     5  17.150  38.186  44.985 100.321   Love-Scott II
     6  17.098  37.989  44.985 100.072   Packwood Phi(pz) (EPQ-91)
     7  17.302  38.321  44.985 100.608   Bastin (original) Phi(pz)
     8  17.185  38.701  44.985 100.871   Bastin PROZA Phi(pz) (EPQ-91)
     9  17.170  38.579  44.985 100.735   Pouchou and Pichoir-Full (PAP)
    10  17.154  38.399  44.985 100.538   Pouchou and Pichoir-Simplified (XPP)

AVER:   17.156  38.372  44.985 100.513
SDEV:     .063    .210    .000    .225
SERR:     .020    .066    .000

MIN:    17.062  37.989  44.985 100.072
MAX:    17.302  38.701  44.985 100.871

Proof once again that we really do not require matrix matched standards.

Again, for most silicates and oxides the problem is *not* our matrix corrections. Instead it's our instrument calibrations, especially dead time calibrations, and of course having standards that actually are the compositions that we claim them to be.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 12, 2022, 09:08:54 AM
OK, I am going to start again this morning because I think I now understand the main reason why BJ and SG have been having so much trouble appreciating these new dead time expressions (aside from nomenclature issues!).   :)  Though SG seems to appreciate most of what we have been trying to accomplish when he states:

...albeit it have potential to work satisfactory for EPMA with some restrictions. That is not bad, and for sure it is much better than keeping using classical simple form of correction.

I will work through a detailed example in a bit, but I'll start with a short explanation "in a nutshell" as they say:

Whatever we call these effects (pulse pile up, dead time, photon coincidence) we have a traditional expression which does not properly handle photon detection at high count rates.  Let's call it dead time, because everybody calls the traditional expression the dead time correction expression.  And all of us agree, there are many underlying causes both in the detector and in the pulse processing electronics, and I wish luck to SG in his efforts towards that holy grail.   :)

Perhaps we need to go back to the beginning and ask: do you agree that we should (ideally) obtain the same k-ratio over a range of count rates (low to high beam currents)?  Please answer this question before we proceed with any further discussion.

You know already my answer from other post about matrix correction vs matrix matched standards. And to repeat that answer it is Absolutely Certainly Yes!

So, yes our k-ratios should remain constant as a function of beam current/count rate given two materials with a different concentration of an element, for a specified emission line, beam energy and takeoff angle.  And yes, we know that this k-ratio is also affected by a number of calibration issues. Dead time being one of these, and of course also spectrometer alignment, effective takeoff angle and whatever else we want to consider.

But the interesting thing about the dead time correction itself, is that the correction becomes negligible at very low count rates! Regardless of whether these "dead time" effects are photon coincidence or pulse pile up or whatever they might be.

So some of you may recall that in the initial FIGMAS round robin you received an email from Will Nachlas asking everyone to perform their consensus k-ratio measurements at a very low beam current. And that was for this very reason: we could not be sure, even at moderate beam currents, that people's k-ratios would be accurate, because of these dead time or pulse pileup (or whatever you want to call them) effects.

So Will suggested that those in the FIGMAS round robin measure their k-ratios at a very low beam current/count rate, as these will be the most accurate k-ratios, which should then be reported. This is exactly the thought that John Fournelle and I had when we came up with the constant k-ratio method:

That these k-ratios should remain constant as a function of higher beam currents if the instrument (and software) are properly calibrated.

Again aside from spectrometer alignment/effective takeoff angle issues, which can be identified from measuring these consensus k-ratios on more than one spectrometer!

Now I need to quote SG again, as this exchange got me thinking (a dangerous thing, I know!):
As I said, call it differently - for example "factor". Dead time constants are constants, constants are constants and does not change - that is why they are called "constants" in the first place. You can't calibrate a constant because if its value can be tweaked or influenced by time or setup then it is not a constant in a first place but a factor or variable.

And I responded:
Clearly it's a constant in the equation, but equally clearly it depends on how the constant is calibrated.  If one assumes that there are zero multiple coincident photons, then one will obtain one constant, but if one does not assume there are zero multiple coincident photons, then one will obtain a different constant. At sufficiently high count rates of course.

I think the issue is that SG is trying to separate out all these different effects in the dead time correction and treat them all separately. And we wish him luck with his efforts.  But we never claimed that our method is a universal method for dead time correction, merely that it is better than the traditional (or as he calls it the classical) expression. 

Roughly speaking, the new expressions allow us to utilize beam currents about 10x greater than before while still maintaining quantitative accuracy.

It is also a fact that if one calibrates their dead time constant using the traditional expression, then one is going to obtain one dead time constant value, but if one utilizes a higher precision dead time expression that handles multiple photon coincidence, then they will obtain a (somewhat) different dead time constant.  This was pointed out some time ago when Anette first reported her constant k-ratio measurements:

https://probesoftware.com/smf/index.php?topic=1466.msg10988#msg10988

And this difference can be seen in the values of the dead time constants calibrated by the JEOL engineer vs. the dead time calibrations using the new higher precision dead time expressions that Anette utilized:

Ti Ka dead times, JEOL iHP200F, UBC, von der Handt, 07/01/2022
Sp1     Sp2    Sp3     Sp4     Sp5
PETJ    LIFL    PETL   TAPL    LIFL
1.26   1.26    1.27    1.1     1.25          (usec) optimized using constant k-ratio method (six term expression)
1.52   1.36    1.32    1.69    1.36         (usec) JEOL engineer using traditional method


The point being that one must reduce the dead time constant when using these new (multiple coincidence) expressions or the intensity data will be over corrected! This will become clearer as we look at some data.  So let's walk through the constant k-ratio method in the next post.
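
Before moving on, here is a tiny sketch of why the fitted constant comes out smaller with the multi-term expressions. Using a hypothetical 1.5 usec "traditional" constant and a 100K cps calibration count rate, it finds the six term constant that reports the same corrected rate at that count rate:

Code: [Select]
def corrected_series(obs, tau, n_terms):
    # N = N'/(1 - sum_{i=1..n} (tau*N')^i / i); n_terms = 1 is the traditional form
    x = tau * obs
    return obs / (1.0 - sum(x ** i / i for i in range(1, n_terms + 1)))

OBS = 100e3          # assumed calibration count rate (observed cps)
TAU_TRAD = 1.5e-6    # hypothetical constant obtained with the traditional expression

target = corrected_series(OBS, TAU_TRAD, 1)   # what the traditional form predicts

# bisect for the dead time that makes the six term form predict the same rate
lo, hi = 0.0, TAU_TRAD
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if corrected_series(OBS, mid, 6) < target:
        lo = mid
    else:
        hi = mid

print("traditional %.2f usec is roughly equivalent to six-term %.2f usec at %.0f cps"
      % (TAU_TRAD * 1e6, 0.5 * (lo + hi) * 1e6, OBS))

The extra terms do part of the work that the inflated traditional constant was doing, so the constant itself must come down.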
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 12, 2022, 09:53:15 AM
OK, this constant k-ratio method is all documented for those using the Probe for EPMA software in the pdf attachment below in this post, but I think it would be more clear if I walked through the process here for those without the PFE software.

I am not familiar with the JEOL or Cameca OEM software, so I cannot say how easy or difficult performing these measurements and calibrations will be using the OEM software, but I will explain what needs to be done and you can let me know how it goes.

The first step is to acquire an appropriate data set of k-ratios. Typically one would use two materials with significantly different concentrations of an element, though the choice of element and emission line is entirely up to you. Also the precise compositions of these materials are not important, merely that they are homogeneous.  All we are looking for is a *constant* k-ratio as a function of beam current.

I suggest starting with Ti metal and TiO2 as they are both pretty beam stable and easy to obtain and can be used with both LIF and PET crystals, so do measure Ti Ka on all 5 spectrometers if you can, so all spectrometers can be calibrated for dead time.  One can also use two Si bearing materials, e.g., SiO2 and say Mg2SiO4 for TAP and PET crystals, though in all cases the beam should be defocused to 10 to 15 um to avoid any beam damage.

So we start by measuring *both* our Ti metal and TiO2 at say 5 nA (after checking for good background positions of course). For decent precision you might need to count for 60 or more seconds on peak. Measure maybe 5 or 8 points at whatever voltage you prefer (15 or 20 keV works fine, the higher the voltage the smaller the surface effects). Then calculate the k-ratios.
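
If you want a feel for how long to count, ordinary Poisson counting statistics give the expected k-ratio precision. A small sketch (the count rates are hypothetical placeholders, substitute your own measured rates):

Code: [Select]
import math

def kratio_rel_sigma(cps_unk, cps_std, t_unk, t_std):
    # approximate 1-sigma relative uncertainty of a k-ratio from Poisson counting
    # statistics (peak counts only, backgrounds ignored)
    return math.sqrt(1.0 / (cps_unk * t_unk) + 1.0 / (cps_std * t_std))

# hypothetical count rates at 5 nA; use your own measured values
print(kratio_rel_sigma(cps_unk=5000, cps_std=9000, t_unk=60, t_std=60))

With a few hundred thousand counts on each material the k-ratio precision is already in the few tenths of a percent range, which is what we want the dead time adjustment to be judged against.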

These k-ratios will have a very small dead time correction, as the count rates are probably pretty low, and for that reason we can assume that these k-ratios are also the most accurate with regard to the dead time correction (hence the request by FIGMAS to measure the consensus k-ratios at low beam currents).  What we are going to do next is again measure our Ti metal and TiO2 materials at increasing beam currents, up to say 100 or 200 nA.

Be sure to measure these materials in pairs at *each* beam current so that any potential picoammeter inaccuracies will be nulled out. In fact this is one of the main advantages of this constant k-ratio dead time calibration method over the traditional method which depends on the accuracy of the picoammeter because it merely plots beam current versus count rate (e.g., the Carpenter dead time spreadsheet).

Interestingly, this constant k-ratio method is somewhat similar to the Heinrich method that Brian Joy discusses in another topic, because both methods look at the constancy of a ratio of two intensities as a function of beam current (here a normal k-ratio, and for Heinrich the alpha/beta line ratio).  However, as has been pointed out previously, the Heinrich dead time calibration method fails at high count rates because it does not handle multiple coincident photon probabilities. Of course the Heinrich method could be fitted using one of the newer expressions (and Brian agrees it then does a better job at higher count rates), but he complains that it then over fits the data. As mentioned in the previous post, that is because the dead time constant value itself needs to be adjusted down to yield a consistent set of ratios.

In any case, we think the constant k-ratio method is easier and more intuitive, so let's continue.

OK, so once we have our k-ratio pairs measured over a range beam currents from say 5 or 10 nA to 100 or 200 nA, we plot them up and we might obtain a plot looking like this:

(https://probesoftware.com/smf/gallery/395_12_08_22_9_58_54.png)

This is using the traditional dead time correction expression. So if this were a low count rate spectrometer this k-ratio plot would be pretty "constant", but the count rate at 140 nA on this PETL spectrometer is around 240K cps!  And the reason the k-ratio increases is that the Ti metal primary standard is more affected by dead time effects due to its higher count rate, so as that under-corrected intensity in the denominator of the k-ratio falls behind at higher beam currents (because the traditional expression breaks down at higher count rates), the k-ratio value increases!  Yes, it's that simple.   :)
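
If it helps to see this numerically, here is a small simulation sketch. It assumes, purely for illustration, that the logarithmic expression is exact (so the "observed" rates are generated by numerically inverting it), that true count rates scale linearly with beam current, and it uses hypothetical count rates per nA chosen to give a low current k-ratio of 0.55:

Code: [Select]
import math

TAU = 1.5e-6   # assumed dead time (s)

def log_corrected(obs):
    # logarithmic expression: true = observed / (1 + ln(1 - tau*observed))
    return obs / (1.0 + math.log(1.0 - TAU * obs))

def traditional_corrected(obs):
    # traditional expression: true = observed / (1 - tau*observed)
    return obs / (1.0 - TAU * obs)

def observed_from_true(true_cps):
    # numerically invert the log expression (bisection) to synthesize an observed rate
    lo, hi = 0.0, (1.0 - math.exp(-1.0)) / TAU * (1.0 - 1e-9)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if log_corrected(mid) < true_cps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

STD_CPS_PER_NA = 3000.0   # hypothetical true cps per nA on the primary standard (Ti metal)
UNK_CPS_PER_NA = 1650.0   # hypothetical true cps per nA on the secondary standard (TiO2)

for na in (10, 20, 40, 80, 140):
    obs_std = observed_from_true(STD_CPS_PER_NA * na)
    obs_unk = observed_from_true(UNK_CPS_PER_NA * na)
    k_trad = traditional_corrected(obs_unk) / traditional_corrected(obs_std)
    k_log = log_corrected(obs_unk) / log_corrected(obs_std)
    print("%4d nA   k (traditional) = %.4f   k (log) = %.4f" % (na, k_trad, k_log))

The log-corrected k-ratio stays at 0.55 by construction, while the traditionally corrected one drifts upward with beam current, just like the plot above.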

Now let's have some fun in the next post.

Edit by John: updated pdf attachment
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 12, 2022, 10:43:54 AM
So we saw in the previous post that the traditional dead time correction expression works pretty well in the plot above at the lowest beam currents, but starts to break down at around 40 to 60 nA, which on Ti metal is around 100K (actual) cps.

If we increase the dead time constant in an attempt to compensate for these high count rates we see this:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_07_33.png)

Using the traditional dead time expression we over compensate at lower beam currents and still under compensate at higher beam currents.  So what's an analyst to do?  I say, use a better dead time expression!  This search led us to the Willis, 1993 (two term Taylor series) expression.  By the way, the dead time reference that SG provided is worth reading:

https://www.sciencedirect.com/science/article/pii/S1738573318302596

Using the Willis 1993 expression helps somewhat as shown here reverting back to our original DT constant of 1.32 usec:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_14_37.png)

By the way I don't know how hard it is to edit the dead time constants in the JEOL or Cameca OEM software, but in PFE the software dead time correction is an editable field, because one might (e.g., as the detectors age) see a problem with their dead time correction (as we have above), and decide to re-calibrate the dead time constants. Then it's easy to update the dead time constants and re-calculate one's results for improved accuracy.  In PFE there's even a special dialog under the Analytical menu to update all the DT constants for a specific spectrometer (and crystal) for all (or selected) samples in a probe run...

Note also, that these different dead time correction expressions have almost no effect at the lowest beam currents, exactly as we would expect!  A k-ratio of 0.55 is what we would expect for TiO2/Ti.

OK, so looking at the plot above, wow, we are looking pretty good up to 100 nA of beam current!  What happens if we go to the six term expression? It gets even better.  But let's jump right to the logarithmic expression, because it is simply the closed form of this Taylor/Maclaurin (whatever!) series summed to infinitely many terms, and the two give almost identical results:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_25_41.png)

Now we have a straight line but with a negative slope!  What could that mean?  Well, as mentioned in the previous post, it's because once we start including multiple coincident photons in the probability series, we don't need as large a DT constant!  Yes, the exact value of the dead time constant depends on the expression utilized.

So, we simply adjust our dead time constant to obtain a *constant* k-ratio, because as we already know, we *should* obtain the same k-ratios as a function of beam current!  So let's drop it from 1.32 usec to 1.28 usec. Not a big change, but at these count rates the results are very sensitive to the DT constant value:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_33_15.png)

Now we are analyzing from 10 to 140 nA and getting k-ratios within our precision.  Not bad for a day's work, I'd say!
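
For those doing this outside of Probe for EPMA, the adjustment can also be automated: scan candidate dead time values and keep the one that makes the slope of the k-ratio versus beam current regression closest to zero. Here is a sketch that synthesizes its own "measurements" from an assumed 1.28 usec dead time and then recovers that value; in practice you would replace the synthesized arrays with your own measured count rate pairs:

Code: [Select]
import math

TAU_TRUE = 1.28e-6                                  # assumed dead time for the synthetic data
CURRENTS = [10, 20, 40, 80, 140]                    # nA
STD_CPS_PER_NA, UNK_CPS_PER_NA = 3000.0, 1650.0     # hypothetical true cps per nA

def log_corrected(obs, tau):
    return obs / (1.0 + math.log(1.0 - tau * obs))

def observed_from_true(true_cps, tau):
    # invert the log expression numerically (bisection) to synthesize observed rates
    lo, hi = 0.0, (1.0 - math.exp(-1.0)) / tau * (1.0 - 1e-9)
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if log_corrected(mid, tau) < true_cps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def slope(xs, ys):
    # ordinary least squares slope of y versus x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

obs_std = [observed_from_true(STD_CPS_PER_NA * i, TAU_TRUE) for i in CURRENTS]
obs_unk = [observed_from_true(UNK_CPS_PER_NA * i, TAU_TRUE) for i in CURRENTS]

# scan candidate dead times (1.000 to 2.000 usec in 1 ns steps) and keep the one
# giving the flattest k-ratio versus beam current trend
best_tau, best_slope = None, None
for tau_ns in range(1000, 2001):
    tau = tau_ns * 1e-9
    k = [log_corrected(u, tau) / log_corrected(s, tau) for u, s in zip(obs_unk, obs_std)]
    m = slope(CURRENTS, k)
    if best_slope is None or abs(m) < abs(best_slope):
        best_tau, best_slope = tau, m

print("flattest k-ratio trend at %.3f usec (slope %.2e)" % (best_tau * 1e6, best_slope))

The printed value recovers the assumed 1.28 usec; with real measurements the low count rate k-ratios act as the sanity check on the result.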

I have one more post to make regarding SG's discussion of the other non-probabilistic dead time effects he has mentioned, because he is exactly correct.  There are other dead time effects that need to be dealt with, but I am happy to simply have improved our quantitative accuracy at these amazingly high count rate/high beam currents.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 16, 2022, 10:53:17 AM
So continuing on this overview of the constant k-ratio and dead time calibration topic which started with this post here:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100

I thought I would re-post these plots because they very nicely demonstrate the higher accuracy of the logarithmic dead time correction expression compared to the traditional linear expression at moderate to high beam currents:

(https://probesoftware.com/smf/gallery/395_15_08_22_8_52_10.png)

Clearly, if we want to acquire high speed quant maps or measure major, minor and trace elements together, it's pretty obvious that the new dead time expressions are going to yield more accurate results.

And if anyone still has concerns about how the new logarithmic expression performs at low beam currents, simply examine this zoom of the above plot showing results at 10 and 20 nA:

(https://probesoftware.com/smf/gallery/395_15_08_22_9_10_21.png)

All three are statistically identical at these low beam currents!  And remember, even at these relatively low count rates there are still some non-zero number of multiple photon coincidence events occurring, so we would argue that even at these low count rates, the logarithmic expression is the more accurate expression.

OK, with that out of the way, let's proceed with the promised discussion regarding SG's comments on the factors contributing towards dead time effects in WDS spectrometers, because there is no doubt that several factors are involved in these dead time effects, both in the detector itself and the electronics.

However we measure these dead time effects by counting photons, they are all combined in our measurements, so the difficulty is in separating them out.  The good news is that these various effects may not all occur in the same count rate regimes.

For example, we now know from Monte Carlo modeling that even at relatively low count rates, multiple photon coincidence events are already starting to occur, as seen in the above plot starting around 30 to 40 nA (>50K to 100K cps) on some large area Bragg crystals.

As the data reveals, the traditional dead time expression does not properly deal with these events, so that is the rationale for the multiple term expressions and finally the new logarithmic expression. So by using this new log expression we are able to achieve normal quantitative accuracy up to count rates of 300K to 400K cps (up to 140 nA in the first plot). That's approximately 10 times the count rates that we would normally limit ourselves to for quantitative work!

As for nomenclature I resist the term "pulse pileup" for WDS spectrometers because (and I discussed this with Nicholas Ritchie at NIST), to me the term implies a stoppage of the counting system as seen in EDS spectrometers.

However, in WDS spectrometers we correct the dead time in software, so what we are attempting to predict are the photon coincidence events, regardless of whether they are single or multiple photon coincidences. And as these events are governed entirely by probabilistic parameters (i.e., count rate and dead time), we merely have to anticipate this mathematically, hence the logarithmic expression.

To remind everyone, here is the traditional dead time expression which only accounts for single photon coincidence:

(https://probesoftware.com/smf/gallery/395_16_08_22_10_21_05.png)

And here is the new logarithmic expression which accounts for single and multiple photon coincidences:

(https://probesoftware.com/smf/gallery/395_09_08_22_7_39_33.png)

Now, what about even higher count rates, say above 400K cps?  Well that is where I think SG's concerns with hardware pulse processing start to make a significant difference. And I believe we can start to see these "paralyzing" (or whatever we want to call them) effects at count rates over 400K cps (above 140 nA!) as shown here, first by plotting with the traditional dead time expression:

(https://probesoftware.com/smf/gallery/395_16_08_22_10_41_33.png)

Pretty miserable accuracy starting at around 40 to 60 nA.  Now the same data up to 200 nA, but using the new logarithmic expression:

(https://probesoftware.com/smf/gallery/395_16_08_22_10_41_50.png)

Much better obviously, but also under-correcting starting at about 160 nA of beam current, which corresponds to a predicted count rate on Ti metal of around 450K cps!   :o

So yeah, the logarithmic expression starts to fail at these extremely high count rates starting around 500K cps, but that's a problem we will leave to others, as we suspect these effects will be hardware (JEOL vs. Cameca) specific.

Next we'll discuss some of the other types of instrument calibration information we can obtain from these constant k-ratio data sets.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 17, 2022, 12:49:07 PM
So once we have fit our dead time to yield constant k-ratios over a large range of count rates (beam currents), we can now perform high accuracy quantitative analysis from 5 or 10 nA to several hundred nA, depending on our crystal intensities and our spectrometer dead times.

Because as we can all agree, we should obtain the same k-ratios (within statistics of course) at all count rates (beam currents). And we can also agree that due to hardware/electronic limitations (as SEM Geologist has correctly pointed out) our accuracy may be limited at count rates that exceed 300K or so cps. And we can see that in the 200 nA plots above when using Ti ka on a large PET crystal when we exceed 400K cps.

But now we can perform additional tests using this same constant k-ratio data set that we used to check our dead time constants, for example we can test our picoammeter linearity.

You will remember the plot we showed previously after we adjusted our dead time constant to 1.28 usec to obtain a constant k-ratio up to 400K cps:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_33_15.png)

We could further adjust our dead time to flatten this plot at the highest beam current even more, but if we examine the Y axis values, we are clearly within a standard deviation or so.

Now remember, this above plot is using both the primary *and* secondary standards measured at the same beam currents. So both standards at 10 nA, both at 20 nA, both at 40 nA, and so on. And we do this to "null out" any inaccuracy of our picoammeter.

Well, to test our picoammeter linearity we simply utilize a primary standard from a *single* beam current measurement and then plot our secondary standards from *all* of our beam current measurements as seen here:

(https://probesoftware.com/smf/gallery/395_17_08_22_12_27_14.png)

Well that is interesting, as we see a (very) small discontinuity around 20 to 40 nA in the k-ratios on Anette's JEOL instrument when using a single primary standard. This is probably due to some (very) small picoammeter non-linearity, because after all, we are now depending on the picoammeter to extrapolate from our single primary standard measured at one beam current, to all the secondary standards measured at various beam currents!

In Probe for EPMA this picoammeter linearity test is easy to perform using the constant k-ratio data set: simply use the string selection control in the Analyze! window to select all the primary standards and disable them, then select one of these primary standards (at one of the beam currents) and enable just that one. Then, from the Output | Output XY plots for Standards and Unknown dialog, calculate the secondary standards for all the beam currents, in this case using beam current for the X axis and the k-ratio for the Y axis.
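
In code form, the difference between the two tests is just which primary standard measurement ends up in the denominator. A sketch with hypothetical numbers (not PFE code):

Code: [Select]
import math

TAU = 1.28e-6   # dead time (s), assumed already calibrated

def log_corrected(obs):
    return obs / (1.0 + math.log(1.0 - TAU * obs))

# hypothetical measurements: nominal beam current (nA), observed cps on the
# secondary standard (e.g. TiO2) and on the primary standard (e.g. Ti metal)
data = [
    (10,  16200,  28700),
    (20,  32100,  56600),
    (40,  62800, 110300),
    (80, 119500, 207200),
]

# paired k-ratios: the primary standard measured at the *same* current goes in the
# denominator, so any picoammeter non-linearity cancels
for na, unk, std in data:
    print(na, "nA  paired k =", round(log_corrected(unk) / log_corrected(std), 4))

# single-primary k-ratios: one primary standard measurement (here the 10 nA one) is
# beam-current normalized and reused for every current, so the result now depends on
# the picoammeter readings
na0, _, std0 = data[0]
std_cps_per_na = log_corrected(std0) / na0
for na, unk, _ in data:
    print(na, "nA  single-primary k =", round(log_corrected(unk) / (std_cps_per_na * na), 4))

If the picoammeter is perfectly linear the two lists agree; a step or kink in the single-primary list (like the small one visible around 20 to 40 nA above) points at the picoammeter.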

Stay tuned for further instrument calibration tests that can be performed using this simple constant k-ratio data set.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 19, 2022, 12:28:11 PM
And now we come to the third (and to my mind, the elephant in the room) aspect in the use of these constant k-ratios, as originally described by John Fournelle and myself.

We've already described how we can use the constant k-ratio method (pairs of k-ratios measured at multiple beam currents) to adjust our dead time constant to obtain a constant k-ratio over a large range of count rates (beam currents) from zero to several hundred thousand (300K to 400K) cps, as we should expect from our spectrometers (since the lowest count rate k-ratios should be the least affected by dead time effects).  Note that by measuring both the primary and secondary standards at the *same* beam current, we "null out" any non-linearity in our picoammeter.

Next we've described how one can check our picoammeter linearity by utilizing, from the same data set, a single primary standard measured at one beam current and plotting our secondary standard k-ratios (from the range of beam currents) to test the beam current extrapolation, and therefore the linearity, of the picoammeter system.

Finally we come to the third option using this same constant k-ratio data set, and that is the simultaneous k-ratio test.  Now in the past we might have only measured these k-ratios on each of our spectrometers (using the same emission line) at a single beam current as described decades ago by Paul Carpenter and John Armstrong.  But our constant k-ratio data set (if measured on all spectrometers using say the Ti Ka line on LIF and PET and the Si Ka line on PET and TAP), already contains these measurements, so let's just plot them up as seen here:

(https://probesoftware.com/smf/gallery/395_19_08_22_12_07_26.png)

Immediately we can see that two of these spectrometers (from Anette von der Handt's JEOL instrument at UBC) are very much in agreement, but spectrometer 2 is off by some 4% relative.  This is not an uncommon occurrence (I see the same effect on spectrometer 3 of my old Cameca instrument).  Please note that when I first measured these simultaneous k-ratios, when my instrument was new, they were all within a percent or two, as seen here:

https://probesoftware.com/smf/index.php?topic=369.msg1948#msg1948

but a discrepancy this large is concerning on a new instrument.  Have you checked your own instrument?  Here are the results from my instrument acceptance testing (see section 13.3.9):

https://epmalab.uoregon.edu/reports/Additional%20Specifications%20New.pdf

Note that spectrometer 3 was slightly more problematic than the other spectrometers even back then...

But it should cause us much concern, because how can we begin to compare our consensus k-ratios from one instrument to another, if we can't even get our own spectrometers to agree with each other on the same instrument?

 :(

If you want to get a nice overview of all three of these constant k-ratio tests, start a few posts above beginning here:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on August 19, 2022, 03:38:05 PM
Are you sure your sample is perpendicular to the beam? I see something similar on our SX100 too, but I am fairly sure that the holder there is not perfectly perpendicular to the beam compared with the newer SXFiveFE. Also, a recent replacement of the BSE detector on the SX100 made me aware that it (actually the metal cover plate) can affect the counting efficiency over part of the spectrometer range.

BTW, where is that cold finger or other cryo system mounted? Maybe it is affecting (shadowing) part of the X-rays. Large crystals are particularly sensitive to shadowing effects, as our bitter experience with the newer BSE detector has shown. Gathering a huge number of k-ratios is not in vain; it will be very helpful in identifying some deeply hidden problems with anomalously behaving spectrometers! Let's not stop, but move on!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 20, 2022, 09:46:31 AM
Are you sure your sample is perpendicular to the beam? I see something similar on our SX100 too, but I am fairly sure that the holder there is not perfectly perpendicular to the beam compared with the newer SXFiveFE. Also, a recent replacement of the BSE detector on the SX100 made me aware that it (actually the metal cover plate) can affect the counting efficiency over part of the spectrometer range.

As we have discussed there are several mechanical issues which might explain ones spectrometers producing different k-ratios. 

Some relate to sample tilt as you say. Which is why we should if possible measure our constant k-ratios on all of our spectrometers, and include (as Aurelien mentioned in the consensus k-ratio procedure) measuring these k-ratios at several stage positions several millimeters apart in order to calculate a sample tilt.  Probe for EPMA reports sample tilt automatically when using mounts that have three fiducial markings.

The k-ratios above were acquired on Anette's instrument so I cannot say, but I doubt very much that she could have tilted the sample enough to produce a 4% difference in intensity!

The other mechanical issues can be spectrometer alignment or even asymmetrical diffraction of the Bragg crystals resulting in a difference in the effective takeoff angle.  Even more concerning is spectrometer to column positioning due to manufacturing mistakes, as Caltech discovered on one of their instruments many years ago.

On this JEOL 733 instrument Paul Carpenter and John Armstrong found that the so-called "hot" Bragg crystals provided by JEOL were not diffracting symmetrically, resulting in significant differences in their simultaneous k-ratio testing. 

After that had been sorted out by replacing those crystals with "normal" intensity crystals, they found there were still significant differences in the simultaneous k-ratios, which they eventually tracked down to the column being mechanically displaced from the center of the instrument, enough to cause different k-ratios on some spectrometers.

This was the *last* JEOL 733 delivered as JEOL had already started shipping the 8900.  How many of those older instruments also had the electron column mechanically off-center?  What about your instrument?  There is only one way to find out!    ;D

Measure your constant k-ratios on all your spectrometers (over a range of beam currents) and check that:

1. your dead times are properly calibrated

2. your picoammeter is linear

3. you get the same k-ratios within statistics on all spectrometers


BTW, where is that cold finger or other cryo system mounted? Maybe it is affecting (shadowing) part of the X-rays. Large crystals are particularly sensitive to shadowing effects, as our bitter experience with the newer BSE detector has shown. Gathering a huge number of k-ratios is not in vain; it will be very helpful in identifying some deeply hidden problems with anomalously behaving spectrometers! Let's not stop, but move on!

I don't know about Anette's instrument, but on our SX100 we see the same sorts of differences in one spectrometer and we have no cold finger. We use a chilled baffle over the pump:

https://probesoftware.com/smf/index.php?topic=646.msg3823#msg3823

But that's certainly something worth checking if your simultaneous k-ratios do not agree and you have a cold finger in your instrument.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on August 21, 2022, 10:45:44 AM
One more real life experience which can bias k-ratios: an increase in noise in the detector-preamplifier circuit. I discovered a month ago that we have such problems on some spectrometers. On Cameca instruments it is very easy to test: set the gain to 4095 while the beam is blanked and watch the ratemeter. It should show only sporadic jumps at the blue level (a very few counts per second) from cosmic rays (yes, a spectrometer can be hit by those  :o  without a problem). If there are 10-1000 cps, there is a potential problem. On a high pressure spectrometer that is not so important, but on a low pressure spectrometer the problem grows in severity. We have some noise getting in at such gain on a few spectrometers of our SX100 and SXFiveFE, where in most cases it disappears with the gain reduced to 2000-3000. However, one of the spectrometers produces not 1000 cps but 100000 cps with the gain at 2500. After inspecting the signal with an oscilloscope I found that it has much higher noise than the signals of the other spectrometers, and after opening the metal casing where the preamplifier is housed I found that one of the HV capacitors is cracked, which is an obvious noise source. Cracking of these disc capacitors probably does not happen in a day, and I guess such a crack could creep, making the noise increase slowly over many years.

Why is noise important? Because if noise leaks into the counting (and triggers counts), then dead-time corrected k-ratios will be biased, as such noise affects the detection more at low count rates than at high count rates. I think older JEOL models (the new ones too?) would be affected even more by such hardware aging, as the background noise is passed to the PHA. In the case of Cameca instruments it is much easier to identify such a problem.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 22, 2022, 08:39:43 AM
This reminds me of one of the tests that John Armstrong performed on one of the first JEOL 8530 instruments at Carnegie Geophysical.  I think he called it the "beam off test", because he would test the PHA electronics when the beam was turned off, and what he found was that he was seeing x-ray counts with no electron beam!

These counting artifacts were eventually tracked down to JEOL changing suppliers for some chips in the spectrometer pre-amp which were noisier than the original specification and therefore causing these spurious counts.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 22, 2022, 09:39:56 AM
Parametric Constant:

a. A constant in an equation that varies in other equations of the same general form, especially such a constant in the equation of a curve or surface that can be varied to represent a family of curves or surfaces

At this point I think it would be helpful to discuss what we mean when we refer to the dead time as a "constant".

Because if there is one thing we can be sure of, it's that the dead time is not very constant since it can obviously vary from one spectrometer to another!  More importantly it can even vary for a specific detector as it becomes contaminated or as the electronic components age over time. I've even seen the dead time constant change after replacing a P-10 bottle!  P-10 composition?

In addition, we might suspect that the dead time could vary as a function of x-ray emission energy or perhaps the bias voltage on the detector, though these possible effects are still under investigation by some of us.  I hope some of you readers will join in these investigations.

So this should not be surprising to us, since the dead time performance of these WDS systems includes many separate effects in both the detector and the counting electronics (and possibly even satellite line production!), all of which are convolved together under the heading of the "dead time constant".

But it should also be clear that when we speak of a dead time "constant", what we really mean is a dead time "parametric constant", because it obviously depends on how we fit the intensity data and more importantly, what expression we utilize to fit the intensity data. Here is an example of the venerable dead time calculation spreadsheet from Paul Carpenter plotting up some intensities (observed intensity vs. beam current):

(https://probesoftware.com/smf/gallery/395_22_08_22_8_52_18.png)

The question then becomes: which dead time "constant" should we utilize in our data corrections?  That is, should we be fitting the lowest intensities, the highest intensities, or all the intensities? How then can we call this thing a "constant"?   :D
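
For comparison, here is a sketch of the kind of fit the traditional intensity versus beam current calibration performs, in the simple linearized form that follows from the single term expression with N proportional to current: N'/I = c - cτN' (my own simplified formulation, not the spreadsheet itself). Note that this fit leans directly on the picoammeter readings:

Code: [Select]
def fit_line(xs, ys):
    # ordinary least squares fit y = a + b*x; returns (a, b)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# hypothetical observed count rates (cps) at each beam current (nA) on one material,
# synthesized assuming 3000 true cps/nA and a 1.5 usec dead time
currents = [10, 20, 40, 80, 120]
observed = [28708, 55046, 101695, 176471, 233766]

# with N = N'/(1 - tau*N') and N proportional to current I, N'/I = c - c*tau*N',
# so a straight line fit of N'/I against N' yields tau = -slope/intercept
a, b = fit_line(observed, [o / i for o, i in zip(observed, currents)])
print("true cps per nA = %.0f, dead time = %.2f usec" % (a, -b / a * 1e6))

With real data, which does not follow the single term model exactly at high count rates, fitting different count rate ranges returns somewhat different "constants", which is exactly the point being made here.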

Here is another thought: when we attempt to measure some characteristic on our instruments, what instrumental conditions do we utilize for measuring that specific characteristic? In other words how do we optimize the instrument to get the best measurement?

For an example by analogy, when characterizing trace elements we might increase our beam energy to create a larger interaction volume containing more atoms, and we will probably also increase our beam current, and probably also our counting time, as all three changes will improve our sensitivity for that characterization.  But we also want to minimize other instrumental conditions which might add uncertainty/inaccuracy to our trace characterization.  So we might for example utilize a higher resolution Bragg crystal to avoid spectral interferences, or perhaps a Bragg crystal with a higher sin theta position to avoid curvature of the background.  Or we could utilize better equations for correction of these spectral interferences and curved backgrounds!    8)

Similarly, that is also what we should be doing when characterizing our dead time "constants"!  So to begin with we should optimize these dead time characterizations by utilizing conditions which create the largest dead time effects (high count rates) and we should also apply better equations which fit our intensity data with the best accuracy (over a large range of beam currents/count rates).

So if we are attempting to characterize our dead time "constant" with the greatest accuracy, what conditions (and equations) should we utilize for our instrument?  First of all we should remove the picoammeter from our measurements, because we don't want to introduce any non-linearity into these measurements since we are depending on differences in count rates at different beam currents.  The traditional dead time calibration fails in this regard. However, both the Heinrich (1966) ratio method and the new constant k-ratio method exclude picoammeter non-linearity from the calculation by design, so that is good.

But should we be utilizing our lowest count rates to calculate our dead time constants? Remember, the lowest count rates not only provide the lowest sensitivity (worse counting statistics), but also contain the *smallest* dead time effects!  Why would we ever want to characterize a parameter when it is the smallest effect it can possibly be?  Wouldn't we want to characterize this parameter when its effects are most easily observed, that is at high count rates?    :o

But then we need to make sure that our (mathematical) model for this (dead time) parameter properly describes these dead time effects at low and high count rates!  Hence the need for a better expression of photon coincidence as described in this topic.

Next let's discuss sensitivity in our dead time calibrations...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 23, 2022, 10:45:52 AM
Before I continue writing about sensitivity, I just want to emphasize an aspect of the constant k-ratio method that I think is a bit under appreciated by some.

And that is that for the purposes of determining the dead time constant (and/or testing picoammeter linearity and/or simultaneous k-ratio testing), the constant k-ratio method does not need to be performed on samples of known composition. Whatever the k-ratio is observed to be (at a low count rate using a high precision logarithmic or multiple term expression), that is all we require for these internal instrumental calibrations.

The two materials could be standards or even unknowns. The only requirement is that they contain significantly different concentrations of an element, and be homogeneous and relatively beam stable over a range of beam currents.

In fact, they can be coated differently (e.g., oxidized or not), and we could even skip performing a background correction for that matter!  We only care about the ratio of the intensities, measured over a range of beam currents/count rates!  :o   All we really require is a significant difference in the count rates between the two materials and then we can adjust the dead time constant (again using a high precision expression) until the regression line of the k-ratios is close to a slope of zero!

 8)

Of course when reporting consensus k-ratios to be compared with other labs using well characterized global standards, we absolutely must perform careful background corrections and be sure that our instrument's electron accelerating energies, effective take off angles and dead time constants are well calibrated!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 25, 2022, 12:49:26 PM
OK, let's talk about sensitivity and the constant k-ratio method!

We've already mentioned that one of the best aspects of the constant k-ratio method is that it depends on a zero slope regression of k-ratios plotted on the y-axis.  We can further appreciate the fact that the low count rate k-ratios are the least affected by dead time effects, so therefore those k-ratios will be the values that are our "fulcrum" when adjusting the dead time constant. Remember, the exact value of these low count rate k-ratios is not important, only that they should be constant over a range of count rates!  So by plotting these k-ratios with a zero slope regression (a horizontal line) we can arbitrarily expand the y-axis to examine our intensity data with excellent precision.

Now let's go back and look at a traditional dead time calibration plot here (using data from Anette von der Handt) where we have plotted on-peak intensities on the y-axis and beam current on the x-axis:

(https://probesoftware.com/smf/gallery/395_25_08_22_11_56_59.png)

I've plotted the multiple points per beam current so we can get a feel for the sensitivity of the plots.  Note that the lower count rates show more scatter than the high count rates, because the scatter you are seeing is the natural counting statistics and will be expanded in subsequent plots. Pay particular attention to the range of the y-axis. In this plot we are seeing a variance of around 45%.

The problem for the traditional method is not only that we have a diagonal line which doesn't reveal much sensitivity, but also that we are fitting a linear model to the data and the linear model only works when the dead time effects are minimal.  It's as though we were trying to measure trace elements at low beam currents!  Instead we should attempt to characterize our dead time effects under conditions that produce significant dead time effects. And that means at high count rates!   :)

All non-zero slope dead time calibration methods will suffer from this lack of sensitivity, though the Heinrich method (like the constant k-ratio method) is at least immune to picoammeter linearity problems.  In fact, because the Heinrich ratio method is also a ratio (of the alpha and beta lines), if we simply plotted those Ka/Kb ratios as a function of beam current/count rate (and fit the data to a non-linear model that handles multiple photon coincidence) it would work rather well!

But I feel the constant k-ratio is more intuitive and it is easier to plot our k-ratios as a zero slope regression. And here is what we see when we do that to the same intensity data as above:

(https://probesoftware.com/smf/gallery/395_25_08_22_12_16_01.png)

Note first of all that merely by plotting our intensities as k-ratios (without any dead time correction at all!), our variance has decreased from 54% to 17%!  Again note the y-axis range and how the multiple data points have expanded, showing greater detail. And keep in mind that the subsequent k-ratio plots will always show the low count rate k-ratios right around 0.56, decreasing slightly to 0.55 as we start applying a dead time correction, because with this PETL spectrometer we are seeing serious count rates even at low beam currents (~28K cps at 10 nA on Ti metal!).

Now let's apply the traditional linear dead time expression to these same k-ratios using the JEOL engineer 1.32 usec dead time constant:

(https://probesoftware.com/smf/gallery/395_25_08_22_12_27_17.png)

Our variance is now only 5.4%!  So now we can really see the details in our k-ratio plots as we further approach a zero slope regression. We can also see that we've increased our constant k-ratio range slightly (up to ~80k cps), but above that things start to fall apart.

So now we apply the logarithmic dead time correction (again using the same dead time constant of 1.32 usec determined by the JEOL engineer using the linear assumption):

(https://probesoftware.com/smf/gallery/395_25_08_22_12_33_40.png)

And now we see that our y-axis variance is only 1.1%, but we also notice we are very slightly over-correcting our k-ratios using the logarithmic expression. Why is that?  It's because even at these relatively moderate count rates, we are still observing some non-zero multiple photon coincidences, which the linear dead time calibration model over fits to obtain the 1.32 usec value.  Remember the dead time constant is a "parametric constant", its exact value depends on the mathematical model utilized. 
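To see why the dead time is "parametric", here is a tiny sketch (the textbook non-paralyzable and paralyzable models are used only as stand-ins here, not the actual expressions from this topic): generate synthetic observed rates with one model and a known dead time, then fit the other model to the same data, and the best-fit dead time comes out different even though both models agree at low count rates:

import numpy as np
from scipy.optimize import curve_fit

tau_gen = 1.30e-6                              # dead time used to generate the data (hypothetical)
n_true = np.linspace(10e3, 300e3, 30)          # true count rates in cps
n_obs = n_true * np.exp(-n_true * tau_gen)     # paralyzable (extending) model

# fit the same data with the non-paralyzable model: n_obs = n_true / (1 + n_true * tau)
nonpar = lambda n, tau: n / (1.0 + n * tau)
tau_fit, _ = curve_fit(nonpar, n_true, n_obs, p0=[1.0e-6])

print("Generated with tau = %.2f usec (paralyzable model)" % (tau_gen * 1e6))
print("Best-fit tau with the non-paralyzable model = %.2f usec" % (tau_fit[0] * 1e6))
# the two models require different tau values to describe the same data, which is
# the sense in which the dead time "constant" depends on the expression utilized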

So by simply reducing the dead time constant from 1.32 to 1.29 usec (a difference of only 0.03 usec!), we can properly deal with all (single and multiple) photon coincidence and we obtain a plot such as this:

(https://probesoftware.com/smf/gallery/395_25_08_22_12_41_19.png)

Our variance is now only 0.5% and our k-ratios are constant from zero to over 300k cps!  And just look at the sensitivity!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Nicholas Ritchie on August 25, 2022, 01:55:56 PM
Pretty impressive!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: jlmaner87 on August 25, 2022, 07:47:47 PM
Incredible work John (et al)! I've tried these new expressions on my new SX5 Tactics and am blown away by the results. I am still plotting/processing the data, but I will share it soon.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 26, 2022, 09:03:16 AM
Pretty impressive!

Thank-you Nicholas.  It means a lot to me and the team.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 26, 2022, 09:06:54 AM
Incredible work John (et al)! I've tried these new expressions on my new SX5 Tactics and am blown away by the results. I am still plotting/processing the data, but I will share it soon.

Much appreciated!

Great work by everyone involved.  John Fournelle and I came up with the constant k-ratio concept, and Aurelien Moy and Zack Gainsforth and I came up with the multi-term and logarithmic expressions, while Anette has provided some amazing data from her new JEOL instrument (wait until you see her "terrifying" count rate measurements!).

We could use some more Cameca data as my instrument has a severe "glitch" around 40 nA. Do you see a similar weirdness around 40 nA on your new Tactis instrument?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: jlmaner87 on August 27, 2022, 07:00:55 AM
I actually skipped 40 nA. I performed k-ratio measurements at 4, 10, 20, 50, 100, 150, 200, and 250 nA. I do see a drop in the k-ratio between 20 and 50 nA. The k-ratio values form (mostly) horizontal lines from 4 to 20 nA, then they decrease (substantially) and form another (mostly) horizontal line from 50 to 250 nA. As soon as I can access the lab computer again, I'll send the MDB file to you.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 27, 2022, 08:47:39 AM
The Cameca instruments switch picoammeter (and condenser?) ranges around 40 to 50 nA so that could be what you are seeing.  SEM Geologist I'm sure can discuss these aspects of the Cameca instrument.

I'll also share some of my Cameca data as I've recently been showing Anette's JEOL because it is a much clearer picture.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 27, 2022, 09:11:16 AM
Here's a different spectrometer on Anette's instrument (spc 5, LIFL) that shows how the sensitivity of the constant k-ratio method can be helpful even at low count rates:

(https://probesoftware.com/smf/gallery/395_27_08_22_8_48_19.png)

First note that at these quite low count rates (compared to spc 3, PETL), the k-ratios are essentially *identical* for the traditional and log expressions (even when using exactly the same DT constants!), just as expected.

Second, note the "glitch" in the k-ratios from 50 to 60 nA.  I don't know what is causing this but we can see that the constant k-ratio method, with its ability to zoom in on the y-axis, allows us to see these sorts of instrumental artifacts more clearly.

Because the k-ratios acquired on other spectrometers at the same time do not show this "glitch", I suspect that this artifact is specific to this spectrometer.  More k-ratio acquisitions will help us to determine the source.

Next I will start sharing some of the "terrifying" intensities from Anette's TAPL crystal.    ;D
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on August 29, 2022, 07:12:56 AM
The Cameca instruments switch picoammeter (and condenser?) ranges around 40 to 50 nA so that could be what you are seeing.  SEM Geologist I'm sure can discuss these aspects of the Cameca instrument.

I'll also share some of my Cameca data as I've recently been showing Anette's JEOL because it is a much clearer picture.

Oh Yeah I could :D

Well, it depends on the machine (whether we have a C1 + C2 W/LaB6 column, or a FEG C2 (no C1) column). In the case of a FEG it is supposed to be smooth over the 1-600 nA range; sometimes some crossover can be observed somewhere between 500-1000 nA when the FEG parameters are set wrong, or when the tip is very old and the standard procedure is no longer relevant (i.e. our FEG).

But in the case of the classical C1 and C2 column, the crossover point depends on the cleanliness of the column (its apertures), as the beam crossover point will drift depending on how contaminated the apertures are. Our SX100 column went un-cleaned for 7 years, and there was some funkiness going on in the 40-50 nA range. After cleaning the column, the crossover is no longer at that spot but at very high beam currents (~500 nA). What I suspect, after seeing the faraday cup (during column cleaning), is that it is quite possible that not the whole beam gets into the cup, but in some cases just part of it (something like the beam being defocused onto the faraday cup hole). So physically the picoammeter could be completely OK, but the beam measurement with the faraday cup inside the column might not capture the full beam in some of the ranges (especially at lower currents). That is where this drifting beam crossover could enter into the observed discrepancies.

On the other hand, the picoammeter circuit is subdivided into ranges: up to 0.5 nA, 0.5-5 nA, 5-50 nA, 50-500 nA, and 500 nA-10 µA(?). It is not completely clear to me how it decides which range to switch to (the column control board tells which range should be selected... no wait, the c.c. board does not decide that, it only passes along the request from the main processing board); probably there are a few measurement loops in the column control board logic to select the most relevant range, and probably this 5*10^x nA boundary is strict only on paper. Finally, only the 50-500 nA and 500 nA-10 µA ranges have potentiometers and can be physically re-calibrated/tuned (albeit I have never needed to do that). Why only those ranges? The job of the picoammeter is really simple: it needs to amplify the received current into a voltage range the ADC works with. It is a single op-amp, but with different feedback resistors for the different ranges. For the highest currents little amplification is needed and thus the feedback resistors are in the kilo-ohm range, whereas low currents require high amplification and thus very high value (hundreds of Mohm) resistors are used. In the case of kilo-ohm resistors the final resistance can be tuned with a serially connected potentiometer, whereas for hundreds-of-Mohm resistors no such potentiometers are available (or rather they are not financially feasible).  Anyway, the analog voltage from this conversion is finally measured with a shared 15-bit ADC (+1 bit for sign) (the same ADC used for all the other column parameters, such as high voltage, emission...), and the final interpretation of that converted digital value is buried somewhere in the digital logic (firmware). That is most probably the main VME processor board (Motorola 68020 (old) or PowerQuiccII (new)), as the column control board contains no processing chip (there are some PAL devices on board for VME<->local data control, but they are too limited for interpretive capabilities). And then this gets a bit tricky: the firmware is loaded during boot, and AFAIK there is no mechanism for altering the hex files (the files uploaded during machine boot). Also I know of no commands in the interpreter to calibrate the faraday cup measurements (albeit there are many cryptic special functions not exposed in the user manuals). I guess such a conversion table could exist in the Cameca SX Shared folder in some binary machine-state files. How to change the conversion is still a mystery to me.

Oh you Probeman, you have just forced me to look closer at the hardware and convinced me to start being paranoid about how volatile these beam current measurements could be. Albeit not so fast: I tested some time ago how the EDS total input rate holds up with increasing current (total estimated input rate on a Bruker Nano Flash SDD at the smallest aperture vs. the current measured with the FC), and it looked rather linear on both the 20 year old SX100 and the 8 year old SXFiveFE, with smooth transitions across all the picoammeter range boundaries and a sensible linearity result (not perfectly linear due to pile-ups of course). Actually, as I am writing this I just got an idea for an ultimate approach to measuring picoammeter linearity exactly with the help of EDS (and EDS has an edge here for such measurements compared to WDS). I will come back soon when I have new measurements and data.

Also, the picoammeter is really pretty simple in design and I don't see many possibilities for it to detune itself (could those potentiometers (un-)screw themselves?). Could resistors crack? (Albeit I have seen that happen many times on different boards of the SX100, but those are power resistors doing a lot of work.) Maybe the conversion tables were set wrongly from the moment of manufacture and this problem only got caught recently after using this new calibration-by-k-ratios method? I just wonder where exactly that 40-50 nA discontinuity you observe is generated and how it could be fixed...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on August 29, 2022, 07:22:15 AM
Second, note the "glitch" in the k-ratios from 50 to 60 nA.  I don't know what is causing this but we can see that the constant k-ratio method, with its ability to zoom in on the y-axis, allows us to see these sorts of instrumental artifacts more clearly.

I don't believe in technological miracles (especially at lower and comparable prices), and I guess the JEOL picoammeter is forced to be segmented into ranges by the same electronic component availability and precision as on Cameca instruments (even a stupidly simple handheld multimeter has that kind of segmentation). And most likely it is a similar problem to the one on your SX100.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 29, 2022, 08:42:55 AM
Oh you Probeman, you have just forced me to look closer at the hardware and convinced me to start being paranoid about how volatile these beam current measurements could be. Albeit not so fast: I tested some time ago how the EDS total input rate holds up with increasing current (total estimated input rate on a Bruker Nano Flash SDD at the smallest aperture vs. the current measured with the FC), and it looked rather linear on both the 20 year old SX100 and the 8 year old SXFiveFE, with smooth transitions across all the picoammeter range boundaries and a sensible linearity result (not perfectly linear due to pile-ups of course). Actually, as I am writing this I just got an idea for an ultimate approach to measuring picoammeter linearity exactly with the help of EDS (and EDS has an edge here for such measurements compared to WDS). I will come back soon when I have new measurements and data.

Here are a few examples from my instrument showing this "glitch" around 40 nA.  Our instrument engineer told me recently that he had made some adjustments to the picoammeter circuits but I have not had time to test again.  I will try to do that as soon as I can.

(https://probesoftware.com/smf/gallery/395_29_08_22_8_27_10.png)

(https://probesoftware.com/smf/gallery/395_29_08_22_8_27_28.png)

Note in the first plot that the glitch occurred at 30 nA!  Note also that I skipped measurements between 30 and 55 nA in the 2nd plot to avoid this glitch!

But here's my problem: the constant k-ratio method should not be very sensitive to the actual beam current since both the primary standard and the secondary standard of the k-ratio are measured at the same beam current. 

And yet the artifact is there on many Cameca instruments.  I distinctly recall that more than one Cameca engineer has simply told me to "stay away from beam currents near 40 nA".  Maybe it's some sort of beam current "drift" issue?

I also think that if one "sneaks up" on the beam current (using 2 nA increments for example) the instrument can handle setting the beam current properly.  I think Will Nachlas has done some constant k-ratio measurements like this on his SXFive instrument.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 29, 2022, 11:35:28 AM
(https://probesoftware.com/smf/gallery/395_29_08_22_8_27_10.png)

I know I'm not the sharpest knife in the drawer, but sometimes I can stare right at something and just not see it. 
 
You will have noticed in the above plot that we see a "glitch" in the k-ratios at 30 nA. And sometimes we see this "glitch" at 40 nA or sometimes at 50 nA.  It always seemed to depend on which beam currents we measured on our Cameca instrument just before and just after the "glitch" but I could not determine the pattern.  The thing that always bothered me was, that if we are indeed measuring our primary and secondary standards at the same beam current, we should be nulling out any picoammeter non-linearity, so I thought we should not be seeing any sort of these "glitches" in the k-ratio data.  It's the one reason I switched to looking at Anette's JEOL data, which did not show any of these "glitches" in the k-ratios.

But first, a short digression on something that I believe is unique to Probe for EPMA, and which, under normal circumstances, is a very welcome feature: the standard intensity drift correction.  Now, all microanalysis software packages perform a beam normalization (or drift) correction, so that intensities are reported as cps/nA. That way, one can not only correct for small changes in beam current over time, but also compare standards and unknowns (and/or elements) acquired at different beam currents, and this correction is applied equally to all element intensities in the sample.

But Probe for EPMA also performs a standard intensity drift correction which tracks the (primary) standard intensities for *each* element over time and makes an adjustment for any linear changes in the standard intensities over time. Basically, if one has acquired more than one set of primary standards,  the program will estimate (linearly) the predicted (primary) standard intensity based on the interpolated time of acquisition of the secondary standard or unknown.

This schematic from the Probe for EPMA User Reference might help to explain this:

(https://probesoftware.com/smf/gallery/395_29_08_22_11_06_45.png)

What this means is that if the standard intensity drift correction is on (as it is by default), and one has acquired more than one set of primary standards, the program will always look for the first primary standard acquired just *before* the specified sample, and also the first primary standard acquired *after* the specified sample.  Then it will estimate what the primary standard intensity should be if the intensity drift was linear between those two primary standard acquisitions, and utilize that intensity for the construction of the sample k-ratio.
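As a minimal illustration of the interpolation itself (this is just a sketch of the idea, not PFE's actual code), the interpolated primary standard intensity is simply a straight line in time between the bracketing primary standard acquisitions:

def interpolated_standard_intensity(t_unknown, t_before, i_before, t_after, i_after):
    # linear interpolation of the primary standard intensity (e.g. cps/nA) to the
    # acquisition time of the secondary standard or unknown (times in consistent units)
    frac = (t_unknown - t_before) / (t_after - t_before)
    return i_before + frac * (i_after - i_before)

# hypothetical example: standard at t = 0 h (1000 cps/nA) and t = 4 h (980 cps/nA),
# unknown acquired at t = 1 h -> 995 cps/nA is used to construct the k-ratio
print(interpolated_standard_intensity(1.0, 0.0, 1000.0, 4.0, 980.0))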

This turns out to be very nice for labs with temperature changes over long runs, where the various spectrometers (and PET crystals) will change their mechanical alignments, and it is applied on an element by element basis. One simply needs to acquire the primary standards every so often, and the Probe for EPMA software will automatically take care of such standard intensity drift issues.  I can't tell you how many times I've been called by a student saying that when they came back in the morning their totals had somehow drifted overnight, hoping there was something they could do to fix this.  And I'd say, sure, just re-run your primary standards again!  And they'd call back: everything is great now, thanks!

But if we turn off the standard intensity drift correction, the Probe for EPMA software will only utilize the primary standard acquired just *before* the secondary standard or unknown sample.  Keep that in mind, please.   So now back to our constant k-ratios.

As you saw in the plot above, I was having trouble understanding why this "glitch" in the constant k-ratios was occurring, and also why it was occurring at seemingly random nA settings, often between 30 nA and 60 nA.

So this morning I started looking more closely at this MnO/Mn k-ratio data, and the first thing I noticed was that I had (correctly) acquired the Mn metal standard first at a specified beam current, and then acquired the secondary MnO standard at the same specified beam current and for each k-ratio set after that.  So OK.

But wait a minute, didn't I just say that if the standard intensity drift correction is turned on (as it is by default!), the program will automatically interpolate between the prior primary standard and the subsequent primary standard?  But with the constant k-ratio data set, we always want to be sure that the k-ratio is constructed from two materials measured at the *same* beam current, in order to eliminate any non-linearity in the picoammeter!

So the first thing I did was turn off that darn standard intensity drift correction and then plot the k-ratios using only a single primary standard. Remember, if we only utilize a single primary standard, then we are extrapolating to the beam current measurements for all the secondary standards measured at multiple beam currents and therefore testing the linearity of the picoammeter!

(https://probesoftware.com/smf/gallery/395_29_08_22_10_42_39.png)

And lo and behold, look at the above picoammeter non-linearity when the Cameca changes the beam current range from under 50 nA to over 50 nA. Clearly the picoammeter ranges require adjustment by our instrument engineer! 

But since we now have the standard intensity drift correction turned off, and we measured each primary standard just before each secondary standard, let's re-enable all the primary standards to produce a normal constant k-ratio plot and see what our constant k-ratio plot looks like now (compare it to the quoted plot above):

(https://probesoftware.com/smf/gallery/395_29_08_22_10_42_59.png)

Glitch begone! Somebody slap me please...

So we've updated the constant k-ratio procedure to note that the standard intensity drift correction (in PFE) should be turned off, and that the primary standard should always be acquired just before the secondary standard so the program is forced to utilize the primary and secondary standards measured at the same beam current.  See attached pdf below.
 
Only in this way (in Probe for EPMA at least) is any picoammeter non-linearity truly nulled out in these constant k-ratio measurements.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on August 30, 2022, 11:06:35 AM
If you update your Probe for EPMA software (from the Help menu), you will get a new menu that allows you to access the latest version of the constant k-ratio method procedure also from the Help menu:

(https://probesoftware.com/smf/gallery/1_30_08_22_11_05_06.png)

If you do not have the Probe for EPMA software, but you would still like to perform these constant k-ratio tests on your instrument, start here and read on:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 31, 2022, 09:23:23 AM
 
included methods now require us to "calibrate" the """dead time constant""" for each of the methods separately, as these "constants" will take different values depending on the dead time correction method used (i.e. with the classical method probably more than 3µs, with the probeman et al. log expression less than 3µs, and with the Willis and 6th term expressions somewhere in between). <sarcasm on>So probably PfS configuration files will address this need and will be a tiny bit enlarged. Is it going to have a matrix of dead time "constants" for 4 methods, and different XTALS, and a few per XTAL for low and high angles...? Just something like 80 to 160 positions to store "calibrated" "dead time constants" (let's count: 5 spectrometers * 4 XTALS * 4 methods * 2 high/low XTAL positions) - how simple is that?<sarcasm off>

No need for sarcasm  :D , it is quite a reasonable question: that is, if the dead time (parametric) constants vary slightly depending on the exact expression utilized, how will we manage this assortment of expressions and constants? 

This post is a response to that question (since SG asked), but the actual audience for this post is probably the typical Probe for EPMA user, on exactly how we manage all these dead time constants and perhaps, do we even require so many?

The simple answer is: it's easy.

But before we get into the details of how all this is handled in Probe for EPMA it might be worth noting a few observations: in most cases the differences in the optimized dead time constants between the various expressions are very small (e.g., 1.32 usec vs. 1.29 usec in the case of Ti Ka on PETL). In fact, for normal sized Bragg crystals (as seen in the previous post of Ti Ka on LIFL), we don't see any significant differences in our results up to 50K cps. For most situations, the exact dead time expression and dead time constant utilized will not be an important consideration.  But if we want to utilize large area crystals at high beam currents on pure metals or oxides (not to mention accurately characterizing our dead time constants for general usage), then we will want to perform these calibrations carefully at high beam currents.

That said, it is still not entirely clear how much of an effect emission line energy or bias voltage has on the exact value of the dead time constant. Probeman's initial efforts on the question of emission line energies are ambiguous thus far (from his Cameca SX100 instrument):

https://probesoftware.com/smf/index.php?topic=1475.msg11017#msg11017

And here is a much larger set of dead times from Philippe Pinard for a number of emission lines, from a few years back on his JEOL 8530 instrument:

https://probesoftware.com/smf/index.php?topic=394.msg6325#msg6325

Pinard's data is also somewhat ambiguous as to whether there is a correlation between emission energy and dead time. Anyway, I will admit that when we started developing software for the electron microprobe we did not anticipate that Probeman might develop new expressions for the correction of dead time, much less that the different expressions would produce slightly different (optimized) dead time constants (it's hard to make predictions, especially about the future!).    :)

So how does Probe for EPMA handle all these various dead time constants? It all starts with the SCALERS.DAT file, which is found in the C:\ProgramData\Probe Software\Probe for EPMA folder (which may need to be unhidden using the View menu in Windows Explorer).

The initial effort to define dead time constants was implemented using a single value for each spectrometer. These are found on line 13 of the SCALERS.DAT file, which can be edited using any plain text editor such as Notepad or Notepad++.

The dead time constants are on line 13 shown highlighted here in red:
     
    "1"      "2"      "3"      "4"      "5"     "scaler labels"
     ""       ""       ""       ""       ""      "fixed scaler elements"
     ""       ""       ""       ""       ""      "fixed scaler xrays"
     2        2        2        2        2       "crystal flipping flag"
     81010    81010    81010    81010    81010   "crystal flipping position"
     4        2        2        4        2       "number of crystals"
     "PET"    "LPET"   "LLIF"   "PET"    "LIF"   "crystal types1"
     "TAP"    "LTAP"   "LPET"   "TAP"    "PET"   "crystal types2"
     "PC1"    ""       ""       "PC1"    ""      "crystal types3"
     "PC2"    ""       ""       "PC25"   ""      "crystal types4"
     ""       ""       ""       ""       ""      "crystal types5"
     ""       ""       ""       ""       ""      "crystal types6"
     2.85     2.8      2.85     3.0      3.0     "deadtime in microseconds"
     150.     150.     140.     150.     140.     "off-peak size, (hilimit - lolimit)/off-peak size"
     80.      80.      70.      80.      70.     "wavescan size, (hilimit - lolimit)/wavescan size"

This line 13 contains the default dead time constants for all Bragg crystals on each WDS spectrometer. The values on this line will be utilized for all crystals on each spectrometer (see below for more on this).

So begin by entering a default dead time constant in microseconds (usec) for each spectrometer on line 13 using your text editor as determined from your constant k-ratio tests. If you have values for more than one Bragg crystal just choose one and proceed below.

And if you have dead time constants for more than a single Bragg crystal per spectrometer, you can also edit lines 72 to 77 for each Bragg crystal on each spectrometer (though only up to 4 crystals are usually found in JEOL and Cameca microprobes).

Each subsequent line corresponds to each Bragg crystal listed above on lines 7 to 12. Here is an example with the edited dead time constant values highlighted in red:

     1        1        1        1        1     "default PHA inte/diff modes1"
     1        1        1        1        1     "default PHA inte/diff modes2"
     1        0        0        1        0     "default PHA inte/diff modes3"
     1        0        0        1        0     "default PHA inte/diff modes4"
     0        0        0        0        0     "default PHA inte/diff modes5"
     0        0        0        0        0     "default PHA inte/diff modes6"
     2.8      3.1      2.85     3.1    3.0     "default detector deadtimes1"
     2.85     2.8      2.80     3.0    3.0     "default detector deadtimes2"
     3.0      0        0        3.1      0     "default detector deadtimes3"
     3.1      0        0        3.2      0     "default detector deadtimes4"
     0        0        0        0        0     "default detector deadtimes5"
     0        0        0        0        0     "default detector deadtimes6"
     0        1        1        0        0     "Cameca large area crystal flag1"
     0        1        1        0        0     "Cameca large area crystal flag2"
     0        0        0        0        0     "Cameca large area crystal flag3"
     0        0        0        0        0     "Cameca large area crystal flag4"
     0        0        0        0        0     "Cameca large area crystal flag5"
     0        0        0        0        0     "Cameca large area crystal flag6"

These dead time constant values on lines 72 to 75 will “override” the values defined on line 13 if they are non-zero.
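If you want to double check your edits without starting the software, a few lines of Python will print the values back to you (this is only a sketch, assuming the whitespace-delimited layout shown above with the quoted description at the end of each line, and the default install path mentioned earlier):

from pathlib import Path

scalers = Path(r"C:\ProgramData\Probe Software\Probe for EPMA\SCALERS.DAT")
lines = scalers.read_text().splitlines()

def numeric_fields(line):
    # everything before the first quoted description on the line
    return line.split('"')[0].split()

print("Default dead times (line 13):", numeric_fields(lines[12]))
for n in range(72, 78):   # per-crystal dead time overrides on lines 72 to 77
    print("Crystal dead times (line %d):" % n, numeric_fields(lines[n - 1]))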

For new probe runs, the PFE software will automatically utilize these dead time values from the SCALERS.DAT file, but what about re-processing data from older runs? How can they utilize these new dead time constants (and expressions)?

For example, once you have properly calibrated all your dead time constants using the new constant k-ratio method (as described in the attached document), and would like to apply these new values to an old run, you can utilize this new feature to easily update all your samples in a single run as described in this link:

https://probesoftware.com/smf/index.php?topic=40.msg10968#msg10968

In addition, it should be noted that Probe for EPMA saves the dead time constant for each element separately (see the Elements/Cations dialog) when an element setup is saved to the element setup database, as seen here:

(https://probesoftware.com/smf/gallery/395_31_08_22_9_04_09.png)

This means that one can have different dead time constants for each element/xray/spectro/crystal combination. So when browsing for an already tuned up element setup, the dead time constant for that element, emission line, spectrometer, crystal, etc. is automatically loaded into the current run. That is also true when loading a sample setup from another probe run. All of this information is loaded automatically and can of course be easily updated if desired.

Now that said, the dead time correction expression type (traditional/Willis/six term/log) is only loaded when loading a file setup from another run.  And in fact Probe for EPMA will prompt the user when loading an older probe file setup if it finds that newer dead time constants (or a newer expression type) are available, as seen here:

(https://probesoftware.com/smf/gallery/395_31_08_22_11_07_03.png)

This feature prevents the user from accidentally using out-of-date dead time constants for acquiring new data.

So in summary, there are many ways to ensure that the user can save, recall and utilize these new dead time constants once the SCALERS.DAT file is edited for the new dead time (parametric) constant values.

Bottom line: edit your dead time correction type parameter in your Probewin.ini file to 4 for using the logarithmic expression as shown here:

[software]
DeadtimeCorrectionType=4   ; 1 = normal, 2 = high precision deadtime correction, 3 = super high precision, 4 = log expression (Moy)

Then run some constant k-ratio tests, on say Ti metal and TiO2.

You will probably notice that most spectrometers with normal sized crystals will yield roughly the same dead time constant, but that your dead time constants on your large area crystals may need to be reduced by 0.02 or 0.04 usec or so (probably more like 0.1 or 0.2 usec less for Cameca instruments) in order to perform quantitative analysis at count rates over 50K cps.

 8)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 03, 2022, 09:35:13 AM
Here is something else I just noticed with the Mn Ka k-ratios acquired on my SX100:

(https://probesoftware.com/smf/gallery/395_03_09_22_9_28_23.png)

The PET/LPET crystals are in pretty good agreement and in fact the k-ratios they yield at around 0.735 (see y-axis) are about right, according to a quick calculation from CalcZAF:

ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA KILOVOL                                       
   Mn ka  .00000  .73413  77.445   -----  50.000   1.000   15.00                                       
   O  ka  .00000  .17129  22.555   -----  50.000   1.000   15.00                                       
   TOTAL:                100.000   ----- 100.000   2.000


But the LIF and LLIF spectrometers produce k-ratios about 3 to 4% lower than they should.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: jlmaner87 on September 08, 2022, 03:07:57 PM
Here are some k-ratio measurement I performed on my Cameca SXFive-Tactis.

Background-corrected count rates (not corrected for dead time) are ~11 kcps on the 4 large crystals and ~ 3 kcps on the standard crystal (sp4) at 4 nA. Count rates are ~185 kcps and 113 kcps at 250 nA for large and standard crystals, respectively.

Traditional DT expression seems to work well up to 50 nA (100 kcps or 34 kcps for large and standard crystals, respectively). Logarithmic expression works well up to at least 150 nA (168 kcps on large crystals), if not higher, especially for sp4 standard size PET crystal.

Additional details are provided on the attached images.

Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 09, 2022, 11:49:14 AM
James,
This is fantastic data, and congrats on an excellently calibrated instrument.  I love seeing those simultaneous k-ratios all agreeing with each other! 

Hey, did you by any chance acquire PHA scans at both ends of your beam current range? 

The more I think about it, the more that I think that at least some of the deviation from constant k-ratios that we are seeing is due to the tuning of the PHA settings. We really need to make sure that our PHA distributions are above the baselines at both the low count rate/beam current and at the highest count rate/beam current.

Here's an example. When I ran some of my Ti Ka k-ratios on TiO2 and Ti metal, I checked the PHA distributions at both ends of the acquisition, first at 10 nA:

(https://probesoftware.com/smf/gallery/395_03_09_22_8_14_02.png)

and then at 200 nA:

(https://probesoftware.com/smf/gallery/395_03_09_22_8_14_20.png)

This is really important to check especially as we get to these high count rates. 
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 09, 2022, 12:03:09 PM
A new feature that is worth taking advantage of in Probe for EPMA is to plot/export the raw on-peak counts on the x axis rather than the beam current. This is a new plot item found in the Output Standard and Unknown XY Plots menu dialog as seen here:

(https://probesoftware.com/smf/gallery/395_09_09_22_11_53_07.png)

Now when plotting/exporting the raw k-ratios for the secondary standard (the primary standard k-ratio will always be 1.000!), the program will plot/export the raw on-peak counts for the secondary standard.  But it's the count rate on the primary standard that we really care about, since that will generally have a higher concentration/count rate and will therefore be more sensitive to the dead time correction. And of course, since the primary standard intensity is in the denominator of the k-ratio, when it loses counts faster (at higher count rates) the k-ratio values will trend up!

So we need to export twice: first select all the primary standards and export their raw on-peak intensities, then select all the secondary standards and export their k-ratios.
 
We then combine the raw on peak counts from the primary standards with the k-ratios from the secondary standards and then we can obtain a plot like the following:

(https://probesoftware.com/smf/gallery/395_07_09_22_10_01_04.png)
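If you export the two data sets to tab-delimited files, a few lines of Python will line them up and make the same sort of plot (this is only a sketch; the file and column names here are placeholders, not PFE's actual export headers):

import pandas as pd
import matplotlib.pyplot as plt

# placeholder file and column names: one export with the primary standard raw on-peak
# count rates, one with the secondary standard k-ratios, both versus beam current
prim = pd.read_csv("primary_std_counts.dat", sep="\t").sort_values("BeamCurrent")
sec = pd.read_csv("secondary_std_kratios.dat", sep="\t").sort_values("BeamCurrent")

merged = pd.merge_asof(sec, prim, on="BeamCurrent", direction="nearest")

plt.plot(merged["OnPeakCounts"], merged["KRatio"], "o")
plt.xlabel("Raw on-peak count rate on the primary standard (cps)")
plt.ylabel("k-ratio (secondary / primary)")
plt.show()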
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 19, 2022, 12:39:54 PM
I want to look at these PHA settings more closely because I think that some of what we are seeing, when performing these constant k-ratio measurements, is due to PHA peak shifting at high beam currents (count rates).

These effects will be different on Cameca and JEOL instruments obviously, so please feel free to share your own PHA scans at low and high count rates so we can try and learn more. This is of course complicated by the fact that on Cameca instruments we tend to leave the bias fixed at a specific voltage (~1320v for low pressure flow detectors and ~1850v for high pressure flow detectors) and then simply adjust the PHA gain setting to position the PHA peak (normally around 2 to 2.5 v in the Cameca 0 to 5 v PHA range). But for the constant k-ratio method we want to instead position the peak slightly *above* the center of the PHA range (at low beam currents), so centered roughly around 3 volts, to avoid the peak shifting down due to pulse height depression (at higher beam currents).
 
Here for example is Mn Ka on Spc2, LPET, a low pressure flow detector at 30 nA:
 
(https://probesoftware.com/smf/gallery/395_19_09_22_12_13_37.png)

Note that the peak is roughly centered around 3 volts. Now using the same bias voltage of 1320v here is the same peak at 200 nA:

(https://probesoftware.com/smf/gallery/395_19_09_22_12_13_55.png)

Please note that the gain is *exactly* the same for both the 30nA and the 200 nA scans!   This is really good news because it means that we don't need to adjust the PHA settings as we go to higher count rates.

But the PHA peak at 200 nA has certainly broadened and shifted down slightly to 2.5 volts or so (which is why we set it a little to the right of the center of the PHA range to begin with!), probably due to pulse height depression.  Note that even though it has broadened out, because we are in integral mode, we don't have to worry about cutting off the higher side of the PHA peak.  The important thing is to keep the peak (including the escape peak!), above the baseline cutoff.

How about a high pressure flow detector? This is a PHA scan on Spc3 LLIF which is a high pressure flow detector, first at 30 nA:

(https://probesoftware.com/smf/gallery/395_19_09_22_12_14_10.png)

and again at 200 nA using the same (1850v)  bias voltage:

(https://probesoftware.com/smf/gallery/395_19_09_22_12_14_24.png)

Again, the gain setting is the same, and there is very little change in the PHA peak (though it is again slightly shifted down and broadened). Now admittedly we are getting a somewhat lower count rate on the LLIF crystal than on the LPET, so I do want to try this again on Spc3 LPET, but still very promising.

Again the take away point: check your PHA distributions at both the lowest and highest count rates to be sure you are not cutting off any emission counts when performing the constant k-ratio method.

On JEOL instruments that is an entirely different story because usually the gain is fixed and the bias voltage is adjusted. Question is: can we keep the JEOL PHA distributions above the baseline as we get to higher count rates using a single pair of bias and gain values?  Anette's initial data suggests, no we can't:

(https://probesoftware.com/smf/gallery/395_03_09_22_8_06_08.png)

I should mention that this PHA shift effect (more pronounced on the higher concentration Si metal primary standard) would tend to produce an upward trend in the constant k-ratio plot, as we see in this post, because the primary standard is in the denominator (and as the primary intensity decreases, the k-ratio tends to increase):

https://probesoftware.com/smf/index.php?topic=1489.msg11230#msg11230

Can we see some more JEOL PHA data at low and high count rates for both P-10 and Xenon detectors?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 26, 2022, 09:41:51 AM
Here are the constant k-ratios from Anette's most recent run, first for the TAP spectrometer:

(https://probesoftware.com/smf/gallery/395_26_09_22_9_36_24.png)

When going from the logarithmic to exponential expression we clearly need to reduce the dead time constant from 1.26 usec to 1.18 usec.  Interesting that the predicted count rates are slightly different for these two models at these two slightly different parametric constants in the middle of the count rate range.

Now for the TAPL crystal (beware it ain't pretty):

(https://probesoftware.com/smf/gallery/395_26_09_22_9_36_39.png)

The logarithmic expression does a pretty good job (at least up until around 450 kcps), but the exponential expression loses it completely once the product of the observed count rate and the dead time exceeds 1/e (so no dead time correction can be applied at higher count rates).
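For what it's worth, the reason the exponential (paralyzable/extending) form hits a wall is easy to see from the standard textbook model N_obs = N_true * exp(-N_true * tau): the observed rate can never exceed 1/(e * tau), so once the measured rate times the dead time approaches 1/e the expression can no longer be inverted. A quick sanity check in Python (assuming the 1.18 usec value quoted above):

import math

tau = 1.18e-6                              # assumed dead time in seconds
max_observed = 1.0 / (math.e * tau)        # ceiling of N_obs = N_true * exp(-N_true * tau)
print("Maximum observable rate for tau = %.2f usec: %.0f kcps" % (tau * 1e6, max_observed / 1e3))
# roughly 312 kcps, which is why the exponential form cannot follow the TAPL data at higher rates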
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 03, 2022, 10:05:12 AM
This was a test of our (new) WDS board on our SX100 that appears to have been installed, along with PeakSight 6.1, in 2015.

Please note that this is a very unusual dead time test because instead of using the default "integer" (enforced) dead time values of 3 usec, we instead wanted to see if the NEW WDS board improved the "intrinsic" dead times of the electronics compared to the OLD PHA board from the test performed in 2010. This "intrinsic" dead time test is performed by setting the "integer" (enforced) dead time values of the Cameca spectrometers to the lowest value possible which is 1 usec on the SX100 and SXFive instruments. And then seeing what the resulting dead times actually are.

Here are the "intrinsic" dead time values we obtained in 2010 (using the traditional linear dead time calibration method - Carpenter's Excel spreadsheet) using Ti Ka (running PET or LPET on all spectrometers):

Spc 1: 1.71 usec
Spc 2: 2.00 usec
Spc 3: 2.19 usec
Spc 4: 1.61 usec
Spc 5: 1.85 usec

The above test from 2010 was run up to 180 nA and even the LPET crystals produced no more than 230 kcps running at 15 keV.  Unfortunately in the new test I did yesterday, I ran at 20 keV (which increased the count rate significantly) and also I did not utilize all PET/LPET crystals (I should have checked the setup from 2010 first!). Still worth showing the data I think...

For this test I edited the Cameca "integer" (enforced) dead time values in the SCALERS.DAT file in the Probe for EPMA ProgramData folder. These values are found on line 35 of the SCALERS.DAT file. They can be edited using any text editor such as NotePad.

Please also note that if either of the following keywords in the Probewin.ini file are set to non-zero values, Probe for EPMA will *not* set the spectrometers to the PHA values in the SCALERS.DAT file on startup:

UseCurrentConditionsOnStartUp=0   ; non-zero = read current instrument condition on software start
UseCurrentConditionsAlways=0   ; non-zero = read current instrument conditions on each acquisition

Here is the full setup that I ran yesterday:

On and Off Peak Positions:
ELEM:    ti ka   ti ka   ti ka   ti ka   ti ka
ONPEAK 31402.0 31504.0 68259.0 31456.0 68269.0
OFFSET 28.0293 -73.971 32.4297 -25.971 22.4297
HIPEAK 32898.5 32925.9 69214.5 33026.1 69097.5
LOPEAK 30052.3 29843.1 67323.8 30103.9 67509.0
HI-OFF 1496.50 1421.90 955.500 1570.10 828.445
LO-OFF -1349.7 -1660.9 -935.20 -1352.1 -760.00

PHA Parameters:
ELEM:    ti ka   ti ka   ti ka   ti ka   ti ka
DEAD:     2.85    2.80    2.80    3.00    3.00
BASE:      .29     .29     .29     .29     .29
WINDOW    4.50    4.50    4.50    4.50    4.50
MODE:     INTE    INTE    INTE    INTE    INTE
GAIN:     942.    864.   1369.    818.    864.
BIAS:    1320.   1320.   1850.   1320.   1850.

Last (Current) On and Off Peak Count Times:
ELEM:    ti ka   ti ka   ti ka   ti ka   ti ka
BGD:       OFF     OFF     OFF     OFF     OFF
BGDS:      EXP     EXP     LIN     EXP     LIN
SPEC:        1       2       3       4       5
CRYST:     PET    LPET    LLIF     PET     LIF
ORDER:       1       1       1       1       1
ONTIM:   60.00   60.00   60.00   60.00   60.00
HITIM:   10.00   10.00   10.00   10.00   10.00
LOTIM:   10.00   10.00   10.00   10.00   10.00

I then automated a constant k-ratio acquisition for Ti metal and TiO2 at 10, 20, 30, 60, 80, 120, 160 and 200 nA.  Let's look at the PHA scans first.  Here are the 10 nA PHA scans:

(https://probesoftware.com/smf/gallery/395_03_10_22_9_47_01.png)

Notice that the LPET shows the highest count rate as expected. Also note that I've adjusted the PHA gains so that the PHA peaks are all around 3 volts in the 0 to 5 volt range of the Cameca PHAs. This is done because at the higher count rates, the PHA peaks will shift to the left due to pulse height depression.

And here are the PHA scans at 200 nA (at the same bias and gain settings as the 10 nA PHA scans):

(https://probesoftware.com/smf/gallery/395_03_10_22_9_47_21.png)

First of all I note that the high count rate PHA scans for all the crystals (other than the LIF) seem to show about the same intensity.  I think this is an artifact of the PHA scan method, which utilizes 8 bit MCA channels: what happens in PFE is that it just stops once one of the MCA bins gets full, so the high count rate spectrometers all get "normalized" to roughly the same count rate.

But more importantly note that all the peaks have shifted to the left, but not by so much that they are getting cut off by the baseline (hence the reason for setting the PHA peaks to the right of center at 10 nA to begin with).

So this means that our constant k-ratios should all be good to acquire from 10 nA to 200 nA.  I will show those in the next post.

Edit by Probeman: as SEM Geologist correctly points out below the escape peak in the spec 2 LPET PHA scan at 200 nA *is* getting cut off by the baseline level!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on October 04, 2022, 05:24:23 AM
But more importantly note that all the peaks have shifted to the left, but not by so much that they are getting cut off by the baseline (hence the reason for setting the PHA peaks to the right of center at 10 nA to begin with).

I am going to be a bit picky: I see something different - the Ar escape peak has shifted out of the PHA range, which is about ~5% of the counts (particularly for the 2nd spectrometer with the large PET).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 04, 2022, 08:33:00 AM
But more importantly note that all the peaks have shifted to the left, but not by so much that they are getting cut off by the baseline (hence the reason for setting the PHA peaks to the right of center at 10 nA to begin with).

I am going to be a bit picky: I see something different - the Ar escape peak has shifted out of the PHA range, which is about ~5% of the counts (particularly for the 2nd spectrometer with the large PET).

Yeah, OK I see that. That's probably why that spectrometer's k-ratio got really crazy at the highest beam currents.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 04, 2022, 12:39:21 PM
But more importantly note that all the peaks have shifted to the left, but not by so much that they are getting cut off by the baseline (hence the reason for setting the PHA peaks to the right of center at 10 nA to begin with).

I am going to be a bit picky: I see something different - the Ar escape peak has shifted out of the PHA range, which is about ~5% of the counts (particularly for the 2nd spectrometer with the large PET).

Yeah, OK I see that. That's probably why that spectrometer's k-ratio got really crazy at the highest beam currents.

I have to say that I am very pleased that SEM Geologist pointed out the loss (due to pulse height depression) of the Ar escape peak in the PHA scans at 200 nA compared to 10 nA on the LPET spectrometer 2!  It's not as large an effect as it might be (he estimated around 5%), but it certainly exacerbates the loss of intensities as measured on the Ti metal primary standard. See the cyan line in the 10 nA and 200 nA PHA scans in this post and compare them:

https://probesoftware.com/smf/index.php?topic=1466.msg11309#msg11309

I myself should point out that both the 10 nA and the 200 nA plots in the link above were acquired using the normal 3 usec enforced dead time on the SX100, so once again here are the PHA scans at 200 nA, where one can see the Ar escape peak has shifted below the baseline (see the cyan line for the LPET spectrometer):

(https://probesoftware.com/smf/gallery/395_04_10_22_12_16_31.png)

This really makes a great point about how important it is to check our PHA peaks when operating at these very high count rates!

But before I started the constant k-ratio acquisitions, I thought to myself: maybe I should re-acquire the PHA scans using the 1 usec (integer) enforced dead times... and so I did:

(https://probesoftware.com/smf/gallery/395_04_10_22_12_16_45.png)

When I first glanced at these acquisitions as they were acquired one by one, I merely thought to myself, well they look pretty much the same as the 3 usec enforced dead times.  And in terms of their shapes they are almost exactly the same, but it wasn't until I plotted them all together that I noticed the Y-axis had changed significantly!

Note that the PET crystal spectrometers show almost 3 times the count rate compared to when I used the 3 usec enforced dead times.  That makes sense of course, because we expect the intrinsic dead times to be lower when using a lower enforced dead time!  The question is, will these intrinsic dead times be significantly lower using the new SX100 WDS electronics board, compared to the values we obtained in 2010 when we used the old WDS electronics board?

I'll discuss that in the next post, but in the meantime here is the k-ratio plot from the Spc 2 LPET spectrometer where we see some interesting effects given its "terrifying" count rate of 632 kcps on Ti metal at 200 nA (extrapolating from 10 nA):

(https://probesoftware.com/smf/gallery/395_05_10_22_11_46_53.png)

The decrease in the k-ratio for the 80 and 120 nA acquisitions is due to the fact that these k-ratio intensities were corrected using the original 2.80 usec software dead time correction calibrated using a 3 usec enforced dead time. Therefore we will need to decrease these software dead time constants as described in the next few posts since we utilized an enforced dead time of 1 usec, but how much will we need to decrease them to obtain a *constant* k-ratio!   ;D
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on October 05, 2022, 03:26:55 AM
Very interesting,
I would rather argue that the steep upward shot of the k-ratios is due to the Ar escape peak AND part of the normal counts being cut out, and that the shallow decrease is from a too-large dead time constant used for the count recalculation.
Your PHA graphs and k-ratio observations make me wonder: are the lower voltage pulses rejected by the PHA, or are they rejected earlier in the pipeline by the pulse-hold chip's slow slew rate (the rate at which the voltage at the output of the pulse-hold chip can drop back to 0 V (or below) after holding the voltage for A/D conversion)? The lag of that chip will make the comparator/pulse-hold chip tandem miss the pulse even in integral mode - the PHA would never get the pulse to accept/reject against its baseline (if that takes place at all on Cameca hardware in integral mode). I think we are being tricked into believing that the whole distribution (except the Ar escape peak) is preserved as the PHA shifts, since the left slope of the PHA distribution is not a sharp vertical line but a smooth inclined curve. If the PHA were cutting off the distribution at its baseline, we should see a sharp, precise cutoff at that value. However, the "cutout" of the "shifted to low V" pulses caused by the lag of the pulse-hold slew rate would make such a cut-off line inclined and curved, since it would depend on the height of the pulse preceding the "cut out/missed" low-V pulse.

Things to check (I would like to check this on my own): the slew rate of that chip is fixed, but by decreasing the gas and analog gain (and thus the average pulse height coming into the comparator/pulse-hold chip tandem) there should be fewer missed pulses - which would look like an absolutely counter-intuitive remedy for PHA shift.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 05, 2022, 08:11:55 AM
I would rather argue that the steep upward shot of the k-ratios is due to the Ar escape peak AND part of the normal counts being cut out, and that the shallow decrease is from a too-large dead time constant used for the count recalculation.

Yes, I agree, though I'm not sure about losing any normal counts, as the spec 2 intensities look to get very close to zero just above the baseline in the 200 nA PHA scan.

You are correct though, when the primary intensity loses counts either from a too small dead time constant or an escape peak shifting out of range, the k-ratio will increase. While a too large dead time constant will decrease the k-ratio assuming the primary standard has the larger concentration...

By the way, the 160 and 200 nA intensities on spec 2 cannot be corrected using the logarithmic dead time correction, but at 120 nA the dead time correction for spec 2 is almost 1800%!   :o

On-Peak (off-peak corrected) or EDS (bgd corrected) or MAN On-Peak X-ray Counts (cps/1nA) (and Faraday/Absorbed Currents):
ELEM:    ti ka   ti ka   ti ka   ti ka   ti ka   BEAM1   BEAM2
BGD:       OFF     OFF     OFF     OFF     OFF
SPEC:        1       2       3       4       5
CRYST:     PET    LPET    LLIF     PET     LIF
ORDER:       1       1       1       1       1
   61G  1671.99 31348.41  772.79 1373.29  169.11 124.210 124.180
   62G  1674.06 32404.42  774.92 1375.77  169.16 124.195 124.195
   63G  1675.76 32614.40  774.65 1373.35  169.33 124.210 124.210
   64G  1673.50 32133.87  775.21 1373.92  169.06 124.210 124.210
   65G  1674.04 32615.82  776.04 1373.10  169.09 124.226 124.180
   66G  1679.08 33348.09  778.36 1374.96  169.22 124.210 124.195

AVER:  1674.74 32410.83  775.33 1374.07  169.16 124.210 124.195
SDEV:     2.45  658.39    1.83    1.08     .10    .010    .014
1SIG:      .36     .49     .28     .34     .15
SIGR:     6.78 1355.15    6.47    3.18     .68
SERR:     1.00  268.79     .75     .44     .04
%RSD:      .15    2.03     .24     .08     .06
DEAD:     2.85    2.80    2.80    3.00    3.00
DTC%:     72.8  1742.7    30.3    61.7     6.5

Dang, now that's a big dead time correction!
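For anyone who wants a feel for how quickly these corrections grow, here is a minimal sketch (Python, not PFE source code) of the two correction expressions being discussed. I am assuming the logarithmic expression has the form N = n/(1 + ln(1 - n*tau)), which back-calculates the DTC% values in the printout above; the traditional expression is n/(1 - n*tau). The function names and example count rates are mine:

[code]
# Minimal sketch: compare the traditional and (assumed) logarithmic dead time
# corrections.  tau is the dead time in seconds, n_obs the observed (raw) cps.
import math

def corrected_traditional(n_obs, tau):
    # traditional (linear) expression: N = n / (1 - n*tau)
    return n_obs / (1.0 - n_obs * tau)

def corrected_logarithmic(n_obs, tau):
    # assumed logarithmic expression: N = n / (1 + ln(1 - n*tau))
    return n_obs / (1.0 + math.log(1.0 - n_obs * tau))

def dtc_percent(n_obs, tau, correct):
    # relative dead time correction in percent, as in the DTC% row above
    return 100.0 * (correct(n_obs, tau) / n_obs - 1.0)

tau = 2.80e-6  # spec 2 LPET dead time constant in seconds
for n_obs in (1e3, 10e3, 20e3, 100e3, 200e3):  # observed count rates (cps)
    trad = dtc_percent(n_obs, tau, corrected_traditional)
    logd = dtc_percent(n_obs, tau, corrected_logarithmic)
    print(f"{n_obs/1e3:6.0f} kcps   traditional {trad:7.2f}%   logarithmic {logd:8.2f}%")

# the logarithmic expression blows up when 1 + ln(1 - n*tau) reaches zero,
# i.e. at an observed rate of (1 - 1/e)/tau
print(f"maximum correctable observed rate ~ {(1.0 - 1.0/math.e)/tau/1e3:.0f} kcps")
[/code]

The two expressions agree closely below ~20 kcps and then diverge rapidly, and (under the assumed form) the ceiling at roughly 226 kcps observed for a 2.8 usec dead time is why the 160 and 200 nA spec 2 intensities above simply cannot be corrected at all.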
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 05, 2022, 10:46:07 AM
So ignoring spec 2 LPET for the time being let's plot up the constant k-ratios with the enforced dead time set to 1 usec and still using the optimized dead time constants from when the enforced dead times were all 3 usec:

(https://probesoftware.com/smf/gallery/395_05_10_22_10_35_28.png)

After adjusting the dead time constants for obtaining a zero slope k-ratio fit we obtain the following plot:

(https://probesoftware.com/smf/gallery/395_05_10_22_10_35_45.png)

OK, so let's now compare the dead times for the 1 usec enforced dead time calibration from 2010 (using the old WDS board electronics) with this recent 1 usec enforced dead time calibration (using the new WDS board electronics):
                   
                             1         2          3          4          5
1 usec (2010)    1.71    2.00     2.19     1.61     1.85      <--- OLD WDS board
1 usec (2022)    1.58    1.60     1.80     1.65     2.00      <--- NEW WDS board

OK, well that's a pretty mixed bag!  Specs 1, 2 and 3 went down in dead time, while 4 and 5 went up in dead time with both measurements at 1 usec enforced dead time.

But since all the values are 2 usec or less, I think my next task is to re-run the constant k-ratio calibration, this time using a 2 usec enforced dead time, and see what values we obtain...
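For readers who have not seen how these "optimized" dead times are obtained, here is a minimal sketch of the zero-slope idea using synthetic data (my numbers, not this dataset): build raw count rates from a known dead time and a constant true k-ratio, then scan candidate dead times and keep the one that makes the corrected k-ratio flattest versus count rate. The traditional expression is used here only so the synthetic data can be inverted exactly; the same logic applies with the logarithmic expression.

[code]
# Minimal sketch of the constant k-ratio (zero slope) dead time optimization.
# Synthetic data: assume a constant true k-ratio and a known dead time, then
# recover that dead time by flattening the corrected k-ratio trend.
import numpy as np

def correct_traditional(n_obs, tau):
    return n_obs / (1.0 - n_obs * tau)

true_tau = 2.6e-6                                        # "true" dead time (s)
true_prim = np.array([30e3, 80e3, 150e3, 250e3, 350e3])  # true cps, primary std
true_sec = 0.57 * true_prim                              # constant true k-ratio
obs_prim = true_prim / (1.0 + true_prim * true_tau)      # observed (raw) cps
obs_sec = true_sec / (1.0 + true_sec * true_tau)

def kratio_slope(tau):
    # slope of the corrected k-ratio versus corrected primary count rate
    prim = correct_traditional(obs_prim, tau)
    sec = correct_traditional(obs_sec, tau)
    return np.polyfit(prim, sec / prim, 1)[0]

taus = np.linspace(1.0e-6, 4.0e-6, 301)
best = min(taus, key=lambda t: abs(kratio_slope(t)))
print(f"recovered dead time ~ {best*1e6:.2f} usec (true value {true_tau*1e6:.2f})")
[/code]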
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on October 06, 2022, 05:29:09 AM
OK, so let's now compare the dead times for the 1 usec enforced dead time calibration from 2010 (using the old WDS board electronics) with this recent 1 usec enforced dead time calibration (using the new WDS board electronics):
                 
                             1         2          3          4          5
1 usec (2010)    1.71    2.00     2.19     1.61     1.85      <--- OLD WDS board
1 usec (2022)    1.58    1.60     1.80     1.65     2.00      <--- NEW WDS board

OK, well that's a pretty mixed bag!  Specs 1, 2 and 3 went down in dead time, while 4 and 5 went up in dead time with both measurements at 1 usec enforced dead time.

But since all the values are 2 usec or less, I think my next task is to re-run the constant k-ratio calibration, this time using a 2 usec enforced dead time, and see what values we obtain...

This mixed bag has a very clear explanation.

The old electronics has two analog signal multiplexers and two ADCs: spectrometers 1, 2 and 3 shared one multiplexer and ADC, and spectrometers 4 and 5 shared the second. Both ADCs sent their results in parallel, so your old dead times are very logical: they rise from spect 1 to 3, and again from spect 4 to 5. Spect 4 had the lower dead time because it only had to share its ADC with one other detector, while spect 1 competed with two other spectrometers. Probably there is some prioritization when pulses arrive at a multiplexer from several spectrometers at the same time, with the lower-numbered spectrometer prioritized (thus the dead time of the 1st spect < 2nd spect < 3rd spect, and 4th < 5th spect).

With the new board every spectrometer has its own ADC. However, the ADC results are sent over a shared 8-bit bus, and are thus multiplexed on that digital bus. Because of that multiplexing there should be prioritization when several ADCs want to send a result to the FPGA at the same time, and again the lowest spectrometer number would get the highest priority and the largest number the lowest. But wait, someone will say that the 3rd and 4th spectrometers do not fit this scheme! There is something strange with the 3rd and 4th: on our SXFive (with 5 spectrometers) they are swapped in a few places along the pipeline, which is confusing. It is quite possible that the FPGA sees the 3rd and 4th spectrometers in a different order and thus prioritizes the 4th before the 3rd. So in this mixed bag the 5th spectrometer looks to be at a disadvantage on the new board. I am a bit surprised by these results; I was expecting all the dead times to be better than before. Maybe the multiplexing of that bus is not as fast as I had anticipated...

Instead of going to the 2 us integer dead time test, I would propose you redo the same test but with only a single spectrometer (setting the other spectrometers' gain to 0 so that they produce no counts) and see if the dead times do not improve significantly, especially for the 5th spectrometer.

P.S. A quick test on our SXFive at 975 nA, with Ti Ka on TiO2 and LPET on the 2nd and 5th spectrometers, shows no influence on the count rate when switching one or the other on or off - so this prioritizing effect is probably not present here. BTW, the maximum raw count rate seen on both spectrometers at this insane current is 322.9 kcps! Maybe it needs more spectrometers to saturate the bus...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 06, 2022, 10:17:44 AM
Instead of going to the 2 us integer dead time test, I would propose you redo the same test but with only a single spectrometer (setting the other spectrometers' gain to 0 so that they produce no counts) and see if the dead times do not improve significantly, especially for the 5th spectrometer.

Sounds like an interesting test.

I suggest that you run some multiple (all 5) and single spectrometer constant k-ratio tests at 1 usec, and I'll run multiple and single spectrometer k-ratio tests at 2 usec and we can compare.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 12, 2022, 09:08:57 PM
I was not able to run the constant k-ratio test at 2 usec enforced dead time last weekend because all the chillers were shut down on Sunday ahead of the process water being turned off Monday morning, but I will hopefully get to that this weekend.

In the mean time I took another look at the 1 usec enforced dead time measurements I did the weekend before (see above posts), and because I finally also remembered to run the primary standard before each secondary standard, the standard intensity drift correction in PFE did not confuse things (once it was turned off in the Analysis Options dialog). 
 
So here is the picoammeter linearity test produced from the same constant k-ratio data set shown above, but using only a single primary standard, which produces a nice check for the picoammeter calibration (since we are extrapolating to each different beam current). As I suspected from previous datasets we definitely have a small picoammeter calibration issue on our SX100:

(https://probesoftware.com/smf/gallery/395_12_10_22_8_53_57.png)

This means that when running say major elements at 30 nA, and then minor/trace elements at 60 nA, there will be a ~3% accuracy error between the two sets of element intensities. A bit more if we consider the matrix corrections.  Hopefully this will get fixed once our instrument engineer gets our high accuracy current source built (he's been swamped with instrument problems throughout the facility because of power outages from all the construction next door!).
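For anyone wanting to reproduce this kind of plot by hand, the arithmetic is just this (a sketch with made-up numbers, not the data above): after dead time correction, the beam-current-normalized intensity of a single standard should be the same at every current, so any remaining trend is attributable to the picoammeter (or beam regulation).

[code]
# Minimal sketch of the picoammeter linearity check using a single standard.
# Numbers below are hypothetical; use your own dead-time-corrected intensities.
import numpy as np

beam_na = np.array([10.0, 20.0, 30.0, 60.0, 100.0, 140.0])                 # nominal currents (nA)
cps_corr = np.array([28.1e3, 56.0e3, 84.3e3, 167.0e3, 281.5e3, 395.0e3])   # corrected cps

norm = cps_corr / beam_na               # cps per nA: should be constant
ref = norm[0]                           # reference current (here the lowest)
deviation_pct = 100.0 * (norm / ref - 1.0)

for i, d in zip(beam_na, deviation_pct):
    print(f"{i:6.1f} nA   apparent picoammeter deviation {d:+6.2f} %")
[/code]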

Have you run a constant k-ratio check on your instrument?  What are you waiting for?   :D   If you have Probe for EPMA it's super easy with the latest version. See the PDF for a nice description of the acquisition process available from the Help menu (it only takes a couple hours of automation!). For those without PFE, see this post and the next few:

https://probesoftware.com/smf/index.php?topic=1466.msg11102#msg11102
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: entoptics on October 17, 2022, 02:50:58 PM
I played around with the constant k-ratio method over the weekend, and got some very good results.

One thing that may not have been stressed enough (at least for me :P ) was ensuring your PHA settings are suitable for the count rates you'll see. I assumed a "middle of the road" gain setting would be sufficient, but over the nA range I measured, it wasn't. On our JEOL 8500F, I had to up the gain for the higher count rates. Be sure to run a few test PHA scans at low-mid-high currents for all the elements you plan to use.

I've attached the results from Sc/GdScO3 for all five of my spectrometers (PET). Spec 3 (H-type) only goes to 40 nA due to the aforementioned PHA blunder.
(https://probesoftware.com/smf/gallery/1851_17_10_22_3_41_34.png)
I'm quite pleased with the linear response from ~10 kcps to 140 kcps. <0.5% variation.

I'd also note the variation in k-ratio across my spectrometers. I'm assuming this is a takeoff angle discrepancy. My setup has them arranged clockwise from 1 (7 o'clock) to 5 (5 o'clock), and you can see the k-ratios drop as you go around. Presumably there's a bit of stage/specimen tilt, altering the real takeoff value depending on spectrometer location?

I dug around PFE, and couldn't find a place to alter the takeoff angle for individual spectrometers. Is this possible? Would be interesting to change the values a smidge to see if the k-ratios would converge.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on October 20, 2022, 09:22:40 AM
I played around with the constant k-ratio method over the weekend, and got some very good results.

One thing that may not have been stressed enough (at least for me :P ) was ensuring your PHA settings are suitable for the count rates you'll see. I assumed a "middle of the road" gain setting would be sufficient, but over the nA range I measured, it wasn't. On our JEOL 8500F, I had to up the gain for the higher count rates. Be sure to run a few test PHA scans at low-mid-high currents for all the elements you plan to use.

Yeah. Getting a single set of PHA settings appropriate over a wide range of count rates takes some care. As you say, one should acquire PHA scans at both extremes of beam current before attempting to acquire constant k-ratios. I've posted about these PHA peak shifts myself recently:

https://probesoftware.com/smf/index.php?topic=1475.msg11330#msg11330

I've attached the results from Sc/GdScO3 for all five of my spectrometers (PET). Spec 3 (H-type) only goes to 40 nA due to the aforementioned PHA blunder.
(https://probesoftware.com/smf/gallery/1851_17_10_22_3_41_34.png)
I'm quite pleased with the linear response from ~10 kcps to 140 kcps. <0.5% variation.

Very nice dataset!  It's interesting that you used Sc La, was there a particular reason for that?

Were these k-ratios calculated using the logarithmic dead time expression in PFE?  How much did your dead time constants change from your previous values?

I'd also note the variation in k-ratio across my spectrometers. I'm assuming this is a takeoff angle discrepancy. My setup has them arranged clockwise from 1 (7 o'clock) to 5 (5 o'clock), and you can see the k-ratios drop as you go around. Presumably there's a bit of stage/specimen tilt, altering the real takeoff value depending on spectrometer location?

I dug around PFE, and couldn't find a place to alter the takeoff angle for individual spectrometers. Is this possible? Would be interesting to change the values a smidge to see if the k-ratios would converge.

I think this is an absolutely amazingly good idea!   8)

Perhaps rather than depending on the engineer to align our spectrometers and/or replacing crystals with asymmetrical diffraction, we should instead attempt to determine the effective takeoff angle of each spectrometer by comparing these simultaneous k-ratios from constant k-ratio measurements (and of course we really only need k-ratios from a single beam current for this purpose).

And yes, it would not help in the case of samples with variable sample tilts (different each time they are inserted in the sample holder). However, if it was the entire sample holder that was tilted (reproducibly) in a particular direction, then yes, it would help very much!

The only "fly in the ointment" I can think of is how do we know what the correct or ideal k-ratio is for a given primary and secondary standard?  We can average a bunch of models of course, but then it comes down to all those particular details such as oxide layers, and operating voltage accuracy, carbon coating thickness, etc. in order to get an absolute k-ratio value to "shoot for" in order to adjust our effective takeoff angles to obtain this ideal k-ratio.

In any case I think it's worth working on this idea. So we modified the underlying physics code in CalcZAF/Probe for EPMA to support manually input effective takeoff angles for each element.  The takeoff angle in PFE is now defined internally (using combined conditions) as specific to each element.  So for a first effort we enabled the take off angle text control in the Combined Conditions dialog in CalcZAF:

(https://probesoftware.com/smf/gallery/1_20_10_22_8_38_48.png)

Go ahead and update to the latest PFE, then export a constant k-ratio sample from PFE using the Output | Save CalcZAF Format menu. Try reprocessing the data in CalcZAF based on the spectrometer orientation and let us know what you find. 

There could indeed be a different take off angle for each spectrometer. It's one reason why Aurelien specified defining the spectrometer orientation (and x/y/z coordinates) in the consensus k-ratio measurement method, to see if they could check the specimen tilt.  It would be very interesting to see if you can get consistent k-ratios by adjusting the "effective" takeoff angle of each spectrometer!

But it could be even worse than that, as I can imagine a spectrometer mechanism that is out of alignment variously over its sin theta range!  That means that there could be different effective take off angles as a function of the spectrometer position!  That would require a visit from the engineer, I expect.

Just to see what effect changing the takeoff angle for a single spectrometer has, I did a quick model test in CalcZAF. Here is Si Ka at 40 degrees take off (20 keV):

ELEMENT  ABSCOR  FLUCOR  ZEDCOR  ZAFCOR STP-POW BKS-COR   F(x)u      Ec   Eo/Ec    MACs
   Si ka  1.6169  1.0000  1.0254  1.6579  1.0522   .9745   .5259  1.8390 10.8755 1542.63
   Mg ka  1.5028   .9946  1.0220  1.5275  1.0329   .9895   .5279  1.3050 15.3257 1491.24
   O  ka  2.3761   .9985   .9698  2.3009   .9549  1.0156   .2424   .5317 37.6152 3965.38

 ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA TAKEOFF KILOVOL                                       
   Si ka  .00000  .12041  19.962   -----  14.286    .333   40.00   20.00                                       
   Mg ka  .00000  .22619  34.550   -----  28.571    .667   40.00   20.00                                       
   O  ka  .00000  .19770  45.488   -----  57.143   1.333   40.00   20.00                                       
   TOTAL:                100.000   ----- 100.000   2.333


Note the new column for take off angle for each element! And here Si Ka at 39 degrees:

ELEMENT  ABSCOR  FLUCOR  ZEDCOR  ZAFCOR STP-POW BKS-COR   F(x)u      Ec   Eo/Ec    MACs
   Si ka  1.6304  1.0000  1.0254  1.6718  1.0522   .9745   .5198  1.8390 10.8755 1542.63
   Mg ka  1.5028   .9946  1.0220  1.5275  1.0329   .9895   .5279  1.3050 15.3257 1491.24
   O  ka  2.3761   .9985   .9698  2.3009   .9549  1.0156   .2424   .5317 37.6152 3965.38

 ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA TAKEOFF KILOVOL                                       
   Si ka  .00000  .11941  19.962   -----  14.286    .333   39.00   20.00                                       
   Mg ka  .00000  .22619  34.550   -----  28.571    .667   40.00   20.00                                       
   O  ka  .00000  .19770  45.488   -----  57.143   1.333   40.00   20.00                                       
   TOTAL:                100.000   ----- 100.000   2.333


A difference of 1 degree in the take off angle results in a difference in the Si Ka absorption correction in this system of around 0.8%, so not that large, but certainly worth trying to correct for in our software I think...

It would certainly be interesting to know how much the effective take off angle would need to change to correct for these observed differences in these simultaneous k-ratios we are seeing in our measurements.
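To put a rough number on that last question, the two CalcZAF runs above already give the sensitivity for this Si Ka example: the k-value changes from 0.12041 at 40 degrees takeoff to 0.11941 at 39 degrees. A quick linear estimate (my sketch; a proper treatment would rerun the matrix correction at each trial angle, and the offset value below is hypothetical) of the effective takeoff change needed to explain a given k-ratio difference between spectrometers:

[code]
# Minimal sketch: estimate the effective takeoff angle change needed to
# explain a k-ratio offset, using the two CalcZAF model runs above (Si Ka in
# the example composition at 20 keV).  Linear interpolation only.
k_at_40deg = 0.12041
k_at_39deg = 0.11941
dk_per_degree = k_at_40deg - k_at_39deg          # ~0.0010 per degree

sensitivity_pct = 100.0 * dk_per_degree / k_at_40deg
observed_offset = 0.0006                         # hypothetical spread between two spectrometers

print(f"k-ratio sensitivity: {sensitivity_pct:.2f} % per degree of takeoff")
print(f"offset of {observed_offset:.4f} ~ {observed_offset / dk_per_degree:.2f} degrees of takeoff")
[/code]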
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 21, 2022, 09:32:46 AM
It's one reason why Aurelien specified defining the spectrometer orientation (and x/y/z coordinates) in the consensus k-ratio measurement method, to see if they could check the specimen tilt.

If anyone wants to try checking the "effective" take off angles of their spectrometers (using the constant k-ratio method on multiple spectrometers), be sure to first test your specimen tilt by focusing your light optics on a flat specimen at the three corners of a triangle with the vertices a few or more millimeters apart.  Then just calculate your tilt in degrees.

In PFE one can use the fiducial confirmation feature and it will calculate the tilt automatically for you. I'd say make sure that your specimen tilt is less than 0.5 degrees.  In my experience we see sample tilts on our one piece acrylic standard mounts of around 0.2 to 0.3 degrees on our Cameca instrument, when confirming the standard mount fiducials.
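For those doing this by hand (without the PFE fiducial confirmation feature), the tilt calculation is just a plane through the three focus points; here is a minimal sketch with hypothetical stage readings (x, y in mm, focus z converted to mm):

[code]
# Minimal sketch: estimate specimen tilt from three optical focus points.
# The tilt is the angle between the fitted plane normal and the vertical.
import numpy as np

# hypothetical focus readings at three corners of a triangle a few mm apart
p1 = np.array([0.0, 0.0, 0.0])
p2 = np.array([5.0, 0.0, 18.0e-3])   # 18 um higher, 5 mm away in x
p3 = np.array([0.0, 5.0, -9.0e-3])   # 9 um lower, 5 mm away in y

normal = np.cross(p2 - p1, p3 - p1)
tilt_rad = np.arccos(abs(normal[2]) / np.linalg.norm(normal))
print(f"specimen tilt ~ {np.degrees(tilt_rad):.2f} degrees")
[/code]

With these made-up focus readings the tilt comes out near 0.2 degrees, i.e. the sort of value mentioned above for a one piece acrylic mount.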

Once you know your specimen is mounted flat, then go ahead and test using simultaneous k-ratios for checking your spectrometer effective take off angles...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on October 26, 2022, 02:34:14 PM
Here is a recent constant k-ratio data set I acquired over the weekend on TiO2 and Ti at 20 keV. After optimizing the dead time constant (excluding the "terrifying" count rates), we obtain this plot:

(https://probesoftware.com/smf/gallery/395_26_10_22_2_01_39.png)

Zooming in we obtain this:

(https://probesoftware.com/smf/gallery/395_26_10_22_2_01_24.png)

Pretty constant k-ratios up to around ~300 kcps and higher. 

Surprisingly the various spectrometers all yield pretty consistent k-ratios (0.57 to 0.58).  Maybe that's partly because I've finally got the PHA settings properly adjusted!    :-[

By the way, here are the optimized dead times (using integer enforced dead times of 3 usec) at 140 nA:

SPEC:        1       2       3       4       5
CRYST:     PET    LPET    LPET     PET     PET
DEAD:     2.71    2.60    2.66    2.70    2.60
DTC%:     55.9   145.3   200.0    44.3    72.5

DTC% is dead time correction (relative) percent!    :o
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on November 03, 2022, 11:19:54 AM
Here is a recent constant k-ratio data set I acquired over the weekend on TiO2 and Ti at 20 keV. After optimizing the dead time constant (excluding the "terrifying" count rates), we obtain this plot:

(https://probesoftware.com/smf/gallery/395_26_10_22_2_01_39.png)

Zooming in we obtain this:

(https://probesoftware.com/smf/gallery/395_26_10_22_2_01_24.png)

Pretty constant k-ratios up to around ~300 kcps and higher. 

Surprisingly the various spectrometers all yield pretty consistent k-ratios (0.57 to 0.58).  Maybe that's partly because I've finally got the PHA settings properly adjusted!    :-[

By the way, here are the optimized dead times (using integer enforced dead times of 3 usec) at 140 nA:

SPEC:        1       2       3       4       5
CRYST:     PET    LPET    LPET     PET     PET
DEAD:     2.71    2.60    2.66    2.70    2.60
DTC%:     55.9   145.3   200.0    44.3    72.5

DTC% is dead time correction (relative) percent!    :o

And just to put things in perspective for the above "constant" k-ratio plots, here are the same data, but this time plotted using the traditional dead time correction method:

(https://probesoftware.com/smf/gallery/395_03_11_22_11_19_33.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on November 11, 2022, 02:42:27 PM
Things to check (I would like to check this myself): the slew rate of that chip is fixed, but by decreasing the gas and analog gain (and thus the average pulse height arriving at the comparator/pulse-hold chip tandem) there should be fewer missed pulses - which would look like an absolutely counter-intuitive remedy for a PHA shift.

I am curious if you have any thoughts (or even better, measurements) that you can share with us regarding the relative contribution towards the overall observed dead time interval, from the detector gas ionization response time vs. the pulse processor electronics response time.

Also do you think the non-rectilinear shape of the pulses (i.e., curved tails) can contribute towards the non-linear response of the system at these high count rates?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on November 12, 2022, 11:50:29 AM
Things to check (I would like to check this myself): the slew rate of that chip is fixed, but by decreasing the gas and analog gain (and thus the average pulse height arriving at the comparator/pulse-hold chip tandem) there should be fewer missed pulses - which would look like an absolutely counter-intuitive remedy for a PHA shift.

I am curious if you have any thoughts (or even better, measurements) that you can share with us regarding the relative contribution towards the overall observed dead time interval, from the detector gas ionization response time vs. the pulse processor electronics response time.

Also do you think the non-rectilinear shape of the pulses (i.e., curved tails) can contribute towards the non-linear response of the system at these high count rates?

Still constructing the generator. Duties first :P, other stuff in spare time. My plan is to measure this with the following test: generate and send two pulses and change the time interval between them until the counting electronics sees a single pulse instead of two - that is basically measuring the dead time physically, in a controlled way. For the influence of the mentioned slew rate I plan to send a double or triple amplitude first pulse followed by a normal amplitude second pulse (again varying the time interval); if the interval at which the second pulse gets ignored is the same as with normal amplitude pulses, this slew rate hypothesis can be discarded.
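Just to make the logic of that test concrete, here is a trivial sketch of the two-pulse sweep against an idealized counter with a fixed (non-paralyzable) dead time; the real test would of course drive the actual counting electronics with the generator, and the 3 us value here is only an example:

[code]
# Minimal sketch of the planned two-pulse dead time measurement: sweep the
# separation between two generated pulses and find where the second pulse
# stops being counted.  The "counter" here is an idealized fixed dead time.
def pulses_counted(separation_us, dead_time_us=3.0):
    return 2 if separation_us > dead_time_us else 1

for sep in (5.0, 4.0, 3.5, 3.1, 3.0, 2.9, 2.5, 2.0):
    print(f"separation {sep:4.1f} us -> {pulses_counted(sep)} pulse(s) counted")
# the separation at which the count drops from 2 to 1 is the measured dead time
[/code]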

I am trying to understand what you mean by "pulse response time". Is it the time taken between the X-ray ionizing the gas, the pulse forming in the counter, pulse shaping, sending to the counting electronics, and counting and sending to the acquisition board as an (LV)TTL pulse? In between and at every one of these steps there will naturally be a delay - electronic signals travel at nearly the speed of light, and indeed at high frequency that matters somewhat. But those delays (in ns) apply equally to all counts: if two X-rays arrive at the counter chamber with a time difference of, say, 5.5 µs, the time difference between the shaped and gain-amplified pulses will be exactly the same. Due to the way the signal is sensed for the digital domain (integral counting) that time can differ a bit when crossing into the digital domain - and additionally the digital domain is run by a clock, so the time resolution is granular. I should remind everyone that, in my opinion, there is a digital signal ceiling of 500 kcps (raw counts), as the digital (LV)TTL pulses are 1 µs long and are aligned to a 1 MHz clock.

"non-rectilinear shape of the pulses" - rectilinear pulses are unnatural - they easy produce some artifacts (under-over shots), they are good for digital domain as in digital domain the rising and falling edges are detected for getting if it is 1 or 0. rectilinear pulse would be poor choise for amplitude carriage as it would behave very differently at low amplitudes and high amplitudes. The negative tail I believe however does influence missing of some counts, but lack of such tail would bring other problems (pulse pileups would "climb" to positive rail saturating OPAMPS).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on November 12, 2022, 12:54:02 PM
Things to check (I would like to check on my own): skew rate of that chip is fixed, but by decreasing the gas and analog gain (thus average pulse height coming into comparator---pulse-hold chip tandem) there should be less missed pulses - that would look absolutely counter intuitive measure for PHA shift.

I am curious if you have any thoughts (or even better, measurements) that you can share with us regarding the relative contribution towards the overall observed dead time interval, from the detector gas ionization response time vs. the pulse processor electronics response time.

Also do you think the non-rectilinear shape of the pulses (i.e., curved tails) can contribute towards the non-linear response of the system at these high count rates?

I am trying to understand what you mean by "pulse response time". Is it the time taken between the X-ray ionizing the gas, the pulse forming in the counter, pulse shaping, sending to the counting electronics, and counting and sending to the acquisition board as an (LV)TTL pulse?

I am asking if you can compare for us the *intrinsic* dead time of the gas detector/pre-amplifier versus the *intrinsic* dead time of the pulse processing electronics versus the *intrinsic* dead time of the pulse counting electronics. That is assuming a single photon input, what is the natural width of this pulse for each segment of the WDS photon counting system? I'm attempting to understand the relative importance of each piece of the WDS system in contributing towards the total dead time interval that we actually observe.

"non-rectilinear shape of the pulses" - rectilinear pulses are unnatural - they easy produce some artifacts (under-over shots), they are good for digital domain as in digital domain the rising and falling edges are detected for getting if it is 1 or 0. rectilinear pulse would be poor choise for amplitude carriage as it would behave very differently at low amplitudes and high amplitudes. The negative tail I believe however does influence missing of some counts, but lack of such tail would bring other problems (pulse pileups would "climb" to positive rail saturating OPAMPS).

Yes, I know that rectilinear pulses are "unnatural". I am using the term as a mathematical ideal and asking if you can compare the behavior of ideal rectilinear pulses, with the behavior of the natural non-rectilinear pulses (that we actually have in our electronics).

That is, at low count rates when the pulses are far apart compared to their natural widths, the pulses can be modeled as perfect rectilinear pulses because they rarely overlap.

But, as the interval between the pulses decreases (the pulses begin to overlap), does it make sense that the pulse counting system (which I assume is triggered at some specific voltage level), will begin to behave in a non-linear fashion (compared to ideal pulse shapes) as the curved edges of these pulses increasingly overlap?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on November 14, 2022, 10:17:52 AM
There is no straight answer; the devil is in the details. Can I compare?... Gas and preamplifier - first, I can't see any missing pulses (at least on the oscilloscope) at that level, and precisely because of that it produces the enormously complicated overlap patterns which the later stages of the signal pipeline struggle to untangle. If there were any dead time at the gas counter and preamplifier, the counting electronics would have a much easier life. So I am 99.9% sure that the intrinsic dead time of the preamplifier and gas counter, at our most extreme achievable beam currents, and even with large XTALs and the most intense line positions, is equal to 0, at least on Cameca hardware. For JEOL I am not so sure, as IMHO their preamplifier is missing an important and crucial higher-capacity HV backup capacitor, and because of that the bias voltage of the cathode could be significantly drained during a burst of high-rate X-rays (similarly to a G-M counter). Most of the dead time happens at the analog pulse sensing and probably at the digital pulse counting - probably, because that will only be possible to measure after I finish constructing that generator.

Pulse sensing on the Cameca is not triggered at a specific absolute voltage level, but at a threshold on the difference between the incoming real-time pulse and a delayed copy. That is why there is a comparator, which compares these two signals and detects the rising edge of the pulse. The delay is provided by the sample-and-hold chip, which not only holds the cached voltage level of the signal when triggered, but also passes through a significantly delayed signal when it is not triggered to hold.  This tandem looks OK theoretically, but due to noise the pulse sensing can be triggered a bit too early or too late, so the sample-and-hold chip does not catch the voltage at the very top of the pulse but with deviations to either side - that is why we get so much PHA distribution broadening. As the time between pulses decreases and piled-up pulses become more frequent, it happens more and more often that a pulse arrives during the voltage drop of the previous pulse, and its rising slope is not enough to trigger the comparator, which sees a flat or still diminishing signal. The comparator plus sample-and-hold chip tandem is really a very oversimplified and "dumb" way to sense pulses. More sophisticated signal processing using an FPGA can sense all pulses with ease. A few months ago I had the opportunity to watch a new EDAX EDS detector in action - there were no visible pileups even at 97% dead time. Its signal processing has moved completely to an FPGA, where whole pulses are recognized and deconvoluted in real time, not just sensed by some dumb voltage-level triggering.

We need to get something like that for WDS, and then we will be able to do those few million counts per second!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on November 14, 2022, 11:33:35 AM
There is no straight answer; the devil is in the details. Can I compare?... Gas and preamplifier - first, I can't see any missing pulses (at least on the oscilloscope) at that level, and precisely because of that it produces the enormously complicated overlap patterns which the later stages of the signal pipeline struggle to untangle. If there were any dead time at the gas counter and preamplifier, the counting electronics would have a much easier life. So I am 99.9% sure that the intrinsic dead time of the preamplifier and gas counter, at our most extreme achievable beam currents, and even with large XTALs and the most intense line positions, is equal to 0, at least on Cameca hardware. For JEOL I am not so sure, as IMHO their preamplifier is missing an important and crucial higher-capacity HV backup capacitor, and because of that the bias voltage of the cathode could be significantly drained during a burst of high-rate X-rays (similarly to a G-M counter). Most of the dead time happens at the analog pulse sensing and probably at the digital pulse counting - probably, because that will only be possible to measure after I finish constructing that generator.

OK, that makes perfect sense. I remember now seeing schematics of EDS detector pulse streams and I think it's pretty much the same as you describe for WDS. 

Pulse sensing on the Cameca is not triggered at a specific absolute voltage level, but at a threshold on the difference between the incoming real-time pulse and a delayed copy. That is why there is a comparator, which compares these two signals and detects the rising edge of the pulse. The delay is provided by the sample-and-hold chip, which not only holds the cached voltage level of the signal when triggered, but also passes through a significantly delayed signal when it is not triggered to hold.  This tandem looks OK theoretically, but due to noise the pulse sensing can be triggered a bit too early or too late, so the sample-and-hold chip does not catch the voltage at the very top of the pulse but with deviations to either side - that is why we get so much PHA distribution broadening. As the time between pulses decreases and piled-up pulses become more frequent, it happens more and more often that a pulse arrives during the voltage drop of the previous pulse, and its rising slope is not enough to trigger the comparator, which sees a flat or still diminishing signal. The comparator plus sample-and-hold chip tandem is really a very oversimplified and "dumb" way to sense pulses.

Thanks, I think I am beginning to understand these "dumb" details.   :)

So could these sideways deviations cause additional (non-linear) loss of counts that we observe at high enough count rates?  I'm trying to understand these dead time effects beyond simple photon coincidence.

...more sophisticated signal processing using an FPGA can sense all pulses with ease. A few months ago I had the opportunity to watch a new EDAX EDS detector in action - there were no visible pileups even at 97% dead time. Its signal processing has moved completely to an FPGA, where whole pulses are recognized and deconvoluted in real time, not just sensed by some dumb voltage-level triggering.

We need to get something like that for WDS, and then we will be able to do those few million counts per second!

Absolutely.  Wouldn't it be great to have linear response up to 1 mcps!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on November 15, 2022, 09:09:39 AM
Pulse sensing on the Cameca is not triggered at a specific absolute voltage level, but at a threshold on the difference between the incoming real-time pulse and a delayed copy. That is why there is a comparator, which compares these two signals and detects the rising edge of the pulse. The delay is provided by the sample-and-hold chip, which not only holds the cached voltage level of the signal when triggered, but also passes through a significantly delayed signal when it is not triggered to hold.  This tandem looks OK theoretically, but due to noise the pulse sensing can be triggered a bit too early or too late, so the sample-and-hold chip does not catch the voltage at the very top of the pulse but with deviations to either side - that is why we get so much PHA distribution broadening. As the time between pulses decreases and piled-up pulses become more frequent, it happens more and more often that a pulse arrives during the voltage drop of the previous pulse, and its rising slope is not enough to trigger the comparator, which sees a flat or still diminishing signal. The comparator plus sample-and-hold chip tandem is really a very oversimplified and "dumb" way to sense pulses.

Thanks, I think I am beginning to understand these "dumb" details.   :)

So could these sideways deviations cause additional (non-linear) loss of counts that we observe at high enough count rates?  I'm trying to understand these dead time effects beyond simple photon coincidence.

Could another possibility for the non-linear behavior of the pulse processing system at high count rates be due to the shape of the pulses changing as a function of count rate?

In other words, could the pulses have a more rectilinear shape at low count rates, but the pulse shapes become increasingly non-rectilinear at higher count rates?

Another thought: could the "effective" dead time interval actually increase as a function of count rate at very high count rates?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on November 25, 2022, 12:09:54 PM
This is a post on recent "constant" k-ratio measurements and some strange observations regarding the results.

Now that I think I've finally learned to properly adjust my PHA gain settings to work at count rates from zero to ~600 kcps, I've been noticing some consistently odd non-linearities in the constant k-ratio results.  These are quite small variations but they seem to be reproducibly present on my Cameca instrument, though maybe not on Anette's JEOL instrument.  But maybe that is just a question of the larger dead time constant on the Cameca instrument?

Beginning this story with Anette's instrument here are some TiO2/Ti metal k-ratios on her PETL spectrometer:

(https://probesoftware.com/smf/gallery/395_25_11_22_10_27_29.png)

Take a look at the Y-axis range and you can see that we are seeing consistent accuracy in the TiO2/Ti k-ratios from 15 kcps to 165 kcps count rates of Ti ka in TiO2 and count rates from 28 kcps to 392 kcps in Ti metal (10 nA to 140 nA) within roughly a thousand or so PPM.  Pretty darn good! 

This very nicely demonstrates the sensitivity of the constant k-ratio method because the Y-axis can be expanded indefinitely as the slope of the k-ratios approaches zero (as they should in a well calibrated instrument!). Her JEOL data was taken at 15 kV. Now here are some Ti Ka k-ratio data from my Cameca at 20 kV:
 
(https://probesoftware.com/smf/gallery/395_25_11_22_10_27_54.png)

First note that the count rates are almost the same (at 20 keV) as Anette's JEOL instrument at 15 keV. Next note that the k-ratio variation in the Cameca Y-axis range is larger than Anette's instrument though still within a percent or so. But that's still a pretty significant variation in the k-ratios as a function of count rate. So the question is, why is it so "squiggly" on the Cameca instrument? Though I should add that if we look really closely at Anette's JEOL data, there is an almost imperceptible "squiggle" to her data as well...  though seemingly smaller by about a factor of 10.  So what is causing these "squiggles" in the constant k-ratios?

And also note that the reason the k-ratios are starting to "head north" at 140 nA is simply that at that beam current the count rate on the Ti metal is approaching 600 kcps!  And on the Cameca with a 2.6 usec dead time constant, the logarithmic dead time correction is around 200% and really just can't keep up any more!

But more interesting (and also incomprehensible) to me is that these "squiggles" appear on all the spectrometers, even those with lower count rates, as seen here:

(https://probesoftware.com/smf/gallery/395_25_11_22_10_29_28.png)

So that might indicate to me that maybe these squiggles are due to a picoammeter non-linearity, but if you've been following along with these discussions you will remember that when using the constant k-ratio method, we measure both the primary and secondary standard at the *same* beam current. Therefore any picoammeter non-linearity should normalize out.  And in fact the picoammeter non-linearity on my instrument is much worse than these k-ratio data show, as previously plotted here:

https://probesoftware.com/smf/index.php?topic=1466.msg11324#msg11324

So I don't think it's the picoammeter.  Now, it is worth pointing out that using a traditional dead time calibration one would never see such tiny variations in the data.  To demonstrate this, here are the same Cameca k-ratio data as above, but this time plotted using the traditional linear dead time expression:

(https://probesoftware.com/smf/gallery/395_25_11_22_10_28_18.png)

These k-ratio variations are even less evident in a traditional plot of a single material intensity vs. beam current as seen here:

(https://probesoftware.com/smf/gallery/395_25_11_22_10_28_43.png)

The PHA data is here, first adjusted at 200 nA to ensure that the Ti Ka escape peak is above the baseline:

(https://probesoftware.com/smf/gallery/395_25_11_22_10_30_23.png)

and here at 30 nA:

(https://probesoftware.com/smf/gallery/395_25_11_22_10_29_58.png)

Remember, in integral mode, the counts to the right of the 5v X-axis are counted in the integration as shown previously:

https://probesoftware.com/smf/index.php?topic=1475.msg11356#msg11356
 
And are not cut off as we might expect, given the display in the PeakSight software.  And by the way, Anette has sent me some preliminary "gain test" data from her JEOL, and even though she had to deal with a shifting baseline, she also sees a constant intensity as a function of gain. She will post that data here soon I hope.

In the mean time does anyone have any theories on what could be causing these "squiggles" in the constant k-ratio data on my Cameca?  And why are they so much more pronounced than on Anette's JEOL instrument?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on November 28, 2022, 03:57:22 AM
Could another possibility for the non-linear behavior of the pulse processing system at high count rates be due to the shape of the pulses changing as a function of count rate?

In other words, could the pulses have a more rectilinear shape at low count rates, but the pulse shapes become increasingly non-rectilinear at higher count rates?

I returned to the probe with the oscilloscope to answer these questions (I was also intrigued whether the pulse shape might change somehow at higher count rates - which would be possible if the shaping amplifier had too short a time constant; I wanted to check that, especially since Brian recently questioned whether 250 ns is too short, as it is not a common value). I made the GIF below to show the differences (or actually the lack of differences) between a common Ti Ka pulse registered at 1.4 nA and a "lonely" pulse "hunted" at 130 nA.
(https://probesoftware.com/smf/gallery/1607_28_11_22_3_42_20.gif)
I use the word "hunted" here because it is not so simple to get a pulse with "no pulse" before and after it, already at 130 nA or 150 kcps. Going to higher count rates such a situation gets rarer and rarer, and more and more challenging to catch:
(https://probesoftware.com/smf/gallery/1607_28_11_22_3_55_53.bmp)

So the answer to Probeman is: there is no observable dependence between count rate and pulse shape, nor does the pulse morph into a more or less rectilinear form at any count rate.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on November 28, 2022, 09:15:51 AM
Could another possibility for the non-linear behavior of the pulse processing system at high count rates be due to the shape of the pulses changing as a function of count rate?

In other words, could the pulses have a more rectilinear shape at low count rates, but the pulse shapes become increasingly non-rectilinear at higher count rates?

I returned to the probe with the oscilloscope to answer these questions (I was also intrigued whether the pulse shape might change somehow at higher count rates - which would be possible if the shaping amplifier had too short a time constant; I wanted to check that, especially since Brian recently questioned whether 250 ns is too short, as it is not a common value). I made the GIF below to show the differences (or actually the lack of differences) between a common Ti Ka pulse registered at 1.4 nA and a "lonely" pulse "hunted" at 130 nA.

(https://probesoftware.com/smf/gallery/1607_28_11_22_3_42_20.gif)

So the answer to Probeman is: there is no observable dependence between count rate and pulse shape, nor does the pulse morph into a more or less rectilinear form at any count rate.

That is very interesting, thanks.  I'm curious, what was the observed count rate at 130 nA?

But if it's not changes in pulse shape causing the non-linear response of the counting system above 50 kcps, what effect(s) do you think could be causing such extreme non-linearity, beyond simple photon coincidence, as shown here:

Now, it is worth pointing out that using a traditional dead time calibration one would never see such tiny variations in the data.  To demonstrate this, here are the same Cameca k-ratio data as above, but this time plotted using the traditional linear dead time expression:

(https://probesoftware.com/smf/gallery/395_25_11_22_10_28_18.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on November 29, 2022, 03:17:48 AM
First note that the count rates are almost the same (at 20 keV) as Anette's JEOL instrument at 15 keV. Next note that the k-ratio variation in the Cameca Y-axis range is larger than Anette's instrument though still within a percent or so. But that's still a pretty significant variation in the k-ratios as a function of count rate. So the question is, why is it so "squiggly" on the Cameca instrument? Though I should add that if we look really closely at Anette's JEOL data, there is an almost imperceptible "squiggle" to her data as well...  though seemingly smaller by about a factor of 10.  So what is causing these "squiggles" in the constant k-ratios?

And also note that the reason the k-ratios are starting to "head north" at 140 nA is simply that at that beam current the count rate on the Ti metal is approaching 600 kcps!  And on the Cameca with a 2.6 µsec dead time constant, the logarithmic dead time correction is around 200% and really just can't keep up any more!
As you have estimated 2.6 µs with the logarithmic (I guess) equation, that means the hardware is set to 3 µs, correct? I thought I had already convinced you of the benefits of reducing it to at least 2 µs (which should give you an estimated "dead time constant" somewhere between 1.5 and 1.8 µs), so why are you still using 3 µs? You can reduce it safely in integral mode without any drawbacks (but in diff mode it is better to increase it to at least 4 µs, if you use diff mode for anything at all). I have gathered some limited measurements on Ti/TiO2 with the hardware DT set to 1 µs; I need to pull that data together and organize it before showing anything here.

I am aware of these "squiggles", as you call them, and pointed them out previously (bold part in the quote):
Now don't get me wrong: I agree k-ratios ideally should be the same for low, low-middle, middle, middle-high, high and ultra-high count rates. What I disagree is using k-ratios as starting (and only) point for calibration of dead time and effectively hiding problems in some of lower systems within the std dev of such approach. probeman, we had not seen how your log model calibrated to this high range of currents perform on low currents which Brian addressees here. I mean at 1-10 kcps or at currents from 1 to 10 nA. I know, that is going to be a pain to collect some meaningful number of counts at such low count rates. It should not sacrifice the accuracy at low currents as there are plenty of minerals which are small (no defocusing trick) and sensitive to beam. Could be that Your log equation takes care of that. In particularly I am absolutely not convinced that what you call anomaly at 40nA in your graphs is not actually the correct measurements, and that your 50-500nA range is wrong (picoamperometer). Also In most of Your graphs You still get not straight line but clearly bent this or other way distributions (visible with bare eye).
There are no pulses missing; it is just that this equation is not perfect. Think of it like the numerous matrix correction models, which work OK and comparably at common accelerating voltages (7-25 kV), but some of which give very large biases for (very) low voltage analyses, because they describe a mathematically oversimplified physical reality. As I said, I have already made an MC simulation and there is no visible discrepancy between the modeled input and the observable output count rates, albeit I could not find the equation, as my greed for the whole possible range of count rates (let's say up to 10 Mcps) stalled me. At least your method extends the usable range to 150-200 kcps, and you can minimize the effect of the first "bump" by calibrating the dead time only up to 100 kcps. Your log equation in its current form is already a nice improvement, as there is no need to be limited to 10-15 kcps anymore, or to require separate calibrations for high currents, or to use matrix-matched standards (where in reality it was the count-rate-matched intensities that provided the better results, and this was misinterpreted as having anything to do with the matrix).

I will try to redo the Monte Carlo simulation using a real pulse shape, with a more detailed simulation of the detection - that should clear things up a bit, I think. The point is actually not how and where the coincidences happen (photon coincidence inside the GPC detector vs. pulse pile-up in the shaping amplifier signal), but how they are ignored. This is what I think your log equation starts to fail to account for correctly at higher count rates (>150 kcps).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on November 29, 2022, 08:37:32 AM
First note that the count rates are almost the same (at 20 keV) as Anette's JEOL instrument at 15 keV. Next note that the k-ratio variation in the Cameca Y-axis range is larger than Anette's instrument though still within a percent or so. But that's still a pretty significant variation in the k-ratios as a function of count rate. So the question is, why is it so "squiggly" on the Cameca instrument? Though I should add that if we look really closely at Anette's JEOL data, there is an almost imperceptible "squiggle" to her data as well...  though seemingly smaller by about a factor of 10.  So what is causing these "squiggles" in the constant k-ratios?

And also note that the reason the k-ratios are starting to "head north" at 140 nA is simply that at that beam current the count rate on the Ti metal is approaching 600 kcps!  And on the Cameca with a 2.6 µsec dead time constant, the logarithmic dead time correction is around 200% and really just can't keep up any more!
As you have estimated 2.6 µs with the logarithmic (I guess) equation, that means the hardware is set to 3 µs, correct? I thought I had already convinced you of the benefits of reducing it to at least 2 µs (which should give you an estimated "dead time constant" somewhere between 1.5 and 1.8 µs), so why are you still using 3 µs? You can reduce it safely in integral mode without any drawbacks (but in diff mode it is better to increase it to at least 4 µs, if you use diff mode for anything at all). I have gathered some limited measurements on Ti/TiO2 with the hardware DT set to 1 µs; I need to pull that data together and organize it before showing anything here.

Yes, this is using the logarithmic expression with an integer DT of 3 usec. 

I have presented this data to the lab manager at UofO and mentioned to her that we could be utilizing the 2 usec integer DT, but since I retired earlier this year (now a courtesy faculty), I can only make suggestions.   :'(

I am aware of these "squiggles", as you call them, and pointed them out previously (bold part in the quote):
Now don't get me wrong: I agree k-ratios ideally should be the same for low, low-middle, middle, middle-high, high and ultra-high count rates. What I disagree is using k-ratios as starting (and only) point for calibration of dead time and effectively hiding problems in some of lower systems within the std dev of such approach. probeman, we had not seen how your log model calibrated to this high range of currents perform on low currents which Brian addressees here. I mean at 1-10 kcps or at currents from 1 to 10 nA. I know, that is going to be a pain to collect some meaningful number of counts at such low count rates. It should not sacrifice the accuracy at low currents as there are plenty of minerals which are small (no defocusing trick) and sensitive to beam. Could be that Your log equation takes care of that. In particularly I am absolutely not convinced that what you call anomaly at 40nA in your graphs is not actually the correct measurements, and that your 50-500nA range is wrong (picoamperometer). Also In most of Your graphs You still get not straight line but clearly bent this or other way distributions (visible with bare eye).
There are no pulses missing; it is just that this equation is not perfect. Think of it like the numerous matrix correction models, which work OK and comparably at common accelerating voltages (7-25 kV), but some of which give very large biases for (very) low voltage analyses, because they describe a mathematically oversimplified physical reality. As I said, I have already made an MC simulation and there is no visible discrepancy between the modeled input and the observable output count rates, albeit I could not find the equation, as my greed for the whole possible range of count rates (let's say up to 10 Mcps) stalled me. At least your method extends the usable range to 150-200 kcps, and you can minimize the effect of the first "bump" by calibrating the dead time only up to 100 kcps. Your log equation in its current form is already a nice improvement, as there is no need to be limited to 10-15 kcps anymore, or to require separate calibrations for high currents, or to use matrix-matched standards (where in reality it was the count-rate-matched intensities that provided the better results, and this was misinterpreted as having anything to do with the matrix).

Thank-you. But of course the logarithmic equation is not perfect!  We are merely trying to find a better mathematical model for what we observe- you know, science!   :D

As for lower count rates, we have already demonstrated that at low count rates the performance of the traditional and logarithmic models are essentially identical. Here is a sentence from our recent dead time paper: " In fact, at 1.5 us dead times, the traditional and logarithmic expressions produce results that are the same within 1 part in 10,000,000 at 1000 cps, 1 part in 100,000 at 10 kcps and 1 part in 10,000 at 20 kcps".

As for problems with the picoammeter, please remember that because both the primary and secondary standards for each k-ratio are measured at the *same* beam current, the accuracy or linearity of the picoammeter should not be an issue.  That's one of the advantages of the constant k-ratio method.

Seriously, I am totally happy with the performance of the log expression because as you say, it extends our quantitative accuracy by roughly a factor of 8x or so in count rates.  But I am also sure that it could be further improved upon.

At this point, I just have some intellectual curiosity as to what these "squiggles" are caused by in the Cameca, and why the JEOL instrument appears to not show similar artifacts.  Do you see anything like this in constant k-ratio measurements on your instrument?

I will try to redo the Monte Carlo simulation using a real pulse shape, with a more detailed simulation of the detection - that should clear things up a bit, I think. The point is actually not how and where the coincidences happen (photon coincidence inside the GPC detector vs. pulse pile-up in the shaping amplifier signal), but how they are ignored. This is what I think your log equation starts to fail to account for correctly at higher count rates (>150 kcps).

Yes, depending on the dead time constants.  For Cameca instruments (~2 to 3 usec), the log expression begins to fail around 200 to 300 kcps, while for the JEOL (~1 to 2 usec) the log expression seems to be good up to 300 to 400 kcps.

This last weekend I measured constant k-ratios for Si Ka in SiO2 and benitoite (using the same bias voltages) and calculated the optimum dead times and find that they are very similar to the results from Ti Ka.  Interestingly the PHA settings were much easier to deal with since there is no escape peak!  Next I need to measure some more emission lines to see if there are any systematic differences.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 05, 2022, 09:19:08 AM
I revisited my MC simulations and re-investigated the pulse sensing electronics of the Cameca. I got a much clearer view and found out that it works a bit differently than I had thought initially, and after sorting it out it becomes absolutely clear why the k-ratios shoot up after 200 kcps (or more, if using shorter hardware dead time constants). The answer is that integral mode on the Cameca is not really "integral" - there is a physical barrier preventing pulses near 0 V and below from being counted in that mode. The left-most part of the PHA distribution always looks like a natural decay (going toward 0 V from the left-most "peak" of the distribution) and hides part of the missed/ignored pulses.
The key finding is that the comparator compares the inverted pulse signal (inverted by the gain amplification electronics) with the upright pulse signal (the negative signal re-inverted to positive and scaled 0 to 5 V by the sample-and-hold chip). The sensed-pulse event is triggered by the rising edge of the comparator output, used as a clock on a D flip-flop. That design does not work so well with pulse pileups: even if the D flip-flop is reset (after the hardware-set dead time times out) via its RESET and D pins, the clock input first needs to go from the high state back to the low state before it can trigger again, and with a dense pulse train that can be delayed a lot.

I had always thought it was using the classical comparator plus S/H chip tandem approach, comparing the original (upright) signal with a non-inverted delayed copy (from the S/H chip), which would drive the comparator output low as the pulse voltage drops after the peak. Such a design would be able to sense pulses shifted up or down (shifted by pile-up), even pulses starting at a substantially negative voltage whose tops never rise above 0 V. Anyway, I think I should illustrate these phenomena better so they can be properly understood (probably in another thread).

Finally, I was also interested in what a more detailed MC simulation of the pulses would reveal.
So, to remind everyone, I was quite against the terminology of "photon coincidence" used by probeman. The detailed MC allowed me to look into this more closely, and the numbers show that I was partly wrong.
First, some constants: other work shows that a typical primary GFPC pulse (of random shape) is about 200 ns long, so if two photons arrive at the detector within such a window we can say there is photon coincidence. My simulation runs with 40 ns granularity; if the coincidence falls within such a window then, at least in this simulation, it is unresolvable (it looks like a single event with the sum of the collected energies).
So here are the numbers - the fraction of initial GFPC pulses which represent more than a single photon hit to the detector:
count rate | 40 ns wind. | 200 ns wind.
10k        | 0.02%       | 0.08%
100k       | 0.2%        | 0.81%
1M         | 1.77%       | 7.43%
10M        | 16%         | 50%

So what does this mean? Even with a better counting design (i.e. FPGA-based deconvolution) there would still be pretty serious limitations from perfectly coincident (within a 40 ns window) photons. This is not such a huge problem for low energy X-rays (below the Ar escape peak energy), as the number of counts in a piled-up pulse can be found by dividing its amplitude by the average single pulse amplitude. With Ar escape pulses present, this gets challenging: is the piled-up pulse composed of 2 normal pulses, or 1 normal + 2 Ar escape pulses? Even a simple average normal pulse: does it represent a single pulse, or rather 2 or 3 piled-up (coincident) Ar escape pulses? So the current way of counting, even if the pulse sensing were significantly improved with newer electronics, would be limited to ~500 kcps, as going above that would start increasing the uncertainty of the measurement.
As probeman already noticed, those large dead time corrections mean a lot of uncertainty introduced into the measurement; this could be lowered with better pulse sensing, but the limit would just be pushed to new boundaries.

Thanks to playing around with the MC simulation I think I found an absolutely insane alternative to deal with this problem, which would ditch the dead time completely and absolutely, but I would rather present it after hardware testing.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 05, 2022, 09:45:47 AM
Finally, I was also interested in what a more detailed MC simulation of the pulses would reveal.
So, to remind everyone, I was quite against the terminology of "photon coincidence" used by probeman. The detailed MC allowed me to look into this more closely, and the numbers show that I was partly wrong.

First, some constants: other work shows that a typical primary GFPC pulse (of random shape) is about 200 ns long, so if two photons arrive at the detector within such a window we can say there is photon coincidence. My simulation runs with 40 ns granularity; if the coincidence falls within such a window then, at least in this simulation, it is unresolvable (it looks like a single event with the sum of the collected energies).
So here are the numbers - the fraction of initial GFPC pulses which represent more than a single photon hit to the detector:
count rate | 40 ns wind. | 200 ns wind.
10k        | 0.02%       | 0.08%
100k       | 0.2%        | 0.81%
1M         | 1.77%       | 7.43%
10M        | 16%         | 50%

So what does this mean? Even with a better counting design (i.e. FPGA-based deconvolution) there would still be pretty serious limitations from perfectly coincident (within a 40 ns window) photons. This is not such a huge problem for low energy X-rays (below the Ar escape peak energy), as the number of counts in a piled-up pulse can be found by dividing its amplitude by the average single pulse amplitude. With Ar escape pulses present, this gets challenging: is the piled-up pulse composed of 2 normal pulses, or 1 normal + 2 Ar escape pulses? Even a simple average normal pulse: does it represent a single pulse, or rather 2 or 3 piled-up (coincident) Ar escape pulses? So the current way of counting, even if the pulse sensing were significantly improved with newer electronics, would be limited to ~500 kcps, as going above that would start increasing the uncertainty of the measurement.
As probeman already noticed, those large dead time corrections mean a lot of uncertainty introduced into the measurement; this could be lowered with better pulse sensing, but the limit would just be pushed to new boundaries.

Thanks to playing around with the MC simulation I think I found an absolutely insane alternative to deal with this problem, which would ditch the dead time completely and absolutely, but I would rather present it after hardware testing.

Hi SG,
This is a really interesting post, and most excellent work.  Of course we would all be very interested in any hardware breakthroughs you can come up with. One question: do you think these same hardware limitations also apply to the JEOL electronics?  I am asking because I am seeing some evidence that this is indeed the case and I was just posting about this when you posted this morning.

But I also wanted to say to you that we recently discovered that my co-authors and I were also partly wrong, based on further MC simulations that we performed with Aurelien Moy for our paper.  Basically we found that the traditional dead time expression corrects even for multiple photon coincidence, and that the non-linear trends that we are observing at these excessively high count rates are due to some other hardware limitations in the instrument.  So these non-linear dead time expressions (Willis, six term, logarithmic and exponential) are correcting for effects other than simple photon coincidence. See the attached Excel spreadsheet by Aurelien Moy which compares several of these dead time expressions with his Monte Carlo modeling.

Based on this new information (we were a little stunned to say the least!) we ended up making some significant changes to the paper, and in fact we have added you (Petras) to the acknowledgments section of our paper, if that is OK with you.  Your discussions have been very helpful and we very much appreciate your contributions to the topic. 

I still think that if you get a chance to perform some constant k-ratio measurements on your own instrument, you would find the data very interesting.  OK, back to the post I started on this morning...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 05, 2022, 10:01:47 AM
So I previously showed some strange "squiggles" in the constant k-ratio data from one of our large area crystal Cameca spectrometers for Ti Ka using TiO2 and Ti metal standards:

https://probesoftware.com/smf/index.php?topic=1466.msg11416#msg11416

As described in that previous post, at the time I was under the impression that these squiggles were only showing up in the Cameca instrument, but I wanted to perform additional tests to compare with a different emission line. So here are constant k-ratio measurements on benitoite and SiO2 on our LTAP crystal up to 200 nA, first with the traditional dead time expression:

(https://probesoftware.com/smf/gallery/395_05_12_22_8_59_45.png)

Clearly the traditional dead time expression is not very useful at these high count rates, giving us a total variance of around 29%!  But just for fun, let's increase the dead time constant to an arbitrarily large value to try and "force" the k-ratios to be more constant:

(https://probesoftware.com/smf/gallery/395_05_12_22_9_14_59.png)

Unfortunately, as we can see, with an arbitrarily large dead time constant we start to over-correct the lower intensities while still under-correcting the higher intensities, giving us a total variance of around 7%, which is better but still not sufficient for quantitative work.  So let's try the logarithmic dead time expression:

(https://probesoftware.com/smf/gallery/395_05_12_22_9_19_32.png)

This gives us a total variance of around 0.6% which is pretty darn good, but lo and behold, there are those darn "squiggles" again.  Again, it is worth mentioning that unless one is utilizing the constant k-ratio method, these subtle variations would never be noticeable.  Also worth mentioning is the fact that at lower count rates these "squiggles" are not nearly as pronounced, as seen here on a normal TAP crystal from this same run:

(https://probesoftware.com/smf/gallery/395_05_12_22_9_23_55.png)

So, about 1.5% variance.

OK, so what about the "squiggles" in the JEOL constant k-ratios I mentioned? Well, I decided to look more closely at some of Anette's constant k-ratio measurements, also using Si Ka, and darn if I didn't find very similar "squiggles" when looking at the k-ratios with the highest intensities. So here are the Si Ka k-ratios on the JEOL TAP crystal:

(https://probesoftware.com/smf/gallery/395_05_12_22_9_55_20.png)

Please ignore the k-ratios at the highest count rates. These are due to difficulties with getting a proper tuning of the PHA bias/gain settings, as this data is from back in August when we were still trying to figure out how to deal with the extreme pulse height depression at these crazy count rates.
 
The point is that these subtle "squiggles" are also visible in the constant k-ratio data on the JEOL instrument.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 05, 2022, 12:31:44 PM
One question: do you think these same hardware limitations also apply to the JEOL electronics?  I am asking because I am seeing some evidence that this is indeed the case and I was just posting about this when you posted this morning.
JEOL clearly has a somewhat different approach and a different problem: their pulse sensing catches the background, and that can make it behave more like an extendable dead time, whereas the Cameca PHA and pulse sensing ignore anything near and below 0V. Hopefully Brian will get to that at some point, as he is already investigating the pulse pre-amplifier and shaping stage on the JEOL probe.

But I also wanted to say to you that we recently discovered that my co-authors and I were also partly wrong, based on further MC simulations that we performed with Aurelien Moy for our paper.  Basically we found that the traditional dead time expression corrects even for multiple photon coincidence, and that the non-linear trends that we are observing at these excessively high count rates are due to some other hardware limitations in the instrument.  So these non-linear dead time expressions (Willis, six term, logarithmic and exponential) are correcting for effects other than simple photon coincidence. See the attached Excel spreadsheet by Aurelien Moy which compares several of these dead time expressions with his Monte Carlo modeling.
Now this surprises me and I would disagree. My first MC simulation, much oversimplified and modeling strictly only pulse pile-ups (no PHA shift, actually no amplitude data, no Ar escape pulses, ideal deterministic pulse sensing with a strictly enforced dead time) clearly demonstrated that neither the classical nor the Willis equation could be fitted to the "ideally" deterministically counted pulses (imposing only the hardware dead time). Thus I am very surprised and curious how your MC could give the opposite conclusion. Maybe your model misses some crucial piece? Had Aurelien looked into my first MC? Periodic pulses keeping the pulse train above 0V (preventing the comparator from setting its output low, so the flip-flop can't get triggered by a rising edge from low to high state and can't signal that there was a pulse), and random pulses arriving at negative voltage after a tail/depression with their tops not getting above 0V - those are the culprits behind the increasing fraction of pulses not sensed. They are coming from the detector into the pulse sensing and PHA system (heck, I even spent an afternoon counting pulses by hand (by eye) on the oscilloscope at something like 1 Mcps to prove to myself that they are there, nothing is missing), but the damn pulse sensing system has its "eyes closed" and is not sensing them properly.
 
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 05, 2022, 02:12:43 PM
But I also wanted to say to you that we recently discovered that my co-authors and I were also partly wrong, based on further MC simulations that we performed with Aurelien Moy for our paper.  Basically we found that the traditional dead time expression corrects even for multiple photon coincidence, and that the non-linear trends that we are observing at these excessively high count rates are due to some other hardware limitations in the instrument.  So these non-linear dead time expressions (Willis, six term, logarithmic and exponential) are correcting for effects other than simple photon coincidence. See the attached Excel spreadsheet by Aurelien Moy which compares several of these dead time expressions with his Monte Carlo modeling.
Now this surprises me and I would disagree. My first MC simulation, much oversimplified and modeling strictly only pulse pile-ups (no PHA shift, actually no amplitude data, no Ar escape pulses, ideal deterministic pulse sensing with a strictly enforced dead time) clearly demonstrated that neither the classical nor the Willis equation could be fitted to the "ideally" deterministically counted pulses (imposing only the hardware dead time). Thus I am very surprised and curious how your MC could give the opposite conclusion. Maybe your model misses some crucial piece? Had Aurelien looked into my first MC? Periodic pulses keeping the pulse train above 0V (preventing the comparator from setting its output low, so the flip-flop can't get triggered by a rising edge from low to high state and can't signal that there was a pulse), and random pulses arriving at negative voltage after a tail/depression with their tops not getting above 0V - those are the culprits behind the increasing fraction of pulses not sensed. They are coming from the detector into the pulse sensing and PHA system (heck, I even spent an afternoon counting pulses by hand (by eye) on the oscilloscope at something like 1 Mcps to prove to myself that they are there, nothing is missing), but the damn pulse sensing system has its "eyes closed" and is not sensing them properly.

I know, we were totally surprised too.  As I said, we were stunned!    :o

But Aurelien said he looked at his code very carefully several times and is convinced that the traditional expression does deal properly with all (ideal) photon coincidence.  I add "ideal" as a qualifier because once the pulses start to overlap at very high count rates we think there are some non-linear effects (maybe due to the non-rectilinear shape of the pulses?) that start creeping in, hence the need for a logarithmic dead time correction at high count rates.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 08, 2022, 10:14:47 AM
I wanted to share the PHA scans for a couple of spectrometers on Si Ka, whose constant k-ratios were plotted in the above post:

https://probesoftware.com/smf/index.php?topic=1466.msg11432#msg11432

Here's spec 1 TAP at 200 nA:

(https://probesoftware.com/smf/gallery/395_08_12_22_9_46_31.png)

Remember, we should always tune our PHA settings at the highest beam current we anticipate using, on the highest concentration material we will be utilizing in a specific probe session. This is to ensure that the PHA peak will always stay above the baseline level, even with pulse height depression effects at the highest count rates.

Now the same spectrometer at 10 nA:

(https://probesoftware.com/smf/gallery/395_08_12_22_9_52_01.png)

Interestingly the PHA shift for Si Ka at lower count rates is much more subdued than for Ti Ka. Also, the lack of an escape peak makes things much easier!

Now for spectrometer 2 using a LTAP crystal (~370 kcps on SiO2) at 200 nA:

(https://probesoftware.com/smf/gallery/395_08_12_22_9_52_25.png)

Again, we do not care that the peak is being "cut off" on the right side of the plot, because in INTEGRAL mode the PHA system still counts those pulses, as previously demonstrated using a gain test acquisition for Ti Ka on Ti metal:

(https://probesoftware.com/smf/gallery/395_08_12_22_10_06_02.png)

See here (and subsequent posts) for more details on PHA tuning:

https://probesoftware.com/smf/index.php?topic=1475.msg11330#msg11330

And finally spectrometer 2 LTAP again, but at 10 nA:
 
(https://probesoftware.com/smf/gallery/395_08_12_22_9_52_43.png)

Note the shift to the right at these lower count rates. But the important point is that the PHA peak is always *above* the baseline level from 10 nA to 200 nA, so we have a nice linear response in our electronics!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on December 11, 2022, 12:14:16 AM
This gives us a total variance of around 0.6% which is pretty darn good, but lo and behold, there are those darn "squiggles" again. 

The ”squiggles” are mostly the result of inappropriate application of your model, as τ may not be adjusted arbitrarily.  Its value may only be revealed by regression of appropriate data, but it may not be varied at will without violating the constraints imposed by those data; this ought to be pretty obvious.  If the "log" function is expanded as a power series and then truncated after its first-order term, it gives a linear expression that may be applied in the region of relatively low count rates (< 50 kcps).  The value of τ determined within that linear region must be consistent with that used in the converged series (the “log” equation).  In the squiggly plots, on one side or the other of a maximum or minimum, you are likely illustrating that the fractional correction to the count rate for at least one material used to construct the ratio decreases with increasing count rate.  Obviously, this is not physically realistic.

I’m not going into any further detail because I’ve already done that in my discussion of “delta” plots, in which it is revealed that the log expression in conjunction with arbitrary variation of τ produces physically unrealistic behavior. (https://probesoftware.com/smf/index.php?topic=1470.0)

Once again, I warn anyone in the strongest possible terms not to use the “log” equation.  It departs in form from all other expressions used to correct for dead time and pulse pileup.  The forms of the expressions for extending and non-extending dead times have been known since the 1930s and are well supported experimentally; they are limiting expressions.  I tried to bring this to the forefront in my “generalized dead times (https://probesoftware.com/smf/index.php?topic=1489.0)” topic.  Has this discussion been forgotten? 
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 11, 2022, 10:08:02 AM
This gives us a total variance of around 0.6% which is pretty darn good, but lo and behold, there are those darn "squiggles" again. 

The ”squiggles” are mostly the result of inappropriate application of your model, as τ may not be adjusted arbitrarily.  Its value may only be revealed by regression of appropriate data, but it may not be varied at will without violating the constraints imposed by those data; this ought to be pretty obvious.  If the "log" function is expanded as a power series and then truncated after its first-order term, it gives a linear expression that may be applied in the region of relatively low count rates (< 50 kcps).  The value of τ determined within that linear region must be consistent with that used in the converged series (the “log” equation).  In the squiggly plots, on one side or the other of a maximum or minimum, you are likely illustrating that the fractional correction to the count rate for at least one material used to construct the ratio decreases with increasing count rate.  Obviously, this is not physically realistic.

I’m not going into any further detail because I’ve already done that in my discussion of “delta” plots, in which it is revealed that the log expression in conjunction with arbitrary variation of τ produces physically unrealistic behavior. (https://probesoftware.com/smf/index.php?topic=1470.0)

Once again, I warn anyone in the strongest possible terms not to use the “log” equation.  It departs in form from all other expressions used to correct for dead time and pulse pileup.  The forms of the expressions for extending and non-extending dead times have been known since the 1930s and are well supported experimentally; they are limiting expressions.  I tried to bring this to the forefront in my “generalized dead times (https://probesoftware.com/smf/index.php?topic=1489.0)” topic.  Has this discussion been forgotten?

There you go again. So, "in the strongest possible terms", hey?  Yup, no one has forgotten that you are both stubborn and wrong.  Well, you are free to restrict your quantitative analyses to less than 50 kcps if that is your choice.  But for those of us who enjoy scientific progress beyond the 1930s, we will continue to investigate our spectrometer response at these high count rates for trace element and high speed WDS quant mapping. 

By the way, similar "squiggles" are also seen in EDS at high count rates so these artifacts are not unique to software corrections for dead time:

(https://probesoftware.com/smf/gallery/395_11_12_22_9_07_28.png)
 
In the meantime, as we have previously pointed out, dead time is not a constant.  If it were a constant, every detector would have the same value!   ;D  It is rather a "parametric" constant, which is defined as "a constant in an equation that varies in other equations of the same general form, especially such a constant in the equation of a curve or surface that can be varied to represent a family of curves or surfaces."

Therefore, depending on the form of the equation (and the detector electronics), we might obtain slightly different constants, e.g., 1.32 usec using the traditional expression or 1.28 usec using the logarithmic expression. These slight differences are of course not visible except at count rates exceeding 100 to 200 kcps and only with the constant k-ratio method with its amazing sensitivity.

We already know from Monte Carlo modeling that the traditional expression correctly handles single and multiple photon coincidence (I was mistaken on that point originally and SEM geologist was right). See attached Excel spreadsheet in this post:

https://probesoftware.com/smf/index.php?topic=1466.msg11431#msg11431

However, the traditional expression clearly fails at count rates above 50 kcps, so we must infer that various non-linear behaviors (probably more than one) are introduced at these high count rates, ostensibly by the pulse processing electronics.  That is the subject of the discussion we are having, and if you cannot respond to the topic at hand, please go away. 

What you call "physically unrealistic", we call empirical observation and the development of scientific models. We personally prefer to avoid large errors in our quantitative measurements while increasing our sensitivity and throughput, so we will continue to develop improved dead time models beyond what you learned in grad school:

(https://probesoftware.com/smf/gallery/395_15_08_22_8_52_10.png)
 
And by the way, the only person who has been making "arbitrary adjustments" to the dead time constants has been you, in a blatant and pathetic attempt to discredit our efforts.  You were already called out on this, but apparently you need to be reminded again. Please stop misrepresenting our work! Instead, we are carefully adjusting the dead time constant to yield a *zero slope* in our constant k-ratio plots as a function of count rate, which I'm sure even you will agree is the analytical result we should observe in an ideal detector system.

The point being that our detection systems are not perfect (and neither are our models), hence "squiggles".   :)
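
For anyone who wants to try this zero slope adjustment themselves, here is a rough Python sketch of the idea. The count rate arrays are hypothetical placeholders (substitute your own constant k-ratio measurements), and the traditional expression is used only for illustration; swap in whatever dead time expression you prefer at high count rates:

Code:
import numpy as np

# Hypothetical raw (uncorrected) count rates measured at a series of beam
# currents on the two materials; replace with real constant k-ratio data.
raw_std = np.array([10e3, 50e3, 100e3, 200e3, 300e3])   # e.g. Ti metal, cps
raw_unk = np.array([5.5e3, 27.5e3, 55e3, 110e3, 165e3]) # e.g. TiO2, cps

def corrected(n_obs, tau):
    # traditional non-extending correction, used here for illustration only
    return n_obs / (1.0 - n_obs * tau)

best_tau, best_slope = None, np.inf
for tau in np.linspace(1.0e-6, 2.0e-6, 101):              # candidate dead times
    k = corrected(raw_unk, tau) / corrected(raw_std, tau)
    slope = np.polyfit(corrected(raw_std, tau), k, 1)[0]  # slope of k-ratio vs count rate
    if abs(slope) < abs(best_slope):
        best_tau, best_slope = tau, slope

print(f"dead time giving the flattest k-ratio trend: {best_tau*1e6:.2f} usec")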
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 12, 2022, 09:36:27 AM
Once again, I warn anyone in the strongest possible terms not to use the “log” equation.  It departs in form from all other expressions used to correct for dead time and pulse pileup.  The forms of the expressions for extending and non-extending dead times have been known since the 1930s and are well supported experimentally; they are limiting expressions.  I tried to bring this to the forefront in my “generalized dead times (https://probesoftware.com/smf/index.php?topic=1489.0)” topic.  Has this discussion been forgotten?

From the 1930s? Seriously? You know gas quenching (required for a proportional counter) was invented nearly a decade later https://link.springer.com/article/10.1007/BF01333374 (https://link.springer.com/article/10.1007/BF01333374), and the proportional counter itself as we know it today appeared a whole decade after the '30s. Equations made for the G-M tube are not relevant to the proportional counter, because a proportional counter is not a G-M tube. It operates in a different kind of regime (gas, gas pressure, geometry, presence of quenching, wire voltage, sensing, electronics). Furthermore, the observed missing-count problem is not due to the detector but to the counting electronics - the problem of crossing the analog-digital domain - which is a problem originating decades later than people could imagine in the 1930s, and those equations are not directly used in EDS (which, by the way, is extendable) or WDS, because they are not relevant. If dead times had been sorted out in the 1930s already, there would have been no later attempts to improve on them, and it is still an ongoing effort.

In my opinion, the log model is not good enough, as it bends the count rate too little, but it bends it more than the simple linear equation and is able to fit closer to the real input vs. output count rates. And this is where I disagree with probeman:

We already know from Monte Carlo modeling that the traditional expression correctly handles single and multiple photon coincidence (I was mistaken on that point originally and SEM geologist was right). See attached Excel spreadsheet in this post:

https://probesoftware.com/smf/index.php?topic=1466.msg11431#msg11431

However, the traditional expression clearly fails at count rates above 50 kcps, so we must infer that various non-linear behaviors (probably more than one) are introduced at these high count rates, ostensibly by the pulse processing electronics.  That is the subject of the discussion we are having, and if you cannot respond to the topic at hand, please go away. 

What you call physically unrealistic, we call empirical observation and scientific models. We personally prefer to avoid large errors in our quantitative measurements, so we will continue to develop improved dead time models beyond what you learned in grad school:

But I was not right - I was wrong (about what?). I was wrong in thinking that this is dominated by pulse pile-up (that is, by electronic pulse pile-ups in the middle of the pipeline). But my most recent MC simulation shows that probeman's coined term "photon coincidence" is indeed already present at low count rates and is not as exotic as it looked to me. Still, the probability of 200 ns pulses (Townsend avalanches) overlapping at least partly in a finite time domain is smaller than that of 3.5 µs pulses overlapping at least partly in the same finite time domain. As I understand it, Aurelien did some kind of MC simulation of his own. However, the attached xlsx does not show the simulation itself, only its results, which are then compared, and I am very skeptical whether that simulation was done right. To make a proper simulation it is important to understand how the detection works and how it does not. My initial oversimplified simulation, with oversimplified timing resolution steps of 1 µs (thus working a bit more crudely but faster), already showed that coincidences play the most important role in the non-linear response of the counting.

The other technical effects influence the final counts, but they do not dominate. The key concept to understand is that in counting we have a finite resource - time. We normally consider that we count the pulses (the representation of single X-ray events) and then get the count rate by dividing the counted pulses by the time when the detector was not blind (dead). Let's do a thought experiment: say we have 1 second as the time domain in which pulses can appear. If we get 100 pseudo-random counts (which, in this particular example, do not overlap at all in the time domain - thus "pseudo") and the counting system is blind for 3 µs per pulse, then using the simple formula the "live time" is: 1_000_000 µs - 100*3 µs = 999 700 µs, so the rate is 100 ct / 0.9997 s. Let's push this thought experiment further: this time suppose we "pseudo"-randomly place 200 000 pulses in that 1 s time domain. The live time in that case is: 1_000_000 µs - 200_000*3 µs = 400 000 µs, so the rate is 200_000 counts / 0.4 s = 500 kcps. In this thought experiment it should already be obvious that, in the case where the pulses do not overlap at all in the time domain, this equation is absolutely, completely broken, as we put 200 kilocounts into a 1 second domain and got 500 kcps. But we have been using this formula for many years, as it "kind of" works: its imperfection was hidden behind the numbers over a limited count rate range, or the situation was pushed toward the expected results by the real photon/pulse coincidences and by "calibrating" - actually scaling - the dead time constant (in effect bending the equation a bit to give more expected results, at least over a controlled range of count rates). The value of that "constant" needs to be calibrated so that the equation works - it actually has no relation to the real-world hardware dead time.
If we go further with the thought experiment, inputting 333 333 non-overlapping counts will leave us with 1 µs of live time and will give a bizarre 333.333 Gcps!  :o  And trying to push 333 334 non-overlapping pulses would overflow the equation and tear the universe apart :P.
In real life, however, the live time can't get anywhere close to 1 µs - due to the true randomness of pulses, many of those 333 333 pulses would be piled up/overlapped in the time domain, and there would be much more than 1 µs of time-without-any-pulse, or live time, left.
Anyway, the illustration of that thought experiment:
(https://probesoftware.com/smf/gallery/1607_12_12_22_9_32_56.png)
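
To make the arithmetic above concrete, a few lines of Python reproducing exactly the numbers in the thought experiment (nothing more than the naive live-time formula):

Code:
TAU = 3e-6   # 3 us of blind time per counted pulse, as in the example above

for counts in (100, 200_000, 333_333):
    live_time = 1.0 - counts * TAU        # naive "live" seconds left out of 1 s
    print(counts, live_time, counts / live_time)

# 100     -> 0.9997 s live,  ~100 cps
# 200000  -> 0.4 s live,      500 kcps
# 333333  -> ~1e-06 s live,  ~333 Gcps (clearly nonsense for non-overlapping pulses)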

Let's look at this problem from a slightly different perspective. Do we really need to measure pulses? We can barely do that, as pulse pile-ups blur the reality. However, what we can measure well is the time without any pulses! That is the live time! The live time at 0 kcps will equal the real time. With increasing count rate the live time will decrease proportionally, but not linearly. The live time decreases in a non-linear fashion thanks to the pulse pile-ups or photon coincidences. Think about it like this: say we have a 1 s time domain populated with random pulses occupying 0.2 s, so the free time left is 0.8 s. Say we want to randomly add one more pulse: is it more probable for that pulse to overlap those 0.2 s or to land in the free 0.8 s region? The answer is a probability of 8:2 for landing in the still unoccupied time. Now let's reverse this situation: say we have hundreds of thousands of counts covering 0.8 s out of 1 s, and the free time domain (which we would call live time) is only 0.2 s. What is the probability for another random pulse to land in that still unoccupied time? 2:8 - much smaller! Any next count which diminishes the pulse-free time (or eats the live time away) makes it less probable for another pulse to be placed in that smaller free time (and not every pulse eats live time away!). Thus the real-life live time diminishes in a 1-log-like fashion and can approach 0 s as the count rate nears ∞. Thus this log equation is much closer to how things work.
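
A small simulation sketch (my own toy model, assuming an effective pulse width of about 1 µs) shows the same thing - the pulse-free fraction of each second shrinks non-linearly as the count rate climbs:

Code:
import numpy as np

rng = np.random.default_rng()

def live_fraction(true_rate, pulse_width=1e-6, total_time=1.0):
    """Fraction of the time axis not covered by any pulse (the 'live time')."""
    n = rng.poisson(true_rate * total_time)
    if n == 0:
        return 1.0
    starts = np.sort(rng.uniform(0.0, total_time, n))
    ends = starts + pulse_width
    covered, cur_start, cur_end = 0.0, starts[0], ends[0]
    for s, e in zip(starts[1:], ends[1:]):      # merge overlapping pulse intervals
        if s > cur_end:
            covered += cur_end - cur_start
            cur_start, cur_end = s, e
        else:
            cur_end = max(cur_end, e)
    covered += cur_end - cur_start
    return 1.0 - covered / total_time

for rate in (10e3, 100e3, 500e3, 1e6):
    print(f"{rate:9.0f} cps -> live fraction {live_fraction(rate):.3f}")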

Of course there are other technical considerations and other sources of pulses being skipped from counting, but those are minor causes. So I can't understand how Aurelien's MC simulation could lead to those conclusions... I am disturbed, as I am convinced that coming up with the log equation was the correct step and that it takes care of pulse/photon coincidences much better than the older broken equation... and this step back of saying that the old equation (surprisingly) takes care of it - I can't understand it in any way. I think your MC missed something very important.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 12, 2022, 11:48:38 AM
In my opinion, the log model is not good enough, as it bends the count rate too little, but it bends it more than the simple linear equation and is able to fit closer to the real input vs. output count rates. And this is where I disagree with probeman:

We already know from Monte Carlo modeling that the traditional expression correctly handles single and multiple photon coincidence (I was mistaken on that point originally and SEM geologist was right). See attached Excel spreadsheet in this post:

https://probesoftware.com/smf/index.php?topic=1466.msg11431#msg11431

However, the traditional expression clearly fails at count rates above 50 kcps, so we must infer that various non-linear behaviors (probably more than one) are introduced at these high count rates, ostensibly by the pulse processing electronics.  That is the subject of the discussion we are having, and if you cannot respond to the topic at hand, please go away. 

What you call physically unrealistic, we call empirical observation and scientific models. We personally prefer to avoid large errors in our quantitative measurements, so we will continue to develop improved dead time models beyond what you learned in grad school:

But I was not right - I was wrong (about what?). I was wrong in thinking that this is dominated by pulse pile-up (that is, by electronic pulse pile-ups in the middle of the pipeline). But my most recent MC simulation shows that probeman's coined term "photon coincidence" is indeed already present at low count rates and is not as exotic as it looked to me.
...
As I understand it, Aurelien did some kind of MC simulation of his own. However, the attached xlsx does not show the simulation itself, only its results, which are then compared, and I am very skeptical whether that simulation was done right. To make a proper simulation it is important to understand how the detection works and how it does not. My initial oversimplified simulation (thus working a bit more crudely) already showed that coincidences play the most important role in the non-linear response of the counting.

It's funny how we seem to have swapped our positions!   :)

Originally you felt that "photon coincidence" (which, by the way, is a term I did not coin!) was not an issue at moderate count rates, and I thought it was. I had thought that the traditional dead time expression only dealt with single photon coincidence, and when I tried the Willis two-term expression and it worked better, my co-authors thought the 2nd term in that expression might be dealing with double photon coincidences, which is the next most common coincidence type. That led eventually to the log expression.

But then Aurelien's Monte Carlo modeling (based on Poisson statistics) showed us that the traditional expression did indeed properly handle both single and multiple photon coincidence.  Yes, the Excel spreadsheet only reveals the results of his calculations and yes, we were stunned by this result.   :o

So we accepted these new results and now attempt to explain the improved performance of the log expression through non-linear behaviors of the pulse processing system. But you claim your Monte Carlo results show a different result!  Isn't this fun?   :)

Thus the real-life live time diminishes in a 1-log-like fashion and can approach 0 s as the count rate nears ∞. Thus this log equation is much closer to how things work.

Of course there are other technical considerations and other sources of pulses being skipped from counting, but those are minor causes. So I can't understand how Aurelien's MC simulation could lead to those conclusions... I am disturbed, as I am convinced that coming up with the log equation was the correct step and that it takes care of pulse/photon coincidences much better than the older broken equation... and this step back of saying that the old equation (surprisingly) takes care of it - I can't understand it in any way. I think your MC missed something very important.

So perhaps we can perform a "code exchange" and try to resolve our differences?  If you are willing to zip up your code and attach it to a post, Aurelien has said he will do the same for his code.

How does that sound?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 13, 2022, 08:49:45 AM
Sounds really good.

My old MC simulation sits all the time in this forum post as the last attachment (it is jupyter python notebook):
https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892 (https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892)

I am, however, cleaning it up a bit and will soon upload a better version of that simplified simulation.
It is very simplified, with a time step of 1 µs, no argon escape peaks, no variations of amplitude, no pulse shape - the whole pulse fits into 1 µs, and if two pulses overlap in the same time step they are treated as a full pile-up, while pulses landing in subsequent time steps are treated as not piled up. I found initially that, to get the model to come up with the observed count rate, I need to add +1 µs to the hardware-set dead time. This old code contains my old, wrong superstitions: I thought this was because of the 700 ns hold time from the sample-and-hold chip datasheet + 0.3 µs, the default additional dead time in Peaksight. I now have a different explanation for why it works: because there is no pulse shape in that model, the additional 1 µs allows it to skip pulses which would be missed by the hardware, since the sample-and-hold output signal needs to drop below 0V to reset the trigger for pulse sensing (it basically senses only the rising edge of a pulse, and only if it rises from 0V - not the most brilliant hardware design, to be honest). So in the case where the sensing is armed and the counting electronics starts to look for a pulse - it "opens its eyes" - and a pulse is already in the middle of rising at that moment, that pulse will be lost, and so about a whole 1 µs can be shaved off. That is an oversimplification, but surprisingly it gives observable results (at least on our SXFiveFE).
 
The strong side of this simplified MC is its low memory footprint and high modeling speed. Some parts I am reusing in the next generation of the MC simulation. (The new generation is much slower because it simulates pulse shapes with 40 ns resolution, thus instead of 1M points it uses 25M points to cover 1 s; this makes it possible to simulate the pulse sensing trigger, PHA shift, count losses, noise, etc., but it is memory hungry and terribly slow.)

It has two steps:
1) modeling a 1 s time frame as 1M 1 µs segments with random pulses (a 1D array)
2) simulating the counting by moving through that array.

The modeling of the signal uses the random number generator of Python's numpy library (an array full of random numbers). Such a generated array is then checked against a criterion (values smaller than n, where n sets how often the random generator should trigger), producing single pulses at random array positions. A number of such arrays are summed efficiently element-wise, finally generating an array with 0 (no pulses), single pulses (1), double pile-ups (2), triples (3) and so on.

The counting is based on iterating through the generated array from its first to its last index (this is not so fast). No pulse sensing is triggered when a 0 is encountered; the array pointer is simply incremented and the next value is checked. When a non-zero value is found, it adds that number to the main counter, arms a small counter for the dead time timeout, and on the following array pointer iterations puts any encountered pulses into a separate blanked-pulse counter.
It keeps pulses from being added to the main counter until the separate small hardware dead time counter times out. The counting simulation consolidates data on how many of which kind of pulses were counted, how many were missed, how many and what kind of pile-ups were encountered, and what raw count rate we would see on the machine. Then, changing the hardware dead time (1 µs, 2 µs, 3 µs, 4 µs...), it correctly predicts the plateau where the count rate nearly stops rising as the current is increased.
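
A compact sketch of those two steps (this is not the actual notebook code, which does much more bookkeeping; the extra +1 µs in the counting step is the rudimentary pulse-length correction described above):

Code:
import numpy as np

rng = np.random.default_rng()
N_BINS = 1_000_000                        # 1 s sliced into 1 us bins

def make_pulse_train(target_counts, n_draws=10):
    # step 1: each draw scatters ~target_counts/n_draws single pulses at
    # random bin indexes; summing the draws builds up 2-, 3-fold pile-ups
    train = np.zeros(N_BINS, dtype=np.int64)
    for _ in range(n_draws):
        train += rng.integers(0, N_BINS, size=N_BINS) < target_counts / n_draws
    return train

def count_pulses(train, enforced_dead_us=3, extra_us=1):
    # step 2: walk the bins; a non-empty bin outside the blind window is
    # counted as one pulse, then the counter is blind for the enforced
    # dead time plus ~1 us of pulse length
    counted, blind_until = 0, -1
    for t, k in enumerate(train):
        if k and t > blind_until:
            counted += 1
            blind_until = t + enforced_dead_us + extra_us
    return counted

train = make_pulse_train(300_000)
print(train.sum(), "pulses generated ->", count_pulses(train), "counted")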
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Aurelien Moy on December 13, 2022, 10:19:56 AM
Lots of great posts here. I really enjoyed reading this topic.

I looked at Sem-geologist’s MC code some time ago. I am not an expert in Python so I may have misunderstood some of it. In the function “detector_raw_model” an array of 1 million values is returned, each value representing the number of photons hitting the detector in a 1 µs interval. However, these values seem to be generated using purely random numbers (a flat distribution), not following a Poisson distribution. Is that correct? I would have expected the number of photons hitting the detector to follow a Poisson distribution and to be a function of the aimed count rate and the timing interval.

Below I will try to present the Monte Carlo code I wrote, as best as I can. I would love any feedback on it or any correction if I made a mistake.

The algorithm I wrote is very basic and does not attempt to model a physical detector. It assumes the detector is either available to detect a photon or is dead. The detector can switch from one of these two states to the other without any time delay. Once the detector is dead, it stays dead for a time equal to the dead time. This dead time is non-extensible, i.e., if a new photon hits the detector while it is dead, this does not extend the dead time. I assumed the emission of photons follows a Poisson distribution. It is generally well accepted that photon emission can be described by such a distribution: the probability that k photons reach the detector in the time interval ∆t (in sec) is given by:

(https://probesoftware.com/smf/gallery/1567_13_12_22_10_10_49.png)

where λ=N×∆t is the average number of photons reaching the detector in the time interval ∆t and N is the emitted (real) count rate in c/s.
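
(In text form, for anyone who cannot load the image above, this is the standard Poisson probability:

P(k) = \frac{\lambda^{k} \, e^{-\lambda}}{k!}, \qquad \lambda = N \, \Delta t )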

The Monte Carlo algorithm works with four parameters: the number of steps simulated, a time interval (∆t) corresponding to the time length of each simulated step, in second, the count rate (N) of emitted photons reaching the detector, in count per second, and the detector dead time (τ), in second.  Here is the logigram (flowchart) of the code:

(https://probesoftware.com/smf/gallery/1567_13_12_22_10_13_17.png)

For each step of the simulation, a time interval ∆t is considered. Based on the count rate N and the time interval ∆t, the program simulates how many photons (k) are reaching the detector in this time interval using the Poisson distribution and random numbers. What I called photon coincidence is when more than 1 photon reached the detector in the time interval ∆t. Obviously, ∆t needs to be small enough compared to the detector dead time. In my simulations, I used ∆t=10 ns.

When at least one photon reaches the detector, the total number of detected photons is increased by one (only one photon at a time is detected) and the total number of emitted photons increases by k. As a result of the detection of a photon, the detector becomes dead for a period corresponding to the deadtime τ. The detector is then staying dead for a number of steps j=τ/∆t. During each of these j next steps, the program simulates how many new photons are reaching the detector. If any, these photons are not detected (because the detector is dead) but they are accounted for by the program in a variable tracking the total number of emitted photons. After j steps have passed, the detector is ready to detect a new photon. The process is repeated until the specified number of steps has been simulated.

Note that for the simulation to give realistic results, the time interval ∆t must be much smaller than the detector dead time τ (ideally 100 to 1000 times smaller), and τ must be a multiple of ∆t (so that j = τ/∆t is an integer).
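
For readers who prefer code to a flowchart, a minimal Python sketch of the same algorithm (the parameters in the example call are only illustrative; my actual implementation is the VBA code described below):

Code:
import numpy as np

rng = np.random.default_rng()

def simulate(n_steps, dt, true_rate, dead_time):
    # Poisson-emission Monte Carlo with a non-extending dead time,
    # following the flowchart above
    lam = true_rate * dt                   # mean photons per time step
    dead_steps = int(round(dead_time / dt))
    emitted = detected = 0
    step = 0
    while step < n_steps:
        k = rng.poisson(lam)               # photons reaching the detector this step
        emitted += k
        step += 1
        if k > 0:
            detected += 1                  # only one photon can be counted
            # photons arriving during the dead_steps are tracked but not
            # detected (equivalent to simulating each dead step in turn)
            emitted += rng.poisson(lam * dead_steps)
            step += dead_steps
    return emitted, detected

emitted, detected = simulate(3_000_000, 10e-9, 200_000, 2e-6)
total_time = 3_000_000 * 10e-9             # 0.03 s simulated
print(emitted / total_time, detected / total_time)   # ~200 kcps in vs ~143 kcps out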

I coded this algorithm in VBA and included it in the attached Excel spreadsheet. For a given number of simulation steps, targeted count rate, detector dead time and simulation time interval (∆t), the first spreadsheet will calculate the number of emitted photons (photons hitting the detector), the number of detected photons as well as the corresponding count rates.

The second spreadsheet will do the same but for targeted count rates of 100, 1000, 10000, 25000, 50000, 100000, 200000 and 500000 cps. The results will also be compared to the traditional dead time correction formula and plotted together. It takes about 8 seconds on my computer to simulate 30,000,000 steps and so about 1 minute to simulate the 8 targeted count rates above (the spreadsheet may seem frozen while calculating).

I was very surprised to see that this Monte Carlo code, which deals with multiple photon coincidence, gives the same results as the traditional dead time correction.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 13, 2022, 12:53:25 PM
I looked at Sem-geologist’s MC code some time ago. I am not an expert in Python so I may have misunderstood some of it. In the function “detector_raw_model” an array of 1 million values is returned, each value representing the number of photons hitting the detector in a 1 µs interval. However, these values seem to be generated using purely random numbers (a flat distribution), not following a Poisson distribution. Is that correct? I would have expected the number of photons hitting the detector to follow a Poisson distribution and to be a function of the aimed count rate and the timing interval.

It seems we come from different backgrounds and so have slightly different approaches, although I think we get to very similar results; we just see these results a bit differently because we zoom in and out differently. Excel does not have the robust interactive plotting that Python has, which makes it easy to inspect small details and biases by plotting.
Poisson or not Poisson - to make it clear, I didn't care. If I were good enough at math to understand all those notations without my head overheating, I would probably never have attempted a Monte Carlo simulation in the first place. What I cared about was that the model would behave similarly enough to what can be seen on an oscilloscope looking at the raw detector signal (after the shaping amplifier) - that is, any photon hit on the detector in a finite time span should be random and independent of the other photons hitting the detector, while the total number of photons should be controllable (with some randomization) without influencing their placement. So I guess the produced distribution of pulses does follow a Poisson distribution. My code probably looks like it uses a flat distribution because flat random distributions are indeed what is used for efficient vectorised computing (I am familiar with how to write highly vectorised Python numpy code). It builds the Poisson-like distribution from many runs of flat random distributions. The initial flat random distribution fills an array of 1M length with random values ranging from 0 to 1M, so if I want more or less 100 counts, I generate another array from it with a vectorised function checking which elements are less than 100 - thus in the end I get about 100 (+/- some random number) events randomly placed at different indexes of the 1M array. Such a single trick can't produce overlaps. So if I want a final distribution of 1000, I can e.g. run it 10 times checking for <100 and sum such arrays. What is interesting is that subdividing the distribution is beneficial only to some extent (it is what produces the nth-order pulse pile-ups). Instead of going through every finite time step one by one and rolling the dice at every step (which, by the way, older hardware is terrible at, as it can produce only pseudo-random numbers), I overuse random number generators in an efficient way and use the randomness of those flat distributions, reshaping them into a Poisson-like distribution. It can thus look a bit convoluted in the code, but it is very efficient at generating millions of random events at random timestamps.
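
A quick sanity check (not my original notebook code) that this flat-random construction does indeed approximate a Poisson bin occupancy:

Code:
import math
import numpy as np

rng = np.random.default_rng()
n_bins, target, n_draws = 1_000_000, 200_000, 10
mu = target / n_bins                                  # mean pulses per 1 us bin

# the flat-random trick: sum of n_draws thresholded uniform arrays
train = np.zeros(n_bins, dtype=np.int64)
for _ in range(n_draws):
    train += rng.integers(0, n_bins, size=n_bins) < target / n_draws

observed = np.bincount(train, minlength=5)[:5] / n_bins
expected = [math.exp(-mu) * mu**k / math.factorial(k) for k in range(5)]
for k in range(5):
    print(k, f"simulated {observed[k]:.5f}", f"Poisson {expected[k]:.5f}")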

The important thing in my MC simulation is that the pulse generation and pulse sensing models are completely separate. I could generate a pulse train and visualize it (with a coarse time interval where the pulse shape does not matter) and compare it with what I had seen on the oscilloscope (and I saw double, triple, quadruple and even quintuple pile-ups). That is also why I chose the 1 µs step for that simplified MC: this is the effective length of the pulses, and a simulation with that step could reproduce the appearance and increase of every kind of pile-up I could observe on the oscilloscope.

It is not so hard to modify my code to go with 10 ns resolution. I will include that in the renewed version of the MC.
Finally, push the simulation to higher count rates (at least to 1M) - it will then show much better how the conventional recalculation equation and model actually derail.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 13, 2022, 03:58:47 PM
I looked at Sem-geologist’s MC code some time ago. I am not an expert in Python so I may have misunderstood some of it. In the function “detector_raw_model” an array of 1 million values is returned, each value representing the number of photons hitting the detector in a 1 µs interval. However, these values seem to be generated using purely random numbers (a flat distribution), not following a Poisson distribution. Is that correct? I would have expected the number of photons hitting the detector to follow a Poisson distribution and to be a function of the aimed count rate and the timing interval.

It seems we come from different backgrounds and so have slightly different approaches, although I think we get to very similar results...

I am not sure about a "similar result".

As I understand it, Aurelien did some kind of MC simulation of his own. However, the attached xlsx does not show the simulation itself, only its results, which are then compared, and I am very skeptical whether that simulation was done right.
...
Of course there are other technical considerations and other sources of pulses being skipped from counting, but those are minor causes. So I can't understand how Aurelien's MC simulation could lead to those conclusions... I am disturbed, as I am convinced that coming up with the log equation was the correct step and that it takes care of pulse/photon coincidences much better than the older broken equation... and this step back of saying that the old equation (surprisingly) takes care of it - I can't understand it in any way. I think your MC missed something very important.

Aurelien's Monte Carlo code found (much to our surprise) that the traditional expression *does* account for multiple photon coincidence, while it is my understanding that you claimed your code found that the traditional expression only accounts for single photon coincidence. Are we misunderstanding you? If not, which conclusion is correct? 

Traditional (2 usec)            Monte Carlo (2 usec)
Predicted       Observed        Predicted       Observed
10              10              10              10
100             100             97              97
1000            998             1008            1008
10000           9804            9978            9782
100000          83333           99965           83347
200000          142857          199897          142787
400000          222222          400318          221948
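
(For reference, the Traditional column above is just the familiar non-extending form - observed = predicted / (1 + predicted × τ) - which a couple of lines of Python reproduce:

Code:
TAU = 2e-6    # 2 usec, as in the table header

for n_true in (10, 100, 1000, 10_000, 100_000, 200_000, 400_000):
    n_obs = n_true / (1 + n_true * TAU)
    print(f"{n_true:>7} -> {n_obs:>7.0f}")
# ... 9804, 83333, 142857, 222222 as in the Traditional "Observed" column above)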


We agree that this "step back" is very surprising.  But we do agree that the log expression is a step forward because it performs better with empirical data. But we think that is because of various non-linear behavior of the pulse processing electronics at moderate to high count rates.

To eliminate issues of graphical display, can you provide numerical values from your Monte Carlo calculations at these predicted and observed count rates?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 14, 2022, 04:41:09 AM
I am not sure about a "similar result".

Hmm... After looking once again, I finally spotted the crucial difference in our simulations. Mine includes a rudimentary additional pulse-length correction (remember, a pulse can be sensed only when rising up from 0V). It is "rudimentary" because it adds a fixed 1 µs to the hardware dead time, which allows my MC simulation to arrive at count rates consistent with physical observations at very high count rates (I am talking about >1M count rates and >800 nA currents).

OK, what am I babbling about here? Maybe some illustrations will clear things up.

It is the pulse sensing which misses pulses and messes up the measurements. The detector, pre-amplifier and shaping amplifier forward all the information to the gain, pulse sensing and PHA electronics (the detector itself basically produces no dead time). Yes, the shaping amplifier tends to convolute the information a bit (albeit with no information loss - it can be deconvoluted with the right signal processing techniques), and that prevents the oversimplified pulse sensing circuit from correctly recognizing every pulse as the pulse density increases.

So, at least on Cameca hardware, the gain electronics invert the signal (pulses are negative). There are different designs for using a sample/hold chip together with a comparator for pulse sensing, and it looks like Cameca used the worst possible one (I see some reason why they did it that way - they kind of shoot two foxes with a single bullet: it does the pulse sensing and avoids detecting noise as a pulse... but when there are lots of "foxes", this shooting blows the hunter's own foot off :) ). There are three elements which work together to sense a pulse: a comparator (which compares two analog signals); a sample-and-hold chip (which has two modes: in the first mode it passes a slightly delayed analog signal, taking the gain-amplified signal as input; the second mode is triggered when the S/H pin is set HIGH - it takes a sample (which takes a short time) and holds and outputs it for as long as the S/H pin is kept HIGH); and the third element in the Cameca design is a D flip-flop. It is the component which glues together the feedback of the comparator, the FPGA or microcontroller, and the sample-and-hold chip. The comparator output goes to the clock input of the D flip-flop. This kind of D flip-flop reacts only to the rising edge of the comparator output (as a clock input, it ignores the falling edge). An additional constraint is that it reacts to such a rising edge only if its state has been cleared - that is, when the flip-flop's CLR pin is triggered and D is set HIGH. The rising clock edge then sets the flip-flop output Q HIGH, which forwards that state to two devices: to the FPGA (where it increases the counter for integral mode counting by one), and a copy of the Q state goes to the sample/hold chip, where it sets and holds the S/H pin HIGH, so that the S/H chip stops tracking the input and starts sampling and holding the amplitude of the pulse. The FPGA then pulls CLR and D LOW, making the flip-flop inactive so it does not react to any further CLK input from the comparator. It does that for the selectable amount of hardware-enforced dead time (which on Cameca is an integer number of µs). The comparator compares the gain-amplified, inverted signal (the main input to this pulse sensing circuit) with the output of the sample-and-hold chip (which takes the gained, inverted signal and inverts it one more time, so its output is upright); the comparator output is set HIGH if the output of the S/H chip is higher than the main input of the inverted, gained signal (some comparators can have a threshold value for the difference needed to switch the output state).

So the illustrations below show these events on a unitless timeline. Each contains two parallel time axes, showing on top the two analog inputs of the comparator, and below the digital (TTL) output of the comparator corresponding to the relation between those two analog inputs. First, let's look at the worst-case scenario:
(https://probesoftware.com/smf/gallery/1607_14_12_22_3_48_22.png)
The dashed blue line here marks how the signal would look if S/H were not pulled HIGH. The TTL logic level depends on the generation of the electronics: in the old days it was 5V (TTL), newer designs moved to (LV)TTL, that is 3.3V.

The crucial thing is that the FPGA will unblock the flip-flop after the enforced hardware dead time times out, but in some cases the comparator is still kept in the HIGH state and thus prolongs the effective blocking of sensing any new pulses. This creates additional hardware dead time, which depends on two things. First, the pulse length on the positive side (at 0V, not at half amplitude) - for Cameca pulses it is about 1 µs, and in the worst-case scenario presented above that is about an additional 1 µs. Second, the probability of such a pulse appearing in such a position, overlapping the moment the enforced dead time is lifted - and that depends on the count rate.

The best-case scenario is that there is no other pulse too close to the moment the enforced dead time is lifted, so the comparator can go LOW with only a very small delay after the hardware-enforced dead time ends. I emphasize "small delay" - it will not be zero, and it depends on two factors: 1) the amplitude that was being held (the higher the held amplitude, the longer the way down to 0 V and the bigger the delay); 2) how fast the S/H output can recover to track the input after the S/H pin is lowered. So indeed this delay depends on GAIN and HV BIAS, but because it is small, and its effects can statistically interfere only over some range of count rates, no straightforward linkage was ever found.

The best case scenario:
(https://probesoftware.com/smf/gallery/1607_14_12_22_3_49_16.png)

And then at medium count rates there will be lots of intermediate situations:
(https://probesoftware.com/smf/gallery/1607_14_12_22_3_49_47.png)
Here, as we see, the pulse which prolongs the dead time arrives during the enforced dead time, but its long pulse duration extends past the end of that enforced time and prolongs the dead time afterwards.

As can be seen in the pictures above, this additional dead time depends on the pulse density and the pulse length. Were the S/H chip, comparator and flip-flop connected in a better way, the time span of the pulse would have no influence on the result.

So my initial model adds +1 µs to the dead time and arrives at the observed count rate at very high count rates (>1 Mcps); it probably (I need to check) underestimates the observable counts at mid-range. I am rewriting that part to address that.

edit: Just to correct my claim about 1 µs - the pulse length measured at 0 V is actually not 1 µs but 1.2 µs.
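
To make the idea testable, here is a minimal toy Monte Carlo sketch (illustrative only, not the actual model code): it assumes Poisson arrivals, a fixed hardware-enforced dead time, and a fixed 1.2 µs pulse length that keeps the comparator HIGH past the end of the enforced window when a coincident pulse arrives late within it. The real board behaviour is more complicated.

Code: [Select]
import numpy as np

rng = np.random.default_rng(0)

def simulate(true_rate, tau_he=3e-6, pulse_len=1.2e-6, t_total=1.0):
    # Poisson arrivals over t_total seconds of simulated time
    n_events = rng.poisson(true_rate * t_total)
    arrivals = np.sort(rng.uniform(0.0, t_total, n_events))
    counted = 0
    blocked_until = -1.0
    for t in arrivals:
        if t >= blocked_until:
            counted += 1
            blocked_until = t + tau_he          # hardware-enforced dead time
        elif t + pulse_len > blocked_until:
            blocked_until = t + pulse_len       # pulse tail keeps the comparator HIGH
    return counted / t_total

for rate in (1e4, 1e5, 5e5, 1e6, 2e6):
    print(f"{rate:>9.0f} cps true -> {simulate(rate):>9.0f} cps counted")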
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 14, 2022, 08:12:21 AM
Hmm... After looking once again I finally spotted the crucial difference in our simulations. Mine includes a rudimentary version of the additional pulse-length correction (remember, a pulse can be sensed only while rising up from 0 V). It is "rudimentary" because it adds a fixed 1 µs to the hardware time, which lets my MC simulation arrive at count rates similar to the physical observations at high count rates (I am talking about >1 M count rates and >800 nA currents).

OK, what am I babbling about here? Maybe some illustrations will clear things up.

It is the pulse sensing which misses pulses and messes up the measurements. The detector, pre-amplifier and shaping amplifier forward all the information to the gain, pulse-sensing and PHA electronics (basically the detector itself produces no dead time). Yes, the shaping amplifier convolutes the information a bit (although there is no information loss, and it can be deconvoluted with the right signal-processing techniques), and that prevents the oversimplified pulse-sensing circuit from correctly recognizing every pulse as the pulse density increases.

OK, I think I understand. Are you saying that the non-rectilinear shape of the pulses can affect the "effective" (measured) dead time of the system at higher count rates as the pulses begin to overlap?  That is exactly what I was asking about back here!

https://probesoftware.com/smf/index.php?topic=1466.msg11391#msg11391

and here:

https://probesoftware.com/smf/index.php?topic=1466.msg11394#msg11394

If this is correct, then perhaps we can conclude that *if* the pulse processing electronics produced perfectly rectilinear pulses, the traditional dead time expression would correct perfectly for single and multiple photon coincidence (as Aurelien has shown). But if the pulse shapes are not perfectly rectilinear, it is possible that the "effective" dead time at higher count rates could behave in a non-linear fashion as these pulses start overlapping (as you seem to have shown)?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 14, 2022, 09:54:28 AM

OK, I think I understand. Are you saying that the non-rectilinear shape of the pulses can affect the "effective" (measured) dead time of the system at higher count rates as the pulses begin to overlap?  That is exactly what I was asking about back here!

https://probesoftware.com/smf/index.php?topic=1466.msg11391#msg11391

and here:

https://probesoftware.com/smf/index.php?topic=1466.msg11394#msg11394

If this is correct, then perhaps we can conclude that *if* the pulse processing electronics produced perfectly rectilinear pulses, the traditional dead time expression would correct perfectly for single and multiple photon coincidence (as Aurelien has shown). But if the pulse shapes are not perfectly rectilinear, it is possible that the "effective" dead time at higher count rates could behave in a non-linear fashion as these pulses start overlapping (as you seem to have shown)?

No. I thought I had already answered that rectilinearity of pulses is not there (it is like wishing that apples were square - from a transport-and-logistics point of view square apples would make more sense, less space would be wasted, but Nature had a slightly different idea about how things should work...) and that it does not solve anything. I am starting to feel a bit worn out arguing about this rectilinearity stuff again. Maybe being a non-native English speaker stands like a huge wall preventing me from communicating my knowledge properly and efficiently with words and sentences... So I retreat to drawing pictures. Maybe that way I can show how pointless this rectilinearity is. Here are those three figures remade with rectilinear-like shapes, presented in the same sequence.
Worst case scenario:
(https://probesoftware.com/smf/gallery/1607_14_12_22_9_10_01.png)

Best case scenario:
(https://probesoftware.com/smf/gallery/1607_14_12_22_9_10_27.png)

mid-case scenario:
(https://probesoftware.com/smf/gallery/1607_14_12_22_9_11_03.png)

So does rectilinearity solve anything? The outcome (the comparator output) looks absolutely identical, so it does not change anything.

So my message is that in some pulse-sensing designs (i.e. Cameca's) the pulse length (duration) is also crucial for a correct dead time correction, and that is included in my MC in an exaggerated way (the worst-case scenario is applied to every pulse, adding +1 µs to the hardware-enforced dead time in every case). That is wrong too (yes, my MC is also wrong :) haha), as it overestimates the dead time in the mid count rate range (10 kcps - 500 kcps, I guess), but it allows a more correct prediction of the ceiling of the raw count rate which the Cameca probe can spit out. Aurelien's model supposes that a pulse lasts only 10 ns and that the pulse has no consequences afterwards. That is unrealistic on many levels: the raw Townsend avalanche on its own takes about 200 ns, the shaping amplifier stretches its duration to roughly 2 µs or more, and the following differentiation shortens it down to about 1.2 µs. Aurelien's MC arrives at the same values as the equation because both of them ignore the pulse duration and how it affects the counting.
Try extending Aurelien's MC to a 2 Mcps input count rate and you will see that it produces physically unachievable values.

My message is that dead time is not a detector problem but a pulse counting (and sensing) problem, and both the correction equation and the MC model need to know exactly how the counting hardware works and how it can fail to sense pulses. Were the counting design different, the equation could be different. For example, I know a different type of pulse sensing with a comparator and an S/H chip (the one I naively and wishfully assumed some time ago that Cameca uses). Such a design would have no dynamic addition of dead time that depends on the count rate itself. The key difference is that both comparator inputs are upright (same polarity), and it uses one of the best features of the S/H chip - the signal delay:
(https://probesoftware.com/smf/gallery/1607_14_12_22_9_46_51.png)

Such a design has its weak points as well, but its "additional dead time" would be much less dynamic than in the current Cameca pulse-sensing design - and in that way the equation and Aurelien's MC would actually be more applicable to the real hardware. Additionally, a PHA shift would not make pulses undetectable, as they would still be sensed. The integral mode of such a design would be the real, true integral mode.

And then there is the alternative of an FPGA-based real-time deconvolution design - which would render this whole effort pointless...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 14, 2022, 10:05:04 AM

OK, I think I understand. Are you saying that the non-rectilinear shape of the pulses can affect the "effective" (measured) dead time of the system at higher count rates as the pulses begin to overlap?  That is exactly what I was asking about back here!

https://probesoftware.com/smf/index.php?topic=1466.msg11391#msg11391

and here:

https://probesoftware.com/smf/index.php?topic=1466.msg11394#msg11394

If this is correct, then perhaps we can conclude that *if* the pulse processing electronics produced perfectly rectilinear pulses, the traditional dead time expression would correct perfectly for single and multiple photon coincidence (as Aurelien has shown). But if the pulse shapes are not perfectly rectilinear, it is possible that the "effective" dead time at higher count rates could behave in a non-linear fashion as these pulses start overlapping (as you seem to have shown)?

No. I thought I had already answered that rectilinearity of pulses is not there (it is like wishing that apples were square - from a transport-and-logistics point of view that would make more sense, less space would be wasted, but Nature had a slightly different idea about how things should work) and that it does not solve anything. I am starting to feel a bit worn out arguing about this rectilinearity stuff again. Maybe being a non-native English speaker stands like a huge wall preventing me from communicating my knowledge properly and efficiently with words and sentences...

Yes, I think we have a language problem because you are missing the point I am trying to make.  Let me try again: We are not claiming that the pulses in our detection systems are rectilinear.

Instead, we are saying that *if* the pulses were rectilinear *or* if they are spaced far enough apart in time that it doesn't matter what shape they are, then that might be the reason why, at low count rates, the traditional expression appears to properly correct for single *and* multiple photon coincidence in these systems. That is what Aurelien's Monte Carlo modeling appears to support, since he does assume rectilinear pulse shapes and when he does, the response of the system is completely linear.

But at high count rates where the pulses start to overlap with each other, as I think you correctly point out, the actual non-rectilinear shape of these pulses appears to cause a non-linear response of the system, by creating an "effective" dead time with a larger value, as you showed in your previous post.  Hence the reason the logarithmic expression seems to perform better at these high count rates, when these non-rectilinear pulses begin to overlap in time.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 14, 2022, 10:28:19 AM
It's the pulse duration, not its shape, that matters. Look at the figures again - rectilinear pulses (of the same duration) would give exactly the same outcome for this kind of detection system (Cameca pulse sensing). It is not so important what the pulse shape is, but how the pulse is sensed. Aurelien's MC and the classical equation could be partly applicable to the alternative system proposed in my last post (which is actually the well-known, classical pulse sensing, not some unnecessary invention of Cameca, and I suspect JEOL probably uses that classical type, which is why it is less affected at higher count rates). In the way Cameca senses the pulses it introduces an additional dynamic dead time. It is not the rectangular shape in Aurelien's MC which is the problem. Actually, as far as I understood from Aurelien's reply and from looking at his VBA code, there is no shape modeling at all; the rectangularity of the pulses is a moot point. The problem is the unrealistically short pulse duration of only 10 ns or even less, and supposing that pulses hitting the detector while it is "dead" have absolutely no consequences afterwards. The problem is treating the event as having the same length as the MC time resolution.

As for the equation, it does a really poor job with coincidences, as it overestimates the count rate at very low rates and underestimates it at high rates. Because of these opposing errors it more or less fits the observations at mid-rates, as the contradictory behaviors compensate each other, and thus it is accepted. Your log function is much better, as it overestimates the count rate less at low rates and underestimates it less at high rates.
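
For reference, the "equation" discussed here is the canonical (traditional) dead time expression of the form i_raw/(1 - tau*i_raw). A minimal sketch of both directions of it (the tau value is illustrative only):

Code: [Select]
tau = 3e-6  # example total dead time in seconds (illustrative only)

def observed_from_true(n_true, tau):
    # forward model: what the counter reports for a given true rate
    return n_true / (1.0 + tau * n_true)

def true_from_observed(n_obs, tau):
    # the correction applied in software: N_true = N_obs / (1 - tau * N_obs)
    return n_obs / (1.0 - tau * n_obs)

for n_true in (1e4, 1e5, 5e5):
    n_obs = observed_from_true(n_true, tau)
    print(n_true, round(n_obs), round(true_from_observed(n_obs, tau)))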
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 14, 2022, 10:37:01 AM
It's the pulse duration, not its shape, that matters. Look at the figures again - rectilinear pulses (of the same duration) would give exactly the same outcome for this kind of detection system (Cameca pulse sensing). It is not so important what the pulse shape is, but how the pulse is sensed. Aurelien's MC and the classical equation could be partly applicable to the alternative system proposed in my last post (which is actually the well-known, classical pulse sensing, not some unnecessary invention of Cameca, and I suspect JEOL probably uses that classical type, which is why it is less affected at higher count rates). In the way Cameca senses the pulses it introduces an additional dynamic dead time.

So you are saying that the reason the logarithmic expression performs better at high count rates is that the way the Cameca system senses the pulses introduces additional dead time?  So why do you think the JEOL system also performs better using the logarithmic expression?

It is not the rectangular shape in Aurelien's MC which is the problem. Actually, as far as I understood from Aurelien's reply and from looking at his VBA code, there is no shape modeling at all. The problem is the unrealistically short pulse duration of only 10 ns or even less, and supposing that pulses hitting the detector while it is "dead" have absolutely no consequences afterwards.

I think the reason he's using such a short input pulse is that we all agreed the output from the detector itself is very short, and that the primary formation of dead time is a result of the pulse processing system, whether due to pulse shape or, as you say, pulse sensing.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 14, 2022, 11:20:39 AM
I think the reason he's using such a short input pulse is that we all agreed the output from the detector itself is very short, and that the primary formation of dead time is a result of the pulse processing system, whether due to pulse shape or, as you say, pulse sensing.

Pulse shape is not pulse sensing. The pulse shape is produced by the shaping amplifier - an analog device that works continuously, without any hiccup or interruption. Again, typical measured Townsend events on proportional counters have a duration of about 200 ns - neither 5 ns nor 3 ns, but roughly 200 ns. That is why Cameca chose to use the A203 IC with a fixed shaping time constant of 250 ns, as that is the optimal shaping time for proportional-counter signals (it fully integrates all the collected charge, so there is no problem of incomplete charge collection). That 250 ns is a 1-sigma value, so the generated pulse duration is about 2 µs near its bottom (not at FWHM) - actually, due to the asymmetric shape it is more like 4 µs - and after differentiation it is shortened to about 1 µs (measuring only the positive side of the bipolar pulse). That is far from 1 or 10 ns.
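
As a rough numerical illustration of why a 250 ns shaping constant gives a µs-scale pulse (a crude Gaussian stand-in for the shaped pulse, not the actual A203 response):

Code: [Select]
import numpy as np

dt = 1e-9                          # 1 ns steps
t = np.arange(0.0, 6e-6, dt)       # 6 µs window
sigma = 250e-9                     # shaping time constant (1-sigma)

# crude stand-in for the shaped (unipolar) pulse
unipolar = np.exp(-0.5 * ((t - 1.5e-6) / sigma) ** 2)

# differentiation produces a bipolar pulse (positive lobe then negative lobe)
bipolar = np.gradient(unipolar, dt)
bipolar /= bipolar.max()

width_near_zero = (unipolar > 1e-3).sum() * dt
pos_lobe = (bipolar > 1e-3).sum() * dt
print("unipolar width near 0 V:", round(width_near_zero * 1e6, 2), "µs")
print("positive lobe of bipolar pulse:", round(pos_lobe * 1e6, 2), "µs")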

"Sensing" is making digital sense of train of such pulses (it happens on different board dedicated for that) - recognizing and sending the digital impulse(s) that there was a pulse spotted in the continues analog signal incoming from the detector (From the shapping amplifier to be more precise), so that digital counter of pulses could be increased by 1, and also measurement of pulse amplitude be triggered. This sensing interrupts itself and thus the dead time is generated here.

I also got the feeling (while reading Aurelien's VBA code) that Aurelien's MC treats the dead time as if it were a property of the detector.
So I want to point out some crucial facts.
The gas flow proportional counter (sealed as well) itself has no dead time (in contrast to G-M counters) because:
1) it has a quenching gas in the mixture (which prevents the avalanche from spreading uncontrollably through a loop of secondary ionisation and UV propagation);
2) the Townsend avalanche happens in a limited space and over a very limited fraction of the wire (it does not occupy the whole wire, whereas on a G-M tube it spreads over the whole wire);
3) the voltage drop on the wire is insignificant compared to the bias voltage (thus we need a charge-sensitive pre-amplifier to sense it at all; on a G-M tube the voltage drop can be so big that it lowers the bias enough to interrupt/terminate the ongoing avalanche on its own).

Then, looking further along the pipeline, the charge-sensitive preamplifier and the shaping amplifier are analog devices working continuously. The gain amplifiers are also analog devices (setting the gain is digital, but the amplification itself is analog and continuous, without interruptions). The interruptions appear only in the pulse-sensing circuits, because this is where the analog signal has to cross into the digital domain, and the digital domain is not so continuous.

I can only guess how JEOL does things, as I have no hardware access to those machines. As I said, the classical pulse sensing (comparing the incoming and delayed signals of the same polarity) also has its weaknesses, which would show up with increasing count rate (it would have race conditions where the comparator sends a rising edge to the D-flip-flop before the flip-flop has been reset) - however, such conditions would be rare with a shorter set enforced dead time.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Aurelien Moy on December 14, 2022, 11:23:06 AM
I have been playing with Sem-geologist's Python code and it indeed leads to the same results as my MC code when using the same parameters: same aimed count rate, same total dead time and same time interval (1µs in the Python code).

I also replaced the function generating the random pulses, “detector_raw_model”, with the NumPy built-in Poisson distribution (just to see how they compared). I did that by replacing it with:
Code: [Select]
count_rate = 500000 #aimed count rate in c/s
delta_t = 1e-6 #time interval, here 1 µs.
ts = np.random.poisson(count_rate * delta_t, 1000000)

Below are the results I obtained between my MC code, Sem-geologist's MC code using the original detector_raw_model function and also using the NumPy Poisson distribution. I also plotted results obtained with the original dead time correction formula.

(https://probesoftware.com/smf/gallery/1567_14_12_22_11_13_36.png)

As we can see, the MC codes give similar results. However, they seem different from the original dead time formula. This is because the time interval Sem-geologist chose, 1 µs, is too large compared to the total dead time of 3 µs (2 µs + 1 µs), and so it does not correctly take into account how many photons will be missed (not counted) by the detector.

To confirm this, I have modified the Python code to have a time interval of 0.1 µs. This was done by increasing dt ten times in the dead_time_model function, just after dt = software_set_deadtime + 1:

Code: [Select]
dt = dt * 10
and by increasing the size of the time_space array from 1 million to 10 million, as well as the values used to generate the batch array:

Code: [Select]
    time_space = np.zeros(10000000, dtype=np.int64)  # 10M µs for 10MHz
    for i in range(aimed_pulse_rate // incremental_count_size):
        batch = np.where(np.random.randint(0, 10000000, 10000000) < incremental_count_size, 1, 0)
        time_space += batch
    return time_space

Here are the results I obtained with this smaller simulation time interval:

(https://probesoftware.com/smf/gallery/1567_14_12_22_11_13_51.png)

Note that I also changed the simulation time interval in my code to be 0.1 µs.

We can see that both codes are producing similar results and they are much closer to what the traditional dead time expression produces.

The only visible discrepancy is for a count rate of 500,000 c/s. This is because taking a simulation time interval of 0.1 µs is still not small enough compared to the 3 µs total dead time that we have considered. If I change the simulation time interval to 0.01 µs, we get:

(https://probesoftware.com/smf/gallery/1567_14_12_22_11_14_05.png)

I only did the calculations for a count rate of 500,000 c/s, lower count rates were not visibly different from the previous plot. Now, we can see that all the methods agree with each other.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 15, 2022, 11:18:54 AM
Thanks Aurelien,

I am now absolutely convinced that I suck at math :D. I was previously unable to work out how to use numpy's poisson function for random generation, and so I was "reinventing the wheel". Generating random events directly with the Poisson function is a whole order of magnitude faster, simpler and less error-prone. This will speed up a more advanced MC that applies randomized amplitudes and a common pulse shape, and simulates the detection with the other, lesser count-hiding effects.

That said, I am still scratching my head and can't understand why there is such a huge difference between coarse- and fine-grained time steps (1 µs vs 10 or 1 ns), because the real counting system can't physically respond to anything above 500 kHz, and the pulses being counted have a 1.2 µs duration... How exactly does decreasing the step down to 10 ns help the simulation? Maybe it does not bring the simulation results more in line with what we observe on the machine, but closer to the theoretical calculation, which is why it fits the equation so well.

I am staying home from work today, and I see I forgot to take the data on the raw achievable count rate limits at 1300 nA for Cr on LPET (should be about 2.8 Mcps) and at lesser ultra-high count rates. The initial simulation, when changing the enforced dead time constant, could reach exactly the observed achievable limits at such very high current (~326 kcps at 1 µs enforced dead time, 276 kcps at 2 µs, 199 kcps at 3 µs... 109 kcps at 7 µs). However, at a 500 kcps input count rate it clearly and largely underestimated the detected count rates observed on the machine. I could also anecdotally observe a diminishing count rate when increasing the gain at very high count rates, which has an explanation in my previous post on how GAIN and BIAS can influence the additional dead time (the sample/hold chip has a limited voltage slew rate, so the higher the dominant pulse amplitude, the larger the voltage difference down to 0 V and the longer it takes to drop).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 15, 2022, 02:16:20 PM
I am now absolutely convinced that I suck at math :D. I was previously unable to work out how to use numpy's poisson function for random generation, and so I was "reinventing the wheel". Generating random events directly with the Poisson function is a whole order of magnitude faster, simpler and less error-prone. This will speed up a more advanced MC that applies randomized amplitudes and a common pulse shape, and simulates the detection with the other, lesser count-hiding effects.

Aurelien is amazing, isn't he?

First some definitions:
Incident photon: a single (initial) photon event which creates a nominal dead time period.
Coincident photon: one or more photons that arrive within the dead time interval of a previous incident (initial) photon.

So would you say that the cause of these increasingly non-linear effects at moderate to high count rates is due to the manner in which the pulse processing electronics responds to photon coincidence at high count rates?

Specifically when a photon arrival is coincident within the incident (initial) photon's dead time interval, does it somehow cause an "extending" of the nominal dead time interval from the incident photon? 

Furthermore, is it possible that this "extension" of the dead time at these higher count rates could depend on the number of photons coincident with the incident (initial) photon?  In other words, could two (or more) coincident photons create a longer "extension" of the dead time interval than a single coincident photon?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on December 19, 2022, 09:37:20 AM
Aurelien is amazing, isn't he?

Yes I agree, I think he is :)

First some definitions:
Incident photon: a single (initial) photon event which creates a nominal dead time period.
Coincident photon: one or more photons that arrive within the dead time interval of a previous incident (initial) photon.

So would you say that the cause of these increasingly non-linear effects at moderate to high count rates is due to the manner in which the pulse processing electronics responds to photon coincidence at high count rates?

Yes. And it depends very much on how the pulse recognition/sensing is implemented. Cameca has quite a weird signal sensing/recognition implementation. The problem is that it does not change linearly and predictably with the count rate. Also, because the pulses are bipolar, depending on how they overlap at a given moment the overlap can be destructive or constructive - that is, the addition can reduce the amplitude or increase it. It is not a simple relation where we get more dead time if we increase or decrease the hardware-enforced dead time and/or the count rate. It is quite over-complicated and I am still trying to figure out most of the corner cases and details.
Not all datasheets contain sufficient information, and the brief properties available in datasheets allow only a rough model to be constructed.

Specifically when a photon arrival is coincident within the incident (initial) photon's dead time interval, does it somehow cause an "extending" of the nominal dead time interval from the incident photon?


It can cause an extension if they interfere constructively and the pulse duration (not counting the after-pulse of reversed polarity) overlaps with the end of the enforced dead time (the integer time enforced by the FPGA - let's call it tau_he, for hardware-enforced). If there is no coincident photon, the enforced dead time is normally prolonged by an additional dead time caused by the S/H amplifier being switched back to sampling mode and trying to catch up with the input signal (the signal coming from the gain amplifier). That catching-up can take an additional ~300 ns (which is why the default additional dead time in Cameca's PeakSight is set to 0.3 µs - I finally see the reason behind that default number). Let's call this additional dead time tau_shr (sample-and-hold restore). That additional dead time will vary with the PHA distribution: 300 ns applies only when the PHA distribution is centered in the middle. If the distribution is pushed strongly toward 5 V (by increasing bias and gain), tau_shr rises to about 600 ns (at low count rates), but as the PHA shifts to the left (at higher count rates) it goes back down to 300 ns or less. However, tau_shr can also be shortened by using the bipolar pulse's negative after-pulse, e.g. by setting tau_he to 1 µs - the comparator will then be reset faster than in 300 ns, because the bipolar pulse coming from the gain amplifier is inverted, so the after-pulse (which in that inverted polarity is a positive value) has an amplitude much closer to the value held by the S/H amplifier, and there is less voltage difference to cut through to switch the comparator to LOW, which effectively prepares it to sense the next pulse. That creates a kind of paradox. It also convinces me that for integral mode a 1 µs enforced time is even better than 2 µs.

When the incident photon's pulse is overlapped by a coincident photon's pulse, then - thanks to the bipolar pulse nature - the dead time does not extend in perpetuity (as in the well-known extending design used in EDS), but is extended by at most about 1.2 µs (let's call this tau_ppu, pulse pile-up) in the worst-case scenario. Nevertheless, if it hits at the right moment it can also shorten tau_shr - so it is a very complicated matter with no definitive answer yet.
Furthermore, is it possible that this "extension" of the dead time at these higher count rates could depend on the number of photons coincident with the incident (initial) photon?  In other words, could two (or more) coincident photons create a longer "extension" of the dead time interval than a single coincident photon?
It is not only the number but also the position, as it could extend the dead time by a bit more than one pulse width (if it interferes constructively) or shorten it (if it interferes destructively). tau_he cannot be shortened in any way by coincident-photon-induced pulses, but tau_shr can. So the dead time per single event can range from
Code: [Select]
tau = tau_he + tau_shr
to
Code: [Select]
tau = tau_he + tau_ppu
tau_ppu competes with tau_shr, so whichever is longer overshadows the other process.
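
Put as a tiny sketch (just restating the relation above, with example numbers from this thread):

Code: [Select]
def effective_dead_time(tau_he, tau_shr, tau_ppu):
    # per-event dead time: enforced part plus whichever of the
    # S/H-restore delay or the pile-up extension is longer
    return tau_he + max(tau_shr, tau_ppu)

print(effective_dead_time(3.0, 0.3, 0.0))  # no coincident pulse near the window end -> 3.3 µs
print(effective_dead_time(3.0, 0.3, 1.2))  # worst-case pile-up at the window end    -> 4.2 µs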

The worst thing is that the inner workings of this sample/hold amplifier are a bit secretive - I find the documentation not very informative. I will probably end up buying one chip to experiment with (it is quite expensive).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 26, 2022, 12:22:54 PM
I've had a few off-line conversations recently regarding the subject of this topic and what seems to help people the most is to consider this simple question:

"Given a primary standard and a secondary standard, should the k-ratio of these two materials (ideally) be constant (within counting statistics) over a range of beam currents?"

Think about it and if your answer is yes, then you have already grasped most of what is necessary to understand the fundamentals of this topic.

Next, combine that understanding of the constant k-ratio with the selection of two materials with significantly different concentrations (i.e., count rates), and you will understand precisely how we arrived not only at this new method for calibrating our dead time "constants", but also at improved dead time expressions that account for the observable non-linear behavior of our pulse processing electronics at moderate to high count rates.
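
To illustrate (with made-up numbers, not actual measurements), here is a minimal sketch of why the k-ratio stays flat when the dead time constant is correct and drifts with beam current when it is not, using the simple traditional expression just for the illustration:

Code: [Select]
tau_true = 3e-6                     # "real" dead time of the system (s), made up
cps_per_na_std = 2500.0             # primary standard (e.g. Si metal), made up
cps_per_na_unk = 600.0              # secondary standard (e.g. orthoclase), made up

def observed(true_rate, tau):       # forward dead time model
    return true_rate / (1.0 + tau * true_rate)

def corrected(obs_rate, tau):       # traditional software correction
    return obs_rate / (1.0 - tau * obs_rate)

for tau_used in (3e-6, 2e-6):       # correct vs. mis-calibrated dead time constant
    print("tau used =", tau_used)
    for i_beam in (10, 50, 100, 200):   # beam current in nA
        obs_std = observed(cps_per_na_std * i_beam, tau_true)
        obs_unk = observed(cps_per_na_unk * i_beam, tau_true)
        k = corrected(obs_unk, tau_used) / corrected(obs_std, tau_used)
        print("  %4d nA   k-ratio = %.4f" % (i_beam, k))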
Title: Constant k-ratio method for dead time calibration
Post by: Dan R on January 05, 2023, 10:16:49 AM
...But I quickly want to point out that this dead time calibration method is now considered obsolete as it is only good to around 50 kcps. Instead you should be utilizing the much discussed "constant k-ratio" method (combined with the new logarithmic dead time expression), which allows one to perform quantitative analyses at count rates up to 300 or 400 kcps. More information on the "constant k-ratio" can be found in this topic and you might want to start with this post here, though the entire topic is worth a read:

https://probesoftware.com/smf/index.php?topic=1466.msg11416#msg11416

In fact, if you have a fairly recent version of Probe for EPMA, the step by step directions for this "constant k-ratio" method are provided as a pdf from the Help menu...

Thanks probeman, I will try the constant k-ratio method.

@John Donovan -- i did see the following error when i attempted the procedure in the constant k-ratio.pdf

I am using PFE v 13.1.8 and my probewin.ini file is changed to:
[software]
DeadtimeCorrectionType=4      ; normal deadtime forumula=1, high-precision dt formula=2

but i get an error saying that this new value is out of range:
InitINISoftware
DeadtimeCorrectionType keyword value out of range in ...


FYI it works as it should if i set the value to 3 (i.e., logarithmic deadtime correction)

Title: Re: Constant k-ratio method for dead time calibration
Post by: John Donovan on January 05, 2023, 11:16:01 AM
@John Donovan -- i did see the following error when i attempted the procedure in the constant k-ratio.pdf

I am using PFE v 13.1.8 and my probewin.ini file is changed to:
[software]
DeadtimeCorrectionType=4      ; normal deadtime forumula=1, high-precision dt formula=2

but i get an error saying that this new value is out of range:
InitINISoftware
DeadtimeCorrectionType keyword value out of range in ...

FYI it works as it should if i set the value to 3 (i.e., logarithmic deadtime correction)

It's because your Probe for EPMA version is way out of date!  Version 13.1.8 is all the way back from July this summer!   :)

Actually a value of 3 in the probewin.ini file is the six term expression (the options in the Analysis Options dialog are zero indexed, so 4 is the logarithmic expression).

[software]
DeadtimeCorrectionType=3   ; 1 = normal, 2 = high precision deadtime correction, 3 = super high precision, 4 = log expression (Moy)

The pdf instructions are correct, but you need to update PFE using the Help menu to v. 13.2.3.

In practice the six term and the logarithmic expressions are almost identical. The logarithmic expression is just very elegant!  See here:

https://probesoftware.com/smf/index.php?topic=1466.msg11041#msg11041
Title: Re: Constant k-ratio method for dead time calibration
Post by: Dan R on January 06, 2023, 08:23:01 AM
Alright, I performed the newer constant k-ratio deadtime calculations as discussed using Si Ka on TAP/PETJ for all 5 spectrometers on Orthoclase | Si metal.
Here is how my values (µsec) compare:

                      Initial DT (2013)   2023 (Carpenter, full-fit)   2023 (Constant k-ratio)
Sp1 (TAP, FC)               1.72                    1.85                      1.5185
Sp2 (TAP, FC)               1.49                    1.25                      1.1650
Sp3 (PETJ, Xe)              1.70                    1.43                      1.2000
Sp4 (PETJ, Xe)              1.65                    1.42                      1.4000
Sp5 (TAP, FC)               1.68                    1.70                      1.4450

Now for some of my flow counters, I got some odd behavior, shown in the attached image. Any idea what's going on here? This is as good as I could get it...
(http://Example_constant_kratio.PNG)
Title: Re: Constant k-ratio method for dead time calibration
Post by: Probeman on January 06, 2023, 08:57:59 AM
Now for some of my flow counters, I got some odd behavior, shown in the attached image. Any idea what's going on here? This is as good as I could get it...
(http://Example_constant_kratio.PNG)

Hi Dan,
The cool thing is that this sort of sensitivity is only possible with the constant k-ratio method.  If you were looking at the slope of a diagonal plot with the traditional dead time calibration method, you would never even notice this tiny effect!

It merely means that you are slightly over correcting at moderate count rates and slightly under correcting at higher count rates. With a strong line on a large area crystal one can easily obtain count rates well over 500 kcps. I think Anette was able to get around 800 kcps at 200 nA on Si metal on her TAPL crystal. 

Even with the logarithmic dead time expression, at count rates over 300 or 400 kcps the pulse processing electronics just goes even further non-linear and the log expression isn't able to handle things above these count rates. SEM Geologist mentioned this in the other topic, so we tried an exponential expression that Brian Joy had found in the literature, but it is limited to count rates around 200 kcps because of the math (basically the product of cps and tau must be less than 1/e).  You can try it in PFE as option #5, but we don't think it's going to be useful here so just stick with the logarithmic expression.

So I would simply decrease your dead time constants slightly so that at least the k-ratios at count rates up to 300 or 400 kcps have zero slope. Above that count rate things will get out of hand, but it is still much better than the traditional dead time expression, which is limited to 50 kcps or so.
Title: Re: Constant k-ratio method for dead time calibration
Post by: Dan R on January 06, 2023, 09:12:37 AM
Thanks probeman. Now, in a few of the other topics (and Probe Software provides the option), people show crystal-specific dead times. Is this necessary, since even 4-crystal spectrometers all use the same counter in a given spectrometer? Or is it really a function of how far you want to go down the rabbit hole?

I guess if you need the accuracy for a specific type of measurement it may be useful? But my probe analyzes multiple different material systems under different conditions, therefore I guess the general option makes more sense, and for accurate work I should just stick with lower count rates? Is that the general best practice?
Title: Re: Constant k-ratio method for dead time calibration
Post by: Probeman on January 06, 2023, 10:03:48 AM
Thanks probeman. Now, in a few of the other topics (and Probe Software provides the option), people show crystal-specific dead times. Is this necessary, since even 4-crystal spectrometers all use the same counter in a given spectrometer? Or is it really a function of how far you want to go down the rabbit hole?

That's a great question. The reason for the ability to specify different dead time constants for each crystal on each spectrometer in the SCALERS.DAT file was to allow for the possibility that emission lines of significantly different energies might show consistently different dead time constants. Paul Carpenter had suggested this as a possibility many years ago, so that ability was added by Probe Software way back then.

Basically, if the crystal dead time values on lines 72 to 77 are non-zero, they will override the spectrometer dead time values entered on line 13, individually for each crystal.  We need to perform more careful measurements of these effects and see if emission energy, gain or bias have a consistent effect on the observed dead time constants.  This topic discusses these possible trend issues in detail:

https://probesoftware.com/smf/index.php?topic=1475.0

I guess if you need the accuracy for a specific type of measurement it may be useful? But my probe analyzes multiple different material systems under different conditions, therefore I guess the general option makes more sense, and for accurate work I should just stick with lower count rates? Is that the general best practice?

Personally I am only using the spectrometer-specific values as defined on line 13 of my SCALERS.DAT file, as I have not seen compelling evidence that emission energy affects the dead time constants. The difficulty is that the measurement of dead time is very sensitive to the proper adjustment of the PHA gain/bias. Specifically, making sure that the PHA peak (including the escape peak if present) is fully above the baseline value at the highest expected count rates (usually on the primary standard at the highest expected beam current) is essential.  Again, at lower count rates the PHA peak will shift to the right and sometimes appear to be "cut off" by the right side of the plot, but in INTEGRAL mode all these counts are still integrated in the measurements.

If you are willing it would also be interesting to share with us a plot of constant k-ratios for all your spectrometers to see how consistent they are. Here is a plot from my lab:

https://probesoftware.com/smf/index.php?topic=1466.msg11352#msg11352

Note how spec3 (LPET) starts going extremely non-linear above 500 kcps (the 1 atm and 2 atm labels in the plot are switched I just noticed!). This is actually a 2 atm flow detector. Spec2 (LPET) starts going crazy at around the same count rate and that is a 1 atm flow detector.

The good news is that all spectrometers give quite similar k-ratios at count rates below 400 kcps. And even more impressive is that this is with dead time correction percents (relative) of up to 200%!
Title: Re: Constant k-ratio method for dead time calibration
Post by: Dan R on January 06, 2023, 12:48:20 PM
Quote from: Probeman link=topic=1160.msg11521#msg11521

If you are willing it would also be interesting to share with us a plot of constant k-ratios for all your spectrometers to see how consistent they are. Here is a plot from my lab:

https://probesoftware.com/smf/index.php?topic=1466.msg11352#msg11352


Sure, here are my results -- I may redo Sp3 and Sp4 using Ti Ka to see if I can up the count rates, but at least my deadtimes are better than they were before. I guess k-ratio discrepancies could be related to sample tilt or another geometric effect.
(http://Spectro_constant_kratios.PNG)
Title: Re: Constant k-ratio method for dead time calibration
Post by: Probeman on January 06, 2023, 01:34:56 PM
Quote from: Probeman link=topic=1160.msg11521#msg11521

If you are willing it would also be interesting to share with us a plot of constant k-ratios for all your spectrometers to see how consistent they are. Here is a plot from my lab:

https://probesoftware.com/smf/index.php?topic=1466.msg11352#msg11352


Sure, here are my results -- I may redo Sp3 and Sp4 using Ti Ka to see if I can up the count rates, but at least my deadtimes are better than they were before. I guess k-ratio discrepancies could be related to sample tilt or another geometric effect.
(http://Spectro_constant_kratios.PNG)

Hi Dan,
If possible please output these results in a single plot (all 5 spectrometers) and then post as an "in line" image in the constant k-ratio topic here:

https://probesoftware.com/smf/index.php?topic=1466.0

using the forum image gallery as described here:

https://probesoftware.com/smf/index.php?topic=2.msg4040#msg4040

It's a more appropriate topic for continuing this discussion of these "simultaneous k-ratios". Once you have done that I will have some things to say that might be helpful.
Title: Re: Constant k-ratio method for dead time calibration
Post by: Dan R on January 06, 2023, 02:12:24 PM
Sure, let's try this:
(https://probesoftware.com/smf/gallery/270_06_01_23_2_09_40.png)

As I stated before:
Quote from: Probeman link=topic=1160.msg11521#msg11521

If you are willing it would also be interesting to share with us a plot of constant k-ratios for all your spectrometers to see how consistent they are. Here is a plot from my lab:

https://probesoftware.com/smf/index.php?topic=1466.msg11352#msg11352


Sure, here are my results -- I may redo Sp3 and Sp4 using Ti Ka to see if I can up the count rates, but at least my deadtimes are better than they were before. I guess k-ratio discrepancies could be related to sample tilt or another geometric effect.
(http://Spectro_constant_kratios.PNG)

Any feedback is appreciated!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Dan R on January 06, 2023, 02:35:44 PM
So i'm quoting myself from another topic, but probeman thought it would be appropriate here. These are results from a ~10 year old JEOL 8530F measuring SiKa on TAP/FC and PETJ/Xe spectrometers. Any feedback would be appreciated -- are the low K-ratios from Sp3 and Sp4 compared to the TAP/FC a result of aging Xe detectors?
Thanks,
Dan

Sure, let's try this:
(https://probesoftware.com/smf/gallery/270_06_01_23_2_09_40.png)

As I stated before:
Quote from: Probeman link=topic=1160.msg11521#msg11521

If you are willing it would also be interesting to share with us a plot of constant k-ratios for all your spectrometers to see how consistent they are. Here is a plot from my lab:

https://probesoftware.com/smf/index.php?topic=1466.msg11352#msg11352


Sure, here are my results -- I may redo Sp3 and Sp4 using Ti Ka to see if I can up the count rates, but at least my deadtimes are better than they were before. I guess k-ratio discrepancies could be related to sample tilt or another geometric effect.
(http://Spectro_constant_kratios.PNG)

Any feedback is appreciated!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on January 06, 2023, 03:20:16 PM
So i'm quoting myself from another topic, but probeman thought it would be appropriate here. These are results from a ~10 year old JEOL 8530F measuring SiKa on TAP/FC and PETJ/Xe spectrometers. Any feedback would be appreciated -- are the low K-ratios from Sp3 and Sp4 compared to the TAP/FC a result of aging Xe detectors?

(https://probesoftware.com/smf/gallery/270_06_01_23_2_09_40.png)

Interesting. These are all quite low count rates, so no large area TAP crystals?

So just to confirm, all 5 spectrometers measured k-ratios of orthoclase/Si metal at the same beam energy?  And you measured the primary and secondary standards at the same beam current, for each beam current, following the instructions in the constant k-ratio document? And you turned off the standard drift correction? And you adjusted the dead time constants using the logarithmic expression?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Dan R on January 06, 2023, 03:24:24 PM
Correct. I have not moved any slits etc. All counts were for 60 sec on peak, 10 sec off. PHA parameters were set so that I could get good results at high and low count rates. No obvious charging that I could detect.

All done according to the pdf. I was surprised at the count rates also: Si metal counts were ~2500 cps/nA on TAP Sp1, orthoclase was ~600 cps/nA. All measurements at 15 kV.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on January 06, 2023, 04:49:12 PM
Correct. I have not moved any slits etc. All counts were for 60 sec on peak, 10 sec off. PHA parameters were set so that I could get good results at high and low count rates. No obvious charging that I could detect.

All done according to the pdf. I was surprised at the count rates also: Si metal counts were ~2500 cps/nA on TAP Sp1, orthoclase was ~600 cps/nA. All measurements at 15 kV.

Hi Dan,
OK, thanks.

Remember, you only need to check the PHA settings at the *highest* (expected) count rate in INTEGRAL mode.  Also, when you calculate your "predicted" count rates, you should use the count rates observed at the lowest beam currents (in cps/nA) for the primary standard and just multiply by the beam currents.  After all, it's the high count rates on the primary standard (Si metal in this case) that are stressing the dead time correction model in the k-ratios.
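
For example (using the ~2500 cps/nA figure you quoted above for Si metal, purely as an illustration), the predicted count rates on the primary standard would simply be:

Code: [Select]
cps_per_na = 2500.0                   # Si metal on TAP, observed at a low beam current
for i_beam in (10, 40, 100, 200):     # nA
    predicted = cps_per_na * i_beam   # predicted true count rate before dead time losses
    print(i_beam, "nA ->", predicted / 1000.0, "kcps predicted on the primary standard")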

I know this goes against everything we were taught back in the day, but if the PHA peak is fully above the baseline level at the highest (expected) count rate, then at lower count rates the PHA peak will merely shift to the right and still be counted in INTEGRAL mode as described here:

https://probesoftware.com/smf/index.php?topic=1466.msg11450#msg11450

Also be sure your beam is defocused to 5 or 10 um or so.  I assume that none of your crystals are TAPL crystals?  That would explain the lower count rates.  But really it's not the count rates that bother me, and your k-ratios look fairly constant, though I don't understand why they are going non-linear at these relatively low count rates.

But what really bothers me is that your spectrometers are giving such different k-ratios.  That is, we don't really care what the absolute values of the k-ratios are, after all we are measuring Si Ka in Orthoclase relative to Si metal, and of course there is a significant emission (energy) peak shift between the two materials, especially for Si Ka.

But regardless of what that k-ratio value is observed to be (at the lowest count rates, where the dead time correction is insignificant), we should still be seeing that *same* k-ratio on all our spectrometers.  Your plot shows k-ratios from ~0.17 to ~0.25 for Si Ka in these two materials, and that is a significant difference, as you have noted on your plot.  Though if you utilized a mix of TAP and PET crystals that could explain some of the differences.  Can you plot the PET and TAP constant k-ratios in separate plots (maybe that is what you did to begin with)?   :-[

Yes, it could also be sample (or stage) tilt. Scott Boroughs raised that question to me here:

https://probesoftware.com/smf/index.php?topic=1466.msg11329#msg11329

as he saw similar (though smaller) variations in his simultaneous constant k-ratios, which he felt might also be due to sample (or stage) tilt, in that it appeared systematic with respect to the spectrometer orientations around the instrument. And I responded in the next post that this is something which, if appropriately characterized, we could compensate for in the absorption correction by changing the "effective" takeoff angle for each spectrometer.

As you can see in my response, Probe Software had modified the underlying physics code in CalcZAF to handle a different takeoff angle for each spectrometer and utilize that in the absorption correction.  I haven't heard back from Scott on whether a change in the effective takeoff angle due to sample (or stage) tilt would compensate for this.

More disturbing is the possibility that the variation in these simultaneous constant k-ratios could be the result of differences due to asymmetrical Bragg diffraction. In this case, each crystal could demonstrate a different effective takeoff angle.  The problem is knowing which spectrometer has the correct effective takeoff angle!  And for that determination we would need to worry about differences in the emission peak position, differences in the carbon coating, native oxide thickness, etc, etc. All the usual suspects we deal with in any quantitative analysis...

PS you might want to try Ti Ka on TiO2 and Ti metal, as the emission peak shifts will be significantly smaller for these materials, though you'll still have the native oxide layer issue with Ti metal.  But again, the absolute values of the k-ratios do not matter, only that they remain constant as a function of count rate!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Dan R on January 06, 2023, 05:16:16 PM
Thanks for the input! I'm about to have a PM so I will do this again once that's over and assuming the spectros pass the testing.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on January 06, 2023, 05:24:19 PM
Thanks for the input! I'm about to have a PM so I will do this again once that's over and assuming the spectros pass the testing.

Cool. 

Just a thought: you might want to present this data to your engineer and ask them why the spectrometers are yielding such different k-ratios...

Of course if these two plots are actually showing the difference due to PET vs. TAP, that larger difference (0.2x vs. 0.1x) in the k-ratios is probably explainable simply by the Si Ka emission line shift from metal to oxide with these different spectral resolution Bragg crystals:

(https://probesoftware.com/smf/gallery/395_06_01_23_5_53_05.png)

But because the PET crystal k-ratios should agree among themselves and the TAP k-ratios should agree among themselves, and in fact they differ noticeably even within each crystal type, there still appears to be a problem with the different spectrometers. To avoid this PET/TAP spectral resolution peak shift issue it would be better to re-run the constant k-ratios using orthoclase and SiO2 as the primary standard instead of Si metal when you get a chance (or Ti Ka using SrTiO3 and TiO2, though of course that would be for PET and LiF crystals).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on January 10, 2023, 08:20:19 AM
We have a saying in Poland, "the deeper into the forest, the more trees" (BTW, I am not a Pole). Anyway, the more I look into the Cameca counting electronics, the more confused I get. I have access to new-generation (2014) and old-generation (SX100, 1998) counting boards. The working principles look very similar, and I thought that the gain amplification inverts the polarity of the signal, but after the latest inspection I think I was in error (so those comparator input plots in my previous posts are wrong!). I also previously overlooked a very important diode between the gain amplification and the counting part - it means the counting part sees only the positive part of each pulse and ignores the negative "after-pulse" - so my previous plots of "what the comparator would see" are partially wrong, and there is essentially no possibility of decreasing the additional dead time (which is actually a very good finding from the point of view of constructing a mathematical model). My speculation that at high count rates the dead time could in some cases also be shortened is wrong. Thus the additional dead time should increase progressively with increasing count rate.

The worst part of investigating these boards is that they are very timing-sensitive with the main Motorola VME CPU board: if the WDS board is mounted on an extension board, the firmware will not boot it, so it can't be probed with an oscilloscope in its natural working conditions (other boards, especially the older generation, could be troubleshot like that). I think I am giving up the ultimate goal of understanding the system in fine detail and will instead focus my efforts on finishing the arbitrary waveform generator to measure the dead time of such a counting system as a whole, and then move on to an FPGA design of a new pulse counting system based on real-time deconvolution (having a working pulse generator that can mimic the detector output will let me experiment without taking up EPMA time).

At least, by recognizing those diodes between the amplification and sensing parts, I got a final and definitive answer to the question Why is the PHA minimum 500mV...? (https://probesoftware.com/smf/index.php?topic=1240). It also definitively answers my hypothesis about a possible integral mode for pulses below 560 mV - it is not possible with the current hardware. And finally it is clear why the count rate stops increasing when it hits some high value - pulses whose top is below 560 mV are blocked after gain amplification and never even reach the pulse-sensing and counting part. I think the dependence of the baseline shift on the count rate can be defined mathematically, and thus the fraction of pulses shifted out (below the baseline) should be calculable. Increasing the gain can effectively increase the pulse amplitudes. It is not only Ar escape pulses that are affected, but normal pulses too, if they happen to be produced just after a few positively interfering coincident pulses (so that the after-pulse is deeper than from a single event - deep enough to hide the whole normal pulse from this primitive detection system). This is demonstrated in this annotated oscilloscope snapshot:
(https://probesoftware.com/smf/gallery/1607_17_08_22_2_08_01.bmp)
And this is where I was wrong some time ago. Previously, I was sure that pulse no. 3 in that picture would be counted in integral mode if the dead time were set to 1 µs - but it would not reach the pulse-sensing electronics because of the 560 mV diode threshold. By increasing the gain (in this case enormously, probably to the maximum available value of 4095), maybe this 3rd pulse could be pulled up over the diode threshold and detected. However, had this 3rd pulse been an Ar escape pulse, or randomly of slightly lower amplitude, it would have absolutely no chance of passing the diode between the amplification and sensing parts, even with maximum gain. The gain amplification expands the amplitude symmetrically around 0 V. I will just repeat: decreasing/increasing the gain does not actually shift the PHA left or right (the notation which is unfortunately often used on these forums), but simply expands the amplitude around 0 V (everything positive gets more (or less) positive, everything negative gets more (or less) negative, and zero stays zero).

It is still not satisfactory for explaining the squiggle (I think that is a more general problem with the general approach of i_raw/(1-tau*i_raw)), but I think this could explain why the log equation fits better than the canonical equation.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on January 10, 2023, 09:29:23 AM
...it means the counting part sees only the positive part of each pulse and ignores the negative "after-pulse" - so my previous plots of "what the comparator would see" are partially wrong, and there is essentially no possibility of decreasing the additional dead time (which is actually a very good finding from the point of view of constructing a mathematical model). My speculation that at high count rates the dead time could in some cases also be shortened is wrong. Thus the additional dead time should increase progressively with increasing count rate.

I think that this makes sense. At count rates below ~50 kcps we have a fairly linear response and the traditional expression seems to work well. Above ~50 kcps and up to about 300 to 400 kcps, the non-linearity of the pulse processing system becomes very evident and the logarithmic expression seems to handle it well. But above 300 to 400 kcps I suspect the system response becomes even more non-linear, and this could indeed be due to an "extending" of the dead time (an increased dead time constant) beyond what we calibrated at count rates below 300 to 400 kcps.
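
For readers wondering why the two expressions only diverge at high count rates, here is a minimal sketch comparing the traditional (linear) correction with a logarithmic form. The logarithmic expression written here, N = N'/(1 + ln(1 - tau*N')), is my reading of the expression discussed in this topic; please check it against the published paper before relying on it.

import numpy as np

tau = 1.3e-6                                   # dead time constant (s)
raw = np.array([10e3, 50e3, 150e3, 300e3])     # observed (raw) count rates (cps)

linear = raw / (1.0 - tau * raw)                       # traditional expression
logarithmic = raw / (1.0 + np.log(1.0 - tau * raw))    # assumed logarithmic form

for r, lin, log_ in zip(raw, linear, logarithmic):
    print(f"{r/1e3:6.0f} kcps raw -> linear {lin/1e3:7.1f} kcps, "
          f"log {log_/1e3:7.1f} kcps ({100.0*(log_ - lin)/lin:+.1f} %)")

At 10 and 50 kcps the two corrections agree to a fraction of a percent, while at 300 kcps they differ by roughly 20 percent, which is consistent with the regimes described above.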

I will just repeat: decreasing/increasing the gain does not actually shift the PHA left or right (the notation which is unfortunately often used in these forums), but just expands the amplitude around 0 V (everything positive gets more (or less) positive, everything negative gets more (or less) negative, and zero stays zero).

It is still not satisfactory for explaining the squiggle (I think that is a more general problem with the general approach of i_raw/(1 - tau*i_raw)), but I think this could explain why the log equation fits better than the canonical equation.

Understood. But since we cannot see what is getting shifted negative in our PHA distribution plots, I think it makes sense to keep describing the gain shifting as a shift to the right.

As for these subtle "squiggles" I think it is important for other readers of this topic to keep in mind that these artifacts are quite small (~1% or less) and would be quite unobservable without the sensitivity of the constant k-ratio method.

SG: have you had a chance to run some constant k-ratio measurements of your own? I think it would be most excellent if you could share what you find on your system with us. Maybe start with k-ratio measurements of SrTiO3 vs. TiO2 for Ti Ka (LiF and PET), and SiO2 vs., say, a robust silicate for Si Ka (PET and TAP).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on February 07, 2023, 12:01:48 PM
I thought I would follow up with some high speed mapping we did recently on some olivines provided to me by Peng Jiang, now at the University of Hawaii.

In an effort to demonstrate the utility of the new non-linear (logarithmic) dead time correction, we mapped an olivine grain for Si, Fe, Mg, Ca and Mn using a 200 nA beam, while the standards were acquired, as usual, at 30 nA. All 5 elements (3 major and 2 minor) were acquired simultaneously using a 500 msec pixel dwell time and the MAN background correction, to avoid having to also acquire off-peak background maps.

Here are the quantitative results using the traditional (linear) dead time first:

(https://probesoftware.com/smf/gallery/1_07_02_23_11_36_02.jpeg)

Note that the totals map shows olivine totals around 97 wt%.  Next we turn on the logarithmic dead time correction and re-run the pixel quantification and we now obtain this map:

(https://probesoftware.com/smf/gallery/1_07_02_23_11_41_58.jpeg)

Note that the totals map has improved to over 99 wt%. Next we take a look at the detection limits map that was calculated at the same time:

(https://probesoftware.com/smf/gallery/1_07_02_23_11_42_34.jpeg)

Note that the *single* pixel detection limits are around 400 PPM for Ca and 800 PPM for Mn. Of course we expect some improvement in sensitivity when using the MAN correction because there is essentially no variance associated with the MAN background intensities (Donovan et al., 2016), but for a 400 msec integration time that is not too bad.

And remember when averaging pixels we can further improve these detection limits by a factor of Sqrt(2) for each doubling of the pixels averaged...
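
To put rough numbers on that averaging statement, here is a trivial sketch of the arithmetic; the 400 PPM single-pixel figure for Ca is taken from the map above, and everything else is just 1/sqrt(n) counting statistics:

import math

single_pixel_ppm = 400.0    # approximate single-pixel detection limit for Ca Ka from the map above
for n in (1, 2, 4, 16, 64):
    print(f"{n:3d} pixels averaged: ~{single_pixel_ppm / math.sqrt(n):5.0f} ppm")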

It is also interesting to note the subtle variation in the detection limit maps, which I think reflects a small amount of drift in the standard/background intensities acquired before and after the x-ray map acquisition as seen here:

Selected Samples...
Un    7  Jiang olivine random points at 15.00 keV

Assigned average standard intensities for sample Un    7  Jiang olivine random points

Drift array background intensities (cps/1nA) for standards:
ELMXRY:    mg ka   ca ka   mn ka   si ka   fe ka
MOTCRY:  1   TAP 2  LPET 3  LLIF 4   TAP 5   LIF
INTEGR:        0       0       0       0       0
STDASS:       12     306      25      14     395
STDVIR:        0       0       0       0       0
             .64    1.17    1.21    1.30     .43
             .65    1.16    1.19    1.33     .44
             .64    1.16    1.22    1.27     .43
 
Drift array standard intensities (cps/1nA) (background corrected):
ELMXRY:    mg ka   ca ka   mn ka   si ka   fe ka
MOTCRY:  1   TAP 2  LPET 3  LLIF 4   TAP 5   LIF
STDASS:       12     306      25      14     395
STDVIR:        0       0       0       0       0
          628.90  202.25  516.71  778.58  139.57
          621.29  201.76  517.08  776.49  139.44
          629.98  203.40  518.29  788.12  140.13

The point of course being that with the new logarithmic dead time expression in Probe for EPMA, we can now perform high accuracy and high sensitivity quantitative point analyses and high speed (high beam current/low dwell time) X-ray mapping at the same time, for major, minor and trace elements.

As mentioned in prior posts, those of you that already have Probe for EPMA should first use the Help | Update Probe for EPMA menu to get the latest version of Probe for EPMA and then also refer to the Help | How To Use Constant K-ratios To Calibrate Your Instrument menu pdf document to perform high sensitivity dead time calibrations to allow you to take advantage of this new dead time expression. This will allow for quantitative analyses up to several hundred nA of beam current even with large area Bragg crystals.

I hope to provide some additional examples of high speed quantitative mapping using the logarithmic dead time expression, but if you have any examples of your own that you'd like to share, please feel free to post them here.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on February 10, 2023, 10:23:59 AM
In case anyone is curious about the PHA tuning adjustments for the above high speed maps I'll remind everyone that when utilizing a large range of count rates we should keep in mind the pulse height depression which occurs at high count rates.

This pulse height depression effect causes the PHA peak to shift towards lower voltages at higher count rates, thus increasing the possibility of some counts being cut off by the baseline level at these higher count rates. The solution is to always tune one's PHA settings at the highest expected count rate (highest beam current on the material with the highest expected concentration, usually one's primary standard).

In the above high speed mapping example we utilized SiO2 as the primary standard for Si Ka, so normally we would tune our PHA settings on that material at the highest expected beam current. However, since we intend to acquire our olivine unknowns at 200 nA and our primary standards at only 30 nA, and since the concentration of Si in SiO2 is about 50% while the concentration of Si in olivine is about 20% (in round numbers), we can compare these concentrations and beam currents: our olivines have 2.5 times less Si than our primary standard, but will be measured at 6.6 times the beam current. So we should probably tune our Si Ka PHA on the olivine unknown at 200 nA, as that gives the highest expected count rate (of course the exact count rate also depends on the absorption correction differences between these materials, but we are just speaking in round numbers here).
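
Here is that round-numbers comparison written out as a trivial sketch (first order only, i.e., count rate taken as proportional to concentration times beam current, ignoring the matrix effects mentioned above):

# Approximate concentrations and beam currents quoted above (round numbers).
si_conc_std, current_std = 0.50, 30.0     # Si in SiO2 primary standard, nA
si_conc_unk, current_unk = 0.20, 200.0    # Si in olivine unknown, nA

ratio = (si_conc_unk * current_unk) / (si_conc_std * current_std)
print(f"Olivine at 200 nA gives roughly {ratio:.1f}x the Si Ka count rate of SiO2 at 30 nA,")
print("so the PHA should be tuned on the olivine at 200 nA.")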

But to make things more interesting I decided to tune the PHA settings on the primary standards at 200 nA. Here is Si Ka on SiO2 at 200 nA:

(https://probesoftware.com/smf/gallery/1_10_02_23_9_55_18.png)

Note that the gain was adjusted to place the PHA peak fully above the baseline level even at this quite high count rate (~160 kcps). And remember, although the PHA peak appears to be slightly cut off at the right side of the plot, that is merely an artifact of the PHA display system. All counts to the right of the plot axis are still fully counted because we are in INTEGRAL mode.

Next here is the Si Ka PHA scan again on SiO2 at the same PHA settings but using a beam current of 30 nA:

(https://probesoftware.com/smf/gallery/1_10_02_23_9_55_40.png)

We can see that the PHA peak has shifted even further to the right, but again we don't care, as all pulses to the right of the plot will still be counted in INTEGRAL mode.

Remember, on Cameca instruments we will be adjusting the gain to place the PHA peak above the baseline at the highest expected count rate, while on JEOL instruments we will be adjusting the bias to place the PHA peak above the baseline level at the highest expected count rate.
Title: Re: Constant k-ratio method for dead time calibration
Post by: Ben Buse on April 04, 2023, 03:14:22 AM
Hi all, since my instrument (JEOL 8530F) is ~10 years old and I had some extra time to play with new features, I decided to use Startwin to automate dead time measurements.

Using Paul's xls sheet, two dead times are given: one fitting all of the data (DT us All) and one fitting only the higher beam currents (DT Last). Do you have a rule of thumb as to which I should input into the scalars.dat file?

I updated my scalars.dat file at line ~77, so that should override the parameters specified in Line 13, correct?

Here are my values (micro-sec) for comparison over time:
Spect   Initial DT (Kremser, 2013)   2023 values (full-fit, Si Ka on TAP/PETJ)
Sp1     1.72                         1.85
Sp2     1.49                         1.25
Sp3     1.70                         1.43
Sp4     1.65                         1.42
Sp5     1.68                         1.70

Do these changes make sense?

The values don't seem too unreasonable, though some are maybe a little high; then again, the instrument is pretty old. Have you ever had any detectors replaced? The Xenon detectors can age particularly fast and should be replaced about every 5 years or so.

But I quickly want to point out that this dead time calibration method is now considered obsolete, as it is only good to around 50 kcps. Instead you should be utilizing the much discussed "constant k-ratio" method (combined with the new logarithmic dead time expression), which allows one to perform quantitative analyses at count rates up to 300 or 400 kcps. More information on the "constant k-ratio" method can be found in this topic; you might want to start with the post linked below, though the entire topic is worth a read:

https://probesoftware.com/smf/index.php?topic=1466.msg11416#msg11416

In fact, if you have a fairly recent version of Probe for EPMA, the step by step directions for this "constant k-ratio" method are provided as a pdf from the Help menu, but I've also attached the pdf to this post as well.

Please be sure to ask if you have any questions at all.

Hi John,

Just tried your constant k-ratio dead time measurement as described in the pdf. It's really nice being able to modify the dead time and see the effect on the raw k-ratio x-y scatter plot. I was conservative and stopped short of the point where the PHA peak width became enormous. On the XY plot, are the raw counts in counts per second?

Thanks

Title: Re: Constant k-ratio method for dead time calibration
Post by: John Donovan on April 04, 2023, 07:59:32 AM
Hi John,

Just tried your constant k-ratio dead time measurement as described in the pdf. It's really nice being able to modify the dead time and see the effect on the raw k-ratio x-y scatter plot. I was conservative and stopped short of the point where the PHA peak width became enormous. On the XY plot, are the raw counts in counts per second?

Thanks

Well, it depends on what you choose for your plot axes. I usually plot the k-ratio on the Y axis and the beam current on the X axis. But you can also plot raw counts on the X axis, as shown in this post here:

https://probesoftware.com/smf/index.php?topic=1466.msg11248;topicseen#msg11248

And yes, those would be in raw cps (uncorrected for dead time). But remember, the raw counts in the Output Standard and Unknown XY Plots menu will be for the samples selected, so those will be the unknown or secondary standard raw counts (not the primary standard). Which is why I re-plotted the data in Grapher after exporting, so that I could have the secondary standard k-ratios on the Y axis and the primary standard raw counts on the X axis.

The constant k-ratio is a pretty cool method, isn't it? You should share some k-ratio data with us... How high did you go in count rate on your primary standard?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on April 19, 2023, 01:11:51 PM
This post here:

https://probesoftware.com/smf/index.php?topic=340.msg11795#msg11795

has absolutely nothing to do with the constant k-ratio method for dead time and picoammeter calibration, but it does nicely illustrate why plotting deviations (or k-ratios!) on a horizontal axis improves one's ability to see small artifacts in the data...
Title: Re: Constant k-ratio method for dead time calibration
Post by: Probeman on April 19, 2023, 01:14:22 PM
...I was conservative and stopped short of the point where the PHA peak width became enormous.

Remember, as long as you are in integral PHA mode and your PHA peak (and escape peak, if present) are above the baseline level, it doesn't matter how wide your PHA peak is!

All the photons will get counted in integral mode.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 17, 2023, 09:26:51 AM
We are pleased to announce the publication of our new paper on improving dead time corrections in WDS EPMA:

John J Donovan and others, A New Method for Dead Time Calibration and a New Expression for Correction of WDS Intensities for Microanalysis, Microscopy and Microanalysis, 2023

https://academic.oup.com/mam/advance-article-abstract/doi/10.1093/micmic/ozad050/7165464
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on May 21, 2023, 12:13:02 AM
Congratulations on your paper! I really hope the community will widely recognize this outstanding problem and adopt the solution as fast as possible. I believe inadequate dead time corrections could be behind a lot of historically created biases and could be a main source of discrepancies in fundamental measurements (like MAC reconstruction with changing acceleration voltage, or matrix-matched standard requirements...). The only downside (for me) is that only those with Probe Software have this new dead time correction method available...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 21, 2023, 10:33:24 AM
Congratulations on your paper! I really hope the community will widely recognize this outstanding problem and adopt the solution as fast as possible. I believe inadequate dead time corrections could be behind a lot of historically created biases and could be a main source of discrepancies in fundamental measurements (like MAC reconstruction with changing acceleration voltage, or matrix-matched standard requirements...). The only downside (for me) is that only those with Probe Software have this new dead time correction method available...

Thank-you SG.

Also, your insight and discussion in this topic were much appreciated by all of us during the writing of the paper. As you know, we thanked you (and Ed Vicenzi) in the acknowledgements.

Yes, currently only Probe Software's Probe for EPMA (for quant points) and CalcImage (for quant maps) have the new logarithmic dead time expression, but anyone with any software can still perform the constant k-ratio measurements described in the paper and check their dead time and picoammeter calibrations. I think that is the most important aspect of the paper.

As for the new expression, now that it is published it can be implemented by Cameca and JEOL if they decide to; it would be very easy!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 26, 2023, 06:31:36 AM
Maybe this question belongs more in the History of EPMA topic,

https://probesoftware.com/smf/index.php?topic=924.0

but since this topic discusses the constant k-ratio method for determining spectrometer calibrations, maybe it works here too:

So why is a k-ratio called a k-ratio?

Is it because in the beginning, Castaing was taking the ratio of two K emission lines?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on December 22, 2023, 10:05:59 AM
In case anyone is inspired to run some constant k-ratio calibrations over the holiday, to determine their dead time constants and picoammeter linearity:

https://probesoftware.com/smf/index.php?topic=1466.msg11173#msg11173

I thought I would provide another example of how the PHA settings should be tuned when attempting to acquire k-ratios from say 10 nA to 200 nA. Remember, always use INTEGRAL mode and adjust your PHAs on the highest concentration of the element, at the highest beam current.  In this case, Ti Ka on Ti metal at 200 nA:

(https://probesoftware.com/smf/gallery/395_22_12_23_9_55_38.png)

That is, at the highest count rates observed (highest concentration at the highest beam current) in the above PHA scans, the Ti escape peaks are fully above the baseline levels, while the main PHA peaks are off to the right of the display but are still fully counted in INTEGRAL mode.

At lower count rates (e.g., lower beam currents and/or lower concentrations, for example TiO2), your PHA peaks will shift to the right, but because you are in INTEGRAL mode, all photons will still be counted!

Happy holidays!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probing on March 11, 2024, 03:02:33 PM
We are pleased to announce the publication of our new paper on improving dead time corrections in WDS EPMA:

John J Donovan and others, A New Method for Dead Time Calibration and a New Expression for Correction of WDS Intensities for Microanalysis, Microscopy and Microanalysis, 2023

https://academic.oup.com/mam/advance-article-abstract/doi/10.1093/micmic/ozad050/7165464

How do you get the "best" dead time constant, the one which produces the "best" zero-slope "k-ratio vs. current" line in your constant k-ratio method? I don't have the Probe Software, so can I still perform the dead time calibration by applying the constant k-ratio method?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on March 11, 2024, 03:12:32 PM
We are pleased to announce the publication of our new paper on improving dead time corrections in WDS EPMA:

John J Donovan and others, A New Method for Dead Time Calibration and a New Expression for Correction of WDS Intensities for Microanalysis, Microscopy and Microanalysis, 2023

https://academic.oup.com/mam/advance-article-abstract/doi/10.1093/micmic/ozad050/7165464

How do you get the "best" dead time constant, the one which produces the "best" zero-slope "k-ratio vs. current" line in your constant k-ratio method? I don't have the Probe Software, so can I still perform the dead time calibration by applying the constant k-ratio method?

Yes, you can perform the dead time calibration using the constant k-ratio method for general use as described here:

https://probesoftware.com/smf/index.php?topic=1466.msg11102#msg11102

But if you do not have the Probe for EPMA software, you will not be able to take advantage of the new non-linear expressions described in our paper for count rates above 30 to 50 kcps. In other words, you could perform the dead time calibration using the constant k-ratio method with the traditional linear expression, but you probably won't see a huge difference in the dead time value you obtain below 30 to 50 kcps. Above those count rates, the traditional linear expression fails, as shown in the post linked below.

That said, I think the constant k-ratio method is much easier to use and more precise for dead time calibrations whatever dead time expression is being used...

See here also:

https://probesoftware.com/smf/index.php?topic=1466.msg11173#msg11173
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probing on March 11, 2024, 07:53:19 PM
We are pleased to announce the publication of our new paper on improving dead time corrections in WDS EPMA:

John J Donovan and others, A New Method for Dead Time Calibration and a New Expression for Correction of WDS Intensities for Microanalysis, Microscopy and Microanalysis, 2023

https://academic.oup.com/mam/advance-article-abstract/doi/10.1093/micmic/ozad050/7165464

How do you get the "best" dead time constant, the one which produces the "best" zero-slope "k-ratio vs. current" line in your constant k-ratio method? I don't have the Probe Software, so can I still perform the dead time calibration by applying the constant k-ratio method?

Yes, you can perform the dead time calibration using the constant k-ratio method for general use as described here:

https://probesoftware.com/smf/index.php?topic=1466.msg11102#msg11102

But if you do not have the Probe for EPMA software, you will not be able to take advantage of the new non-linear expressions described in our paper for count rates above 30 to 50 kcps. In other words, you could perform the dead time calibration using the constant k-ratio method with the traditional linear expression, but you probably won't see a huge difference in the dead time value you obtain below 30 to 50 kcps. Above those count rates, the traditional linear expression fails, as shown in the post linked below.

That said, I think the constant k-ratio method is much easier to use and more precise for dead time calibrations whatever dead time expression is being used...

See here also:

https://probesoftware.com/smf/index.php?topic=1466.msg11173#msg11173

I would like to know how the ideal dead time constant is produced. In the "TiO2/Ti" example shown in your post, when you found the DT constant (1.32 us) was too high, you dropped it to 1.28 us and got a "flatter" line. So my question is: why 1.28, and not 1.27 or 1.29? How is the exact number produced, by some algorithm or by repeated trial and iteration? If the latter, how do you determine that the observed "k-ratio vs. current" line, which depends on the exact DT constant, is the best one?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on March 11, 2024, 09:47:38 PM
We are pleased to announce the publication of our new paper on improving dead time corrections in WDS EPMA:

John J Donovan and others, A New Method for Dead Time Calibration and a New Expression for Correction of WDS Intensities for Microanalysis, Microscopy and Microanalysis, 2023

https://academic.oup.com/mam/advance-article-abstract/doi/10.1093/micmic/ozad050/7165464

How do you get the "best" dead time constant, the one which produces the "best" zero-slope "k-ratio vs. current" line in your constant k-ratio method? I don't have the Probe Software, so can I still perform the dead time calibration by applying the constant k-ratio method?

Yes, you can perform the dead time calibration using the constant k-ratio method for general use as described here:

https://probesoftware.com/smf/index.php?topic=1466.msg11102#msg11102

But if you do not have the Probe for EPMA software, you will not be able to take advantage of the new non-linear expressions described in our paper for count rates above 30 to 50 kcps. In other words, you could perform the dead time calibration using the constant k-ratio method with the traditional linear expression, but you probably won't see a huge difference in the dead time value you obtain below 30 to 50 kcps. Above those count rates, the traditional linear expression fails, as shown in the post linked below.

That said, I think the constant k-ratio method is much easier to use and more precise for dead time calibrations whatever dead time expression is being used...

See here also:

https://probesoftware.com/smf/index.php?topic=1466.msg11173#msg11173

I would like to know how the ideal dead time constant is produced. In the "TiO2/Ti" example shown in your post, when you found the DT constant (1.32 us) was too high, you dropped it to 1.28 us and got a "flatter" line. So my question is: why 1.28, and not 1.27 or 1.29? How is the exact number produced, by some algorithm or by repeated trial and iteration? If the latter, how do you determine that the observed "k-ratio vs. current" line, which depends on the exact DT constant, is the best one?

It is easy. One simply adjusts the dead time constant (which depends on the dead time expression being utilized, because the dead time constant is actually a *parametric* constant, as described in the paper) in order to obtain a trend with a slope close to zero.

Therefore, the goal is to obtain a dead time constant (with a suitable dead time expression) that yields a flat (zero slope) response from low count rates to the highest possible count rates. One can do this by visual inspection, because a zero slope is easy to evaluate.

Obviously at some point (above several hundred kcps) the dead time correction becomes very large and quite sensitive to small changes in the dead time constant. But we found that the logarithmic dead time expression can yield quantitative results from zero to several hundred kcps once the dead time value is properly adjusted.
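
For anyone who would rather automate the "adjust until flat" step than do it by visual inspection, here is a hedged sketch of one way it could be done outside of any particular software: scan a range of dead time values, correct both sets of raw count rates, and keep the value whose k-ratio vs. beam current trend has the smallest slope. The synthetic data and the logarithmic form written here are assumptions for illustration only (check the expression against the paper), and the scan approach is simply one option, not the procedure used in Probe for EPMA.

import numpy as np
from scipy.optimize import brentq

currents = np.array([10., 20., 40., 80., 140., 200.])   # beam currents (nA)

def log_correct(raw, tau):
    # assumed logarithmic dead time expression: N = N'/(1 + ln(1 - tau*N'))
    return raw / (1.0 + np.log(1.0 - tau * raw))

# --- synthetic "measurements" so the sketch runs stand-alone -----------------
# True count rates are taken proportional to beam current and converted to raw
# (observed) rates with a known tau_true; replace raw_std and raw_unk with
# your own measured raw count rates.
tau_true = 1.30e-6
def to_raw(true_rate):
    upper = (1.0 - np.exp(-1.0)) / tau_true * 0.999      # stay inside the valid range
    return brentq(lambda r: log_correct(r, tau_true) - true_rate, 1.0, upper)
raw_std = np.array([to_raw(2000.0 * i) for i in currents])   # e.g. primary standard
raw_unk = np.array([to_raw(1100.0 * i) for i in currents])   # e.g. secondary standard
# -----------------------------------------------------------------------------

def kratio_slope(tau):
    k = log_correct(raw_unk, tau) / log_correct(raw_std, tau)
    return np.polyfit(currents, k, 1)[0]                 # slope of k-ratio vs. current

taus = np.arange(1.00e-6, 2.001e-6, 0.01e-6)
best_tau = min(taus, key=lambda t: abs(kratio_slope(t)))
print(f"Dead time giving the flattest k-ratio vs. current trend: {best_tau * 1e6:.2f} us")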

Don't forget, as described in the paper, one can also utilize the same constant k-ratio dataset to check one's picoammeter linearity, and also the agreement of (simultaneous) k-ratios from one spectrometer to another (and even from one instrument to another, given the same materials), which can be used to check the effective take-off angle of each (WDS and EDS) spectrometer.

I've attached the paper to this post for everyone's convenience.