Probe Software Users Forum

General EPMA => Discussion of General EPMA Issues => Topic started by: Probeman on May 26, 2022, 09:50:09 AM

Title: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 26, 2022, 09:50:09 AM
John Fournelle and I were chatting a little while back discussing dead time and how to calibrate our detectors and electronics and we realized that this also depends on the linearity of our picoammeter.
 
Normally when performing a dead time calibration we use a single material such as Ti metal (for LiF and PET) or Si metal (for PET and TAP), because these materials will yield a high count rate and also are conductive, so hopefully less chance of sample damage and/or charging.

We then repeatedly increment the beam current and measure the count rate as a function of beam current. The idea being that without dead time effects our count rate vs. beam current should be exactly proportional, that is a doubling of beam current should produce a doubling of count rate.

But because of the dead time characteristics of all detection systems (the interval during which the detector is busy processing a photon pulse), the system will be unavailable for photon detection sometimes, and that unavailability is simply a probability based on the length of the system (pulse processing) dead time and the count rate. 
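As a concrete illustration, the simplest detector model (the non-paralyzable one, a common assumption for WDS counting chains) relates the true and observed count rates as below. This is an illustrative Python sketch, not actual instrument or PfE code; the numbers are invented:

```python
# Minimal sketch of the non-paralyzable dead time model (an assumption,
# commonly used for WDS counting chains; tau is the dead time in seconds).
def observed_cps(true_cps, tau):
    """Rate actually recorded when photons arrive at true_cps."""
    return true_cps / (1.0 + true_cps * tau)

def corrected_cps(obs_cps, tau):
    """Invert the model: recover the true rate from the observed rate."""
    return obs_cps / (1.0 - obs_cps * tau)

tau = 3.0e-6                        # a 3 usec dead time
obs = observed_cps(100_000.0, tau)  # ~77K cps recorded for 100K cps incident
back = corrected_cps(obs, tau)      # recovers the 100K cps true rate
```

Note the two functions are exact inverses of each other, which is why the simple first-order software correction works well at modest count rates.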

Note that EDS systems automatically "extend" the live time while processing photons, so the dead time correction is part of the EDS hardware, while WDS systems must have the dead time correction applied in software after the measurements have been completed.

So this simple trend of count rate vs. beam current is utilized to calibrate our WDS spectrometers. However John Fournelle and I realized that if the picoammeter response is not accurate, the resulting dead time calibration will also not be accurate.
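One common way to formalize this traditional calibration: under the first-order model, the ratio obs/I is exactly linear in obs (obs/I = k − k·τ·obs, where k is the true cps per nA), so a straight-line fit recovers τ. A minimal sketch with synthetic numbers (this is not PfE's actual fitting routine; τ, k and the currents are invented):

```python
# Sketch of the traditional dead time calibration with synthetic data.
# (Real measurements would use e.g. Ti Ka count rates at a series of beam
# currents; tau_true, k and the currents below are invented numbers.)
tau_true = 3.0e-6      # the "unknown" dead time we try to recover (seconds)
k = 1000.0             # assumed true cps per nA for the material
currents = [5, 10, 20, 40, 80, 120, 160, 200]   # beam currents in nA

# Observed rates under the non-paralyzable model.
obs = [k * i / (1.0 + k * i * tau_true) for i in currents]

# Under this model obs/I is exactly linear in obs:
#   obs/I = k - (k * tau) * obs
# so a least-squares line through (obs, obs/I) yields tau = -slope/intercept.
x = obs
y = [o / i for o, i in zip(obs, currents)]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx
tau_fit = -slope / intercept   # recovers ~3.0e-6 s
```

Of course this recovery is only as good as the beam currents reported by the picoammeter, which is the whole point of the discussion below.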

Even more to the point, what exactly is it we are doing with our microprobe instruments? We are simply measuring k-ratios. That is all we do. Everything else we do after that is physics modeling.  The electron microprobe is a k-ratio machine, so perhaps that should be our focus. And that is exactly the point of the "consensus k-ratio" project as originally suggested by Nicholas Ritchie:

https://probesoftware.com/smf/index.php?topic=1239.0

If we cannot accurately compare our k-ratio measurements to the k-ratio measurements from another lab, we do not have a science.  See the consensus k-ratio project topic:

https://probesoftware.com/smf/index.php?topic=1442.0

That is to say, using the same *two* materials (in order to obtain a k-ratio), and at a given detector takeoff angle and electron beam energy, we should obtain the same k-ratio, not only on all of our spectrometers, but also on all instruments. See topic on simultaneous k-ratios:
 
https://probesoftware.com/smf/index.php?topic=369.msg1948#msg1948

Now, if we are in agreement so far, let's ask another question: at a given takeoff angle and electron beam energy, and two materials containing the same element (and no beam damage/sample charging!), should the instrument (ideally) produce the same k-ratio at all beam currents?

John Fournelle and I believe the answer to this question is "yes".  Now if the two materials have significantly different concentrations of an element, the count rates on these two materials will be significantly different, and therefore the dead time calibration (and picoammeter!) accuracy are critical in order to obtain accurate (the same) k-ratios at different beam currents.
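To make the reasoning concrete, one can simulate the test with a deliberately wrong dead time: the higher count rate material is under-corrected (or over-corrected) more than the other, so the k-ratio drifts with beam current. A sketch under the simple non-paralyzable model (all numbers invented):

```python
# Why a wrong dead time tilts the k-ratio vs. beam current plot.
# All numbers are invented; the detector model is the simple
# non-paralyzable one.
tau_true, tau_used = 3.0e-6, 2.5e-6   # actual vs. assumed dead time (s)
k_std, k_unk = 1500.0, 500.0          # true cps per nA on standard / unknown

def observe(true_cps, tau):
    return true_cps / (1.0 + true_cps * tau)

def correct(obs_cps, tau):
    return obs_cps / (1.0 - obs_cps * tau)

ratios = {}
for i_na in (10, 50, 100, 200):
    std = correct(observe(k_std * i_na, tau_true), tau_used)
    unk = correct(observe(k_unk * i_na, tau_true), tau_used)
    ratios[i_na] = unk / std

# Because tau_used < tau_true, the high-rate standard is under-corrected
# more than the unknown, so the k-ratio climbs with beam current
# (a positive slope means the dead time constant is too small).
```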

So first we looked at the k-ratio measurements from the MgO, Al2O3, MgAl2O4 round robin organized by Will Nachlas, where I measured only a few different beam currents, starting at 30 nA and then at lower beam currents to reduce the effects of any mis-calibration of the dead time constants.  Note: this is just the first thing we looked at; these measurements are by no means enough data, and we need many more measurements at higher beam currents!

Here are two results, the first using the original dead time calibration from 2015:

(https://probesoftware.com/smf/gallery/395_26_05_22_9_35_55.png)

where one can see that the higher beam current measurements yield a larger k-ratio. This (positive slope) "trend" should mean that the dead time constant is too small. And now using the new dead time calibration from this year on the same data again:

(https://probesoftware.com/smf/gallery/395_26_05_22_9_36_27.png)

So the slope has decreased as expected, but the dead time constant may still need to be increased. How can this be? Well maybe the picoammeter is not accurate...  we need better (and more) measurements!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 26, 2022, 10:06:54 AM
The next weekend I went into the lab and ran some different materials that should be more suitable (more electrically and thermally conductive). I chose Zn, Te, Se, ZnTe and ZnSe in order to measure k-ratios for Zn Ka, Se La and Te La on LiF, TAP and PET, with emission energies of 8.63, 1.38 and 3.77 keV.

These are still not enough measurements because they were run manually (more on that later), but here are some Zn Ka measurements using our latest dead time calibration from the traditional method:

(https://probesoftware.com/smf/gallery/395_26_05_22_9_55_04.png)

By the way, the above plot is from using the Output | Output Standard and Unknown XY Plots menu in Probe for EPMA, and selecting "On Beam Current" for the X axis and one of the element "Raw K-ratios" for the Y axis. 

Then we *manually* adjusted the dead time using the Update Dead Time Constants dialog in Probe for EPMA (from the Analytical menu):

https://probesoftware.com/smf/index.php?topic=1442.0

in order to obtain a more constant k-ratio as a function of beam current as seen here:

(https://probesoftware.com/smf/gallery/395_26_05_22_9_55_30.png)

Now that seems to be an improvement, but it is still not perfect. We need many more measurements and I hope to get to that this weekend.  In the meantime, please make your own measurements and post what you find from your instruments.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on May 27, 2022, 10:12:55 AM
In order to acquire these "constant k-ratio" datasets at different beam currents, and to ensure that the (primary) standard utilized for each beam current acquisition is at the same beam current as the secondary standard, one must acquire the datasets one beam current at a time. That is, all primary and secondary standards (or unknowns) must be acquired together at each beam current.

Until just now this had to be done semi-manually in Probe for EPMA. It might seem reasonable that one could utilize the "multiple setups" feature in the Automate! window, but unfortunately this feature was originally designed for the acquisition of thin film calibrations, where each standard and unknown is acquired at multiple beam energies, e.g., 10 keV, 15 keV, 20 keV.

Therefore the program would acquire each sample for *all* the (multiple) sample setups assigned to it. In other words the acquired samples might look like this acquisition, from a thin film run:

(https://probesoftware.com/smf/gallery/1_27_05_22_10_05_23.png)

But for the constant k-ratio method we need the samples acquired one (beam current) condition at a time, for *all* samples, as shown here from the Zn, Te, Se semi-manually acquired data shown in the previous posts:

(https://probesoftware.com/smf/gallery/1_27_05_22_10_08_55.png)

The reason of course is that samples with different accelerating voltages do not get utilized together for quantification, because the k-ratios will be different. But that is not true for samples acquired at different beam currents!  These k-ratios should be the same.  And since that is exactly what we are trying to measure, it is best to have each set of beam current measurements grouped together in time.

However, we recently thought of a way to modify the automation code to handle this constant k-ratio vs. beam current acquisition so that it can be fully automated. We added a new checkbox to the multiple sample setups dialog, as shown here (accessed as usual from the Automate! window):

(https://probesoftware.com/smf/gallery/1_27_05_22_9_52_05.png)

 8)

The only caveat is that all the selected samples should have the same number of (multiple) sample setups assigned, which is of course exactly what we want.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 31, 2022, 09:44:19 AM
OK, so Probe Software was able to implement a method to automate the acquisition of the "constant k-ratio" test, that is, acquiring k-ratios at multiple beam currents, as seen here:

https://probesoftware.com/smf/index.php?topic=40.msg10899#msg10899

To remind everyone, the traditional dead time calibration method relies on comparing count rates on a pure material (usually a pure metal such as Ti for LiF and PET, or Si for PET and TAP) as a function of beam current.  This new "constant k-ratio" method instead attempts to calibrate both the dead time *and* any picoammeter non-linearity by measuring k-ratios of a primary standard and a secondary standard as a function of beam current.

The idea is that the k-ratio should remain constant as a function of beam current (at a given beam energy and takeoff angle). And while this method is not a replacement for having a well calibrated picoammeter, it can reveal problems in one's picoammeter calibration.

I was able to acquire a pretty dense set of k-ratios for Zn Ka, Te La and Se La using pure metal primary standards and ZnTe and ZnSe at the following beam currents:  6, 8, 10, 15, 20, 40, 60, 80, 100, 120, 140, 160, 180 and 200 nA. This was 60 sec on-peak, 10 sec off-peak and 6 points per standard, so it took about 13 hours.

So let's start with an example of Zn Ka on LLIF, which had last been dead time calibrated (using the traditional dead time calibration method on Ti metal) at 3.5 usec.  Here is what we see using Zn as the primary standard and ZnTe as the secondary standard:

(https://probesoftware.com/smf/gallery/395_31_05_22_9_34_01.png)

So we see three things: first, there is a very large variance in the k-ratio! Second, there is an odd anomaly at 40 nA. And third, the dead time constant is too small, as the slope of the k-ratios is generally positive. 

Note the new "string selection" control in the Output | Output Standard Unknown XY Plots menu window in Probe for EPMA. Now let's use the Update Dead Time Constants dialog in Probe for EPMA as described here:

https://probesoftware.com/smf/index.php?topic=1442.msg10641#msg10641

and change the dead time constant in an attempt to obtain a more constant k-ratio trying 3.8 usec first:

(https://probesoftware.com/smf/gallery/395_31_05_22_9_38_58.png)

So that is a bit improved as one can see from the y axis k-ratio range. But there is still a large range of k-ratio as a function of beam current, and I suspect it is related to the picoammeter (mis-calibration). Remember, on a Cameca instrument, the beam current ranges are 0 to 5 nA, 5 to 50 nA and 50 nA to 500 nA (I think, but please correct me if that is wrong!).

I will provide another example soon, but please let me know what you think and/or if you have any "constant k-ratio" data to share on your JEOL or Cameca instrument.

By the way, what are the beam current ranges for the JEOL picoammeters?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 31, 2022, 11:40:15 AM
Then again, maybe not!

So considering this is a large area crystal we might expect that such high count rates might require the use of the high precision dead time correction as seen here:

(https://probesoftware.com/smf/gallery/395_31_05_22_11_09_36.png)

and documented here:

(https://probesoftware.com/smf/gallery/1_21_04_21_9_56_07.png)

So using this high precision dead time expression with the original dead time constant of 3.5 usec we get a much different plot:

(https://probesoftware.com/smf/gallery/395_31_05_22_11_13_15.png)

So now we have a too large dead time constant!  What would it take to get a more constant k-ratio as a function of beam current? How about 2.9 usec?

(https://probesoftware.com/smf/gallery/395_31_05_22_11_33_30.png)

OK, so that is better, though there is still an anomaly at 40 nA, and the high precision equation starts to break down at beam currents over 100 nA; but it's pretty constant (except for 40 nA) up to around 100 nA. 

So several conclusions.

1. I still think my picoammeter needs adjustment with the high precision current source (we're working on that), particularly given the issue at 40 nA.

2. I think we might try a "super" high precision dead time correction with a 3rd factorial term.   :o

Finally, given these results I agree with Owen Neill who said recently that we all should be using the high precision dead time equation option in Probe for EPMA for best accuracy.

More to come, but in the meantime here's Te La on a PET crystal (about half the x-ray count rate of Zn Ka on LLiF), which is actually quite good except for the "glitch" at 40 nA:

(https://probesoftware.com/smf/gallery/395_31_05_22_11_37_37.png)

Because the count rate was lower than on the LLiF, we don't see the need for a "super" high precision dead time correction, but I think we will see if Donovan will implement that for us...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on May 31, 2022, 02:11:46 PM
So that is a bit improved as one can see from the y axis k-ratio range. But there is still a large range of k-ratio as a function of beam current, and I suspect it is related to the picoammeter (mis-calibration). Remember, on a Cameca instrument, the beam current ranges are 0 to 5 nA, 5 to 50 nA and 50 nA to 500 nA (I think, but please correct me if that is wrong!).

Yes.
Cameca has 5 ranges:
up to 0.5 nA, 0.5-5 nA, 5-50 nA, 50-500 nA, and 500 nA-10 µA (yes, you read that right: the last range goes up to 10 microamperes, and it is possible to get beam currents of a few µA on the SX100 and SXFiveFE).
Now I see your 5 to 50 nA range is probably misaligned. Probably, as the data covers only one and a half of the five picoammeter ranges.
This whole endeavor, in my humble opinion, goes the wrong way about finding, identifying and fixing problems where they originate, and it completely mingles two completely unrelated issues, or shuffles the weight of one onto the other and back. Which of your current measurement ranges is correct: 5-50 nA, or 50-500 nA? Because I see in the end you settled on 2.9 µs, which somehow "flattens" the k-ratios at 50-500 nA, but I see clearly that 2.9 µs is wrong for the 5-50 nA range. The measurements at 40 nA are probably not an anomaly at all; rather, the 50-500 nA range is wrong. It would be interesting to see the intensity changes at 480 nA, 498 nA, 502 nA and 520 nA: if there were a step between 498 and 502 nA, it would tell you that the 50-500 nA range is wrong (of course, only if the 500 nA-10 µA range is closer to the correct measurement). Also, the 5-50 nA range is a bit tricky, as some beam-crossover funkiness happens in that range. Is your beam well aligned? Try using a different I-emission; that moves the crossover point to different C1 and C2 positions (and also to a different nA value) and could move the possible current anomaly to a different spot, which could identify whether part of the beam is missing the Faraday cup.

As all ranges are available effortlessly on the SXFiveFE, I had done such tests to make sure that the beam current measurement is continuous across the range boundaries, and it was a perfect curved line of beam current vs. count rate, with no discontinuities or visible steps at 500, 50, 5 or 0.5 nA. (The critical part is to include measurements from both sides close to each boundary, i.e. 505 and 495 nA, or 0.55 and 0.45 nA, and so on.) Having seen no such discontinuities on the SXFiveFE (the column is different from the tungsten/LaB6 one), I am going to check the SX100 as soon as possible.
And that is the correct procedure to check the picoammeter continuity without getting dead time, which is a counting issue, mixed in; k-ratios just sum all the issues into a single lump, hiding their precise origin.
For checking picoammeter linearity I would skip the WDS and its gas counting electronics entirely. Or at least I would choose very weak lines and moderate concentrations, i.e. 2nd order lines, so that dead time non-linearity would not bother the measurement up to those 200 nA. As for the measurement, a fixed 60 s at a 200 nA beam and the same at a 5 nA beam is unfair. It would be better to count up to some fixed number of counts, so that 5 nA would be counted much longer and 200 nA much shorter (or normally, as for 2nd order weak lines).
But even better, if your probe is equipped with an SDD EDS detector, why not use the total counts from that for current vs. x-rays? The EDS has very sophisticated electronic hardware for dealing with pulse pile-up (none on WDS), and with the highest throughput (shortest shaping time) selected and (if equipped) a medium or small aperture, that should give a really much better insight into the picoammeter and its linearity, with no problem up to 200 nA.

Only after identifying the picoammeter (beam/Faraday cup) issues, artefacts and workarounds or fixes does it make sense to move on to dead time estimation and calibration.

BTW, you came to a value of 2.9 µs. What dead time is set in your PeakSight? 3 µs?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on May 31, 2022, 03:51:36 PM
I agree that this measurement mingles both the dead time calibration and the picoammeter calibrations. This point was clearly stated in the opening post.

However, to my mind the value of this method is that it gives one a quantitative understanding of the total mis-calibration of the instrument.  These are instruments that merely generate k-ratios, after all!

If all is good, then one is good. If not, then how good or how bad?  This can be ascertained by looking at the Y axis in k-ratio units, which for major elements is close to the concentration (assuming the primary standard is a pure element; if not, it is a simple calculation).

For me at least I find this helpful.  However, this method does re-iterate the need for a better dead time correction in software *and* an honest to god picoammeter calibration, which we are working on.

That said, it was pleasing to see the accuracy of the Te La line up to even 200 nA.  And as promised here is a closer look at the picoammeter (mis)calibration on Te La up to 100 nA (run last night).

(https://probesoftware.com/smf/gallery/395_31_05_22_3_50_08.png)

Not terrible at least, actually a sub percent level of variance.  But we are proceeding with obtaining a high accuracy current source nonetheless...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 01, 2022, 01:26:58 PM
So you all will remember Probeman showed this plot above using the high precision dead time equation for the Zn Ka line of LLiF with a dead time of 2.9 usec:

(https://probesoftware.com/smf/gallery/395_31_05_22_11_33_30.png)

Well, just for fun we've implemented a three term factorial dead time expression which we call the "super high precision" deadtime expression.  It only really affects count rates above 100K cps.  But in the above Zn Ka plot the Zinc standard is producing 140K cps at 200 nA on a LLIF crystal!

Even setting the 40 nA k-ratios issue aside, we still have some picoammeter calibration issues, but the high current k-ratio values are a bit more consistent:

(https://probesoftware.com/smf/gallery/1_01_06_22_1_24_55.png)

What's amazing is how sensitive the dead time constant is when one is at such high count rates!  Just a difference of 0.01 usec makes a visible difference.
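That sensitivity is easy to check numerically with the first-order form of the correction (the 140K cps observed rate and the dead times below are illustrative):

```python
# How much does 0.01 usec of dead time matter at high count rates?
# (first-order form; the 140K cps observed rate is illustrative)
def correct(obs_cps, tau):
    return obs_cps / (1.0 - obs_cps * tau)

obs = 140_000.0
a = correct(obs, 2.90e-6)
b = correct(obs, 2.91e-6)
pct = 100.0 * (b - a) / a   # roughly a quarter percent shift in corrected rate
```

At such count rates cps·τ is around 0.4, so the correction factor is large and a tiny change in τ is amplified into a visible shift in the corrected intensity.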
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 01, 2022, 05:49:42 PM
Once one's dead time constants are properly adjusted, it's a bit amazing how accurate things can get.

Here is an analysis of ZnTe using Zn, Se and Te pure metal standards at 6 nA:

St  658 Set   1 ZnTe (synthetic), Results in Elemental Weight Percents
 
ELEM:       Zn      Se      Te      Te      Zn
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL
BGDS:      EXP     EXP     LIN     LIN     LIN
TIME:    60.00   60.00   60.00     .00     .00
BEAM:     6.81    6.81    6.81     .00     .00
AGGR:        2               2               

ELEM:       Zn      Se      Te      Te      Zn   SUM 
XRAY:     (ka)    (la)    (la)    (la)    (ka)
    19  33.714   -.047  66.562    .000    .000 100.229
    20  33.838   -.110  66.537    .000    .000 100.265
    21  33.835   -.084  66.782    .000    .000 100.533
    22  33.781   -.050  66.650    .000    .000 100.381
    23  33.821   -.063  66.776    .000    .000 100.533
    24  33.860   -.030  67.059    .000    .000 100.890

AVER:   33.808   -.064  66.728    .000    .000 100.472
SDEV:     .053    .029    .192    .000    .000    .242
SERR:     .022    .012    .078    .000    .000
%RSD:      .16  -45.42     .29   .0000   .0000

PUBL:   33.880    n.a.  66.120    n.a.    n.a. 100.000
%VAR:     -.21     ---     .92     .00     .00
DIFF:    -.072     ---    .608     ---     ---
STDS:      530     534     552       0       0

STKF:   1.0000  1.0000  1.0000   .0000   .0000
STCT:  1841.76 2019.25  749.16     .00     .00

UNKF:    .3628  -.0002   .6340   .0000   .0000
UNCT:   668.12    -.44  474.98     .00     .00
UNBG:    13.08    3.90    4.89     .00     .00

ZCOR:    .9320  2.9673  1.0524   .0000   .0000
KRAW:    .3628  -.0002   .6340   .0000   .0000
PKBG:    52.09     .89   98.07     .00     .00
INT%:     ---- -117.13    ----    ----    ----

And here at 200 nA:

St  658 Set  14 ZnTe (synthetic), Results in Elemental Weight Percents
 
ELEM:       Zn      Se      Te      Te      Zn
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL
BGDS:      EXP     EXP     LIN     LIN     LIN
TIME:    60.00   60.00   60.00     .00     .00
BEAM:   200.61  200.61  200.61     .00     .00
AGGR:        2               2               

ELEM:       Zn      Se      Te      Te      Zn   SUM 
XRAY:     (ka)    (la)    (la)    (la)    (ka)
   409  33.831   -.075  66.756    .000    .000 100.513
   410  33.847   -.065  66.751    .000    .000 100.533
   411  33.858   -.073  66.759    .000    .000 100.544
   412  33.861   -.063  66.778    .000    .000 100.575
   413  33.877   -.060  66.864    .000    .000 100.681
   414  33.890   -.066  66.870    .000    .000 100.694

AVER:   33.861   -.067  66.796    .000    .000 100.590
SDEV:     .021    .006    .055    .000    .000    .078
SERR:     .009    .002    .023    .000    .000
%RSD:      .06   -8.47     .08   .0000   .0000

PUBL:   33.880    n.a.  66.120    n.a.    n.a. 100.000
%VAR:     -.06     ---    1.02     .00     .00
DIFF:    -.019     ---    .676     ---     ---
STDS:      530     534     552       0       0

STKF:   1.0000  1.0000  1.0000   .0000   .0000
STCT:  1809.87 1841.21  749.56     .00     .00

UNKF:    .3633  -.0002   .6347   .0000   .0000
UNCT:   657.56    -.42  475.71     .00     .00
UNBG:    13.24    3.94    4.84     .00     .00

ZCOR:    .9320  2.9675  1.0524   .0000   .0000
KRAW:    .3633  -.0002   .6347   .0000   .0000
PKBG:    50.68     .89   99.20     .00     .00
INT%:     ---- -115.83    ----    ----    ----
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on June 02, 2022, 07:17:39 AM
probeman,
I want to ask again: what is the pulse blanking value (the integer dtime which is sent to the Cameca hardware when the spectrometer is set up, prior to starting counting) on the spectrometer of your SX100 for which you found the dead time to be 2.9 µs?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 02, 2022, 08:37:29 AM
probeman,
I want to ask again: what is the pulse blanking value (the integer dtime which is sent to the Cameca hardware when the spectrometer is set up, prior to starting counting) on the spectrometer of your SX100 for which you found the dead time to be 2.9 µs?

Sorry, I saw your question and meant to reply, but wanted to get all 5 spectrometers calibrated. These are using the following emission lines:

      1      2       3         4       5
    PET    LTAP    LLIF      PET     LiF
   Te La   Se La   Zn Ka    Te La   Zn Ka

The "enforced" (integer) dead time for all the spectrometers is 3 usec. For my spectrometers I'm getting calibrated dead times of 2.85, 2.80, 2.80, 3.00 and 3.00 usec, respectively. The 3rd digit actually matters at high beam currents!   :o

But (to everyone), what I'm finding really interesting in all this is that based on these k-ratio versus beam current plots, the software dead time correction needs to be expanded to include more factorial terms for accuracy at high beam currents.

So, the "normal" dead time expression is:

Code: [Select]
' Normal deadtime correction
' (cps! is the measured count rate, dtime! is the dead time in seconds)
If DeadTimeCorrectionType% = 1 Then
temp# = 1# - cps! * dtime!
If temp# <> 0# Then cps! = cps! / temp#
End If

Which I've had as the default since forever.  In fact, as seen below, this expression starts failing even at 20 to 30 nA on large area Bragg crystals!  So seeing as we are routinely getting close to 50K cps on many modern spectrometers, we really should, as Owen Neill has mentioned, be using (at least) the high precision form of the equation, which is here:

Code: [Select]
' Precision deadtime correction
If DeadTimeCorrectionType% = 2 Then
temp# = 1# - (cps! * dtime! + cps! ^ 2 * (dtime! ^ 2) / 2#)
If temp# <> 0# Then cps! = cps! / temp#
End If

This "high precision" expression doesn't start failing until around 100 nA on large area Bragg crystals. So, what is clear to me now is, if we want to have excellent accuracy at even higher beam currents, we really need to utilize a more extended version of the dead time equation, which I have attempted to implement here:

Code: [Select]
' Super precision deadtime correction
If DeadTimeCorrectionType% = 3 Then
temp2# = 0#
For n& = 2 To 6
temp2# = temp2# + cps! ^ n& * (dtime! ^ n&) / n&
Next n&
temp# = 1# - (cps! * dtime! + temp2#)
If temp# <> 0# Then cps! = cps! / temp#
End If

So this uses exponents up to ^6!
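For anyone who wants to compare the three expressions side by side, here is a Python transcription of the VB code above (cps is the observed count rate, dtime the dead time in seconds; the 100K cps test value is just an example):

```python
# Python transcription of the three dead time expressions above, for
# comparison (cps = observed count rate, dtime = dead time in seconds).
def normal(cps, dtime):
    # 1st order: cps / (1 - cps*dtime)
    return cps / (1.0 - cps * dtime)

def precision(cps, dtime):
    # adds the second term of the series: + (cps*dtime)^2 / 2
    x = cps * dtime
    return cps / (1.0 - (x + x ** 2 / 2.0))

def super_precision(cps, dtime):
    # six terms: sum over n = 1..6 of (cps*dtime)^n / n
    x = cps * dtime
    return cps / (1.0 - sum(x ** n / n for n in range(1, 7)))

# At a high observed rate the three forms diverge progressively;
# at low count rates they agree closely.
cps, dtime = 100_000.0, 2.9e-6
results = [normal(cps, dtime), precision(cps, dtime),
           super_precision(cps, dtime)]
```

Each added term increases the corrected count rate, which is why switching expressions at a fixed dead time constant changes the apparent slope of the k-ratio plots.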

Honestly I had never previously appreciated the importance of the dead time expression having enough factorial terms, until I started plotting up these k-ratio versus beam current plots. What a revelation I have to say.  ;D

Here is what I mean. Using the normal dead time expression we obtain this on ZnTe/Zn on my LLiF spectrometer:

(https://probesoftware.com/smf/gallery/395_02_06_22_8_20_40.png)

As one can see it begins to fail at around 20 to 30 nA (ignoring the "glitch" at 40 nA). Now let's try the "high precision" version of the dead time expression with the extra factorial term:

(https://probesoftware.com/smf/gallery/395_02_06_22_8_21_16.png)

And here is the Zn Ka data using the dead time expression with 6(!) factorial terms:

(https://probesoftware.com/smf/gallery/395_02_06_22_8_22_05.png)

So there's still something not quite right with my picoammeter (which I will discuss in my next post), but I have to say this has been a real learning experience for me.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 02, 2022, 08:42:36 AM
This "super high precision" dead time correction is now available in the latest version 13.1.5 Probe for EPMA.

(https://probesoftware.com/smf/gallery/1_02_06_22_8_41_21.png)

We're calling it the "three factorial expression", but as Probeman mentioned above it's actually 6 factorials!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 02, 2022, 12:35:51 PM
And here is the Zn Ka data using the dead time expression with 6(!) factorial terms:

(https://probesoftware.com/smf/gallery/395_02_06_22_8_22_05.png)

So there's still something not quite right with my picoammeter (which I will discuss in my next post), but I have to say this has been a real learning experience for me.

So here is why I think the k-ratios take a small dip in the quoted plot above:

(https://probesoftware.com/smf/gallery/395_02_06_22_12_22_37.png)

Note that this is a plot of the Zn on-peak counts (not k-ratio) and notice also that the dip in the k-ratio plot seems to correspond with the bump in the on-peak counts.

I suspect this bump is a symptom of why my picoammeter needs adjustment. Finally, as Mike Jercinovic has pointed out, if the problem is in the picoammeter, the mis-calibration should show up in all spectrometers, and these plots would seem to confirm that:

(https://probesoftware.com/smf/gallery/395_02_06_22_12_32_43.png)

(https://probesoftware.com/smf/gallery/395_02_06_22_12_32_57.png)

(https://probesoftware.com/smf/gallery/395_02_06_22_12_33_09.png)

(https://probesoftware.com/smf/gallery/395_02_06_22_12_33_21.png)

I suspect the "break" in the 40 nA beam current setting in the k-ratio plots (as seen in previous posts) may only be a beam current regulation issue for Cameca instruments at that "crossover").
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 03, 2022, 09:38:16 AM
To summarize:

1. We should all be using the "high precision" dead time expression (or even better the "super high precision" dead time expression!) for correction of measured intensities in software.

2. One can test the overall accuracy of the dead time calibration and the picoammeter calibration using the "constant k-ratio" test, where one measures k-ratios over a range of beam currents.  These measured k-ratios should (ideally) be constant (within counting precision) as a function of beam current (for a given beam energy and takeoff angle).

The constant k-ratio test is useful because it yields a plot that is easily interpreted in order to evaluate the overall accuracy of the k-ratios produced by the instrument.

3. Once the dead time constants in software are adjusted until the resulting k-ratios are as constant as possible, then any remaining inaccuracy is due to the picoammeter (mis)calibration.

4. The picoammeter calibration accuracy can be seen by a simple plot of cps/nA (dead time corrected) as a function of beam current. The on-peak intensities should ideally be constant as a function of beam current.

5. The dead time calibration of each spectrometer is easily performed using the constant k-ratio test, but you may need to consult with your instrument engineer to perform a calibration of your picoammeter.
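Point 4 can be sketched as a quick check; the corrected count rates below are hypothetical numbers, just to show the arithmetic:

```python
# Quick check of point 4: dead-time-corrected on-peak cps/nA should be flat
# across beam currents. The corrected_cps values below are hypothetical.
currents = [10, 20, 40, 80, 160]                        # nA
corrected_cps = [15020.0, 30010.0, 59800.0, 120150.0, 240300.0]

ratios = [c / i for c, i in zip(corrected_cps, currents)]
mean = sum(ratios) / len(ratios)
spread_pct = 100.0 * (max(ratios) - min(ratios)) / mean
# A spread well beyond counting statistics points at the picoammeter
# (e.g. a step where a current range boundary is crossed).
```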
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 03, 2022, 12:35:25 PM
OK, so this may help.

Here are some analyses using pure metal standards acquired at 10 nA, and the secondary standards acquired at 200 nA!   First ZnTe at 200 nA:

St  658 Set  14 ZnTe (synthetic), Results in Elemental Weight Percents
 
ELEM:       Zn      Se      Te      Te      Zn
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL
BGDS:      EXP     EXP     LIN     LIN     LIN
TIME:    60.00   60.00   60.00     .00     .00
BEAM:   200.61  200.61  200.61     .00     .00
AGGR:        2               2               

ELEM:       Zn      Se      Te      Te      Zn   SUM 
XRAY:     (ka)    (la)    (la)    (la)    (ka)
   409  33.512   -.063  66.857    .000    .000 100.306
   410  33.528   -.054  66.851    .000    .000 100.325
   411  33.539   -.061  66.860    .000    .000 100.337
   412  33.542   -.053  66.878    .000    .000 100.367
   413  33.558   -.050  66.964    .000    .000 100.473
   414  33.571   -.056  66.971    .000    .000 100.486

AVER:   33.542   -.056  66.897    .000    .000 100.382
SDEV:     .021    .005    .055    .000    .000    .078
SERR:     .009    .002    .023    .000    .000
%RSD:      .06   -9.22     .08   .0000   .0000

PUBL:   33.880    n.a.  66.120    n.a.    n.a. 100.000
%VAR:    -1.00     ---    1.17     .00     .00
DIFF:    -.338     ---    .777     ---     ---
STDS:      530     534     552       0       0

STKF:   1.0000  1.0000  1.0000   .0000   .0000
STCT:  1837.10 2016.76  748.16     .00     .00

UNKF:    .3600  -.0002   .6359   .0000   .0000
UNCT:   661.35    -.38  475.71     .00     .00
UNBG:    13.24    3.94    4.84     .00     .00

ZCOR:    .9317  2.9644  1.0521   .0000   .0000
KRAW:    .3600  -.0002   .6358   .0000   .0000
PKBG:    50.97     .90   99.20     .00     .00
INT%:     ---- -114.56    ----    ----    ----

And now ZnSe at 200 nA:

St  660 Set  14 ZnSe (synthetic), Results in Elemental Weight Percents
 
ELEM:       Zn      Se      Te      Te      Zn
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL
BGDS:      EXP     EXP     LIN     LIN     LIN
TIME:    60.00   60.00   60.00     .00     .00
BEAM:   200.65  200.65  200.65     .00     .00
AGGR:        2               2               

ELEM:       Zn      Se      Te      Te      Zn   SUM 
XRAY:     (ka)    (la)    (la)    (la)    (ka)
   415  45.476  53.333   -.002    .000    .000  98.807
   416  45.427  53.872    .005    .000    .000  99.304
   417  45.356  54.034    .005    .000    .000  99.395
   418  45.457  53.843   -.001    .000    .000  99.299
   419  45.383  53.666   -.002    .000    .000  99.046
   420  45.181  53.264    .000    .000    .000  98.444

AVER:   45.380  53.669    .001    .000    .000  99.049
SDEV:     .107    .310    .003    .000    .000    .366
SERR:     .044    .127    .001    .000    .000
%RSD:      .24     .58  475.68   .0000   .0000

PUBL:   45.290  54.710    .000    n.a.    n.a. 100.000
%VAR:      .20   -1.90     .00     .00     .00
DIFF:     .090  -1.041    .000     ---     ---
STDS:      530     534     552       0       0

STKF:   1.0000  1.0000  1.0000   .0000   .0000
STCT:  1837.10 2016.76  748.16     .00     .00

UNKF:    .5029   .2512   .0000   .0000   .0000
UNCT:   923.85  506.53     .00     .00     .00
UNBG:    10.69    4.80    3.25     .00     .00

ZCOR:    .9024  2.1369  1.1714   .0000   .0000
KRAW:    .5029   .2512   .0000   .0000   .0000
PKBG:    87.42  106.59    1.00     .00     .00
INT%:     ----     .00    ----    ----    ----


And remember, this is with the picoammeter still not calibrated properly!    :o

I would very much welcome seeing constant k-ratio data from other instruments... if you want feel free to call me and I can talk you through the procedure in Probe for EPMA. It's completely automated now!   ;D
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 03, 2022, 03:00:18 PM
So there remains the question of emission line energies and dead time calibration.

I will run some more measurements this weekend, but it may simply be the case that Cameca instruments, with their "enforced" integer dead time electronics, do not experience variable pulse widths as a function of emission line energy.

In the mean time it would be most helpful if we could obtain additional constant k-ratio measurements from other instruments, particularly JEOL instruments.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 04, 2022, 10:57:48 AM
So I used the same dead time constants from last Sunday's run and applied them to the Monday run, where I acquired more beam currents but only up to 100 nA, and everything looked very stable and consistent using the "super high precision" dead time correction expression (with six terms).

(https://probesoftware.com/smf/gallery/395_04_06_22_1_24_32.png)

(https://probesoftware.com/smf/gallery/395_04_06_22_10_55_25.png)

(https://probesoftware.com/smf/gallery/395_04_06_22_10_55_54.png)

You get the picture...  again we see the "glitch" at around 40 nA, but the k-ratios are quite constant from 6 to 100 nA.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 04, 2022, 11:01:16 AM
Does anyone know what dead time expressions Cameca and JEOL are using for their WDS intensities?  Or Bruker and Thermo WDS?

By the way, we wrote up the complete procedure for running the constant k-ratio test and re-processing the data, and it is attached below (login to see attachments as usual).

Let us know if the document is unclear at any point.

Edit by John: update pdf attachment for standard intensity drift correction notes.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 06, 2022, 12:53:16 PM
I ran different elements (emission energies) on the instrument yesterday to see if I could tease out any trends (or not) in the dead time calibrations using the new "super high precision" dead time correction expression. Unfortunately I didn't think to keep the bias voltages exactly the same on all spectrometers, so that is a possible uncontrolled variable. But the initial data is still worth examining, I think.

Here is the run I did on 05/29/2022 up to 200 nA using Zn Ka, Se La and Te La at 8.64, 1.38 and 3.77 keV respectively:
           Sp 1      Sp 2      Sp 3      Sp 4      Sp 5
LINE:     Te La     Se La     Zn Ka     Te La     Zn Ka
XTAL:       PET      LTAP      LLIF       PET       LIF
BIAS:     1320v     1330v     1850v     1340v     1840v
DT:      2.85us    2.80us    2.80us    3.00us    3.00us

When I plotted up the new data from 06/04/2022 using the same DT constants from 05/29/2022, I saw some significant differences. For example, on Sp 1, when going from Te La (PET) to Se La (TAP) and using the same bias voltages, the k-ratio plot looks like this:

(https://probesoftware.com/smf/gallery/395_06_06_22_12_48_26.png)

After the DT is adjusted to 3.30 usec in order to produce a more constant k-ratio, we obtain this:

(https://probesoftware.com/smf/gallery/395_06_06_22_12_48_40.png)

So here is a summary of the run from yesterday using different emission lines on the spectrometers and adjusted to obtain a constant k-ratio as a function of beam current:

           Sp 1      Sp 2      Sp 3      Sp 4      Sp 5
LINE:     Se La     Te La     Te La     Se La     Te La
XTAL:       TAP      LPET      LPET       TAP       PET
BIAS:     1320v     1320v     1850v     1313v     1850v
DT:      3.30us    2.60us    2.70us    3.20us    2.90us

The bias voltages in red were modified from the previous run (note that Sp 3 and Sp 5 are 2 atm detectors). So, you can see that going from Te La (PET) to Se La (TAP) on Sp 1 and 4, the emission energy went down, but the DT required for a constant k-ratio went up (both low pressure detectors).

However, on Sp 3 and 5, going from Zn Ka (LIF) to Te La (PET), the emission energies also went down, but the DT had to be adjusted down slightly (by 0.1 usec) to obtain a constant k-ratio. But both of these are 2 atm detectors, so that is another variable.

Meanwhile on Sp 2 going from Se La (TAP) to Te La (PET) the emission energy went up, but the DT had to be adjusted down slightly to obtain a constant k-ratio.

A bit of a mixed bag to say the least, so I am going to try some other emission lines this weekend.  By the way, I heard back from Cameca and they only utilize the "normal" or classic dead time expression, which we now know will not work above 50K cps.

In any case, one can specify different dead time constants for different crystals in Probe for EPMA, so maybe this variation in DT is something that can be dealt with.
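The classic expression and a multi-term Taylor-series form can be compared numerically. Below is a minimal sketch, assuming the multi-term correction takes the form N = N'/(1 - sum of (N'*tau)^n/n!); that series is an assumption for illustration only, and the exact "super high precision" expression used in Probe for EPMA may differ:

```python
import math

def correct_classic(nprime, tau):
    # Classic single-term correction: N = N' / (1 - N'*tau)
    return nprime / (1.0 - nprime * tau)

def correct_multiterm(nprime, tau, terms=6):
    # Assumed multi-term form: N = N' / (1 - sum_{n=1..terms} (N'*tau)^n / n!)
    x = nprime * tau
    denom = 1.0 - sum(x**n / math.factorial(n) for n in range(1, terms + 1))
    return nprime / denom

tau = 3.0e-6  # seconds, a typical Cameca-style dead time
for nprime in (10e3, 50e3, 100e3, 200e3):  # observed count rates (cps)
    c1 = correct_classic(nprime, tau)
    c6 = correct_multiterm(nprime, tau)
    print(f"{nprime:8.0f} cps: classic {c1:10.0f}, multi-term {c6:10.0f}, "
          f"diff {100.0 * (c6 - c1) / c1:+.2f}%")
```

With this assumed series the two corrections agree to roughly a percent up to about 50K cps at 3 usec and then diverge rapidly, consistent with the behavior described in this thread. (Also note that the assumed series sums toward 2 - exp(N'*tau), so it is only usable while N'*tau stays below ln 2.)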
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 08, 2022, 04:22:08 PM
Does anyone know what dead time expressions Cameca and JEOL are using for their WDS intensities?  Or Bruker and Thermo WDS?

By the way, we wrote up the complete procedure for running the constant k-ratio test and re-processing the data, and it is attached below (login to see attachments as usual).

Let us know if the document is unclear at any point.

We added a final section to the above pdf document attached to this message:

https://probesoftware.com/smf/index.php?topic=1466.msg10920#msg10920

Describing how to edit your SCALERS.DAT file once you have determined your new dead time constants using the "super high precision" expression.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 10, 2022, 12:46:57 PM
OK, so this is pretty cool.

It just occurred to me last night (yes, I was dreaming about WDS!) that these "constant k-ratio" measurements can characterize not only our dead time constants and our picoammeter calibrations, but also our "effective" takeoff angles! The effective takeoff angle is the actual angle of X-ray measurement, defined by our Bragg crystal (is it symmetrically diffracting?), the spectrometer alignment, and the surface of our sample holder. Of course, this requires that one measures the same element and X-ray line on more than one spectrometer!

So the reason this "constant k-ratio" method is interesting is not only that we should get the same k-ratio at any beam current, but that we should also get the same k-ratios (within precision) for *all* the spectrometers on our instrument, assuming of course the same element, X-ray line, beam energy and takeoff angle are utilized in the k-ratio measurement.

This is exactly the "simultaneous k-ratio" test that is often utilized in initial instrument acceptance testing:

https://probesoftware.com/smf/index.php?topic=369.msg1948#msg1948

So here is a "constant k-ratio" plot of the two spectrometers using the same (Se La) emission line measured on two spectrometers using TAP crystals:

(https://probesoftware.com/smf/gallery/395_10_06_22_2_03_20.png)

As you can see spectrometers 1 and 4 agree pretty well with each other, which is impressive because the Se La line is only 1.38 keV, so fairly low energy and therefore more affected by variations in the effective takeoff angle.  Now how about Te La on three spectrometers using PET crystals:

(https://probesoftware.com/smf/gallery/395_10_06_22_12_38_59.png)

Hmmm, seems we might have a small difference between the two LPET crystals and the normal PET crystal.  The cool thing about using the constant k-ratio method for this simultaneous k-ratio evaluation is that one obtains an immediate sense of the relative magnitude of the error. Our investigations continue...

I guess the point is that we need to make sure we have consistent k-ratios not only for different beam currents (dead times and picoammeter) but also between our spectrometers, before we start comparing our k-ratios to other instruments (which are hopefully equally well calibrated in these parameters!).
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on June 11, 2022, 10:01:51 AM
Does anyone know what dead time expressions Cameca and JEOL are using for their WDS intensities?  Or Bruker and Thermo WDS?

By the way, we wrote up the complete procedure for running the constant k-ratio test and re-processing the data, and it is attached below (login to see attachments as usual).

Let us know if the document is unclear at any point.

We added a final section to the above pdf document attached to this message:

https://probesoftware.com/smf/index.php?topic=1466.msg10920#msg10920

Describing how to edit your SCALERS.DAT file once you have determined your new dead time constants using the "super high precision" expression.

We added yet another section to the constant k-ratio method procedure on simultaneous k-ratios in the pdf attached here.

Edit by John: update pdf attachment
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on June 12, 2022, 06:27:52 PM
I'd like to point out that the (simple) expression for deadtime commonly in use, N’/I = k(1-N’τ), and lending itself to illustration on plots of cps/nA versus cps, is not the only means of calculating deadtime (simply).  Heinrich et al. (1966; attached) applied the so-called “ratio method,” in which the ratios of the observed count rates (N1’ and N2’) of two X-ray lines (they used Cu Ka and Cu Kb on Cu metal) measured simultaneously on two spectrometers at varying beam current (to produce two datasets in which N1’ alternately represents Cu Ka or Cu Kb) are used to determine the deadtimes for both spectrometers.  Although the expressions are linear and only applicable at relatively low count rates, since evaluation of the deadtime by this means only involves consideration of slopes and intercepts on plots of N1’/N2’ versus N1’ (Figs. 7 and 8 ), inaccuracy in the beam current measurement is irrelevant.
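The ratio method can be sketched numerically with synthetic data. The linearization below is my own reading of the setup (consult Heinrich et al. 1966 for their exact equations): under an idealized non-extending dead time, N' = N/(1 + N*tau), the ratio N1'/N2' turns out to be exactly linear in N1', and swapping the two lines between the spectrometers yields a second slope, so both dead times can be solved for; the beam current never enters the fit.

```python
import numpy as np

tau1, tau2 = 1.5e-6, 3.0e-6   # "true" dead times (s) we will try to recover
R = 7.0                       # true Ka/Kb intensity ratio (invented)

def observed(true_cps, tau):
    # Non-paralyzable (non-extending) dead time response
    return true_cps / (1.0 + true_cps * tau)

currents = np.linspace(2.0, 20.0, 10)   # nA, low count rate regime
ka = 1000.0 * currents                  # true Ka cps (proportional to current)
kb = ka / R                             # true Kb cps

# Dataset A: spec 1 on Ka, spec 2 on Kb; dataset B: lines swapped.
n1a, n2a = observed(ka, tau1), observed(kb, tau2)
n1b, n2b = observed(kb, tau1), observed(ka, tau2)

# Under this model N1'/N2' is exactly linear in N1':
#   dataset A: intercept R,   slope tau2 - R*tau1
#   dataset B: intercept 1/R, slope tau2 - tau1/R
sA, R_est = np.polyfit(n1a, n1a / n2a, 1)
sB = np.polyfit(n1b, n1b / n2b, 1)[0]

tau1_est = (sA - sB) / (1.0 / R_est - R_est)
tau2_est = sA + R_est * tau1_est
print(tau1_est, tau2_est)   # recovers 1.5e-6 and 3.0e-6
```

Note that only observed count rates appear in the fit, which is why inaccuracy in the beam current measurement is irrelevant to this method.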
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on June 13, 2022, 12:21:26 AM
This thread made me sit a few days on the SX100 and do some checking.
Producing the plots and consolidating the data will take some time.

However, at this moment I can point, with 100% certainty, to a few problems of non-linearity:
1. The differential PHA mode with a wide window, widely used and evangelized on this forum, will (unlike the integral method) introduce non-linearity at high count rates, as the PHA "peaks" of double and triple pulse pile-ups move into the PHA window. That makes the counting particularly prone to random fluctuations of temperature and pressure. It would be better to use integral mode (simpler), or a narrow window (moving with the peak); the second would have pseudo-expandable dead time behavior. The count rate with wide-window PHA drops to 95% of the integral value in the worst case. I see absolutely no advantage of a wide window vs. integral, as integral will have a simple parabola shape in the beam current vs. intensity plot, where wide-window PHA will have a similar parabola with distortions (waves) at high current. The plots in this thread do not catch that, as they jump from 140 to 200 nA without smaller steps in between.
2. This proposed factorial math model does not work well. If the higher count rates are fitted correctly, then the lower count rates are overestimated. In particular, if point 1 is ignored, it can produce a wrong fit at both high and low currents.
3. Is the 2nd point a baseless claim? How, then, to explain those dead times of 2.9 us while the hardware blanks pulses for 3 us? Unless this SX100 is accelerated to relativistic speeds or has a black hole under it, there is no physical way for pulses to be passed before unblanking. Rather, it evidences over-fitting of that method at low currents (actually at low count rates; we should not care about beam current at all), where count rates are overestimated. I already shared the Jupyter notebooks with the MC simulation in another thread. There it was clear that the formula overestimates the rate at low count rates.
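For anyone wanting to reproduce this kind of check, a minimal Monte Carlo of a non-extending (blanking) dead time can be sketched as follows. This is my own sketch with invented numbers, not the notebooks referenced above: Poisson arrivals are generated, and any photon arriving within tau of the last *counted* photon is lost.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_observed(true_cps, tau, duration=5.0):
    """Simulate the observed count rate of a non-paralyzable detector."""
    # Poisson arrival times over `duration` seconds.
    n_events = rng.poisson(true_cps * duration)
    arrivals = np.sort(rng.uniform(0.0, duration, n_events))
    counted = 0
    last = -np.inf
    for t in arrivals:
        if t - last >= tau:     # detector is live again
            counted += 1
            last = t
    return counted / duration

tau = 3.0e-6
for true_cps in (1e3, 1e4, 1e5):
    obs = simulate_observed(true_cps, tau)
    model = true_cps / (1.0 + true_cps * tau)   # expected N' for this model
    print(f"true {true_cps:8.0f}  simulated {obs:8.0f}  model {model:8.0f}")
```

Comparing the simulated rates against whichever correction formula is under discussion (classic, two-term, or multi-term) then shows directly where each one over- or under-estimates.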
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 13, 2022, 10:14:19 AM
I'd like to point out that the (simple) expression for deadtime commonly in use, N’/I = k(1-N’τ), and lending itself to illustration on plots of cps/nA versus cps, is not the only means of calculating deadtime (simply).  Heinrich et al. (1966; attached) applied the so-called “ratio method,” in which the ratios of the observed count rates (N1’ and N2’) of two X-ray lines (they used Cu Ka and Cu Kb on Cu metal) measured simultaneously on two spectrometers at varying beam current (to produce two datasets in which N1’ alternately represents Cu Ka or Cu Kb) are used to determine the deadtimes for both spectrometers.  Although the expressions are linear and only applicable at relatively low count rates, since evaluation of the deadtime by this means only involves consideration of slopes and intercepts on plots of N1’/N2’ versus N1’ (Figs. 7 and 8 ), inaccuracy in the beam current measurement is irrelevant.

Hi Brian,
I saw your post last night and was planning on responding this morning and when I got up to do so, your post has been removed and replaced with the above post.  I was so looking forward to responding to your previous comments.  Your feedback is always appreciated even when we're not in complete agreement!

Just working from memory, I would explain that with regard to your comment on simultaneous k-ratio measurements, you are correct: one should measure k-ratios on all 5 spectrometers, and we did so, just not using the same lines. The reason is that this topic started out looking at a new method to calibrate dead times using soft X-rays (Al Ka and Mg Ka), and because of issues with beam damage and subsequent curiosity about the effects of different emission energies, we quickly moved to looking at Zn Ka, Se La and Te La on more electrically conductive materials.

However, now that the software has been improved to completely automate the acquisition of these "constant k-ratio" datasets (with a y-axis stage increment for each beam current sample setup), yesterday we acquired some additional data sets, specifically Ti Ka on all 5 spectrometers, using Ti metal as the primary standard and TiO2 as the secondary standard over a range of beam currents.

(https://probesoftware.com/smf/gallery/1_13_06_22_9_03_32.png)

These k-ratios were calculated using the *same* dead time constants from the Zn, Se and Te calibration runs, which is pretty good confirmation that emission energy doesn't seem to be a big factor in dead time. At least for Cameca instruments.  Unfortunately we still have no data from any instruments other than the Oregon instrument, but I am very much looking forward to seeing data from other instruments, especially JEOL instruments. 

The reason I think that different emission energies *might* affect JEOL instruments more (mainly based on reports years ago from Paul Carpenter on his 8200 instrument) is that Cameca uses an "enforced" dead time circuit that forces all pulses to some integer duration, say 3 usec. This circuit does not force the pulse width to exactly that value, hence the Cameca software includes a non-integer tweak to the software dead time correction.  In any case, this electronic feature might help keep the pulse widths more consistent as a function of emission line energy.

Please note that one can see several artifacts in the above constant k-ratio plot.  The first is the anomaly at 60 nA.  It's interesting, as we avoided performing any measurements around 40 nA because we had been seeing a similar anomaly there. However it seems to also appear at 60 nA, perhaps when the picoammeter switches from the 5 to 50 nA range to the 50 to 500 nA range?  We should perhaps try some measurements going from high beam currents to low beam currents.

Note also that spectrometer 3, using an LLIF Bragg crystal, seems to yield significantly different k-ratios (by a couple of percent) than the other spectrometers, including a normal LiF Bragg crystal on spectrometer 5. I suspect that spectrometer 3 has some alignment issues, which is interesting since we just had maintenance performed by Cameca, but perhaps the problem is asymmetrical Bragg diffraction. The large area crystals do seem to be more susceptible to these sorts of artifacts.

On the Heinrich paper, I had not seen this method before, thanks for sharing that.  I will definitely give that a try. With these recent Probe for EPMA software features (running multiple setup automatically one at a time and implementing a y stage axis bump for each sample setup) this is now a very easy thing to do.  I hope you also will "fire up" PFE with this new "super high precision" dead time expression and see what you obtain on your instrument for these constant k-ratio measurements.

In your previous comment you also mentioned your concerns with making one adjustment for separate calibration issues, and I agree completely. Maybe you missed my earlier discussion of that very point, where I said that I have concerns with making one adjustment for both dead time calibration and picoammeter linearity. But it soon became clear after some experimentation that adjusting the dead time constant (to improve the consistency of k-ratios over a large range of beam currents) did not actually remove the picoammeter miscalibrations, it just made them much more clearly visible.  See this post for that data:

https://probesoftware.com/smf/index.php?topic=1466.msg10912#msg10912

So in the above post, the first plot (in the quotation area) is the constant k-ratio plot showing some small anomalies after the dead time has been adjusted to yield the most consistent k-ratios over the range of beam current for each spectrometer.

What is interesting are the following *on-peak* intensity plots (also DT corrected) of the different spectrometers, all showing the same variation, which seems to be related to the different picoammeter ranges (the cps/nA intensity offset occurring on all spectrometers at around 40 nA).  I find that very interesting; it suggests to me that our picoammeter ranges require some adjustment.  The only time one might be compensating for picoammeter miscalibration with this dead time adjustment is if the picoammeter were non-linear in a very linear manner!   But that would also be true for the traditional dead time calibration method using a single material (and single emission line).

As for the more recent simultaneous k-ratio observations those are simply a nice side benefit of these constant k-ratio measurements. And unsurprisingly these simultaneous k-ratio offsets seem to be very consistent over the range of beam currents just as one would expect from a spectrometer/crystal alignment/effective takeoff angle issue(s).

I am really stoked at how useful these constant k-ratio measurements seem to be and I really love how by using k-ratio units we obtain very intuitive plots of the thing we actually care about in our instrument performance, that is: k-ratios!  I look forward to measurements from your JEOL instrument.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 13, 2022, 11:09:54 AM
This thread made me sit a few days on the SX100 and do some checking.
Producing the plots and consolidating the data will take some time.

However, at this moment I can point, with 100% certainty, to a few problems of non-linearity:
1. The differential PHA mode with a wide window, widely used and evangelized on this forum, will (unlike the integral method) introduce non-linearity at high count rates, as the PHA "peaks" of double and triple pulse pile-ups move into the PHA window. That makes the counting particularly prone to random fluctuations of temperature and pressure. It would be better to use integral mode (simpler), or a narrow window (moving with the peak); the second would have pseudo-expandable dead time behavior. The count rate with wide-window PHA drops to 95% of the integral value in the worst case. I see absolutely no advantage of a wide window vs. integral, as integral will have a simple parabola shape in the beam current vs. intensity plot, where wide-window PHA will have a similar parabola with distortions (waves) at high current. The plots in this thread do not catch that, as they jump from 140 to 200 nA without smaller steps in between.

Hi SG,
Looking forward to your data!   Hopefully you can also utilize this new "super high precision" dead time expression. I found that the traditional expression rapidly fails above 50K cps. See here for an example:

https://probesoftware.com/smf/index.php?topic=1466.msg10909#msg10909

I also actually agree with your admonition against using differential mode for these high current k-ratio measurements.  All the constant k-ratio measurements I have done over the last few weeks have used integral mode.

2. This proposed factorial math model does not work well. If the higher count rates are fitted correctly, then the lower count rates are overestimated. In particular, if point 1 is ignored, it can produce a wrong fit at both high and low currents.

OK, here we can disagree, and the data I have supports my position.  As for the math, you must have made a mistake in your calculations, because the dead time correction is a simple probability calculation, and the Taylor expansion series rigorously describes these probabilities.  As you can see from the most recent data in the plot above in my response to Brian, the lower beam current k-ratios seem to be very much in agreement with each other.  What sort of issues are you seeing on your instrument? 

And here is a plot also from yesterday showing the k-ratio for Ti metal as primary standard and SrTiO3 as a secondary standard, again showing the consistency in the k-ratios at lower beam currents, again using the "super high precision" expression:

(https://probesoftware.com/smf/gallery/395_13_06_22_10_27_05.png)

Doesn't seem to be hurting the lower count rates to my eye. The traditional dead time expression seems to start failing even at moderate beam currents on my LPET using Ti Ka for example.

3. Is the 2nd point a baseless claim? How, then, to explain those dead times of 2.9 us while the hardware blanks pulses for 3 us? Unless this SX100 is accelerated to relativistic speeds or has a black hole under it, there is no physical way for pulses to be passed before unblanking. Rather, it evidences over-fitting of that method at low currents (actually at low count rates; we should not care about beam current at all), where count rates are overestimated. I already shared the Jupyter notebooks with the MC simulation in another thread. There it was clear that the formula overestimates the rate at low count rates.

Well there must be a black hole underneath my instrument as it's not at all clear to me.   ;D

I would simply attribute these values being slightly less than exactly 3 usec to the fact that the electronics themselves can be miscalibrated.  Simply put: how do we know these "blanking" pulses are *exactly* 3 usec?  Knowing nothing about the electronic details, I might ask: exactly how good are those resistor values?  I suspect they might be a little more or a little less than the specified integer dead times.  The dead time calibration simply measures this nominal enforced pulse width empirically.

Let's do an experiment. Here are the k-ratios for Spec 1 (PET) looking at Ti Ka, using the "empirically" found dead time of 2.85 usec:

(https://probesoftware.com/smf/gallery/395_13_06_22_10_49_03.png)

Looks OK, but clearly as pointed out previously there may be some picoammeter adjustments necessary based on the simple count rate plots in previous posts.  Now let's change it to 3.0 usec as you suggest:

(https://probesoftware.com/smf/gallery/395_13_06_22_10_50_16.png)

Well that definitely looks worse to my eye.  Forgive me but I guess my instrument has a black hole underneath it!   And now let's try the traditional dead time expression with 3 usec:

(https://probesoftware.com/smf/gallery/395_13_06_22_10_55_45.png)

Now that's even worse than before.  I'm not saying this is all figured out, that's why more data from more instruments would be helpful. Let's see some constant k-ratio data from your instrument.  Here's mine again using the "super high precision" Taylor Series expansion expression for DT correction:

(https://probesoftware.com/smf/gallery/395_13_06_22_11_03_11.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 17, 2022, 11:06:43 AM
So here's a very different use case of the constant k-ratio method acquired by Ying Yu at University of Queensland.

She has an older JEOL 8200 which doesn't have any large area crystals, and of course JEOL dead time constants tend to be around half of Cameca's, so that's another advantage.

So here is a data set using Cu Ka on CuFeS2, with Cu metal as the primary standard, on LIF, going up to 120 nA, using the traditional dead time expression:

(https://probesoftware.com/smf/gallery/395_17_06_22_10_57_21.png)

Pretty constant I'd say. It helps that her DT constants are around only 1.5 usec.  And here is the same data but plotted using the super high precision dead time expression:

(https://probesoftware.com/smf/gallery/395_17_06_22_10_57_35.png)

If you look very closely you can see that the data points on the right, at the highest beam currents, are very slightly lower.  How is this possible?  Well, even at 120 nA on pure Cu, she's only getting around 30K cps of Cu Ka!

So in this case of an old JEOL instrument with very low count rates, the normal (traditional) dead time expression is good enough. 

To re-iterate, at dead times from 1 to 2 usec I would expect the traditional (normal) single-term expression to be good to around 50K cps, though Cameca instruments, with dead times around 3 usec, might benefit from the two-term high precision expression.

However, over 50K cps the high precision (two term) expression should perform better, and at over 100K cps, the super high precision (multi-term) expression will probably be necessary.  I guess the bottom line is that no matter what your count rates are, the multi-term "super high precision" dead time expression won't hurt, and in many cases (large area crystals and/or higher beam currents and/or maybe Cameca instruments in general), it will definitely help!
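These rules of thumb can be checked with back-of-envelope arithmetic: the size of the second-order correction term (N'*tau)^2/2 indicates roughly how much the single-term expression is missing. A quick sketch (the 30K cps / 1.45 usec case corresponds to the JEOL data above; the 3 usec cases to a Cameca-style dead time):

```python
def dt_terms(nprime, tau):
    # First- and second-order dead time correction terms, as fractions.
    x = nprime * tau
    return x, x * x / 2.0

for nprime, tau in ((30e3, 1.45e-6), (50e3, 3.0e-6), (200e3, 3.0e-6)):
    t1, t2 = dt_terms(nprime, tau)
    print(f"N'={nprime:7.0f} cps, tau={tau*1e6:.2f} us: "
          f"1st-order {100*t1:5.1f}%, 2nd-order {100*t2:6.2f}%")
```

At 30K cps and 1.45 usec the second-order term is under 0.1%, so the classic expression is fine; at 50K cps and 3 usec it is already around 1%; and at 200K cps and 3 usec it reaches tens of percent, which is why the higher-order terms become unavoidable.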

I'd be very interested in additional constant k-ratio measurements from any one willing to do some of these measurements.  The latest instructions for acquiring constant k-ratios are attached below.

Edit by John: updated pdf attachment
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 18, 2022, 09:49:55 AM
In the above post, we showed data from Ying Yu's lab which demonstrated no change in intensities between using the normal (traditional) dead time expression and the "super high precision" dead time expression at low to moderate beam currents, and only the slightest intensity differences at high beam currents.

This is due to the fact that her instrument is only producing ~30k cps on pure Cu and pure Fe metal even at 120 nA beam current!  So for her instrument with its 1.45 usec dead times, the traditional dead time expression is more than sufficient. Though of course it doesn't hurt to utilize the "super high precision" dead time expression as the default (maybe they will utilize beam currents of 200 nA at some point).

Meanwhile, on our SX100 instrument we remeasured Ti on Ti metal, TiO2 and SrTiO3 up to 200 nA, *and* we also acquired an EDS spectrum with each data point using our Thermo Pathfinder EDS spectrometer (10 sq. mm). At 200 nA this results in ~220K cps on our PET crystals, ~600K cps (!) on our LPET crystal and ~360K cps on our EDS detector.   And please note, for Ti Ka by EDS, the ~360K cps is not the whole-spectrum count rate, it's merely the Ti Ka *net intensity* count rate!   :o

The results for all 5 WDS spectrometers using the "super high precision" dead time expression, and also the EDS detector (of course the EDS detector is correcting for dead time losses using hardware), can be seen here:

(https://probesoftware.com/smf/gallery/395_18_06_22_9_41_46.png)

The WDS spectrometers all look good (though with a possible asymmetrical diffraction outlier for the LLIF crystal on spectrometer 3), and most impressively, the EDS detector did quite well up until around 200 nA, when the "wheels start to come off" at around 85% DT.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 20, 2022, 11:24:31 AM
This is insane.  Here are quant calculations using the "super high precision" dead time expression on the most recent data set where I measured Ti Ka on Ti metal, TiO2 and SrTiO3.

Note that the absolute value of the k-ratio does not matter for this "constant k-ratio" dead time calibration method. The only thing we care about is that the k-ratio remains constant as a function of beam current.

Also, for quantification I've utilized the "aggregate" feature in Probe for EPMA to combine the Ti Ka intensities from all 5 spectrometers, because the matrix correction would be non-physical if the Ti intensities from 5 spectrometers were each added to the specified strontium and oxygen concentrations during the matrix iteration.

So here is Ti Ka measured on 5 spectrometers, using Ti metal as a primary standard measured at 12 nA, and TiO2 as a secondary standard measured at 200 nA:

St   22 Set   9 TiO2 synthetic, Results in Elemental Weight Percents
 
ELEM:       Ti      Ti      Ti      Ti      Ti       O
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL    SPEC
BGDS:      EXP     EXP     LIN     EXP     LIN
TIME:    60.00     .00     .00     .00     .00     ---
BEAM:   201.36     .00     .00     .00     .00     ---
AGGR:        5                                     ---

ELEM:       Ti      Ti      Ti      Ti      Ti       O   SUM 
XRAY:     (ka)    (ka)    (ka)    (ka)    (ka)      ()
   247  60.140    .000    .000    .000    .000  40.050 100.190
   248  60.148    .000    .000    .000    .000  40.050 100.198
   249  60.121    .000    .000    .000    .000  40.050 100.171
   250  60.137    .000    .000    .000    .000  40.050 100.187
   251  60.084    .000    .000    .000    .000  40.050 100.134
   252  60.088    .000    .000    .000    .000  40.050 100.138

AVER:   60.120    .000    .000    .000    .000  40.050 100.170
SDEV:     .027    .000    .000    .000    .000    .000    .027
SERR:     .011    .000    .000    .000    .000    .000
%RSD:      .05   .0000   .0000   .0000   .0000     .00

PUBL:   59.939    n.a.    n.a.    n.a.    n.a.  40.050  99.989
%VAR:      .30     .00     .00     .00     .00     .00
DIFF:     .181     ---     ---     ---     ---    .000
STDS:      522       0       0       0       0     ---


and here is SrTiO3, again using Ti metal as a primary standard measured at 12 nA, and SrTiO3 as a secondary standard measured at 200 nA:

St  251 Set   9 Strontium titanate (SrTiO3), Results in Elemental Weight Percents
 
ELEM:       Ti      Ti      Ti      Ti      Ti      Sr       O
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL    SPEC    SPEC
BGDS:      EXP     EXP     LIN     EXP     LIN
TIME:    60.00     .00     .00     .00     .00     ---     ---
BEAM:   200.42     .00     .00     .00     .00     ---     ---
AGGR:        5                                     ---     ---

ELEM:       Ti      Ti      Ti      Ti      Ti      Sr       O   SUM 
XRAY:     (ka)    (ka)    (ka)    (ka)    (ka)      ()      ()
   253  26.226    .000    .000    .000    .000  47.742  26.154 100.122
   254  26.244    .000    .000    .000    .000  47.742  26.154 100.140
   255  26.228    .000    .000    .000    .000  47.742  26.154 100.124
   256  26.218    .000    .000    .000    .000  47.742  26.154 100.114
   257  26.209    .000    .000    .000    .000  47.742  26.154 100.105
   258  26.209    .000    .000    .000    .000  47.742  26.154 100.105

AVER:   26.222    .000    .000    .000    .000  47.742  26.154 100.118
SDEV:     .013    .000    .000    .000    .000    .000    .000    .013
SERR:     .005    .000    .000    .000    .000    .000    .000
%RSD:      .05   .0000   .0000   .0000   .0000     .00     .00

PUBL:   26.103    n.a.    n.a.    n.a.    n.a.  47.742  26.154  99.999
%VAR:      .46     .00     .00     .00     .00     .00     .00
DIFF:     .119     ---     ---     ---     ---    .000    .000
STDS:      522       0       0       0       0     ---     ---


I am attempting to measure these different emission lines at the same detector bias and only adjusting the gain to place the PHA peak a little to the right of center at a moderate beam current.  The idea being that as the count rate increases and the PHA experiences "pulse depression", the PHA peak will shift to the left, but still be within the range of the counting electronics.  All measurements are also done using "integral" mode.

I am examining the data for trends in the dead time constant as a function of emission energy, and I think I may be seeing something, but only between the 1 atm and 2 atm flow detectors.

It would be great to get some constant k-ratio measurements on a modern JEOL instrument with large area crystals with count rates exceeding 100K cps to compare...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 21, 2022, 09:16:06 AM
Now that our dead times are pretty well adjusted using the constant k-ratio method, we might be able to observe more subtle miscalibration issues such as the picoammeter calibration.

If one's picoammeter is miscalibrated, then the effect should be seen in all 5 spectrometers. Here are some plots where the intensities for all 5 spectrometers were aggregated using the aggregate feature in Probe for EPMA and the weight percent quantified. First for TiO2 using Ti metal as a primary standard (as a function of beam current):

(https://probesoftware.com/smf/gallery/395_21_06_22_9_09_45.png)

and here for SrTiO3 again using Ti metal as a primary standard:

(https://probesoftware.com/smf/gallery/395_21_06_22_9_10_02.png)

Although the effect is rather small we can see the offset between the 5 to 50 nA and the 50 to 500 nA ranges. We are attempting to obtain a high accuracy current source to calibrate our picoammeter and will let you know how it goes.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 25, 2022, 08:21:43 AM
I was able to acquire a data set on all 5 spectrometers this time for Si Ka using PET and TAP Bragg crystals up to 200 nA.

Here are k-ratios for all 5 spectrometers using SiO2 as the primary standard and benitoite as the secondary standard, and again we can see that our spectrometer 3 with a large area crystal is offset from the other spectrometers as it was for Ti Ka:

(https://probesoftware.com/smf/gallery/395_25_06_22_8_16_12.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on June 28, 2022, 09:26:55 AM
Again, this time combining the Si Ka intensities from all 5 spectrometers (PET and TAP) using the "aggregate" feature in Probe for EPMA (to check for picoammeter calibration issues), we can see that the quantification is fairly reasonable, but it appears there is a small picoammeter mis-calibration between the 5 to 50 nA and the 50 to 500 nA ranges:
 
(https://probesoftware.com/smf/gallery/395_25_06_22_8_16_28.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 03, 2022, 12:01:35 PM
Here are some k-ratio plots using SiO2 as a primary standard and benitoite as a secondary standard using the three different dead time expressions in Probe for EPMA.

Regardless of the k-ratios we obtain, the essential point is that these k-ratios should remain constant, no matter what the count rates (beam currents) are. The plots shown below, as mentioned in my reply to Brian Joy in the Heinrich Ka/Kb ratio dead time method topic, seen here:

https://probesoftware.com/smf/index.php?topic=1470.msg10971#msg10971

are relatively immune to the accuracy of the picoammeter calibration because each pair of primary and secondary standards is measured together at each beam current.  This is a point I had not emphasized enough in previous posts on this constant k-ratio method for the determination of dead time constants: as long as the beam current is stable during each measurement, the constancy of the k-ratios across beam currents reveals the correct dead time constant.
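In code, "find the tau that makes the k-ratios constant" can be as simple as a grid search (synthetic data with an assumed true tau of 1.5 usec; this is only a sketch, not the actual solver used in Probe for EPMA):

```python
# Grid search for the dead time constant: the best tau is the one that
# minimizes the spread of the k-ratio across beam currents. The count
# rates below are synthetic, generated from an assumed true tau.

def corrected(n_meas, tau):
    return n_meas / (1.0 - n_meas * tau)  # traditional expression

true_tau = 1.5e-6  # seconds (assumed for the demo)
currents = [10, 25, 50, 100, 200]  # nA
prim = [3000.0 * i / (1.0 + 3000.0 * i * true_tau) for i in currents]  # "measured" Ti metal
sec = [1650.0 * i / (1.0 + 1650.0 * i * true_tau) for i in currents]   # "measured" TiO2

def kratio_spread(tau):
    ks = [corrected(s, tau) / corrected(p, tau) for s, p in zip(sec, prim)]
    return max(ks) - min(ks)

# Search 0.01 to 5.00 usec in 0.01 usec steps.
best_tau = min((t * 1e-8 for t in range(1, 501)), key=kratio_spread)
print(f"best tau = {best_tau * 1e6:.2f} usec")  # recovers 1.50 usec
```

Any dead time expression can be substituted for `corrected`; the constancy criterion is the same.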

Here is the traditional dead time expression using SiO2 as a primary standard and benitoite as the secondary standard, where each k-ratio (pair of materials) is measured at the same beam current:

(https://probesoftware.com/smf/gallery/395_03_07_22_11_54_38.png)

and here for the two-term (Willis, 1993) dead time expression:

(https://probesoftware.com/smf/gallery/395_03_07_22_11_54_57.png)

Note that the low beam current k-ratios are unchanged, but the high beam current k-ratios are much improved (more constant).

And here is the six-term expansion (Taylor series) of the dead time expression:

(https://probesoftware.com/smf/gallery/395_03_07_22_11_55_21.png)

Not bad at all considering our 200 nA measurements yield ~120K cps on SiO2!
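For reference, here is a sketch of how the three expressions compare, assuming the n-th term of the series is (N'*tau)**n / n! (this generalizes the two-term Willis form N'τ + N'²τ²/2 quoted elsewhere in this thread; the exact form used in Probe for EPMA should be checked against the software itself):

```python
import math

# Dead time correction keeping `terms` terms of the probability series:
#   N = N' / (1 - sum_{n=1..terms} (N'*tau)**n / n!)
# terms=1 is the traditional expression, terms=2 the two-term (Willis)
# form, terms=6 the "super high precision" expression. The factorial
# denominators are an assumption for this sketch.

def dead_time_correct(n_meas, tau, terms=1):
    s = sum((n_meas * tau) ** n / math.factorial(n) for n in range(1, terms + 1))
    return n_meas / (1.0 - s)

tau = 1.5e-6  # ~1.5 usec, JEOL-like
for n_meas in (10_000, 100_000, 300_000):  # measured cps
    n1 = dead_time_correct(n_meas, tau, 1)
    n2 = dead_time_correct(n_meas, tau, 2)
    n6 = dead_time_correct(n_meas, tau, 6)
    print(f"{n_meas:7d} measured -> {n1:9.0f} / {n2:9.0f} / {n6:9.0f} corrected")
```

At 10K cps the three expressions agree to well under 0.1%, while at 300K cps measured (N'τ = 0.45) each added group of terms raises the corrected rate noticeably, which is why the single-term expression fails first at high count rates.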

The problems with the picoammeter calibration will not become apparent until one plots the k-ratios of the benitoite secondary standard using a single primary standardization at a low beam current as seen here:

(https://probesoftware.com/smf/gallery/395_03_07_22_11_55_43.png)

Here we can see the approximately 1 % difference in the picoammeter calibration between the 5 to 50 nA range and the 50 to 500 nA range. We are hoping to obtain a high accuracy current source in the next few weeks and will let you know if we can improve this miscalibration between the picoammeter ranges.
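A toy model of why this single-standard plot isolates the picoammeter while the paired plots hide it (the 1% high-range offset and the count rates below are invented for illustration):

```python
# Toy picoammeter test: suppose the picoammeter reads 1% low on its
# high range (above 50 nA). k-ratios from standards PAIRED at the same
# current cancel the current entirely and hide the offset; k-ratios
# against a single low-current primary standardization expose it.

RANGE_ERROR = 0.99  # hypothetical 1% miscalibration of the high range

def reported_current(true_na):
    return true_na * (RANGE_ERROR if true_na > 50 else 1.0)

RATES = {"Ti": 3000.0, "TiO2": 1650.0}  # idealized cps/nA, dead-time-free

def norm_intensity(material, true_na):
    """Counts normalized to the *reported* beam current (cps/nA)."""
    return RATES[material] * true_na / reported_current(true_na)

prim_10 = norm_intensity("Ti", 10)  # single 10 nA primary standardization
for i in (10, 100):
    paired = norm_intensity("TiO2", i) / norm_intensity("Ti", i)
    single = norm_intensity("TiO2", i) / prim_10
    print(f"{i:3d} nA  paired k = {paired:.4f}  single-primary k = {single:.4f}")
```

The paired k-ratio stays at 0.5500 on both ranges, while the single-primary k-ratio jumps from 0.5500 to ~0.5556 (= 0.55/0.99) above 50 nA: the ~1% range offset appears only in the single-standard plot.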
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 05, 2022, 10:23:30 AM
OK, this is a little bit insane, but I decided to run the benitoite and SiO2 k-ratios up to 400 nA of beam current. Just to see where the "wheels come off"!   ;D

(https://probesoftware.com/smf/gallery/395_05_07_22_10_13_21.png)

As you can see, things are pretty darn good up to 250 nA, but then after that the instrument automatically switches from the 150 um beam regulation aperture to the 200 um unregulated aperture, and then things aren't quite as good, but still only off by about 5%, which is probably fine for ultra high sensitivity trace element work.

Please keep in mind that even at 250 nA on the LTAP Bragg crystal we are getting over 400K cps coming into the detector!  And the k-ratios are essentially constant from 10 nA to 250 nA!    :o

Though maybe some aperture alignment or calibration work on our picoammeter would take care of this 5% variance with the unregulated aperture. I will let you know what we find out.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Anette von der Handt on July 07, 2022, 07:05:18 PM
Here is some data from a JEOL probe: Newly installed JEOL JXA-iHP200F at University of British Columbia.

Ti Ka on LIFL (Spec 2 & 5) and PETL (Spec 3) at 15 kV. K-ratios on synthetic TiO2 and Ti metal.

Normal Deadtime Correction
(https://probesoftware.com/smf/gallery/17_07_07_22_6_51_04.png)

Precision Deadtime Correction:
(https://probesoftware.com/smf/gallery/17_07_07_22_6_52_03.png)

Super Precision Deadtime Correction:
(https://probesoftware.com/smf/gallery/17_07_07_22_6_52_37.png)

All scaled the same. Count rates at 200nA are 2LIFL: 46700 cps, 3PETL: 288800 cps, 5LIFL: 32300 cps.

Very convincing win for using the Super Precision Deadtime correction. I almost want to turn it into an animated gif.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 08, 2022, 10:25:35 AM
Cool data!   Spectrometer 3 with the large PET really shows the benefits of the six term dead time expression very nicely!   With the traditional expression the k-ratios on spec 3 start to "head south" around 50 nA. 

A couple of other observations that I'm sure you also see:

It also demonstrates the "simultaneous k-ratio" test using the same data set!  That is to say, spectrometer 2 large LIF either has an alignment problem or perhaps an asymmetrical diffraction issue (just as I see on my spec 3 with a large LiF).  Of course it could be that the other two spectrometers are off and spec 2 is fine, but if we take a look at a quick calculation in CalcZAF for TiO2 (because you used a pure element as the primary standard), we see a calculated k-ratio of around 0.55:

SAMPLE: 32767, TOA: 40, ITERATIONS: 0, Z-BAR: 16.39299

 ELEMENT  ABSCOR  FLUCOR  ZEDCOR  ZAFCOR STP-POW BKS-COR   F(x)u      Ec   Eo/Ec    MACs
   Ti ka   .9950  1.0000  1.0861  1.0806  1.1251   .9653   .9770  4.9670  3.0199 91.5617
   O  ka  6.6118  1.0000   .8910  5.8910   .8469  1.0521   .1060   .5317 28.2114 13655.4

 ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA KILOVOL                                       
   Ti ka  .00000  .55477  59.950   -----  33.333   1.000   15.00                                       
   O  ka  .00000  .06798  40.050   -----  66.667   2.000   15.00                                       
   TOTAL:                100.000   ----- 100.000   3.000

So it appears to me that it must be an "effective takeoff" issue of some kind for spec 2.  This is a good example of why we need consensus k-ratios as Nicholas Ritchie has suggested.

I'm also very pleased to see that apparently the new JEOL instrument does not show any beam current "glitches" within this range.  It would be worth seeing a plot of the "picoammeter test" using the same data, but where you disable all the Ti standards except for one at 10 nA, and then plot the k-ratios for TiO2 using that single standard.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 08, 2022, 06:47:08 PM
So Anette sent me her MDB files and I plotted the one remaining test on her constant k-ratio data set using Ti Ka, which is the test where one disables all primary standards except one, say at 10 nA, and then analyzes all the secondary standards using that single primary standard.

This is essentially a test of the picoammeter accuracy (once the dead time constant is properly determined).  Here is the data using a single Ti metal standard at 10 nA and all the TiO2 secondary standards from 10 nA to 200 nA. Remember on spectrometer 2 LPET,  this is over 110K cps at 200 nA!

(https://probesoftware.com/smf/gallery/395_08_07_22_6_41_24.png)

Not too bad I'd say!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 10, 2022, 12:53:14 PM
Looking through her data, I note that Anette now has the EPMA record for highest count rate with a constant k-ratio:

(https://probesoftware.com/smf/gallery/395_10_07_22_12_51_55.png)

Spectrometer 3 with a PETL crystal with 540K cps on Ti metal with ~1% accuracy!

"Super high" precision dead time correction expression rules!

 8)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 13, 2022, 09:43:18 AM
Looking through her data, I note that Anette now has the EPMA record for highest count rate with a constant k-ratio:

(https://probesoftware.com/smf/gallery/395_10_07_22_12_51_55.png)

Spectrometer 3 with a PETL crystal with 540K cps on Ti metal with ~1% accuracy!

"Super high" precision dead time correction expression rules!

 8)

OK, so Anette and I went over this Ti data from her JEOL instrument again and we found a small mystery regarding the dead time losses at very high beam currents that maybe someone (SEM geologist/Brian Joy?) can help us with.

The graph quoted above (from the previous post) isn't quite correct because that plot of k-ratios is not based on the standards (primary and secondary) being measured at the same beam currents, but rather it's the k-ratios using a primary standard measured at one beam current, and all the secondary standards (TiO2) being measured from 10 to 200 nA.  So it's really a plot of the picoammeter accuracy, which does look very good actually.    :)

But the claim of 540K cps on the Ti standard at 200 nA is not correct because the Ti metal standard used in the graph was measured at a lower beam current.  The secondary TiO2 standards however were measured at all the different beam currents, and the count rate on the TiO2 secondary standard at 200 nA would be around half that of the metal so ~250K cps.  Which of course is still pretty impressive.

However a plot of the constant k-ratios using primary and secondary standards (Ti and TiO2) measured at the same beam currents looks like this:

(https://probesoftware.com/smf/gallery/395_13_07_22_9_21_54.png)

It is still quite constant over the range of beam currents, but there is a small uptick in the k-ratios on Sp 3 using a PETL crystal at the highest beam currents.  So what is that uptick from?  Note for the Ti metal standard at 180 and 200 nA, the count rate is indeed over 500K cps!

Well at first we thought maybe the expanded dead time correction needed even more terms of the Taylor expansion series, so we increased them from 6 to 12, and it actually did slightly help the k-ratios, but just barely.  In fact we can see the problem is in the primary standard counts as seen here:

(https://probesoftware.com/smf/gallery/395_13_07_22_9_22_15.png)

The last standard intensity was measured at 200 nA, the one above that at 180 nA, etc.

So even the expanded dead time correction starts to fail at count rates above 500K cps, but only by a percent or so (k-ratio 0.55 to 0.56), which is not even as much as the offset visible in Sp 2 (red circles), probably from an effective takeoff angle problem on that spectrometer.

So we have to wonder what mechanism is causing the dead time to increase at counting rates over 500K cps on Sp 3 (PETL).  Any ideas?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 13, 2022, 02:45:47 PM
However a plot of the constant k-ratios using primary and secondary standards (Ti and TiO2) measured at the same beam currents looks like this:

(https://probesoftware.com/smf/gallery/395_13_07_22_9_21_54.png)

It is still quite constant over the range of beam currents, but there is a small uptick in the k-ratios on Sp 3 using a PETL crystal at the highest beam currents.  So what is that uptick from?  Note for the Ti metal standard at 180 and 200 nA, the count rate is indeed over 500K cps!

I don’t necessarily have an answer, but I’ve modified my plot of N’12/N’32 versus N’12 for Ti to show both the uncorrected data and corrections based on N = N’/(1-N’τ) and N = N’/(1-(N’τ+N’²(τ²/2))).  (The measured count rate for Ti Kβ on channel 2/LiFL is represented by N’12, and the measured count rate for Ti Kα on channel 5/LiFH is represented by N’32.)  Note that the non-linear dead time correction introduces systematic error beginning at relatively low count rate, with the fixed ratio (0.0957) under-predicted.  Keep in mind that essentially all non-linear behavior is accounted for by the Ti Kα measurement on channel 5/LiFH (N’32).
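The zero-count-rate extrapolation described above can be sketched as an ordinary least-squares fit (synthetic numbers with an assumed true Kβ/Kα ratio of 0.0957; for simplicity only the high-rate Kα channel is given dead time losses, since the low-rate Kβ channel is nearly linear):

```python
# Ratio-method sketch: with losses only on the Ka channel, in the linear
# region N'12/N'32 = r + tau*N'12 (r = the true Kb/Ka ratio), so a
# straight-line fit extrapolated to zero count rate must recover r.
# Synthetic data only.

true_ratio, tau = 0.0957, 1.5e-6

n12 = [1000.0 * i for i in range(1, 9)]  # measured Kb cps (low rate, ~linear)
n32 = [(x / true_ratio) / (1.0 + (x / true_ratio) * tau) for x in n12]  # Ka with losses
ratios = [a / b for a, b in zip(n12, n32)]

# Ordinary least squares y = m*x + c, done by hand.
npts = len(n12)
sx, sy = sum(n12), sum(ratios)
sxx = sum(x * x for x in n12)
sxy = sum(x * y for x, y in zip(n12, ratios))
m = (npts * sxy - sx * sy) / (npts * sxx - sx * sx)
c = (sy - m * sx) / npts
print(f"zero-count-rate intercept = {c:.4f}")  # recovers 0.0957
```

Any corrected ratio falling below this intercept would signal an unphysical over-correction.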

I need to see just the right kind of plot in order to approach a problem like this.  I like to see the uncorrected ratios plotted along with the corrected values for the different models for one spectrometer or spectrometer pair at a time, and I like to see a lot of data.  I would have collected more than 55 ratios, but I didn’t want to spend all night in the lab.

What are the actual measured values of Ti Kα cps on Anette’s channel 3/PETL?  Is it possible that your plot illustrates the approach to X-ray counter paralysis?  I find that I reach this point somewhere in the vicinity of 300 kcps (uncorrected), but I haven’t explored this limit in detail.

Do you happen to have the Willis (1993) reference?  It’s pretty obscure.

(https://probesoftware.com/smf/gallery/381_13_07_22_2_26_52.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 13, 2022, 03:24:57 PM
I think you could be correct that the detector itself is getting saturated above 500K cps.

I compared the traditional correction with the expanded correction and I'm getting a smaller difference at low count rates.  Here is the traditional expression on pure Ti metal at 10 nA:

ELEM:       Ti      Ti      Ti
STKF:   1.0000  1.0000  1.0000     ---
STCT:   445.90 2819.27  293.08     ---

And here with the six term expanded expression:

ELEM:       Ti      Ti      Ti
STKF:   1.0000  1.0000  1.0000     ---
STCT:   445.91 2821.42  293.08     ---

That's a difference of 0.0007 or 0.07% on the PETL spectrometer.  On the lower count rate channels the difference is barely visible, if at all, in 5 significant figures. Was your 0.09 number the percent difference? 

I attribute this slight difference on the PETL crystal at 2800 cps/nA to the fact that even at relatively reasonable count rates (~28K cps) the traditional expression is already failing in precision.

The Willis paper has been hard to track down.  I've attached what we found below.  The phrase "dead time" does actually appear in the paper but it's on optimizing neural nets!

As requested, I turned off the dead time correction in Probe for EPMA completely (it's a checkbox under Analytical Options) and we obtain the following very *non-constant* k-ratios:

(https://probesoftware.com/smf/gallery/395_13_07_22_3_19_15.png)

 :o
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 13, 2022, 10:55:25 PM
Hi John,

The uncorrected data from Anette appear to indicate that nothing unusual is happening in the channel 3 X-ray counter; the measured k-ratio trends upward monotonically with beam current, as is expected.  This means that the strange upward swing at high count rate (not current) in your plot of corrected k-ratio versus current is likely due to your model for N.  This is exactly why I advocated for plotting in the manner that I did two posts above.  I was even able to point out unphysical behavior in the 2nd order expression for N, manifested as clear negative deviation from the ratio, N12/N32, established in my linear fit.

Brian
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 14, 2022, 07:59:50 AM
Something is happening above 500K cps. Try the Heinrich linear method at count rates over 500K cps and let us know what you see.

It's clear to me at least that the additional terms of the Taylor expansion series in the dead time correction have an enormous benefit in allowing us to maintain constant k-ratios over a much larger range of count rates (beam currents) than before.  This is particularly important for new instruments with large area Bragg crystals that can easily attain these 100K cps count rates at moderate conditions.

(https://probesoftware.com/smf/gallery/395_14_07_22_7_50_07.png)

(https://probesoftware.com/smf/gallery/395_14_07_22_7_56_21.png)
 
(https://probesoftware.com/smf/gallery/395_14_07_22_7_56_34.png)

You'll notice that the extra terms do not affect the lower count rate channels.  But they do help enormously with the very high count rates on spectrometer 3. In fact one should note in the last (six term expression) plot that the spec 3 k-ratios follow spec 5 wonderfully closely, at least at count rates under 500K cps.

I do think you're right about the paralyzing behavior of the detector at these very high count rates.  You said you saw this occur yourself at count rates over 300K cps.  Why then do you not think it happens at count rates above 500K cps?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 14, 2022, 07:20:19 PM
Something is happening above 500K cps. Try the Heinrich linear method at count rates over 500K cps and let us know what you see.

It's clear to me at least that the additional terms of the Taylor expansion series in the dead time correction have an enormous benefit in allowing us to maintain constant k-ratios over a much larger range of count rates (beam currents) than before.  This is particularly important for new instruments with large area Bragg crystals that can easily attain these 100K cps count rates at moderate conditions.

Yes, I’m aware that the linear model will not produce useful results at high count rate (> several tens kcps).

Your uncorrected data are difficult to interpret in part because both the primary and secondary standards require large, non-linear dead time corrections that lead to a roughly linear appearance of the uncorrected plot of k-ratio versus current.  You also haven’t presented peak searches or wavelength scans so that peak shapes can be compared at increasing count rates.

If the counter is nearing paralysis, then, obviously, the Ti Ka count rate on Ti metal will produce this effect at lower current than TiO2.  This would be manifested as increasing positive deviation from rough linearity at high current on the plot of k-ratio versus current.  If I put a ruler up to your plot, then I can in fact see the apparent k-ratio deviating in this manner (but I need a ruler to see it).

When dealing with these high count rates, it really is necessary to specify whether the stated count rate is corrected or not (like the 506 kcps on Ti metal at 180 nA); this is one advantage of plotting against specified measured or corrected count rate rather than current.  On my channel 2/PETL, I see no obvious evidence for paralysis at 200 nA when measuring Ti Ka on high-purity TiO2 (with measured count rate between 250 and 300 kcps).  When I do a peak search at 400 nA to simulate the count rate on Ti metal, I get a peak with a distinctly flat top, indicating onset of paralysis.

Considering the above, it appears likely that your k-ratios collected above 140 nA are in fact affected by abnormal counter behavior, and so my first impression of the uncorrected ratios was wrong.  (But who could blame me considering that your plot contains no explicit information on measured count rate?)  What bothers me about your k-ratio versus current plots, though, is the fact that I can see patterns in the corrected values.  For instance, why do the corrected ratios for Anette’s channels 2 and 3 decrease in similar fashion when progressing from about 40 to 100 nA?  Why does a maximum appear to occur at 40 nA for the corrected channel 5 ratios?

I think that you need to investigate your model further to see if it is producing unphysical behavior.  I’ve already pointed out a potential problem on my N12/N32 versus N12 plot for Ti (shown again below).  It is absolutely physically impossible for N12/N32 to fall below the ratio determined in my linear fit, as this fit gives the extrapolation to zero count rate (noting that I collected abundant data in the linear region).  You or somebody else absolutely needs to test the higher order models in the same fashion.  Forming a ratio of Ti Ka and Ti Kb (with Kb measured on a spectrometer that produces relatively low count rate) is especially useful because the Kb count rate can be corrected reasonably with the linear model.  If you want to stick with k-ratios, then use a secondary standard that doesn’t contain much of the element under consideration (like Fe in bornite, Cu5FeS4, while using Fe metal as the primary standard).  On my plot of the uncorrected or linearly corrected data, note that no obvious deviation from linearity occurs below 85 kcps.

(https://probesoftware.com/smf/gallery/381_13_07_22_2_26_52.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 15, 2022, 09:22:25 AM
Something is happening above 500K cps. Try the Heinrich linear method at count rates over 500K cps and let us know what you see.

It's clear to me at least that the additional terms of the Taylor expansion series in the dead time correction have an enormous benefit in allowing us to maintain constant k-ratios over a much larger range of count rates (beam currents) than before.  This is particularly important for new instruments with large area Bragg crystals that can easily attain these 100K cps count rates at moderate conditions.

Yes, I’m aware that the linear model will not produce useful results at high count rate (> several tens kcps).

Well that's sort of the point of this topic!  To reiterate:

1. Using the constant k-ratio method we can acquire k-ratios that allow us to determine the dead time constants for each spectrometer (and each crystal energy range if desired).

2. We can display the same k-ratio data using a primary standard measured at a single beam current to determine the accuracy of our picoammeter calibration.

3. We can plot k-ratios from multiple spectrometers so we can compare the effective takeoff angles of each of our spectrometers/crystals to determine our ultimate quantitative accuracy.

4. And finally, using the expanded dead time correction expression, we can correct WDS intensities at count rates up to around 500K cps with accuracy not previously possible.

Your uncorrected data are difficult to interpret in part because both the primary and secondary standards require large, non-linear dead time corrections that lead to a roughly linear appearance of the uncorrected plot of k-ratio versus current.  You also haven’t presented peak searches or wavelength scans so that peak shapes can be compared at increasing count rates.

This is what you keep saying but I really don't think you have thought this through.   There is nothing non-linear about the expanded dead time correction. The dead time expressions (all of them) are merely a logical mathematical description of the probability of two photons entering the detector within a certain period of time.

The traditional dead time expression, by utilizing only a single term of this Taylor expansion series, is simply a very crude approximation of this probability, and its accuracy limit depends on the actual dead time: Cameca instruments with roughly 3 usec dead times are probably only accurate up to around 50K cps with the traditional expression, while JEOL instruments with dead times around 1.5 usec may be able to get up to ~80K cps, as you have shown.

As Willis pointed out in 1993, by utilizing a second term in the dead time expression one can obtain better precision in this probability estimate, and we find that one can get up to count rates around 100K cps or so before the wheels come off.  Maybe a little higher on a JEOL with shorter dead times.

But by utilizing four additional terms of this probability series, we can now obtain high accuracy k-ratios at count rates approaching 500K cps.  It's just math.

As far as the effects of peak shapes go at these high currents, I would have thought that the k-ratio data speaks for itself!  But I remember now that I did do a screen capture of the PHA peak shapes looking at Mn Ka on Mn metal at 200 nA last week:

(https://probesoftware.com/smf/gallery/395_15_07_22_8_46_55.png)

The LPET count rates were over 240K cps at 200 nA. Surprisingly good I think for an instrument with 3 usec dead times!  I'll try and remember to do a wavescan at 200 nA next time I'm in the lab, but again, the accuracy of the k-ratio data tells me that we are able to perform quantitative analysis at count rates never before attainable. 

If the counter is nearing paralysis, then, obviously, the Ti Ka count rate on Ti metal will produce this effect at lower current than TiO2.  This would be manifested as increasing positive deviation from rough linearity at high current on the plot of k-ratio versus current.  If I put a ruler up to your plot, then I can in fact see the apparent k-ratio deviating in this manner (but I need a ruler to see it).

When dealing with these high count rates, it really is necessary to specify whether the stated count rate is corrected or not (like the 506 kcps on Ti metal at 180 nA); this is one advantage of plotting against specified measured or corrected count rate rather than current.  On my channel 2/PETL, I see no obvious evidence for paralysis at 200 nA when measuring Ti Ka on high-purity TiO2 (with measured count rate between 250 and 300 kcps).  When I do a peak search at 400 nA to simulate the count rate on Ti metal, I get a peak with a distinctly flat top, indicating onset of paralysis.

Considering the above, it appears likely that your k-ratios collected above 140 nA are in fact affected by abnormal counter behavior, and so my first impression of the uncorrected ratios was wrong.  (But who could blame me considering that your plot contains no explicit information on measured count rate?)  What bothers me about your k-ratio versus current plots, though, is the fact that I can see patterns in the corrected values.  For instance, why do the corrected ratios for Anette’s channels 2 and 3 decrease in similar fashion when progressing from about 40 to 100 nA?  Why does a maximum appear to occur at 40 nA for the corrected channel 5 ratios?

I think that you need to investigate your model further to see if it is producing unphysical behavior.  I’ve already pointed out a potential problem on my N12/N32 versus N12 plot for Ti (shown again below).  It is absolutely physically impossible for N12/N32 to fall below the ratio determined in my linear fit, as this fit gives the extrapolation to zero count rate (noting that I collected abundant data in the linear region).  You or somebody else absolutely needs to test the higher order models in the same fashion.  Forming a ratio of Ti Ka and Ti Kb (with Kb measured on a spectrometer that produces relatively low count rate) is especially useful because the Kb count rate can be corrected reasonably with the linear model.  If you want to stick with k-ratios, then use a secondary standard that doesn’t contain much of the element under consideration (like Fe in bornite, Cu5FeS4, while using Fe metal as the primary standard).  On my plot of the uncorrected or linearly corrected data, note that no obvious deviation from linearity occurs below 85 kcps.

Well I'm glad you now see it.  And thank you for taking the time to debate this with me. I have to say, all of this argument has actually helped me to appreciate exactly how good this new method and expression are.

The small deviations you point out are interesting and perhaps will provide additional insight into the inner workings of our instruments, but it should be noted that they are at the sub-1% level and significantly smaller than the k-ratio variations from one spectrometer to another.  The fact that we can attain 1% k-ratio accuracy up to 500K cps is, to me at least (and Anette as well), the take-home message.

Here's an idea: I can't send you Anette's data until I ask her, but perhaps your best bet for understanding this constant k-ratio method and the new dead time expression is to perform a constant k-ratio run yourself. 

You already own Probe for EPMA, so why don't you just fire it up, go to the Help menu and update it to the latest version so you have the new dead time expression.  Then using Ti metal and TiO2, or any two materials with a large difference in count rates, try it out on your PET and LiF crystals.  Do you have any large area crystals? That is where these effects will be most pronounced. The procedure has been fully documented and is attached below.

Remember, Probe for EPMA has the traditional (single term) expression, the Willis (two term) expression, and the new six term expression, all available with a click of the mouse.  They are each simply more precise formulations of the probability calculation of randomly overlapping time intervals.

(https://probesoftware.com/smf/gallery/395_15_07_22_9_00_30.png)

And by unchecking the Use Dead Time Correction checkbox you can even turn off the dead time correction completely!

With the latest version of Probe for EPMA, it just takes a few minutes to set up a completely automated overnight run using the multiple sample setups feature with different beam currents.  See the attached PDF document for complete details.

Edit by John: updated pdf attachment
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 15, 2022, 09:41:04 PM
Your uncorrected data are difficult to interpret in part because both the primary and secondary standards require large, non-linear dead time corrections that lead to a roughly linear appearance of the uncorrected plot of k-ratio versus current.  You also haven’t presented peak searches or wavelength scans so that peak shapes can be compared at increasing count rates.

This is what you keep saying but I really don't think you have thought this through.   There is nothing non-linear about the expanded dead time correction. The dead time expressions (all of them) are merely a logical mathematical description of the probability of two photons entering the detector within a certain period of time.

Linear:  N’/N = 1 – N’τ  (slope = -τ)
Non-linear:  N’/N = 1 – (N’τ + N’^2τ^2/2)

(https://probesoftware.com/smf/gallery/381_15_07_22_9_35_39.png)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 15, 2022, 10:29:53 PM
Your uncorrected data are difficult to interpret in part because both the primary and secondary standards require large, non-linear dead time corrections that lead to a roughly linear appearance of the uncorrected plot of k-ratio versus current.  You also haven’t presented peak searches or wavelength scans so that peak shapes can be compared at increasing count rates.

This is what you keep saying but I really don't think you have thought this through.   There is nothing non-linear about the expanded dead time correction. The dead time expressions (all of them) are merely a logical mathematical description of the probability of two photons entering the detector within a certain period of time.

Linear:  N’/N = 1 – N’τ  (slope = -τ)
Non-linear:  N’/N = 1 – (N’τ + N’^2τ^2/2)

(https://probesoftware.com/smf/gallery/381_15_07_22_9_35_39.png)

Now you're just arguing semantics. Yes, the expanded expression is not a straight line the way you're plotting it here (nice plot by the way!), but why should it be? It's a probability series that produces a linear response in the k-ratios. Hence "constant" k-ratios.   8)

The additional terms of the Taylor expansion series are why it works at high count rates, whereas the single term expression fails miserably, as you have already acknowledged.

And as your plot nicely demonstrates, and I have noted previously, there is little to no difference at low count rates, and hence no "non-linearities" that you seem to be so concerned with. This is good news for everyone because the more precise dead time expressions can be utilized under all beam current conditions.

Think about it: plotting these expanded expressions on a probability scale would produce a curve that approaches a straight line. That's the point about "linear" that I've been trying to make clear.

And if you plot up the six term dead time expression you will find that it approaches the actual probability of a dead time event more accurately at even higher count rates. As demonstrated by the constant k-ratio data from both JEOL and Cameca instruments in this topic.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 16, 2022, 09:04:42 AM
Here's something interesting that I just noticed.

When I looked at the dead times Anette had in her SCALERS.DAT file, and the dead times after she optimized them with her Te, Se and Ti k-ratio data, there is an obvious shift from higher to lower dead time constants:

Ti Ka dead times, JEOL iHP200F, UBC, von der Handt, 07/01/2022
Sp1     Sp2    Sp3     Sp4     Sp5
PETJ    LIFL    PETL   TAPL    LIFL
1.26   1.26    1.27    1.1     1.25          (usec) optimized using constant k-ratio method
1.52   1.36    1.32    1.69    1.36         (usec) JEOL engineer using traditional method


I suspect that the reason she found smaller dead time constants using the constant k-ratio method is because she has an instrument that produces quite high count rates, so when the JEOL engineer tried to compensate for those higher count rates (using the traditional dead time expression), he had to increase the apparent dead time constants to get something that looked reasonable. And the reason is of course that the traditional dead time expression just doesn't cut it at count rates attained at even moderate beam currents on these new instruments.

In fact I found exactly the same thing on my SX100. Using the traditional single term expression I found I was having to increase my dead time constants to around 4 usec!  That was when I decided to try the existing "high" precision (two term) expression from Willis (1993).  And that helped, but it was still showing problems at count rates exceeding 100K cps.

So that is when John Fournelle and I came up with the expanded dead time expression with 6 terms. Once that was implemented everything fell nicely into place and now we can get consistently accurate k-ratios at count rates up to 500K cps or so with dead time constants at or even under 3 usec!  Of course above that 500K cps count rate we start seeing the WDS detector showing a little of the "paralyzing" behavior discussed earlier.

I'm hoping that Mike Jercinovic will perform some of these constant k-ratio measurements on his VLPET (very large) Bragg crystals on his UltraChron instrument and see what his count rates are at 200 nA!    :o

It's certainly worth considering how much of a benefit this new expression is for new EPMA instruments with large area crystals. For example, on Anette's PETL Bragg crystals she is seeing count rates on Ti Ka of around 2800 cps/nA. So at 10 nA she's getting 28K cps and at 30 nA she's getting around 84K cps!

That means that using the traditional dead time expression she's already seeing accuracy issues at 30 nA!  And wouldn't we all like to go to beam currents of more than 30 nA and still be able to perform accurate quantitative analyses?

 ;D
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 17, 2022, 09:26:58 AM
Brian Joy's plots of the traditional and Willis (1993) (two term) dead time expressions are really cool, so I added the six term expanded dead time expression to this plot. 

First I wrote code to calculate the dead time corrections for all of the Taylor series expressions from the traditional (single term) to the six term expression that we call the "super high" precision expression. It's extremely simple to generate the Taylor series to as many terms as one wants, as seen here:

Code: [Select]
' For each of the number of Taylor expansion terms
For j& = 0 To 5

temp2# = 0#
For i& = 2 To j& + 1
temp2# = temp2# + cps& ^ i& * (dtime! ^ i&) / i&
Next i&
temp# = 1# - (cps& * dtime! + temp2#)
If temp# <> 0# Then corrcps! = cps& / temp#

' Add to output string observed cps divided by corrected (true)
astring$ = astring$ & Format$(CSng(cps& / corrcps!), a80$) & vbTab
Next j&

The output from this calculation is seen here:

(https://probesoftware.com/smf/gallery/395_17_07_22_9_10_30.png)

The column headings indicate the number of Taylor probability terms in each column (1 = the traditional single term expression). This code is embedded in the TestEDS app under the Output menu, but the modified app has not yet been released by Probe Software!
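For anyone who wants to check the arithmetic outside of Probe for EPMA, here is a Python transcription of the Taylor-term loop in the VB code above (a sketch only; the count rates and the 1.5 usec dead time are illustrative values, not the shipped code):

```python
# Python transcription of the VB Taylor-term loop above: for an n-term
# expansion, the observed/true ratio is N'/N = 1 - sum_{i=1..n} (N'*tau)^i / i.
# Count rates and the 1.5 usec dead time are illustrative only.

def observed_over_true(cps, tau, n_terms):
    """N'/N for an n-term expansion of the dead time correction."""
    return 1.0 - sum((cps * tau) ** i / i for i in range(1, n_terms + 1))

def corrected(cps, tau, n_terms):
    """Dead-time-corrected (true) count rate N = N' / (N'/N)."""
    factor = observed_over_true(cps, tau, n_terms)
    return cps / factor if factor > 0.0 else float("inf")

tau = 1.5e-6  # seconds
for cps in (50_000, 100_000, 200_000):
    ratios = [round(observed_over_true(cps, tau, n), 6) for n in (1, 2, 6)]
    print(cps, ratios)  # observed/true ratio for the 1, 2 and 6 term expressions
```

At low count rates the three columns are nearly identical, and the extra terms only start to matter as N'τ grows, which matches the text output shown above.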

Plotting up the traditional, Willis and six term expressions we get this plot:

(https://probesoftware.com/smf/gallery/395_17_07_22_9_19_47.png)

Since this plot only goes up to 200K cps, the 3, 4 and 5 term expressions plot pretty much on top of each other and are not shown, but you can see the values in the text output.  On a JEOL instrument with ~1.5 usec dead times, the traditional expression does a pretty good job up to about 50K to 80K cps, but above that the Willis (1993) expression does better, and above 160K cps we need the six term expression for best accuracy.

As we saw from our constant k-ratio data plots, the six term expression really gets going at 200K cps and higher.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 19, 2022, 01:45:48 PM
We modified the TestEDS app a bit to display results for only the traditional (single term), Willis (two term), and the new six term dead time expressions. And we increased the observed count rates to 300K cps and also added output of the predicted (dead time corrected) count rates as seen here:

(https://probesoftware.com/smf/gallery/395_19_07_22_1_20_11.png)

Note that this version of TestEDS.exe is available in the latest release of Probe for EPMA (using the Help menu).

Now if instead of plotting the ratio of the observed to predicted count rates on the Y axis, we instead plot the predicted count rates themselves, we can see this plot:

(https://probesoftware.com/smf/gallery/395_19_07_22_1_20_39.png)

Note that unlike the ratio plot, all three of the dead time correction expressions show curved lines.  This is what I meant when I stated earlier that it depends on how the data is plotted.

Note also that at true (corrected) count rates around 400 to 500K cps we are seeing differences in the predicted intensities between the traditional expression and the new six term expression of around 10 to 20%!

To test the six term expression we might for example measure our primary Ti metal standard at say 30 nA, and then a number of secondary standards (TiO2 in this case) at different beam currents, and then plot the k-ratios for a number of spectrometers first using the traditional dead time correction expression:

(https://probesoftware.com/smf/gallery/395_19_07_22_1_30_06.png)

Note spectrometer 3 (green symbols) using a PETL crystal.  Next we plot the same data, but this time using the new six term dead time correction expression:

(https://probesoftware.com/smf/gallery/395_19_07_22_1_30_23.png)

The low count rate spectrometers are unaffected, but the high intensities from the large area Bragg crystal benefit significantly in accuracy. 

I can imagine a scenario where one is measuring one or two major elements and 3 or 4 minor or trace elements, using 5 spectrometers. The analyst measures all the primary standards at moderate beam currents, but in order to get decent sensitivity on the minor/trace elements the analyst selects a higher beam current for the unknowns. 

Of course one can use two different beam conditions for each set of elements, but that would take considerably longer.  Now we can do our major and minor elements together using higher beam currents and not lose accuracy.    8)

I've always mentioned that for trace elements, background accuracy is more important than matrix corrections, but that's only because our matrix corrections are usually accurate to 2% relative or so.  Now that we know that we can see 10 or 20% relative errors in our dead time corrections at high beam currents, I'm going to modify that and say that the dead time corrections might be another important source of error for trace and minor elements if one is using the traditional (single term) dead time expression at high beam currents.

The good news is that the six term expression has essentially no effect at low beam currents, so simply select the "super high" precision expression as your default and you're good to go!

(https://probesoftware.com/smf/gallery/395_15_07_22_9_00_30.png)

Test it for yourself using the constant k-ratio method and feel free to share your results here. 
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 20, 2022, 11:20:41 PM
First I wrote code to calculate the dead time corrections for all of the Taylor series expressions from the traditional (single term) to the six term expression that we call the "super high" precision expression. It's extremely simple to generate the Taylor series to as many terms as one wants, as seen here:

Code: [Select]
' For each of the number of Taylor expansion terms
For j& = 0 To 5

temp2# = 0#
For i& = 2 To j& + 1
temp2# = temp2# + cps& ^ i& * (dtime! ^ i&) / i&
Next i&
temp# = 1# - (cps& * dtime! + temp2#)
If temp# <> 0# Then corrcps! = cps& / temp#

' Add to output string observed cps divided by corrected (true)
astring$ = astring$ & Format$(CSng(cps& / corrcps!), a80$) & vbTab
Next j&

For a given function, f(x), the Taylor series generated by f(x) at x = a is typically written as

f(a) + (x-a)f’(a) + (x-a)^2 f’’(a)/2! + … + (x-a)^n f^(n)(a)/n! + …

If a = 0, it is often called the Maclaurin series.  For example, the Maclaurin series for f(x) = exp(-x) looks like this:

exp(-x) = 1 – x + x^2/2 – x^3/6 + x^4/24 – x^5/120 + x^6/720 – … + (-x)^n/n! + …

Just for the sake of absolute clarity, could you identify the function, N(N’), that you’re differentiating to produce the Taylor or Maclaurin polynomial (where N’ is the measured count rate and N is the corrected count rate)?  I’m specifically using the term “polynomial” because the series, which is infinite, has been truncated.

The function, N(N’), presented by J. Willis is

N = N’/(1 – (τN’ + τ^2N’^2/2))

where τ is a constant.  How exactly does the polynomial generated from the identified function relate to the expression presented by Willis?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 12:18:17 PM
I've been writing up this constant k-ratio method and the new dead time expressions up with several of our colleagues (Aurelien Moy, Zack Gainsforth, John Fournelle, Mike Jercinovic and Anette von der Handt) and hope to have a manuscript ready soon.  So if it seems I've been holding my cards close to my chest, that's the reason why!   :)

Using your notation the expanded expression is:

(https://probesoftware.com/smf/gallery/395_21_07_22_3_32_19.png)

or

(https://probesoftware.com/smf/gallery/395_21_07_22_12_07_43.png)
 
I've been calling it a Taylor series because it is mentioned in the Taylor series Wiki page, but yes, it is really a Maclaurin-like series (a Taylor series about zero) that we are approximating:

https://en.wikipedia.org/wiki/Taylor_series

We are actually using the logarithmic equation now as it works at even higher input count rates (4000K cps anyone?).

Your comments have been very helpful in getting me to think through these issues, and we will be sure to acknowledge this in the final manuscript.

I'll share one plot from the manuscript which pretty much sums things up:

(https://probesoftware.com/smf/gallery/395_21_07_22_12_22_02.png)

Note that the Y axis (predicted count rate) is in *millions* of cps!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 12:49:20 PM
Here's another way to think of these dead time expressions that Zack, Aurelien and I have come up with:

The traditional (single term) dead time correction expression does describe the probability of a single photon being coincident with another photon, but it doesn't handle the case where two photons are coincident with another photon.
 
That's what the Willis (two term) expression does.

The expanded expression handles three photons coincident with another photon, etc., etc.   The log expression will handle this even more accurately.  Of course at some point the detector physics comes into play when the bias voltage doesn't have time to clear the ionization from the previous event.  Then the detector starts showing paralyzing behavior as has been pointed out.

What's amazing is that we are seeing these multiple coincident photon events even at relatively moderate count rates and reasonable dead times, e.g., >100K cps and 1.5 usec.

But it makes some sense because if you think about it, a 1.5 usec dead time corresponds to a maximum count rate of 1/1.5 usec, or about 667K cps, assuming no coincident events.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 21, 2022, 03:46:14 PM
Using your notation the expanded expression is:

(https://probesoftware.com/smf/gallery/395_21_07_22_12_07_43.png)
 
I've been calling it a Taylor series because it is mentioned in the Taylor series Wiki page, but yes, it is really a Maclaurin-like series (a Taylor series about zero) that we are approximating:

https://en.wikipedia.org/wiki/Taylor_series

We are actually using the logarithmic equation now as it works at even higher input count rates (4000K cps anyone?).

A human-readable equation is nice.  Picking through someone else’s code is never fun.

This is the Taylor series generated by ln(x) at x = a (for a > 0):

ln(x) = ln(a) + (1/a)(x-a) – (1/a^2)(x-a)^2/2 + (1/a^3)(x-a)^3/3 – (1/a^4)(x-a)^4/4 + … + (-1)^(n-1)(x-a)^n/(n a^n) + …

Note that the sign alternates from one term to the next.  A Maclaurin series cannot be generated because ln(0) is undefined.

I don’t see how this Taylor series relates to the equation you’ve written.  I also don’t see how it’s physically tenable, as it can’t be evaluated at N’ = N = 0 (but your equation can be).  When working with k-ratios, N’ = N = 0 is the point at which the true k-ratio for given effective takeoff angle is found.

I'm confused, but maybe I'm just being dense.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 04:00:24 PM
You're welcome!

You seem to be focusing on nomenclature. If you need to call it something, call it a Maclaurin-like form of the series. I don't see a problem with a zero count rate. If the count rate is zero, we just return a zero for the corrected count rate.

I'm not confused, probably I'm just being dense.    :)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 21, 2022, 04:09:31 PM
You're welcome!

You seem to be focusing on nomenclature. If you need to call it something, call it a Maclaurin-like form of the series. I don't see a problem with a zero count rate. If the count rate is zero, we just return a zero for the corrected count rate.

I'm not confused, probably I'm just being dense.    :)

It's not an issue of nomenclature.  The Taylor series generated by ln(x) blows up when x = 0.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 04:17:07 PM
You're welcome!

You seem to be focusing on nomenclature. If you need to call it something, call it a Maclaurin-like form of the series. I don't see a problem with a zero count rate. If the count rate is zero, we just return a zero for the corrected count rate.

I'm not confused, probably I'm just being dense.    :)

It's not an issue of nomenclature.  The Taylor series generated by ln(x) blows up when x = 0.

As you pointed out, it's not exactly a Taylor series.

I could make a joke here about "natural" numbers, but instead I'll ask: what is the meaning of zero incident photons?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Brian Joy on July 21, 2022, 05:05:46 PM
You're welcome!

You seem to be focusing on nomenclature. If you need to call it something, call it a Maclaurin-like form of the series. I don't see a problem with a zero count rate. If the count rate is zero, we just return a zero for the corrected count rate.

I'm not confused, probably I'm just being dense.    :)

It's not an issue of nomenclature.  The Taylor series generated by ln(x) blows up when x = 0.

As you pointed out, it's not exactly a Taylor series.

I could make a joke here about "natural" numbers, but instead I'll ask: what is the meaning of zero incident photons?

This is the answer I was looking for.  If it’s not a Taylor series, then you shouldn’t call it by that name.  What is the physical justification for the math?  If the equation is empirical, then how do you know that it will work at astronomical count rates (Mcps) that you can’t actually measure?

When you perform a regression (linear or not), the intercept on the vertical axis (i.e., the point at which N’ = N = 0) gives the ratio for the case of zero dead time, and it is the only point at which N’ = N.  This ratio could be a k-ratio or it could be a ratio of count rates on different spectrometers, or it could be the ratio, N'/I = N/I (if you trust your picoammeter).
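This zero-intercept idea can be sketched with synthetic numbers (a sketch only: the 1.5 usec dead time and the ratio value below are hypothetical, chosen for illustration, and the second channel is assumed to run slowly enough that its own dead time loss is negligible):

```python
# Sketch of the zero-intercept calibration: generate observed rates from a
# linear dead time model, fit ratio vs. observed rate, and recover the true
# ratio (intercept) and tau (from the slope).  TAU and TRUE_RATIO are
# hypothetical values for illustration.

TAU = 1.5e-6        # assumed dead time, seconds
TRUE_RATIO = 7.0    # assumed true N1/N2 ratio at zero count rate

def observed(n_true, tau=TAU):
    """Linear (first-order) dead time model: N' = N / (1 + N*tau)."""
    return n_true / (1.0 + n_true * tau)

# "Measure" the ratio at a series of count rates on the fast channel.
n1_true = [5000.0 * k for k in range(1, 13)]
xs = [observed(n) for n in n1_true]                     # N1' (observed)
ys = [observed(n) / (n / TRUE_RATIO) for n in n1_true]  # N1'/N2

# Ordinary least squares: ratio = intercept + slope * N1'
xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) / sum((x - xm) ** 2 for x in xs)
intercept = ym - slope * xm

# Since the model gives ratio = r*(1 - N1'*tau), the intercept at N1' = 0
# recovers the true ratio and -slope/intercept recovers tau.
tau_est = -slope / intercept
```

With the linear model the ratio is exactly linear in the observed rate, so the fit recovers both the zero-dead-time ratio and τ; real data would of course carry counting statistics and any non-linearity on top of this.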
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 21, 2022, 05:46:29 PM
This is the answer I was looking for.  If it’s not a Taylor series, then you shouldn’t call it by that name.

Says the man who claims it's not about nomenclature.      ::)

Yes, it's not exactly Taylor and not exactly Maclaurin.  It's something new based on our modeling and empirical testing.  Call it whatever you want; we're going to call it Taylor/Maclaurin-like.  That drives you crazy, doesn't it?   :)

What is the physical justification for the math?  If the equation is empirical, then how do you know that it will work at astronomical count rates (Mcps) that you can’t actually measure?

When you perform a regression (linear or not), the intercept on the vertical axis (i.e., the point at which N’ = N = 0) gives the ratio for the case of zero dead time, and it is the only point at which N’ = N.  This ratio could be a k-ratio or it could be a ratio of count rates on different spectrometers, or it could be the ratio, N'/I = N/I (if you trust your picoammeter).

The physical justification will be provided in the paper.  I think you will be pleased (then again, maybe not!).  The log expression makes this all pretty clear.

The empirical justification is that even these expanded expressions work surprisingly well at over 400K cps as demonstrated in the copious examples shown in this topic. But as we get above these sorts of count rates, the physical limitations (paralyzing behavior) of the detectors starts to dominate.

The bottom line is that these expanded expressions work much better than the traditional expression, certainly at the moderate to high beam currents routinely utilized in trace/minor element analyses and high speed quant mapping. And the log expression gives almost exactly the same results as the expanded expressions, so we are quite confident!

I'll just say that the expression we are using does indeed work at a zero count rate, but the expression you are thinking of does not.  It will all be in the paper.

obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre   
       0          0          0          0          0          0          0          0          0   
    1000   1001.502     0.9985   1001.503   0.9984989   1001.503   0.9984989   1001.503   0.9984989   
    2000   2006.018      0.997   2006.027   0.9969955   2006.027   0.9969955   2006.027   0.9969955   
    3000   3013.561     0.9955   3013.592   0.9954898   3013.592   0.9954898   3013.592   0.9954898   
    4000   4024.145      0.994   4024.218   0.993982   4024.218   0.993982   4024.218   0.993982   
    5000   5037.783     0.9925   5037.926   0.9924719   5037.927   0.9924718   5037.927   0.9924718   
    6000    6054.49   0.9910001   6054.738   0.9909595   6054.739   0.9909593   6054.739   0.9909593   
    7000    7074.28     0.9895   7074.674   0.9894449   7074.677   0.9894445   7074.677   0.9894445   

Once again the co-authors and I thank you for all your criticisms and comments, as they have significantly improved our understanding of these processes.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 22, 2022, 11:06:02 AM
Here's another way to think of these dead time expressions that Zack, Aurelien and I have come up with:

The traditional (single term) dead time correction expression does describe the probability of a single photon being coincident with another photon, but it doesn't handle the case where two photons are coincident with another photon.
 
That's what the Willis (two term) expression does.

The expanded expression handles three photons coincident with another photon, etc., etc.   The log expression will handle this even more accurately.  Of course at some point the detector physics comes into play when the bias voltage doesn't have time to clear the ionization from the previous event.  Then the detector starts showing paralyzing behavior as has been pointed out.

What's amazing is that we are seeing these multiple coincident photon events even at relatively moderate count rates and reasonable dead times, e.g., >100K cps and 1.5 usec.

But it makes some sense because if you think about it, a 1.5 usec dead time corresponds to a maximum count rate of 1/1.5 usec, or about 667K cps, assuming no coincident events.

A few posts ago I made a prediction that the traditional (single photon coincidence) dead time expression should fail at around 1/1.5 usec, assuming a non-random distribution.  That would correspond to 666K cps (1/1.5 usec).

I just realized that I forgot to do that calculation (though it really should be modeled using Monte Carlo for confirmation), so here is that calculation using 1.5 usec and going to over 600K cps:

(https://probesoftware.com/smf/gallery/395_22_07_22_11_01_08.png)

 :o

The traditional dead time expression fails right at an observed count rate of 666K cps!   Realize that this corresponds to a "true" count rate of over 10^8 cps, so nothing we need to worry about with our current detectors!   Now maybe this is just a coincidence (no pun intended) as I haven't even had time to run this past my co-authors...

The expanded dead time expressions fail at somewhat lower count rates of course, but still in the 10^6 cps realm of "true" count rates.  The advantage of the expanded dead time expressions is that they are much more accurate than the traditional expression at count rates we often see in WDS EPMA.
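The failure points can be checked numerically: the n-term correction blows up where its denominator 1 – Σ(N’τ)^i/i reaches zero. A quick bisection sketch (assuming τ = 1.5 usec, as in the posts above):

```python
# Bisection sketch: find x = N'*tau at which the n-term denominator
# 1 - sum_{i=1..n} x^i/i reaches zero (where the correction blows up),
# then convert back to an observed count rate for tau = 1.5 usec.

def denom(x, n_terms):
    return 1.0 - sum(x ** i / i for i in range(1, n_terms + 1))

def failure_point(n_terms, lo=0.0, hi=1.0):
    """Bisection for denom(x) = 0; denom is decreasing in x on [0, 1]."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if denom(mid, n_terms) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

tau = 1.5e-6
for n in (1, 2, 6):
    print(n, failure_point(n) / tau)  # observed cps where the n-term form fails
```

For the single-term expression the denominator hits zero at exactly N’τ = 1, i.e. 1/1.5 usec ≈ 667K cps observed, consistent with the 666K cps figure above; the two- and six-term forms give up at somewhat lower observed rates, as noted.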
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on July 23, 2022, 09:26:05 AM
Now that we have implemented Aurelien Moy's logarithmic expression let's see how it performs.

Here is a plot of observed vs. predicted count rates at 1.5 usec up to 300K observed count rates:

(https://probesoftware.com/smf/gallery/395_23_07_22_9_10_02.png)

Wait a minute, where did the six term (red line) expression go?  Oh, that's right, it's underneath the log (cyan) expression. At under 300K cps observed count rates, these two expressions give almost identical results.  Meanwhile the traditional expression gives a predicted count rate that is about 30% to 40% too low!    :o

OK, let's take it up to 400K cps observed count rates:

(https://probesoftware.com/smf/gallery/395_23_07_22_9_17_47.png)

Now we are just barely seeing a slight divergence between the two expressions, which makes sense since the six term Maclaurin-like expression is only an approximation of the probabilities of multiple photon coincidences.

Note that at 1.5 usec dead times this 400K cps observed count rate corresponds to a predicted (true) count rate of over 4000K cps. Yes, you read that right, 4M cps. 

Of course our gas detectors will be paralyzed long before we get to such count rates.  From Anette's WDS data we think this is starting to occur at predicted (true) count rates of over 500K cps, which at 1.5 usec corresponds to an observed count rate of around 250K cps.

But even at these easily attainable count rates, the traditional expression is still off by around 25% relative.   It's all a question of photon coincidence.    :)
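The comparison above can be sketched numerically. One caveat: the thread doesn't write out Moy's logarithmic expression, so this sketch assumes it is the closed-form sum of the infinite series, Σ_{i≥1}(N’τ)^i/i = -ln(1 – N’τ), giving N = N’/(1 + ln(1 – N’τ)); treat that as an assumption, not the published formula.

```python
import math

# Compare the six term expansion with an assumed logarithmic form:
# six term:  N = N'/(1 - sum_{i=1..6} (N'tau)^i / i)
# log form:  N = N'/(1 + ln(1 - N'*tau))   <-- assumed closed-form of the series

TAU = 1.5e-6  # assumed dead time, seconds

def corrected_terms(cps, n_terms, tau=TAU):
    return cps / (1.0 - sum((cps * tau) ** i / i for i in range(1, n_terms + 1)))

def corrected_log(cps, tau=TAU):
    return cps / (1.0 + math.log(1.0 - cps * tau))

for cps in (100_000, 200_000, 300_000, 400_000):
    six, log = corrected_terms(cps, 6), corrected_log(cps)
    print(cps, round(six), round(log), f"{100.0 * (log - six) / log:.2f}% apart")
```

Under these assumptions the two agree to well under 1% up to 300K cps observed, consistent with the overlapping curves in the plots above, and only start to separate noticeably around 400K cps, where the truncated series undercounts the coincidence probability.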
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 08, 2022, 10:22:07 AM
Over the weekend I went over my MgO/Al2O3/MgAl2O4 consensus k-ratios from May, now that we have finally figured out the issues with calibrating our WDS spectrometer dead times using the "constant k-ratio" procedure developed by John Donovan and John Fournelle (also attached to this post as a short pdf file):

https://probesoftware.com/smf/index.php?topic=1466.msg11008#msg11008

and now that we have an accurate expression for dead time correction (see Moy's logarithmic dead time correction expression in Probe for EPMA), we can re-open that old probe data file from May and re-calculate our k-ratios!

So, using the new logarithmic expression we obtain these k-ratios for MgAl2O4 from 5 nA to 120 nA:

(https://probesoftware.com/smf/gallery/395_08_08_22_10_09_04.png)

Note that I blanked out the y-axis values so as not to influence anyone (these will be revealed later once Will Nachlas has a better response rate from the first FIGMAS round robin!) but the point here is to note how *constant* these consensus k-ratios are over a large range of beam currents.
 
What does this mean?  It means we can quantitatively analyze for major elements, minor elements, and trace elements at high beam currents at the same time!

This is particularly important for high sensitivity quantitative X-ray mapping...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 08, 2022, 10:50:53 AM
I almost forgot to post this.  Here's the analysis of the above MgAl2O4 using MgO and Al2O3 as primary standards, first at 30 nA:

St 3100 Set   3 MgAl2O4 FIGMAS
TakeOff = 40.0  KiloVolt = 15.0  Beam Current = 30.0  Beam Size =   10

St 3100 Set   3 MgAl2O4 FIGMAS, Results in Elemental Weight Percents
 
ELEM:       Mg      Al       O
TYPE:     ANAL    ANAL    SPEC
BGDS:      EXP     EXP
TIME:    60.00   60.00     ---
BEAM:    29.85   29.85     ---

ELEM:       Mg      Al       O   SUM 
    28  17.063  37.971  44.985 100.020
    29  17.162  38.227  44.985 100.374
    30  17.234  38.389  44.985 100.608

AVER:   17.153  38.196  44.985 100.334
SDEV:     .086    .210    .000    .296
SERR:     .050    .121    .000
%RSD:      .50     .55     .00

PUBL:   17.084  37.931  44.985 100.000
%VAR:      .41     .70     .00
DIFF:     .069    .265    .000
STDS:     3012    3013     ---

And here at 120 nA:

St 3100 Set   6 MgAl2O4 FIGMAS
TakeOff = 40.0  KiloVolt = 15.0  Beam Current = 120.  Beam Size =   10

St 3100 Set   6 MgAl2O4 FIGMAS, Results in Elemental Weight Percents
 
ELEM:       Mg      Al       O
TYPE:     ANAL    ANAL    SPEC
BGDS:      EXP     EXP
TIME:    60.00   60.00     ---
BEAM:   119.71  119.71     ---

ELEM:       Mg      Al       O   SUM 
    55  17.052  37.617  44.985  99.654
    56  17.064  37.554  44.985  99.603
    57  17.083  37.636  44.985  99.704

AVER:   17.066  37.602  44.985  99.654
SDEV:     .016    .043    .000    .051
SERR:     .009    .025    .000
%RSD:      .09     .11     .00

PUBL:   17.084  37.931  44.985 100.000
%VAR:     -.10    -.87     .00
DIFF:    -.018   -.329    .000
STDS:     3012    3013     ---

This is using the default Armstrong phi/rho-z matrix corrections, but all the matrix expressions give similar results as seen here for the 30 nA analysis:

Summary of All Calculated (averaged) Matrix Corrections:
St 3100 Set   3 MgAl2O4 FIGMAS
LINEMU   Henke (LBL, 1985) < 10KeV / CITZMU > 10KeV

Elemental Weight Percents:
ELEM:       Mg      Al       O   TOTAL
     1  17.153  38.196  44.985 100.334   Armstrong/Love Scott (default)
     2  17.062  38.510  44.985 100.558   Conventional Philibert/Duncumb-Reed
     3  17.126  38.468  44.985 100.580   Heinrich/Duncumb-Reed
     4  17.157  38.369  44.985 100.511   Love-Scott I
     5  17.150  38.186  44.985 100.321   Love-Scott II
     6  17.098  37.989  44.985 100.072   Packwood Phi(pz) (EPQ-91)
     7  17.302  38.321  44.985 100.608   Bastin (original) Phi(pz)
     8  17.185  38.701  44.985 100.871   Bastin PROZA Phi(pz) (EPQ-91)
     9  17.170  38.579  44.985 100.735   Pouchou and Pichoir-Full (PAP)
    10  17.154  38.399  44.985 100.538   Pouchou and Pichoir-Simplified (XPP)

AVER:   17.156  38.372  44.985 100.513
SDEV:     .063    .210    .000    .225
SERR:     .020    .066    .000

MIN:    17.062  37.989  44.985 100.072
MAX:    17.302  38.701  44.985 100.871

Proof once again that we really do not require matrix matched standards.

Again, for most silicates and oxides the problem is *not* our matrix corrections. Instead it's our instrument calibrations, especially dead time calibrations, and of course having standards that actually are the compositions that we claim them to be.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 12, 2022, 09:08:54 AM
OK, I am going to start again this morning because I think I now understand the main reason why BJ and SG have been having so much trouble appreciating these new dead time expressions (aside from nomenclature issues!).   :)  Though SG seems to appreciate most of what we have been trying to accomplish when he states:

...albeit it has the potential to work satisfactorily for EPMA with some restrictions. That is not bad, and for sure it is much better than continuing to use the classical simple form of the correction.

I will work through a detailed example in a bit, but I'll start with a short explanation "in a nutshell" as they say:

Whatever we call these effects (pulse pile up, dead time, photon coincidence) we have a traditional expression which does not properly handle photon detection at high count rates.  Let's call it dead time, because everybody calls the traditional expression the dead time correction expression.  And all of us agree, there are many underlying causes both in the detector and in the pulse processing electronics.  I maintain that at least some of these effects are attributable to more than one photon being coincident with another photon, and additionally that the traditional expression does not handle these multiple photon events properly. And as I will demonstrate in a moment, we have some data that seems to support this hypothesis. There will be of course other effects that should be looked into and corrected for, but this effort has never claimed to be a "universal" correction for photon counting, though I wish luck to SG in his efforts towards that holy grail.   :)

Perhaps we need to go back to the beginning and ask: do you agree that we should (ideally) obtain the same k-ratio over a range of count rates (low to high beam currents)?  Please answer this question before we proceed with any further discussion.

You already know my answer from the other post about matrix corrections vs. matrix matched standards. And to repeat that answer: it is Absolutely Certainly Yes!

So, yes our k-ratios should remain constant as a function of beam current/count rate given two materials with a different concentration of an element, for a specified emission line, beam energy and takeoff angle.  And yes, we know that this k-ratio is also affected by a number of calibration issues. Dead time being one of these, and of course also spectrometer alignment, effective takeoff angle and whatever else we want to consider.

But the interesting thing about the dead time correction itself, is that the correction becomes negligible at very low count rates! Regardless of whether these "dead time" effects are photon coincidence or pulse pile up or whatever they might be.

So some of you may recall that in the initial FIGMAS round robin you received an email from Will Nachlas asking everyone to perform their consensus k-ratio measurements at a very low beam current. And it was for this very reason: we could not be sure, even at moderate beam currents, that people's k-ratios would be accurate, because of these dead time or pulse pile up (or whatever you want to call them) effects.

So Will suggested that those in the FIGMAS round robin measure our k-ratios at a very low beam current/count rate, these being the most accurate k-ratios, which should then be reported. This is exactly the thought that John Fournelle and I had when we came up with the constant k-ratio method:

That these k-ratios should remain constant as a function of higher beam currents if the instrument (and software) are properly calibrated.

Again aside from spectrometer alignment/effective takeoff angle issues, which can be identified from measuring these consensus k-ratios on more than one spectrometer!

Now I need to quote SG again, as this exchange got me thinking (a dangerous thing, I know!):
As I said, call it differently - for example "factor". Dead time constants are constants, constants are constants and do not change - that is why they are called "constants" in the first place. You can't calibrate a constant, because if its value can be tweaked or influenced by time or setup then it is not a constant in the first place, but a factor or variable.

And I responded:
Clearly it's a constant in the equation, but equally clearly it depends on how the constant is calibrated.  If one assumes that there are zero multiple coincident photons, then one will obtain one constant, but if one does not assume there are zero multiple coincident photons, then one will obtain a different constant. At sufficiently high count rates of course.

I think the issue is that SG is trying to separate out all these different effects in the dead time correction and treat them all separately. And we wish him luck with his efforts.  But we never claimed that our method is a universal method for dead time correction, merely that it is better than the traditional (or as he calls it the classical) expression. 

Roughly speaking, the new expressions allow us to utilize beam currents some 10x greater than previously while still maintaining quantitative accuracy.

It is also a fact that if one calibrates their dead time constant using the traditional expression, then one is going to obtain one dead time constant value, but if one utilizes a higher precision dead time expression that handles multiple photon coincidence, then they will obtain a (somewhat) different dead time constant.  This was pointed out some time ago when Anette first reported her constant k-ratio measurements:

https://probesoftware.com/smf/index.php?topic=1466.msg10988#msg10988

And this difference can be seen in the values of the dead time constants calibrated by the JEOL engineer vs. the dead time calibrations using the new higher precision dead time expressions that Anette utilized:

Ti Ka dead times, JEOL iHP200F, UBC, von der Handt, 07/01/2022
Sp1     Sp2    Sp3     Sp4     Sp5
PETJ    LIFL    PETL   TAPL    LIFL
1.26   1.26    1.27    1.1     1.25          (usec) optimized using constant k-ratio method (six term expression)
1.52   1.36    1.32    1.69    1.36         (usec) JEOL engineer using traditional method


The point being that one must reduce the dead time constant when using these new (multiple coincidence) expressions or the intensity data will be over corrected! This will become clearer as we look at some data.  So let's walk through the constant k-ratio method in the next post.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 12, 2022, 09:53:15 AM
OK, this constant k-ratio method is fully documented for those using the Probe for EPMA software in the pdf attachment at the bottom of this post, but I think it would be clearer if I walked through the process here for those without the PFE software.

I am not familiar with the JEOL or Cameca OEM software, so I cannot say how easy or difficult performing these measurements and calibrations will be using the OEM software, but I will explain what needs to be done and you can let me know how it goes.

The first step is to acquire an appropriate data set of k-ratios. Typically one would use two materials with significantly different concentrations of an element, though the specific element and emission line are entirely up to you. Also, the precise compositions of these materials are not important, merely that they are homogeneous.  All we are looking for is a *constant* k-ratio as a function of beam current.

I suggest starting with Ti metal and TiO2 as they are both pretty beam stable and easy to obtain and can be used with both LIF and PET crystals, so do measure Ti Ka on all 5 spectrometers if you can, so all spectrometers can be calibrated for dead time.  One can also use two Si bearing materials, e.g., SiO2 and say Mg2SiO4 for TAP and PET crystals, though in all cases the beam should be defocused to 10 to 15 um to avoid any beam damage.

So we start by measuring *both* our Ti metal and TiO2 at say 5 nA (after checking for good background positions of course). For decent precision you might need to count for 60 or more seconds on peak. Measure maybe 5 or 8 points at whatever voltage you prefer (15 or 20 keV works fine, the higher the voltage the smaller the surface effects). Then calculate the k-ratios.

These k-ratios will have a very small dead time correction as the count rates are probably pretty low, and for that reason we can assume that these k-ratios are also the most accurate with regards to the dead time correction (hence the request by FIGMAS to measure their consensus k-ratios at low beam currents).  What we are going to do next is again measure our Ti metal and TiO2 materials at increasing beam currents, up to say 100 or 200 nA.

Be sure to measure these materials in pairs at *each* beam current so that any potential picoammeter inaccuracies will be nulled out. In fact this is one of the main advantages of this constant k-ratio dead time calibration method over the traditional method, which depends on the accuracy of the picoammeter because it merely plots count rate versus beam current (e.g., the Carpenter dead time spreadsheet).
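The bookkeeping for such a data set is minimal; here is a sketch (the count rates, currents, and dead time below are invented for illustration). The key point is that the beam current never enters the k-ratio itself, since both members of each pair are measured at the same current:

```python
def corrected(observed_cps, tau):
    # Traditional dead time correction; substitute your preferred expression here
    return observed_cps / (1.0 - tau * observed_cps)

tau = 1.5e-6  # spectrometer dead time in seconds (hypothetical)

# Observed on-peak count rates (cps), measured in pairs at each beam current (nA)
pairs = {  # current: (TiO2, Ti metal)
    10:  (11000.0,  20000.0),
    40:  (42500.0,  76000.0),
    100: (101000.0, 175000.0),
}

for current, (unk, std) in sorted(pairs.items()):
    k = corrected(unk, tau) / corrected(std, tau)
    print(f"{current:>4} nA  k-ratio = {k:.4f}")
```

If the dead time constant is right, the printed k-ratios should agree within counting statistics; a systematic drift with beam current means the constant (or the expression itself) needs adjustment.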

Interestingly, this constant k-ratio method is somewhat similar to the Heinrich method that Brian Joy discusses in another topic, because both methods look at the constancy of two intensity ratios as a function of beam current (here a normal k-ratio, and for Heinrich the alpha/beta line ratios).  However, as has been pointed out previously, the Heinrich dead time calibration method fails at high count rates because it does not handle multiple coincident photon probabilities. Of course the Heinrich method could be fitted using one of the newer expressions (and Brian agrees it then does do a better job at higher count rates), but he complains that it over fits the data. As mentioned in the previous post, that is because the dead time constant value itself needs to be adjusted down to yield a consistent set of ratios.

In any case, we think the constant k-ratio method is easier and more intuitive, so let's continue.

OK, so once we have our k-ratio pairs measured over a range of beam currents from say 5 or 10 nA to 100 or 200 nA, we plot them up and we might obtain a plot looking like this:

(https://probesoftware.com/smf/gallery/395_12_08_22_9_58_54.png)

This is using the traditional dead time correction expression. So if this was a low count rate spectrometer this k-ratio plot would be pretty "constant", but the count rate at 140 nA on this PETL spectrometer is around 240K cps!  And the reason the k-ratio increases is because the Ti metal primary standard is more affected by dead time effects due to its higher count rate, so as that intensity in the denominator of the k-ratio drops off at higher beam currents (because the traditional expression breaks down at higher count rates), the k-ratio value increases!  Yes, it's that simple.   :)

Now let's have some fun in the next post.

Edit by John: updated pdf attachment
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 12, 2022, 10:43:54 AM
So we saw in the previous post that the traditional dead time correction expression works pretty well in the plot above at the lowest beam currents, but starts to break down at around 40 to 60 nA, which on Ti metal is around 100K (actual) cps.

If we increase the dead time constant in an attempt to compensate for these high count rates we see this:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_07_33.png)

Using the traditional dead time expression we over compensate at lower beam currents and still under compensate at higher beam currents.  So what's an analyst to do?  I say, use a better dead time expression!  This search led us to the Willis, 1993 (two term Taylor series) expression.  By the way, the dead time reference that SG provided is worth reading:

https://www.sciencedirect.com/science/article/pii/S1738573318302596

Using the Willis 1993 expression helps somewhat as shown here reverting back to our original DT constant of 1.32 usec:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_14_37.png)

By the way I don't know how hard it is to edit the dead time constants in the JEOL or Cameca OEM software, but in PFE the software dead time correction is an editable field, because one might (e.g., as the detectors age) see a problem with their dead time correction (as we have above), and decide to re-calibrate the dead time constants. Then it's easy to update the dead time constants and re-calculate one's results for improved accuracy.  In PFE there's even a special dialog under the Analytical menu to update all the DT constants for a specific spectrometer (and crystal) for all (or selected) samples in a probe run...

Note also that these different dead time correction expressions have almost no effect at the lowest beam currents, exactly as we would expect!  A k-ratio of 0.55 is what we would expect for TiO2/Ti.

OK, so looking at the plot above, wow, we are looking pretty good up to 100 nA of beam current!  What happens if we go to the six term expression? It gets even better.  But let's jump right to the logarithmic expression because it is simply an integration of this Taylor/Maclaurin (whatever!) series, and they both give almost identical results:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_25_41.png)

Now we have a straight line but with a negative slope!  What could that mean?  Well, as mentioned in the previous post, it's because once we start including coincident photons in the probability series, we don't need as large a DT constant!  Yes, the exact value of the dead time constant depends on the expression utilized.

So, we simply adjust our dead time constant to obtain a *constant* k-ratio, because as we all already know, we *should* obtain the same k-ratios as a function of beam current!  So let's drop it from 1.32 usec to 1.28 usec. Not a big change, but at these count rates the DT constant value is very sensitive:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_33_15.png)

Now we are analyzing from 10 to 140 nA and getting k-ratios within our precision.  Not bad for a day's work, I'd say!
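This adjustment can also be done numerically rather than by eye: scan the dead time constant and keep the value that makes the k-ratio series flattest. A minimal sketch with invented paired count rates; I am writing the logarithmic expression as N = N'/(1 + ln(1 - τN')), my paraphrase of the expression used in this thread:

```python
import math

def corrected_log(observed_cps, tau):
    # Logarithmic dead time expression (paraphrased form)
    return observed_cps / (1.0 + math.log(1.0 - tau * observed_cps))

# Hypothetical paired observed count rates (cps): (TiO2, Ti metal) per beam current
data = [(11000.0, 20000.0), (43000.0, 78000.0),
        (105000.0, 188000.0), (200000.0, 350000.0)]

def kratio_spread(tau):
    # Spread (max - min) of the k-ratios across all beam currents for this tau
    ks = [corrected_log(u, tau) / corrected_log(s, tau) for u, s in data]
    return max(ks) - min(ks)

# Brute-force scan from 1.00 to 1.99 usec in 0.01 usec steps
best_tau = min((t * 1e-8 for t in range(100, 200)), key=kratio_spread)
print(f"flattest k-ratios at tau = {best_tau * 1e6:.2f} usec")
```

A simple grid search is plenty here, since the spread is a smooth function of a single parameter over a narrow physically plausible range.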

I have one more post to make regarding SG's discussion of the other non-probabilistic dead time effects he has mentioned, because he is exactly correct.  There are other dead time effects that need to be dealt with, but I am happy to simply have improved our quantitative accuracy at these amazingly high count rate/high beam currents.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 16, 2022, 10:53:17 AM
So continuing on this overview of the constant k-ratio and dead time calibration topic which started with this post here:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100

I thought I would re-post these plots because they very nicely demonstrate the higher accuracy of the logarithmic dead time correction expression compared to the traditional linear expression at moderate to high beam currents:

(https://probesoftware.com/smf/gallery/395_15_08_22_8_52_10.png)

Clearly, if we want to acquire high speed quant maps or measure major, minor and trace elements together, it's pretty obvious that the new dead time expressions are going to yield more accurate results.

And if anyone still has concerns about how the new logarithmic expression performs at low beam currents, simply examine this zoom of the above plot showing results at 10 and 20 nA:

(https://probesoftware.com/smf/gallery/395_15_08_22_9_10_21.png)

All three are statistically identical at these low beam currents!  And remember, even at these relatively low count rates there is still some non-zero number of multiple photon coincidence events occurring, so we would argue that even at these low count rates the logarithmic expression is the more accurate expression.
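You can verify this convergence numerically as well; a quick sketch assuming a 1.5 usec dead time and my paraphrased forms of the two expressions:

```python
import math

def corrected_traditional(n, tau):
    # Traditional expression: N = N' / (1 - tau*N')
    return n / (1.0 - tau * n)

def corrected_log(n, tau):
    # Logarithmic expression: N = N' / (1 + ln(1 - tau*N'))
    return n / (1.0 + math.log(1.0 - tau * n))

tau = 1.5e-6
for observed in (1e3, 1e4, 5e4):
    t = corrected_traditional(observed, tau)
    l = corrected_log(observed, tau)
    print(f"{observed:>8.0f} cps observed: {100 * (l - t) / t:.4f} % difference")
```

At a few thousand cps the two expressions differ by roughly a part in a million; only well above ~50K cps does the choice of expression begin to matter.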

OK, with that out of the way, let's proceed with the promised discussion regarding SG's comments on the factors contributing towards dead time effects in WDS spectrometers, because there is no doubt that several factors are involved in these dead time effects, both in the detector itself and the electronics.

But however we measure these dead time effects by counting photons, they are all combined in our measurements, so the difficulty is in separating them out.  The good news is that these various effects may not all occur in the same count rate regimes.

For example, we now know from Monte Carlo modeling that even at relatively low count rates, multiple photon coincidence events are already starting to occur, as seen in the above plot starting around 30 to 40 nA (>50K to 100K cps) on some large area Bragg crystals.

As the data reveals, the traditional dead time expression does not properly deal with these events, so that is the rationale for the multiple term expressions and finally the new logarithmic expression. So by using this new log expression we are able to achieve normal quantitative accuracy up to count rates of 300K to 400K cps (up to 140 nA in the first plot). That's approximately 10 times the count rates that we would normally limit ourselves to for quantitative work!

As for nomenclature I resist the term "pulse pileup" for WDS spectrometers because (and I discussed this with Nicholas Ritchie at NIST), to me the term implies a stoppage of the counting system as seen in EDS spectrometers.

However, in WDS spectrometers we correct the dead time in software, so what we are attempting to predict are the photon coincidence events, regardless of whether they are single photon coincidence or multiple photon coincidence. And as these events are 100% due to probabilistic parameters (i.e., count rate and dead time), we merely have to anticipate this mathematically, hence the logarithmic expression.

To remind everyone, here is the traditional dead time expression which only accounts for single photon coincidence:

(https://probesoftware.com/smf/gallery/395_16_08_22_10_21_05.png)

And here is the new logarithmic expression which accounts for single and multiple photon coincidences:

(https://probesoftware.com/smf/gallery/395_09_08_22_7_39_33.png)
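Since the images may not render for everyone, here is how I read the expressions in code. This is a sketch: I am writing the Willis 1993 two-term expression as the Taylor series truncated after the second term, and the logarithmic expression as that same series summed to infinity, using the identity Σ (τN')ⁿ/n = -ln(1 - τN'). Treat these as my paraphrase, not the exact published notation.

```python
import math

def n_traditional(n_obs, tau):
    # Single photon coincidence only: N = N' / (1 - tau*N')
    return n_obs / (1.0 - tau * n_obs)

def n_willis(n_obs, tau):
    # Two-term Taylor series (Willis, 1993): N = N' / (1 - (x + x^2/2)), x = tau*N'
    x = tau * n_obs
    return n_obs / (1.0 - (x + x * x / 2.0))

def n_log(n_obs, tau):
    # Logarithmic expression: the series summed to infinity,
    # 1 - sum_{n>=1} x^n/n  =  1 + ln(1 - x)
    x = tau * n_obs
    return n_obs / (1.0 + math.log(1.0 - x))

tau = 1.5e-6
for n_obs in (50e3, 150e3, 250e3, 400e3):
    print(f"{n_obs:>8.0f} cps observed ->",
          [f"{f(n_obs, tau):,.0f}" for f in (n_traditional, n_willis, n_log)])
```

At a given dead time constant the corrections are ordered traditional < Willis < logarithmic, which is exactly why the constant must be reduced slightly when switching to the newer expressions.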

Now, what about even higher count rates, say above 400K cps?  Well that is where I think SG's concerns with hardware pulse processing start to make a significant difference. And I believe we can start to see these "paralyzing" (or whatever we want to call them) effects at count rates over 400K cps (above 140 nA!) as shown here, first by plotting with the traditional dead time expression:

(https://probesoftware.com/smf/gallery/395_16_08_22_10_41_33.png)

Pretty miserable accuracy starting at around 40 to 60 nA.  Now the same data up to 200 nA, but using the new logarithmic expression:

(https://probesoftware.com/smf/gallery/395_16_08_22_10_41_50.png)

Much better obviously, but also under correcting starting about 160 nA of beam current which corresponds to a predicted count rate on Ti metal of around 450K cps!   :o

So yeah, the logarithmic expression starts to fail at these extremely high count rates starting around 500K cps, but that's a problem we will leave to others, as we suspect these effects will be hardware (JEOL vs. Cameca) specific.

Next we'll discuss some of the other types of instrument calibration information we can obtain from these constant k-ratio data sets.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 17, 2022, 12:49:07 PM
So once we have fit our dead time to yield constant k-ratios over a large range of count rates (beam currents), we can now perform high accuracy quantitative analysis from 5 or 10 nA to several hundred nA, depending on our crystal intensities and our spectrometer dead times.

Because as we can all agree, we should obtain the same k-ratios (within statistics of course) at all count rates (beam currents). And we can also agree that due to hardware/electronic limitations (as SEM Geologist has correctly pointed out) our accuracy may be limited at count rates that exceed 300K or so cps. And we can see that in the 200 nA plots above using Ti Ka on a large PET crystal when we exceed 400K cps.

But now we can perform additional tests using the same constant k-ratio data set that we used to check our dead time constants. For example, we can test our picoammeter linearity.

You will remember the plot we showed previously after we adjusted our dead time constant to 1.28 usec to obtain a constant k-ratio up to 400K cps:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_33_15.png)

We could further adjust our dead time to flatten this plot at the highest beam current even more, but if we examine the Y axis values, we are clearly within a standard deviation or so.

Now remember, this above plot is using both the primary *and* secondary standards measured at the same beam currents. So both standards at 10 nA, both at 20 nA, both at 40 nA, and so on. And we do this to "null out" any inaccuracy of our picoammeter.

Well, to test our picoammeter linearity we simply utilize a primary standard from a *single* beam current measurement and then plot our secondary standards from *all* of our beam current measurements as seen here:

(https://probesoftware.com/smf/gallery/395_17_08_22_12_27_14.png)

Well that is interesting, as we see a (very) small discontinuity around 20 to 40 nA in the k-ratios on Anette's JEOL instrument when using a single primary standard. This is probably due to some (very) small picoammeter non-linearity, because after all, we are now depending on the picoammeter to extrapolate from our single primary standard measured at one beam current, to all the secondary standards measured at various beam currents!

In Probe for EPMA this picoammeter linearity test is easy to perform using the constant k-ratio data set: we simply use the string selection control in the Analyze! window to select all the primary standards and disable them, then enable just one of these primary standards (at one of the beam currents), and then from the Output | Output XY plots for Standards and Unknown dialog we calculate the secondary standards for all the beam currents, in this case using beam current for the X axis and the k-ratio for the Y axis.
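For those without PFE, the same test is just arithmetic on current-normalized intensities. A sketch with invented numbers (dead time corrections assumed to be already applied):

```python
# One primary standard measurement at a single reference beam current
primary_cps, primary_na = 175000.0, 100.0   # Ti metal at 100 nA (hypothetical)

# Secondary standard (TiO2) measured across the full beam current range
secondary = [(10.0, 9600.0), (20.0, 19100.0),
             (40.0, 38500.0), (100.0, 96200.0)]  # (nA, cps)

# Normalizing by the nominal beam current makes the k-ratio depend on the
# picoammeter: any non-linearity shows up as a trend or step in k vs. current
std_norm = primary_cps / primary_na
for na, cps in secondary:
    print(f"{na:>5.0f} nA  k-ratio = {(cps / na) / std_norm:.4f}")
```

If the picoammeter is perfectly linear, these k-ratios will match the paired-measurement values; a discontinuity like the one in the plot above points to a current-range-dependent offset.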

Stay tuned for further instrument calibration tests that can be performed using this simple constant k-ratio data set.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 19, 2022, 12:28:11 PM
And now we come to the third (and to my mind, the elephant in the room) aspect in the use of these constant k-ratios, as originally described by John Fournelle and myself.

We've already described how we can use the constant k-ratio method (pairs of k-ratios measured at multiple beam currents) to adjust our dead time constant to obtain a constant k-ratio over a large range of count rates (beam currents), from zero to several hundred thousand (300K to 400K) cps, as we should expect from our spectrometers (since the lowest count rate k-ratios should be the least affected by dead time effects).  Note that by measuring both the primary and secondary standards at the *same* beam current, we "null out" any non-linearity in our picoammeter.

Next we've described how one can check our picoammeter linearity by utilizing, from the same data set, a single primary standard measured at one beam current and plotting our secondary standard k-ratios (from the full range of beam currents) to test the beam current extrapolation, and therefore the linearity, of the picoammeter system.

Finally we come to the third option using this same constant k-ratio data set, and that is the simultaneous k-ratio test.  Now in the past we might have only measured these k-ratios on each of our spectrometers (using the same emission line) at a single beam current as described decades ago by Paul Carpenter and John Armstrong.  But our constant k-ratio data set (if measured on all spectrometers using say the Ti Ka line on LIF and PET and the Si Ka line on PET and TAP), already contains these measurements, so let's just plot them up as seen here:

(https://probesoftware.com/smf/gallery/395_19_08_22_12_07_26.png)

Immediately we can see that two of these spectrometers (from Anette von der Handt's JEOL instrument at UBC) are very much in agreement, but spectrometer 2 is off by some 4% relative.  This is not an uncommon occurrence (I see the same effect on spectrometer 3 of my old Cameca instrument).  Please note that when I first measured these simultaneous k-ratios when my instrument was new, they were all within a percent or two as seen here:

https://probesoftware.com/smf/index.php?topic=369.msg1948#msg1948

but a discrepancy of this size is concerning in a new instrument.  Have you checked your own instrument?  Here are the results I obtained during my instrument acceptance testing (see section 13.3.9):

https://epmalab.uoregon.edu/reports/Additional%20Specifications%20New.pdf

Note that spectrometer 3 was slightly more problematic than the other spectrometers even back then...

But it should cause us much concern, because how can we begin to compare our consensus k-ratios from one instrument to another, if we can't even get our own spectrometers to agree with each other on the same instrument?

 :(

If you want to get a nice overview of all three of these constant k-ratio tests, start a few posts above beginning here:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on August 19, 2022, 03:38:05 PM
Are you sure your sample is perpendicular to the beam? I see something similar on our SX100 too. But I am sure that somehow the holder is not perfectly perpendicular to the beam, when compared with the newer SXFiveFE. Also, a recent replacement of the BSE detector on the SX100 made me aware that it (actually the metal cover plate) can affect the efficiency of counting at different spectrometer ranges.

BTW, where is that cold finger or other cryo system mounted? Maybe it is affecting (shadowing) part of the x-rays. Large crystals are particularly sensitive to shadowing effects, as our bitter experience with the newer BSE detector has shown. Gathering a huge number of k-ratios is not in vain: indeed, it will be very helpful in identifying some deeply hidden problems with anomalously behaving spectrometers! Let's not stop, but move on!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 20, 2022, 09:46:31 AM
Are you sure your sample is perpendicular to the beam? I see something similar on our SX100 too. But I am sure that somehow the holder is not perfectly perpendicular to the beam, when compared with the newer SXFiveFE. Also, a recent replacement of the BSE detector on the SX100 made me aware that it (actually the metal cover plate) can affect the efficiency of counting at different spectrometer ranges.

As we have discussed, there are several mechanical issues which might explain one's spectrometers producing different k-ratios.

Some relate to sample tilt, as you say, which is why we should, if possible, measure our constant k-ratios on all of our spectrometers and also (as Aurelien mentioned in the consensus k-ratio procedure) measure these k-ratios at several stage positions several millimeters apart in order to calculate the sample tilt.  Probe for EPMA reports sample tilt automatically when using mounts that have three fiducial markings.

The k-ratios above were acquired on Anette's instrument so I cannot say, but I doubt very much that she could have tilted the sample enough to produce a 4% difference in intensity!

The other mechanical issues can be spectrometer alignment or even asymmetrical diffraction of the Bragg crystals resulting in a difference in the effective takeoff angle.  Even more concerning is spectrometer to column positioning due to manufacturing mistakes, as Caltech discovered on one of their instruments many years ago.

On this JEOL 733 instrument Paul Carpenter and John Armstrong found that the so-called "hot" Bragg crystals provided by JEOL were not diffracting symmetrically, resulting in significant differences in their simultaneous k-ratio testing. 

After that had been sorted out by replacing those crystals with "normal" intensity crystals, they found there were still significant differences in the simultaneous k-ratios, which they eventually tracked down to the column being mechanically displaced from the center of the instrument, enough to cause different k-ratios on some spectrometers.

This was the *last* JEOL 733 delivered as JEOL had already started shipping the 8900.  How many of those older instruments also had the electron column mechanically off-center?  What about your instrument?  There is only one way to find out!    ;D

Measure your constant k-ratios on all your spectrometers (over a range of beam currents) and see if:

1. your dead times are properly calibrated

2. your picoammeter is linear

3. you get the same k-ratios within statistics on all spectrometers


BTW, where is that cold finger or other cryo system mounted? Maybe it is affecting (shadowing) part of the x-rays. Large crystals are particularly sensitive to shadowing effects, as our bitter experience with the newer BSE detector has shown. Gathering a huge number of k-ratios is not in vain: indeed, it will be very helpful in identifying some deeply hidden problems with anomalously behaving spectrometers! Let's not stop, but move on!

I don't know about Anette's instrument, but on our SX100 we see the same sorts of differences in one spectrometer and we have no cold finger. We use a chilled baffle over the pump:

https://probesoftware.com/smf/index.php?topic=646.msg3823#msg3823

But that's certainly something worth checking if your simultaneous k-ratios do not agree and you have a cold finger in your instrument.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on August 21, 2022, 10:45:44 AM
One more real life experience which can bias k-ratios: noise increase in the detector-preamplifier circuit. I discovered a month ago that we have such problems on some spectrometers. On Cameca instruments it is very easy to test: set the gain to 4095 while the beam is blanked, and look at the ratemeter. It should show only sporadic jumps at the blue level (a very few counts per second) from cosmic rays (yes, the spectrometer can be hit by those  :o without a problem). If there are 10-1000 cps, there is a potential problem. On a high pressure spectrometer that is not so important, but on a low pressure spectrometer the problem grows in severity. We have some noise getting in at such gain on a few spectrometers of our SX100 and SXFiveFE; in most cases it disappears with the gain reduced to 2000-3000. However, one spectrometer produces not 1000 cps but 100000 cps with the gain at 2500. After inspecting the signal with an oscilloscope I found that it has much higher noise than the signals of the other spectrometers. After opening the metal casing where the preamplifier is placed, I found that one of the HV capacitors is cracked, which is an obvious noise source. Cracking of these disc capacitors probably does not happen in a day, and I guess such a crack could creep, making the noise increase slowly over many years.

Why is noise important? Because if noise leaks into the counting (and triggers counts), then dead-time corrected k-ratios will be biased, as such noise affects the detection more at low count rates than at high count rates. I think older JEOL models (newer ones too?) would be affected even more by such hardware aging, as the background noise is passed to the PHA. On Cameca instruments it is much easier to identify such a problem.
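To put rough numbers on this point, here is a minimal sketch (the 500 cps noise rate and the 0.56 k-ratio are purely illustrative assumptions):

```python
# A constant rate of spurious noise counts biases k-ratios much more at low
# count rates than at high ones (all numbers illustrative).
noise = 500.0   # cps of noise-triggered counts (assumed)
k_true = 0.56
for n_primary in (10e3, 100e3, 500e3):
    n_secondary = k_true * n_primary
    k_obs = (n_secondary + noise) / (n_primary + noise)
    print(f"{n_primary/1e3:5.0f} kcps on primary: observed k = {k_obs:.4f}")
```

At 10 kcps the observed k-ratio is biased from 0.56 up to about 0.581, while at 500 kcps the bias nearly vanishes, so any calibration anchored at low count rates inherits this bias.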
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 22, 2022, 08:39:43 AM
This reminds me of one of the tests that John Armstrong performed on one of the first JEOL 8530 instruments at Carnegie Geophysical.  I think he called it the "beam off test", because he would test the PHA electronics when the beam was turned off, and what he found was that he was seeing x-ray counts with no electron beam!

These counting artifacts were eventually tracked down to JEOL changing suppliers for some chips in the spectrometer pre-amp which were noisier than the original specification and therefore causing these spurious counts.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 22, 2022, 09:39:56 AM
Parametric Constant:

a. A constant in an equation that varies in other equations of the same general form, especially such a constant in the equation of a curve or surface that can be varied to represent a family of curves or surfaces

At this point I think it would be helpful to discuss what we mean when we refer to the dead time as a "constant".

Because if there is one thing we can be sure of, it's that the dead time is not very constant since it can obviously vary from one spectrometer to another!  More importantly it can even vary for a specific detector as it becomes contaminated or as the electronic components age over time. I've even seen the dead time constant change after replacing a P-10 bottle!  P-10 composition?

In addition, we might suspect that the dead time could vary as a function of x-ray emission energy or perhaps the bias voltage on the detector, though these possible effects are still under investigation by some of us.  I hope some of you readers will join in these investigations.

So this should not be surprising, since the dead time performance of these WDS systems includes many separate effects in both the detector and counting electronics (and possibly even satellite line production!), all of which are convolved together under the heading of the "dead time constant".

But it should also be clear that when we speak of a dead time "constant", what we really mean is a dead time "parametric constant", because it obviously depends on how we fit the intensity data and more importantly, what expression we utilize to fit the intensity data. Here is an example of the venerable dead time calculation spreadsheet from Paul Carpenter plotting up some intensities (observed intensity vs. beam current):

(https://probesoftware.com/smf/gallery/395_22_08_22_8_52_18.png)

The question then becomes: which dead time "constant" should we utilize in our data corrections?  That is, should we be fitting the lowest intensities, the highest intensities, or all the intensities? How then can we call this thing a "constant"?   :D

Here is another thought: when we attempt to measure some characteristic on our instruments, what instrumental conditions do we utilize for measuring that specific characteristic? In other words how do we optimize the instrument to get the best measurement?

For an example by analogy: when characterizing trace elements, we might increase our beam energy to create a larger interaction volume containing more atoms, and we will probably also increase our beam current, and probably also our counting time, as all three changes will improve our sensitivity for that characterization.  But we also want to minimize other instrumental effects which might add uncertainty/inaccuracy to our trace characterization.  So we might, for example, utilize a higher resolution Bragg crystal to avoid spectral interferences, or perhaps a Bragg crystal with a higher sin theta position to avoid curvature of the background.  Or we could utilize better equations for correction of these spectral interferences and curved backgrounds!    8)

Similarly, that is also what we should be doing when characterizing our dead time "constants"!  So to begin with we should optimize these dead time characterizations by utilizing conditions which create the largest dead time effects (high count rates) and we should also apply better equations which fit our intensity data with the best accuracy (over a large range of beam currents/count rates).

So if we are attempting to characterize our dead time "constant" with the greatest accuracy, what conditions (and equations) should we utilize for our instrument?  First of all we should remove the picoammeter from our measurements, because we don't want to include any non-linearity in these measurements, since we are depending on differences in count rates at different beam currents.  The traditional dead time calibration fails in this regard. However, both the Heinrich (1966) ratio method and the new constant k-ratio method exclude picoammeter non-linearity from the calculation by design, so that is good.

But should we be utilizing our lowest count rates to calculate our dead time constants? Remember, the lowest count rates not only provide the lowest sensitivity (worse counting statistics), but also contain the *smallest* dead time effects!  Why would we ever want to characterize a parameter when it is the smallest effect it can possibly be?  Wouldn't we want to characterize this parameter when its effects are most easily observed, that is at high count rates?    :o

But then we need to make sure that our (mathematical) model for this (dead time) parameter properly describes these dead time effects at low and high count rates!  Hence the need for a better expression of photon coincidence as described in this topic.
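As a rough numeric sketch of this point, here is a minimal comparison of the classic linear expression N = N'/(1 - N'·τ) against a logarithmic form N = N'/(1 + ln(1 - N'·τ)) that sums the multiple-coincidence terms. The logarithmic form written here is my assumed reading of the expression discussed in this topic, so treat it as an illustration rather than the definitive formula:

```python
import math

def linear_correction(n_obs, tau):
    """Classic linear (non-paralyzable) correction: N = N'/(1 - N'*tau)."""
    return n_obs / (1.0 - n_obs * tau)

def log_correction(n_obs, tau):
    """Logarithmic form summing the multiple-coincidence terms:
    N = N'/(1 + ln(1 - N'*tau)).  (Assumed form, for illustration.)"""
    return n_obs / (1.0 + math.log(1.0 - n_obs * tau))

tau = 1.32e-6  # seconds; the 1.32 usec value quoted in the posts below
for n_obs in (10e3, 50e3, 100e3, 200e3, 300e3):
    print(f"{n_obs/1e3:5.0f} kcps observed -> "
          f"linear {linear_correction(n_obs, tau)/1e3:8.2f} kcps, "
          f"log {log_correction(n_obs, tau)/1e3:8.2f} kcps")
```

The two models agree closely at low count rates and diverge strongly at high count rates, which is exactly why a dead time "constant" fitted with one model is not directly transferable to the other.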

Next let's discuss sensitivity in our dead time calibrations...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 23, 2022, 10:45:52 AM
Before I continue writing about sensitivity, I just want to emphasize an aspect of the constant k-ratio method that I think is a bit under appreciated by some.

And that is that for the purposes of determining the dead time constant (and/or testing picoammeter linearity and/or simultaneous k-ratio testing), the constant k-ratio method does not need to be performed on samples of known composition. Whatever the k-ratio is observed to be (at a low count rate using a high precision logarithmic or multiple term expression), that is all we require for these internal instrumental calibrations.

The two materials could be standards or even unknowns. The only requirement is that they contain significantly different concentrations of an element, and be homogeneous and relatively beam stable over a range of beam currents.

In fact, they can be coated differently (e.g., oxidized or not), and we could even skip performing a background correction for that matter!  We only care about the ratio of the intensities, measured over a range of beam currents/count rates!  :o   All we really require is a significant difference in the count rates between the two materials and then we can adjust the dead time constant (again using a high precision expression) until the regression line of the k-ratios is close to a slope of zero!

 8)

Of course when reporting consensus k-ratios to be compared with other labs using well characterized global standards, we absolutely must perform careful background corrections and be sure that our instrument's electron accelerating energies, effective take off angles and dead time constants are well calibrated!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 25, 2022, 12:49:26 PM
OK, let's talk about sensitivity and the constant k-ratio method!

We've already mentioned that one of the best aspects of the constant k-ratio method is that it depends on a zero slope regression of k-ratios plotted on the y-axis.  We can further appreciate the fact that the low count rate k-ratios are the least affected by dead time effects, so therefore those k-ratios will be the values that are our "fulcrum" when adjusting the dead time constant. Remember, the exact value of these low count rate k-ratios is not important, only that they should be constant over a range of count rates!  So by plotting these k-ratios with a zero slope regression (a horizontal line) we can arbitrarily expand the y-axis to examine our intensity data with excellent precision.

Now let's go back and look at a traditional dead time calibration plot here (using data from Anette von der Handt) where we have plotted on-peak intensities on the y-axis and beam current on the x-axis:

(https://probesoftware.com/smf/gallery/395_25_08_22_11_56_59.png)

I've plotted multiple points per beam current so we can get a feel for the sensitivity of the plots.  Note that the lower count rates show more scatter than the high count rates; the scatter you are seeing is the natural counting statistics and will be expanded in subsequent plots. Pay particular attention to the range of the y-axis: in this plot we are seeing a variance of around 45%.

The problem for the traditional method is not only that we have a diagonal line which doesn't reveal much sensitivity, but also that we are fitting a linear model to the data and the linear model only works when the dead time effects are minimal.  It's as though we were trying to measure trace elements at low beam currents!  Instead we should attempt to characterize our dead time effects under conditions that produce significant dead time effects. And that means at high count rates!   :)

All non-zero slope dead time calibration methods will suffer from this lack of sensitivity, though the Heinrich method (like the constant k-ratio method) is at least immune to picoammeter linearity problems.  In fact, because the Heinrich ratio method is also a ratio (of the alpha and beta lines), if we simply plotted those Ka/Kb ratios as a function of beam current/count rate (and fit the data to a non-linear model that handles multiple photon coincidence) it would work rather well!

But I feel the constant k-ratio is more intuitive and it is easier to plot our k-ratios as a zero slope regression. And here is what we see when we do that to the same intensity data as above:

(https://probesoftware.com/smf/gallery/395_25_08_22_12_16_01.png)

Note first of all that merely by plotting our intensities as k-ratios (without any dead time correction at all!), our variance has decreased from 54% to 17%!  Again note the y-axis range and how the multiple data points have expanded, showing greater detail. And keep in mind that the subsequent k-ratio plots will always show the low count rate k-ratios right around 0.56, decreasing slightly to 0.55 as we start applying a dead time correction, because with this PETL spectrometer we are seeing serious count rates even at low beam currents (~28K cps at 10 nA on Ti metal!).

Now let's apply the traditional linear dead time expression to these same k-ratios using the JEOL engineer 1.32 usec dead time constant:

(https://probesoftware.com/smf/gallery/395_25_08_22_12_27_17.png)

Our variance is now only 5.4%!  So now we can really see the details in our k-ratio plots as we further approach a zero slope regression. We can also see that we've increased our constant k-ratio range slightly (up to ~80k cps), but above that things start to fall apart.

So now we apply the logarithmic dead time correction (again using the same dead time constant of 1.32 usec determined by the JEOL engineer using the linear assumption):

(https://probesoftware.com/smf/gallery/395_25_08_22_12_33_40.png)

And now we see that our y-axis variance is only 1.1%, but we also notice we are very slightly over-correcting our k-ratios using the logarithmic expression. Why is that?  It's because even at these relatively moderate count rates, we are still observing some non-zero multiple photon coincidences, which the linear dead time calibration model over fits to obtain the 1.32 usec value.  Remember the dead time constant is a "parametric constant", its exact value depends on the mathematical model utilized. 

So by simply reducing the dead time constant from 1.32 to 1.29 usec (a difference of only 0.03 usec!), we can properly deal with all (single and multiple) photon coincidence and we obtain a plot such as this:

(https://probesoftware.com/smf/gallery/395_25_08_22_12_41_19.png)

Our variance is now only 0.5% and our k-ratios are constant from zero to over 300k cps!  And just look at the sensitivity!
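For anyone wanting to try this, here is a minimal sketch of the zero-slope scan on purely synthetic data. The ~2800 cps/nA, k = 0.56 and 1.29 usec figures are borrowed from the numbers above, but the logarithmic form N = N'/(1 + ln(1 - N'·τ)) and the whole forward model are assumptions for illustration only:

```python
import math

def log_correct(n_obs, tau):
    """Assumed logarithmic dead time correction: N = N'/(1 + ln(1 - N'*tau))."""
    return n_obs / (1.0 + math.log(1.0 - n_obs * tau))

def observed(n_true, tau):
    """Invert log_correct by bisection: the raw rate the counter would report."""
    lo, hi = 0.0, 0.6 / tau          # keeps 1 + ln(1 - n*tau) > 0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if log_correct(mid, tau) < n_true:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def slope(xs, ys):
    """Least-squares slope of ys versus xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic run: primary at ~2800 cps/nA (28 kcps at 10 nA), secondary at
# k = 0.56 of that, true dead time 1.29 usec.
tau_true = 1.29e-6
currents = [10, 20, 40, 80, 120, 160, 200]                     # nA
raw_pri = [observed(2800.0 * i, tau_true) for i in currents]
raw_sec = [observed(0.56 * 2800.0 * i, tau_true) for i in currents]

def kslope(tau):
    """Slope of the dead-time-corrected k-ratio regression at a trial tau."""
    k = [log_correct(s, tau) / log_correct(p, tau)
         for s, p in zip(raw_sec, raw_pri)]
    return slope(currents, k)

# Scan 1.00-2.00 usec and keep the tau giving the flattest regression.
best_tau = min((j * 0.01e-6 for j in range(100, 201)),
               key=lambda t: abs(kslope(t)))
print(f"recovered dead time: {best_tau * 1e6:.2f} usec")
```

The scan lands back on the 1.29 usec used to generate the data, since the k-ratio regression only goes flat when the trial dead time matches the one baked into the raw count rates.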
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: NicholasRitchie on August 25, 2022, 01:55:56 PM
Pretty impressive!
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: jlmaner87 on August 25, 2022, 07:47:47 PM
Incredible work John (et al)! I've tried these new expressions on my new SX5 Tactics and am blown away by the results. I am still plotting/processing the data, but I will share it soon.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 26, 2022, 09:03:16 AM
Pretty impressive!

Thank-you Nicholas.  It means a lot to me and the team.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 26, 2022, 09:06:54 AM
Incredible work John (et al)! I've tried these new expressions on my new SX5 Tactics and am blown away by the results. I am still plotting/processing the data, but I will share it soon.

Much appreciated!

Great work by everyone involved.  John Fournelle and I came up with the constant k-ratio concept, and Aurelien Moy, Zack Gainsforth and I came up with the multi-term and logarithmic expressions. Meanwhile, Anette has provided some amazing data from her new JEOL instrument (wait until you see her "terrifying" count rate measurements!).

We could use some more Cameca data, as my instrument has a severe "glitch" around 40 nA. Do you see a similar weirdness around 40 nA on your new Tactics instrument?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: jlmaner87 on August 27, 2022, 07:00:55 AM
I actually skipped 40 nA. I performed k-ratio measurements at 4, 10, 20, 50, 100, 150, 200, and 250 nA. I do see a drop in k-ratio between 20 and 50 nA. The k-ratio values produce (mostly) horizontal lines from 4 to 20 nA, then they decrease (substantially) and form another (mostly) horizontal line from 50 to 250 nA. As soon as I can access the lab computer again, I'll send the MDB file to you.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 27, 2022, 08:47:39 AM
The Cameca instruments switch picoammeter (and condenser?) ranges around 40 to 50 nA so that could be what you are seeing.  SEM Geologist I'm sure can discuss these aspects of the Cameca instrument.

I'll also share some of my Cameca data as I've recently been showing Anette's JEOL because it is a much clearer picture.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 27, 2022, 09:11:16 AM
Here's a different spectrometer on Anette's instrument (spc 5, LIFL) that shows how the sensitivity of the constant k-ratio method can be helpful even at low count rates:

(https://probesoftware.com/smf/gallery/395_27_08_22_8_48_19.png)

First note that at these quite low count rates (compared to spc 3, PETL), the k-ratios are essentially *identical* for the traditional and log expressions (even when using exactly the same DT constants!), exactly as expected.

Second, note the "glitch" in the k-ratios from 50 to 60 nA.  I don't know what is causing this but we can see that the constant k-ratio method, with its ability to zoom in on the y-axis, allows us to see these sorts of instrumental artifacts more clearly.

Because the k-ratios acquired on other spectrometers at the same time do not show this "glitch", I suspect that this artifact is specific to this spectrometer.  More k-ratio acquisitions will help us to determine the source.

Next I will start sharing some of the "terrifying" intensities from Anette's TAPL crystal.    ;D
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on August 29, 2022, 07:12:56 AM
The Cameca instruments switch picoammeter (and condenser?) ranges around 40 to 50 nA so that could be what you are seeing.  SEM Geologist I'm sure can discuss these aspects of the Cameca instrument.

I'll also share some of my Cameca data as I've recently been showing Anette's JEOL because it is a much clearer picture.

Oh Yeah I could :D

Well, it depends on the machine (whether we have a C1 + C2 W/LaB6 column, or a FEG C2 (no C1) column). In the case of the FEG it is supposed to be smooth over the 1-600 nA range; sometimes a crossover can be observed somewhere between 500-1000 nA when the FEG parameters are set wrong, or when the tip is very old and the standard procedure is no longer relevant (i.e. our FEG).

But in the case of the classical C1 and C2 column, the crossover point depends on the cleanliness of the column (its apertures), as the beam crossover point will drift depending on how much the apertures are contaminated. Our SX100 column was not cleaned for 7 years, and there was some funkiness going on in the 40-50 nA range. After cleaning the column, the crossover is no longer at that spot but at very high currents (~500 nA). What I suspect, after seeing the Faraday cup (during column cleaning), is that quite possibly not the whole beam gets into the cup, but in some cases just part of the beam (something like the beam defocusing onto the Faraday cup hole). So physically the picoammeter could be completely OK, but the beam measurement with the Faraday cup inside the column might not capture the beam fully at some ranges (especially at lower currents). That is where this drifting beam crossover could come into the observed discrepancies.

On the other hand, the picoammeter circuit is subdivided into sections: up to 0.5 nA, 0.5-5 nA, 5-50 nA, 50-500 nA, and 500 nA-10 µA(?). It is not completely clear to me how it decides which range to switch to (the column control board tells which range should be selected... no wait, the c.c. board does not decide that, it only transfers the request from the main processing board); probably there are a few measurement loops in the logic to select the most relevant range, and probably this 5*10^x nA boundary is strict only on paper. Finally, only the 50-500 nA and 500 nA-10 µA ranges have potentiometers and can be physically re-calibrated/tuned (albeit I have never needed to do that). Why only those ranges? The job of the picoammeter is really simple: it needs to amplify the received current into the voltage range which the ADC works with. It is a single op-amp, with different feedback resistors for the different ranges. For the highest currents little amplification is needed and thus the feedback resistors are in the kiloohm range, whereas the low currents require high amplification and thus very high value (hundreds of Mohm) resistors are used. In the case of the kiloohm resistors the final resistance can be tuned with a serially connected potentiometer, whereas for the hundreds-of-Mohm resistors no such potentiometers are available (or rather it is not financially feasible).  Anyway, the analog voltage from this conversion is finally measured with a shared 15-bit ADC (+1 bit for sign) (the same ADC as for all the other column parameters, such as high voltage, emission...) and the final interpretation of the converted digital value is buried somewhere in the digital logic (firmware). That is most probably the main VME processor board (Motorola 68020 (old) or PowerQuiccII (new)), as the column control board contains no processing chip (there are some PAL devices on board for VME<->local data control, which are rather too limited for interpretative capabilities).
And then this gets a bit tricky: the firmware is loaded during boot, and AFAIK there is no mechanism for altering the hex files (the files uploaded during machine boot). Also, I know of no commands in the interpreter to calibrate the Faraday cup measurements (albeit there are many cryptic special functions not exposed in the user manuals). I guess such a conversion table could exist in the Cameca SX Shared folder in some binary files of the machine state. How to change the conversion is still a mystery to me.
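A hedged sketch of the range-switching principle described above: one op-amp, one feedback resistor per range, V = I * Rf, then a shared 15-bit ADC. All the resistor values, the 10 V full scale and the exact range limits here are illustrative assumptions, not Cameca's actual values:

```python
FULL_SCALE_V = 10.0   # assumed ADC full-scale voltage
ADC_BITS = 15         # 15-bit magnitude (+1 sign bit), per the description above

RANGES = [            # (upper current limit in A, feedback resistor in ohm) - illustrative
    (0.5e-9, 2.0e10),
    (5e-9,   2.0e9),
    (50e-9,  2.0e8),
    (500e-9, 2.0e7),
    (10e-6,  1.0e6),
]

def measure(i_beam):
    """Pick the lowest range that fits, convert I -> V (single op-amp with a
    per-range feedback resistor), then quantize with the shared ADC."""
    for i_max, r_f in RANGES:
        if i_beam <= i_max:
            v_out = i_beam * r_f                   # transimpedance: V = I * Rf
            lsb = FULL_SCALE_V / (2 ** ADC_BITS)   # ADC step in volts
            return round(v_out / lsb) * lsb / r_f  # current as the firmware sees it
    raise ValueError("beam current beyond picoammeter ranges")

for i in (40e-9, 50e-9, 60e-9):  # currents around the 50 nA range boundary
    print(f"{i*1e9:5.1f} nA -> measured {measure(i)*1e9:.4f} nA")
```

Note how, with these assumed values, the current resolution becomes about ten times coarser just above the 50 nA boundary (the feedback resistor drops from 200 Mohm to 20 Mohm), which illustrates why behaviour can change visibly right where the picoammeter switches ranges.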

Oh you Probeman, you have forced me to look closer at the hardware and convinced me to start being paranoid about how volatile these beam current measurements could be. But not so fast: I tested some time ago how the EDS total input rate holds up with increasing current (total estimated input rate on a Bruker Nano Flash SDD at the smallest aperture vs. the current measured with the FC), and it looked rather linear on both the 20 year old SX100 and the 8 year old SXFiveFE, with smooth transitions at all picoammeter boundaries and a sensible linearity result (not perfectly linear due to pile-ups, of course). Actually, as I write this I just got an idea for an ultimate approach to exact picoammeter linearity measurement with the help of EDS (and EDS has an edge here for such measurements compared to WDS). I will come back soon when I get new measurements and data.

Also, the picoammeter is really pretty simple in design, and I see not many possibilities for it to detune itself (could those potentiometers (un-)screw(-in)? could resistors crack? (albeit I have seen that happen many times on different boards of the SX100, those are power resistors doing a lot of work)). Maybe the conversion tables were set wrongly from the moment of manufacturing, and this problem got caught only recently after using this new calibration by k-ratios method? I just wonder where exactly that problem behind your observed 40-50 nA discontinuity is generated, and how it could be fixed...
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: sem-geologist on August 29, 2022, 07:22:15 AM
Second, note the "glitch" in the k-ratios from 50 to 60 nA.  I don't know what is causing this but we can see that the constant k-ratio method, with its ability to zoom in on the y-axis, allows us to see these sorts of instrumental artifacts more clearly.

I don't believe in technological miracles (especially at lower and comparable prices), and I guess the JEOL picoammeter is forced to be segmented into ranges by the same electronic component availability and precision constraints as on Cameca instruments (even a stupidly simple handheld multimeter has such segmentation). Most likely it is a problem similar to the one on your SX100.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 29, 2022, 08:42:55 AM
Oh you Probeman, you have forced me to look closer at the hardware and convinced me to start being paranoid about how volatile these beam current measurements could be. But not so fast: I tested some time ago how the EDS total input rate holds up with increasing current (total estimated input rate on a Bruker Nano Flash SDD at the smallest aperture vs. the current measured with the FC), and it looked rather linear on both the 20 year old SX100 and the 8 year old SXFiveFE, with smooth transitions at all picoammeter boundaries and a sensible linearity result (not perfectly linear due to pile-ups, of course). Actually, as I write this I just got an idea for an ultimate approach to exact picoammeter linearity measurement with the help of EDS (and EDS has an edge here for such measurements compared to WDS). I will come back soon when I get new measurements and data.

Here are a few examples from my instrument showing this "glitch" around 40 nA.  Our instrument engineer told me recently that he had made some adjustments to the picoammeter circuits, but I have not had time to test again.  I will try to do that as soon as I can.

(https://probesoftware.com/smf/gallery/395_29_08_22_8_27_10.png)

(https://probesoftware.com/smf/gallery/395_29_08_22_8_27_28.png)

Note in the first plot that the glitch occurred at 30 nA!  Note also that I skipped measurements between 30 and 55 nA in the 2nd plot to avoid this glitch!

But here's my problem: the constant k-ratio method should not be very sensitive to the actual beam current since both the primary standard and the secondary standard of the k-ratio are measured at the same beam current. 

And yet the artifact is there on many Cameca instruments.  I do distinctly recall that more than one Cameca engineer has simply told me to "stay away from beam currents near 40 nA".  Maybe it's some sort of beam current "drift" issue?

I also think that if one "sneaks up" on the beam current (using 2 nA increments for example) the instrument can handle setting the beam current properly.  I think Will Nachlas has done some constant k-ratio measurements like this on his SXFive instrument.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 29, 2022, 11:35:28 AM
(https://probesoftware.com/smf/gallery/395_29_08_22_8_27_10.png)

I know I'm not the sharpest knife in the drawer, but sometimes I can stare right at something and just not see it. 
 
You will have noticed in the above plot that we see a "glitch" in the k-ratios at 30 nA. And sometimes we see this "glitch" at 40 nA, or sometimes at 50 nA.  It always seemed to depend on which beam currents we measured on our Cameca instrument just before and just after the "glitch", but I could not determine the pattern.  The thing that always bothered me was that if we are indeed measuring our primary and secondary standards at the same beam current, we should be nulling out any picoammeter non-linearity, so I thought we should not be seeing any of these "glitches" in the k-ratio data.  It's the one reason I switched to looking at Anette's JEOL data, which did not show any of these "glitches" in the k-ratios.

But first, a short digression on something that I believe is unique to Probe for EPMA, and which, under normal circumstances, is a very welcome feature: the standard intensity drift correction.  Now, all microanalysis software performs a beam normalization (or drift) correction, so that intensities are reported as cps/nA. That way, one can not only correct for small changes in beam current over time, but also compare standards and unknowns (and/or elements) acquired at different beam currents, and this correction is applied equally to all element intensities in the sample.

But Probe for EPMA also performs a standard intensity drift correction which tracks the (primary) standard intensities for *each* element over time and makes an adjustment for any linear changes in the standard intensities over time. Basically, if one has acquired more than one set of primary standards,  the program will estimate (linearly) the predicted (primary) standard intensity based on the interpolated time of acquisition of the secondary standard or unknown.

This schematic from the Probe for EPMA User Reference might help to explain this:

(https://probesoftware.com/smf/gallery/395_29_08_22_11_06_45.png)

What this means is that if the standard intensity drift correction is on (as it is by default), and one has acquired more than one set of primary standards, the program will always look for the first primary standard acquired just *before* the specified sample, and also the first primary standard acquired *after* the specified sample.  Then it will estimate what the primary standard intensity should be assuming the intensity drift was linear between those two primary standard acquisitions, and utilize that intensity for the construction of the sample k-ratio.
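A minimal sketch of the interpolation just described (illustrative numbers only; this is not the actual Probe for EPMA code):

```python
def drift_corrected_std_intensity(t_sample, t_before, i_before, t_after, i_after):
    """Linearly interpolate the primary standard intensity (cps/nA) to the
    acquisition time of the sample, as described above."""
    frac = (t_sample - t_before) / (t_after - t_before)
    return i_before + frac * (i_after - i_before)

# Illustrative numbers: primary standard measured at t = 0 h (1000 cps/nA)
# and again at t = 8 h (980 cps/nA); an unknown acquired at t = 2 h then
# uses the interpolated standard intensity for its k-ratio:
print(drift_corrected_std_intensity(2.0, 0.0, 1000.0, 8.0, 980.0))  # 995.0
```

Turning the correction off corresponds to always using i_before, i.e. only the primary standard acquired just before the sample.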

This turns out to be very nice for labs with temperature changes over long runs, where the various spectrometers (and PET crystals) will change their mechanical alignments, and it is applied on an element by element basis. One simply needs to acquire the primary standards every so often, and the Probe for EPMA software will automatically take care of such standard intensity drift issues.  I can't tell you how many times I've been called by a student saying that when they came back in the morning their totals had somehow drifted overnight, hoping there was something they could do to fix it.  And I'd say: sure, just re-run your primary standard again!  And they'd call back: everything is great now, thanks!

But if we turn off the standard intensity drift correction, the Probe for EPMA software will only utilize the primary standard acquired just *before* the secondary standard or unknown sample.  Keep that in mind, please.   So now back to our constant k-ratios.

As you saw in the plot above, I was having trouble understanding why this "glitch" in the constant k-ratios was occurring, and also why it was occurring at sometimes random nA settings, often between 30 nA and 60 nA.

So this morning I started looking more closely at this MnO/Mn k-ratio data, and the first thing I noticed was that I had (correctly) acquired the Mn metal standard first at a specified beam current, and then acquired the secondary MnO standard at the same specified beam current, and likewise for each k-ratio set after that.  So far so good.

But wait a minute, didn't I just say that if the standard intensity drift correction is turned on (as it is by default!), the program will automatically interpolate between the prior primary standard and the subsequent primary standard?  But with the constant k-ratio data set, we always want to be sure that the k-ratio is constructed from two materials measured at the *same* beam current, in order to eliminate any non-linearity in the picoammeter!

So the first thing I did was turn off that darn standard intensity drift correction and then plot the k-ratios using only a single primary standard. Remember, if we only utilize a single primary standard, then we are extrapolating to the beam current measurements for all the secondary standards measured at multiple beam currents and therefore testing the linearity of the picoammeter!
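To see why a single primary standard turns the k-ratio into a picoammeter linearity test, here is a toy numerical sketch (all values invented): the intensities are normalized to cps/nA using the *reported* beam currents, so any range-dependent picoammeter error shows up directly in the k-ratio.

```python
# Toy sketch: k-ratio from a single primary standard, using intensities
# normalized to cps/nA by the reported beam current. All numbers invented.

def k_ratio_single_standard(sec_cps, sec_nA, prim_cps, prim_nA):
    return (sec_cps / sec_nA) / (prim_cps / prim_nA)

# Primary standard measured once, at 10 nA (2000 cps/nA):
prim_cps, prim_nA = 20000.0, 10.0

# Secondary standard measured at several currents. Suppose the picoammeter
# reads 2% low above 50 nA: a true current of 60 nA (yielding 120000 cps)
# is reported as 58.8 nA, so the normalized intensity is inflated.
k_ok  = k_ratio_single_standard(60000.0, 30.0, prim_cps, prim_nA)    # flat at 1.0
k_bad = k_ratio_single_standard(120000.0, 58.8, prim_cps, prim_nA)   # ~2% high
```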

(https://probesoftware.com/smf/gallery/395_29_08_22_10_42_39.png)

And lo and behold, look at the above picoammeter non-linearity when the Cameca changes the beam current range from under 50 nA to over 50 nA. Clearly the picoammeter ranges require adjustment by our instrument engineer! 

But since we now have the standard intensity drift correction turned off, and we measured each primary standard just before each secondary standard, let's re-enable all the primary standards to produce a normal constant k-ratio plot and see what our constant k-ratio plot looks like now (compare it to the quoted plot above):

(https://probesoftware.com/smf/gallery/395_29_08_22_10_42_59.png)

Glitch begone! Somebody slap me please...

So we've updated the constant k-ratio procedure to note that the standard intensity drift correction (in PFE) should be turned off, and that the primary standard should always be acquired just before the secondary standard so the program is forced to utilize the primary and secondary standards measured at the same beam current.  See attached pdf below.
 
Only in this way (in Probe for EPMA at least) is any picoammeter non-linearity truly nulled out in these constant k-ratio measurements.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: John Donovan on August 30, 2022, 11:06:35 AM
If you update your Probe for EPMA software (from the Help menu), you will get a new menu that allows you to access the latest version of the constant k-ratio method procedure also from the Help menu:

(https://probesoftware.com/smf/gallery/1_30_08_22_11_05_06.png)

If you do not have the Probe for EPMA software, but you would still like to perform these constant k-ratio tests on your instrument, start here and read on:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on August 31, 2022, 09:23:23 AM
 
included methods now requires to "calibrate" the """dead time constant""" for every of the methods separately as these "constants" will be at different values depending from dead time correction method used. (i.e. with classical method probably more than 3µs, with probeman et al log, less than 3µs, and Will and 6th term somewhere in between). <sarcasm on>So probably PfS configuration files will address this need and will be a tiny bit enlarged. Is it going to have a matrix of dead time "constants" for 4 methods, and different XTALS, and few per XTAL for low and high angles...? just something like 80 to 160 positions to store "calibrated "dead time constants"" (lets count: 5 spectrometers * 4 XTALS * 4 methods * 2 high/low XTAL positions) - how simple is that?<sarcasm off>

No need for sarcasm  :D , it is quite a reasonable question: that is, if the dead time (parametric) constants vary slightly depending on the exact expression utilized, how will we manage this assortment of expressions and constants? 

This post is a response to that question (since SG asked), but the actual audience for this post is probably the typical Probe for EPMA user: exactly how do we manage all these dead time constants, and perhaps, do we even require so many?

The simple answer is: it's easy.

But before we get into the details of how all this is handled in Probe for EPMA it might be worth noting a few observations: in most cases the differences in the optimized dead time constants between the various expressions are very small (e.g., 1.32 usec vs. 1.29 usec in the case of Ti Ka on PETL). In fact, for normal sized Bragg crystals (as seen in the previous post of Ti Ka on LIFL), we don't see any significant differences in our results up to 50K cps. For most situations, the exact dead time expression and dead time constant utilized will not be an important consideration.  But if we want to utilize large area crystals at high beam currents on pure metals or oxides (not to mention accurately characterizing our dead time constants for general usage), then we will want to perform these calibrations carefully at high beam currents.
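To make the comparison concrete, here is a minimal sketch of the traditional and logarithmic dead time corrections as discussed in this thread (the Willis and six-term variants are omitted; check the cited papers for the authoritative forms before relying on this):

```python
import math

# Sketch of two WDS dead time corrections; tau in seconds, rates in cps.
# Forms as I understand them from this thread (an assumption, not PFE code):
#   traditional: true = observed / (1 - observed*tau)
#   logarithmic: true = -ln(1 - observed*tau) / tau

def dt_traditional(observed_cps, tau):
    return observed_cps / (1.0 - observed_cps * tau)

def dt_logarithmic(observed_cps, tau):
    return -math.log(1.0 - observed_cps * tau) / tau

tau = 1.3e-6   # e.g. 1.3 usec
# The two expressions agree closely at low count rates but diverge as
# observed*tau grows, which is why each needs its own optimized constant:
low  = (dt_traditional(10_000, tau), dt_logarithmic(10_000, tau))
high = (dt_traditional(300_000, tau), dt_logarithmic(300_000, tau))
```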

That said, it is still not entirely clear how much of an effect emission line energy or bias voltage has on the exact value of the dead time constant. Probeman's initial efforts on the question of emission line energies are ambiguous thus far (from his Cameca SX100 instrument):

https://probesoftware.com/smf/index.php?topic=1475.msg11017#msg11017

And here is a much larger set of dead times from Philippe Pinard for a number of emission lines, from a few years back on his JEOL 8530 instrument:

https://probesoftware.com/smf/index.php?topic=394.msg6325#msg6325

Pinard's data is also somewhat ambiguous as to whether there is a correlation between emission energy and dead time. Anyway, I will admit that when we started developing software for the electron microprobe we did not anticipate that Probeman might develop new expressions for the correction of dead time, much less that the different expressions would produce slightly different (optimized) dead time constants (it's hard to make predictions, especially about the future!).    :)

So how does Probe for EPMA handle all these various dead time constants? It all starts with the SCALERS.DAT file, which is found in the C:\ProgramData\Probe Software\Probe for EPMA folder (which may need to be unhidden using the View menu in Windows Explorer).

The initial effort to define dead time constants was originally implemented using a single value for each spectrometer. These values are found on line 13 of the SCALERS.DAT file, which can be edited using any plain text editor such as Notepad or Notepad++.

The dead time constants are on line 13 shown highlighted here in red:
     
    "1"      "2"      "3"      "4"      "5"     "scaler labels"
     ""       ""       ""       ""       ""      "fixed scaler elements"
     ""       ""       ""       ""       ""      "fixed scaler xrays"
     2        2        2        2        2       "crystal flipping flag"
     81010    81010    81010    81010    81010   "crystal flipping position"
     4        2        2        4        2       "number of crystals"
     "PET"    "LPET"   "LLIF"   "PET"    "LIF"   "crystal types1"
     "TAP"    "LTAP"   "LPET"   "TAP"    "PET"   "crystal types2"
     "PC1"    ""       ""       "PC1"    ""      "crystal types3"
     "PC2"    ""       ""       "PC25"   ""      "crystal types4"
     ""       ""       ""       ""       ""      "crystal types5"
     ""       ""       ""       ""       ""      "crystal types6"
     2.85     2.8      2.85     3.0      3.0     "deadtime in microseconds"
     150.     150.     140.     150.     140.     "off-peak size, (hilimit - lolimit)/off-peak size"
     80.      80.      70.      80.      70.     "wavescan size, (hilimit - lolimit)/wavescan size"

This line 13 contains the default dead time constants for all Bragg crystals on each WDS spectrometer. The values on this line will be utilized for all crystals on each spectrometer (see below for more on this).

So begin by entering a default dead time constant in microseconds (usec) for each spectrometer on line 13 using your text editor, as determined from your constant k-ratio tests. If you have values for more than one Bragg crystal, just choose one and proceed below.
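If you prefer to script the edit, something like the following would work. This is a hypothetical sketch based only on the format excerpted above; back up your real SCALERS.DAT first, since the actual file format may differ:

```python
import os
import tempfile

# Hypothetical helper to rewrite the per-spectrometer dead time constants
# on line 13 of SCALERS.DAT. The function name and formatting are assumptions
# inferred from the excerpt above, not part of Probe for EPMA.

def set_default_deadtimes(path, deadtimes_usec, line_number=13):
    with open(path) as f:
        lines = f.readlines()
    values = "     ".join(f"{dt:g}" for dt in deadtimes_usec)
    lines[line_number - 1] = f'     {values}     "deadtime in microseconds"\n'
    with open(path, "w") as f:
        f.writelines(lines)

# Demo on a stand-in file with 14 placeholder lines (not a real SCALERS.DAT):
demo = os.path.join(tempfile.mkdtemp(), "SCALERS.DAT")
with open(demo, "w") as f:
    f.write("placeholder\n" * 14)
set_default_deadtimes(demo, [2.85, 2.8, 2.85, 3.0, 3.0])
```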

And if you have dead time constants for more than a single Bragg crystal per spectrometer, you can also edit lines 72 to 77 for each Bragg crystal on each spectrometer (though only up to 4 crystals are usually found in JEOL and Cameca microprobes).

Each subsequent line corresponds to each Bragg crystal listed above on lines 7 to 12. Here is an example with the edited dead time constant values highlighted in red:

     1        1        1        1        1     "default PHA inte/diff modes1"
     1        1        1        1        1     "default PHA inte/diff modes2"
     1        0        0        1        0     "default PHA inte/diff modes3"
     1        0        0        1        0     "default PHA inte/diff modes4"
     0        0        0        0        0     "default PHA inte/diff modes5"
     0        0        0        0        0     "default PHA inte/diff modes6"
     2.8      3.1      2.85     3.1    3.0     "default detector deadtimes1"
     2.85     2.8      2.80     3.0    3.0     "default detector deadtimes2"
     3.0      0        0        3.1      0     "default detector deadtimes3"
     3.1      0        0        3.2      0     "default detector deadtimes4"
     0        0        0        0        0     "default detector deadtimes5"
     0        0        0        0        0     "default detector deadtimes6"
     0        1        1        0        0     "Cameca large area crystal flag1"
     0        1        1        0        0     "Cameca large area crystal flag2"
     0        0        0        0        0     "Cameca large area crystal flag3"
     0        0        0        0        0     "Cameca large area crystal flag4"
     0        0        0        0        0     "Cameca large area crystal flag5"
     0        0        0        0        0     "Cameca large area crystal flag6"

These dead time constant values on lines 72 to 77 will override the values defined on line 13 if they are non-zero.

For new probe runs, the PFE software will automatically utilize these dead time values from the SCALERS.DAT file, but what about re-processing data from older runs? How can they utilize these new dead time constants (and expressions)?

For example, once you have properly calibrated all your dead time constants using the new constant k-ratio method (as described in the attached document) and would like to apply these new values to an old run, you can utilize this new feature to easily update all your samples in a single run, as described in this link:

https://probesoftware.com/smf/index.php?topic=40.msg10968#msg10968

In addition, it should be noted that Probe for EPMA saves the dead time constant for each element separately (see the Elements/Cations dialog) when an element setup is saved to the element setup database, as seen here:

(https://probesoftware.com/smf/gallery/395_31_08_22_9_04_09.png)

This means that one can have different dead time constants for each element/xray/spectro/crystal combination. So when browsing for an already tuned up element setup, the dead time constant for that element, emission line, spectrometer, crystal, etc. is automatically loaded into the current run. The same is true when loading a sample setup from another probe run. All of this information is loaded automatically, and can of course be easily updated if desired.

Now that said, the dead time correction expression type (traditional/Willis/six term/log) is only loaded when loading a file setup from another run.  In fact, Probe for EPMA will prompt the user when loading an older probe file setup if it finds that newer dead time constants (or a newer expression type) are available, as seen here:

(https://probesoftware.com/smf/gallery/395_31_08_22_11_07_03.png)

This feature prevents the user from accidentally using out of date dead time constants when acquiring new data.

So in summary, there are many ways to ensure that the user can save, recall and utilize these new dead time constants once the SCALERS.DAT file is edited with the new dead time (parametric) constant values.

Bottom line: set the dead time correction type parameter in your Probewin.ini file to 4 to use the logarithmic expression, as shown here:

[software]
DeadtimeCorrectionType=4   ; 1 = normal, 2 = high precision deadtime correction, 3 = super high precision, 4 = log expression (Moy)

Then run some constant k-ratio tests on, say, Ti metal and TiO2.

You will probably notice that most spectrometers with normal sized crystals will yield roughly the same dead time constant, but that your dead time constants on large area crystals may need to be reduced by 0.02 to 0.04 usec or so (probably more like 0.1 to 0.2 usec for Cameca instruments) in order to perform quantitative analysis at count rates over 50K cps.

 8)
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 03, 2022, 09:35:13 AM
Here is something else I just noticed with the Mn Ka k-ratios acquired on my SX100:

(https://probesoftware.com/smf/gallery/395_03_09_22_9_28_23.png)

The PET/LPET crystals are in pretty good agreement, and in fact the k-ratios they yield, around 0.735 (see y-axis), are about right according to a quick calculation from CalcZAF:

ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA KILOVOL                                       
   Mn ka  .00000  .73413  77.445   -----  50.000   1.000   15.00                                       
   O  ka  .00000  .17129  22.555   -----  50.000   1.000   15.00                                       
   TOTAL:                100.000   ----- 100.000   2.000


But the LIF and LLIF spectrometers produce k-ratios about 3 to 4% lower than they should.
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: jlmaner87 on September 08, 2022, 03:07:57 PM
Here are some k-ratio measurements I performed on my Cameca SXFive-Tactis.

Background-corrected count rates (not corrected for dead time) are ~11 kcps on the 4 large crystals and ~3 kcps on the standard crystal (sp4) at 4 nA. Count rates are ~185 kcps and ~113 kcps at 250 nA for large and standard crystals, respectively.

The traditional DT expression seems to work well up to 50 nA (100 kcps or 34 kcps for large and standard crystals, respectively). The logarithmic expression works well up to at least 150 nA (168 kcps on the large crystals), if not higher, especially for the sp4 standard size PET crystal.

Additional details are provided on the attached images.

Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 09, 2022, 11:49:14 AM
James,
This is fantastic data, and congrats on an excellently calibrated instrument.  I love seeing those simultaneous k-ratios all agreeing with each other! 

Hey, did you by any chance acquire PHA scans at both ends of your beam current range? 

The more I think about it, the more that I think that at least some of the deviation from constant k-ratios that we are seeing is due to the tuning of the PHA settings. We really need to make sure that our PHA distributions are above the baselines at both the low count rate/beam current and at the highest count rate/beam current.

Here's an example. When I ran some of my Ti Ka k-ratios on TiO2 and Ti metal, I checked the PHA distributions at both ends of the acquisition, first at 10 nA:

(https://probesoftware.com/smf/gallery/395_03_09_22_8_14_02.png)

and then at 200 nA:

(https://probesoftware.com/smf/gallery/395_03_09_22_8_14_20.png)

This is really important to check especially as we get to these high count rates. 
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 09, 2022, 12:03:09 PM
A new feature that is worth taking advantage of in Probe for EPMA is to plot/export the raw on-peak counts on the x axis rather than the beam current. This is a new plot item found in the Output Standard and Unknown XY Plots menu dialog as seen here:

(https://probesoftware.com/smf/gallery/395_09_09_22_11_53_07.png)

Now when plotting/exporting the raw k-ratios for the secondary standard (the primary standard k-ratio will always be 1.000!), the program will plot/export the raw on-peak counts for the secondary standard.  But it's the count rate on the primary standard that we really care about, since that will generally be a higher concentration/count rate, and therefore more sensitive to the dead time correction. And of course, since the primary standard intensity is in the denominator of the k-ratio, when it loses counts faster (at higher count rates), the k-ratio values will trend up!

So we need to export twice: first select all the primary standards and export the raw on-peak intensities for the primary standards, then select all the secondary standards and export all the k-ratios for the secondary standards.
 
We then combine the raw on-peak counts from the primary standards with the k-ratios from the secondary standards, and obtain a plot like the following:

(https://probesoftware.com/smf/gallery/395_07_09_22_10_01_04.png)
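The combining step described above can be sketched as follows, matching the two exports by beam current setting (the field layout and the numbers are invented for illustration; a real export would be parsed from the PFE output files):

```python
# Sketch: pair primary standard on-peak count rates with secondary standard
# k-ratios measured at the same nominal beam current. Values are made up.

primary = [    # (beam current in nA, primary std raw on-peak cps)
    (10, 20100.0), (50, 99800.0), (200, 388000.0),
]
secondary = [  # (beam current in nA, secondary std k-ratio)
    (10, 0.7342), (50, 0.7348), (200, 0.7361),
]

prim_by_nA = dict(primary)
# Each point becomes (primary cps, k-ratio), ready to plot k-ratio vs the
# count rate that actually drives the dead time correction:
combined = [(prim_by_nA[nA], k) for nA, k in secondary if nA in prim_by_nA]
```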
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 19, 2022, 12:39:54 PM
I want to look at these PHA settings more closely because I think that some of what we are seeing, when performing these constant k-ratio measurements, is due to PHA peak shifting at high beam currents (count rates).

These effects will be different on Cameca and JEOL instruments obviously, so please feel free to share your own PHA scans at low and high count rates so we can try to learn more. This is of course complicated by the fact that on Cameca instruments we tend to leave the bias fixed at a specific voltage (~1320v for low pressure flow detectors and ~1850v for high pressure flow detectors) and then simply adjust the PHA gain setting to position the PHA peak (normally around 2 to 2.5 v in the Cameca 0 to 5 v PHA range). But for the constant k-ratio method we instead want to position the peak slightly *above* the center of the PHA range (at low beam currents), centered roughly around 3 volts or so, to avoid peak shifting from pulse height depression (at higher beam currents).
 
Here for example is Mn Ka on Spc2, LPET, a low pressure flow detector at 30 nA:
 
(https://probesoftware.com/smf/gallery/395_19_09_22_12_13_37.png)

Note that the peak is roughly centered around 3 volts. Now using the same bias voltage of 1320v here is the same peak at 200 nA:

(https://probesoftware.com/smf/gallery/395_19_09_22_12_13_55.png)

Please note that the gain is *exactly* the same for both the 30 nA and the 200 nA scans!   This is really good news, because it means that we don't need to adjust the PHA settings as we go to higher count rates.

But the PHA peak at 200 nA has certainly broadened, and shifted down slightly to 2.5 volts or so (which is why we set it a little to the right of the center of the PHA range to begin with!), probably due to pulse height depression.  Note that even though it has broadened out, because we are in integral mode we don't have to worry about cutting off the high side of the PHA peak.  The important thing is to keep the peak (including the escape peak!) above the baseline cutoff.

How about a high pressure flow detector? This is a PHA scan on Spc3 LLIF which is a high pressure flow detector, first at 30 nA:

(https://probesoftware.com/smf/gallery/395_19_09_22_12_14_10.png)

and again at 200 nA using the same (1850v)  bias voltage:

(https://probesoftware.com/smf/gallery/395_19_09_22_12_14_24.png)

Again, the gain setting is the same, and there is very little change in the PHA peak (though it is, again, slightly shifted down and broadened). Now admittedly we are getting a somewhat lower count rate on the LLIF crystal than on the LPET, so I do want to try this again on Spc3 LPET, but it is still very promising.

Again the take away point: check your PHA distributions at both the lowest and highest count rates to be sure you are not cutting off any emission counts when performing the constant k-ratio method.

On JEOL instruments it is an entirely different story, because usually the gain is fixed and the bias voltage is adjusted. The question is: can we keep the JEOL PHA distributions above the baseline as we get to higher count rates using a single pair of bias and gain values?  Anette's initial data suggests no, we can't:

(https://probesoftware.com/smf/gallery/395_03_09_22_8_06_08.png)

I should mention that this PHA shift effect (more pronounced on the higher concentration Si metal primary standard) would tend to produce a trend in the constant k-ratios, as we see in this post, because the primary standard is in the denominator (and as the primary intensity decreases, the k-ratio tends to increase):

https://probesoftware.com/smf/index.php?topic=1489.msg11230#msg11230

Can we see some more JEOL PHA data at low and high count rates for both P-10 and Xenon detectors?
Title: Re: New method for calibration of dead times (and picoammeter)
Post by: Probeman on September 26, 2022, 09:41:51 AM
Here are the constant k-ratios from Anette's most recent run, first for the TAP spectrometer:

(https://probesoftware.com/smf/gallery/395_26_09_22_9_36_24.png)

When going from the logarithmic to the exponential expression we clearly need to reduce the dead time constant from 1.26 usec to 1.18 usec.  It is interesting that the predicted count rates are slightly different for these two models, at these two slightly different parametric constants, in the middle of the count rate range.

Now for the TAPL crystal (beware it ain't pretty):

(https://probesoftware.com/smf/gallery/395_26_09_22_9_36_39.png)

The logarithmic expression does a pretty good job (at least up until around 450 kcps), but the exponential expression loses it completely as the product of the observed count rate and the dead time exceeds 1/e (so no dead time correction can be applied at higher count rates).
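For reference, here is a sketch of why the exponential (paralyzable) form hits that 1/e wall: inverting observed = true·exp(-true·tau) only has a solution while observed·tau ≤ 1/e. The Newton iteration here is just an illustration of the inversion, not PFE's implementation:

```python
import math

# Sketch of the exponential (paralyzable) dead time correction and its
# hard ceiling: observed = true * exp(-true * tau) peaks at 1/(e*tau),
# so higher observed rates cannot be inverted at all.

def dt_exponential(observed_cps, tau, iters=50):
    if observed_cps * tau > 1.0 / math.e:
        raise ValueError("observed rate exceeds 1/(e*tau); no solution")
    n = observed_cps  # start from the observed rate (lower branch)
    for _ in range(iters):
        f = n * math.exp(-n * tau) - observed_cps
        fp = math.exp(-n * tau) * (1.0 - n * tau)
        n -= f / fp
    return n

tau = 1.18e-6                  # usec value quoted above for the TAP spectrometer
limit = 1.0 / (math.e * tau)   # ~312 kcps observed: the hard ceiling
```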