Author Topic: Detection limits  (Read 2440 times)

Mike Jercinovic

  • Professor
  • ****
  • Posts: 92
    • UMass Geosciences Microprobe-SEM Facility
Detection limits
« on: April 09, 2020, 09:24:07 AM »
First, something simple.  The calculation of CDL in the documentation mentions the reference "Love and Scott (1983)".  Is this actually Scott and Love (1983)?  That is, their book "Quantitative electron-probe microanalysis"?  Or are you referencing some paper that I can't seem to find?

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Detection limits
« Reply #1 on: April 09, 2020, 06:27:35 PM »
First, something simple.  The calculation of CDL in the documentation mentions the reference "Love and Scott (1983)".  Is this actually Scott and Love (1983)?  That is, their book "Quantitative electron-probe microanalysis"?  Or are you referencing some paper that I can't seem to find?

Yes, Scott and Love (1983). Sorry. I will fix the typo.
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Mike Jercinovic

  • Professor
  • ****
  • Posts: 92
    • UMass Geosciences Microprobe-SEM Facility
Re: Detection limits
« Reply #2 on: April 10, 2020, 08:11:06 AM »
Great John, thanks.  Now the next thing: what would you think about including some other calculations for detection limits?  Like the Lifshin (1999) one, which is exactly the same as eq. 9.25 in Goldstein et al. 3rd ed.  Also the Ziebold version, and the Ancey et al. (1978) estimate.  Maybe this gets into another discussion of what we should be favoring for this calculation, as they all yield somewhat different results.  I don't know if people have strong feelings about this one way or another; perhaps leave well enough alone?

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Detection limits
« Reply #3 on: April 10, 2020, 08:51:45 AM »
Great John, thanks.  Now the next thing: what would you think about including some other calculations for detection limits?  Like the Lifshin (1999) one, which is exactly the same as eq. 9.25 in Goldstein et al. 3rd ed.  Also the Ziebold version, and the Ancey et al. (1978) estimate.  Maybe this gets into another discussion of what we should be favoring for this calculation, as they all yield somewhat different results.  I don't know if people have strong feelings about this one way or another; perhaps leave well enough alone?

Hi Mike,
Indeed we have thought about adding another detection limit calculation (I think we've discussed it previously with you), but I haven't seen a compelling reason to yet. They all give slightly different answers, but frankly I do not understand what those differences mean.

There are already two detection limit calculations in PFE: the Scott and Love (1983) equation, based on 3 times the (predicted) standard deviation of the background, and the t-test detection limit calculation based on Goldstein et al. The latter of course only applies to the average, but is useful to compare to the Scott and Love equation.  So I guess we need to distinguish between single point predictions and average predictions.
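For concreteness, the single point calculation can be sketched roughly as follows. This is a minimal Python sketch with a hypothetical function name and hypothetical count rates; the actual PFE code also handles beam drift correction, MAN backgrounds, and the k-factor/ZAF normalization to pure element intensities:

```python
import math

def cdl_single_point(bgd_cps, bgd_time, pure_std_cps, zaf=1.0):
    """Scott and Love (1983) style single point detection limit in wt%:
    3 x the predicted (Poisson) standard deviation of the background,
    converted to a concentration via the pure element count rate."""
    bgd_counts = bgd_cps * bgd_time                # total background counts
    sigma_cps = math.sqrt(bgd_counts) / bgd_time   # predicted background sigma (cps)
    return 3.0 * sigma_cps / pure_std_cps * 100.0 * zaf

# hypothetical numbers: 2 cps background counted for 160 s total,
# 5500 cps (per nominal beam) on the pure element
print(round(cdl_single_point(2.0, 160.0, 5500.0), 5))  # -> 0.0061 wt% (~61 PPM)
```

Note that the "predicted" part is the square root: the background sigma is modeled from counting statistics alone, not measured from the scatter of repeated points.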

In homogeneous materials they give quite similar results.  For instance, consider the synthetic quartz (1.42 PPM Ti) that we showed in the Donovan et al. (2011) paper.  Here are results *without* the aggregate feature:

Un   31 1920 sec on SiO2, Results in Elemental Weight Percents
 
ELEM:       Ti      Ti      Ti      Ti      Ti      Si       O
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL    SPEC    CALC
BGDS:      LIN     LIN     LIN     LIN     LIN
TIME:  1920.00 1920.00 1920.00 1920.00 1920.00     ---     ---
BEAM:   200.76  200.76  200.76  200.76  200.76     ---     ---

ELEM:       Ti      Ti      Ti      Ti      Ti      Si       O   SUM 
XRAY:     (ka)    (ka)    (ka)    (ka)    (ka)      ()      ()
   271 -.00003  .00039  .00003 -.00006  .00051 46.7430 53.2576 100.001
   272  .00010  .00039  .00022 -.00036 -.00030 46.7430 53.2570 100.000
   273  .00003  .00037  .00008  .00007  .00048 46.7430 53.2577 100.002
   274  .00002  .00016  .00015 -.00005  .00009 46.7430 53.2572 100.001
   275 -.00010  .00019 -.00002  .00016  .00009 46.7430 53.2572 100.001

AVER:   .00000  .00030  .00009 -.00005  .00017  46.743  53.257 100.001
SDEV:   .00007  .00011  .00010  .00020  .00034    .000    .000  .00067
SERR:   .00003  .00005  .00004  .00009  .00015  .00000  .00012
%RSD:  4057.70 37.3350 103.073 -404.32 193.525  .00000  .00050
STDS:      922     922     922     922     922     ---     ---

STKF:    .5621   .5621   .5621   .5621   .5621     ---     ---
STCT:   667.34 1600.07 1901.70  531.93  828.32     ---     ---

UNKF:    .0000   .0000   .0000   .0000   .0000     ---     ---
UNCT:      .00     .01     .00     .00     .00     ---     ---
UNBG:      .99    2.63    3.41     .79    1.38     ---     ---

ZCOR:   1.1969  1.1969  1.1969  1.1969  1.1969     ---     ---
KRAW:   .00000  .00000  .00000  .00000  .00000     ---     ---
PKBG:  1.00002 1.00271 1.00077  .99951 1.00155     ---     ---
BLNK#:      27      27      27      27      27     ---     ---
BLNKL: .000142 .000142 .000142 .000142 .000142     ---     ---
BLNKV: .000000 -.00125 -.00272 .000689 -.00040     ---     ---

Detection limit at 99 % Confidence in Elemental Weight Percent (Single Line):

ELEM:       Ti      Ti      Ti      Ti      Ti
   271  .00049  .00033  .00032  .00054  .00046
   272  .00048  .00033  .00032  .00054  .00046
   273  .00049  .00033  .00032  .00054  .00046
   274  .00049  .00033  .00032  .00054  .00046
   275  .00049  .00033  .00031  .00054  .00046

AVER:   .00049  .00033  .00032  .00054  .00046
SDEV:   .00000  .00000  .00000  .00000  .00000
SERR:   .00000  .00000  .00000  .00000  .00000

Detection Limit (t-test) in Elemental Weight Percent (Average of Sample):

ELEM:       Ti      Ti      Ti      Ti      Ti
  60ci  .00008  .00004  .00019  .00014  .00009
  80ci  .00013  .00007  .00032  .00022  .00015
  90ci  .00018  .00009  .00044  .00031  .00021
  95ci  .00024  .00012  .00057  .00040  .00028
  99ci  .00039  .00020  .00095  .00066  .00046

Note that I edited out the analytical sensitivity calculations for brevity.

I would very much appreciate it if someone (such as yourself!) posted here a short tutorial comparing these various detection limit calculations (including the ones you mentioned), explaining what they are attempting to model and what those differences mean.

If we turn on the aggregate feature we get these results:

Un   31 1920 sec on SiO2, Results in Elemental Weight Percents
 
ELEM:       Ti      Ti      Ti      Ti      Ti      Si       O
TYPE:     ANAL    ANAL    ANAL    ANAL    ANAL    SPEC    CALC
BGDS:      LIN     LIN     LIN     LIN     LIN
TIME:  1920.00     .00     .00     .00     .00     ---     ---
BEAM:   200.76     .00     .00     .00     .00     ---     ---
AGGR:        5                                     ---     ---

ELEM:       Ti      Ti      Ti      Ti      Ti      Si       O   SUM 
XRAY:     (ka)    (ka)    (ka)    (ka)    (ka)      ()      ()
   271  .00019  .00000  .00000  .00000  .00000 46.7430 53.2571 100.000
   272  .00012  .00000  .00000  .00000  .00000 46.7430 53.2571 100.000
   273  .00022  .00000  .00000  .00000  .00000 46.7430 53.2571 100.000
   274  .00011  .00000  .00000  .00000  .00000 46.7430 53.2571 100.000
   275  .00007  .00000  .00000  .00000  .00000 46.7430 53.2570 100.000

AVER:   .00014  .00000  .00000  .00000  .00000  46.743  53.257 100.000
SDEV:   .00006  .00000  .00000  .00000  .00000    .000    .000  .00011

SERR:   .00003  .00000  .00000  .00000  .00000  .00000  .00002
%RSD:  43.8974   .0000   .0000   .0000   .0000  .00000  .00008
STDS:      922       0       0       0       0     ---     ---

STKF:    .5621   .0000   .0000   .0000   .0000     ---     ---
STCT:  5529.37     .00     .00     .00     .00     ---     ---

UNKF:    .0000   .0000   .0000   .0000   .0000     ---     ---
UNCT:      .01     .00     .00     .00     .00     ---     ---
UNBG:     9.21     .00     .00     .00     .00     ---     ---

ZCOR:   1.1969   .0000   .0000   .0000   .0000     ---     ---
KRAW:   .00000   .0000   .0000   .0000   .0000     ---     ---
PKBG:  1.00125  .00000  .00000  .00000  .00000     ---     ---
BLNK#:      27       0       0       0       0     ---     ---
BLNKL: .000142       0       0       0       0     ---     ---
BLNKV: -.00129       0       0       0       0     ---     ---

Detection limit at 99 % Confidence in Elemental Weight Percent (Single Line):

ELEM:       Ti      Ti      Ti      Ti      Ti
   271  .00018  .00000  .00000  .00000  .00000
   272  .00018  .00000  .00000  .00000  .00000
   273  .00018  .00000  .00000  .00000  .00000
   274  .00018  .00000  .00000  .00000  .00000
   275  .00018  .00000  .00000  .00000  .00000

AVER:   .00018  .00000  .00000  .00000  .00000
SDEV:   .00000  .00000  .00000  .00000  .00000
SERR:   .00000  .00000  .00000  .00000  .00000

Detection Limit (t-test) in Elemental Weight Percent (Average of Sample):

ELEM:       Ti      Ti      Ti      Ti      Ti
  60ci  .00008     ---     ---     ---     ---
  80ci  .00012     ---     ---     ---     ---
  90ci  .00017     ---     ---     ---     ---
  95ci  .00022     ---     ---     ---     ---
  99ci  .00037     ---     ---     ---     ---

Again without the analytical sensitivity calculations.

Looking at the aggregate results we obtain an average concentration of 1.4 +/- 0.6 PPM, which is what we would expect (using another SiO2 sample analysis as a blank measurement). 

Now we see the single point (Scott and Love) equation predicts a 1.8 PPM detection limit, while the t-test from Goldstein et al. predicts a 3.7 PPM detection limit (both at 99% confidence). That would seem to indicate that our 1.4 PPM result is significant at less than 99% confidence, say 95% confidence for the t-test (or a 2.2 PPM detection limit). Since the slightly larger value from the t-test utilizes the actual measured variance, while the Scott and Love calculation is based on the predicted variance (using a square root), I would take that to mean that the synthetic quartz is slightly inhomogeneous. However, the t-test calculation only includes 5 measurements, so we would expect that number to improve as we performed more measurements in the average. So maybe the results are almost the same...
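To make that comparison concrete, here is a minimal sketch (hypothetical names and numbers) of the t-test expression as I understand it from Goldstein et al., showing how the calculated limit improves with the number of points in the average when the measured scatter stays the same:

```python
import math

# two-tailed Student's t critical values at 99% confidence, keyed by degrees of freedom
T99 = {4: 4.604, 9: 3.250}

def cdl_ttest_99(std_conc_pct, std_cps, meas_sdev_cps, n):
    """t-test detection limit on the average of n points (after Goldstein et al.):
    driven by the *measured* standard deviation, improving as 1/sqrt(n)."""
    t = T99[n - 1]
    return std_conc_pct / abs(std_cps) * math.sqrt(2.0) * t * meas_sdev_cps / math.sqrt(n)

# same measured scatter, more points -> lower calculated detection limit
cdl_5 = cdl_ttest_99(60.0, 5500.0, 0.02, 5)
cdl_10 = cdl_ttest_99(60.0, 5500.0, 0.02, 10)
print(cdl_10 < cdl_5)  # -> True
```

Both the smaller t critical value and the 1/sqrt(n) term reward additional measurements, which is why the 5 point average above still looks pessimistic relative to the single point prediction.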

But as I said, I'm just speculating here.  What are your thoughts on this subject?
« Last Edit: April 10, 2020, 09:42:25 AM by John Donovan »

Mike Jercinovic

  • Professor
  • ****
  • Posts: 92
    • UMass Geosciences Microprobe-SEM Facility
Re: Detection limits
« Reply #4 on: April 10, 2020, 10:31:31 AM »
Thanks John!  I actually just ordered the Scott and Love book (found a used copy on Amazon) as I want to see exactly where they are coming from.  The Ancey one has a good statistical argument behind it, but I am certainly not a statistician!  It always gave sensible results in our view, at least relative to the Ziebold formula.  Mike W. and I have talked occasionally about the validity of the Goldstein et al. t-test on the average.  This is partly philosophical, but we can get into this a bit too.  It implies to us that, in order to produce the lowest calculated CDL, you would always be better off with only one measurement... for example, a single 6000 second measurement compared to ten 600 second measurements.  But is this what we want to do in trying to obtain accurate trace element results?  So, for your example, after aggregating spectrometers you get about a 2 ppm single point detection limit at 99% CI, but the average of five points at 99% CI ends up with a 4 ppm CDL.  We would like to think we are improving our precision and sensitivity by sampling a homogeneous domain a number of times.

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Detection limits
« Reply #5 on: April 10, 2020, 06:44:11 PM »
So, for your example, after aggregating spectrometers you get about a 2 ppm single point detection limit at 99% CI, but the average of five points at 99% CI ends up with a 4 ppm CDL.  We would like to think we are improving our precision and sensitivity by sampling a homogeneous domain a number of times.

Hi Mike,
And that is exactly the problem in a nutshell, as they say.    :)

Because almost every material must have some level of heterogeneity. So by performing a single analysis and assuming three standard deviations based on a square root, we will always obtain a better detection limit than by utilizing the average of several measurements. Not only because of possible (indeed expected) heterogeneity, but also simply because of spectrometer reproducibility, instrument stability, etc.

I think our only recourse is to utilize a material that has the blank concentration *below* the level of detection (for EPMA anyway). So for instance, the SpectroSil glass that has Ti at the 50 PPB level.  That material should always produce a homogeneous, zero net intensity on an EPMA instrument, and therefore give us a true detection limit regardless of whether we perform a single measurement or a series of measurements (aside from instrument stability and reproducibility issues of course).
« Last Edit: April 10, 2020, 06:50:14 PM by John Donovan »

Mike Jercinovic

  • Professor
  • ****
  • Posts: 92
    • UMass Geosciences Microprobe-SEM Facility
Re: Detection limits
« Reply #6 on: April 11, 2020, 01:07:12 PM »
The blank will produce a zero net intensity within counting error, some points slightly above zero, some below.  You just want to be zero at 2 or 3 sigma over a set of analyses.  What would be good to see in something like your trace Ti run would be the propagated error on the concentration at 3 sigma.  Is Ti at 1.9 ppm within 3 sigma of Ti at 2.2 ppm on the next point, and so on.

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Detection limits
« Reply #7 on: April 11, 2020, 01:55:55 PM »
Hi Mike,
Not quite sure I understand what you mean, but the PFE MDB file is attached to this post if you want to mess around with the data:

https://probesoftware.com/smf/index.php?topic=29.msg1183#msg1183

john

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Detection limits
« Reply #8 on: April 15, 2020, 02:23:23 PM »
First, something simple.  The calculation of CDL in the documentation mentions the reference "Love and Scott (1983)".  Is this actually Scott and Love (1983)?  That is, their book "Quantitative electron-probe microanalysis"?  Or are you referencing some paper that I can't seem to find?

Hi Mike,
I fixed the references for Scott and Love (1983) in the latest PFE version for the Reference manual (pdf) and help file.
john

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Detection limits
« Reply #9 on: November 07, 2021, 10:38:46 AM »
Recently we were posting on the (usually) minimal effects of matrix corrections on trace element concentrations (depending of course on the physics details) as seen here:

https://probesoftware.com/smf/index.php?topic=92.msg10342#msg10342

But afterwards when reflecting on using that multi-point background (MPB) data from a recent probe run, it occurred to us (Donovan and Allaz) to think a little more about the calculation of detection limits for MPB data.

Warning: this is going to get "into the weeds" a bit...   :D

When Probe for EPMA acquires WDS intensity data, it saves the counting times used during the acquisition, for both the on-peak and off-peak measurements. This is mostly to document the actual counting times for each data point, since the counting times in a sample acquisition can change on a point by point basis as specified by the user. In addition, depending on various acquisition flags in the software, the actual counting times can also vary.

By the way, if anyone is interested in seeing the actual counting times utilized for each data point acquisition, turn on DebugMode from the Output menu and click the "Raw Data" button from the Analyze! window (but be sure to update PFE to the latest version because the code has recently been improved in this respect).

Basically, when MPB measurements are output or used for quantification, the latest code now checks not only the acquisition counting time originally utilized, but also the actual counting time utilized for the MPB background correction as specified by the "Iterate" parameter. This has implications for the calculation of MPB detection limits, as we will see in a moment.

These considerations of course also apply to "shared" multi-point backgrounds, which are described here:

https://probesoftware.com/smf/index.php?topic=9.msg1579#msg1579

Basically, the code now checks both the MPB "Acquire" and MPB "Iterate" parameters to modify the stored off-peak acquisition times and adjust the (actually utilized) off-peak count times based on these parameters (of which the "Iterate" parameters can be adjusted during data re-processing).

Here is an example of a "shared" off-peak background (now loaded into the MPB arrays using the "Search for Shared Bgds" button), where Si and Ca were acquired together on spectrometer 1 and K and Mn were acquired on spectrometer 3:

Raw Hi-Peak X-ray Count Times:
ELEM:    si ka    k ka   al ka   mg ka   fe ka   ca ka   mn ka
  130G    5.00    5.00    5.00    5.00    5.00   15.00   15.00
  131G    5.00    5.00    5.00    5.00    5.00   15.00   15.00
  132G    5.00    5.00    5.00    5.00    5.00   15.00   15.00

AVER:     5.00    5.00    5.00    5.00    5.00   15.00   15.00

Raw Hi-Peak X-ray Counts (cps/1nA):
ELEM:    si ka    k ka   al ka   mg ka   fe ka   ca ka   mn ka
  130G     .09    1.34     .32     .09     .25     .12     .87
  131G     .04    1.12     .32     .07     .25     .16     .90
  132G     .05    1.16     .37     .08     .25     .13     .87

Raw Lo-Peak X-ray Count Times:
ELEM:    si ka    k ka   al ka   mg ka   fe ka   ca ka   mn ka
  130G   15.00   15.00    5.00    5.00    5.00    5.00    5.00
  131G   15.00   15.00    5.00    5.00    5.00    5.00    5.00
  132G   15.00   15.00    5.00    5.00    5.00    5.00    5.00

AVER:    15.00   15.00    5.00    5.00    5.00    5.00    5.00

Raw Lo-Peak X-ray Counts (cps/1nA):
ELEM:    si ka    k ka   al ka   mg ka   fe ka   ca ka   mn ka
  130G     .08    1.46     .36     .05     .29     .18     .97
  131G     .08    1.41     .33     .07     .29     .22    1.00
  132G     .05    1.38     .42     .09     .24     .15    1.21

Each element has one off-peak on one side and three off-peaks on the other side, and the (DebugMode) counting times now reflect this. And if the "Iterate" parameters are changed for these elements, the displayed counting times (and those utilized in the detection limits calculations) will now reflect this.

So how does this new code affect the detection limit calculations for MPB elements?

Here's another example of a five element (normal) MPB acquisition, first where the "Iterate" parameter is equal to the "Acquire" parameter, so all acquired backgrounds are utilized (here we are only showing the detection limit output to make things less confusing, though the analytical sensitivity calculation for minor/major elements is also very slightly affected by these code changes):

Detection limit at 99 % Confidence in Elemental Weight Percent (Single Line):

ELEM:       Zr      Nb      La      Sr      Ti
   415    .004    .002    .004    .004    .003
   416    .004    .002    .004    .004    .003
   417    .004    .002    .004    .005    .003
   418    .004    .002    .004    .004    .003
   419    .004    .002    .004    .004    .003

AVER:     .004    .002    .004    .004    .003
SDEV:     .000    .000    .000    .000    .000
SERR:     .000    .000    .000    .000    .000

Detection Limit (t-test) in Elemental Weight Percent (Average of Sample):

ELEM:       Zr      Nb      La      Sr      Ti
  60ci    .001    .000    .001    .001    .001
  80ci    .002    .001    .001    .002    .001
  90ci    .002    .001    .002    .002    .002
  95ci    .003    .001    .002    .003    .002
  99ci    .005    .002    .004    .005    .004

So in the above example 4 off-peaks were acquired, and 4 off-peaks were utilized.  Now we set the "Iterate" parameters to "3" for both the high and low MPBs and re-calculate the detection limits:

Detection limit at 99 % Confidence in Elemental Weight Percent (Single Line):

ELEM:       Zr      Nb      La      Sr      Ti
   415    .005    .002    .004    .005    .003
   416    .005    .002    .004    .005    .003
   417    .005    .002    .004    .005    .003
   418    .005    .002    .004    .005    .003
   419    .005    .002    .004    .005    .003

AVER:     .005    .002    .004    .005    .003
SDEV:     .000    .000    .000    .000    .000
SERR:     .000    .000    .000    .000    .000

Detection Limit (t-test) in Elemental Weight Percent (Average of Sample):

ELEM:       Zr      Nb      La      Sr      Ti
  60ci    .001    .000    .001    .001    .001
  80ci    .002    .001    .001    .002    .001
  90ci    .002    .001    .002    .002    .002
  95ci    .003    .001    .002    .003    .002
  99ci    .005    .002    .004    .005    .004

So a very slight change (~10 PPM) for a couple of the elements in the single point detection limits, but no change in the reported t-test values.  And, as expected, the calculated single point detection limits are very slightly higher, exactly as one would expect from reducing the (total) background counting times.  Now let's utilize only 2 of the 4 MPBs on each of the high and low sides by setting the "Iterate" parameter to 2 for each:

Detection limit at 99 % Confidence in Elemental Weight Percent (Single Line):

ELEM:       Zr      Nb      La      Sr      Ti
   415    .006    .003    .005    .006    .004
   416    .006    .003    .005    .006    .004
   417    .006    .003    .005    .006    .004
   418    .006    .003    .005    .006    .004
   419    .006    .003    .005    .006    .004

AVER:     .006    .003    .005    .006    .004
SDEV:     .000    .000    .000    .000    .000
SERR:     .000    .000    .000    .000    .000

Detection Limit (t-test) in Elemental Weight Percent (Average of Sample):

ELEM:       Zr      Nb      La      Sr      Ti
  60ci    .001    .000    .001    .001    .001
  80ci    .002    .001    .001    .002    .001
  90ci    .002    .001    .002    .002    .002
  95ci    .003    .001    .002    .003    .002
  99ci    .005    .002    .004    .005    .004

Again, very slightly higher values in the calculated single point detection limits and no observable change in the calculated t-test detection limits, which makes sense as the t-test calculation relies primarily on the measured variance of the averages.
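The size of these shifts follows from simple counting statistics: with a roughly constant background count rate, the predicted background sigma (and hence the single point detection limit) scales as one over the square root of the total background counting time. A minimal sketch, assuming equal counting time per off-peak point (function name hypothetical):

```python
import math

def cdl_scale_factor(n_acquired, n_used):
    """Relative increase in the predicted-background detection limit when only
    n_used of the n_acquired multi-point backgrounds are utilized
    (sigma_B ~ 1 / sqrt(total background counting time))."""
    return math.sqrt(n_acquired / n_used)

print(round(cdl_scale_factor(4, 3), 3))  # -> 1.155 (keep 3 of 4: ~15% higher CDL)
print(round(cdl_scale_factor(4, 2), 3))  # -> 1.414 (keep 2 of 4: ~41% higher CDL)
```

Those factors are consistent with the small upward steps seen in the tables above, once the two decimal place rounding of the output is taken into account.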

Anyway, sorry to get into these tedious details, but now the calculated MPB detection limits should be (subtly) more accurate.
« Last Edit: November 07, 2021, 02:14:34 PM by John Donovan »

Axel Wittmann

  • Student
  • *
  • Posts: 4
Re: Detection limits
« Reply #10 on: May 20, 2022, 09:11:16 AM »
Hi John,

Is it possible to add more detail to the description of detection limit calculations (the "Calculate Detection Limits" chapter on pages 281 and following in the PROBEWIN manual)? I have been asked to reproduce the detection limit calculation that PfE does (CDL99) but I have not been able to do so. (I admittedly only spent a few hours on this chore; however, I think it is unfortunate how opaque the description of this calculation is in the PROBEWIN manual, given the importance of reporting detection limits - and actually being able to reproduce them.)
Two issues concern me in particular. First, the CDL99 outputs are for a confidence interval that requires several measurements, yet these values are provided for single elements' concentrations in single analysis spot outputs. Does this mean that these confidence intervals were calculated from counts acquired over the measurement time on the peaks and backgrounds? This is not obvious from the description in the manual.
Second, Scott and Love (1983) and the PROBEWIN manual point out that an integral part of the detection limit calculation is the count rate on a "pure element" (underlined and bold-faced to indicate importance?). Still, the CDL99 calculations are made for whatever standard was used for the respective measurements (how can "Durango Apatite" be considered a "pure element" standard for P?).
I would really like to see how the calculation is done, step by step, from an actual example.

Best regards,
Axel

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Detection limits
« Reply #11 on: May 20, 2022, 10:33:36 AM »
There are two detection limit calculations in Probe for EPMA.  One of the equations requires pure element standard intensities, so those are calculated automatically.

To learn more about how these detection limits are calculated, check the DebugMode option in the Output menu to see all the intermediate calculation steps, though you might want to select only a single analysis because there is a lot of output to the log window! See the Analyze Selected Lines button in the Analyze! window.

One calculation is for single analysis points, based on the method of Scott and Love (1983), which is essentially 3 times the background standard deviation. The code for this calculation is shown here:

Code:
Function ConvertDetectionLimits2(datarow As Integer, chan As Integer, tRowUnkCors() As Single, tRowStdCts() As Single, analysis As TypeAnalysis, sample() As TypeSample) As Single
' Calculate detection limit for a single element (off-peak measurement)

ierror = False
On Error GoTo ConvertDetectionLimits2Error

Dim temp As Single, bgdcount2 As Single
Dim tmsg As String

' Default is zero
ConvertDetectionLimits2! = 0#

' Normalize standard counts to pure element counts by applying the std k-factor (0 = phi/rho/z, 1,2,3,4 = alpha fits, 5 = calibration curve, 6 = fundamental parameters)
If CorrectionFlag% = 0 Or CorrectionFlag% = 5 Then
If analysis.StdAssignsKfactors!(chan%) = 0# Then Exit Function
stdcps100! = tRowStdCts!(datarow%, chan%) / analysis.StdAssignsKfactors!(chan%)

' Alpha-Factors (calculate alpha k-fac = conc/beta)
ElseIf CorrectionFlag% > 0 And CorrectionFlag% < 5 Then
If analysis.StdAssignsBetas!(chan%) = 0# Then Exit Function
temp! = (analysis.StdAssignsPercents!(chan%) / 100#) / analysis.StdAssignsBetas!(chan%)
If temp! = 0# Then Exit Function
stdcps100! = tRowStdCts!(datarow%, chan%) / temp!     ' leave std counts in cps/nominal beam

' Fundamental parameter corrections
ElseIf CorrectionFlag% = MAXCORRECTION% Then

End If

' Check for valid stdcps100 (pure element intensity)
If stdcps100! = 0# Then Exit Function

' Determine background count time for unknown (0=off-peak, 1=MAN, 2=multipoint)
If sample(1).BackgroundTypes%(chan%) <> 1 Then

' 0=linear, 1=average, 2=high only, 3=low only, 4=exponential, 5=slope hi, 6=slope lo, 7=polynomial, 8=multi-point
If sample(1).OffPeakCorrectionTypes%(chan%) = 2 Then
bgdtime! = sample(1).HiTimeData!(datarow%, chan%)         ' high only
ElseIf sample(1).OffPeakCorrectionTypes%(chan%) = 3 Then
bgdtime! = sample(1).LoTimeData!(datarow%, chan%)         ' low only
Else
bgdtime! = sample(1).HiTimeData!(datarow%, chan%) + sample(1).LoTimeData!(datarow%, chan%)   ' all other off peak types
End If

Else
bgdtime! = sample(1).OnTimeData!(datarow%, chan%)         ' use on-peak time for MAN
End If

' Check for valid count times on background
If bgdtime! <= 0# Then Exit Function

' Determine unknown beam current for each element
If Not sample(1).CombinedConditionsFlag Then
bgdbeam! = sample(1).OnBeamData!(datarow%, chan%)         ' use OnBeamData in case of aggregate intensity calculation (use average aggregate beam)
Else
bgdbeam! = sample(1).OnBeamDataArray!(datarow%, chan%)    ' use OnBeamDataArray in case of aggregate intensity calculation (use average aggregate beam)
End If

' De-normalize unknown background counts for time and beam
bgdcount! = sample(1).BgdData(datarow%, chan%) * bgdtime!
Call DataCorrectDataBeamDrift2(bgdcount!, bgdbeam!)
If ierror Then Exit Function

' Take square root to get gaussian standard deviation of background
If bgdcount! < 0# Then
If DebugMode Then
tmsg$ = "ConvertDetectionLimits2: negative background counts (" & Format$(sample(1).BgdData(datarow%, chan%)) & ", unable to calculate detection limits on channel " & Format$(chan%) & "..."
Call IOWriteLogRichText(tmsg$, vbNullString, Int(LogWindowFontSize%), vbMagenta, Int(FONT_REGULAR%), Int(0))
End If
Exit Function
End If

' Calculate off-peak variance for this element and row (0=off-peak, 1=MAN, 2=multipoint)
If sample(1).BackgroundTypes%(chan%) <> 1 Then
bgddevraw! = Sqr(bgdcount!)                                                                    ' off peak or multi-point bgd only variance, assume gaussian statistics

' Calculate MAN (net) variance for this element and row (0=off-peak, 1=MAN, 2=multipoint)
Else
bgddevraw! = ConvertDetectionLimits3!(datarow%, chan%, analysis, sample())                   ' MAN net variance, calculate using Jared Singer MAN sensitivity expressions
End If

' Re-normalize modeled background variance to cps and nominal beam again
bgddevcps! = bgddevraw! / bgdtime!
Call DataCorrectDataBeamDrift(bgddevcps!, bgdbeam!)
If ierror Then Exit Function

' Load off-peak bgd only dev cps for log debug output below
If sample(1).BackgroundTypes%(chan%) <> 1 Then
bgd_onlydevcps! = bgddevcps!
End If

' Re-normalize raw background count to cps and nominal beam again (for ConvertDetectionLimits5 call below)
bgdcount2! = bgdcount! / bgdtime!
Call DataCorrectDataBeamDrift(bgdcount2!, bgdbeam!)
If ierror Then Exit Function

'UseSingerMANExpressionsFlag = False
UseSingerMANExpressionsFlag = True
If (sample(1).BackgroundTypes%(chan%) = 1 And UseSingerMANExpressionsFlag) Or ConvertDataIsNthPoint(datarow%, chan%, sample()) Or (sample(1).BackgroundTypes%(chan%) <> 1 And NthPointCalculationFlag) Then   ' calculate traditional sensitivity
ConvertDetectionLimits2! = 2# * bgddevcps! / stdcps100! * 100# * tRowUnkCors!(datarow%, chan%)                                                ' net intensity statistics- assume 2x net variation for CDL for MAN and Nth point
Else
ConvertDetectionLimits2! = 3# * bgddevcps! / stdcps100! * 100# * tRowUnkCors!(datarow%, chan%)                                                ' bgd intensity only statistics- assume 3x bgd variation for CDL for off-peak
End If

' Save bgd intensity and variance to array for statistics output to log window
If DebugMode Then
Call ConvertDetectionLimits5(chan%, bgdcount2!, bgd_onlydevcps!, sample())
If ierror Then Exit Function
End If

Exit Function

' Errors
ConvertDetectionLimits2Error:
MsgBox Error$, vbOKOnly + vbCritical, "ConvertDetectionLimits2"
ierror = True
Exit Function

End Function

The above code only applies to off-peak measurements, because the statistics for MAN backgrounds are completely different. See Donovan et al., 2016 for a detailed discussion of the MAN background statistics.

For the calculation of average detection limits using a t-test, the calculations are based on equations in Goldstein et al. (1992), and those values are dominated by the measured concentration variance. The code that performs these calculations is shown here:

Code:
' Detection limits (mode% = 4)
If mode% = 4 Then

' Calculate average standard counts (already corrected for aggregate intensities if flagged)
Call MathArrayAverage(averstd, tRowStdCts!(), sample(1).Datarows%, sample(1).LastElm%, sample())
If ierror Then Exit Sub

For j% = 1 To MAXCI%                 ' calculate MAXCI% different confidence intervals (60 to 99%)
Call StudentGetT(df!, Alpha!(j%), t!)
If ierror Then Exit Sub

For chan% = 1 To sample(1).LastElm%
ip% = IPOS2(NumberofStandards%, sample(1).StdAssigns%(chan%), StandardNumbers%())
If ip% > 0 And averstd.averags!(chan%) <> 0# Then
analysis.CalData!(j%, chan%) = analysis.StdPercents!(ip%, chan%) / Abs(averstd.averags!(chan%)) * Sqr(2#)
analysis.CalData!(j%, chan%) = analysis.CalData!(j%, chan%) * t! * average.Stddevs!(chan%) / Sqr(sample(1).GoodDataRows%)
End If
Next chan%
Next j%

Exit Sub
End If

Let us know if you have further questions.
« Last Edit: May 20, 2022, 03:16:29 PM by John Donovan »

Axel Wittmann

  • Student
  • *
  • Posts: 4
Re: Detection limits
« Reply #12 on: May 20, 2022, 05:46:33 PM »
John,

Thank you very much for your reply. (Remember, I am not a programmer.)

By digesting the forum thread Dan Ruscitto started a while back (https://probesoftware.com/smf/index.php?topic=256.msg1244#msg1244), I figured out how the detection limit calculation works.

That old sausage recipe...

Axel
« Last Edit: May 20, 2022, 07:29:17 PM by John Donovan »

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Detection limits
« Reply #13 on: May 20, 2022, 07:30:02 PM »
OK, right.

Yeah that is a good thread for explaining those details.