Author Topic: Calculating averages close to the detection limit?  (Read 4757 times)

Anette von der Handt

  • Global Moderator
  • Professor
  • *****
  • Posts: 355
    • UMN Probelab
Calculating averages close to the detection limit?
« on: August 30, 2015, 07:05:10 PM »
Hi all,

I just had an interesting question from a user: how do you treat data where you want to calculate an average and some elements are below the detection limit in some analyses and above it in others? How do you deal with a situation like this in a statistically rigorous way?

Thanks!
Against the dark, a tall white fountain played.

Malcolm Roberts

  • Professor
  • ****
  • Posts: 134
Re: Calculating averages close to the detection limit?
« Reply #1 on: August 30, 2015, 10:57:15 PM »
I am not sure if this is of any relevance whatsoever, but exploration geologists assign a value of half the lld to those analyses that fall below the lld. I never liked this, as it asserts that something is there, which may well not be, at levels anywhere from zero to the lld. I guess this has a tendency to generate artificial bimodal populations, which may be acceptable for resource estimation but is probably not suitable for mineral analyses. Have you thought about weeding out the outliers using weighted averages, or ignoring those values < lld? I suppose no one can criticise you if you document your approach.
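
Just to make that half-lld substitution concrete, here is a rough sketch in Python (the element values, lld, and use of numpy are all invented for illustration, not anyone's real data):

Code:
import numpy as np

# Hypothetical trace-element results in wt%; the instrument reports a value
# for every analysis, even when it falls below the detection limit (lld).
measured = np.array([0.012, 0.004, 0.019, 0.002, 0.015, 0.007])
lld = 0.010  # assumed detection limit for this element, wt%

# Exploration-geology convention: replace anything below the lld with lld/2.
substituted = np.where(measured < lld, lld / 2.0, measured)

print("mean of raw values:          %.4f" % measured.mean())
print("mean with lld/2 substituted: %.4f" % substituted.mean())

The substitution shifts the average whenever the true values below the lld are not actually clustered around lld/2, which is the objection above.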

Probeman

  • Emeritus
  • *****
  • Posts: 2858
  • Never sleeps...
    • John Donovan
Re: Calculating averages close to the detection limit?
« Reply #2 on: August 31, 2015, 01:01:26 PM »
Quote from: Anette von der Handt on August 30, 2015, 07:05:10 PM

Hi all,

I just had an interesting question from a user: how do you treat data where you want to calculate an average and some elements are below the detection limit in some analyses and above it in others? How do you deal with a situation like this in a statistically rigorous way?

Thanks!

Hi Anette,
I am not an expert at this sort of thing, but here are my thoughts on this interesting question:

The calculation of an average must include all data: not only data below the detection limit, but even data below zero, that is, "negative concentrations". Why? Because if one "throws out" data based on some (any?) criterion, one introduces a bias into the average.  This was discussed here in some detail:

http://probesoftware.com/smf/index.php?topic=392.msg2104#msg2104
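
A quick simulation makes the bias argument concrete (a sketch with made-up numbers and a hypothetical detection limit, not Probe for EPMA output):

Code:
import numpy as np

rng = np.random.default_rng(0)

# Simulate replicate analyses of a true trace concentration of 0.005 wt%
# with counting noise large enough that some results come back negative.
true_c = 0.005
analyses = rng.normal(loc=true_c, scale=0.004, size=10_000)
dl = 0.008  # hypothetical single-analysis detection limit, wt%

mean_all = analyses.mean()                        # keep everything, incl. negatives
mean_censored = analyses[analyses >= dl].mean()   # "throw out" values below the DL

print("true concentration:      %.4f" % true_c)
print("mean including all data: %.4f" % mean_all)       # close to 0.005
print("mean of values >= DL:    %.4f" % mean_censored)   # biased high

Keeping the negative and sub-detection values lets the noise cancel; censoring them only leaves the high tail and pulls the average up.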

As for calculating detection limits, the difficulty is estimating sensitivity for single measurements, because one has to make some assumptions regarding the error distribution. For off-peak measurements we assume Poisson statistics because the continuum intensity is essentially random, but for MAN background measurements the answer is much more complicated, as discussed here:

http://probesoftware.com/smf/index.php?topic=307.msg3190#msg3190
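
For the off-peak case, here is a minimal sketch of the usual single-measurement estimate, assuming Poisson counting statistics on the background, a 3-sigma criterion, and ignoring matrix corrections (all numbers and variable names are invented):

Code:
import math

bkg_cps = 25.0        # background count rate on the unknown, counts/s
count_time = 30.0     # counting time, s
std_net_cps = 1200.0  # net peak count rate on the standard, counts/s
c_std = 10.0          # concentration of the element in the standard, wt%

n_bkg = bkg_cps * count_time                # expected background counts
dl_counts = 3.0 * math.sqrt(n_bkg)          # smallest detectable net signal, counts
dl_wtpct = c_std * dl_counts / (std_net_cps * count_time)  # scale via the standard

print("approximate 3-sigma detection limit: %.4f wt%%" % dl_wtpct)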

The good news is that for average sensitivity calculations we don't have to guess at the error distributions, because we have already made replicate measurements, and hence all sources of imprecision (or reproducibility) are already included in the calculation of the standard deviation.  In fact, in Probe for EPMA we perform a t-test for the average detection limit (or sensitivity) calculation, as seen in this equation from Goldstein et al., which includes the *measured* standard deviation:
[attached image: analytical sensitivity / detection limit equation from Goldstein et al.]
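For readers without the book at hand, the Goldstein et al. analytical sensitivity expression is usually written along these lines (quoting from memory, so check the text for the exact form):

\Delta C \;=\; C_A \,\frac{\sqrt{2}\; t_{n-1}^{\,1-\alpha}\; S_C}{\sqrt{n}\,\left(\bar{N}_A - \bar{N}_B\right)}

where n is the number of replicate analyses, S_C is the *measured* standard deviation of the peak counts over those analyses, \bar{N}_A and \bar{N}_B are the average peak and background counts, and t_{n-1}^{1-\alpha} is the Student's t value at the chosen confidence level.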
Hence the average detection limit is probably the best estimate of sensitivity for traces.
The only stupid question is the one not asked!