Author Topic: Why does EDS suck (sometimes)?  (Read 8694 times)

Jason in Singapore

  • Post Doc
  • Posts: 15
    • FACTS Lab at NTU
Why does EDS suck (sometimes)?
« on: April 07, 2014, 07:33:39 PM »
Hi all,

A colleague of mine has been asked to review a couple of papers in which normalized EDS data is being used for mineral chemical interpretation. He has asked me for input.

One big issue I see is that, fundamentally, the results they are presenting are semi-quantitative, whereas the analytical standard in geology for decades has been to present fully quantitative results. I have suggested that a table of analyses of some well-characterized mineral reference standards would make the data more convincing, and that some write-up of what calibration standards were used is necessary.

This could easily turn into an essay on...
« Last Edit: April 14, 2014, 05:55:02 PM by John Donovan »
"Truth is relative, belief absolute"

Jeremy Wykes

  • Professor
  • Posts: 42
Re: Why does EDS suck (sometimes)?
« Reply #1 on: April 08, 2014, 02:38:23 AM »
At ANU we have a JEOL JSM6400 from 1986 with an Oxford Link-ISIS EDS system that consistently gets excellent stoichiometry and totals for major elements in silicate, oxide and sulfide minerals. Because we don't calibrate peak profiles and standard intensities every session, I think it is necessary to analyse secondary standards in every session to monitor/demonstrate reproducibility. This is a pain, because the instrument can only take a single 1" mount at a time, so I clock up several extra sample exchanges just for the standards.

It is also necessary to keep track of the beam current; because our SEM does not do this automatically, I measure the beam current with a Faraday cup and picoammeter between every spot, or every few spots depending on stability, and adjust if needed. At the very least, if your secondary standards are reproducible and the stoichiometry and composition are correct, you have a convincing demonstration of the quality of your data. Without that...
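
The correction itself is trivial - the work is in remembering to do it. A minimal sketch of the idea in Python (the function name, the numbers, and the low-dead-time assumption are mine, purely for illustration; this is not anything built into the Link-ISIS software):

Code:
def normalise_to_reference_current(counts, measured_nA, reference_nA=1.0):
    """Scale raw X-ray counts to a nominal reference beam current.

    Assumes the count rate is proportional to beam current (reasonable at
    low dead time) and that the Faraday cup reading is representative of
    the current during the acquisition.
    """
    return counts * (reference_nA / measured_nA)

# Illustrative numbers only: a standard measured at 1.02 nA and an unknown
# measured later at 0.97 nA, both referred back to a nominal 1.00 nA.
std_counts = normalise_to_reference_current(150_000, 1.02)
unk_counts = normalise_to_reference_current(142_000, 0.97)
print(std_counts, unk_counts)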

In some ways the JEOL SEM is better for analysing silicate glasses from experiments than our probe, because it operates at 1 nA and, because it is EDS, it is possible to raster the beam over small areas without penalty, giving a low current density that minimises Na loss. Unfortunately, for glasses you can no longer use stoichiometry as a check on the quality of your analysis.
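
For anyone wondering how much the rastering buys you, the arithmetic is simple: the same current spread over a bigger footprint means a proportionally lower current density. A rough illustration in Python (the spot and raster sizes are made-up examples, not our actual conditions):

Code:
import math

def current_density_nA_per_um2(beam_nA, diameter_um):
    """Current density for a focused beam with a circular footprint."""
    area_um2 = math.pi * (diameter_um / 2.0) ** 2
    return beam_nA / area_um2

focused = current_density_nA_per_um2(1.0, 1.0)   # 1 nA focused to a ~1 um spot
rastered = 1.0 / (10.0 * 10.0)                   # 1 nA rastered over 10 x 10 um
print(f"focused: {focused:.2f} nA/um^2, rastered: {rastered:.2f} nA/um^2")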

With the right approach, you can get excellent results within the limitations of the technique (i.e. abundances >~1 wt. %).
« Last Edit: April 14, 2014, 05:55:18 PM by John Donovan »
Australian Synchrotron - XAS

Probeman

  • Emeritus
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: Why does EDS suck (sometimes)?
« Reply #2 on: April 14, 2014, 04:38:46 PM »
Here are some slides from a talk I gave at NANO-2011 in Portland a few years ago, which I think explain why EDS will never compete with WDS, at least for thin-film and nano-particle characterization.

[slide image]

same spectrum but expanded intensity scale:

[slide image]

and the same goes for nano-particles:

[slide image]
« Last Edit: April 14, 2014, 05:55:33 PM by John Donovan »
The only stupid question is the one not asked!

jon_wade

  • Professor
  • Posts: 82
Re: Why does EDS suck (sometimes)?
« Reply #3 on: April 15, 2014, 03:51:09 PM »
This could be an essay! :)

It's not so much that EDS 'sucks' as that it is routinely used as an 'excuse' for not trying too hard with the analysis.

Got data that doesn't add up to 100?  Sneak it past review and normalise with EDS!
Got poor sample prep?  Use the EDS!
Tiny phases that will give duff data on WDS?  Hit that 'quant!' button on the EDS!
Got a probe, but can't be bothered to learn how to use it?  EDS is your simple chum!

me?  a grudge?  naaaaah. ;)
but I've definitely seen (& reviewed) papers that tick all the above boxes.....

If you use it properly, in a fully quant mode and within reason, I can't see why it shouldn't be good.  On the other hand, there seems to be an increasingly popular current of thought that it's the simple panacea for difficult analyses, and it's this habitual misuse of EDS data that's the worry.

I've certainly used EDS integrated with WDS data.  Having said that, it's as much effort, if not more, to calibrate and use than the WDS.  But all the 'problems' of a good WDS analysis exist in EDS (beam current stability, matrix homogeneity, etc.), only most software systems allow the casual analyst to carry on blissfully unaware. Grr.
« Last Edit: April 15, 2014, 06:10:23 PM by John Donovan »

kthompson75

  • Post Doc
  • Posts: 10
Re: Why does EDS suck (sometimes)?
« Reply #4 on: June 19, 2014, 07:24:24 AM »
Quantitative analysis with EDS can be done very rigorously and can lead to very accurate answers. There are also some situations (albeit a minority) where EDS provides better quantitative answers due to issues such as severe peak overlaps. Steve Seddio has a presentation at M&M comparing EDS and WDS quant for rare earth elements. The KEY REQUIREMENT is rigorous standards, a very stable beam current, and a systematic approach to data collection. Not so different from the requirements for accurate quant with WDS.

HOWEVER, the EDS community at large has lost sight of both the natural limitations of EDS quantitative analysis and the rigor required for good quantitative analysis. There are two major problems with how the community approaches quant with EDS.

1. Several posts here have already hit the first misconception on the head. Most EDS users don't use standards; they simply normalize the "standardless" data and pretend that the results are perfect. In fact, a certain EDS company actually advertises that its standardless quant is "as good as full standards quantitative analysis." Work on enough different SEMs - better yet, plug in a good oscilloscope and watch the actual outbound signal from an EDS detector before it hits the DPP - and this statement becomes comical. To further exacerbate the error, it's amazing how much EDS analysis is done in variable pressure mode, without standards, under the pretense that there is no loss in low-energy x-ray intensity (read: boron or oxygen) relative to the transition metals. It's easy to prove: good quant on any EDS system requires the use of standards.

2. The systematic accuracy in EDS quant is ALWAYS reported based on counting statistics. Want accuracy to 0.001 wt.%? Just collect 10 million x-rays. Only need 0.1% accuracy? Collect only a couple of thousand x-rays. This neglects entire concepts such as peak-to-background, Gauge R&R, and basic machine error.
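
To put numbers on that: if the only error considered is Poisson counting statistics, the relative precision is just 1/sqrt(N), which is where those impressive-sounding figures come from. A quick back-of-the-envelope in Python (nothing instrument-specific here) - and note that none of this says anything about bias from calibration, peak-to-background, or the matrix correction:

Code:
import math

# Relative (1-sigma) precision from Poisson counting statistics alone:
# sigma_N = sqrt(N), so sigma_N / N = 1 / sqrt(N).
for n_counts in (2_500, 1_000_000, 10_000_000):
    rel = 1.0 / math.sqrt(n_counts)
    print(f"{n_counts:>10,d} counts -> {100 * rel:.3f} % relative precision")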

The EDS technology leaders have all done an outstanding job of making EDS more powerful and more available - i.e., easier to use. This has dramatically expanded the community of EDS users, which is a very, very good thing. The sad drawback is that this rapid expansion of the EDS user base has come at the expense of a basic understanding of the underlying principles of EDS.



John Donovan

  • Administrator
  • Emeritus
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Why does EDS suck (sometimes)?
« Reply #5 on: June 19, 2014, 01:08:34 PM »
Quote from: kthompson75 on June 19, 2014, 07:24:24 AM
Quantitative analysis with EDS can be done very rigorously and can lead to very accurate answers. There are also some situations (albeit a minority) where EDS provides better quantitative answers due to issues such as severe peak overlaps. Steve Seddio has a presentation at M&M comparing EDS and WDS quant for rare earth elements. The KEY REQUIREMENT is rigorous standards, a very stable beam current, and a systematic approach to data collection. Not so different from the requirements for accurate quant with WDS.
Hi Keith,
I'm going to agree with most of what you said and nitpick the rest.  First, I would say that since WDS (e.g., Probe for EPMA at least) has had a fully quantitative iterated interference correction implemented, spectral overlaps are easy to diagnose and very easy to correct. EDS is getting there, but there are still more such situations in EDS, with its 10 to 20 times lower spectral resolution. I agree that standards are the key, and as I stated at the Castaing session in Nashville a few years ago: "There is one advantage that WDS will *always* have over EDS, and that is the fact that you can't do WDS quant without standards!"
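
For those who haven't seen how the interference correction works, the idea is straightforward even though the full treatment also folds in the matrix correction: estimate the intensity that the interfering element contributes at the analyte's peak position (from a standard containing the interferer but none of the analyte), subtract it, and iterate because that estimate depends on concentrations you are still solving for. A deliberately over-simplified sketch in Python - the names are mine and the matrix terms are left out entirely, so don't mistake this for what Probe for EPMA actually does:

Code:
def correct_mutual_interference(i_A, i_B, s_A, s_B, b_on_A, a_on_B, n_iter=20):
    """Toy iterated correction for two elements that interfere with each other.

    i_A, i_B    measured net intensities at the A and B peak positions
    s_A, s_B    sensitivities (intensity per unit concentration) for A and B
    b_on_A      intensity contributed at A's peak per unit concentration of B
    a_on_B      intensity contributed at B's peak per unit concentration of A
    """
    c_A = i_A / s_A      # first pass: ignore the overlaps entirely
    c_B = i_B / s_B
    for _ in range(n_iter):
        c_A = max(i_A - c_B * b_on_A, 0.0) / s_A
        c_B = max(i_B - c_A * a_on_B, 0.0) / s_B
    return c_A, c_B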

Quote from: kthompson75 on June 19, 2014, 07:24:24 AM
HOWEVER, the EDS community at large has lost sight of both the natural limitations of EDS quantitative analysis and the rigor required for good quantitative analysis. There are two major problems with how the community approaches quant with EDS.

Quote from: kthompson75 on June 19, 2014, 07:24:24 AM
1. Several posts here have already hit the first misconception on the head. Most EDS users don't use standards; they simply normalize the "standardless" data and pretend that the results are perfect. In fact, a certain EDS company actually advertises that its standardless quant is "as good as full standards quantitative analysis." Work on enough different SEMs - better yet, plug in a good oscilloscope and watch the actual outbound signal from an EDS detector before it hits the DPP - and this statement becomes comical. To further exacerbate the error, it's amazing how much EDS analysis is done in variable pressure mode, without standards, under the pretense that there is no loss in low-energy x-ray intensity (read: boron or oxygen) relative to the transition metals. It's easy to prove: good quant on any EDS system requires the use of standards.

I could not agree more - it's basically a training issue, but also an issue of making the right thing so easy to do in the application that even a lazy man (such as myself) will do it!

Quote from: kthompson75 on June 19, 2014, 07:24:24 AM
2. The systematic accuracy in EDS quant is ALWAYS reported based on counting statistics. Want accuracy to 0.001 wt.%? Just collect 10 million x-rays. Only need 0.1% accuracy? Collect only a couple of thousand x-rays. This neglects entire concepts such as peak-to-background, Gauge R&R, and basic machine error.

OK, just a semantic nitpick here: when speaking of counting statistics, you'll want to say "precision" or "sensitivity" as opposed to "accuracy", which is determined by comparison against secondary standards or a blank specimen.
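
To make the distinction concrete, here is a toy comparison with made-up numbers: the scatter of replicate analyses (or the counting statistics) tells you the precision, while the bias relative to the accepted value of a secondary standard tells you the accuracy.

Code:
import statistics

# Made-up replicate analyses (wt.%) of a secondary standard whose accepted
# value for the element of interest is 10.00 wt.%.
accepted = 10.00
replicates = [10.21, 10.18, 10.25, 10.19, 10.22]

mean = statistics.mean(replicates)
precision_pct = 100 * statistics.stdev(replicates) / mean   # scatter: ~0.3%
bias_pct = 100 * (mean - accepted) / accepted               # bias:    ~+2%

print(f"relative precision: {precision_pct:.2f} %")
print(f"relative error vs. accepted value: {bias_pct:+.2f} %")

A nice, precise set of numbers that is still about 2% off the right answer.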

Quote from: kthompson75 on June 19, 2014, 07:24:24 AM
The EDS technology leaders have all done an outstanding job of making EDS more powerful and more available - i.e., easier to use. This has dramatically expanded the community of EDS users, which is a very, very good thing. The sad drawback is that this rapid expansion of the EDS user base has come at the expense of a basic understanding of the underlying principles of EDS.

Yes, make the system easy enough to use so that even an idiot can use it and... well you know.

My bottom line is this:

1. Use EDS for major elements and, *when possible*, even for light elements because peak shape issues are automatically dealt with.

2. Use WDS for trace elements because of its intrinsically better sensitivity.

As most of you know, this is the reason Probe for EPMA automatically acquires an EDS spectrum with every standard and unknown analysis point - this allows for much flexibility during off-line processing in deciding which elements to quantify with which technique. We construct a normal unk/std net intensity k-ratio from the unknown and standard EDS spectra, which can be calculated for any element in the spectra (assuming you have a standard containing that element). See here for more details:

http://probesoftware.com/smf/index.php?topic=226.msg1052#msg1052
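
For anyone curious what "constructing a k-ratio from the EDS spectra" amounts to arithmetically, it is just the ratio of net intensities, each normalized to its own beam current and counting time, which then goes into the matrix correction. A bare-bones sketch with illustrative numbers (this is not PFE code, and background subtraction and peak fitting are assumed to have happened upstream):

Code:
def k_ratio(unk_net_counts, unk_nA, unk_live_s,
            std_net_counts, std_nA, std_live_s):
    """Net-intensity k-ratio: unknown over standard, each expressed as
    counts per nA per second of live time."""
    unk_rate = unk_net_counts / (unk_nA * unk_live_s)
    std_rate = std_net_counts / (std_nA * std_live_s)
    return unk_rate / std_rate

# Illustrative numbers only.
k = k_ratio(unk_net_counts=48_200, unk_nA=1.01, unk_live_s=60.0,
            std_net_counts=95_500, std_nA=0.99, std_live_s=60.0)
print(f"k-ratio = {k:.4f}")   # this value is what the matrix correction sees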
« Last Edit: June 19, 2014, 06:08:18 PM by John Donovan »
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"