I've been trying to test some of the standards we have in our lab by running a calibration on one standard and then measuring another. I know that for samples with a small amount of an element it is best to use a standard with a small/similar amount of that element. However, I'm curious just how far off I can expect the quantification to be if I use a pure element standard and then try to measure a small amount.
For example: I recently used pure Si as the standard and tried to measure an SiO2 standard. I found myself off by 8%. Is that reasonable? Or does it tell me my standards are bad? (Goodness knows they are old!) Any other theories?
Thanks!
Hi and welcome to the user forum! I think I can explain, and others here will certainly have additional ideas/suggestions for you.
First of all, it is an excellent idea to run one's standards against each other as you mentioned. Selecting one standard as the "primary" standard and running the remaining standards as "secondary" standards is exactly how one can evaluate the accuracy we should expect in our unknowns. The idea is that the primary standard has the best accuracy (based on whatever considerations), and the secondary standards act as a check on the matrix correction physics as we extrapolate to the expected matrix of our unknowns.
For example, we might use MgO as our primary standard for Mg and the NIST K411/412 mineral glasses as secondary standards to check our extrapolation to the unknown, assuming of course that our unknown is a mineral glass!
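To put numbers on that kind of check, here is a minimal sketch, assuming a simple percent-relative-error comparison. The measured value is made up; the accepted value is just stoichiometric Si in SiO2:

```python
# Minimal sketch of a secondary-standard accuracy check.
# The measured value is hypothetical; the accepted value is simply
# stoichiometric Si in SiO2 (28.0855 / 60.084 ~ 46.74 wt%).

def relative_error_pct(measured_wt_pct: float, accepted_wt_pct: float) -> float:
    """Percent relative error of a secondary standard analysis."""
    return 100.0 * (measured_wt_pct - accepted_wt_pct) / accepted_wt_pct

measured = 43.0   # hypothetical Si wt% reported by the quant routine
accepted = 46.74  # accepted Si wt% in the SiO2 secondary standard

print(f"Relative error: {relative_error_pct(measured, accepted):+.1f}%")
# -> Relative error: -8.0%
```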
By the way, "matrix matching" of standards to unknowns is generally unnecessary now, as the matrix correction physics has improved greatly since the early days. That said, the example you mentioned of Si as the primary standard and SiO2 as the secondary standard fails for a well-known reason, and it has nothing to do with the matrix correction physics. The problem here is that the Si Ka emission line has a very significant shift in emission energy depending on the Si-O bond chemistry, because the bonding with oxygen perturbs the energy levels of the L to K transition we are measuring.
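For context, here is roughly how the measured intensities enter the quantification; this is a first-approximation sketch with made-up numbers, not any particular software's algorithm. The point is that a peak shift corrupts the measured k-ratio itself, before any matrix correction is applied:

```python
# First-approximation quantification sketch (illustrative numbers only).
# k-ratio: background-corrected intensity of the unknown relative to the
# primary standard, measured at the same spectrometer position.
k_ratio = 0.43       # hypothetical I_unknown / I_standard for Si Ka
c_standard = 100.0   # wt% Si in the pure-element primary standard
zaf = 1.08           # hypothetical combined matrix (ZAF) correction factor

c_unknown = k_ratio * c_standard * zaf
print(f"Si in unknown: {c_unknown:.1f} wt%")

# If the Si Ka peak shifts between Si metal and SiO2, the two intensities
# are sampled on different parts of the peak, so k_ratio is wrong at the
# source and no matrix correction can recover the lost accuracy.
```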
There are several ways to deal with this if you actually need to measure Si-O compositions with varying chemistry. The most accurate (and slowest) method is to measure the integrated peak intensity rather than simply the peak intensity, as seen here:
http://probesoftware.com/smf/index.php?topic=536.msg2992#msg2992

One can also apply so-called Area Peak Factors (APFs) to the peak intensities, as described here (they come in two flavors: compound and specified):
http://probesoftware.com/smf/index.php?topic=536.msg2946#msg2946

So generally, if we are measuring Si in a non-oxide matrix we would use elemental Si as a primary standard, and if we are measuring Si in an oxide matrix we would use SiO2 as a primary standard.
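To illustrate the difference between the two approaches, here is a rough sketch using synthetic scan data; the peak shape, background, and APF value are all made up for demonstration:

```python
import numpy as np

# Synthetic WDS wavelength scan across the Si Ka peak (arbitrary units).
offset = np.linspace(-0.05, 0.05, 101)  # spectrometer offset from nominal peak
counts = 1000.0 * np.exp(-0.5 * (offset / 0.012) ** 2) + 50.0  # peak + background
background = 50.0  # assume background characterized from off-peak positions

# Peak-top intensity: a single measurement at the nominal peak position.
# This is what gets undersampled when the Si-O bond chemistry shifts the peak.
i_peak = counts[np.argmax(counts)] - background

# Integrated intensity: area under the background-subtracted peak, which is
# largely insensitive to peak shift/shape changes (but slower, since the
# whole peak must be scanned).
i_integrated = np.trapz(counts - background, offset)

# APF approach: correct a peak-top measurement with a tabulated factor for
# the emitter/matrix pair (the value below is purely illustrative).
apf = 0.99
i_peak_corrected = i_peak * apf

print(f"peak-top: {i_peak:.0f}  integrated: {i_integrated:.2f}  "
      f"APF-corrected: {i_peak_corrected:.0f}")
```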
By the way, although Si Ka has a significant peak shift/shape effect, most higher-energy emission lines show much smaller effects, and these can generally be ignored, especially when characterizing trace elements. This is because the accuracy error from the peak shift is usually much smaller than the precision of the measurement. In fact, for trace elements we generally want to use a very high-concentration standard to improve sensitivity (see the discussion here):
http://probesoftware.com/smf/index.php?topic=607.msg3465#msg3465
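As a rough illustration of why the high-concentration standard helps: the standard's contribution to the error budget is counting statistics, which scale as 1/sqrt(N). The count totals below are made up, but the scaling is the point:

```python
import math

# The standard intensity's relative counting error scales as 1/sqrt(N),
# so the more counts we accumulate on the standard, the smaller its
# contribution to the total uncertainty. Count totals are illustrative.

def relative_counting_error_pct(total_counts: float) -> float:
    """One-sigma relative Poisson counting error, in percent."""
    return 100.0 / math.sqrt(total_counts)

pure_standard_counts = 1_000_000  # e.g. pure element at a high count rate
dilute_standard_counts = 10_000   # e.g. a standard with ~1% of the element

print(f"Pure standard:   +/- {relative_counting_error_pct(pure_standard_counts):.2f} %")
print(f"Dilute standard: +/- {relative_counting_error_pct(dilute_standard_counts):.2f} %")
# -> 0.10 % versus 1.00 %: the pure standard's calibration error is negligible.
```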