Anette von der Handt and I are starting to put together some materials for explaining "best practices" in EPMA measurement science.
But to start off, here's an example of "worst practices" in EPMA: using San Carlos olivine as a primary standard for Ni... where the Ni concentration is (roughly!) 0.3 wt%!
Believe it or not, there are actually a few labs out there that think this is a good idea. Clearly they have not thought this through very well: at 0.3 wt%, the Ni Ka count rate on the standard is tiny, so both the counting statistics and the background subtraction on the standard carry large relative uncertainties, and those propagate multiplicatively into every unknown calibrated against it!
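To put rough numbers on that point, here is a quick back-of-the-envelope sketch in Python. The count rate, beam current, and count time are made-up round numbers purely for illustration; only the ~0.3 wt% figure comes from the example above:

```python
import math

# Illustrative (made-up) numbers: suppose pure Ni metal yields ~1000 cps/nA
# on Ni Ka. A standard containing only ~0.3 wt% Ni then yields roughly
# 0.003 of that intensity (ignoring matrix effects, which don't change
# the order of magnitude).
cps_per_na_pure_ni = 1000.0   # hypothetical count rate on Ni metal
c_std = 0.003                 # ~0.3 wt% Ni as a mass fraction
beam_na = 30.0                # hypothetical beam current (nA)
count_time_s = 60.0           # hypothetical count time (s)

counts_metal = cps_per_na_pure_ni * beam_na * count_time_s
counts_olivine = counts_metal * c_std   # San Carlos olivine as the standard

# One-sigma relative precision from counting statistics alone: 1/sqrt(N)
for name, n in [("Ni metal", counts_metal), ("San Carlos olivine", counts_olivine)]:
    print(f"{name:20s} {n:10.0f} counts -> {100.0 / math.sqrt(n):.2f} % rel. (1-sigma)")
```

Under these (hypothetical) conditions the metal standard gives ~0.07 % relative precision while the olivine gives ~1.4 %, and that uncertainty is baked into every unknown measured against it, before background errors on the standard are even considered.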
Here are a few schematics that I put together to try to explain what matters for major elements vs. trace elements with regard to standard selection. I would be very grateful for any comments and/or suggestions to improve these explanations. Here is the general schematic:
All this is an attempt to explain that the standard concentration, matrix correction, and dead time (accuracies) are multiplicative corrections: they dominate at high concentrations, but their effects scale down as the concentration decreases, as shown here:
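To make the multiplicative behavior concrete in numbers as well (the 1 % figure below is an arbitrary hypothetical error, not a claim about any particular instrument):

```python
# A purely multiplicative error, e.g. a 1 % relative error in the standard's
# assigned concentration, produces the same *relative* error at every
# concentration, so its *absolute* effect shrinks with concentration.
rel_err = 0.01                           # hypothetical 1 % relative error
for c_true in [50.0, 5.0, 0.5, 0.05]:    # wt%
    print(f"{c_true:6.2f} wt% -> absolute effect {c_true * rel_err:.4f} wt%")
```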
Background modeling and interference correction accuracy, on the other hand, dominates at trace-level concentrations; because these corrections are subtractive in operation, their absolute effect remains constant regardless of concentration and is therefore negligible at major concentrations, as shown here:
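Putting the two together in the same toy sketch (again, both error magnitudes are arbitrary numbers chosen for illustration): a constant absolute (subtractive) error from background or interference is swamped at the major-element end but dominates at trace levels:

```python
# Combined toy model: a 1 % multiplicative error plus a constant 0.02 wt%
# subtractive error from background modeling / interference correction.
rel_err, abs_err = 0.01, 0.02            # hypothetical error magnitudes
for c_true in [50.0, 5.0, 0.5, 0.05]:    # wt%
    mult = c_true * rel_err              # scales with concentration
    total = mult + abs_err               # subtractive term stays constant
    print(f"{c_true:6.2f} wt%: mult {mult:.4f} wt%, subtractive {abs_err:.4f} wt%, "
          f"total {total:.4f} wt% ({100.0 * total / c_true:.1f} % relative)")
```

At 50 wt% the constant 0.02 wt% term is invisible (~1 % total relative error), while at 0.05 wt% it dominates completely (~41 % relative), which is exactly the crossover the schematics are meant to convey.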
Any comments or suggestions to improve this explanation?