I was recently testing some new code over the weekend, running Probe for EPMA in "demo" (simulation) mode and acquiring some multi-point and off-peak backgrounds on standard samples. As many of you know, this PFE simulation mode is what we use in the classroom to teach EPMA to our students: the simulation is quite realistic, and that way they can all follow along on their laptops. In fact, as I am about to explain, this simulation mode is almost too realistic!
So I acquired some standard data and, just for fun, clicked the Analyze button, and much to my surprise I saw some quite questionable results, as seen here for my orthoclase standard:
ELEM:      Si       K      Al      Mg      Fe      Ca      Mn       O       H      Na      Ba     SUM
63     30.153  12.932   7.960    .014   1.679   -.027    .030  45.798    .000    .675    .054  99.268
64     30.275  12.863   7.804    .024   1.687    .021   -.084  45.798    .000    .675    .054  99.118
65     30.207   7.435   7.829    .009   1.612    .004   -.035  45.798    .000    .675    .054  93.588
AVER:  30.212  11.077   7.864    .016   1.659   -.001   -.030  45.798    .000    .675    .054  97.324
SDEV:    .061   3.154    .084    .008    .041    .024    .057    .000    .000    .000    .000   3.237
SERR:    .035   1.821    .048    .004    .024    .014    .033    .000    .000    .000    .000
What is going on?
Then it struck me (ouch!). Let me explain by going back to actual instrument measurements for a moment. As we know, our various EPMA software packages all have some idea of where the emission lines should appear in our WDS spectrometer ranges. But because our EPMA instruments are not mechanically perfect, we generally need to start a new probe run by tuning our spectrometers so that the on-peak measurement positions are right at the top of the emission peaks, where a slight variance in the WDS spectrometer positioning will not cause a severe change in the recorded intensity. Just imagine how much the intensity would vary if we happened to be measuring our peak intensity on the side of an emission line: there, even a slight change in spectrometer position will produce a large change in the measured intensity.
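To make the tuning idea concrete, here is a minimal sketch of one common peaking approach: scan the spectrometer across the nominal peak position, then fit a parabola through the highest-count point and its two neighbors to estimate the true peak center. This is not PFE's actual peaking algorithm, just an illustration of the idea; all positions and widths below are made-up numbers.

```python
import math

def find_peak_center(positions, counts):
    """Three-point parabolic interpolation around the maximum-count point."""
    i = counts.index(max(counts))
    y1, y2, y3 = counts[i - 1], counts[i], counts[i + 1]
    h = positions[i + 1] - positions[i]        # scan step size
    return positions[i] + h * (y1 - y3) / (2.0 * (y1 - 2.0 * y2 + y3))

# Simulated peak scan: the true center is offset from the nominal position
nominal, true_center = 107.50, 107.62          # spectrometer units (arbitrary)
scan = [nominal - 0.5 + 0.05 * i for i in range(21)]
counts = [math.exp(-0.5 * ((x - true_center) / 0.15) ** 2) for x in scan]

center = find_peak_center(scan, counts)        # recovers ~107.62, not 107.50
```

If we had simply counted at the nominal position (107.50 here), we would be sitting on the side of the peak, which is exactly the situation described above.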
For EDS measurements the actual emission peak positions are not an issue, for two reasons: first, EDS detectors are relatively stable, so the peaks don't move around much; and second, the spectral resolution of EDS detectors is so poor that any detector or electronics instability is usually masked by the extreme widths of the measured emission lines.
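A quick back-of-the-envelope calculation shows why the same small calibration shift hurts a narrow WDS line but barely touches a broad EDS line. Treating each line as a Gaussian, we can ask what fraction of the peak-top intensity we would record if the measurement position is off by a fixed amount. The widths below are assumed round numbers for illustration, not measured values.

```python
import math

def relative_intensity(shift_ev, fwhm_ev):
    """Gaussian peak: fraction of peak-top intensity at a given offset."""
    sigma = fwhm_ev / 2.3548                   # convert FWHM to standard deviation
    return math.exp(-0.5 * (shift_ev / sigma) ** 2)

shift = 5.0                                    # eV, a small calibration error
wds = relative_intensity(shift, 10.0)          # narrow WDS-like line width
eds = relative_intensity(shift, 130.0)         # broad EDS-like line width
```

With these assumed widths, the WDS measurement loses roughly half its peak intensity, while the EDS measurement loses well under one percent.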
Now, back to my demo simulation run. It turns out that in my haste to test my code, I neglected to peak up the spectrometers before acquiring the standard intensities. Now you might say: but this is just a simulation, doesn't the software know where the emission lines will appear in the simulated spectrometer range? And yes, the software does know where they should appear, but the spectral simulation software (in this case the Penepma Monte Carlo physics package) has slightly different theoretical emission positions in its emission energy databases compared to the Armstrong emission line database that PFE has historically utilized. There are also several other factors, such as the Bragg refractive index correction which is applied to each Bragg crystal (even in simulation mode), and probably a few others that I'm not even thinking of.
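As a reminder of what the refractive index correction does, here is a small sketch: the effective crystal 2d spacing is reduced slightly, which shifts the diffraction angle away from the uncorrected Bragg position. The wavelength and 2d values below are only approximate (roughly K Ka on a PET crystal), and the k value is an illustrative number, not the constant any particular software uses.

```python
import math

def bragg_theta_deg(wavelength_A, two_d_A, order=1, k=0.0):
    """Bragg angle in degrees; k applies a refractive-index correction to 2d."""
    effective_2d = two_d_A * (1.0 - k / order**2)   # corrected spacing
    return math.degrees(math.asin(order * wavelength_A / effective_2d))

lam = 3.742          # angstroms, roughly K Ka (approximate)
two_d = 8.742        # angstroms, roughly a PET crystal 2d (approximate)
uncorrected = bragg_theta_deg(lam, two_d)
corrected = bragg_theta_deg(lam, two_d, k=0.000144)  # assumed k, illustrative
```

Even a small k moves the predicted peak position, so a simulator applying this correction will not place the line exactly where a simple Bragg calculation expects it.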
In addition, the simulation mode in PFE is designed so that when one moves to a specified stage or spectrometer position, the software introduces a small amount of error. Just as on an actual EPMA instrument, the position actually arrived at is not exactly the target position, because, you know, reality.
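Conceptually, that positioning error could be sketched something like this: each commanded move lands near, but not exactly at, the target. This is just a minimal illustration of the idea; the error distribution and magnitude here are arbitrary assumptions, not PFE's actual values.

```python
import random

def move_spectrometer(target, reproducibility=0.002):
    """Return the position actually reached after a commanded move.

    Assumed model: Gaussian positioning error with an arbitrary
    'reproducibility' standard deviation in spectrometer units.
    """
    return target + random.gauss(0.0, reproducibility)

random.seed(1)                                  # repeatable for the example
arrived = [move_spectrometer(107.50) for _ in range(5)]
```

Each of the five "moves" lands slightly off 107.50, which is exactly why even a simulated run benefits from peaking the spectrometers.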
So the result is that the simulated WDS emission lines do not appear exactly at the expected Bragg angles (just as they don't appear exactly where they are expected on your actual instrument!). One could call this a bug, but I prefer to think of it as a simulation feature, since it more closely resembles the performance of an actual instrument.
So I then ran the peaking procedure on all the elements and lo and behold, we now obtain these quantitative results:
ELEM:      Si       K      Al      Mg      Fe      Ca      Mn       O       H      Na      Ba     SUM
96     30.395  13.004   8.973    .041   1.706    .015    .014  45.798    .000    .675    .054 100.676
97     30.160  12.873   8.975    .021   1.668   -.003    .039  45.798    .000    .675    .054 100.261
98     30.191  12.888   8.933    .001   1.704   -.006    .017  45.798    .000    .675    .054 100.256
AVER:  30.249  12.922   8.961    .021   1.693    .002    .023  45.798    .000    .675    .054 100.397
SDEV:    .128    .072    .024    .020    .021    .011    .014    .000    .000    .000    .000    .241
SERR:    .074    .042    .014    .012    .012    .006    .008    .000    .000    .000    .000
The K Ka measurements are now quite a bit better (as are the Al Ka values). So this is actually a "teaching moment" to help remind your students that yes, with WDS spectrometers we do need to tune the spectrometer positions to the actual measured peak positions, even when running in simulation mode. The simulation mode in PFE is that realistic!