I am really bombarding this channel with topics... This one is actually much broader and affects other X-ray software too, but since the problem hits me directly in my attempt to measure REE fluorides with DTSA-II, I am writing it here.
What strikes me is how different the REE M lines simulated with DTSA-II are from what is measured. The lighter the REE, the bigger the discrepancy, although even Lu is not perfect. There are a few interconnected things bothering me:
1. Where are the Ba and La Mα lines? They are not listed in the DTSA-II line selector, and that is not the only place where their existence is ignored: Cameca PeakSight ignores them too. Bruker Esprit does not erase them from existence entirely, but it still largely neglects them (see the further points).
2. The relative intensities of the REE M lines are bizarre: the dominating line is Mζ, while the Mα of the LREE is given a ridiculously low weight. Esprit does exactly the same, and where the Mα of lanthanum and barium does exist there, it has the smallest weight.
3. Spectra simulated in DTSA-II follow those intensity weights and, on top of that, introduce peak shifts at the Mα position that are not observed in real spectra.
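To make concrete why wrong tabulated weights matter for points 2 and 3: a fitting code that ties all lines of a family to fixed relative weights has only one free scale for the whole family, so if Mζ is tabulated as dominant, the fit will badly under-predict Mα no matter how it scales. A minimal sketch (hypothetical energies and weight sets, pure Gaussians, no detector model; these numbers are illustrative only, not database values):

```python
import numpy as np

# Hypothetical La M-family energies (keV) and two alternative weight sets.
energies = {"Mz": 0.640, "Ma": 0.833}
weights_tabulated = {"Mz": 1.00, "Ma": 0.15}   # Mz-dominant, as in the tables
weights_measured  = {"Mz": 0.25, "Ma": 1.00}   # Ma-dominant, as measured

def family_model(x, scale, weights, fwhm=0.060):
    """Sum of Gaussians with amplitudes tied by fixed relative weights.
    Only one free parameter (scale) for the whole family."""
    sigma = fwhm / 2.3548
    y = np.zeros_like(x)
    for line, e in energies.items():
        y += scale * weights[line] * np.exp(-0.5 * ((x - e) / sigma) ** 2)
    return y

x = np.linspace(0.5, 1.0, 501)
truth = family_model(x, 100.0, weights_measured)   # stand-in "measured" spectrum

# Best single-scale least-squares fit using the wrong (tabulated) weights:
basis = family_model(x, 1.0, weights_tabulated)
scale_fit = float(np.dot(basis, truth) / np.dot(basis, basis))
residual = truth - scale_fit * basis
print("fitted scale:", round(scale_fit, 2))
print("residual at Ma position:", round(residual[np.argmin(abs(x - 0.833))], 1))
```

The residual left at the Mα position is almost the full Mα peak, which is exactly the kind of misfit one would expect around LREE Mα with Mζ-dominant weights.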
I thought this was related to the self-absorption known for REE M lines (see the attached publication, which demonstrates it), and that the software automatically disables those lines at high voltage. But if I simulate a spectrum at low voltage (3 kV), it still keeps the bizarre intensity ratios, and the Mα lines are non-existent or neglected. This troubles me, because ignoring or neglecting those lines probably means they are also neglected in the fluorescence simulation. That would mean REE Mα generated near the surface by fluorescence (and thus subject to little absorption) is not accounted for. It is probably why auto-referencing of M lines works so poorly.
One more interesting observation. I tried to modify an EDS spectrum by stripping oxygen (removing the equivalent amplitude of the oxygen spectral component generated by the auto-fit for SiO2)... but then I remembered that there is a perfect standard for REE M lines, at least for lanthanum: LaB6. Interestingly, LaB6 gives different proportions between Mζ and Mα (or the collection of bizarre IUPAC line names, if we ignore the existence of Mα). I made a WDS scan on LPC0 and compared it to a LaPO4 scan, and there are some amplitude differences, although on LaB6 the Mα peak gets some flanks, so this is probably some crystal-chemistry effect. That is probably the source of those theoretical marker intensities with the higher Mζ peak. This can have terrible consequences for fluorine measurement in REE-bearing minerals, where the interference correction would depend on the chemical state of Ce in the sample. It still does not explain why the Mα peak in the simulation is so shifted, while the theoretical marker positions fit the measured Mα positions well; it looks like the positions converge at LuPO4 (see attachments).
Maybe the La Mζ in LaPO4 appears relatively small compared to Mα because it is partly absorbed by oxygen?
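That oxygen-absorption hypothesis can at least be checked on the back of an envelope with Beer-Lambert attenuation of the emerging photons. The MAC values below are made-up placeholders (I do not have database values at hand), chosen only to show the direction of the effect if the softer Mζ sees a larger MAC in an O-bearing matrix than Mα does:

```python
import math

# Beer-Lambert attenuation for photons emerging from depth z at take-off angle psi:
#   I/I0 = exp(-(mu/rho) * rho * z / sin(psi))
mac_Mz = 12000.0          # cm^2/g, hypothetical MAC for La Mz in LaPO4
mac_Ma = 6000.0           # cm^2/g, hypothetical MAC for La Ma in LaPO4
rho = 5.1                 # g/cm^3, approximate LaPO4 density
psi = math.radians(40.0)  # take-off angle
z = 50e-7                 # 50 nm mean emission depth, in cm

def transmission(mac):
    return math.exp(-mac * rho * z / math.sin(psi))

t_Mz = transmission(mac_Mz)
t_Ma = transmission(mac_Ma)
# If Mz is absorbed more strongly, the measured Mz/Ma ratio is suppressed
# relative to the generated ratio by the factor t_Mz / t_Ma:
print("Mz transmission:", round(t_Mz, 3))
print("Ma transmission:", round(t_Ma, 3))
print("Mz/Ma suppression factor:", round(t_Mz / t_Ma, 3))
```

With real MACs for LaPO4 versus LaB6 plugged in, this would show whether matrix absorption alone can flip the apparent Mζ/Mα proportions between the two standards.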
If I strip the oxygen peaks from my REEPO4 spectra and use those as references for the M lines, will DTSA-II respect the measured relative line weights, or will it try to fit the theoretical intensities? And if it ignores/neglects the Mα lines of the LREE, how is the fitting going to work?
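For what it's worth, the stripping step itself is straightforward outside any vendor software. A minimal numpy sketch of what I mean by "removing the equivalent amplitude of the oxygen component" (synthetic channel data and made-up peak positions, not DTSA-II's internal representation):

```python
import numpy as np

# Synthetic stand-ins: 2048-channel spectrum at 10 eV/channel. In practice
# "measured" would be the REEPO4 spectrum and "o_reference" the unit-amplitude
# oxygen component taken from the SiO2 auto-fit; here both are made-up Gaussians.
n_ch = 2048
ch = np.arange(n_ch)
o_ka_ch = 52                                   # O Ka ~525 eV -> channel 52

def gauss(center, amp, sigma=6.0):
    return amp * np.exp(-0.5 * ((ch - center) / sigma) ** 2)

measured = gauss(o_ka_ch, 900.0) + gauss(83, 1200.0)   # O Ka + La Ma region
o_reference = gauss(o_ka_ch, 1.0)                      # unit-amplitude O shape

# Fit the O amplitude by least squares over a window around O Ka:
win = slice(o_ka_ch - 20, o_ka_ch + 20)
amp = float(np.dot(o_reference[win], measured[win])
            / np.dot(o_reference[win], o_reference[win]))

stripped = measured - amp * o_reference
stripped = np.clip(stripped, 0.0, None)        # avoid negative counts

print("fitted O amplitude:", round(amp, 1))
print("counts left at O Ka channel:", round(stripped[o_ka_ch], 1))
```

The open question in the post remains, though: even with a cleanly O-stripped reference, the fit only helps if DTSA-II actually uses the measured reference shape rather than re-imposing its tabulated line weights.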