Author Topic: MAN fit - "Fluorescent" standards?  (Read 5768 times)

JakubHaifler

  • Post Doc
  • ***
  • Posts: 12
Re: MAN fit - "Fluorescent" standards?
« Reply #15 on: July 15, 2019, 03:22:46 PM »
Hi Ben and John,

to be honest, I did not have any special physical theory for that phenomenon. I just wanted to point out that I observed the same behaviour on synthetic cheralite, and the observation of a very strong photoluminescence also agreed. But later I thought about it a little more. A strong photoluminescence usually occurs when, e.g., quartz is exposed to an electron beam. Given that the MAN method is applied to quartz, I would guess someone would have observed such a phenomenon.

Many thanks for the explanation. I will need to find something about the alternative expression of the Z-bar.

Best regards, Jakub Haifler.
Department of Geological Sciences
Masaryk University
Brno, Czech Republic

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: MAN fit - "Fluorescent" standards?
« Reply #16 on: July 16, 2019, 12:48:37 PM »
Hi Jakub,
No worries.

Actually I blame myself.  I published on these electron (or Z) fraction based Z-bar calculations for both elastic scattering and continuum production 20 years ago. At the time I thought the effects would not be so significant for normal EPMA work.

But more recently people started utilizing high Z materials in their MAN background fits, which allows for a greater range of compounds with very different A/Z ratios. And Ben Buse restarted the elastic scattering discussion with his re-discovery of the Z-bar effect when running BSE simulations in Penepma and WinCasino.

Now I realize that these effects are significant enough, especially in the case of high Z elements. So indeed we should be implementing these new Z-bar calculations into our quantitative software. This has been done in Probe for EPMA for the MAN curves, as shown in several posts above.

As for the Z fraction elastic scattering Z-bar calculations, I am working with one of our colleagues on perhaps modifying the Pouchou & Pichoir backscatter loss equations, and hopefully we'll have something to show later this year.  It turns out that everyone in the past simply assumed that BSE loss scales with mass!  How silly of us...  :D

But in the meantime you can read the abstract on elastic scattering that I will be presenting at M&M next month in Portland which is attached to this post:

https://probesoftware.com/smf/index.php?topic=1111.msg8249#msg8249

The thing that amazes me is that both continuum production *and* BSE loss are well modeled by applying the Z^0.7 Z fraction average atomic number equations. Two completely different physical processes...
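
For anyone who wants to experiment with this, below is a minimal Python sketch of the two averaging schemes. The exact weighting form (mass fractions weighted by Z^0.7 and renormalized) is my reading of the Z fraction method described above, so treat it as an illustration rather than the canonical implementation:

```python
# Sketch: mass-fraction vs. Z^0.7 "Z fraction" average atomic number (Z-bar).
# Assumed weighting: f_i = C_i * Z_i^x / sum_j(C_j * Z_j^x), with x ~ 0.7.

def zbar_mass(elements):
    """Traditional mass-fraction average: Z-bar = sum(C_i * Z_i)."""
    return sum(c * z for c, z in elements)

def zbar_zfraction(elements, x=0.7):
    """Z-fraction average: weight each element by C_i * Z_i^x, renormalize."""
    weights = [c * z**x for c, z in elements]
    total = sum(weights)
    return sum((w / total) * z for w, (_, z) in zip(weights, elements))

# (mass fraction, Z) for galena (PbS), nominal stoichiometric fractions
galena = [(0.8660, 82), (0.1340, 16)]

print(f"PbS mass-averaged Z-bar:    {zbar_mass(galena):.1f}")       # ~73.2
print(f"PbS Z^0.7 Z-fraction Z-bar: {zbar_zfraction(galena):.1f}")  # ~78.9
```

Note how large the difference becomes for a high Z compound like galena; that is exactly where the two averaging schemes diverge in the MAN fits.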
« Last Edit: July 16, 2019, 03:55:09 PM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: MAN fit - "Fluorescent" standards?
« Reply #17 on: August 05, 2019, 08:02:10 AM »
Yesterday at the M&M social, Peter Statham reminded me of a paper he sent me in 2016, in which he independently derived a fit for continuum production with Z and found a Z exponent of 0.75, which is quite close to 0.70.

I attach his 2016 paper and my original 2002 papers (BSE and Continuum) below (please login to see attachments). In another cute coincidence of science, please see fig. 3 in both papers!

I also include this screenshot of an Excel spreadsheet summarizing the Z exponents for Ben Wade's MAN fits to continuum data:



If you're at M&M and see me, please feel free to ask me what the color coding represents, though you may indeed figure it out before I explain.
« Last Edit: August 09, 2019, 09:11:36 AM by Probeman »
The only stupid question is the one not asked!

Brian Joy

  • Professor
  • ****
  • Posts: 296
Re: mass vs. Z averaging
« Reply #18 on: September 18, 2021, 07:03:41 PM »
Perhaps this has been posted elsewhere, but I should point out that Stephen Reed has objected to the atomic number averaging used in place of more traditional mass averaging in the MAN continuum calculation.  In his brief comment, Reed focuses on beam electron energy loss within the target (i.e., stopping power), which accounts for continuum production in the target.  I’ve attached the relevant references, including the paper by Pouchou and Pichoir from “Electron Probe Quantitation,” aka “the green book,” as Reed mentions their Figure 4 and associated equations.  In their Figure 4 (page 35), Pouchou and Pichoir compare the results of their model for dE/dρs with those of Bethe’s model; their discussion of electron deceleration begins on page 34.
« Last Edit: September 19, 2021, 08:09:43 AM by John Donovan »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: mass vs. Z averaging
« Reply #19 on: September 18, 2021, 07:49:56 PM »
Yes, that was the controversy.  But Reed was unfortunately wrong.

We already know that the only reason mass averaging was originally utilized in the stopping power calculation is because these equations are expressed in mass normalized terms to be consistent with the absorption correction which is also mass normalized (mass absorption coefficients).

From isotope measurements (Donovan and Pingitore 2002, Donovan et al. 2003), we also know that the physics of EPMA is electrodynamics based, not mass based. The use of mass is simply a holdover from early days of chemistry, when the scale balance was the primary tool of science!   (that's only a slight exaggeration!)    :D

The better response to Reed's objections is found in the attachment below.  But I encourage each of you to try the different methods of calculating average atomic number and see which gives the best fit to a combination of compounds and pure elements using your own measurements of continuum intensities.  A fairly recent investigation into these effects is found in this abstract:

https://epmalab.uoregon.edu/publ/average_atomic_number_and_electron_backscattering_in_compounds.pdf
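
For anyone who wants to actually try that comparison, here is a minimal sketch of three averaging variants. The electron-fraction (C·Z/A) and Z^0.7 Z-fraction weightings are my reading of the 2002 papers, and the compositions are nominal stoichiometric mass fractions, so this is an illustration only:

```python
# Sketch: compare mass, electron-fraction, and Z^0.7 Z-fraction averaging.
# Weighting forms follow my reading of Donovan & Pingitore (2002); treat as
# illustrative, not as the canonical Probe for EPMA implementation.

compounds = {
    # name: [(mass fraction C, atomic number Z, atomic weight A), ...]
    "SiO2": [(0.4674, 14, 28.086), (0.5326, 8, 15.999)],
    "PbS":  [(0.8660, 82, 207.20), (0.1340, 16, 32.060)],
}

methods = {
    "mass":     lambda c, z, a: c,           # traditional C_i weighting
    "electron": lambda c, z, a: c * z / a,   # electron fraction: C*Z/A
    "Z^0.7":    lambda c, z, a: c * z**0.7,  # Z fraction with exponent 0.7
}

def zbar(comp, weight):
    """Weighted-average atomic number with the given weighting function."""
    w = [weight(c, z, a) for c, z, a in comp]
    return sum((wi / sum(w)) * z for wi, (c, z, a) in zip(w, comp))

for name, comp in compounds.items():
    row = "  ".join(f"{m}: {zbar(comp, f):5.2f}" for m, f in methods.items())
    print(f"{name:4s} -> {row}")
```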

By the way, Aurelien Moy even more recently revisited this question and confirmed our own measurements using Monte Carlo modeling from Penepma:

https://www.cambridge.org/core/journals/microscopy-and-microanalysis/article/universal-mean-atomic-number-curves-for-epma-calculated-by-monte-carlo-simulations/6F1C63250D980846ED0A765B10C504DE
« Last Edit: April 22, 2023, 09:09:11 AM by John Donovan »
The only stupid question is the one not asked!

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3277
  • Other duties as assigned...
    • Probe Software
Re: mass vs. Z averaging
« Reply #20 on: September 18, 2021, 09:20:37 PM »
Quote from: Brian Joy
In their Figure 4 (page 35), Pouchou and Pichoir compare the results of their model for dE/dρs with those of Bethe’s model; their discussion of electron deceleration begins on page 34.

This reminds me of a story John Armstrong related to me many years ago when he happened to meet Hans Bethe at a conference.

The story goes that John introduced himself to Hans telling him that their entire field of EPMA was based on his original equations for electron energy loss. Apparently there was a brief silence at this statement, after which Hans exclaimed "But that was only accurate for hydrogen!".

 ;D
« Last Edit: September 19, 2021, 08:10:03 AM by John Donovan »
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3277
  • Other duties as assigned...
    • Probe Software
Re: mass vs. Z averaging
« Reply #21 on: September 19, 2021, 08:17:46 AM »
I moved this discussion to this topic since it more directly deals with the issue of mass vs. electron or Z fraction averaging for continuum intensities.

I also found the topic that Ben Buse started a while ago looking at the averaging issue for backscattered electrons:

https://probesoftware.com/smf/index.php?topic=1111.0

Basically he found the same problems with mass fraction averaging that we did back in the 2000s.

On a related note, I've attached below (please login to see attachments) some measurements from 2001 of characteristic emission intensities from enriched stable isotopes of Ni, Cu and Mo, compared to natural abundance materials. These were actually the first measurements we had made on isotopes in the EPMA, but for some reason I had never published them.

Again as was the case for continuum and backscatter measurements, we found no significant statistical differences in characteristic emission intensities between the enriched stable isotopes and the natural abundance materials.
« Last Edit: September 19, 2021, 09:06:35 AM by John Donovan »
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Brian Joy

  • Professor
  • ****
  • Posts: 296
Re: mass vs. Z averaging
« Reply #22 on: September 19, 2021, 10:29:03 AM »
In calculation of the mean atomic number, how do you justify the use of an arbitrary fractional exponent?  What does it actually mean?  In your 2002 paper with Nicholas Pingitore, it appears without explanation.  If you are trying to construct a model that’s physically more realistic, then what, physically, does the fractional exponent represent?  Why does it appear to produce an improvement in the fit?

In that same paper, in your plots on p. 434 (Fig. 3), it looks like the curves represent 2nd-degree polynomials.  Is there a physical explanation for why continuum intensity should vary with mean atomic number or mass in such a manner?  This approach seems not to work for some high mean Z compounds (like cheralite and galena).  And why is zircon problematic?

Also, like Reed points out in his earlier comment (from 2000), the atomic number averaging only produces a marginal apparent improvement over mass averaging in the plots shown in Fig. 2 of Pingitore et al. (1999).  This also appears to be true in the plots presented in Fig. 3 of your 2002 paper.

Why not use a model such as PAP to predict continuum intensity relative to that of an analyzed reference material that produces only continuum radiation at the wavelength of the X-ray line of interest?
« Last Edit: September 27, 2021, 08:46:05 AM by John Donovan »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Brian Joy

  • Professor
  • ****
  • Posts: 296
Re: mass vs. Z averaging
« Reply #23 on: September 19, 2021, 12:48:14 PM »
In summary, what I'm saying is this:  You expend some effort in creating a new model that you say has a more physically realistic basis, but then, regardless of Reed's criticisms, you render that physical basis null and void by introducing an unexplained, empirical fractional exponent and polynomial fit.  Even the relatively complex PAP model is still semi-empirical, noting that two parabolas are used to model φ(ρz) rather than a seemingly more appropriate "surface-centered Gaussian" model.

P.S.  I'm not trying to be a jerk, just devil's advocate.  In particular, the outliers could contain a lot of useful information.  Why are they outliers?
« Last Edit: September 27, 2021, 08:46:14 AM by John Donovan »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3277
  • Other duties as assigned...
    • Probe Software
Re: mass vs. Z averaging
« Reply #24 on: September 24, 2021, 04:42:32 PM »
Quote from: Brian Joy
In calculation of the mean atomic number, how do you justify the use of an arbitrary fractional exponent?  What does it actually mean?  In your 2002 paper with Nicholas Pingitore, it appears without explanation.  If you are trying to construct a model that’s physically more realistic, then what, physically, does the fractional exponent represent?  Why does it appear to produce an improvement in the fit?

I'm sorry, I just noticed this post.  Been swamped with other work lately!

The exponent is simply tuned to the data (much like the PAP models themselves!) to provide the best fit. As Aurelien Moy points out in his recent paper, the best fit exponent seems to vary slightly with x-ray energy.

As to why a fractional exponent: it seems to be related to a geometric screening effect of the distribution of coulombic charge, e.g., a Yukawa potential, which would imply a Z^0.66 response. We are currently working on this idea...

In the case of the fractional Z exponent for backscatter production, we explain in some of our publications that it seems to relate to the decrease in backscatter production at higher average Z, due to the well-known screening of the nuclear charge by the increasing number of inner orbital electrons. Essentially another geometric screening effect.

Quote from: Brian Joy
In that same paper, in your plots on p. 434 (Fig. 3), it looks like the curves represent 2nd-degree polynomials.  Is there a physical explanation for why continuum intensity should vary with mean atomic number or mass in such a manner?  This approach seems not to work for some high mean Z compounds (like cheralite and galena).  And why is zircon problematic?

I agree that outliers can be interesting, and I welcome any insight into these, though I will note that the Monte Carlo models do not show such outliers, so I suspect they are simply poor measurements; certainly worth keeping an eye on, though.

But keep in mind that the original effort was based solely on the fact that backscatter, continuum, and for that matter characteristic emissions/productions are overwhelmingly electrodynamic in nature. That is already known from physics, and confirmed by the isotope measurements.

Atomic mass is at best only a rough proxy for these electrodynamic effects, so why not simply exclude mass, since we already know mass has essentially no effect on these productions?  In fact, because A/Z generally increases as a function of Z (e.g., A/Z ≈ 2.0 for Si but ≈ 2.5 for Pb), merely for reasons of nuclear stability, including neutrons in these electrodynamic calculations introduces a mass bias. As my friends analyzing interstellar dust would say: atomic number is universal, but atomic mass is local.

Quote from: Brian Joy
In summary, what I'm saying is this:  You expend some effort in creating a new model that you say has a more physically realistic basis, but then, regardless of Reed's criticisms, you render that physical basis null and void by introducing an unexplained, empirical fractional exponent and polynomial fit.  Even the relatively complex PAP model is still semi-empirical, noting that two parabolas are used to model φ(ρz) rather than a seemingly more appropriate "surface-centered Gaussian" model.

We are not committed to the polynomial fit in any way. It just happens that earlier continuum models utilized a straight line fit, and the polynomial fit seems to represent the data better. If you can suggest a better method for fitting continuum production as a function of average Z, we can certainly try that as well.
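
To make that tuning procedure concrete, here is a hedged sketch of how one might scan the Z-fraction exponent, fit a 2nd-degree polynomial of continuum intensity against Z-bar at each value, and keep the exponent with the smallest residuals. The intensities below are invented placeholders, not real measurements:

```python
# Sketch: tune the Z-fraction exponent x by the quality of a 2nd-degree
# polynomial fit of continuum intensity vs. Z-bar(x). Intensities are
# invented placeholders, NOT real measurements.
import numpy as np

# Each entry: ([(mass fraction, Z), ...], measured continuum intensity)
standards = [
    ([(1.0, 14)], 10.5),                    # Si metal
    ([(0.4674, 14), (0.5326, 8)], 8.1),     # SiO2
    ([(0.5995, 22), (0.4005, 8)], 15.9),    # TiO2
    ([(1.0, 26)], 20.3),                    # Fe metal
    ([(0.8660, 82), (0.1340, 16)], 55.0),   # PbS
]

def zbar(comp, x):
    """Z-fraction averaged Z-bar with exponent x."""
    c = np.array([ci for ci, _ in comp])
    z = np.array([zi for _, zi in comp])
    w = c * z**x
    return float(np.sum(w / w.sum() * z))

best = None
for x in np.arange(0.50, 1.001, 0.05):
    zb = np.array([zbar(comp, x) for comp, _ in standards])
    iy = np.array([i for _, i in standards])
    coeffs = np.polyfit(zb, iy, 2)                 # 2nd-degree polynomial
    rms = float(np.sqrt(np.mean((np.polyval(coeffs, zb) - iy) ** 2)))
    if best is None or rms < best[1]:
        best = (x, rms)

print(f"best-fit exponent x = {best[0]:.2f} (rms residual {best[1]:.3g})")
```

With real measured intensities (and more standards), the same loop would let anyone check for themselves where the best-fit exponent lands.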

Quote from: Brian Joy
Also, like Reed points out in his earlier comment (from 2000), the atomic number averaging only produces a marginal apparent improvement over mass averaging in the plots shown in Fig. 2 of Pingitore et al. (1999).  This also appears to be true in the plots presented in Fig. 3 of your 2002 paper.

I think we all prefer scientific models that are more physically realistic *and* produce an improvement in our predictions of measurements. Even if the effect is relatively small, it's still an improvement.  Shouldn't we welcome improvements in our physical models?

Quote from: Brian Joy
Why not use a model such as PAP to predict continuum intensity relative to that of an analyzed reference material that produces only continuum radiation at the wavelength of the X-ray line of interest?

I think that is an interesting idea. I look forward to seeing your results from this. Just remember, mass doesn't affect any of the emissions/production we observe in the microprobe. Of course if we utilized a 1 MeV electron beam, that would be another story!

I do have to add one comment. Whenever I discuss this atomic mass versus atomic number issue with "card carrying" physicists, they all respond the same way: "Well duh, it's all electrodynamics!".  But for some reason in the field of EPMA there is this obsession with atomic mass. I suspect it's just historical inertia as chemists are used to reporting results in mass fractions, because "wet chemistry."

That and also that we started out using some mass normalized expressions (i.e., mass absorption coefficients), even though we know that density is unrelated to EPMA physics, except for considerations of non-infinitely thick specimen geometry of course.
« Last Edit: September 27, 2021, 08:46:23 AM by John Donovan »
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: mass vs. Z averaging
« Reply #25 on: September 27, 2021, 02:51:46 AM »
I also have one question about MAN. I use the PHA in integral mode, as differential mode in most cases causes more problems than good (in my case). So the question is: does your MAN method take into account the 2nd, 3rd, 4th, etc. order bremsstrahlung (which is additionally complicated by the corresponding orders of the Ar edge, in the case of P10 gas)?
« Last Edit: September 27, 2021, 08:46:33 AM by John Donovan »

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3277
  • Other duties as assigned...
    • Probe Software
Re: mass vs. Z averaging
« Reply #26 on: September 27, 2021, 08:59:16 AM »
That is the neat thing about calibration curves: they handle all the physics we don't yet understand!   :D

Of course the MAN background correction is really a semi-empirical calibration curve, because it is based on the physics of Kramers' law, which assumes that continuum production is primarily an effect of average atomic number; hence the discussion regarding the basis on which average Z should be calculated: mass fraction vs. Z fraction.

And we apply a continuum absorption correction to the intensities measured on each standard material (and on our unknown), though we find that the modern phi-rho-z absorption corrections seem to do a better job than the continuum-specific absorption corrections from decades ago.

Remember, with the MAN correction we are modeling the continuum at a *single* continuum energy, corresponding to the emission energy of the element we are observing. Also we construct a separate calibration curve for each element/x-ray/spectrometer/crystal combination which automatically handles these instrument dependent effects.

The good news is that these MAN calibrations are very easy to do and are very stable over time, so one spends a few minutes once in a while acquiring these MAN curves (which could simply be measured on the primary standards one is already utilizing, as long as they don't contain the particular element of interest, e.g., pure MgO and TiO2 for Al Ka), and then one obtains better precision in about half the acquisition time. The MAN method is particularly nice when performing quantitative x-ray mapping.

https://pubs.geoscienceworld.org/msa/ammin/article-abstract/101/8/1839/264218/A-new-EPMA-method-for-fast-trace-element-analysis
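
To sketch how such a curve gets applied: evaluate the fitted MAN curve at the unknown's Z-bar to predict the background at the analytical line energy, then subtract it from the on-peak measurement. The coefficients and count rates below are made-up placeholders, and the function is illustrative, not Probe for EPMA internals:

```python
# Sketch: applying a MAN calibration curve to an unknown. In practice the
# curve is fit to absorption-corrected continuum intensities measured on
# standards free of the element of interest; numbers here are placeholders.

def man_background(zbar, coeffs):
    """Evaluate the fitted MAN curve (2nd-degree polynomial) at Z-bar."""
    a, b, c = coeffs
    return a + b * zbar + c * zbar**2

# Placeholder fit from standards, e.g. MgO and TiO2 for Al Ka (cps per nA):
man_coeffs = (0.15, 0.012, 0.0004)

zbar_unknown = 12.9   # e.g. Z^0.7 Z-fraction Z-bar of the unknown
peak_cps = 3.95       # measured on-peak count rate on the unknown

bgd_cps = man_background(zbar_unknown, man_coeffs)
net_cps = peak_cps - bgd_cps  # net intensity going into the matrix correction
print(f"predicted background: {bgd_cps:.3f} cps, net: {net_cps:.3f} cps")
```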
« Last Edit: September 27, 2021, 05:30:09 PM by John Donovan »
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: MAN fit - "Fluorescent" standards?
« Reply #27 on: May 07, 2024, 10:17:57 AM »
As discussed above, it's interesting that the Yukawa potential model yields a Z^0.666 fit.  It's also interesting how often this number 0.666 (or ~2/3) shows up in physics models.

Probably just a coincidence (as my co-author Andrew Ducharme points out, simply because 2s and 3s are common numbers!), but here's another weird physics 2/3 coincidence:

https://en.wikipedia.org/wiki/Koide_formula
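
For reference, the Koide relation for the three charged lepton masses is Q = (m_e + m_μ + m_τ) / (√m_e + √m_μ + √m_τ)², and the measured masses give Q remarkably close to exactly 2/3.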
« Last Edit: May 07, 2024, 11:49:41 AM by Probeman »
The only stupid question is the one not asked!