Author Topic: Nasty Boundary Fluorescence Analytical Situations  (Read 58033 times)

Probeman

  • Emeritus
  • *****
  • Posts: 2856
  • Never sleeps...
    • John Donovan
Re: Nasty Boundary Fluorescence Analytical Situations
« Reply #90 on: June 04, 2024, 09:26:19 AM »
Note that if you have Probe for EPMA you can perform a boundary fluorescence correction directly on your unknown data from the Analyze! Window:

https://probesoftware.com/smf/index.php?topic=1545.0

Note that the SF boundary correction does not yet incorporate a Bragg defocus calibration, so at the moment it is a worst-case scenario correction, which depends on the orientation of the boundary relative to the Bragg crystal on the spectrometer being utilized.

But I believe a Bragg defocus calibration is currently being developed…

Until this Bragg defocus correction is implemented, if one wants to correct for secondary boundary fluorescence (as opposed to FIBing out the sample and mounting the individual grains to avoid fluorescence from nearby phases), you should orient your sample on the stage so that the spectrometer making the trace element measurement is aligned with the boundary. That is, the phase boundary adjacent to which you are making measurements should point towards the spectrometer making the trace measurement.  Then, as one moves the stage away from the adjacent boundary, there are little to no Bragg defocus effects.

And because the Bragg defocus effects are minimized, the boundary fluorescence model calculated from PENFLUOR/FANAL will not over-correct the k-ratios.

Of course the opposite approach can also be attempted, whereby one orients the specimen on the sample stage so that the adjacent boundary points directly *away* (at 90 degrees) from the spectrometer making the trace measurement.  One then limits the trace measurements to points at least 50 or 100 um away from the adjacent boundary.  Then one might hope that the WDS Bragg defocus reduces the detection of the boundary fluorescence emissions and no boundary fluorescence correction is necessary.
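The two orientation strategies above come down to the angle between the boundary trace and the spectrometer azimuth: roughly 0 degrees for the aligned strategy, roughly 90 degrees for the defocus-suppression strategy. A minimal sketch (the azimuth values and the helper function are my own illustration, not part of Probe for EPMA):

```python
def boundary_alignment(boundary_azimuth_deg, spectrometer_azimuth_deg):
    """Return the acute angle (degrees) between the phase-boundary trace
    and the azimuth of the spectrometer measuring the trace element.
    Both directions are treated as lines, so the result is 0-90 degrees."""
    diff = abs(boundary_azimuth_deg - spectrometer_azimuth_deg) % 180.0
    return min(diff, 180.0 - diff)

# Boundary trace at 30 deg, spectrometer at 30 deg -> aligned (0 deg):
# stepping away from the boundary introduces little Bragg defocus,
# so the PENFLUOR/FANAL model applies without over-correction.
print(boundary_alignment(30.0, 30.0))   # 0.0

# Boundary at 30 deg, spectrometer at 120 deg -> perpendicular (90 deg):
# the opposite strategy, where defocus suppresses the fluorescence signal.
print(boundary_alignment(30.0, 120.0))  # 90.0
```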

Of course with EDS there are no Bragg defocus effects, so the model from PENFLUOR/FANAL should apply as is for elements measured by EDS.
The only stupid question is the one not asked!

Probeman

Re: Nasty Boundary Fluorescence Analytical Situations
« Reply #91 on: June 04, 2024, 09:36:23 AM »
I went through the SF correction and have successfully modeled my own standards file (so many hours!)

So you have run your standards to create .PAR files for your standard compositions using Standard.exe?  Nice!


I have a question regarding the .PAR files used in the FANAL program. What parameters exactly does Standard simulate for a specific material, and what information is in the .PAR file? And why does it take so long to simulate those parameters?

The Standard application merely provides the GUI for the PENFLUOR/FANAL FORTRAN programs.  The PAR calculations are performed in PENFLUOR (the left side of the GUI in Standard), while the boundary fluorescence k-ratios are extracted by FANAL (the right side of the GUI in Standard). 



Note that if the same material is specified for both the beam incident and boundary phases, the k-ratios calculated are for a bulk sample:

https://probesoftware.com/smf/index.php?topic=152.0
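A quick way to see the bulk-sample behavior described above: when the beam-incident and boundary phases are the same material, the extracted k-ratio should not vary with distance from the "boundary". A sketch of that sanity check (the distance/k-ratio values are invented for illustration, not actual FANAL output):

```python
# Hypothetical (distance in um, k-ratio) pairs, as FANAL might tabulate them.
# Values are invented for illustration only.
bulk_kratios = [(1.0, 0.0123), (10.0, 0.0123), (100.0, 0.0123)]

def is_bulk_like(pairs, tol=1e-6):
    """True if the k-ratio does not vary with boundary distance,
    i.e. the material couple behaves as a bulk sample."""
    ks = [k for _, k in pairs]
    return max(ks) - min(ks) <= tol

print(is_bulk_like(bulk_kratios))  # True
```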

I've attached the paper on the FORTRAN programs and also the source code below.  One helpful thing is to look at the output in Excel by checking the "Send To Excel" checkbox. That sends the output to the k-ratios2.dat file. Also check here:

https://probesoftware.com/smf/index.php?topic=58.msg5895#msg5895

There are some minor modifications to the FANAL code by Donovan, so I've provided both the modified and original code from Llovet.
« Last Edit: June 04, 2024, 09:39:30 AM by Probeman »

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3304
  • Other duties as assigned...
    • Probe Software
Re: Nasty Boundary Fluorescence Analytical Situations
« Reply #92 on: June 04, 2024, 10:18:11 AM »
...And why does it take a long time to simulate those parameters?

The PENFLUOR code is basically the PENELOPE/PENEPMA Monte Carlo code modified to simulate multiple keVs.  Monte Carlo can take a long time to obtain reasonable precision on a PC.  The default is 3600 seconds per keV, but one can reduce that time if precision is not important.
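The time/precision trade-off follows the usual Monte Carlo rule: statistical uncertainty scales as one over the square root of the number of simulated events, and hence of the simulation time at a fixed event rate. A rough sketch (the 0.5% figure and the helper function are illustrative assumptions, not PENFLUOR specifics):

```python
import math

def scaled_uncertainty(sigma_ref, t_ref, t_new):
    """Estimate Monte Carlo statistical uncertainty after changing the
    simulation time, using the 1/sqrt(time) scaling of counting statistics
    (assumes a constant event rate)."""
    return sigma_ref * math.sqrt(t_ref / t_new)

# If the default 3600 s per keV gave, say, 0.5% relative uncertainty
# (an invented number), cutting the run to 900 s roughly doubles it:
print(scaled_uncertainty(0.5, 3600.0, 900.0))  # 1.0
```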

The advantage of PENFLUOR/FANAL is that once the PAR file has been calculated, one can extract k-ratios for any keV, element/x-ray line, and distance from the boundary in seconds.  Previously we had to run PENEPMA for hours for each boundary distance.
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"