Author Topic: New method for calibration of dead times (and picoammeter)  (Read 23593 times)

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #75 on: August 25, 2022, 12:49:26 PM »
OK, let's talk about sensitivity and the constant k-ratio method!

We've already mentioned that one of the best aspects of the constant k-ratio method is that it depends on a zero slope regression of k-ratios plotted on the y-axis.  We can further appreciate that the low count rate k-ratios are the least affected by dead time effects, so those k-ratios serve as our "fulcrum" when adjusting the dead time constant. Remember, the exact value of these low count rate k-ratios is not important, only that they should be constant over a range of count rates!  So by plotting these k-ratios against a zero slope regression (a horizontal line) we can arbitrarily expand the y-axis to examine our intensity data with excellent precision.
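For anyone who wants to try this on their own data, here is a minimal sketch (Python, with made-up numbers) of what the zero slope test amounts to: regress the measured k-ratios against count rate and check that the fitted slope is indistinguishable from zero.

import numpy as np

# Minimal sketch of the constant k-ratio test: the k-ratios should form a
# horizontal line, i.e. the fitted slope should be consistent with zero.
# Count rates and k-ratios below are made-up example values.
cps = np.array([28e3, 55e3, 110e3, 165e3, 220e3, 300e3])    # observed count rates
kr  = np.array([0.556, 0.555, 0.556, 0.554, 0.555, 0.556])  # measured k-ratios

slope, intercept = np.polyfit(cps, kr, 1)
scatter = (kr - (slope * cps + intercept)).std()
print(f"slope = {slope:.2e} per cps, intercept (low count rate k-ratio) = {intercept:.4f}, scatter = {scatter:.4f}")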

Now let's go back and look at a traditional dead time calibration plot here (using data from Anette von der Handt) where we have plotted on-peak intensities on the y-axis and beam current on the x-axis:



I've plotted multiple points per beam current so we can get a feel for the sensitivity of the plots.  Note that the lower count rates show more scatter than the high count rates; the scatter you are seeing is simply natural counting statistics, and it will be expanded in the subsequent plots. Pay particular attention to the range of the y-axis. In this plot we are seeing a variance of around 45%.

The problem with the traditional method is not only that we are fitting a diagonal line, which doesn't reveal much sensitivity, but also that we are fitting a linear model to the data, and the linear model only works when dead time effects are minimal.  It's as though we were trying to measure trace elements at low beam currents!  Instead we should attempt to characterize our dead time effects under conditions that produce significant dead time effects. And that means at high count rates!   :)
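For comparison, here is a minimal sketch (Python, made-up numbers) of one common linearization of the traditional approach: assume the true count rate is proportional to beam current and that the observed rate follows the standard expression N_obs = N_true(1 - tau*N_obs); then N_obs/i plotted against N_obs is a straight line and tau = -slope/intercept. Note how the dead time comes out of a fit dominated by the high count rate points, with no way to expand the y-axis the way the k-ratio plots below allow.

import numpy as np

# Traditional (linear) dead time calibration sketch, assuming the true count
# rate is proportional to beam current and N_obs = N_true * (1 - tau * N_obs).
# Then N_obs/i = b - b*tau*N_obs, so a straight line fit gives tau.
# Beam currents and count rates are illustrative values only.
i_nA  = np.array([10., 20., 40., 60., 80., 100.])
n_obs = np.array([27016., 52201., 97765., 137886., 173482., 205279.])  # cps

y = n_obs / i_nA                 # cps per nA
slope, intercept = np.polyfit(n_obs, y, 1)
tau = -slope / intercept         # dead time in seconds
print(f"fitted dead time ~ {tau * 1e6:.2f} usec")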

All non-zero slope dead time calibration methods will suffer from this lack of sensitivity, though the Heinrich method (like the constant k-ratio method) is at least immune to picoammeter linearity problems.  In fact, because the Heinrich method is also based on a ratio (of the alpha and beta lines), if we simply plotted those Ka/Kb ratios as a function of beam current (or count rate) and fit the data to a non-linear model that handles multiple photon coincidence, it would work rather well!

But I feel the constant k-ratio method is more intuitive, and it is easier to plot our k-ratios against a zero slope regression. And here is what we see when we do that with the same intensity data as above:



Note first of all that merely by plotting our intensities as k-ratios (without any dead time correction at all!), our variance has decreased from 54% to 17%!  Again note the y-axis range and how the multiple data points have expanded, showing greater detail. And keep in mind that the subsequent k-ratio plots will always show the low count rate k-ratios right around 0.56, decreasing slightly to 0.55 as we start applying a dead time correction, because with this PETL spectrometer we see serious count rates even at low beam currents (~28K cps at 10 nA on Ti metal!).

Now let's apply the traditional linear dead time expression to these same k-ratios, using the 1.32 usec dead time constant determined by the JEOL engineer:



Our variance is now only 5.4%!  So now we can really see the details in our k-ratio plots as we further approach a zero slope regression. We can also see that we've increased our constant k-ratio range slightly (up to ~80k cps), but above that things start to fall apart.

So now we apply the logarithmic dead time correction (again using the same dead time constant of 1.32 usec determined by the JEOL engineer using the linear assumption):



And now we see that our y-axis variance is only 1.1%, but we also notice that we are very slightly over-correcting our k-ratios using the logarithmic expression. Why is that?  It's because even at these relatively moderate count rates, we are still observing some multiple photon coincidences, which the linear dead time calibration model over-fits to obtain the 1.32 usec value.  Remember, the dead time constant is a "parametric constant": its exact value depends on the mathematical model utilized.

So by simply reducing the dead time constant from 1.32 to 1.29 usec (a difference of only 0.03 usec!), we can properly deal with all (single and multiple) photon coincidence and we obtain a plot such as this:



Our variance is now only 0.5% and our k-ratios are constant from zero to over 300k cps!  And just look at the sensitivity!
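To put some numbers on the difference between the two corrections, here is a hedged sketch (Python). The linear form is the standard N_true = N_obs/(1 - tau*N_obs); the logarithmic form is written here as the infinite-series generalization of the multi-term expansion, N_true = N_obs/(1 + ln(1 - tau*N_obs)). Treat that second form as a sketch of the idea rather than a transcription of the published expression, and the 1.32 usec (linear) and 1.29 usec (log) constants are the values discussed above.

import numpy as np

# Dead time correction sketch.  The linear expression is the standard
#   N_true = N_obs / (1 - tau * N_obs).
# The logarithmic form below is written as the infinite-series generalization
# of the multi-term expansion,
#   N_true = N_obs / (1 + ln(1 - tau * N_obs)),
# which is a sketch of the idea (verify against the published expression
# before relying on it for real corrections).
def correct_linear(n_obs, tau):
    return n_obs / (1.0 - tau * n_obs)

def correct_log(n_obs, tau):
    return n_obs / (1.0 + np.log(1.0 - tau * n_obs))

for cps in (28e3, 80e3, 150e3, 300e3):
    lin = correct_linear(cps, 1.32e-6)   # linear-model constant
    log = correct_log(cps, 1.29e-6)      # slightly smaller constant for the log form
    print(f"{cps/1e3:5.0f} kcps observed -> linear: {lin/1e3:6.1f} kcps, log: {log/1e3:6.1f} kcps")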
« Last Edit: August 25, 2022, 02:53:13 PM by Probeman »
The only stupid question is the one not asked!

Nicholas Ritchie

  • Professor
  • ****
  • Posts: 141
    • NIST DTSA-II
Re: New method for calibration of dead times (and picoammeter)
« Reply #76 on: August 25, 2022, 01:55:56 PM »
Pretty impressive!
"Do what you can, with what you have, where you are"
  - Teddy Roosevelt

jlmaner87

  • Post Doc
  • ***
  • Posts: 10
Re: New method for calibration of dead times (and picoammeter)
« Reply #77 on: August 25, 2022, 07:47:47 PM »
Incredible work John (et al)! I've tried these new expressions on my new SX5 Tactics and am blown away by the results. I am still plotting/processing the data, but I will share it soon.

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #78 on: August 26, 2022, 09:03:16 AM »
Pretty impressive!

Thank you, Nicholas.  It means a lot to me and the team.
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #79 on: August 26, 2022, 09:06:54 AM »
Incredible work John (et al)! I've tried these new expressions on my new SX5 Tactics and am blown away by the results. I am still plotting/processing the data, but I will share it soon.

Much appreciated!

Great work by everyone involved.  John Fournelle and I came up with the constant k-ratio concept, and Aurelien Moy, Zack Gainsforth and I came up with the multi-term and logarithmic expressions. Meanwhile, Anette has provided some amazing data from her new JEOL instrument (wait until you see her "terrifying" count rate measurements!).

We could use some more Cameca data, as my instrument has a severe "glitch" around 40 nA. Do you see similar weirdness around 40 nA on your new instrument?
« Last Edit: August 26, 2022, 01:22:38 PM by Probeman »
The only stupid question is the one not asked!

jlmaner87

  • Post Doc
  • ***
  • Posts: 10
Re: New method for calibration of dead times (and picoammeter)
« Reply #80 on: August 27, 2022, 07:00:55 AM »
I actually skipped 40 nA. I performed k-ratio measurements at 4, 10, 20, 50, 100, 150, 200, and 250 nA. I do see a drop in k-ratio between 20 and 50 nA. The k-ratio values produce (mostly) horizontal lines from 4 to 20 nA, then they decrease (substantially) and form another (mostly) horizontal line from 50 to 250 nA. As soon as I can access the lab computer again, I'll send the MDB file to you.

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #81 on: August 27, 2022, 08:47:39 AM »
The Cameca instruments switch picoammeter (and condenser?) ranges around 40 to 50 nA, so that could be what you are seeing.  I'm sure SEM Geologist can discuss these aspects of the Cameca instruments.

I'll also share some of my Cameca data; I've mostly been showing Anette's JEOL data recently because it presents a much clearer picture.
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #82 on: August 27, 2022, 09:11:16 AM »
Here's a different spectrometer on Anette's instrument (spc 5, LIFL) that shows how the sensitivity of the constant k-ratio method can be helpful even at low count rates:



First note that at these quite low count rates (compared to spc 3, PETL), the k-ratios are essentially *identical* for the traditional and log expressions (even when using exactly the same DT constants!), exactly as expected.

Second, note the "glitch" in the k-ratios from 50 to 60 nA.  I don't know what is causing this but we can see that the constant k-ratio method, with its ability to zoom in on the y-axis, allows us to see these sorts of instrumental artifacts more clearly.

Because the k-ratios acquired on other spectrometers at the same time do not show this "glitch", I suspect that this artifact is specific to this spectrometer.  More k-ratio acquisitions will help us to determine the source.

Next I will start sharing some of the "terrifying" intensities from Anette's TAPL crystal.    ;D
« Last Edit: August 27, 2022, 08:57:16 PM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: New method for calibration of dead times (and picoammeter)
« Reply #83 on: August 29, 2022, 07:12:56 AM »
The Cameca instruments switch picoammeter (and condenser?) ranges around 40 to 50 nA, so that could be what you are seeing.  I'm sure SEM Geologist can discuss these aspects of the Cameca instruments.

I'll also share some of my Cameca data; I've mostly been showing Anette's JEOL data recently because it presents a much clearer picture.

Oh Yeah I could :D

Well, it depends on the machine (whether we have a C1 + C2 W/LaB6 column, or a FEG column with C2 only and no C1). In the case of the FEG it is supposed to be smooth over the 1-600 nA range; sometimes a crossover can be observed somewhere between 500-1000 nA when the FEG parameters are set wrong, or when the tip is very old and the standard procedure is no longer relevant (i.e. our FEG).

But in the case of the classical C1 + C2 column, the crossover point depends on the cleanliness of the column (its apertures), since the beam crossover will drift depending on how contaminated the apertures are. Our SX100 column went uncleaned for 7 years, and there was some funkiness in the 40-50 nA range. After cleaning the column, the crossover is no longer at that spot but at much higher currents (~500 nA). What I suspect, after seeing the Faraday cup during the column cleaning, is that quite possibly not the whole beam gets into the cup, but in some cases only part of it (something like the beam defocusing onto the Faraday cup hole). So the picoammeter itself could be physically completely OK, yet the beam measurement with the Faraday cup inside the column could be missing part of the beam on some ranges (especially at lower currents). That is where this drifting beam crossover could produce the observed discrepancies.

On the other hand, the picoammeter circuit is subdivided into ranges: up to 0.5 nA, 0.5-5 nA, 5-50 nA, 50-500 nA, and 500 nA-10 µA(?). It is not completely clear to me how it decides which range to switch to (the column control board does not decide this itself; it only passes on the request from the main processing board). Probably there are a few measurement loops in the logic to select the most relevant range, and probably these 5*10^x nA boundaries are strict only on paper. Finally, only the 50-500 nA and 500 nA-10 µA ranges have potentiometers and can be physically re-calibrated/tuned (albeit I have never needed to do that).

Why only those ranges? The job of the picoammeter is really simple: it needs to amplify the received current into a voltage range the ADC works with. It is a single op-amp, but with different feedback resistors for the different ranges. For the highest currents little amplification is needed, so the feedback resistors are in the kiloohm range, whereas the lowest currents require high amplification and thus very high value (hundreds of Mohm) resistors. A kilo-ohm feedback resistance can be trimmed with a serially connected potentiometer, whereas for hundreds-of-Mohm resistors no such potentiometer is available (or rather it is not financially feasible).

Anyway, the analog voltage from this conversion is finally measured with a shared 15-bit ADC (+1 bit for sign), the same ADC used for all the other column parameters such as high voltage, emission, etc., and the final interpretation of the converted digital value is buried somewhere in the digital logic (firmware). That most probably means the main VME processor board (Motorola 68020 on older machines, PowerQuiccII on newer ones), as the column control board contains no processing chip (there are some PAL devices on the board for VME<->local data control, but they are far too limited for any interpretative capability). And here it gets a bit tricky: the firmware is loaded during boot, and AFAIK there is no mechanism for altering the hex files uploaded at boot time. I also know of no commands in the interpreter to calibrate the Faraday cup measurements (albeit there are many cryptic special functions not exposed in the user manuals). I guess such a conversion table could exist in the Cameca SX Shared folder, in some binary machine-state files, but how to change the conversion is still a mystery to me.
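To make the arithmetic concrete, here is a rough sketch (Python) of how such a segmented transimpedance stage feeding a shared 15-bit + sign ADC behaves across the range boundaries. The full-scale voltage and resistor values are illustrative round numbers only, not actual Cameca values.

# Illustrative sketch of a segmented picoammeter: one transimpedance gain
# (V = I * R_feedback) per range, all read by a shared 15-bit + sign ADC.
# Full-scale voltage and resistor values are hypothetical round numbers.
FULL_SCALE_V = 10.0
ADC_COUNTS = 2 ** 15                       # 15 bits of magnitude

RANGES = [                                 # (upper limit in A, feedback R in ohm)
    (0.5e-9, FULL_SCALE_V / 0.5e-9),
    (5e-9,   FULL_SCALE_V / 5e-9),
    (50e-9,  FULL_SCALE_V / 50e-9),
    (500e-9, FULL_SCALE_V / 500e-9),
    (10e-6,  FULL_SCALE_V / 10e-6),
]

def read_current(i_beam):
    """Pick the first range that can hold the current; return the voltage the
    ADC sees and the current represented by one ADC count on that range."""
    for upper, r_feedback in RANGES:
        if i_beam <= upper:
            return i_beam * r_feedback, upper / ADC_COUNTS
    raise ValueError("current above the highest range")

for i in (10e-9, 45e-9, 55e-9, 200e-9):
    volts, lsb = read_current(i)
    print(f"{i * 1e9:6.1f} nA -> {volts:5.2f} V at the ADC, 1 LSB ~ {lsb * 1e12:5.1f} pA")

Note how the resolution (and any calibration error of the feedback resistor) changes abruptly at the 50 nA boundary, which is right where these 40-50 nA artifacts keep turning up.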

Oh, Probeman, you have just forced me to look closer at the hardware, and you have convinced me to start being paranoid about how volatile these beam current measurements could be. But not so fast: some time ago I tested how the EDS total input count rate holds up with increasing current (total estimated input rate on a Bruker Nano Flash SDD at the smallest aperture vs. the current measured with the Faraday cup), and it looked rather linear on both our 20 year old SX100 and our 8 year old SXFiveFE, with smooth transitions across all the picoammeter range boundaries and a sensible linearity result (not perfectly linear due to pile-ups, of course). Actually, as I am writing this I just got an idea for the ultimate approach to measuring picoammeter linearity exactly with the help of EDS (and EDS has an edge over WDS for such measurements). I will come back soon when I have new measurements and data.

Also, the picoammeter is really pretty simple in design and I see few possibilities for it to detune itself. Could those potentiometers screw themselves in or out? Could resistors crack? (I have seen that happen many times on different SX100 boards, but those are power resistors doing a lot of work.) Maybe the conversion tables were set wrongly at the moment of manufacturing, and the problem has only been caught now that this new k-ratio calibration method is being used? I just wonder where exactly that 40-50 nA discontinuity you observe is generated, and how it could be fixed...

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: New method for calibration of dead times (and picoammeter)
« Reply #84 on: August 29, 2022, 07:22:15 AM »
Second, note the "glitch" in the k-ratios from 50 to 60 nA.  I don't know what is causing this but we can see that the constant k-ratio method, with its ability to zoom in on the y-axis, allows us to see these sorts of instrumental artifacts more clearly.

I don't believe in technological miracles (especially at similar or lower prices), and I guess the JEOL picoammeter is forced to be segmented into ranges by the same electronic component availability and precision constraints as on the Cameca instruments (even a stupidly simple handheld multimeter has that kind of segmentation). Most likely it is a problem similar to the one on your SX100.

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #85 on: August 29, 2022, 08:42:55 AM »
Oh, Probeman, you have just forced me to look closer at the hardware, and you have convinced me to start being paranoid about how volatile these beam current measurements could be. But not so fast: some time ago I tested how the EDS total input count rate holds up with increasing current (total estimated input rate on a Bruker Nano Flash SDD at the smallest aperture vs. the current measured with the Faraday cup), and it looked rather linear on both our 20 year old SX100 and our 8 year old SXFiveFE, with smooth transitions across all the picoammeter range boundaries and a sensible linearity result (not perfectly linear due to pile-ups, of course). Actually, as I am writing this I just got an idea for the ultimate approach to measuring picoammeter linearity exactly with the help of EDS (and EDS has an edge over WDS for such measurements). I will come back soon when I have new measurements and data.

Here are a few examples from my instrument showing this "glitch" around 40 nA.  Our instrument engineer told me recently that he had made some adjustments to the picoammeter circuits, but I have not had time to test again.  I will try to do that as soon as I can.





Note in the first plot that the glitch occurred at 30 nA!  Note also that I skipped measurements between 30 and 55 nA in the 2nd plot to avoid this glitch!

But here's my problem: the constant k-ratio method should not be very sensitive to the actual beam current since both the primary standard and the secondary standard of the k-ratio are measured at the same beam current. 

And yet the artifact is there on many Cameca instruments.  I distinctly recall more than one Cameca engineer simply telling me to "stay away from beam currents near 40 nA".  Maybe it's some sort of beam current "drift" issue?

I also think that if one "sneaks up" on the beam current (using 2 nA increments for example) the instrument can handle setting the beam current properly.  I think Will Nachlas has done some constant k-ratio measurements like this on his SXFive instrument.
« Last Edit: August 29, 2022, 08:44:56 AM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #86 on: August 29, 2022, 11:35:28 AM »


I know I'm not the sharpest knife in the drawer, but sometimes I can stare right at something and just not see it. 
 
You will have noticed in the above plot that we see a "glitch" in the k-ratios at 30 nA; sometimes we see this "glitch" at 40 nA, or sometimes at 50 nA.  It always seemed to depend on which beam currents we measured on our Cameca instrument just before and just after the "glitch", but I could not determine the pattern.  The thing that always bothered me was that if we are indeed measuring our primary and secondary standards at the same beam current, we should be nulling out any picoammeter non-linearity, so I thought we should not be seeing any of these "glitches" in the k-ratio data.  It's one reason I switched to looking at Anette's JEOL data, which did not show any of these "glitches" in the k-ratios.

But first, a short digression on something that I believe is unique to Probe for EPMA, and which, under normal circumstances, is a very welcome feature: the standard intensity drift correction.  Now, all microanalysis software packages perform a beam normalization (or drift) correction so that intensities are reported as cps/nA. That way, one can not only correct for small changes in beam current over time, but also compare standards and unknowns (and/or elements) acquired at different beam currents, and this correction is applied equally to all element intensities in the sample.

But Probe for EPMA also performs a standard intensity drift correction, which tracks the (primary) standard intensities for *each* element over time and adjusts for any linear changes in those intensities. Basically, if one has acquired more than one set of primary standards, the program will linearly estimate the predicted (primary) standard intensity at the acquisition time of the secondary standard or unknown.

This schematic from the Probe for EPMA User Reference might help to explain this:



What this means is that if the standard intensity drift correction is on (as it is by default), and one has acquired more than one set of primary standards, the program will always look for the first primary standard acquired just *before* the specified sample, and also the first primary standard acquired *after* the specified sample.  It will then estimate what the primary standard intensity would be if the intensity drift were linear between those two primary standard acquisitions, and utilize that intensity for the construction of the sample k-ratio.
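In code terms, the interpolation amounts to something like this minimal sketch (Python, made-up numbers; the variable names are mine, not PFE internals):

# Sketch of the standard intensity drift correction: linearly interpolate the
# primary standard intensity (cps/nA) to the acquisition time of the sample,
# then build the k-ratio from the interpolated value.  Names and numbers are
# illustrative only.
def interpolated_std_intensity(t_sample, t_before, i_before, t_after, i_after):
    frac = (t_sample - t_before) / (t_after - t_before)
    return i_before + frac * (i_after - i_before)

# Primary standard drifts from 280.0 to 276.0 cps/nA over two hours;
# the sample was measured half an hour after the first standard set.
i_std = interpolated_std_intensity(0.5, 0.0, 280.0, 2.0, 276.0)
k_ratio = 205.0 / i_std        # sample intensity (cps/nA) over interpolated standard
print(f"interpolated standard: {i_std:.1f} cps/nA, k-ratio: {k_ratio:.4f}")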

This turns out to be very nice for labs with temperature changes over long runs, where the various spectrometers (and PET crystals) will change their mechanical alignments, and it is applied on an element by element basis. One simply needs to acquire the primary standards every so often, and the Probe for EPMA software will automatically take care of such standard intensity drift issues.  I can't tell you how many times I've been called by a student who came back in the morning to find their totals had somehow drifted overnight, hoping there was something they could do to fix it.  And I'd say, sure, just re-run your primary standards!  And they'd call back: everything is great now, thanks!

But if we turn off the standard intensity drift correction, the Probe for EPMA software will only utilize the primary standard acquired just *before* the secondary standard or unknown sample.  Keep that in mind, please.   So now back to our constant k-ratios.

As you saw in the plot above, I was having trouble understanding why this "glitch" in the constant k-ratios was occurring, and also why it was occurring at seemingly random nA settings, often between 30 nA and 60 nA.

So this morning I started looking more closely at this MnO/Mn k-ratio data, and the first thing I noticed was that I had (correctly) acquired the Mn metal primary standard first at a specified beam current, and then acquired the secondary MnO standard at the same specified beam current, and so on for each k-ratio set after that.  So far, so good.

But wait a minute: didn't I just say that if the standard intensity drift correction is turned on (as it is by default!), the program will automatically interpolate between the prior primary standard and the subsequent primary standard?  With the constant k-ratio data set, however, we always want to be sure that the k-ratio is constructed from two materials measured at the *same* beam current, in order to eliminate any non-linearity in the picoammeter!

So the first thing I did was turn off that darn standard intensity drift correction and then plot the k-ratios using only a single primary standard. Remember, if we utilize only a single primary standard, then we are extrapolating across the beam current measurements for all the secondary standards measured at the various beam currents, and therefore testing the linearity of the picoammeter!



And lo and behold, look at the above picoammeter non-linearity when the Cameca changes the beam current range from under 50 nA to over 50 nA. Clearly the picoammeter ranges require adjustment by our instrument engineer! 
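To spell out why this construction exposes the picoammeter while the normal constant k-ratio construction does not, here is a minimal sketch (Python, hypothetical numbers and names):

# Two ways of building the k-ratio.  With a single primary standard, both
# intensities are normalized to cps/nA, so any current mis-reporting at the
# secondary standard's beam current leaks into the k-ratio.  With a primary
# measured at the same nominal current, the current cancels out entirely.
def kratio_single_primary(unk_cps, unk_nA, std_cps_ref, std_nA_ref):
    return (unk_cps / unk_nA) / (std_cps_ref / std_nA_ref)

def kratio_same_current(unk_cps, std_cps):
    return unk_cps / std_cps

true_nA, reported_nA = 50.0, 52.0      # hypothetical error at a range boundary
std_cps = 2800.0 * true_nA             # primary standard measured at 50 nA
unk_cps = 2060.0 * true_nA             # secondary standard measured at 50 nA

print(kratio_single_primary(unk_cps, reported_nA, 2800.0 * 10.0, 10.0))  # biased (~0.707)
print(kratio_same_current(unk_cps, std_cps))                             # unbiased (0.7357)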

But since we now have the standard intensity drift correction turned off, and we measured each primary standard just before each secondary standard, let's re-enable all the primary standards and see what a normal constant k-ratio plot looks like now (compare it to the quoted plot above):



Glitch begone! Somebody slap me please...

So we've updated the constant k-ratio procedure to note that the standard intensity drift correction (in PFE) should be turned off, and that the primary standard should always be acquired just before the secondary standard so the program is forced to utilize the primary and secondary standards measured at the same beam current.  See attached pdf below.
 
Only in this way (in Probe for EPMA at least) is any picoammeter non-linearity truly nulled out in these constant k-ratio measurements.
« Last Edit: September 09, 2022, 08:24:25 AM by Probeman »
The only stupid question is the one not asked!

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3276
  • Other duties as assigned...
    • Probe Software
Re: New method for calibration of dead times (and picoammeter)
« Reply #87 on: August 30, 2022, 11:06:35 AM »
If you update your Probe for EPMA software (from the Help menu), you will get a new menu item that allows you to access the latest version of the constant k-ratio method procedure, also from the Help menu:



If you do not have the Probe for EPMA software, but you would still like to perform these constant k-ratio tests on your instrument, start here and read on:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100
« Last Edit: August 30, 2022, 07:10:46 PM by John Donovan »
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #88 on: August 31, 2022, 09:23:23 AM »
 
...the included methods now require one to "calibrate" the """dead time constant""" for each method separately, as these "constants" will have different values depending on the dead time correction method used (i.e. with the classical method probably more than 3 µs, with the Probeman et al. log expression less than 3 µs, and with the Willis and six-term expressions somewhere in between). <sarcasm on>So probably the PfS configuration files will address this need and will be a tiny bit enlarged. Is it going to have a matrix of dead time "constants" for 4 methods, and different XTALS, and a few per XTAL for low and high angles...? Just something like 80 to 160 positions to store "calibrated "dead time constants"" (let's count: 5 spectrometers * 4 XTALS * 4 methods * 2 high/low XTAL positions) - how simple is that?<sarcasm off>

No need for sarcasm  :D , it is quite a reasonable question: if the dead time (parametric) constants vary slightly depending on the exact expression utilized, how will we manage this assortment of expressions and constants?

This post is a response to that question (since SG asked), but the actual audience is probably the typical Probe for EPMA user: exactly how do we manage all these dead time constants, and do we even require so many?

The simple answer is: it's easy.

But before we get into the details of how all this is handled in Probe for EPMA it might be worth noting a few observations: in most cases the differences in the optimized dead time constants between the various expressions are very small (e.g., 1.32 usec vs. 1.29 usec in the case of Ti Ka on PETL). In fact, for normal sized Bragg crystals (as seen in the previous post of Ti Ka on LIFL), we don't see any significant differences in our results up to 50K cps. For most situations, the exact dead time expression and dead time constant utilized will not be an important consideration.  But if we want to utilize large area crystals at high beam currents on pure metals or oxides (not to mention accurately characterizing our dead time constants for general usage), then we will want to perform these calibrations carefully at high beam currents.

That said, it is still not entirely clear how much of an effect emission line energy or bias voltage has on the exact value of the dead time constant. Probeman's initial efforts on the question of emission line energies are ambiguous thus far (from his Cameca SX100 instrument):

https://probesoftware.com/smf/index.php?topic=1475.msg11017#msg11017

And here is a much larger set of dead times from Philippe Pinard, for a number of emission lines, measured a few years back on his JEOL 8530 instrument:

https://probesoftware.com/smf/index.php?topic=394.msg6325#msg6325

Pinard's data is also somewhat ambiguous as to whether there is a correlation between emission energy and dead time. Anyway, I will admit that when we started developing software for the electron microprobe we did not anticipate that Probeman might develop new expressions for the correction of dead time, much less that the different expressions would produce slightly different (optimized) dead time constants (it's hard to make predictions, especially about the future!).    :)

So how does Probe for EPMA handle all these various dead time constants? It all starts with the SCALERS.DAT file, which is found in the C:\ProgramData\Probe Software\Probe for EPMA folder (which may need to be unhidden using the View menu in Windows Explorer).

Dead time constants were originally implemented as a single value for each spectrometer. These are found on line 13 of the SCALERS.DAT file, which can be edited using any plain text editor such as Notepad or Notepad++.

The dead time constants are on line 13 shown highlighted here in red:
     
    "1"      "2"      "3"      "4"      "5"     "scaler labels"
     ""       ""       ""       ""       ""      "fixed scaler elements"
     ""       ""       ""       ""       ""      "fixed scaler xrays"
     2        2        2        2        2       "crystal flipping flag"
     81010    81010    81010    81010    81010   "crystal flipping position"
     4        2        2        4        2       "number of crystals"
     "PET"    "LPET"   "LLIF"   "PET"    "LIF"   "crystal types1"
     "TAP"    "LTAP"   "LPET"   "TAP"    "PET"   "crystal types2"
     "PC1"    ""       ""       "PC1"    ""      "crystal types3"
     "PC2"    ""       ""       "PC25"   ""      "crystal types4"
     ""       ""       ""       ""       ""      "crystal types5"
     ""       ""       ""       ""       ""      "crystal types6"
     2.85     2.8      2.85     3.0      3.0     "deadtime in microseconds"
     150.     150.     140.     150.     140.     "off-peak size, (hilimit - lolimit)/off-peak size"
     80.      80.      70.      80.      70.     "wavescan size, (hilimit - lolimit)/wavescan size"

This line 13 contains the default dead time constants for all Bragg crystals on each WDS spectrometer. The values on this line will be utilized for all crystals on each spectrometer (see below for more on this).

So begin by entering a default dead time constant in microseconds (usec) for each spectrometer on line 13, as determined from your constant k-ratio tests, using your text editor. If you have values for more than one Bragg crystal, just choose one and proceed below.

And if you have dead time constants for more than a single Bragg crystal per spectrometer, you can also edit lines 72 to 77 for each Bragg crystal on each spectrometer (though only up to 4 crystals are usually found in JEOL and Cameca microprobes).

Each subsequent line corresponds to each Bragg crystal listed above on lines 7 to 12. Here is an example with the edited dead time constant values highlighted in red:

     1        1        1        1        1     "default PHA inte/diff modes1"
     1        1        1        1        1     "default PHA inte/diff modes2"
     1        0        0        1        0     "default PHA inte/diff modes3"
     1        0        0        1        0     "default PHA inte/diff modes4"
     0        0        0        0        0     "default PHA inte/diff modes5"
     0        0        0        0        0     "default PHA inte/diff modes6"
     2.8      3.1      2.85     3.1    3.0     "default detector deadtimes1"
     2.85     2.8      2.80     3.0    3.0     "default detector deadtimes2"
     3.0      0        0        3.1      0     "default detector deadtimes3"
     3.1      0        0        3.2      0     "default detector deadtimes4"
     0        0        0        0        0     "default detector deadtimes5"
     0        0        0        0        0     "default detector deadtimes6"
     0        1        1        0        0     "Cameca large area crystal flag1"
     0        1        1        0        0     "Cameca large area crystal flag2"
     0        0        0        0        0     "Cameca large area crystal flag3"
     0        0        0        0        0     "Cameca large area crystal flag4"
     0        0        0        0        0     "Cameca large area crystal flag5"
     0        0        0        0        0     "Cameca large area crystal flag6"

These dead time constant values on lines 72 to 77 will override the values defined on line 13 if they are non-zero.
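If you want a quick sanity check of what is currently in the file, a small (hypothetical) Python snippet like this will print the dead time lines described above; the path is the default location mentioned earlier, so adjust it if your installation differs:

# Print the dead time entries from SCALERS.DAT: line 13 holds the
# per-spectrometer defaults, lines 72 to 77 the per-crystal overrides
# (as described in this post).  Adjust the path for your installation.
from pathlib import Path

scalers = Path(r"C:\ProgramData\Probe Software\Probe for EPMA\SCALERS.DAT")
lines = scalers.read_text().splitlines()

print("line 13 (default dead times, usec):", lines[12].strip())
for n in range(72, 78):
    print(f"line {n} (per-crystal dead times):", lines[n - 1].strip())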

For new probe runs, the PFE software will automatically utilize these dead time values from the SCALERS.DAT file, but what about re-processing data from older runs? How can they utilize these new dead time constants (and expressions)?

For example, once you have properly calibrated all your dead time constants using the new constant k-ratio method (as described in the attached document) and would like to apply these new values to an old run, you can utilize this new feature to easily update all your samples in a single run, as described in this link:

https://probesoftware.com/smf/index.php?topic=40.msg10968#msg10968

In addition, it should be noted that Probe for EPMA saves the dead time constant for each element separately (see the Elements/Cations dialog) when an element setup is saved to the element setup database, as seen here:



This means that one can have different dead time constants for each element/xray/spectro/crystal combination. So when browsing for an already tuned-up element setup, the dead time constant for that element, emission line, spectrometer, crystal, etc. is automatically loaded into the current run. That is also true when loading a sample setup from another probe run. All of this information is loaded automatically and can of course be easily updated if desired.

Now that said, the dead time correction expression type (traditional/Willis/six-term/log) is only loaded when loading a file setup from another run.  In fact, Probe for EPMA will prompt the user when an older probe file setup is loaded and newer dead time constants (or a newer expression type) are available, as seen here:



This feature prevents the user from accidentally using out-of-date dead time constants when acquiring new data.

So in summary, there are many ways to ensure that the user can save, recall and utilize these new dead time constants once the SCALERS.DAT file is edited for the new dead time (parametric) constant values.

Bottom line: set the dead time correction type parameter in your Probewin.ini file to 4 to use the logarithmic expression, as shown here:

[software]
DeadtimeCorrectionType=4   ; 1 = normal, 2 = high precision deadtime correction, 3 = super high precision, 4 = log expression (Moy)

Then run some constant k-ratio tests on, say, Ti metal and TiO2.

You will probably notice that most spectrometers with normal-sized crystals will yield roughly the same dead time constant, but that the dead time constants on your large area crystals may need to be reduced by 0.02 to 0.04 usec or so (probably more like 0.1 to 0.2 usec for Cameca instruments) in order to perform quantitative analysis at count rates over 50K cps.

 8)
« Last Edit: August 31, 2022, 11:08:32 AM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2838
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #89 on: September 03, 2022, 09:35:13 AM »
Here is something else I just noticed with the Mn Ka k-ratios acquired on my SX100:



The PET/LPET crystals are in pretty good agreement, and in fact the k-ratios they yield, at around 0.735 (see y-axis), are about right according to a quick calculation from CalcZAF:

ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA KILOVOL                                       
   Mn ka  .00000  .73413  77.445   -----  50.000   1.000   15.00                                       
   O  ka  .00000  .17129  22.555   -----  50.000   1.000   15.00                                       
   TOTAL:                100.000   ----- 100.000   2.000


But the LIF and LLIF spectrometers produce k-ratios about 3 to 4% lower than they should be.
The only stupid question is the one not asked!