Author Topic: New method for calibration of dead times (and picoammeter)  (Read 23124 times)

entoptics

  • Student
  • *
  • Posts: 4
Re: New method for calibration of dead times (and picoammeter)
« Reply #105 on: October 17, 2022, 02:50:58 PM »
I played around with the constant k-ratio method over the weekend, and got some very good results.

One thing that may not have been stressed enough (at least for me :P ) was ensuring your PHA settings are suitable for the count rates you'll see. I assumed a "middle of the road" gain setting would be sufficient, but over the nA range I measured, it wasn't. On our JEOL 8500F, I had to up the gain for the higher count rates. Be sure to run a few test PHA scans at low-mid-high currents for all the elements you plan to use.

I've attached the results from Sc/GdScO3 for all five of my spectrometers (PET). Spec 3 (H-type) only goes to 40 nA due to the aforementioned PHA blunder.

I'm quite pleased with the linear response from ~10 kcps to 140 kcps. <0.5% variation.

I'd also note the variation in k-ratio across my spectrometers. I'm assuming this is a takeoff angle discrepancy. My setup has them arranged clockwise from 1 (7 o'clock) to 5 (5 o'clock), and you can see the k-ratios drop as you go around. Presumably there's a bit of stage/specimen tilt, altering the real takeoff value depending on spectrometer location?

I dug around PFE, and couldn't find a place to alter the takeoff angle for individual spectrometers. Is this possible? Would be interesting to change the values a smidge to see if the k-ratios would converge.
« Last Edit: October 17, 2022, 03:45:10 PM by entoptics »

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3265
  • Other duties as assigned...
    • Probe Software
Re: New method for calibration of dead times (and picoammeter)
« Reply #106 on: October 20, 2022, 09:22:40 AM »
I played around with the constant k-ratio method over the weekend, and got some very good results.

One thing that may not have been stressed enough (at least for me :P ) was ensuring your PHA settings are suitable for the count rates you'll see. I assumed a "middle of the road" gain setting would be sufficient, but over the nA range I measured, it wasn't. On our JEOL 8500F, I had to up the gain for the higher count rates. Be sure to run a few test PHA scans at low-mid-high currents for all the elements you plan to use.

Yeah. Getting a single set of PHA settings appropriate over a wide range of count rates takes some care. As you say, one should acquire PHA scans at both extremes of beam current before attempting to acquire constant k-ratios. I've posted about these PHA peak shifts myself recently:

https://probesoftware.com/smf/index.php?topic=1475.msg11330#msg11330

I've attached the results from Sc/GdScO3 for all five of my spectrometers (PET). Spec 3 (H-type) only goes to 40 nA due to the aforementioned PHA blunder.

I'm quite pleased with the linear response from ~10 kcps to 140 kcps. <0.5% variation.

Very nice dataset!  It's interesting that you used Sc La; was there a particular reason for that?

Were these k-ratios calculated using the logarithmic dead time expression in PFE?  How much did your dead time constants change from your previous values?

I'd also note the variation in k-ratio across my spectrometers. I'm assuming this is a takeoff angle discrepancy. My setup has them arranged clockwise from 1 (7 o'clock) to 5 (5 o'clock), and you can see the k-ratios drop as you go around. Presumably there's a bit of stage/specimen tilt, altering the real takeoff value depending on spectrometer location?

I dug around PFE, and couldn't find a place to alter the takeoff angle for individual spectrometers. Is this possible? Would be interesting to change the values a smidge to see if the k-ratios would converge.

I think this is an absolutely amazingly good idea!   8)

Perhaps, rather than depending on the engineer to align our spectrometers and/or replacing crystals with asymmetrical diffraction, we should instead attempt to determine the effective takeoff angle of each spectrometer by comparing these simultaneous k-ratios from constant k-ratio measurements (and of course we really only need k-ratios from a single beam current for this purpose).

And yes, it would not help in the case of samples with variable sample tilts (different each time they are inserted in the sample holder). However, if it were the entire sample holder that was tilted (reproducibly) in a particular direction, then yes, it would help very much!

The only "fly in the ointment" I can think of is how do we know what the correct or ideal k-ratio is for a given primary and secondary standard?  We can average a bunch of models of course, but then it comes down to all those particular details such as oxide layers, and operating voltage accuracy, carbon coating thickness, etc. in order to get an absolute k-ratio value to "shoot for" in order to adjust our effective takeoff angles to obtain this ideal k-ratio.

In any case I think it's worth working on this idea. So we modified the underlying physics code in CalcZAF/Probe for EPMA to support manually entered effective takeoff angles for each element.  The takeoff angle in PFE is now defined internally (using combined conditions) as specific to each element.  So as a first effort we enabled the takeoff angle text control in the Combined Conditions dialog in CalcZAF:



Go ahead and update to the latest PFE, then export a constant k-ratio sample from PFE using the Output | Save CalcZAF Format menu. Try reprocessing the data in CalcZAF based on the spectrometer orientation and let us know what you find. 

There could indeed be a different takeoff angle for each spectrometer. It's one reason why Aurelien specified defining the spectrometer orientation (and x/y/z coordinates) in the consensus k-ratio measurement method, to see if they could check the specimen tilt.  It would be very interesting to see if you can get consistent k-ratios by adjusting the "effective" takeoff angle of each spectrometer!

But it could be even worse than that, as I can imagine a spectrometer mechanism that is out of alignment by varying amounts over its sin theta range!  That means there could be different effective takeoff angles as a function of the spectrometer position!  That would require a visit from the engineer, I expect.

Just to see what effect changing the takeoff angle for a single spectrometer would have, I did a quick model test in CalcZAF. Here is Si Ka at a 40 degree takeoff angle (20 keV):

ELEMENT  ABSCOR  FLUCOR  ZEDCOR  ZAFCOR STP-POW BKS-COR   F(x)u      Ec   Eo/Ec    MACs
   Si ka  1.6169  1.0000  1.0254  1.6579  1.0522   .9745   .5259  1.8390 10.8755 1542.63
   Mg ka  1.5028   .9946  1.0220  1.5275  1.0329   .9895   .5279  1.3050 15.3257 1491.24
   O  ka  2.3761   .9985   .9698  2.3009   .9549  1.0156   .2424   .5317 37.6152 3965.38

 ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA TAKEOFF KILOVOL                                       
   Si ka  .00000  .12041  19.962   -----  14.286    .333   40.00   20.00                                       
   Mg ka  .00000  .22619  34.550   -----  28.571    .667   40.00   20.00                                       
   O  ka  .00000  .19770  45.488   -----  57.143   1.333   40.00   20.00                                       
   TOTAL:                100.000   ----- 100.000   2.333


Note the new column for the takeoff angle of each element! And here is Si Ka at 39 degrees:

ELEMENT  ABSCOR  FLUCOR  ZEDCOR  ZAFCOR STP-POW BKS-COR   F(x)u      Ec   Eo/Ec    MACs
   Si ka  1.6304  1.0000  1.0254  1.6718  1.0522   .9745   .5198  1.8390 10.8755 1542.63
   Mg ka  1.5028   .9946  1.0220  1.5275  1.0329   .9895   .5279  1.3050 15.3257 1491.24
   O  ka  2.3761   .9985   .9698  2.3009   .9549  1.0156   .2424   .5317 37.6152 3965.38

 ELEMENT   K-RAW K-VALUE ELEMWT% OXIDWT% ATOMIC% FORMULA TAKEOFF KILOVOL                                       
   Si ka  .00000  .11941  19.962   -----  14.286    .333   39.00   20.00                                       
   Mg ka  .00000  .22619  34.550   -----  28.571    .667   40.00   20.00                                       
   O  ka  .00000  .19770  45.488   -----  57.143   1.333   40.00   20.00                                       
   TOTAL:                100.000   ----- 100.000   2.333


A difference of 1 degree in the takeoff angle results in a difference in the Si Ka absorption correction in this system of around 0.8%, so not that large, but certainly worth trying to correct for in our software I think...
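As a quick back-of-the-envelope check (not the CalcZAF code, just the standard path-length argument), the emergent X-rays travel a path proportional to csc(takeoff angle) before escaping the sample, so a 1 degree change alters the absorption path by about 2%:

import math

for psi in (40.0, 39.0):
    print(f"takeoff {psi:.0f} deg: csc(psi) = {1.0 / math.sin(math.radians(psi)):.4f}")

ratio = math.sin(math.radians(40.0)) / math.sin(math.radians(39.0))
print(f"path length at 39 deg relative to 40 deg: {ratio:.4f}")   # ~1.021, i.e. ~2.1% longer

Only part of the generated intensity is absorbed, which is why that ~2% path change shows up as the smaller ~0.8% change in the Si Ka ABSCOR above.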

It would certainly be interesting to know how much the effective takeoff angle would need to change to account for the differences we are seeing in these simultaneous k-ratio measurements.
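As a purely hypothetical sketch of how one might search for that angle: treat the model k-ratio as a function of takeoff angle and bisect for the angle that reproduces the measured k-ratio on each spectrometer. The model_k stand-in below is made up (its slope is loosely based on the Si Ka k-values in the two tables above); in practice one would rerun CalcZAF at each trial angle:

def solve_effective_takeoff(k_measured, model_k, lo=35.0, hi=45.0, tol=1e-4):
    """Bisection for the takeoff angle, assuming model_k(psi) is monotonic on [lo, hi]."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # keep the half-interval that still brackets the measured k-ratio
        if (model_k(mid) - k_measured) * (model_k(lo) - k_measured) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Made-up stand-in: k-ratio changes by ~0.001 per degree, as for Si Ka above.
model_k = lambda psi: 0.12041 + 0.00100 * (psi - 40.0)
print(solve_effective_takeoff(0.11941, model_k))   # ~39.0 degrees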
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #107 on: October 21, 2022, 09:32:46 AM »
It's one reason why Aurelien specified defining the spectrometer orientation (and x/y/z coordinates) in the consensus k-ratio measurement method, to see if they could check the specimen tilt.

If anyone wants to try checking their "effective" takeoff angles for their spectrometers (using the constant k-ratio method on multiple spectrometers), be sure to first test your specimen tilt by focusing your light optics on a flat specimen at the three corners of a triangle, with the vertices at least a few millimeters apart.  Then just calculate the tilt in degrees.
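If it helps, here is a minimal sketch of that tilt calculation (my own quick script, not the PFE code): fit a plane through the three focused points and take the angle between its normal and the Z axis. The coordinates below are made up, in mm, with Z taken from the optical focus:

import math

def tilt_degrees(p1, p2, p3):
    """Angle between the plane through p1, p2, p3 and the horizontal, in degrees."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    nx, ny, nz = uy*vz - uz*vy, uz*vx - ux*vz, ux*vy - uy*vx   # plane normal (cross product)
    return math.degrees(math.acos(abs(nz) / math.sqrt(nx*nx + ny*ny + nz*nz)))

# Three corners of a triangle ~5 mm apart, with small Z differences from focusing:
print(f"{tilt_degrees((0.0, 0.0, 0.000), (5.0, 0.0, 0.020), (0.0, 5.0, 0.010)):.2f} degrees")   # ~0.26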

In PFE one can use the fiducial confirmation feature and it will calculate the tilt automatically for you. I'd say make sure that your specimen tilt is less than 0.5 degrees.  In my experience, when confirming the standard mount fiducials on our Cameca instrument, we see sample tilts of around 0.2 to 0.3 degrees on our one-piece acrylic standard mounts.

Once you know your specimen is mounted flat, then go ahead and test using simultaneous k-ratios for checking your spectrometer effective take off angles...
« Last Edit: October 22, 2022, 05:07:09 PM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #108 on: October 26, 2022, 02:34:14 PM »
Here is a recent constant k-ratio data set I acquired over the weekend on TiO2 and Ti at 20 keV. After optimizing the dead time constant (excluding the "terrifying" count rates), we obtain this plot:



Zooming in we obtain this:



Pretty constant k-ratios up to around ~300 kcps and higher. 

Surprisingly the various spectrometers all yield pretty consistent k-ratios (0.57 to 0.58).  Maybe that's partly because I've finally got the PHA settings properly adjusted!    :-[

By the way, here are the optimized dead times (using integer enforced dead times of 3 usec) at 140 nA:

SPEC:        1       2       3       4       5
CRYST:     PET    LPET    LPET     PET     PET
DEAD:     2.71    2.60    2.66    2.70    2.60
DTC%:     55.9   145.3   200.0    44.3    72.5

DTC% is dead time correction (relative) percent!    :o
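For those following along, here is a toy sketch of how a relative correction like DTC% scales with the observed count rate, comparing the traditional linear expression with the logarithmic correction written in the form N = N'/(1 + ln(1 - N'τ)), where N' is the observed and N the corrected count rate (that is the form assumed here for illustration; the observed count rates below are made up, and this is not the PFE code):

import math

TAU = 2.6e-6   # seconds, e.g. spectrometers 2 and 5 above

def linear_corrected(n_obs, tau=TAU):
    """Traditional expression: N = N' / (1 - N'*tau)."""
    return n_obs / (1.0 - n_obs * tau)

def log_corrected(n_obs, tau=TAU):
    """Logarithmic form assumed here: N = N' / (1 + ln(1 - N'*tau))."""
    return n_obs / (1.0 + math.log(1.0 - n_obs * tau))

for n_obs in (50e3, 100e3, 150e3, 180e3):   # made-up observed count rates (cps)
    dtc_lin = 100.0 * (linear_corrected(n_obs) - n_obs) / n_obs
    dtc_log = 100.0 * (log_corrected(n_obs) - n_obs) / n_obs
    print(f"{n_obs/1e3:4.0f} kcps observed: DTC% = {dtc_lin:5.1f} (linear), {dtc_log:6.1f} (log)")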
« Last Edit: October 26, 2022, 02:37:49 PM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #109 on: November 03, 2022, 11:19:54 AM »
Here is a recent constant k-ratio data set I acquired over the weekend on TiO2 and Ti at 20 keV. After optimizing the dead time constant (excluding the "terrifying" count rates), we obtain this plot:



Zooming in we obtain this:



Pretty constant k-ratios up to around ~300 kcps and higher. 

Surprisingly the various spectrometers all yield pretty consistent k-ratios (0.57 to 0.58).  Maybe that's partly because I've finally got the PHA settings properly adjusted!    :-[

By the way, here are the optimized dead times (using integer enforced dead times of 3 usec) at 140 nA:

SPEC:        1       2       3       4       5
CRYST:     PET    LPET    LPET     PET     PET
DEAD:     2.71    2.60    2.66    2.70    2.60
DTC%:     55.9   145.3   200.0    44.3    72.5

DTC% is dead time correction (relative) percent!    :o

And just to put things in perspective for the above "constant" k-ratio plots, here are the same data, but this time plotted using the traditional dead time correction method:

« Last Edit: November 03, 2022, 11:21:30 AM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #110 on: November 11, 2022, 02:42:27 PM »
Things to check (I would like to check this on my own): the slew rate of that chip is fixed, but by decreasing the gas and analog gain (and thus the average pulse height coming into the comparator---pulse-hold chip tandem) there should be fewer missed pulses - which would look like an absolutely counter-intuitive measure for PHA shift.

I am curious if you have any thoughts (or even better, measurements) that you can share with us regarding the relative contribution towards the overall observed dead time interval, from the detector gas ionization response time vs. the pulse processor electronics response time.

Also do you think the non-rectilinear shape of the pulses (i.e., curved tails) can contribute towards the non-linear response of the system at these high count rates?
« Last Edit: November 11, 2022, 02:59:15 PM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 301
Re: New method for calibration of dead times (and picoammeter)
« Reply #111 on: November 12, 2022, 11:50:29 AM »
Things to check (I would like to check this on my own): the slew rate of that chip is fixed, but by decreasing the gas and analog gain (and thus the average pulse height coming into the comparator---pulse-hold chip tandem) there should be fewer missed pulses - which would look like an absolutely counter-intuitive measure for PHA shift.

I am curious if you have any thoughts (or even better, measurements) that you can share with us regarding the relative contribution towards the overall observed dead time interval, from the detector gas ionization response time vs. the pulse processor electronics response time.

Also do you think the non-rectilinear shape of the pulses (i.e., curved tails) can contribute towards the non-linear response of the system at these high count rates?

Still constructing the generator. Duties first :P, other stuff in my spare time. My plan is to measure it with such a test: generate and send two pulses and change the time interval between them until the counting electronics sees a single pulse instead of two - that is basically a controlled, physical measurement of the dead time. For the influence of the mentioned slew rate, I plan to make the first pulse double or triple amplitude and the second pulse normal amplitude (again testing while changing the time interval) - if the interval at which the second pulse gets ignored is the same as with normal-amplitude pulses, this slew-rate hypothesis can be discarded.

I am trying to understand what you mean by "pulse response time". Is it the time taken between the X-ray ionizing the gas - the pulse in the counter - pulse shaping - sending to the counting electronics - counting and sending to the acquisition board as a (LV)TTL pulse? Between and at every one of these steps there will naturally be some delay - electronic signals travel at nearly the speed of light, and when working at high frequency that matters somewhat. But those delays (in ns) apply equally to all counts: if two X-ray photons arrive at the counter chamber with a time difference of, for example's sake, 5.5 µs, the time difference between the shaped and gain-amplified pulses will be exactly the same. Now, due to the way the signal is sensed for digital counting (integral counting), that time can be a bit different when crossing into the digital domain - and additionally the digital domain is run by a clock, so the time resolution is granular. I should remind you that, in my opinion, there is a digital signal ceiling of 500 kcps (raw counts), as the digital (LV)TTL pulses are 1 µs long and are aligned to a 1 MHz clock.

"non-rectilinear shape of the pulses" - rectilinear pulses are unnatural - they easy produce some artifacts (under-over shots), they are good for digital domain as in digital domain the rising and falling edges are detected for getting if it is 1 or 0. rectilinear pulse would be poor choise for amplitude carriage as it would behave very differently at low amplitudes and high amplitudes. The negative tail I believe however does influence missing of some counts, but lack of such tail would bring other problems (pulse pileups would "climb" to positive rail saturating OPAMPS).

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #112 on: November 12, 2022, 12:54:02 PM »
Things to check (I would like to check this on my own): the slew rate of that chip is fixed, but by decreasing the gas and analog gain (and thus the average pulse height coming into the comparator---pulse-hold chip tandem) there should be fewer missed pulses - which would look like an absolutely counter-intuitive measure for PHA shift.

I am curious if you have any thoughts (or even better, measurements) that you can share with us regarding the relative contribution towards the overall observed dead time interval, from the detector gas ionization response time vs. the pulse processor electronics response time.

Also do you think the non-rectilinear shape of the pulses (i.e., curved tails) can contribute towards the non-linear response of the system at these high count rates?

I am trying to understand what you mean by "pulse response time". Is it the time taken between the X-ray ionizing the gas - the pulse in the counter - pulse shaping - sending to the counting electronics - counting and sending to the acquisition board as a (LV)TTL pulse?

I am asking if you can compare for us the *intrinsic* dead time of the gas detector/pre-amplifier versus the *intrinsic* dead time of the pulse processing electronics versus the *intrinsic* dead time of the pulse counting electronics. That is, assuming a single photon input, what is the natural width of this pulse for each segment of the WDS photon counting system? I'm attempting to understand the relative importance of each piece of the WDS system in contributing towards the total dead time interval that we actually observe.

"non-rectilinear shape of the pulses" - rectilinear pulses are unnatural - they easy produce some artifacts (under-over shots), they are good for digital domain as in digital domain the rising and falling edges are detected for getting if it is 1 or 0. rectilinear pulse would be poor choise for amplitude carriage as it would behave very differently at low amplitudes and high amplitudes. The negative tail I believe however does influence missing of some counts, but lack of such tail would bring other problems (pulse pileups would "climb" to positive rail saturating OPAMPS).

Yes, I know that rectilinear pulses are "unnatural". I am using the term as a mathematical ideal and asking if you can compare the behavior of ideal rectilinear pulses with the behavior of the natural non-rectilinear pulses (that we actually have in our electronics).

That is, at low count rates when the pulses are far apart compared to their natural widths, the pulses can be modeled as perfect rectilinear pulses because they rarely overlap.

But as the interval between the pulses decreases (the pulses begin to overlap), does it make sense that the pulse counting system (which I assume is triggered at some specific voltage level) will begin to behave in a non-linear fashion (compared to ideal pulse shapes) as the curved edges of these pulses increasingly overlap?
« Last Edit: November 12, 2022, 01:08:12 PM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 301
Re: New method for calibration of dead times (and picoammeter)
« Reply #113 on: November 14, 2022, 10:17:52 AM »
There is no straight answer; the devil is in the details. Can I compare?.... For the gas counter and preamplifier: I can't see any missing pulses at that level (at least on the oscilloscope), and because of that they produce enormously complicated overlap patterns which the later stages of the signal pipeline struggle to untangle. If there were any dead time at the gas counter and preamplifier, the counting electronics would have a much easier life. So I am 99.9% sure that the intrinsic dead time of the gas counter and pre-amplifier is equal to 0 at our most extreme achievable beam currents, even with large XTALs at the most intense line positions - at least on Cameca hardware. For JEOL I am not so sure, as IMHO their preamplifier is missing an important, higher-capacity HV backup capacitor, and because of that the bias voltage of the cathode could be significantly drained during a burst of high-rate X-rays (similarly to a G-M counter). Most of the dead time happens at the analog pulse sensing and probably at the digital pulse counting - that will be possible to measure after I finish constructing the generator.

The pulse sensing on Cameca is not triggered at a specific absolute voltage level, but at a threshold on the difference between the incoming real-time pulse and a delayed copy of it. That is why there is a comparator, which compares these signals and detects the rising edge of the pulse. The delay part is done by the sample-and-hold chip, which can not only hold the cached voltage level of the signal when triggered, but also passes a significantly delayed signal to its output when it is not triggered to hold.  This tandem looks OK in theory, but due to noise the pulse sensing can be triggered a bit too early or too late, so the sample-and-hold chip does not catch the voltage at the very top center of the pulse but with deviations to either side - thus we get a lot of PHA distribution broadening. As the time between pulses decreases and pile-up increases, it happens more and more often that a pulse arrives during the voltage drop of the previous pulse, and its rising slope is not enough to trigger the comparator, which sees a flat or still-diminishing signal. The comparator---sample-and-hold chip tandem is really a very oversimplified, "dumb" way to sense pulses.

More sophisticated signal processing using an FPGA can sense all pulses with ease. A few months ago I had the opportunity to watch a new EDAX EDS detector in action - there were no visible pileups even at 97% dead time. Its signal processing has moved completely to an FPGA, where the pulses are recognized and deconvoluted in real time, in whole, not just with some dumb voltage-level triggering.

We need to get something like that for WDS, and then we will be able to do those few million counts a second!

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #114 on: November 14, 2022, 11:33:35 AM »
There is no straight answer; the devil is in the details. Can I compare?.... For the gas counter and preamplifier: I can't see any missing pulses at that level (at least on the oscilloscope), and because of that they produce enormously complicated overlap patterns which the later stages of the signal pipeline struggle to untangle. If there were any dead time at the gas counter and preamplifier, the counting electronics would have a much easier life. So I am 99.9% sure that the intrinsic dead time of the gas counter and pre-amplifier is equal to 0 at our most extreme achievable beam currents, even with large XTALs at the most intense line positions - at least on Cameca hardware. For JEOL I am not so sure, as IMHO their preamplifier is missing an important, higher-capacity HV backup capacitor, and because of that the bias voltage of the cathode could be significantly drained during a burst of high-rate X-rays (similarly to a G-M counter). Most of the dead time happens at the analog pulse sensing and probably at the digital pulse counting - that will be possible to measure after I finish constructing the generator.

OK, that makes perfect sense. I remember now seeing schematics of EDS detector pulse streams and I think it's pretty much the same as you describe for WDS. 

The pulse sensing on Cameca is not triggered at a specific absolute voltage level, but at a threshold on the difference between the incoming real-time pulse and a delayed copy of it. That is why there is a comparator, which compares these signals and detects the rising edge of the pulse. The delay part is done by the sample-and-hold chip, which can not only hold the cached voltage level of the signal when triggered, but also passes a significantly delayed signal to its output when it is not triggered to hold.  This tandem looks OK in theory, but due to noise the pulse sensing can be triggered a bit too early or too late, so the sample-and-hold chip does not catch the voltage at the very top center of the pulse but with deviations to either side - thus we get a lot of PHA distribution broadening. As the time between pulses decreases and pile-up increases, it happens more and more often that a pulse arrives during the voltage drop of the previous pulse, and its rising slope is not enough to trigger the comparator, which sees a flat or still-diminishing signal. The comparator---sample-and-hold chip tandem is really a very oversimplified, "dumb" way to sense pulses.

Thanks, I think I am beginning to understand these "dumb" details.   :)

So could these sideways deviations cause additional (non-linear) loss of counts that we observe at high enough count rates?  I'm trying to understand these dead time effects beyond simple photon coincidence.

...More sophisticated signal processing using an FPGA can sense all pulses with ease. A few months ago I had the opportunity to watch a new EDAX EDS detector in action - there were no visible pileups even at 97% dead time. Its signal processing has moved completely to an FPGA, where the pulses are recognized and deconvoluted in real time, in whole, not just with some dumb voltage-level triggering.

We need to get something like that for WDS, and then we will be able to do those few million counts a second!

Absolutely.  Wouldn't it be great to have linear response up to 1 Mcps!
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #115 on: November 15, 2022, 09:09:39 AM »
The pulse sensing on Cameca is not triggered at a specific absolute voltage level, but at a threshold on the difference between the incoming real-time pulse and a delayed copy of it. That is why there is a comparator, which compares these signals and detects the rising edge of the pulse. The delay part is done by the sample-and-hold chip, which can not only hold the cached voltage level of the signal when triggered, but also passes a significantly delayed signal to its output when it is not triggered to hold.  This tandem looks OK in theory, but due to noise the pulse sensing can be triggered a bit too early or too late, so the sample-and-hold chip does not catch the voltage at the very top center of the pulse but with deviations to either side - thus we get a lot of PHA distribution broadening. As the time between pulses decreases and pile-up increases, it happens more and more often that a pulse arrives during the voltage drop of the previous pulse, and its rising slope is not enough to trigger the comparator, which sees a flat or still-diminishing signal. The comparator---sample-and-hold chip tandem is really a very oversimplified, "dumb" way to sense pulses.

Thanks, I think I am beginning to understand these "dumb" details.   :)

So could these sideways deviations cause additional (non-linear) loss of counts that we observe at high enough count rates?  I'm trying to understand these dead time effects beyond simple photon coincidence.

Could another possible cause of the non-linear behavior of the pulse processing system at high count rates be the shape of the pulses changing as a function of count rate?

In other words, could the pulses have a more rectilinear shape at low count rates, but the pulse shapes become increasingly non-rectilinear at higher count rates?

Another thought: could the "effective" dead time interval actually increase as a function of count rate at very high count rates?
« Last Edit: November 15, 2022, 12:07:30 PM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #116 on: November 25, 2022, 12:09:54 PM »
This is a post on recent "constant" k-ratio measurements and some strange observations regarding the results.

Now that I think I've finally learned to properly adjust my PHA gain settings to work at count rates from zero to ~600 kcps, I've been noticing some consistently odd non-linearities in the constant k-ratio results.  These are quite small variations but they seem to be reproducibly present on my Cameca instrument, though maybe not on Anette's JEOL instrument.  But maybe that is just a question of the larger dead time constant on the Cameca instrument?

Beginning this story with Anette's instrument here are some TiO2/Ti metal k-ratios on her PETL spectrometer:



Take a look at the Y-axis range and you can see that we are getting consistent accuracy in the TiO2/Ti k-ratios, at Ti Ka count rates from 15 kcps to 165 kcps in TiO2 and from 28 kcps to 392 kcps in Ti metal (10 nA to 140 nA), within roughly a thousand or so PPM.  Pretty darn good! 

This very nicely demonstrates the sensitivity of the constant k-ratio method, because the Y-axis can be expanded indefinitely as the slope of the k-ratios approaches zero (as it should in a well calibrated instrument!). Her JEOL data were taken at 15 kV. Now here are some Ti Ka k-ratio data from my Cameca at 20 kV:
 


First note that the count rates are almost the same (at 20 keV) as on Anette's JEOL instrument at 15 keV. Next note that the k-ratio variation over the Cameca Y-axis range is larger than on Anette's instrument, though still within a percent or so. But that's still a pretty significant variation in the k-ratios as a function of count rate. So the question is, why is it so "squiggly" on the Cameca instrument? Though I should add that if we look really closely at Anette's JEOL data, there is an almost imperceptible "squiggle" to her data as well...  though seemingly smaller by about a factor of 10.  So what is causing these "squiggles" in the constant k-ratios?

Also note that the reason the k-ratios start to "head north" at 140 nA is simply that at that beam current the count rate on the Ti metal is approaching 600 kcps!  And on the Cameca, with a 2.6 µsec dead time constant, the logarithmic dead time correction is around 200% and really just can't keep up any more!
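Taking the logarithmic correction in the same form as in the sketch I posted a few replies back, N = N'/(1 + ln(1 - N'τ)), the "can't keep up" behavior is easy to see: the denominator goes to zero as the observed rate approaches (1 - 1/e)/τ, which is roughly 243 kcps observed for a 2.6 µsec dead time, so the correction factor diverges near there (again, just an illustration with that assumed form):

import math

TAU = 2.6e-6
for n_obs in (150e3, 190e3, 230e3):                      # observed count rates (illustrative)
    factor = 1.0 / (1.0 + math.log(1.0 - n_obs * TAU))   # corrected / observed
    print(f"{n_obs/1e3:.0f} kcps observed -> correction factor {factor:.1f}x")
print(f"correction diverges near {(1.0 - math.exp(-1.0)) / TAU / 1e3:.0f} kcps observed")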

But more interesting (and also incomprehensible) to me is that these "squiggles" appear on all the spectrometers, even those with lower count rates, as seen here:



So that might suggest to me that these squiggles are due to a picoammeter non-linearity, but if you've been following along with these discussions you will remember that when using the constant k-ratio method, we measure both the primary and secondary standard at the *same* beam current. Therefore any picoammeter non-linearity should normalize out.  And in fact the picoammeter non-linearity on my instrument is much worse than these k-ratio data show, as previously plotted here:

https://probesoftware.com/smf/index.php?topic=1466.msg11324#msg11324
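Just to spell out that cancellation with a toy example (the numbers are made up): any error in the reported beam current scales the current-normalized intensities of both materials identically, so it divides out of the k-ratio:

true_current = 100.0            # nA, actual beam current
reported_current = 95.0         # nA, what a miscalibrated picoammeter might report
cps_tio2 = 150_000.0            # hypothetical dead time corrected count rates
cps_ti = 260_000.0
k_reported = (cps_tio2 / reported_current) / (cps_ti / reported_current)
k_true = (cps_tio2 / true_current) / (cps_ti / true_current)
print(k_reported == k_true)     # True: the current term cancels in the ratio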

So I don't think it's the picoammeter.  Now, it is worth pointing out that using a traditional dead time calibration one would never see such tiny variations in the data.  To demonstrate this, here are the same Cameca k-ratio data as above, but this time plotted using the traditional linear dead time expression:



These k-ratio variations are even less evident in a traditional plot of intensity vs. beam current for a single material, as seen here:



The PHA data is here, first adjusted at 200 nA to ensure that the Ti Ka escape peak is above the baseline:



and here at 30 nA:



Remember, in integral mode the counts beyond 5 V on the X-axis are still counted in the integration, as shown previously:

https://probesoftware.com/smf/index.php?topic=1475.msg11356#msg11356
 
That is, they are not cut off as we might expect given the display in the PeakSight software.  And by the way, Anette has sent me some preliminary "gain test" data from her JEOL, and even though she had to deal with a shifting baseline, she also sees a constant intensity as a function of gain. She will post that data here soon, I hope.

In the meantime, does anyone have any theories on what could be causing these "squiggles" in the constant k-ratio data on my Cameca?  And why are they so much more pronounced than on Anette's JEOL instrument?
« Last Edit: November 26, 2022, 09:39:04 AM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 301
Re: New method for calibration of dead times (and picoammeter)
« Reply #117 on: November 28, 2022, 03:57:22 AM »
Could another possible cause of the non-linear behavior of the pulse processing system at high count rates be the shape of the pulses changing as a function of count rate?

In other words, could the pulses have a more rectilinear shape at low count rates, but the pulse shapes become increasingly non-rectilinear at higher count rates?

I returned to the probe with the oscilloscope to answer these questions (I was also intrigued whether the pulse shape might change somehow at higher count rates - which would be possible if the shaping amplifier had too short a time constant; I wanted to check that, especially since Brian recently wondered whether 250 ns is too short, as it is not a common value). I made the GIF below to show the differences (or actually the lack of differences) between a common Ti Ka pulse registered at 1.4 nA and a "lonely" pulse "hunted" at 130 nA.

I use here word "hunted" as it is not so simple to get pulse with "no pulse" before and after already at 130nA or 150kcps. Going to higher count rates such situation gets more and more rare, and gets more and more challenging to catch:


So the answer to Probeman is: there is no observable dependence of the pulse shape on the count rate, nor does the pulse become more or less rectilinear at any count rate.
« Last Edit: November 28, 2022, 04:01:20 AM by sem-geologist »

Probeman

  • Emeritus
  • *****
  • Posts: 2829
  • Never sleeps...
    • John Donovan
Re: New method for calibration of dead times (and picoammeter)
« Reply #118 on: November 28, 2022, 09:15:51 AM »
Could another possible cause of the non-linear behavior of the pulse processing system at high count rates be the shape of the pulses changing as a function of count rate?

In other words, could the pulses have a more rectilinear shape at low count rates, but the pulse shapes become increasingly non-rectilinear at higher count rates?

I returned to the probe with the oscilloscope to answer these questions (I was also intrigued whether the pulse shape might change somehow at higher count rates - which would be possible if the shaping amplifier had too short a time constant; I wanted to check that, especially since Brian recently wondered whether 250 ns is too short, as it is not a common value). I made the GIF below to show the differences (or actually the lack of differences) between a common Ti Ka pulse registered at 1.4 nA and a "lonely" pulse "hunted" at 130 nA.



So the answer to Probeman is: there is no observable dependence of the pulse shape on the count rate, nor does the pulse become more or less rectilinear at any count rate.

That is very interesting, thanks.  I'm curious, what was the observed count rate at 130 nA?

But if it's not changes in pulse shape causing the non-linear response of the counting system above 50 kcps, what effect(s) do you think could be causing such extreme non-linearity, beyond simple photon coincidence, as shown here:

Now, it is worth pointing out that using a traditional dead time calibration one would never see such tiny variations in the data.  To demonstrate this, here are the same Cameca k-ratio data as above, but this time plotted using the traditional linear dead time expression:


The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 301
Re: New method for calibration of dead times (and picoammeter)
« Reply #119 on: November 29, 2022, 03:17:48 AM »
First note that the count rates are almost the same (at 20 keV) as on Anette's JEOL instrument at 15 keV. Next note that the k-ratio variation over the Cameca Y-axis range is larger than on Anette's instrument, though still within a percent or so. But that's still a pretty significant variation in the k-ratios as a function of count rate. So the question is, why is it so "squiggly" on the Cameca instrument? Though I should add that if we look really closely at Anette's JEOL data, there is an almost imperceptible "squiggle" to her data as well...  though seemingly smaller by about a factor of 10.  So what is causing these "squiggles" in the constant k-ratios?

Also note that the reason the k-ratios start to "head north" at 140 nA is simply that at that beam current the count rate on the Ti metal is approaching 600 kcps!  And on the Cameca, with a 2.6 µsec dead time constant, the logarithmic dead time correction is around 200% and really just can't keep up any more!
Since you estimated 2.6 µs with the logarithmic (I guess) equation, that means the hardware is set to 3 µs, correct? I thought I had already convinced you of the benefits of reducing it to at least 2 µs (which should give you an estimated "dead time constant" somewhere between 1.5 and 1.8 µs), so why are you still using 3 µs? You can reduce it safely in integral mode without any drawbacks (but in diff mode it is better to increase it to at least 4 µs, if you use diff mode for anything at all). I have gathered some limited measurements on Ti/TiO2 with the hardware DT set to 1 µs; I need to pull that data together and organize it before I can show anything here.

I am aware of these "squiggles", as you called them, and already pointed them out previously (bold part in the quote):
Now don't get me wrong: I agree k-ratios ideally should be the same for low, low-middle, middle, middle-high, high and ultra-high count rates. What I disagree with is using k-ratios as the starting (and only) point for calibration of the dead time, effectively hiding problems in some of the lower-level systems within the std dev of such an approach. Probeman, we have not seen how your log model, calibrated over this high range of currents, performs at the low currents which Brian addresses here. I mean at 1-10 kcps, or at currents from 1 to 10 nA. I know, it is going to be a pain to collect a meaningful number of counts at such low count rates. But it should not sacrifice accuracy at low currents, as there are plenty of minerals which are small (no defocusing trick) and beam sensitive. It could be that your log equation takes care of that. In particular, I am absolutely not convinced that what you call the anomaly at 40 nA in your graphs is not actually the correct measurement, with your 50-500 nA range being the one that is wrong (picoammeter). Also, in most of your graphs you still get not a straight line but distributions clearly bent one way or the other (visible with the bare eye).
There are no pulses missing; it is just that this equation is not perfect. Think of it like the numerous matrix correction models, which work OK, and comparably, at common acceleration voltages (7-25 kV), but some of which give very large biases for (very) low voltage analyses, because some of them describe a mathematically very oversimplified physical reality. As I said, I have already made an MC simulation and there is no visible discrepancy between the modeled input and the observable output count rates, although I could not find the equation, as my greed for the whole possible range of count rates (let's say up to 10 Mcps) stalled me. At least your method extends the usable range to 150-200 kcps, and you can minimize the effect of the first "bump" by calibrating the dead time only up to 100 kcps. Your log equation in its current form is already a nice improvement, as there is no longer any need to be limited to 10-15 kcps, or to require separate calibrations for high currents, or to use matrix-matched standards (where in reality it was the count-rate-matched intensities that provided the better results, misinterpreted as having anything to do with the matrix).

I will try to redo the Monte Carlo simulation using a real pulse shape, with a more detailed simulation of the detection - that should clear things up a bit, I think. The point is actually not how and where the coincidences happen (inside the GPC detector - photon coincidence - versus in the shaping amplifier signal - pulse pile-up), but how they are ignored. This is what I think your log equation starts to fail to account for correctly at higher count rates (>150 kcps).
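In the meantime, for anyone who wants to play along, a toy version of such a simulation (much cruder than what I have in mind - ideal instantaneous pulses only, no real pulse shape) might look like this:

import random

def observed_rate(true_rate, tau=3e-6, duration=1.0, extending=False, seed=1):
    """Poisson photon arrivals pushed through a dead time of tau seconds."""
    random.seed(seed)
    t, dead_until, counted = 0.0, -1.0, 0
    while t < duration:
        t += random.expovariate(true_rate)     # time to the next photon
        if t >= dead_until:
            counted += 1
            dead_until = t + tau               # a counted pulse starts the dead time
        elif extending:
            dead_until = t + tau               # a missed pulse still extends it
    return counted / duration

# Non-extending should approach N/(1 + N*tau), extending N*exp(-N*tau):
for true_rate in (50e3, 200e3, 500e3):
    print(true_rate, observed_rate(true_rate), observed_rate(true_rate, extending=True))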
« Last Edit: November 29, 2022, 03:22:15 AM by sem-geologist »