Author Topic: Generalized dead times  (Read 4597 times)

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #15 on: September 04, 2022, 02:47:10 PM »
I looked through some of my original constant k-ratio data from my SX100 and found this SiO2/Si run from earlier this June, where on spc2 LTAP we were getting around 4300 cps/nA on pure Si metal:



It's not as "terrifying" as Anette's TAPL crystal on her JEOL instrument but still pretty impressive!

You can see that starting around 500K cps, even the logarithmic expression no longer fully corrects the observed count rates.  But it's still doing a lot better than the traditional expression, which starts failing almost immediately!
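For anyone who wants to see the difference numerically, here's a minimal sketch (not the Probe for EPMA implementation; the dead time constant and observed rates are just illustrative) of the two expressions, i.e. the traditional form N = N'/(1 - τN') and the logarithmic form N = N'/(1 + ln(1 - τN')):

# Sketch only: compares the traditional and logarithmic dead time
# corrections discussed in this topic. tau and n_obs are illustrative.
import numpy as np

tau = 3.0e-6                                  # dead time constant [s], ~Cameca
n_obs = np.array([1e4, 5e4, 1e5, 1.5e5])      # observed count rates [cps]

n_trad = n_obs / (1.0 - tau * n_obs)                # traditional (linear) expression
n_log = n_obs / (1.0 + np.log(1.0 - tau * n_obs))   # logarithmic expression

for no, nt, nl in zip(n_obs, n_trad, n_log):
    print(f"observed {no:8.0f} cps -> traditional {nt:8.0f}, logarithmic {nl:8.0f}")

At low count rates the two agree closely, but by 150K cps (observed) they diverge badly, with the logarithmic expression correcting considerably more.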
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #16 on: September 06, 2022, 02:52:08 AM »
Yeah, TAPs are scary: even a regular TAP can do ~2000 cps/nA on Si. That is comparable in intensity to Cr Ka on LPET. What is JEOL's secret sauce for getting more counts per nA?
« Last Edit: September 06, 2022, 07:54:14 AM by John Donovan »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #17 on: September 06, 2022, 07:52:03 AM »
Yeah, TAPs are scary: even a regular TAP can do ~2000 cps/nA on Si. That is comparable in intensity to Cr Ka on LPET. What is JEOL's secret sauce for getting more counts per nA?

I don't know for sure, but it is probably at least partly due to JEOL having a smaller focal circle diameter (140 mm vs. 160 mm for Cameca). That implies slightly lower spectral resolution, but better geometric efficiency (assuming the crystals are the same size).

I've been promising to post Anette's "terrifying" TAPL data and so here it is:



Now that's a whole lotta counts! 

Looking at the lower count rates, we can see the traditional and logarithmic corrected data, both at 1.1 usec (red and yellow), where the logarithmic expression very slightly over-corrects, and also the logarithmic expression after decreasing the dead time constant by 0.02 usec to 1.08 usec (cyan), which yields very constant k-ratios (for a little while!).

We can see it a little more clearly in a zoom:



We can see that at these "terrifying" count rates, even the logarithmic expression fails eventually.  The question I wish we could answer is: what physical mechanism is causing this at these high count rates, and what math might we utilize to correct for it? 

On the other hand, at these count rates (~400K cps) the dead time correction (in percent) is simply enormous, so maybe we should just be happy with improving on the traditional dead time correction by a factor of 10.    :)

However, Anette also performed some PHA scans at these various beam currents and we do see some severe pulse height depression starting around 100 nA:



So that's at least part of the problem we're seeing above.
« Last Edit: September 06, 2022, 12:31:15 PM by Probeman »
The only stupid question is the one not asked!

Brian Joy

  • Professor
  • ****
  • Posts: 296
Re: Generalized dead times
« Reply #18 on: September 06, 2022, 03:49:44 PM »
I'd like to return to my Mo Lβ3/Lα count rate dataset as an example and re-interpret a little...

While it is true that the origin of pulse pileup differs from that of dead time, corrections for the two are described by the same types of models.  For instance, correction for pulse pileup requires a model equivalent to that for an extending dead time (see work by Pommé):  N′ = N exp(−τN).  As shown by Müller (1991), the first order approximation (based on power series expansion) to the extending dead time correction is the non-extending correction, N = N′/(1 − τN′) or N′/N = 1 − τN′.  The implication of this is that, even if JEOL pulse processing circuitry is in truth subject only to pulse pileup, then the non-extending (non-paralyzable, linear) model should still be applicable at relatively low count rates (certainly below 50 kcps).  In contrast, for the more general case extending to high count rates, superposition of pulse pileup and dead time requires a more complicated treatment such as that proposed by Pommé (2008).  If a dead time is not enforced electronically, then correction for pulse processing count losses at high count rates could potentially be described by a simple model.
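To make that low count rate equivalence concrete, here is a small numerical sketch (my own illustration; the τ value is arbitrary) of the two forward models:

# Sketch: extending vs non-extending dead time forward models.
# extending:      N' = N*exp(-tau*N)
# non-extending:  N' = N/(1 + tau*N), equivalent to N = N'/(1 - tau*N')
import numpy as np

tau = 1.5e-6                                # illustrative dead time [s]
n_true = np.array([1e4, 5e4, 1e5, 3e5])     # true count rates [cps]

np_ext = n_true * np.exp(-tau * n_true)     # extending (paralyzable)
np_non = n_true / (1.0 + tau * n_true)      # non-extending (non-paralyzable)

for n, e, ne in zip(n_true, np_ext, np_non):
    print(f"true {n:7.0f} cps -> extending {e:7.0f}, non-extending {ne:7.0f}")

Below about 50 kcps the two predictions differ by well under one percent, while at 300 kcps they differ by several percent.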

In the plot below (and in all of the data plots that I’ve shown in my application of the Heinrich et al. ratio method), most of the correction and essentially all departures from linear behavior are due to one X-ray line (within the ratio).  In the case illustrated below, N’32 represents the Mo Lα count rate on channel 5/PETH, while N’12 represents the Mo Lβ3 count rate on channel 2/PETL.  For measured count rates below 200 kcps, the ratio, N’32/N’12, is greater than 20:1.  It appears that the plotted ratio (N’12/N’32), in which departure from linearity can be ascribed effectively solely to Mo Lα, is fit well by an exponential function.  The same is true for my corresponding dataset for Ti, which extends to measured count rates up to 227 kcps.  For my corresponding dataset for Si, an exponential fit works well for Si Kα count rates up to about 140 kcps (on channel 4), but the dataset is fit better as a whole by a quadratic.  I believe that the reason for this may lie in the extreme degradation of resolution in the pulse amplitude distribution, which may have contributed to irrecoverable loss of peak X-ray counts above ~140 kcps.

I should note that I’ve restricted the range over which I’ve applied the linear model to the Mo data plotted below to about 63 kcps.  This is lower than the maximum count rate at which I applied the linear model before, as later I became concerned that ratios calculated using values greater than this might display noticeable departure from linearity.  Using the linear fit, I obtain τ3 = 1.32 μs (channel 5).  Although I’ve chosen a value of τ3 = 1.30 μs for the exponential correction, the two values are essentially identical when considering propagation of counting error.  Increasing the value of τ3 produces a lower ratio value for either the exponential model or the Donovan et al. model.  Note that the exponential correction requires an iterative solution.  I'll let the plot do the rest of the talking.
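Since I mentioned that the exponential correction requires an iterative solution, here is a sketch of one way to do it (not my actual code): a simple fixed-point iteration on N = N′ exp(+τN), which converges on the physical branch N < 1/τ.

# Sketch: iteratively invert the extending model n_obs = n*exp(-tau*n)
# for the true rate n. Valid only below the model maximum 1/(e*tau).
import math

def true_rate_extending(n_obs, tau, tol=1e-10):
    if n_obs >= 1.0 / (math.e * tau):
        raise ValueError("observed rate exceeds the extending-model maximum 1/(e*tau)")
    n = n_obs                        # starting guess: no dead time loss
    while True:
        n_next = n_obs * math.exp(tau * n)
        if abs(n_next - n) <= tol * n_next:
            return n_next
        n = n_next

print(true_rate_extending(100_000.0, 1.30e-6))   # tau from the fit above; ~116 kcps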

« Last Edit: September 07, 2022, 11:55:03 PM by Brian Joy »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #19 on: September 06, 2022, 05:44:47 PM »
Holy cow, this looks interesting (and glad to see you've come over to the non-linear side!).   :)

As you say, we'll have to implement this with a Lambert W function, so give us a couple of days to code that up and try it out...
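In the meantime, for anyone who wants to experiment: solving N′ = N exp(−τN) with the principal branch of the Lambert W function gives N = −W0(−τN′)/τ. A sketch using scipy (the τ value is just illustrative):

# Sketch: closed-form inversion of the extending model via Lambert W.
from scipy.special import lambertw

def true_rate_lambertw(n_obs, tau):
    # N = -W0(-tau*N')/tau; the principal branch gives the physical root N < 1/tau
    return float((-lambertw(-tau * n_obs, k=0) / tau).real)

print(true_rate_lambertw(100_000.0, 1.30e-6))   # matches the iterative solution above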
« Last Edit: September 06, 2022, 09:27:48 PM by Probeman »
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #20 on: September 07, 2022, 10:27:23 AM »
Aurelien and I are evaluating Pommé's dead time expression and it looks to be worth implementing, but it is limited in the maximum count rate it can handle (more so than the other expressions).
 
In fact it appears to be limited to count rates around 245K cps at 1.5 usec (~JEOL) or 126K cps at 3 usec (~Cameca).  Of course it all depends on the dead time constant utilized to obtain a constant k-ratio!    :)
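That ceiling falls directly out of the exponential model itself: N′ = N exp(−τN) reaches its maximum observable rate at N = 1/τ, namely N′max = 1/(e·τ), and above that the expression simply has no solution. A quick check (sketch):

# Sketch: maximum observable count rate under the extending model, 1/(e*tau)
import math
for tau in (1.5e-6, 3.0e-6):
    print(f"tau = {tau*1e6:.1f} usec -> max observable rate = {1.0/(math.e*tau):,.0f} cps")

which prints ~245K cps at 1.5 usec and ~123K cps at exactly 3.0 usec, right around the limits quoted above.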

I'll be plotting these exponential expression graphs up soon, but in the meantime here's another interesting observation: Aurelien and I plotted up some SX100 data from a while back for Ti Ka on LPET and Si Ka on LTAP:

https://probesoftware.com/smf/index.php?topic=1489.msg11218#msg11218

https://probesoftware.com/smf/index.php?topic=1489.msg11223#msg11223

But when we plotted them both against the count rate of each primary standard we get this:



The Ti Ka data has a glitch at 60 nA, as explained earlier, because I acquired the primary standards *after* the secondary standards, so the k-ratio was constructed using the primary standard intensity from the previous beam current condition, which caused a slightly anomalous intensity when switching picoammeter ranges from below 50 nA to above 50 nA.

But still it is clear that something odd is going on because both sets of k-ratios are measured on the same spectrometer, over the same (primary standard) count range and at the same bias voltage (albeit slightly different dead time constants).  That is, why are the Ti Ka k-ratios plotting up nice and constant, while the Si Ka k-ratios are showing a dead time correction issue, albeit at fairly high count rates? 

Well Aurelien noticed that the PHA gains are quite different. So for example, here are the PHA settings for the Ti Ka k-ratios:

PHA Parameters:
ELEM:    ti ka   ti ka   ti ka   ti ka   ti ka
DEAD:     2.80    2.76    2.90    2.95    3.10
BASE:      .29     .29     .29     .29     .29
WINDOW    4.50    4.50    4.50    4.50    4.50
MODE:     INTE    INTE    INTE    INTE    INTE
GAIN:     942.    864.   1369.    818.    864.
BIAS:    1320.   1320.   1850.   1320.   1850.

And here for the Si Ka k-ratios:

PHA Parameters:
ELEM:    si ka   si ka   si ka   si ka   si ka
DEAD:     2.85    2.65    3.00    2.76    3.10
BASE:      .26     .26     .26     .26     .26
WINDOW    4.50    4.50    4.50    4.50    4.50
MODE:     INTE    INTE    INTE    INTE    INTE
GAIN:    2400.   2330.   3410.   1677.   2237.
BIAS:    1320.   1320.   1850.   1320.   1850.


This would seem to indicate that the gain setting has an effect on the dead time of the system beyond the photon coincidence effect, whereby the higher the gain, the greater the pulse pileup, and therefore the larger the dead time constant necessary?

Increasing the dead time constant using the logarithmic expression for the Si Ka k-ratios would only cause an over-correction at moderate count rates. And at these dead times and count rates the exponential expression fails...
« Last Edit: September 07, 2022, 03:22:25 PM by John Donovan »
The only stupid question is the one not asked!

Brian Joy

  • Professor
  • ****
  • Posts: 296
Re: Generalized dead times
« Reply #21 on: September 07, 2022, 11:53:01 AM »
Increasing the dead time constant using the logarithmic expression for the Si Ka k-ratios would only cause an over-correction at moderate count rates. And at these dead times and count rates the exponential expression fails...

Let me emphasize that the exponential expression can only work for cases in which the correction is due to an extending dead time or pulse pileup.  If an enforced, non-extending dead time is present (as in the Cameca pulse processing circuitry), then a more involved treatment such as that of Pommé (2008) must be applied.  Also, keep in mind that SEM Geologist has modeled the latter situation at high count rates using Monte Carlo simulation.
« Last Edit: September 07, 2022, 03:22:41 PM by John Donovan »
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #22 on: September 07, 2022, 03:01:04 PM »
Very interesting...

A systematic and 100% exact answer can be hard to get.
There is one outstanding huge problem with peeking into the Cameca WDS board workings. Most VME boards can be exposed outside the cabin using an extender board (a board with parallel traces, with three 96-pin sockets on one side and three 96-pin plugs on the other), so that the VME board can be worked on completely exposed outside the electronics cabin. I have troubleshooted many problems with other boards that way, since I could watch live how the signals evolved along the path and where they failed. However, the WDS VME boards, whether of the new or old type, won't boot if connected to the VME backplane with an extender. Probeman, I think you have something interesting going on - maybe the gain changes the pulse width? Were both PHA peaks centered at exactly the same position (2.5 V)? Maybe this is where the previously reported different dead times for different elements come from.
If I remember correctly, you have the new WDS board. The op-amps used for signal handling and gain are high-speed AD847s (I have a picture if you want to see it). With a slew rate of 300 V/µs, they are surely not capable of broadening a peak at higher gain. No wait, there is this AD7943 for setting the gain:
https://www.analog.com/media/en/technical-documentation/data-sheets/AD7943_7945_7948.pdf
There is an interesting thing at figure 12 in the datasheet: this multiplication (gain) works differently at low gain and at high gain - but what exactly does that frequency response mean? Could it broaden the pulses? This actually could be an additional source of PHA shift. I found lately (and have already posted somewhere here) that a higher gain and lower bias can give less PHA shift than the bias/gain set automatically, which is interesting, as the datasheet of that multiplying chip partly explains that observation.

But then a higher gain implies it should behave more linearly, yet in your graph the high gain analyses are misbehaving and the low gain ones are working more consistently.

Unless these had differently centered PHA... wait, actually even if they were both centered at exactly 2.5 V they would behave a bit differently, as Ti Ka will have an Ar escape peak and Si Ka won't... and then again we would expect the Ar escape peak to be cut out at the PHA baseline (I still need to check whether that baseline filter in integral mode is the case for the Cameca PHA electronics), and so we would expect the Ti measurements to derail at high count rates - but it is Si which derails.

I guess Ti would derail in the same manner, probably just a bit further to the right, at count rates not covered by the experiment. Could this difference be caused by the ~10% of counts in the Ar escape peak? I think the most probable place for additional count loss is the pulse-hold chip, which naturally delays the signal and has a pretty slow slew rate. The slew rate is acceptable while it follows the signal, but after holding the amplitude (it depends very much on how it is implemented: is the hold released after the ADC reads the value, or is it kept for the full set dead time?) it may not drop back to the baseline (or significantly below the top of the pulse which needs to be measured). In such a situation the tandem of comparator and pulse-hold chip could miss the following pulse even after the dead time blanking is lifted.

What is the picture on the other spectrometers? Looking at such big gain differences between the 1st, 2nd, and 4th spectrometers, I guess you set your spectrometer biases at the same current, not at the same count rate (i.e. 10 kcps)?

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #23 on: September 07, 2022, 04:04:27 PM »
Also, keep in mind that SEM Geologist has modeled the latter situation at high count rates using Monte Carlo simulation.
Indeed, the Monte Carlo sim is based only on pulse pileup and a deterministic (integer) blanking dead time, and nothing else. It kinda works only for the SXFive (it should work for the new generation of Cameca WDS boards for the SX100 as well) and is not sufficient for the old WDS boards, which have some additional choke points (analog signal multiplexing, which takes 1 µs to switch between sources; because of the multiplexing, high count rates will actually be choked differently depending on how busy the other two spectrometer signals sharing the multiplexer to the ADC are). If anyone still uses the old VME boards on an SX100, throwing out the old WDS board and getting the new generation is the only upgrade that really brings important changes (particularly if any large crystals are fitted and/or the differential PHA method is being used). I would do it immediately if we had the funds for it on our SX100. The new board has no more analog signal multiplexing to shared ADCs - every spectrometer signal has its own pipeline and its own ADC, and only the ADC-FPGA bus is shared, which is digital. Digital multiplexing or bus sharing can switch sources orders of magnitude faster than analog.

« Last Edit: September 08, 2022, 10:21:26 AM by sem-geologist »

jlmaner87

  • Post Doc
  • ***
  • Posts: 10
Re: Generalized dead times
« Reply #24 on: September 08, 2022, 03:09:53 PM »
Speaking of Monte Carlo pulse pileup modelling, a quick Google Scholar search provided several interesting reads. I've attached one that may spark some conversation.

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #25 on: September 09, 2022, 01:15:42 AM »
Speaking of Monte Carlo pulse pileup modelling, a quick Google Scholar search provided several interesting reads. I've attached one that may spark some conversation.

Thanks @jlmaner87. I am also aiming at something like that; however, my current simulation is much simpler. The pulse shape presented in the paper is different from that observed on Cameca WDS: the paper presents monopolar pulses, whereas the WDS has bipolar pulses. On Cameca WDS such a monopolar pulse (albeit much shorter, with a shaping time of only 250 ns) is differentiated a second time, producing the bipolar pulse. That has a few nice outcomes: (1) the average DC bias (with respect to common ground) is close to 0 V, so there is no current flow and the signaling is AC only; (2) there is more room for closely packed pulses without saturating the amplifier(s), as the differentiation "narrows the pulse in half". Due to the bipolar pulses and the higher complexity, I initially did not attempt the simulation with detailed pulse shapes. Revisiting it recently, I was quite surprised how well the current coarse-grained (with intervals of 1 µs and pulses aligned to a 1 µs grid) and thus over-simplified simulation could predict the count rates, in particular when changing the hardware-settable (integer) dead time.

The catch is that integral mode actually doesn't care at all about pulse pileup (or colliding galaxies, or "coincident photons" as probeman calls it): it makes absolutely no difference whether other pulse(s) arrive 4 ns after the counted pulse (which is technically a pulse pileup), or 1 µs, or 2.1 µs after it (which are technically blanked pulses), when we have set a 3 µs (integer blanking) dead time. In either case those pulses will be ignored and only the single first event will be registered. And yes, I was arguing the contrary in some post a month ago, saying that the exponential equation won't work because it does not account for two separate processes - I was partially wrong: those two processes are ignored by integral mode without any distinction. However, the pulse pileup process would play a crucial role in PHA differential mode, and I am sure the log equation won't work there. How can that be? Let's look at the problem from a completely different (at first glance very bizarre) perspective: in integral mode we actually do not measure the count rate, but the average time passed while the counter is armed but no count is registered - we measure pulse-free time. That will be large at low count rates and will diminish non-linearly to small values with increasing count rate. That diminishing will follow a (reversed) exponential law and will approach 0, but should never reach it. Thus in integral mode the counting is non-extending and non-paralyzable. Were it extending, it would result in paralyzable behaviour. And that is basically why the logarithmic equation of probeman et al. kinda works in integral mode up to 450-500 kcps.
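If anyone wants to convince themselves of this non-paralyzable behaviour, here is a toy sketch - far cruder than my board simulation: ideal Poisson arrivals and a fixed 3 µs blanking restarted only by registered pulses, nothing else:

# Toy sketch: non-extending (blanking) counting of Poisson arrivals.
# Registered pulses restart the blanking; blanked pulses do not extend it.
import numpy as np

rng = np.random.default_rng(42)
TAU, T_TOTAL = 3.0e-6, 2.0      # blanking time [s], simulated time [s]

def observed_rate(n_true):
    gaps = rng.exponential(1.0 / n_true, int(n_true * T_TOTAL * 1.2) + 100)
    arrivals = np.cumsum(gaps)
    arrivals = arrivals[arrivals < T_TOTAL]
    counted, last = 0, -1.0
    for t in arrivals:
        if t - last >= TAU:     # counter is live again: register, restart blanking
            counted += 1
            last = t
    return counted / T_TOTAL

for n in (50_000, 200_000, 500_000):
    print(f"true {n:7d} cps -> simulated {observed_rate(n):7.0f} cps, "
          f"non-extending model {n / (1.0 + TAU * n):7.0f} cps")

The simulated rates land on the non-extending prediction N' = N/(1 + τN) and never paralyze: the observed rate keeps growing toward the hard limit 1/τ.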

I think one important note which should be added to the help, the manuals, and to PfS itself is that the log dead time correction should be used only with integral PHA mode, and should not be used with diff mode. (I mean in particular when diff mode uses a moderately sized window to pass only some well-defined distribution, not the "universal" wide diff window, which would reveal deterioration only at very high count rates.)
« Last Edit: September 09, 2022, 06:18:18 AM by sem-geologist »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #26 on: September 09, 2022, 08:16:18 AM »
Thus in integral mode the counting is non-extending and non-paralyzable. Were it extending, it would result in paralyzable behaviour. And that is basically why the logarithmic equation of probeman et al. kinda works in integral mode up to 450-500 kcps.

And count rates up to 300K to 400K cps are all we are claiming it is accurate to!  But that is still 10x better than the traditional expression! 

I'm beginning to think that on the Cameca instrument the photon coincidence effects dominate up to these 300K to 400K cps levels, but then *depending on the PHA gain* these pulse pileup effects become more dominant.  As previously shown here:

https://probesoftware.com/smf/index.php?topic=1489.msg11233#msg11233

I think one important note which should be added to the help, the manuals, and to PfS itself is that the log dead time correction should be used only with integral PHA mode, and should not be used with diff mode. (I mean in particular when diff mode uses a moderately sized window to pass only some well-defined distribution, not the "universal" wide diff window, which would reveal deterioration only at very high count rates.)

Absolutely. And in fact this is noted in the Constant K-Ratio procedure attached below in point #6.
The only stupid question is the one not asked!

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #27 on: September 09, 2022, 08:35:49 AM »
Unless these had differently centered PHA... wait, actually even if they were both centered at exactly 2.5 V they would behave a bit differently, as Ti Ka will have an Ar escape peak and Si Ka won't... and then again we would expect the Ar escape peak to be cut out at the PHA baseline (I still need to check whether that baseline filter in integral mode is the case for the Cameca PHA electronics), and so we would expect the Ti measurements to derail at high count rates - but it is Si which derails.

So, I generally run a PHA scan at one of the low count rates and another at the highest count rate, just to make sure that the PHA peak is still relatively well centered, as was shown here for Mn Ka:

https://probesoftware.com/smf/index.php?topic=1489.msg11213#msg11213

But in fact, we don't center the PHA peak at low count rates, because pulse height depression will shift the PHA peak to the left.  On Cameca instruments we adjust the PHA gain to place the PHA peak somewhat to the right of the PHA scan, as described in the pdf in the post above and shown here:

https://probesoftware.com/smf/index.php?topic=1466.msg11008#msg11008

I think we may have neglected to emphasize how important it is to attempt to keep the PHA peak relatively well centered in the PHA range.  As I showed with Anette's data in this post, the JEOL instrument can show severe pulse height depression at high count rates:

https://probesoftware.com/smf/index.php?topic=1489.msg11230#msg11230

Basically, at a relatively low count rate, adjust the gain until the PHA peak is around 3 to 3.5 volts on a Cameca instrument, or around 5 to 6 volts on a JEOL instrument.  But then also perform a PHA scan at the highest count rate, just to make sure that the PHA peak is still relatively well centered.

What is the picture on other spectrometers? Looking to such big gain differences between 1,2 and 4th spectrometers I guess You set your biases on spectrometers at same current, not at same count rate (i.e. 10kcps)?

Yes.
« Last Edit: September 09, 2022, 09:48:21 AM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #28 on: September 09, 2022, 09:16:57 AM »
I'm beginning to think that on the Cameca instrument the photon coincidence effects dominate up to these 300K to 400K cps levels, but then *depending on the PHA gain* these pulse pileup effects become more dominant.
You mean they are not the same thing? And that a small time span (4-10 ns) is more important at low current than large (1 µs sized) pulses at high current? If we are to distinguish these at all, it should at least be the other way around; as proposed it makes no logical sense. Also, the bending at very high count rates present in your plots is due to the sluggishness of the pulse-catching mechanism rather than to pileup. Let me explain how pulses are detected.

Looking at which chips are present on the boards, it is clear that counting (integral mode, or pulse sensing) uses the classical tandem of comparator and pulse-hold chip known from electronics textbooks.
The amplified (by the gain multiplier) and buffered signal (with pulses) is fed into the tandem of comparator and pulse-hold chip. The pulse-hold chip has two functions: 1) holding and outputting the captured voltage level when its hold pin is triggered; 2) delaying the signal by a fraction of a µs when its hold function is not triggered. So the pulse-hold chip has one signal input, while the comparator has two inputs. The raw pulse signal goes to both chips; the second input of the comparator is the delayed output of the pulse-hold chip (the same signal goes to the ADC). The comparator thus detects a pulse when its two inputs differ by some set voltage offset (it can detect the rising and falling edges of a pulse). It probably then triggers the FPGA, and the FPGA then activates the hold pin of the pulse-hold chip. Everything is nice up to this point; however, to detect the next pulse the holding function of the pulse-hold chip needs to be deactivated, and the pulse-hold chip can't instantly go back to a very low voltage - it drops down quite sluggishly, with a delay, and thus the comparator can be blind to a pulse even when the FPGA is listening for a pulse trigger from the comparator (i.e. after the set 3 µs has passed). This is where pulse pileup comes in: if the held pulse was a pileup at twice or more the voltage of a normal pulse, then after the hold function is released the output cannot drop fast enough below the voltage of a consecutive normal pulse (which would also be much lower than at normal, low count rates, due to baseline drift). Such situations would increase drastically with increasing pulse density (count rate).

There are a few important unknown details:
* Is Cameca pulse sensing triggered on the rising edge of the pulse, or on the falling edge? (In the first case, knowing the fixed shaping time, it is very easy to catch the peak maximum value; in the second case, when pileups are present, the captured voltage could be significantly off from the real absolute peak voltage.)
* Is the "hold" pin of the pulse-hold chip kept activated for the whole of the set integer dead time, or is it released as soon as the ADC reads the value?

Should I animate these principles?
« Last Edit: September 09, 2022, 09:29:56 AM by sem-geologist »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #29 on: September 09, 2022, 09:31:14 AM »
I'm beginning to think that on the Cameca instrument the photon coincidence effects dominate up to these 300K to 400K cps levels, but then *depending on the PHA gain* these pulse pileup effects become more dominant.
You mean they are not the same thing? And that a small time span (4-10 ns) is more important at low current than large (1 µs sized) pulses at high current? If we are to distinguish these at all, it should at least be the other way around; as proposed it makes no logical sense. Also, the bending at very high count rates present in your plots is due to the sluggishness of the pulse-catching mechanism rather than to pileup.

OK, let's call it "pulse catch mechanism sluggishness".   I have zero knowledge of the electronic mechanisms (and to be honest I really am not interested in all the gritty details!   :D  ), I'm just trying to model the dead time effects mathematically (whatever they are) so we can obtain constant k-ratios for quantitative analysis on both JEOL and Cameca instruments!   :)

But why do you think the constant k-ratio plots for Ti Ka are fine up to ~180K cps, while the Si Ka plots (also up to ~180K cps, but at higher PHA gain settings) are not corrected properly using the logarithmic expression?

By the way, at these count rates (and Cameca dead times) the exponential expression fails (mathematically) very quickly so that is not an option.  See the next post.
« Last Edit: September 09, 2022, 09:49:12 AM by Probeman »
The only stupid question is the one not asked!