Author Topic: Generalized dead times  (Read 4602 times)

Brian Joy

  • Professor
  • ****
  • Posts: 296
Generalized dead times
« on: August 31, 2022, 08:27:35 PM »
I’ve attached a paper by Jörg Müller, who has written (or wrote) extensively on the subject of dead time correction.  In the paper, he presents a generalized model that can be used to correct count rates for both Geiger-Müller and proportional counters, regardless of whether the dead time is “natural” or is set electronically.  He argues that most cases may be described as intermediate between non-extending (non-paralyzable) and extending (paralyzable) behavior (see Figure 2).  A criticism of his approach is that he does not account explicitly for pulse pileup but rather considers it a contribution to the extendible dead time.  This is a source of significant confusion in the literature (look for papers by Pommé, for instance).  The Willis correction function is consistent with Müller’s equation 5 (non-extendible model) truncated after the second order term but is not sufficiently accurate.
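For those who don't have the paper in front of them, the two limiting cases are conventionally written as follows (my own summary in textbook notation, not Müller's exact equations), with N the true count rate, N' the observed count rate, and τ the dead time:

non-extending (non-paralyzable):   N' = N/(1 + Nτ),   equivalently   N = N'/(1 - N'τ)
extending (paralyzable):           N' = N·exp(-Nτ)

Müller's generalized model describes counters whose behavior falls somewhere between these two limits.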
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #1 on: August 31, 2022, 09:24:35 PM »
The Willis correction function is consistent with Müller’s equation 5 (non-extendible model) truncated after the second order term but is not sufficiently accurate.

But still more accurate than the traditional expression that the JEOL and Cameca software utilize!  😁

We look forward to reading the paper. Our current efforts to improve upon the traditional dead time correction are simply based on better modeling of the probabilities of multiple photon coincidence. This results in a 10x improvement in the range of the dead time correction accuracy (from tens of thousands of cps to hundreds of thousands of cps).

Expressions that deal with hardware/electronics effects at even higher count rates should be investigated, but a 10x improvement is still a 10x improvement.

The two term Willis expression is more accurate than the traditional expression as you have already acknowledged, and the six term expression further improves accuracy. Which is exactly why we kept going and eventually integrated it with the logarithmic expression! In summary the logarithmic expression only handles the probabilities of photon coincidence, but it’s a damn good start!

In the meantime we’ve been looking at constant k-ratio data from a number of labs over the last week, and it is quite amazing to see how well this method can reveal subtle instrumental artifacts.  Have you had a chance to try it on your instrument? There’s a nice description of the process here:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100
« Last Edit: September 01, 2022, 07:08:26 AM by Probeman »
The only stupid question is the one not asked!

Brian Joy

  • Professor
  • ****
  • Posts: 296
Re: Generalized dead times
« Reply #2 on: September 01, 2022, 09:45:48 PM »
I’ve attached a paper by S. Pommé in which the author expands on the treatment of Müller through explicit consideration of pulse pileup in conjunction with an electronically imposed dead time.  Although pulse pileup is a more severe problem for the JEOL WDS pulse processing circuitry, it cannot be ignored -- especially at high count rates -- even for the case in which a dead time is enforced electronically.

It can be a little frustrating to find papers that contain correction models applicable to proportional counters, as much more has been published on the behavior of Geiger-Müller counters.  I hope that these two papers will generate some thought and discussion, particularly amongst those who are interested in doing quantitative work at high count rates.
Brian Joy
Queen's University
Kingston, Ontario
JEOL JXA-8230

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #3 on: September 02, 2022, 03:29:27 AM »
It can be a little frustrating to find papers that contain correction models applicable to proportional counters, as much more has been published on the behavior of Geiger-Müller counters.  I hope that these two papers will generate some thought and discussion, particularly amongst those who are interested in doing quantitative work at high count rates.

Exactly! This is really annoying, as proportional counters are often treated as a mere subsection under Geiger-Müller (GM) counters in larger research works. That is where the nonsense about dead time occurring inside the proportional counter itself originates, and it keeps propagating decade after decade (it has even made its way into various books on microanalysis). The mistake most probably comes from the close physical similarity between the GPC and the GM tube; the fact that the GPC has completely different working principles, which are actually much closer to those of solid state detectors (e.g. the SDD) than to the GM tube, gets overlooked. The popular chart of gas chamber mode versus applied potential also suggests that these modes are similar, with a smooth transition from one to the other. That picture is not accurate, however: between the proportional/reduced-proportional modes and the GM mode it omits the SQS (self-quenched streamer) regime. A GM discharge is an unquenched streamer, meaning a streamer that stretches from cathode to anode, bridges them, and produces a complete discharge, whereas the proportional and reduced-proportional modes are dominated by Townsend avalanches. An SQS is a streamer like the GM discharge, but it quenches (stops) itself about midway between cathode and anode, producing a pulse of extreme amplitude (incredible P/B ratio, no pre-amplification needed, it can be measured directly) with some substantial dead time and a significant, though far from complete, discharge of the charge stored on the cathode and buffer capacitor, unlike the full discharge of a GM tube.

What makes solid state detectors similar to proportional counters is that incident events (the photoelectrons and the resulting amplified currents) merely scratch the charge reserve of the detector's cathode, whereas a GM tube discharges it completely. That full discharge is the main reason for the GM tube's dead time of a few hundred µs: discharging the capacitors from full to zero and then recharging them from zero to full takes a long time. Proportional counters have no such full discharge/recharge cycle, because there is no physical mechanism that could produce such a huge discharge on the cathode (and its attached reserve capacitors); we barely "scratch the surface". If a proportional counter could produce such a deep discharge, we would need no charge-sensitive preamplifiers, since the resulting voltage drop would be huge and could be detected directly. That is not the case, which is exactly why our proportional counters need preamplifiers: a single event discharges only about a millionth of the charge stored on the cathode, the drop and recovery take just a few ns, and a drop of a few microvolts (out of 1-2 kV) cannot meaningfully change the potential field or its ability to attract other, coincident photoelectrons at the same time.

Proportional counters, at least those used in EPMA, have essentially no dead time of their own even at the most extreme achievable count rates, not a chance: that would require not 2, 3, 9, or 99, but something like a million coincident X-rays within a few ns, which is simply not possible. (Maybe synchrotron radiation could deliver such an incident rate, but not EPMA.) All the dead time we observe in EPMA WDS comes purely from electrical signal losses somewhere further down the pipeline, after the GPC.

BTW, it is possible to force a solid state detector to "behave" like a GM tube (at least for a single event, depending on the material), but it is impractical, as a cracked or melted solid does not heal after such a deep discharge, whereas the GM gas returns to its initial state without a problem (it "heals up" after the event).
« Last Edit: September 02, 2022, 08:45:40 AM by sem-geologist »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #4 on: September 02, 2022, 11:35:10 AM »
Aurelien and I finally had a chance to look over the paper by Müller and we found it interesting, though disappointing in that it is an entirely theoretical paper with no data presented to evaluate any of the expressions. He does state "only the outcome of several studies underway will tell whether the suggested expressions are indeed valid".

We do not see any follow up papers from him in a Google Scholar search however, though we did find this paper from two other authors (An experimental test of Müller statistics for counting systems with a non-extending dead time), which we have not yet had a chance to look over:

https://doi.org/10.1016/0029-554X(78)90544-X

So that might be worth a look.

I’ve attached a paper by Jörg Müller, who has written (or wrote) extensively on the subject of dead time correction...  The Willis correction function is consistent with Müller’s equation 5 (non-extendible model) truncated after the second order term...

Unfortunately equation 5 (truncated or not) is not related to the Willis expression nor to our "extended Willis" multiple term expression. In our case the coefficients are 1/2, 1/3, 1/4... and in the Müller paper they are 1/2, 2/3, 9/8...  In fact, Aurelien thinks the last term 9/8 is a typo and should actually be 3/8 to be mathematically consistent.
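A quick numerical aside (my own sketch, not anything from the manuscript): a correction factor whose higher-order coefficients run 1/2, 1/3, 1/4, ... is just the Taylor series of -ln(1 - x)/x, so it has a natural logarithmic closed form. A few lines of Python show how quickly the truncated partial sums approach that closed form at realistic values of x = N'·τ (how the published expression actually arranges these terms is in the manuscript, not here):

import math

def series(x, n_terms):
    # partial sum of 1 + x/2 + x^2/3 + x^3/4 + ... (coefficients 1/2, 1/3, 1/4, ... as noted above)
    return 1.0 + sum(x**k / (k + 1) for k in range(1, n_terms))

for x in (0.05, 0.15, 0.30):                 # x = N' * tau (observed rate times dead time)
    closed_form = -math.log(1.0 - x) / x     # logarithmic closed form of the full series
    print(f"x={x:.2f}  2-term={series(x, 2):.5f}  6-term={series(x, 6):.5f}  log={closed_form:.5f}")

At x = 0.3 the six-term partial sum already agrees with the logarithmic form to about one part in ten thousand, which is presumably why extending the series eventually leads to the closed logarithmic expression.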

The Willis correction function is consistent with Müller’s equation 5 (non-extendible model) truncated after the second order term but is not sufficiently accurate.

When we first read this we thought Brian was referring to something stated in the Müller paper, but since it's an entirely theoretical paper, we could not understand why Brian would say that. And then we realized that he is just repeating his "insufficiently accurate" claim from his previous posts.  And to be honest, though I (and the co-authors) have tried, we've never been able to make sense of his claim.

But then, a few hours after one of our recent Zoom meetings with the manuscript co-authors, a light bulb finally went off in my head and I think I realized where Brian went wrong in his data analysis.  The explanation (I hope) will provide some useful information to all, and I must say I think it's quite appropriate that this occurs in this topic on generalized dead times, which he created, because the mistake he made is related to how we interpret these various effects that all fall under the heading of the (generalized) "dead time" correction.

So let's start with Brian's "delta" plot that he keeps pointing to and expand it a little into the main area of interest:


 
We see the red circles, which are the traditional linear expression, and the green circles, which are the logarithmic expression, both using a dead time constant of 1.07 usec (I am assuming, since he doesn't specify the value for the traditional expression). And we see clearly that the logarithmic expression gives identical results at low count rates (as expected) and more constant k-ratios at higher count rates than the traditional expression (as he has already agreed).  So far so good. 

But then he does something very strange.  He proceeds to plot (green line) the logarithmic expression using a dead time constant of 1.19 usec!   Why this value?   And why did he not also plot the traditional expression using the same 1.19 usec constant?  Because in both cases the result would be a severe over correction of the data!  Why would someone do that? 

I'm just guessing here, but I think he thought: OK, at the really high count rates even the logarithmic expression isn't working perfectly, so I'll just start increasing the dead time constant to force those really high count rate values lower.

But, as we have stated numerous times, once the dead time constant has been adjusted using the traditional linear expression (or obtained from the JEOL engineer), one should simply continue to use that value; or, in the case of very high count rates where a very small over correction of the data with the log expression might be noted, one might slightly decrease the dead time constant by 0.02 or 0.03 usec.  But it should never be increased, since that produces an over correction of the data at lower count rates.

Let's now discuss the underlying mechanisms. As both BJ and SG have noted, there are probably several underlying mechanisms that are described by the general term "dead time".  We maintain that some of these effects (above 50K cps) are due to multiple photon coincidence, and that above 300K or 400K cps other hardware/electronic effects become more dominant, as BJ and SG have been discussing.  Why do I say this?  Because at these extremely high count rates, corrections based on Poisson statistics alone just don't make any difference.  But again, for count rates under say 200 to 300 or even 400K cps, the new expressions help enormously.
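To put some rough numbers on "multiple photon coincidence" (a back-of-the-envelope Poisson sketch, assuming a nominal dead time of 1.3 usec purely for illustration): the chance that at least one additional photon arrives within one dead time window is 1 - exp(-Nτ), and the chance that two or more arrive is 1 - exp(-Nτ)(1 + Nτ).

import math

tau = 1.3e-6                            # assumed dead time in seconds (nominal, for illustration only)
for rate in (50e3, 100e3, 400e3):       # true count rates in cps
    x = rate * tau                      # mean number of photons expected during one dead time window
    p_ge1 = 1 - math.exp(-x)            # P(at least one extra photon in the window)
    p_ge2 = 1 - math.exp(-x) * (1 + x)  # P(two or more extra photons in the window)
    print(f"{rate/1e3:5.0f} kcps:  P(>=1) = {p_ge1:.1%}   P(>=2) = {p_ge2:.1%}")

With these assumed numbers the multi-photon terms are already a few percent by 50K cps and roughly 10% by 400K cps, which is exactly the regime where the extra Poisson terms matter; above that, as noted, other hardware effects start to take over.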

Here is a plot showing the traditional, Willis and log expressions for Anette's Ti PETL data (originally 1.32 usec from the JEOL engineer, but then adjusted down slightly to 1.29 usec):



You will note that the k-ratios are increasingly constant as we go from the traditional expression (which only deals with single photon coincidence) to the two term Willis expression (which deals with two photons coincident with a single incident photon) to the log expression (which deals with any number of photons coincident with a single photon).  However, and this is a key point, you will note that at some sufficiently high count rate even the logarithmic expression fails to correct properly for these dead time effects.

If we then attempt to force the dead time constant to correct for these extremely high count rates (by arbitrarily increasing the dead time constant), we are simply attempting to correct for other (non-Poisson) dead time effects as seen here, which produces an over correction just as Brian saw:


 
Note the over correction after the dead time was arbitrarily increased from 1.29 usec to 1.34 usec (red symbols).

This should not be surprising. All three of these expressions are only an attempt to model the dead time as mathematical (Poisson) probabilities. The traditional linear method was a fine approximation back when calculations were done on slide rules. Now that we have computers, I say let's also account for the additional Poisson probabilities from multi-photon coincidence. 

I know nothing about WDS pulse processing hardware/electronics, but let me now speculate by showing Anette's data plot with some annotations:
 


I am proposing that while these various non-linear dead time expressions have allowed us to perform quantitative analyses at count rates roughly 10x greater than were previously possible, at even higher count rates (>400K cps) we start to run into other non-Poisson effects (from limitations of the hardware/electronics) that may require additional terms in our dead time correction, as proposed by Müller and others.  I suspect that these additional hardware-related terms may require careful hardware-dependent calibration or even, as SG has proposed, new detectors and/or pulse processing electronics.

I welcome any comments and discussion.
« Last Edit: September 02, 2022, 02:43:53 PM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #5 on: September 03, 2022, 03:02:02 AM »
A very relevant question for JEOL probe users: do JEOL probes have an integral mode (one that ignores the PHA filtering)? What about the most recent JEOL models? I ask because the last PHA plots I saw from Brian showed no background distribution, only the pulse distribution. On Cameca instruments, integral mode uses only the pulse-sensing part of the electronics, which can sense pulses even when the signal baseline drifts to negative voltage (at very high count rates), whereas PHA (diff mode) would filter such pulses out. I think the log dead time correction can work consistently to well beyond 500 kcps (input rate) on Cameca instruments.
Look at this picture (again):

Pulse nr. 3 would be recognised in integral mode but rejected in diff mode. That could be one of the additional processes responsible for the missing counts at high count rates in Anette's dataset. There are other possibilities, but this is the most likely one to kick in on JEOL at such medium count rates of half a million cps.

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #6 on: September 03, 2022, 08:21:34 AM »
A very relevant question for JEOL probe users: do JEOL probes have an integral mode (one that ignores the PHA filtering)? What about the most recent JEOL models?

Yes, JEOL instruments have "integral" mode.  All models of JEOL instruments have both integral and differential mode.  In integral mode, only the baseline filter is applied.

All the constant k-ratio data I have shown in the last two months (including Anette's) was acquired using integral mode.  At the very beginning of the constant k-ratio topic I did perform some differential mode acquisitions on my Cameca, but I switched to integral after you chastised me for using differential mode!     :D

When I ran some of my Ti Ka k-ratios on TiO2 and Ti metal, I checked the PHA distributions at both ends of the acquisition, first at 10 nA:



and then at 200 nA:



just to be sure that they weren't being clipped by pulse height depression.  On this spectrometer we were getting around 1300 cps/nA, so not too hot: at 10 nA that would be 13K cps and at 200 nA that would be 260K cps.   I'm actually pretty impressed at how little PHA shift there was going from 13K cps to 260K cps using the same PHA settings (baseline = 0.3, gain = 800, bias = 1320).

To all: please note that the PHA peak shifts lower (to the left) at higher beam currents, so for the constant k-ratio acquisition we adjust the PHA at low beam current so that the peak sits to the right of the center of the PHA scan region.  Normally of course we adjust the PHA peak to the left of center, because we usually perform our peak scans and PHA scans on a standard with a higher concentration of the element than our unknowns, and as we go to lower intensities in our unknowns the peak shifts to the right, so we need to leave room for that shift.  But for the constant k-ratio acquisitions (assuming we tune up at low beam currents), the PHA peak shifts to the left, so we start higher in the PHA distribution to avoid clipping the peak at high count rates.  If this doesn't make sense, please ask, because this is an important point about EPMA!

The above PHA scans were acquired using the normal Cameca MCA acquisition, but I can also acquire JEOL style PHA scans by scanning the PHA baseline on the Cameca, which often produces a much higher energy resolution PHA scan.   I will try to get to that also.

However, Anette recently sent me another "terrifying" data set at even higher (> 1M cps!) count rates of SiO2/Si k-ratios on her JEOL TAPL crystal, and there we see some problematic PHA shifting. I will post her new data as soon as I get a chance.   Suffice to say, we now realize that we need to modify the PHA settings as we ramp up to crazy high count rates, to prevent pulse height depression from cutting off some of the PHA peak.
« Last Edit: September 03, 2022, 08:50:11 AM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #7 on: September 03, 2022, 08:43:16 AM »

Yes, JEOL instruments have "integral" mode.  All models of JEOL instruments have both integral and differential mode.  In integral mode, only the baseline filter is applied.

And that is not a true integral mode, but a "pseudo" integral mode, and that bites at high count rates because the signal baseline (the average base of the pulses, not the lower threshold of the PHA) shifts below 0 V. With such a PHA lower filter (or "baseline" as you call it, i.e. the lower threshold value of the PHA), the 3rd pulse in the oscilloscope snapshot I posted would be rejected in such a pseudo-"integral" mode. A real integral mode (Cameca, in hardware) would accept that pulse even if its peak were shifted completely below 0 V. True integral mode is completely resilient against any PHA shift (which in reality is the signal baseline moving well below 0 V due to the increased density (count rate) of pulses).
« Last Edit: September 03, 2022, 08:52:06 AM by John Donovan »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #8 on: September 03, 2022, 08:55:34 AM »

Yes, JEOL instruments have "integral" mode.  All models of JEOL instruments have both integral and differential mode.  In integral mode, only the baseline filter is applied.

And that is not a true integral mode, but a "pseudo" integral mode, and that bites at high count rates because the signal baseline (the average base of the pulses, not the lower threshold of the PHA) shifts below 0 V. With such a PHA lower filter (or "baseline" as you call it, i.e. the lower threshold value of the PHA), the 3rd pulse in the oscilloscope snapshot I posted would be rejected in such a pseudo-"integral" mode. A real integral mode (Cameca, in hardware) would accept that pulse even if its peak were shifted completely below 0 V. True integral mode is completely resilient against any PHA shift (which in reality is the signal baseline moving well below 0 V due to the increased density (count rate) of pulses).

If you already knew, why the heck did you ask?     :)

You may be correct about these details, but based on Anette's constant k-ratio data from her new JEOL instrument we can obtain consistent k-ratios up to around 400K or 500K cps, so pretty darn good.

Have you had a chance to acquire some constant k-ratios on your instrument(s)?   
« Last Edit: September 03, 2022, 09:29:49 AM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #9 on: September 03, 2022, 09:26:06 AM »
If you already knew, why the heck did you ask?     :)

You may be correct about these details, but based on Anette's constant k-ratio data from her new JEOL instrument we can obtain consistent k-ratios up to around 400K or 500K cps, so pretty darn good.

I did not know whether or how integral mode works on JEOL. Thanks for sharing the info.

Yes, up to 400 kcps or 500 kcps this will work in this pseudo integral mode. Using:

S_b => baseline of the signal
Ph => absolute pulse height measured from the signal baseline
P0 => pulse height relative to 0 V (common ground, with a few mV of fluctuation)
PHA_L => the PHA baseline, i.e. simply the lowest PHA threshold

a pulse will be counted in such "integral" mode if P0 > PHA_L, so the problem won't show up until the baseline drops below PHA_L - Ph. At low count rates the average S_b stays close to 0; as the count rate and pulse density increase, S_b starts shifting to negative values, and at around 400 kcps parts of S_b begin to fall below PHA_L - Ph. That is why the trend departs above 400 kcps.
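If it helps, the acceptance test can be sketched in a couple of lines of Python (the voltages are made up, only the inequality matters):

# Toy sketch of the "pseudo integral" acceptance test described above (made-up voltages).
def counted(S_b, Ph, PHA_L):
    P0 = S_b + Ph          # pulse apex measured against the 0 V ground, not the local baseline
    return P0 > PHA_L

PHA_L = 0.3                # assumed lowest PHA threshold (V)
Ph    = 2.0                # assumed pulse height above the local signal baseline (V)

for S_b in (0.0, -0.5, -1.0, -1.8):   # baseline sagging further negative as count rate rises
    print(f"S_b = {S_b:+.1f} V  ->  counted: {counted(S_b, Ph, PHA_L)}")

With these made-up numbers the pulse is silently lost once S_b drops below PHA_L - Ph = -1.7 V, even though it is a perfectly good pulse relative to its own baseline; a true hardware integral mode would still count it.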

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #10 on: September 03, 2022, 10:46:29 AM »
Yes, up to 400kcps, or 500kcps that will work in this pseudo integral mode as:

S_b => baseline of signal
Ph => absolute pulse height from baseline of signal
P0 => pulse height relative to 0V (common ground with few mV fluctuations)
PHA_L => PHA baseline or simply lowest PHA threashold

pulse will be counted at such "integer" mode if P0 > than PHA_L;
so the problem wont show in untill baseline drops below PHA_L - Ph. So at low count rates average S_b will be close to 0; increasing count rate and count density this S_b will start shifting to negative values and so at 400 kcps some parts of S_b will start to be lesser than PHA_L - Ph. That is why trend departs from 400kcps.

I think you might be on to something!

I had stopped looking at my SX100 constant k-ratio data from a while back because at that time I had not yet appreciated the importance of acquiring the primary standard prior to the secondary standard (at each beam current). That way, when the standard intensity drift correction is turned off, the primary standard utilized in the k-ratio is always using the same beam current as the secondary standard.  And any picoammeter non-linearity is automatically nulled out.  See this post for the details on my mea culpa moment:

https://probesoftware.com/smf/index.php?topic=1466.msg11189#msg11189
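Here is a trivial sketch of why that pairing nulls out picoammeter non-linearity (the numbers are invented for illustration, not real calibration data): whatever factor the picoammeter misreads by at a given nominal current appears in both the primary and the secondary standard intensities when they are normalized to the measured current, so it divides out of the k-ratio.

def k_ratio(cps_secondary, cps_primary, measured_nA_secondary, measured_nA_primary):
    # intensities normalized to the *measured* beam current
    return (cps_secondary / measured_nA_secondary) / (cps_primary / measured_nA_primary)

true_nA    = 100.0
pico_error = 1.03                      # hypothetical 3% misreading on this picoammeter range
measured   = true_nA * pico_error

cps_primary   = 1300.0 * true_nA       # e.g. ~1300 cps/nA on the metal standard
cps_secondary = 0.55 * cps_primary     # invented secondary standard intensity

# primary and secondary measured at the same nominal current: the 3% error cancels
print(k_ratio(cps_secondary, cps_primary, measured, measured))           # 0.55

# primary taken from a different range with a different misreading: it no longer cancels
other_measured = true_nA * 0.99
print(k_ratio(cps_secondary, cps_primary, measured, other_measured))     # biased away from 0.55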

So going back through some of that old data, here's a constant k-ratio acquisition using TiO2 and Ti metal (unfortunately I acquired the TiO2 *before* the Ti metal for each k-ratio, so the k-ratio is constructed using the primary standard from the previous beam current condition, which causes a "glitch" at 60 nA when the instrument switches picoammeter ranges at 50 nA!), so please ignore the 60 nA glitch:



But look at how consistent the k-ratios are at count rates up to 620K cps!   These were all integral PHA acquisitions by the way.

I need to run up to even higher count rates as soon as I get a chance.  Have you been able to acquire any constant k-ratio data sets on your Cameca instruments?  It would be very interesting to see your data.  I don't remember how the Cameca software utilizes the primary standard intensity data for constructing k-ratios, but just be sure to acquire both the primary and secondary standards at the same beam current (for each beam current), to null out any picoammeter non-linearities.
The only stupid question is the one not asked!

jlmaner87

  • Post Doc
  • ***
  • Posts: 10
Re: Generalized dead times
« Reply #11 on: September 03, 2022, 11:14:39 AM »
@sem-geologist: Thanks for posting the oscilloscope readings. Very helpful!

I have attached a manuscript that may be useful to this discussion. See equations (4) and (5). In equation (4), they use two dead time constants: one for 'paralyzing' behaviors and another for 'non-paralyzing' behaviors.

On another note, I thought the 'integral' modes in the Cameca and JEOL instruments were identical? They both use a baseline but 'count' all pulses above the baseline (whether those pulses are positive or negative in amplitude).

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #12 on: September 03, 2022, 11:27:16 AM »
It can be a little frustrating to find papers that contain correction models applicable to proportional counters, as much more has been published on the behavior of Geiger-Müller counters.  I hope that these two papers will generate some thought and discussion, particularly amongst those who are interested in doing quantitative work at high count rates.

This paper on Geiger-Müller (GM) dead times just posted above by James Maner (Almutairi et al., 2019) is very interesting. I'm enjoying reading it, but I agree with Brian that it's difficult to say how much of it applies to our proportional detectors.

They state, for example, that dead time effects are added to the pulse stream at all stages of the system, from the detector itself through to the final digital counting, though in GM systems it is the detector dead time that dominates these effects. They also maintain that GM detectors are neither ideal paralyzing nor non-paralyzing models but a mixture of both, depending on the detector voltage.

How much of this applies to our proportional counters?  How do we begin to separate out these different effects for our proportional counters?
« Last Edit: September 03, 2022, 11:31:39 AM by Probeman »
The only stupid question is the one not asked!

sem-geologist

  • Professor
  • ****
  • Posts: 302
Re: Generalized dead times
« Reply #13 on: September 03, 2022, 03:16:54 PM »
On another note, I thought the 'integral' modes in the Cameca and JEOL instruments were identical? They both use a baseline but 'count' all pulses above the baseline (whether those pulses are positive or negative in amplitude).

Ok, now you are making me doubt myself. From the electronic circuit point of view, integral mode could skip the PHA completely, but maybe I am missing some human factor there and maybe you are right about this "forced" PHA check of the lower boundary. This is actually very easy to check: I will bridge the signal through a resistor to the -15 V rail (the spectrometer's -15 V analog supply) to shift the baseline significantly and see whether integral counting ceases.

How much of this applies to our proportional counters?  How do we begin to separate out these different effects for our proportional counters?

I already wrote that it does not. The working principles of a proportional counter are closer to those of any solid state detector than to a GM tube. A GPC is easier to understand as something like a gaseous transistor, whereas a GM tube is more like a latching relay. The thing is, this paper fails at a very fundamental level: it states that the GM tube operates in Townsend mode, while in reality it is a streamer. There are two competing theories for discharge in gases, Townsend and streamer, and the distinction becomes particularly clear after reading any paper about SQS (self-quenching streamers). The GPC is Townsend; SQS and GM are not. The paper does not uncover anything that would not be obvious for streamers or out of the ordinary: it's basically Ohm's law, and the observations reported in the paper follow it exactly.
« Last Edit: September 03, 2022, 03:55:09 PM by John Donovan »

Probeman

  • Emeritus
  • *****
  • Posts: 2839
  • Never sleeps...
    • John Donovan
Re: Generalized dead times
« Reply #14 on: September 03, 2022, 03:58:55 PM »
How much of this applies to our proportional counters?  How do we begin to separate out these different effects for our proportional counters?

I already wrote that it does not. The working principles of a proportional counter are closer to those of any solid state detector than to a GM tube. A GPC is easier to understand as something like a gaseous transistor, whereas a GM tube is more like a latching relay.

OK, thanks for your explanation. Still it is interesting to see the mathematical (non-linear) form of these dead time expressions! But the second question still stands.

For example, there must be a non-zero recovery (dead) time for the gas in the detector to de-ionize. And I believe you stated that there are also some dead time effects within the pulse processing electronics, correct?

So how might we investigate the relative values and magnitudes of the detector dead time versus the electronics dead time?   I'm thinking of eq. 5 in the Almutairi et al. paper where they have two different dead times and a weighting factor between the two...  could something like this be useful for our WDS systems?
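Just to make the idea concrete, here is a generic weighted combination of the two limiting cases (my own sketch, not necessarily the exact form of their eq. 5; the τ values and the weight are placeholders):

import math

def observed_rate(n, tau_np, tau_p, f):
    # generic hybrid: a weight f of paralyzable (extending) behavior plus (1 - f) of
    # non-paralyzable (non-extending) behavior; n is the true rate in cps
    non_paralyzable = n / (1.0 + n * tau_np)
    paralyzable     = n * math.exp(-n * tau_p)
    return f * paralyzable + (1.0 - f) * non_paralyzable

# placeholder numbers: e.g. an electronics-like component and a detector-like component
print(observed_rate(5e5, tau_np=1.1e-6, tau_p=1.3e-6, f=0.3))

Fitting something like f, tau_np and tau_p to constant k-ratio data over a wide current range might be one way to start separating a detector contribution from an electronics contribution, if such a split even makes sense for our counters.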
« Last Edit: September 04, 2022, 08:19:34 AM by Probeman »
The only stupid question is the one not asked!