Probe Software Users Forum

General EPMA => Discussion of General EPMA Issues => Topic started by: Brian Joy on August 31, 2022, 08:27:35 PM

Title: Generalized dead times
Post by: Brian Joy on August 31, 2022, 08:27:35 PM
I’ve attached a paper by Jörg Müller, who has written extensively on the subject of dead time correction.  In the paper, he presents a generalized model that can be used to correct count rates for both Geiger-Müller and proportional counters, regardless of whether the dead time is “natural” or is set electronically.  He argues that most cases may be described as intermediate between non-extending (non-paralyzable) and extending (paralyzable) behavior (see Figure 2).  A criticism of his approach is that he does not account explicitly for pulse pileup but rather treats it as a contribution to the extendible dead time.  This is a source of significant confusion in the literature (look for papers by Pommé, for instance).  The Willis correction function is consistent with Müller’s equation 5 (non-extendible model) truncated after the second order term but is not sufficiently accurate.
Title: Re: Generalized dead times
Post by: Probeman on August 31, 2022, 09:24:35 PM
The Willis correction function is consistent with Müller’s equation 5 (non-extendible model) truncated after the second order term but is not sufficiently accurate.

But still more accurate than the traditional expression that the JEOL and Cameca software utilize!  😁

We look forward to reading the paper. Our current efforts to improve upon the traditional dead time correction are simply based on better modeling of the probabilities of multiple photon coincidence. This results in a 10x improvement in the range of dead time correction accuracy (from tens of thousands of cps to hundreds of thousands of cps).

Expressions that deal with hardware/electronics effects at even higher count rates should be investigated, but a 10x improvement is still a 10x improvement.

The two term Willis expression is more accurate than the traditional expression as you have already acknowledged, and the six term expression further improves accuracy. Which is exactly why we kept going and eventually integrated it with the logarithmic expression! In summary the logarithmic expression only handles the probabilities of photon coincidence, but it’s a damn good start!
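For anyone who wants to play with these numbers, here is a minimal sketch of the three correction forms as discussed in this thread (the dead time constant and count rates below are just illustrative values, not calibrated ones):

```python
import math

tau = 1.29e-6  # dead time constant in seconds (illustrative)

def traditional(n_obs):
    # Traditional linear expression: N = N'/(1 - tau*N')
    return n_obs / (1.0 - tau * n_obs)

def willis_two_term(n_obs):
    # Two-term Willis expression: adds the (tau*N')^2/2 coincidence term
    x = tau * n_obs
    return n_obs / (1.0 - x - x * x / 2.0)

def logarithmic(n_obs):
    # Logarithmic expression: closed form of the full series
    # x + x^2/2 + x^3/3 + ... = -ln(1 - x)
    x = tau * n_obs
    return n_obs / (1.0 + math.log(1.0 - x))

for n_obs in (10_000, 100_000, 300_000):
    print(n_obs, traditional(n_obs), willis_two_term(n_obs), logarithmic(n_obs))
```

At 10 kcps observed, the three expressions agree to within about a count; at 300 kcps observed they diverge substantially, which is exactly the regime where the multi-term corrections matter.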

In the meantime we’ve been looking at constant k-ratio data from a number of labs over the last week, and it is quite amazing to see how well this method can reveal subtle instrumental artifacts.  Have you had a chance to try it on your instrument? There’s a nice description of the process here:

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100
Title: Re: Generalized dead times
Post by: Brian Joy on September 01, 2022, 09:45:48 PM
I’ve attached a paper by S. Pommé in which the author expands on the treatment of Müller through explicit consideration of pulse pileup in conjunction with an electronically imposed dead time.  Although pulse pileup is a more severe problem for the JEOL WDS pulse processing circuitry, it cannot be ignored -- especially at high count rates -- even for the case in which a dead time is enforced electronically.

It can be a little frustrating to find papers that contain correction models applicable to proportional counters, as much more has been published on the behavior of Geiger-Müller counters.  I hope that these two papers will generate some thought and discussion, particularly amongst those who are interested in doing quantitative work at high count rates.
Title: Re: Generalized dead times
Post by: sem-geologist on September 02, 2022, 03:29:27 AM
It can be a little frustrating to find papers that contain correction models applicable to proportional counters, as much more has been published on the behavior of Geiger-Müller counters.  I hope that these two papers will generate some thought and discussion, particularly amongst those who are interested in doing quantitative work at high count rates.

Exactly! It is really annoying that proportional counters are often treated as a subsection under Geiger-Müller (GM) counters in larger research works. This is where the nonsense about dead time arising inside the proportional counter itself originates, and it has kept propagating through the last decades (it has even made its way into various books on microanalysis). This mistake most probably results from the close physical similarity between the GPC and the GM tube. The fact that the GPC has completely different working principles, which are actually much closer to those of solid state detectors (i.e. SDD) than to the GM tube, is overlooked. The popular chart of gas chamber mode vs. applied potential also suggests that these modes are similar, with a smooth transition from one to the other. But that is not accurate: between the proportional/reduced-proportional modes and GM mode the picture is missing the SQS (self-quenched streamer) mode. GM is an unquenched streamer (a streamer that stretches from cathode to anode, bridging them and producing a complete discharge), whereas the proportional and reduced-proportionality modes are dominated by Townsend avalanches. An SQS is a streamer like GM, but it quenches itself (stops propagating) midway between cathode and anode, and thus produces a pulse of extreme amplitude (incredible P/B ratio; no pre-amplification needed, it can be measured directly) with some substantial dead time and a significant discharge of the charge stored on the cathode and buffer capacitor, though far from the full discharge of GM.

The similarity of solid state detectors to proportional counters is that incident events (photoelectrons and the resulting amplified currents) merely scratch the charge reserve of the detector's cathode, whereas GM fully discharges it. That full discharge is the main reason for the few hundred microseconds of dead time in GM: discharging the capacitors from full to zero and recharging them from zero to full takes a lot of time. We don't need that huge full discharge/recharge on proportional counters, as we have no physical means to produce such a huge discharge of the cathode (and the attached reserve capacitors); we barely "scratch the surface". If a proportional counter were able to produce such a deep discharge, we would need no charge-sensitive preamplifiers, since such a huge discharge would produce a very large voltage drop that could be detected directly. But that is not the case, and that is why we need preamplifiers on our proportional counters: a single event can discharge only about a millionth of the charge stored on the cathode, a drop which is fast to restore (just a few ns), and a drop of a few microvolts (out of 1-2 kV) cannot change the potential field or its ability to simultaneously attract other, coincident photoelectrons.

Proportional counters, at least those used in EPMAs, have really absolutely no dead time, even at the most extreme achievable count rates; that would require not 2, 3, 9, or 99, but a million coincident X-rays within a few ns, which is not possible (maybe synchrotron radiation could deliver such an incident rate, but not EPMA). All the dead time we observe in EPMA WDS consists only of electrical signal losses somewhere further down the pipeline, after the GPC.

BTW, it is possible to force solid state detectors to "behave" like GM (at least for a single event, depending on the material), but it is impractical, as cracked or molten solids do not heal after such a deep discharge, whereas the GM gas gets back to its initial state without a problem (it "heals up" after the event).
Title: Re: Generalized dead times
Post by: Probeman on September 02, 2022, 11:35:10 AM
Aurelien and I finally had a chance to look over the paper by Müller and we found it interesting, though disappointing in that it is an entirely theoretical paper with no data presented to evaluate any of the expressions. He does state "only the outcome of several studies underway will tell whether the suggested expressions are indeed valid".

We do not see any follow-up papers from him in a Google Scholar search, however, though we did find this paper by two other authors (An experimental test of Müller statistics for counting systems with a non-extending dead time), which we have not yet had a chance to look over:

https://doi.org/10.1016/0029-554X(78)90544-X

So that might be worth a look.

I’ve attached a paper by Jörg Müller, who has written (or wrote) extensively on the subject of dead time correction...  The Willis correction function is consistent with Müller’s equation 5 (non-extendible model) truncated after the second order term...

Unfortunately equation 5 (truncated or not) is not related to the Willis expression nor to our "extended Willis" multiple term expression. In our case the coefficients are 1/2, 1/3, 1/4... and in the Müller paper they are 1/2, 2/3, 9/8...  In fact, Aurelien thinks the last term 9/8 is a typo and should actually be 3/8 to be mathematically consistent.
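A quick way to check that the 1/2, 1/3, 1/4... coefficients are what tie the multi-term expression to the logarithmic one (a sketch, with x standing for the dead time constant times the observed count rate):

```python
import math

x = 0.3  # tau * N', a fairly heavy dead time load, for illustration

# Partial sum of the series x + x^2/2 + x^3/3 + ... (coefficients 1, 1/2, 1/3, ...)
six_terms = sum(x**n / n for n in range(1, 7))

# The series converges to -ln(1 - x), which is where the log expression comes from
closed_form = -math.log(1.0 - x)

print(six_terms, closed_form)  # already agree to about four decimal places
```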

The Willis correction function is consistent with Müller’s equation 5 (non-extendible model) truncated after the second order term but is not sufficiently accurate.

When we first read this we thought Brian was referring to something stated in the Müller paper, but since it's an entirely theoretical paper, we could not understand why Brian would say that. And then we realized that he is just repeating his "insufficiently accurate" claim from his previous posts.  And to be honest, though I (and the co-authors) have tried, we've never been able to make sense of his claim.

But then, a few hours after one of our recent zoom meetings with the manuscript co-authors, a light bulb finally went off in my head and I think I finally realized where Brian went wrong in his data analysis.  The explanation (I hope) will provide some useful information to all and I must say I think it's quite appropriate that this occurs in this topic on Generalized Dead time which he created, because the mistake he made is related to how we interpret these various effects that all fall under the heading of the (generalized) "dead time" correction.

So let's start with Brian's "delta" plot that he keeps pointing to and expand it a little into the main area of interest:

(https://probesoftware.com/smf/gallery/395_02_09_22_9_45_57.png)
 
We see the red circles, which are the traditional linear expression, and the green circles, which are the logarithmic expression, both using a dead time constant of 1.07 usec (I am assuming, since he doesn't specify it for the traditional expression). And we see clearly that the logarithmic expression provides results identical to the traditional expression at low count rates (as expected) and more constant k-ratios at higher count rates (as he has already agreed).  So far so good. 

But then he does something very strange.  He proceeds to plot (green line) the logarithmic expression using a dead time constant of 1.19 usec!   Why this value?   And why did he not also plot the traditional expression using the same 1.19 usec constant?  Because in both cases the result would be a severe over correction of the data!  Why would someone do that? 

I'm just guessing here, but I think he thought: OK, at the really high count rates even the logarithmic expression isn't working perfectly, so I'll just start increasing the dead time constant to force those really high count rate values lower.

But, as we have stated numerous times, once you have adjusted your dead time constant using the traditional linear expression (or obtained it from the JEOL engineer), one should just continue to use that value, or in the case of very high count rates where you might note a very small over correction of the data using the log expression, one might slightly decrease the dead time constant by 0.02 or 0.03 usec.  But it should never be increased to produce an over correction of the data at lower count rates.

Let's now discuss the underlying mechanisms. As both BJ and SG have noted, there are probably several underlying mechanisms that get lumped under the general term "dead time".  We maintain that some of these effects (above 50K cps) are due to multiple photon coincidence, and that above 300K or 400K cps other hardware/electronic effects become dominant, as BJ and SG have been discussing.  Why do I say this?  Because at these extremely high count rates, further refining the Poisson coincidence model just doesn't make any difference.  But again, for count rates under say 200 to 300 or even 400K cps, the new expressions help enormously.

Here is a plot showing the traditional, Willis and log expressions for Anette's Ti PETL data (originally 1.32 usec from the JEOL engineer, but then adjusted down slightly to 1.29 usec):

(https://probesoftware.com/smf/gallery/395_02_09_22_10_01_44.png)

You will note that the k-ratios are increasingly constant as we go from the traditional expression (which only deals with single photon coincidence) to the two term Willis expression (which deals with two photons coincident with a single incident photon) to the log expression (which deals with any number of photons coincident with a single photon).  However, and this is a key point, you will note that at some sufficiently high count rate even the logarithmic expression fails to correct properly for these dead time effects.

If we then attempt to force the dead time constant to correct for these extremely high count rates (by arbitrarily increasing the dead time constant), we are simply attempting to correct for other (non-Poisson) dead time effects as seen here, which produces an over correction just as Brian saw:

(https://probesoftware.com/smf/gallery/395_02_09_22_10_23_41.png)
 
Note the over correction after the dead time was arbitrarily increased from 1.29 usec to 1.34 usec (red symbols).

This should not be surprising. All three of these expressions are only attempts to model the dead time as mathematical (Poisson) probabilities. The traditional linear method was a reasonable approximation back when calculations were done on slide rules. Now that we have computers, I say let's also account for the additional Poisson probabilities of multi-photon coincidence. 

I know nothing about WDS pulse processing hardware/electronics, but let me now speculate by showing Anette's data plot with some annotations:
 
(https://probesoftware.com/smf/gallery/395_02_09_22_10_31_02.png)

I am proposing that while these various non-linear dead time expressions have allowed us to perform quantitative analyses at count rates roughly 10x greater than were previously possible, at even higher count rates (>400K cps) we start to run into other non-Poisson effects (from limitations of hardware/electronics) that may require additional terms in our dead time correction as proposed by Müller and others.  I suspect that these additional hardware related terms may require careful hardware dependent calibration or even as SG has proposed, new detectors and /or pulse processing electronics.

I welcome any comments and discussion.
Title: Re: Generalized dead times
Post by: sem-geologist on September 03, 2022, 03:02:02 AM
Very relevant question for JEOL probe users: do JEOL probes have an integral mode (one that ignores the PHA)? What about the most recent JEOL probe models? I am asking because the last PHA plots I saw sent by Brian had no background distribution visible, only the pulse distribution. On Cameca instruments, integral mode uses only the pulse sensing part of the electronics, which is able to sense even pulses whose baseline drifts (at very high count rates) to negative voltage, where PHA (diff mode) would filter such pulses out. I think the log dead time correction is able to work consistently to well beyond 500 kcps (input rate) on Cameca instruments.
Look to this picture (again):
(https://probesoftware.com/smf/gallery/1607_17_08_22_2_08_01.bmp)
pulse no. 3 would be recognized in integral mode, but would be rejected in diff mode. And that can be one of the additional processes behind the missing counts at high count rates seen in Anette's dataset. There are other possibilities, but that is the most likely one to kick in on a JEOL at such medium count rates of half a million.
Title: Re: Generalized dead times
Post by: Probeman on September 03, 2022, 08:21:34 AM
Very relevant question for JEOL probe users: do JEOL probes have an integral mode (one that ignores the PHA)? What about the most recent JEOL probe models?

Yes, JEOL instruments have "integral" mode.  All models of JEOL instruments have both integral and differential mode.  In integral mode, only the baseline filter is applied.

All the constant k-ratio data I have shown in the last two months (including Anette's) was acquired using integral mode.  At the very beginning of the constant k-ratio topic I did perform some differential mode acquisitions on my Cameca, but I switched to integral after you chastised me for using differential mode!     :D

When I ran some of my Ti Ka k-ratios on TiO2 and Ti metal, I checked the PHA distributions at both ends of the acquisition, first at 10 nA:

(https://probesoftware.com/smf/gallery/395_03_09_22_8_14_02.png)

and then at 200 nA:

(https://probesoftware.com/smf/gallery/395_03_09_22_8_14_20.png)

just to be sure that they weren't being clipped by pulse height depression.  On this spectrometer we were getting around 1300 cps/nA, so not too hot; at 10 nA that would be 13K cps and at 200 nA that would be 260K cps.   I'm actually pretty impressed at how little PHA shift there was going from 13K cps to 260K cps using the same PHA settings (baseline = 0.3, gain = 800, bias = 1320).

To all: please note that the PHA peak shifts lower (to the left) at higher beam currents, so for the constant k-ratio acquisition we adjust the PHA at low beam currents to sit to the right of the center of the PHA scan region.  Normally, of course, we adjust the PHA peak to the left of center, because we usually perform our peak scans and PHA scans on a standard with a higher concentration of the element than our unknowns, and as we go to lower intensities in our unknowns the peak shifts to the right, so we need to leave room for that shift to the right.  But for the constant k-ratio acquisitions (assuming we tune up at low beam currents), the PHA shift goes to the left, so we start higher in the PHA distribution to avoid clipping the PHA peak at high count rates.  If this doesn't make sense, ask, because this is an important point about EPMA!

The above PHA scans were acquired using the normal Cameca MCA acquisition, but I can also acquire JEOL style PHA scans by scanning the PHA baseline on the Cameca, which often produces a much higher energy resolution PHA scan.   I will try to get to that also.

However, Anette recently sent me another "terrifying" data set at even higher (> 1M cps!) count rates of SiO2/Si k-ratios on her JEOL TAPL crystal, and there we see some problematic PHA shifting. I will post her new data as soon as I get a chance.   Suffice to say, we now realize that we need to modify the PHA settings as we ramp up to crazy high count rates, to prevent pulse height depression from cutting off some of the PHA peak.
Title: Re: Generalized dead times
Post by: sem-geologist on September 03, 2022, 08:43:16 AM

Yes, JEOL instruments have "integral" mode.  All models of JEOL instruments have both integral and differential mode.  In integral mode, only the baseline filter is applied.

And that is not an integral mode, but a "pseudo" integral mode, and it bites at high count rates due to the signal baseline (the average base of the pulses, not the lower threshold of the PHA) shifting below 0 V. With such a PHA lower filter (or "baseline" as you call it, which is the lower threshold value of the PHA), the 3rd pulse in the oscilloscope snapshot I presented would be rejected in such a pseudo-"integral" mode. However, a real integral mode (Cameca, hardware) would accept such a pulse even if its peak were shifted completely below 0 V. True integral mode is completely resilient against any PHA shift (which is really the signal baseline moving well below 0 V due to the increased density (count rate) of pulses).
Title: Re: Generalized dead times
Post by: Probeman on September 03, 2022, 08:55:34 AM

Yes, JEOL instruments have "integral" mode.  All models of JEOL instruments have both integral and differential mode.  In integral mode, only the baseline filter is applied.

And that is not an integral mode, but a "pseudo" integral mode, and it bites at high count rates due to the signal baseline (the average base of the pulses, not the lower threshold of the PHA) shifting below 0 V. With such a PHA lower filter (or "baseline" as you call it, which is the lower threshold value of the PHA), the 3rd pulse in the oscilloscope snapshot I presented would be rejected in such a pseudo-"integral" mode. However, a real integral mode (Cameca, hardware) would accept such a pulse even if its peak were shifted completely below 0 V. True integral mode is completely resilient against any PHA shift (which is really the signal baseline moving well below 0 V due to the increased density (count rate) of pulses).

If you already knew, why the heck did you ask?     :)

You may be correct about these details, but based on Anette's constant k-ratio data from her new JEOL instrument we can obtain consistent k-ratios up to around 400K or 500K cps, so pretty darn good.

Have you had a chance to acquire some constant k-ratios on your instrument(s)?   
Title: Re: Generalized dead times
Post by: sem-geologist on September 03, 2022, 09:26:06 AM
If you already knew, why the heck did you ask?     :)

You may be correct about these details, but based on Anette's constant k-ratio data from her new JEOL instrument we can obtain consistent k-ratios up to around 400K or 500K cps, so pretty darn good.

I had not known whether or how integral mode works on JEOL instruments. Thanks for sharing the info.

Yes, up to 400 kcps or 500 kcps this will work in this pseudo integral mode, as follows:

S_b => baseline of the signal
Ph => absolute pulse height measured from the signal baseline
P0 => pulse height relative to 0 V (common ground, with a few mV of fluctuation)
PHA_L => PHA baseline, or simply the lowest PHA threshold

A pulse will be counted in such an "integral" mode if P0 > PHA_L;
so the problem won't show up until the baseline drops below PHA_L - Ph. At low count rates the average S_b will be close to 0; with increasing count rate and pulse density, S_b will start shifting to negative values, so that at around 400 kcps some parts of S_b will start to be less than PHA_L - Ph. That is why the trend departs from 400 kcps.
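The acceptance condition above can be put in a few lines of toy code (all the voltages here are made-up illustrative numbers, not measured values):

```python
PHA_L = 0.5   # lower PHA threshold (V), hypothetical value
Ph = 4.0      # pulse height above the local signal baseline (V), hypothetical value

def pulse_counted(s_b):
    # A pulse is registered only if its height relative to ground,
    # P0 = S_b + Ph, clears the fixed lower threshold PHA_L.
    return (s_b + Ph) > PHA_L

print(pulse_counted(0.0))    # -> True: low count rate, baseline near 0 V
print(pulse_counted(-3.6))   # -> False: baseline below PHA_L - Ph, pulse lost
```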
Title: Re: Generalized dead times
Post by: Probeman on September 03, 2022, 10:46:29 AM
Yes, up to 400 kcps or 500 kcps this will work in this pseudo integral mode, as follows:

S_b => baseline of the signal
Ph => absolute pulse height measured from the signal baseline
P0 => pulse height relative to 0 V (common ground, with a few mV of fluctuation)
PHA_L => PHA baseline, or simply the lowest PHA threshold

A pulse will be counted in such an "integral" mode if P0 > PHA_L;
so the problem won't show up until the baseline drops below PHA_L - Ph. At low count rates the average S_b will be close to 0; with increasing count rate and pulse density, S_b will start shifting to negative values, so that at around 400 kcps some parts of S_b will start to be less than PHA_L - Ph. That is why the trend departs from 400 kcps.

I think you might be on to something!

I had stopped looking at my SX100 constant k-ratio data from a while back because at that time I had not yet appreciated the importance of acquiring the primary standard prior to the secondary standard (at each beam current). That way, when the standard intensity drift correction is turned off, the primary standard utilized in the k-ratio is always using the same beam current as the secondary standard.  And any picoammeter non-linearity is automatically nulled out.  See this post for the details on my mea culpa moment:

https://probesoftware.com/smf/index.php?topic=1466.msg11189#msg11189

So going back through some of that old data, here's a constant k-ratio acquisition using TiO2 and Ti metal (unfortunately I acquired the TiO2 *before* the Ti metal for each k-ratio, so the k-ratio is constructed using the primary standard from the previous beam current condition, which causes a "glitch" at 60 nA when the instrument switches picoammeter ranges at 50 nA!), so please ignore the 60 nA glitch:

(https://probesoftware.com/smf/gallery/395_03_09_22_10_14_02.png)

But look at how consistent the k-ratios are at count rates up to 620K cps!   These were all integral PHA acquisitions by the way.

I need to run up to some even higher count rates as soon as I get a chance.  Have you been able to acquire any constant k-ratio data sets on your Cameca instruments?  It would be very interesting to see your data.  I don't remember how the Cameca software utilizes the primary standard intensity data for constructing k-ratios, but just be sure to acquire both the primary and secondary standards at the same beam current (for each beam current), to null out any picoammeter non-linearities.
Title: Re: Generalized dead times
Post by: jlmaner87 on September 03, 2022, 11:14:39 AM
@sem-geologist: Thanks for posting the oscilloscope readings. Very helpful!

I have attached a manuscript that may be useful to this discussion. See equations (4) and (5). In equation (4), they use two dead time constants: one for 'paralyzing' behaviors and another for 'non-paralyzing' behaviors.

On another note, I thought the 'integral' modes in the Cameca and JEOL instruments were identical? They both use a baseline but 'count' all pulses above the baseline (whether those pulses are positive or negative in amplitude).
Title: Re: Generalized dead times
Post by: Probeman on September 03, 2022, 11:27:16 AM
It can be a little frustrating to find papers that contain correction models applicable to proportional counters, as much more has been published on the behavior of Geiger-Müller counters.  I hope that these two papers will generate some thought and discussion, particularly amongst those who are interested in doing quantitative work at high count rates.

This paper on Geiger-Müller (GM) dead times posted above by James Maner, from Almutairi et al. (2019), is very interesting. I'm enjoying reading it, but I agree with Brian that it's difficult to say how much of it applies to our proportional detectors.

They say for example that dead time effects are added to the pulse stream at all stages of the system from the detector itself, through the final digital counting, though they say that in GM systems it is the detector dead time that dominates these effects. They also maintain that GM detectors are neither ideal paralyzing nor non-paralyzing models and are a mixture of both depending on the detector voltage.

How much of this applies to our proportional counters?  How do we begin to separate out these different effects for our proportional counters?
Title: Re: Generalized dead times
Post by: sem-geologist on September 03, 2022, 03:16:54 PM
On another note, I thought the 'integral' modes in the Cameca and JEOL instruments were identical? They both use a baseline but 'count' all pulses above the baseline (whether those pulses are positive or negative in amplitude).

OK, now you are making me start to doubt myself. From an electronic circuit POV, in integral mode the PHA could be skipped completely, but maybe I am missing some human factor there and maybe you are right about this "forced" PHA check of the lower boundary. This is actually very easy to check: I will bridge the signal through a resistor to the -15 V rail (the spectrometer's -15 V analog supply) to shift the baseline significantly, and will watch whether integral counting ceases.

How much of this applies to our proportional counters?  How do we begin to separate out these different effects for our proportional counters?

I already wrote that it does not. The working principles of the proportional counter are closer to those of any solid state detector than to the GM tube. A GPC is easier to understand as a kind of gaseous transistor, whereas a GM tube could be compared to a latching relay. The thing is, this paper fails at a very fundamental level: it states that GM operates in Townsend mode, while in reality it is a streamer. There are two competing theories of discharge in gases: Townsend and streamer. The distinction becomes particularly clear after reading any paper about SQS (self-quenching streamers). The GPC is Townsend; SQS and GM are not. This paper does not reveal anything that would not be obvious for streamers, or anything out of the ordinary: it is basically Ohm's law, and the observations stated in the paper follow it exactly.
Title: Re: Generalized dead times
Post by: Probeman on September 03, 2022, 03:58:55 PM
How much of this applies to our proportional counters?  How do we begin to separate out these different effects for our proportional counters?

I already wrote that it does not. The working principles of the proportional counter are closer to those of any solid state detector than to the GM tube. A GPC is easier to understand as a kind of gaseous transistor, whereas a GM tube could be compared to a latching relay.

OK, thanks for your explanation. Still it is interesting to see the mathematical (non-linear) form of these dead time expressions! But the second question still stands.

For example, there must be a non-zero recovery (dead) time for the gas in the detector to de-ionize. And I believe you stated that there are also some dead time effects within the pulse processing electronics, correct?

So how might we investigate the relative values and magnitudes of the detector dead time versus the electronics dead time?   I'm thinking of eq. 5 in the Almutairi et al. paper where they have two different dead times and a weighting factor between the two...  could something like this be useful for our WDS systems?
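I don't have the exact Almutairi et al. eq. 5 form in front of me as I write this, so purely as a hypothetical sketch of what a weighted two-dead-time model could look like (tau_p, tau_np and the weight f would all have to be calibrated against constant k-ratio data; none of these numbers are measured, and the mixture form itself is an assumption, not the paper's equation):

```python
import math

def observed_rate(n_true, tau_p, tau_np, f):
    # Hypothetical weighted mixture of the two classic responses:
    #   extending (paralyzable):         N' = N * exp(-tau_p * N)
    #   non-extending (non-paralyzable): N' = N / (1 + tau_np * N)
    extending = n_true * math.exp(-tau_p * n_true)
    non_extending = n_true / (1.0 + tau_np * n_true)
    return f * extending + (1.0 - f) * non_extending

# At low true rates the mixture reduces to ~N*(1 - tau*N) regardless of f
print(observed_rate(10_000, 1.3e-6, 1.3e-6, 0.5))
# At high rates the weighting factor starts to matter
print(observed_rate(500_000, 1.3e-6, 1.3e-6, 0.0),
      observed_rate(500_000, 1.3e-6, 1.3e-6, 1.0))
```

Fitting f (along with the two constants) to a constant k-ratio data set would be one way to ask whether our detectors lean paralyzable or non-paralyzable at the high end.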
Title: Re: Generalized dead times
Post by: Probeman on September 04, 2022, 02:47:10 PM
I looked through some of my original constant k-ratio data from my SX100 and found this SiO2/Si run from earlier this June, where on spc2 LTAP we were getting around 4300 cps/nA on pure Si metal:

(https://probesoftware.com/smf/gallery/395_04_09_22_2_41_55.png)

It's not as "terrifying" as Anette's TAPL crystal on her JEOL instrument but still pretty impressive!

You can see that starting around 500K cps, even the logarithmic expression starts to not fully correct the observed count rate properly.  But it's still doing a lot better than the traditional expression which starts failing almost immediately!
Title: Re: Generalized dead times
Post by: sem-geologist on September 06, 2022, 02:52:08 AM
Yeah, TAPs are scary; even a plain TAP can do ~2000 cps/nA on Si. That is comparable in intensity to Cr Ka on an LPET. What is JEOL's secret sauce for getting more counts per nA?
Title: Re: Generalized dead times
Post by: Probeman on September 06, 2022, 07:52:03 AM
Yeah, TAPs are scary; even a plain TAP can do ~2000 cps/nA on Si. That is comparable in intensity to Cr Ka on an LPET. What is JEOL's secret sauce for getting more counts per nA?

I don't know for sure, but it is probably at least partly due to JEOL having a smaller focal circle diameter (140 mm vs. 160 mm for Cameca). That implies slightly lower spectral resolution, but better geometric efficiency (assuming the crystals are the same size).

I've been promising to post Anette's "terrifying" TAPL data and so here it is:

(https://probesoftware.com/smf/gallery/395_06_09_22_7_36_13.png)

Now that's a whole lotta counts! 

Looking at the lower count rates, we can see the traditional and logarithmically corrected data, both at 1.1 usec (red and yellow), where the logarithmic expression very slightly over corrects, and also the logarithmic expression after decreasing the dead time constant by 0.02 usec to 1.08 usec (cyan), which yields very constant k-ratios (for a little while!).

We can see it a little clearer in a zoom:

(https://probesoftware.com/smf/gallery/395_06_09_22_7_36_44.png)

We can see that at these "terrifying" count rates, even the logarithmic expression fails eventually.  The question I wish we could answer is what physical mechanism at these high count rates is causing this, and what math we might utilize to correct for it. 

On the other hand, at these count rates (~400K cps) the dead time correction (percent) is simply enormous, and so maybe we should just be happy with improving our traditional dead time correction by only a factor of 10x.    :)

However, Anette also performed some PHA scans at these various beam currents and we do see some severe pulse height depression starting around 100 nA:

(https://probesoftware.com/smf/gallery/395_06_09_22_7_36_59.png)

So that's at least part of the problem we're seeing above.
Title: Re: Generalized dead times
Post by: Brian Joy on September 06, 2022, 03:49:44 PM
I'd like to return to my Mo Lβ3/Lα count rate dataset (https://probesoftware.com/smf/index.php?topic=1470.msg11016#msg11016) as an example and re-interpret a little...

While it is true that the origin of pulse pileup differs from that of dead time, corrections for the two are described by the same types of models.  For instance, correction for pulse pileup requires a model equivalent to that for an extending dead time (see work by Pommé):  N’ = Nexp(-τN).  As shown by Müller (1991) (https://probesoftware.com/smf/index.php?topic=1489.msg11199#msg11199), the first order approximation (based on power series expansion) to the extending dead time correction is the non-extending correction, N = N’/(1 - τN’) or N’/N = 1 - τN’.  The implication of this is that, even if JEOL pulse processing circuitry is in truth subject only to pulse pileup, then the non-extending (non-paralyzable, linear) model should still be applicable at relatively low count rates (certainly below 50 kcps).  In contrast, for the more general case extending to high count rates, superposition of pulse pileup and dead time requires a more complicated treatment such as that proposed by Pommé (2008) (https://probesoftware.com/smf/index.php?topic=1489.msg11203#msg11203).  If a dead time is not enforced electronically, then correction for pulse processing count losses at high count rates could potentially be described by a simple model.
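The low-count-rate equivalence of the two models is easy to verify numerically. A quick sketch (Python; the dead time and rates are illustrative values only, not measured ones):

```python
import math

def observed_extending(N, tau):
    # extending (paralyzable) model: N' = N * exp(-tau * N)
    return N * math.exp(-tau * N)

def observed_nonextending(N, tau):
    # non-extending (non-paralyzable) model: N' = N / (1 + tau * N)
    return N / (1 + tau * N)

tau = 1.5e-6  # dead time in seconds (illustrative value only)

# At a low true rate (tau*N = 0.03) the two models agree to first order:
print(observed_extending(20_000.0, tau))     # ~19409 cps
print(observed_nonextending(20_000.0, tau))  # ~19417 cps

# At a high true rate (tau*N = 0.6) they diverge strongly:
print(observed_extending(400_000.0, tau))     # ~219,525 cps
print(observed_nonextending(400_000.0, tau))  # 250,000 cps
```

At 20 kcps the two predictions differ by well under 0.1%, which is why the non-extending (linear) treatment is safe at low count rates regardless of the true loss mechanism.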

In the plot below (and in all of the data plots that I’ve shown in my application of the Heinrich et al. ratio method (https://probesoftware.com/smf/index.php?topic=1470.0)), most of the correction and essentially all departures from linear behavior are due to one X-ray line (within the ratio).  In the case illustrated below, N’32 represents the Mo Lα count rate on channel 5/PETH, while N’12 represents the Mo Lβ3 count rate on channel 2/PETL.  For measured count rates below 200 kcps, the ratio, N’32/N’12, is greater than 20:1.  It appears that the plotted ratio (N’12/N’32), in which departure from linearity can be ascribed effectively solely to Mo Lα, is fit well by an exponential function.  The same is true for my corresponding dataset for Ti, which extends to measured count rates up to 227 kcps.  For my corresponding dataset for Si, an exponential fit works well for Si Kα count rates up to about 140 kcps (on channel 4), but the dataset is fit better as a whole by a quadratic.  I believe that the reason for this may lie in the extreme degradation of resolution in the pulse amplitude distribution, which may have contributed to irrecoverable loss of peak X-ray counts above ~140 kcps.

I should note that I’ve restricted the range over which I’ve applied the linear model to the Mo data plotted below to about 63 kcps.  This is lower than the maximum count rate at which I applied the linear model before, as later I became concerned that ratios calculated using values greater than this might display noticeable departure from linearity.  Using the linear fit, I obtain τ3 = 1.32 μs (channel 5).  Although I’ve chosen a value of τ3 = 1.30 μs for the exponential correction, the two values are essentially identical when considering propagation of counting error.  Increasing the value of τ3 produces a lower ratio value for either the exponential model or the Donovan et al. model.  Note that the exponential correction requires an iterative solution.  I'll let the plot do the rest of the talking.

(https://probesoftware.com/smf/gallery/381_06_09_22_5_13_01.png)
Title: Re: Generalized dead times
Post by: Probeman on September 06, 2022, 05:44:47 PM
Holy cow, this looks interesting (and glad to see you've come over to the non-linear side!).   :)

As you say, we'll have to implement this with a Lambert W function so give us a couple of days to implement that and try it out...
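In the meantime, here is a minimal sketch of what that inversion looks like (Python; illustrative only, not the Probe for EPMA implementation). A Newton iteration avoids a SciPy dependency and is equivalent to the Lambert W solution on the physical branch:

```python
import math

def true_rate_extending(N_obs, tau, tol=1e-9, max_iter=100):
    """Invert N' = N * exp(-tau*N) for the true rate N (Newton's method).

    Equivalent to N = -W(-tau*N') / tau on the principal Lambert W branch,
    i.e. the physical root with tau*N < 1.  Only solvable when the
    observed rate N' does not exceed 1/(e*tau)."""
    if tau * N_obs > 1.0 / math.e:
        raise ValueError("observed rate exceeds the 1/(e*tau) ceiling")
    N = N_obs  # the observed rate is a good starting guess
    for _ in range(max_iter):
        f = N * math.exp(-tau * N) - N_obs
        fprime = math.exp(-tau * N) * (1.0 - tau * N)
        step = f / fprime
        N -= step
        if abs(step) < tol * N:
            break
    return N

# Round trip: true rate 100 kcps at tau = 1.5 us
tau = 1.5e-6
N_obs = 100_000.0 * math.exp(-tau * 100_000.0)  # ~86,071 cps observed
print(true_rate_extending(N_obs, tau))          # recovers ~100,000 cps
```

Since f is concave and increasing below the root, the iteration converges monotonically from the observed-rate starting guess and never crosses into the unphysical branch.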
Title: Re: Generalized dead times
Post by: Probeman on September 07, 2022, 10:27:23 AM
Aurelien and I are evaluating Pomme's dead time expression and it looks to be worth implementing, but is limited in the maximum count rate it can handle (more so than the other expressions).
 
In fact it appears to be limited to count rates around 245K cps at 1.5 usec (~JEOL) or 126K cps at 3 usec (~Cameca).  Of course it all depends on the dead time constant utilized to obtain a constant k-ratio!    :)

I'll be plotting these exponential expression graphs up soon, but in the meantime here's another interesting observation: Aurelien and I plotted up some SX100 data from a while back for Ti Ka on LPET and Si ka on LTAP:

https://probesoftware.com/smf/index.php?topic=1489.msg11218#msg11218

https://probesoftware.com/smf/index.php?topic=1489.msg11223#msg11223

But when we plotted them both against the count rate of each primary standard we get this:

(https://probesoftware.com/smf/gallery/395_07_09_22_10_01_04.png)

The Ti Ka data has a glitch at 60 nA, as explained earlier, because I acquired the primary standards *after* the secondary standards, so the k-ratio was constructed using the primary standard intensity from the previous beam current condition, which caused a slightly anomalous intensity when switching picoammeter ranges from below 50 nA to above 50 nA.

But still it is clear that something odd is going on because both sets of k-ratios are measured on the same spectrometer, over the same (primary standard) count range and at the same bias voltage (albeit slightly different dead time constants).  That is, why are the Ti Ka k-ratios plotting up nice and constant, while the Si Ka k-ratios are showing a dead time correction issue, albeit at fairly high count rates? 

Well Aurelien noticed that the PHA gains are quite different. So for example, here are the PHA settings for the Ti Ka k-ratios:

PHA Parameters:
ELEM:    ti ka   ti ka   ti ka   ti ka   ti ka
DEAD:     2.80    2.76    2.90    2.95    3.10
BASE:      .29     .29     .29     .29     .29
WINDOW    4.50    4.50    4.50    4.50    4.50
MODE:     INTE    INTE    INTE    INTE    INTE
GAIN:     942.    864.   1369.    818.    864.
BIAS:    1320.   1320.   1850.   1320.   1850.

And here for the Si Ka k-ratios:

PHA Parameters:
ELEM:    si ka   si ka   si ka   si ka   si ka
DEAD:     2.85    2.65    3.00    2.76    3.10
BASE:      .26     .26     .26     .26     .26
WINDOW    4.50    4.50    4.50    4.50    4.50
MODE:     INTE    INTE    INTE    INTE    INTE
GAIN:    2400.   2330.   3410.   1677.   2237.
BIAS:    1320.   1320.   1850.   1320.   1850.


This would seem to indicate that the gain setting has an effect on the dead time of the system beyond the photon coincidence effect, whereby the higher the gain, the higher the pulse pileup and therefore the higher the dead time constant necessary?

Increasing the dead time constant using the logarithmic expression for the Si Ka k-ratios would only cause an over correction at moderate count rates. And at this dead time and count rates the exponential expression fails...
Title: Re: Generalized dead times
Post by: Brian Joy on September 07, 2022, 11:53:01 AM
Increasing the dead time constant using the logarithmic expression for the Si Ka k-ratios would only cause an over correction at moderate count rates. And at this dead time and count rates the exponential expression fails...

Let me emphasize that the exponential expression can only work for cases in which the correction is due to an extending dead time or pulse pileup.  If an enforced, non-extending dead time is present (as in the Cameca pulse processing circuitry), then a more involved treatment such as that of Pommé (2008) must be applied.  Also, keep in mind that SEM Geologist has modeled the latter situation at high count rates using Monte Carlo simulation.
Title: Re: Generalized dead times
Post by: sem-geologist on September 07, 2022, 03:01:04 PM
Very interesting...

The systematic and 100% exact answer can be hard to get to.
There is one outstanding huge problem with peering into the workings of the Cameca WDS board. Most VME boards can be exposed outside the cabinet using an extender board (a board with parallel traces, with 3x 96-pin sockets on one side and 3x 96-pin plugs on the other, so that a VME board can be completely exposed outside the electronics cabinet). I have troubleshot many problems with other boards this way, as I could watch live how the signals evolved along the path and where they failed. However, the WDS VME boards, whether old or new type, won't boot if connected to the VME backplane with the extender. Probeman, I think you have something interesting going on; maybe the gain changes the pulse width? Were both PHA peaks centered at exactly the same position (2.5 V)? Maybe this is where the previously reported different dead times for different elements come from.
If I remember correctly, you have the new WDS board. The op-amps used for signal handling and gain are high-speed AD847s (I have a picture if you want to see it). With a slew rate of 300 V/µs, they surely can't broaden the peak at higher gain. No wait, there is this AD7943 for setting the gain.
https://www.analog.com/media/en/technical-documentation/data-sheets/AD7943_7945_7948.pdf (https://www.analog.com/media/en/technical-documentation/data-sheets/AD7943_7945_7948.pdf)
It has an interesting thing at Figure 12 in the specification. So this multiplication (gain) would work differently at low gain and high gain - but what does that frequency response mean exactly? Could it broaden the pulses? This could actually be an additional source of PHA shift. I found lately (and have already posted somewhere here) that higher gain and lower bias can give less PHA shift than bias/gain set automatically, and that is interesting, as the datasheet of that multiplying DAC partly explains the observation.

But then higher gain implies it should work more linearly; yet in your graph the high-gain analyses are misbehaving, and the low-gain ones are working more consistently.

Unless these had differently centered PHA... wait, actually, even if they were both centered at exactly 2.5 V they would behave a bit differently, as Ti Ka has an Ar escape peak and Si Ka doesn't... and then again we would expect the Ar escape peak to be cut out at the PHA baseline (I still need to check whether the baseline filter applies in integral mode on the Cameca PHA electronics), and so we would expect the Ti measurements to derail at high count rate, yet it is Si which derails.

I guess Ti would derail in the same manner, probably just a bit further right, beyond the count rates covered by the experiment. Could this difference be caused by the 10% of counts in the Ar escape peak? I think the most probable place for additional count loss is the pulse-hold chip, which naturally delays the signal and has a pretty slow slew rate. The slew rate is acceptable while it follows the signal, but after holding the amplitude (it depends very much on how it is implemented - is the hold released after the ADC reads the value, or is it kept for the full set dead time?), it may not drop back to the baseline (or significantly below the top of the pulse which needs to be measured) in time. In such a situation the tandem of comparator and pulse-hold chip could miss the pulse that follows after the dead time blanking is lifted.

What is the picture on the other spectrometers? Looking at such big gain differences between the 1st, 2nd and 4th spectrometers, I guess you set your biases on the spectrometers at the same current, not at the same count rate (i.e. 10 kcps)?
Title: Re: Generalized dead times
Post by: sem-geologist on September 07, 2022, 04:04:27 PM
Also, keep in mind that SEM Geologist has modeled the latter situation at high count rates using Monte Carlo simulation.
Indeed, the Monte Carlo sim is based only on pulse pileups and a deterministic (integer) blanking dead time, and nothing else. It kind of works only for the SXFive (it should work for the new generation of Cameca WDS boards for the SX100 as well) and is not sufficient for the old WDS boards, which have some additional choke points (analog signal multiplexing, which takes 1 µs to switch between sources; in fact, because of the multiplexing, high count rates will be choked differently depending on how busy the other two spectrometer signals sharing the multiplexer to the ADC are). If anyone still uses the old VME boards on an SX100, throwing out the old WDS board and getting the new generation is the only upgrade which really brings very important changes (particularly if any large XTALs have been added and/or the differential PHA method is being used). I would do it immediately if we had the funds for our SX100. The new board has no more analog signal multiplexing to shared ADCs - every spectrometer signal has its own pipeline and its own ADC, and only the ADC-FPGA bus is shared, which is digital. Digital multiplexing (bus sharing) switches sources orders of magnitude faster than analog.
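For readers who want to experiment, here is a toy version of such a simulation (Python; illustrative only, and much cruder than the actual sim described above): Poisson pulse arrivals counted under a fixed, non-extending blanking time, compared with the analytic non-extending prediction N' = N/(1 + τN):

```python
import random

def simulate_nonextending(true_rate, tau, duration, seed=1):
    """Count Poisson pulse arrivals under a fixed (non-extending) blanking
    dead time: pulses arriving while the counter is blanked are simply
    ignored and do NOT restart the blanking interval."""
    rng = random.Random(seed)
    t = 0.0
    next_live = 0.0  # earliest time the counter can accept a pulse
    counted = 0
    while True:
        t += rng.expovariate(true_rate)  # wait for the next pulse
        if t > duration:
            break
        if t >= next_live:
            counted += 1
            next_live = t + tau
    return counted / duration

tau = 3.0e-6   # ~Cameca-like enforced (integer) dead time, seconds
N = 200_000.0  # true pulse rate, cps
measured = simulate_nonextending(N, tau, duration=2.0)
predicted = N / (1.0 + tau * N)  # analytic non-extending model: 125,000 cps
print(measured, predicted)       # measured lands within counting error
```

The simulated rate agrees with the analytic model to within counting statistics, which is the sanity check one would run before adding the hardware-specific choke points discussed above.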

Title: Re: Generalized dead times
Post by: jlmaner87 on September 08, 2022, 03:09:53 PM
Speaking of Monte Carlo pulse pileup modelling, a quick Google Scholar search provided several interesting reads. I've attached one that may spark some conversation.
Title: Re: Generalized dead times
Post by: sem-geologist on September 09, 2022, 01:15:42 AM
Speaking of Monte Carlo pulse pileup modelling, a quick Google Scholar search provided several interesting reads. I've attached one that may spark some conversation.

Thanks @jlmaner87. I am also aiming at something like that; however, my current simulation is much simpler. The pulse shape presented in the paper is different from that observed on the Cameca WDS: the paper presents monopolar pulses, whereas the WDS has bipolar pulses.  On the Cameca WDS such a monopolar pulse (albeit much shorter, with a shaping time of only 250 ns) is differentiated a second time, producing the bipolar pulse - and that has a few nice outcomes: (1) the average DC bias (with respect to common ground) is close to 0 V, so there is no current flow and the signaling is AC only; (2) there is more room for more closely packed pulses without saturating the amplifier(s), as the differentiation "narrows the pulse in half". Due to the bipolar pulses and the higher complexity, I initially had not attempted to do the simulation with detailed shapes. Revisiting it recently, I was quite surprised how well the current coarse-grained (with intervals of 1 µs and pulses aligned to a 1 µs grid) and thus over-simplified simulation could predict the count rates, in particular when changing the hardware-settable (integer) dead time.

The catch is that the integral mode actually doesn't care at all about pulse pileups (or colliding galaxies, or "coincident photons" as Probeman calls them): it makes no difference whether another pulse arrives 4 ns after the counted pulse (which technically is a pulse pileup), or 1 µs, or 2.1 µs later (which are technically blanked pulses), while we have a 3 µs (integer blanking) dead time set. In either case those pulses will be ignored and only the single first event will be registered. And yes, I was arguing the contrary in some post a month ago, saying that this exponential equation won't work because it does not account for two separate processes - I was partially wrong: those two processes are ignored by integral mode without any distinction. However, the pulse pileup process plays a crucial role in PHA differential mode, and I am sure this log equation won't work there. How can that be? Let's look at the problem from a completely different (at first glance very bizarre) perspective: in integral mode we actually do not measure the count rate, but the average time passed while the counter is armed but no count is registered - we measure pulse-free time. That will be large at low count rates and will diminish non-linearly to small values with increasing count rate. That diminishing will follow the (reversed) exponential law and will approach 0, but should never reach it. Thus in integral mode the counting is non-extending and non-paralyzable. Were it extending, it would result in paralyzable behaviour. And that is basically why the logarithmic equation of Probeman et al. kind of works in integral mode up to 450-500 kcps.
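The paralyzable/non-paralyzable distinction is easy to see numerically with the two textbook models (Python; τ and the rates are illustrative values only):

```python
import math

tau = 3.0e-6  # illustrative enforced dead time, seconds

def nonextending(N):
    # non-paralyzable: observed rate climbs monotonically toward 1/tau
    return N / (1.0 + tau * N)

def extending(N):
    # paralyzable: observed rate peaks at N = 1/tau, then collapses
    return N * math.exp(-tau * N)

for N in (1e5, 1e6, 1e7):
    print(round(nonextending(N)), round(extending(N)))
# non-extending climbs toward 1/tau = 333,333 cps and never decreases;
# extending rises, then falls toward zero at very high true rates
```

The non-extending column never decreases, matching the observation that WDS raw count rates keep creeping up even at very high beam currents, while a paralyzable system would stall.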

I think one important note, which should be added to the help and the manuals and to PfS itself, is that the log-mode dead time correction should be used only with integral PHA mode, and should not be used with diff mode. (I mean in particular when diff is a moderately sized window set to pass only some well-defined distribution, not the "universal" wide diff window, which would reveal deterioration only at very high count rates.)
Title: Re: Generalized dead times
Post by: Probeman on September 09, 2022, 08:16:18 AM
Thus in integral mode the counting is non-extending and non-paralyzable. Were it extending, it would result in paralyzable behaviour. And that is basically why the logarithmic equation of Probeman et al. kind of works in integral mode up to 450-500 kcps.

And count rates up to 300K to 400K cps are all we are claiming it is accurate to!  But that is still 10x better than the traditional expression! 

I'm beginning to think that on the Cameca instrument the photon coincidence effects dominate up to these 300K to 400k cps levels, but then *depending on the PHA gain* these pulse pileup effects become more dominant.  As previously shown here:

https://probesoftware.com/smf/index.php?topic=1489.msg11233#msg11233

I think one important note, which should be added to the help and the manuals and to PfS itself, is that the log-mode dead time correction should be used only with integral PHA mode, and should not be used with diff mode. (I mean in particular when diff is a moderately sized window set to pass only some well-defined distribution, not the "universal" wide diff window, which would reveal deterioration only at very high count rates.)

Absolutely. And in fact this is noted in the Constant K-Ratio procedure attached below in point #6.
Title: Re: Generalized dead times
Post by: Probeman on September 09, 2022, 08:35:49 AM
Unless these had differently centered PHA... wait, actually, even if they were both centered at exactly 2.5 V they would behave a bit differently, as Ti Ka has an Ar escape peak and Si Ka doesn't... and then again we would expect the Ar escape peak to be cut out at the PHA baseline (I still need to check whether the baseline filter applies in integral mode on the Cameca PHA electronics), and so we would expect the Ti measurements to derail at high count rate, yet it is Si which derails.

So, I generally run a PHA scan at one of the low count rates and another at the highest count rate, just to make sure that the PHA peak is still relatively well centered, as was shown here for Mn Ka:

https://probesoftware.com/smf/index.php?topic=1489.msg11213#msg11213

But in fact, we don't center the PHA peak at low count rates, because pulse height depression will shift the PHA peak to the left.  On Cameca instruments we adjust the PHA gain to place the PHA peak somewhat to the right of the PHA scan, as described in the pdf in the post above and shown here:

https://probesoftware.com/smf/index.php?topic=1466.msg11008#msg11008

I think we may have neglected to emphasize how important it is to attempt to keep the PHA peak relatively well centered in the PHA range.  As I showed with Anette's data in this post, the JEOL instrument can show severe pulse height depression at high count rates:

https://probesoftware.com/smf/index.php?topic=1489.msg11230#msg11230

Basically at a relatively low count rate adjust the gain until the PHA peak is around 3 or 3.5 volts on a Cameca instrument, and on a JEOL instrument around 5 or 6 volts or so.  But then also perform a PHA scan at the highest count rate just to make sure that the PHA peak is still relatively well centered.

What is the picture on the other spectrometers? Looking at such big gain differences between the 1st, 2nd and 4th spectrometers, I guess you set your biases on the spectrometers at the same current, not at the same count rate (i.e. 10 kcps)?

Yes.
Title: Re: Generalized dead times
Post by: sem-geologist on September 09, 2022, 09:16:57 AM
I'm beginning to think that on the Cameca instrument the photon coincidence effects dominate up to these 300K to 400k cps levels, but then *depending on the PHA gain* these pulse pileup effects become more dominant.
You mean it is not the same thing? And that the small time span (4-10 ns) is more important at low current than the large (1 µs sized) pulses at high current? If these are to be distinguished at all, it should at least be the other way around; as proposed it makes no logical sense. Also, the bending present at very high count rates in your plots is rather due to pulse catch mechanism sluggishness than to pileup. Let me explain how pulses are detected.

Looking at which chips are present on the boards, it is clear that counting (integral mode, or pulse sensing) uses the classical tandem of comparator and pulse-hold chip known from electronics textbooks.
The amplified (by the gain multiplier) and buffered signal (with pulses) is fed into the tandem of comparator and pulse-hold chip. The pulse-hold chip has two functions: 1) holding and outputting the captured voltage level when its hold pin is triggered; 2) delaying the signal by a fraction of a µs when its hold function is not triggered. So the pulse-hold chip has one signal input, whereas the comparator has two inputs. The raw pulse signal goes to both chips; the second input of the comparator is the delayed output of the pulse-hold chip (the same signal goes to the ADC). The comparator thus detects a pulse when its two inputs differ by some set voltage offset (it can detect the rising and falling edges of a pulse). It probably then triggers the FPGA, and the FPGA activates the hold pin of the pulse-hold chip. Everything is fine up to this point; however, to detect the next pulse, the holding function of the pulse-hold chip needs to be deactivated, and the pulse-hold chip can't instantly go to a very low voltage - it drops down quite sluggishly, with a delay, and thus the comparator can be blind to a pulse even when the FPGA is listening for the pulse trigger from the comparator (i.e. the set 3 µs has passed). This is where pulse pileup comes in: if the held pulse was a pileup at twice or more the voltage of a normal pulse, then after the hold function is released the voltage cannot drop fast enough below the voltage of the consecutive normal pulse (which would also be much lower than at normal, low count rates, due to baseline drift). Such situations would increase drastically with increasing pulse density (count rate).

There are a few important unknown details:
* Is the Cameca pulse sensing triggered by the rising edge of the pulse, or by the falling edge? (In the first case, knowing the fixed shaping time, it is very easy to catch the peak maximum value; in the second case, when pileups are present, the captured voltage could be significantly off from the real absolute peak voltage.)
* Is the "hold" pin of the pulse-hold chip kept active for the whole of the set integer dead time, or is it released as soon as the ADC reads the value?

Should I animate these principles?
Title: Re: Generalized dead times
Post by: Probeman on September 09, 2022, 09:31:14 AM
I'm beginning to think that on the Cameca instrument the photon coincidence effects dominate up to these 300K to 400k cps levels, but then *depending on the PHA gain* these pulse pileup effects become more dominant.
You mean it is not the same thing? And that the small time span (4-10 ns) is more important at low current than the large (1 µs sized) pulses at high current? If these are to be distinguished at all, it should at least be the other way around; as proposed it makes no logical sense. Also, the bending present at very high count rates in your plots is rather due to pulse catch mechanism sluggishness than to pileup.

OK, let's call it "pulse catch mechanism sluggishness".   I have zero knowledge of the electronic mechanisms (and to be honest I really am not interested in all the gritty details!   :D  ), I'm just trying to model the dead time effects mathematically (whatever they are) so we can obtain constant k-ratios for quantitative analysis on both JEOL and Cameca instruments!   :)

But why do you think the constant k-ratio plots for Ti Ka are fine up to ~180K cps, but the Si Ka plots (also up to ~180K cps but at higher PHA gain settings) are not corrected properly using the logarithmic expression?

By the way, at these count rates (and Cameca dead times) the exponential expression fails (mathematically) very quickly so that is not an option.  See the next post.
Title: Re: Generalized dead times
Post by: Probeman on September 09, 2022, 09:34:26 AM
Increasing the dead time constant using the logarithmic expression for the Si Ka k-ratios would only cause an over correction at moderate count rates. And at this dead time and count rates the exponential expression fails...

Let me emphasize that the exponential expression can only work for cases in which the correction is due to an extending dead time or pulse pileup.  If an enforced, non-extending dead time is present (as in the Cameca pulse processing circuitry), then a more involved treatment such as that of Pommé (2008) must be applied.  Also, keep in mind that SEM Geologist has modeled the latter situation at high count rates using Monte Carlo simulation.

I should have explained this better.

I'm not saying the exponential expression is not accurate (we are still evaluating the expression using both JEOL and Cameca data). What I was saying was that at sufficiently high dead times and/or count rates the exponential expression fails mathematically.

That is, if we look at the exponential expression and solve for the predicted count rate we can see that the term -dtime * cps cannot be less than -1/e:

(https://probesoftware.com/smf/gallery/395_09_09_22_8_46_54.png)

This is heavily dependent on both the count rate and the dead time constant. Here is a calculation at 1.5 usec dead time:

(https://probesoftware.com/smf/gallery/395_09_09_22_8_53_56.png)

So at 1.5 usec we are limited to around 245k cps (which is actually pretty good!).  However, on a Cameca instrument (assuming 3 usec) we are limited to 123K cps:

(https://probesoftware.com/smf/gallery/395_09_09_22_8_54_25.png)

Which is easily attained on any PET and especially LPET crystals.  Now it may be that this expression is not applicable to a Cameca instrument if indeed it exhibits purely non-extending behavior. But based on some data it appears that the Cameca may exhibit extending behavior at sufficiently high PHA gain settings, as shown here:

https://probesoftware.com/smf/index.php?topic=1489.msg11233#msg11233

But again, the exponential expression is very sensitive to the dead time constant, so at 1.1 usec we can handle count rates up to 334K cps - pretty high count rates:

(https://probesoftware.com/smf/gallery/395_09_09_22_8_54_42.png)
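These ceilings follow directly from the extending model: the observed rate N' = N*exp(-τN) peaks at a true rate of N = 1/τ, so the maximum observable rate is 1/(e·τ). A quick check (Python) reproduces the numbers above:

```python
import math

def max_observable_rate(tau):
    """Ceiling of the extending model N' = N * exp(-tau*N):
    N' peaks at N = 1/tau, giving N'_max = 1/(e*tau)."""
    return 1.0 / (math.e * tau)

for tau_us in (1.5, 3.0, 1.1):
    print(f"{tau_us} us -> {max_observable_rate(tau_us * 1e-6):,.0f} cps")
# 1.5 us -> 245,253 cps; 3.0 us -> 122,626 cps; 1.1 us -> 334,436 cps
```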

By the way, when you say the JEOL instrument exhibits extending behavior, do you mean that at sufficiently high count rates, the intrinsic dead time of the system is increasing to higher values?   Or do you mean a different dead time constant is dominant at these sufficiently high count rates?

I think that is partially what Almutairi (2019) meant in this passage:

(https://probesoftware.com/smf/gallery/395_09_09_22_9_20_58.png)

The Excel spreadsheets for these exponential examples are provided below as attachments if anyone is interested.
Title: Re: Generalized dead times
Post by: Probeman on September 09, 2022, 09:38:25 AM
We can see the limits of the exponential expression more clearly here at 1.5 usec:

(https://probesoftware.com/smf/gallery/395_09_09_22_9_35_41.png)

And here at 3.0 usec:

(https://probesoftware.com/smf/gallery/395_09_09_22_9_36_01.png)

Now this is not to say that this exponential expression is not useful in many situations (it's already been implemented in Probe for EPMA!) but it has some caveats as has been discussed. 
Title: Re: Generalized dead times
Post by: Brian Joy on September 09, 2022, 12:54:35 PM
By the way, when you say the JEOL instrument exhibits extending behavior, do you mean that at sufficiently high count rates, the intrinsic dead time of the system is increasing to higher values?   Or do you mean a different dead time constant is dominant at these sufficiently high count rates?

What I mean is that it appears that loss of X-ray counts in the JEOL pulse processing circuitry is dominated by pulse pileup and not dead time.  Pulse pileup is described mathematically in a manner equivalent to an extending dead time.  At low count rates, the pileup is described adequately by a non-extending dead time model, even though it arises due to a different mechanism.  The value of the time constant should not vary between the two models (and I’ll post more about this).  For Cameca proportional counters, the situation is more complicated.  Great care is required in examination and application of the extending dead time model (or any count rate correction model).
Title: Re: Generalized dead times
Post by: Probeman on September 09, 2022, 03:11:42 PM
What I mean is that it appears that loss of X-ray counts in the JEOL pulse processing circuitry is dominated by pulse pileup and not dead time.  Pulse pileup is described mathematically in a manner equivalent to an extending dead time.  At low count rates, the pileup is described adequately by a non-extending dead time model, even though it arises due to a different mechanism.  The value of the time constant should not vary between the two models (and I’ll post more about this).

OK.   Are you saying that what we call dead time exists solely in the electronics and not in the detector?

Or are you saying (on JEOL systems) that there appears to be a non-extending component that is dead time and an extending component that is pulse pileup?  And that at low count rates it is dominated by dead time, but at high count rates it is dominated by pulse pileup? And by dead time do you mean photon coincidence?
Title: Re: Generalized dead times
Post by: sem-geologist on September 09, 2022, 03:29:44 PM
Quote from: Monty Python
-What is the Airspeed Velocity of an Unladen Swallow?
-What do You mean? An African or European swallow?

Great care is required in examination and application of the extending dead time model (or any count rate correction model).

I have zero knowledge of the electronic mechanisms (and to be honest I really am not interested in all the gritty details!   :D  ), I'm just trying to model the dead time effects mathematically (whatever they are) so we can obtain constant k-ratios for quantitative analysis on both JEOL and Cameca instruments!   :)

If you are going to construct a mathematical model, you need all these gritty details, even if you are not interested in them (which I don't blame anyone for; it takes some nerdy passion to be interested in electronics - it is not for everyone). If you want to calculate the time a vehicle takes to go from point A to point B knowing the speed and the straight-line distance between A and B, you need to know what kind of vehicle it is. A plane will go in a straight line; a car will go on the road (so you then need additional information about the road network); a ship will go on the water; and a train will go on rails. And even if they all travel at the same velocity, they will need different kinds of additional information to tell how long it will take to go from A to B, and different corrections (e.g. a ship will need the speed of the river flow, a plane the direction and speed of the wind...).

Going back to the EPMA electronics. Whether the dead time is extendable or non-extendable is a matter of design. EDS counting circuits implement an extendable dead time, which is extended as much as needed so that a pulse that is going to be counted, and its amplitude measured, has no pulses immediately before it (i.e. is not piled up on the tail, positive or negative, of a preceding pulse). That is extendable by design, so that the measured energy of the x-ray lines does not drift with count rate as it does on the WDS PHA. Because it is extendable, it is possible to observe, with increasing current and counting rate, a decrease of the raw count rate at very high currents/count rates - which is the paralyzing behavior. It is possible to stall the counting completely by reaching 100% dead time.
We have non-extendable dead time on WDS on both Jeol and Cameca instruments. Why? 1) Because we have PHA peak shifts - again, extension of the dead time exists precisely to prevent that, and since we see the shifts it is clear that there is no extension. 2) If I increase the current, the raw count rate increases; at high current it increases very little, but it is still an increase, and no count rate decrease is observable even at >1µA beam current on a large crystal at the most intense lines. It is clearly non-paralyzable. 3) For an extendable dead time, EDS needs multiple shaping amplifiers, where one fast shaping amplifier works constantly in parallel with the main high-resolution (slower) amplifier. At least on Cameca WDS there is one and only one shaping amplifier, integrated with the charge-sensitive preamplifier in a single package connected directly to the GPC - because of that there is no way to implement (or hide away by any possible means) an EDS-like extendable dead time circuit.

Do pulse pileups have anything to do with extendable vs. non-extendable? It depends what we mean by "pulse pileup". If it is pileup on the tail (imperfect pileup, recognizable by the very fast shaping and sensing circuit of EDS), then the EDS extendable-dead-time circuit is the response to that. If instead it is perfect pileup (within the shaping time of the fast EDS circuit), the extendable EDS counting circuit fails to recognize it, and we can observe such pileups appearing on EDS at very high count rates. Can pulse pileup do anything to a non-extendable circuit? No, not at all, as by design it does not care. It was designed with the profound superstition that it is fast enough (in the 80s there were still no large diffracting crystals) and would never get into such a situation. Also, the integral counting method does not care about pulse pileups.

Additionally, it is important to understand multiplexing and its consequences (in the case of older boards using such solutions), where the dead time will depend on the count rate of the other detectors connected to the same multiplexer (and ADC). The result of the dead time correction (and the dead time during real measurements) can be completely different when using a single WDS spectrometer versus loading all WDS detectors with high count rates. Fortunately the latest generation of WDS boards on Cameca went away from that, but I am pretty sure there are still many SX100s with old WDS boards, and people should know this and its importance.


Well, there are still some hard-to-answer questions, e.g. is the Cameca integral mode a real integral mode or the same "pseudo-integral" as Jeol's, and other questions which You are stimulating my head to come up with. I came up with a plan to check that out by injecting a deterministically generated pulse train from a signal generator (unplugging the signal cable from the detector and plugging it into such a generator). Such equipment is expensive $$$$ and out of my budget, but I found out that with an improvised resistor-ladder DAC I could do it with a Raspberry Pi Pico board ($4) and a few electronic components (a fast opamp to drive the signal, $$). I will open a separate thread to show how to construct such a device, program it, and use it for this purpose. The board is able to output signals at its clock speed of 133 MHz (and I saw someone overclocking it to 250 MHz); anyway, that is more than enough to emulate nearly exact pulse shapes as emitted and fed to the WDS counting electronics from the shaping amplifier near the detector. I think this experiment will prove or disprove some of my claims, such as:
* GPCs have no dead time (if this is true, we should see exactly the same rate of missing pulses with increased pulse rate from such a generator).
* GPC pulses are precise - it is the WDS counting electronics that introduce the PHA spread (feeding artificially precise pulses with exactly the same pulse height, a PHA scan should produce a very narrow peak if this claim is wrong).
 
Title: Re: Generalized dead times
Post by: Probeman on September 09, 2022, 03:35:32 PM
We have non-extendable dead time on WDS on both Jeol and Cameca instruments. Why? 1) because we have PHA peak shifts -

I am beginning to wonder more and more if this is mostly a problem with PHA shifting...

https://probesoftware.com/smf/index.php?topic=1466.msg11247#msg11247

https://probesoftware.com/smf/index.php?topic=1489.msg11230#msg11230

But in any case you definitely win the "nerd" award!    :)
Title: Re: Generalized dead times
Post by: Brian Joy on September 09, 2022, 05:48:52 PM
I’ve attached a very readable paper by Lindstrom and Fleming (1995) in which the authors examine “intrinsic” dead time due largely to the ADC as well as pulse pileup effects in pulse processing circuits that do not contain an enforced dead time.  They note that the detector (HPGe in this case) itself contributes negligibly to the dead time.  Although the discussion focuses on behavior of a solid state detector, the principles should be applicable to proportional counters as well.

By the way, does anyone happen to have a schematic for the X-RAY CONT PB to which the JEOL pre-amplifiers send their signals?  It is missing from my book of JEOL schematics, but I fear that this might not be accidental.
Title: Re: Generalized dead times
Post by: Probeman on September 13, 2022, 09:05:19 AM
For those interested Aurelien found an early reference for the exponential dead time correction expression in a book from 1955:

R.D. Evans, The Atomic Nucleus, McGraw-Hill, New York, 1955, p 786, Eq. 1.1

Maybe there's an even earlier reference, but in any case it refers to this expression:

(https://probesoftware.com/smf/gallery/395_09_09_22_8_46_54.png)

where x is the observed count rate, y is the predicted count rate and b is the dead time constant.  When solving for the predicted count rate, W is the Lambert W function, which must be evaluated iteratively.
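For anyone who wants to try this numerically: assuming the pictured expression is the paralyzable-form relation x = y·exp(-y·b) (consistent with the Evans/Schiff discussion), the predicted rate can be recovered with a plain Newton iteration. This is a minimal sketch, not the actual Probe for EPMA implementation:

```python
import math

def predicted_rate(x, b, tol=1e-10, max_iter=100):
    """Solve x = y*exp(-y*b) for the predicted (true) rate y, given the
    observed rate x (cps) and dead time constant b (s), by Newton iteration.
    Converges to the physical (lower-branch) root, equivalent to evaluating
    the principal branch of the Lambert W function."""
    y = x  # the observed rate is a good first guess on the lower branch
    for _ in range(max_iter):
        f = y * math.exp(-y * b) - x
        fp = math.exp(-y * b) * (1.0 - y * b)  # df/dy
        step = f / fp
        y -= step
        if abs(step) < tol * max(y, 1.0):
            break
    return y
```

For example, with b = 3 µs an observed rate of about 74,082 cps corresponds to a predicted rate of 100,000 cps.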
Title: Re: Generalized dead times
Post by: Brian Joy on September 13, 2022, 06:04:02 PM
For those interested Aurelien found an early reference for the exponential dead time correction expression in a book from 1955:

R.D. Evans, The Atomic Nucleus, McGraw-Hill, New York, 1955, p 786, Eq. 1.1

Maybe there's an even earlier reference, but in any case it refers to this expression:

(https://probesoftware.com/smf/gallery/395_09_09_22_8_46_54.png)

where x is the observed count rate, y is the predicted count rate and b is the dead time constant.  When solving for the predicted count rate, W is the Lambert W function, which must be evaluated iteratively.

One of the earliest references is Schiff (1936, Physical Review 50:88-96); I've attached it.
Title: Re: Generalized dead times
Post by: Probeman on September 14, 2022, 09:00:35 AM
One of the earliest references is Schiff (1936, Physical Review 50:88-96); I've attached it.

Nice find. 

Interesting that this exponential expression appeared a year before the linear expression (Ruark, 1937) that is traditionally utilized today.
Title: Re: Generalized dead times
Post by: Probeman on September 23, 2022, 01:25:29 PM
We have non-extendable dead time on WDS on both Jeol and Cameca instruments. Why? 1) Because we have PHA peak shifts - again, extension of the dead time exists precisely to prevent that, and since we see the shifts it is clear that there is no extension. 2) If I increase the current, the raw count rate increases; at high current it increases very little, but it is still an increase, and no count rate decrease is observable even at >1µA beam current on a large crystal at the most intense lines. It is clearly non-paralyzable. 3) For an extendable dead time, EDS needs multiple shaping amplifiers, where one fast shaping amplifier works constantly in parallel with the main high-resolution (slower) amplifier. At least on Cameca WDS there is one and only one shaping amplifier, integrated with the charge-sensitive preamplifier in a single package connected directly to the GPC - because of that there is no way to implement (or hide away by any possible means) an EDS-like extendable dead time circuit.

So you agree that WDS dead time is non-extending? I agree this would seem to be true by definition, since all WDS systems count only for exactly as long as the specified count time.  But then why does Brian make this claim:

What I mean is that it appears that loss of X-ray counts in the JEOL pulse processing circuitry is dominated by pulse pileup and not dead time.  Pulse pileup is described mathematically in a manner equivalent to an extending dead time.

How can pulse pileup in a (JEOL) WDS system equate to an extending dead time model when the count time is fixed? Is the (JEOL) pulse processing electronics "saving" pulses to be counted later?  But then you go on to say:

Do pulse pileups have anything to do with extendable vs. non-extendable?... Can pulse pileup do anything to a non-extendable circuit? No, not at all, as by design it does not care. It was designed with the profound superstition that it is fast enough (in the 80s there were still no large diffracting crystals) and would never get into such a situation. Also, the integral counting method does not care about pulse pileups.

I agree with this, but then why does Brian say the JEOL WDS system is extending?  How could it be different from the Cameca?  Because of the "enforced" dead time of the Cameca electronics?  I am somewhat confused by these seemingly conflicting statements.

I would very much like to see you and Brian discuss this question!

Well, there are still some hard-to-answer questions, e.g. is the Cameca integral mode a real integral mode or the same "pseudo-integral" as Jeol's, and other questions which You are stimulating my head to come up with. I came up with a plan to check that out by injecting a deterministically generated pulse train from a signal generator (unplugging the signal cable from the detector and plugging it into such a generator). Such equipment is expensive $$$$ and out of my budget, but I found out that with an improvised resistor-ladder DAC I could do it with a Raspberry Pi Pico board ($4) and a few electronic components (a fast opamp to drive the signal, $$). I will open a separate thread to show how to construct such a device, program it, and use it for this purpose. The board is able to output signals at its clock speed of 133 MHz (and I saw someone overclocking it to 250 MHz); anyway, that is more than enough to emulate nearly exact pulse shapes as emitted and fed to the WDS counting electronics from the shaping amplifier near the detector. I think this experiment will prove or disprove some of my claims, such as:
* GPCs have no dead time (if this is true, we should see exactly the same rate of missing pulses with increased pulse rate from such a generator).
* GPC pulses are precise - it is the WDS counting electronics that introduce the PHA spread (feeding artificially precise pulses with exactly the same pulse height, a PHA scan should produce a very narrow peak if this claim is wrong).

These experiments should be performed on both the Cameca and JEOL electronics so we can gain a better understanding of these "black boxes" that we depend on so much!
Title: Re: Generalized dead times
Post by: sem-geologist on September 25, 2022, 03:45:12 AM
So you agree that WDS dead time is non-extending? I agree this would seem to be true by definition, since all WDS systems count only for exactly as long as the specified count time.  But then why does Brian make this claim:

It is not that I agree or don't agree (that is not a matter of agreement).
1) On Cameca instruments I am 100% sure it is non-extendable, as I am fully aware of how the hardware is built; on Jeol I argue from secondary observations (strong shifts of the PHA) that it is designed with very similar hardware (and is missing the hardware part needed for extension of the dead time - otherwise there would be no PHA shifts). However, the Jeol probably does have some unintended paralyzable behavior, misidentified as "extension", from mechanisms/processes which are not present on the Cameca instrument.
2) Before we dwell further, we need to recognize that most of the dead time we observe on these instruments is intentionally designed to be there, and it covers over (with huge overlap) the unintentional dead time (the missing counts from other processes which creep into the signal processing depending on count rate)... I think the confusion comes from the fact that all systems (EDS and WDS) enforce some dead time in different ways, but they do it for somewhat different reasons, and thus it is more (EDS) or less (WDS) complicated/advanced.
3) Let's look at the EDS "enforced" dead time. The main reason for the EDS enforced dead time is energy accuracy. The counting system looks at all pulses (with a very fast but low-resolution pulse shaping amplifier) in parallel with the main (high-resolution) shaping amplifier, and rejects the currently processed pulse if any (accepted or rejected) pulse was close enough before it to overlap in any way with the currently processed pulse. As such a counter keeps track of all incoming pulses, it will keep rejecting pulses perpetually unless there is enough space before the current pulse that its amplitude can be guaranteed to be accurate. That ability to keep rejecting pulses until the height of an incoming pulse can be guaranteed accurate - that is what makes the dead time extendable by hardware design.
4) WDS could look similar at first glimpse, as it also "enforces" some dead time. But it does it differently: a) it enforces the dead time after sensing a pulse and blinds itself to any incoming pulses during that dead time (see the difference: EDS does not blind itself on the fast track, so it can keep note of all incoming pulses, whereas WDS blinds itself completely); b) it could look like the reason is similar to EDS - a simplified attempt to avoid counting a pulse arriving right after the sensed pulse (as there is normally a negative tail, thus preventing a pulse overlapped with that tail from being counted), so that only pulses with accurate height would be counted. I initially thought that would be the reason - but it fails completely, as the system has no idea what happened before the sensed pulse (and thus we see PHA shifts on both Cameca and Jeol). Basically, if it sees a pulse, it holds the pulse and blinds itself (whether the pulse is accepted or rejected by the PHA) for the "enforced" amount of time. c) I think the main reason for the WDS "enforced" dead time is not accuracy (which we know fails miserably) but to have a predictable dead time and to overcome the bottleneck of several spectrometers sharing part of the pipeline. For example, on Cameca SX old WDS boards that is up to 3 spectrometers, where the analog pulse signal is multiplexed to a single shared ADC - and the multiplexer requires 1µs to switch! Setting the dead time to anything below 3µs with all three spectrometers at high count rates would not decrease the dead time. On new WDS boards the multiplexing is shifted to the digital domain (switching can be done at 50 MHz) on a single digital bus (all five spectrometers); there, setting the "enforced" dead time below 3µs shows a huge difference in count rates, even when all spectrometers are nearly fully saturated. Still, because of the multiplexing it should not be set below 1µs (and thus it is blocked from doing so), as the dead time would start to "float" depending on the count rate of the other spectrometers.

So it is not that "WDS systems count only for exactly as long as the specified count time" - You can actually force most EDS systems to count for real time rather than live time, which would make them the same from that perspective. No, it is so because of the different counting design and hardware. But why does Brian bring in extending dead time? Probably there is a misunderstanding of extending vs. non-extending and paralyzing vs. non-paralyzing. I think Jeol is at a disadvantage, and I think You uncovered the reason in your other thread showing that Jeol is much more affected by PHA shifts than Cameca instruments, which introduces paralyzable behavior where more and more pulses are rejected by the PHA baseline. I actually could simulate paralyzable behavior in my Monte-Carlo simulation for diff mode (which demonstrates that diff mode is very unsuitable for high count rates) - rejection by the baseline would be similar.
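Such a Monte-Carlo check can be sketched in a few lines (a toy model with invented parameters, not the actual simulation code): generate Poisson arrival times and count them through an idealized non-extending or extending dead time.

```python
import random

def observed_rate(true_rate, dead_time, duration=1.0, extending=False, seed=1):
    """Count Poisson arrivals at `true_rate` (cps) through an idealized
    dead time (s): the non-extending counter simply ignores arrivals during
    the dead period, while the extending counter lets every arrival (seen
    or not) restart the dead period."""
    rng = random.Random(seed)
    t, counted, blocked_until = 0.0, 0, -1.0
    while True:
        t += rng.expovariate(true_rate)    # time of the next photon
        if t > duration:
            break
        if t >= blocked_until:
            counted += 1                   # pulse is recorded
            blocked_until = t + dead_time
        elif extending:
            blocked_until = t + dead_time  # unseen pulse still extends the dead period
    return counted / duration
```

At an input of 200 kcps with a 3 µs dead time this lands close to the textbook expectations: about 125 kcps observed for the non-extending case (n/(1+nτ)) and about 110 kcps for the extending case (n·e^(-nτ)).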

How can pulse pileup in a (JEOL) WDS system equate to an extending dead time model when the count time is fixed? Is the (JEOL) pulse processing electronics "saving" pulses to be counted later?  But then you go on to say:

Again, "fixing the counting time" has nothing to do with extending vs. non-extending. As JEOL sees more pulses rejected by the PHA baseline with increasing count rate, due to severe broadening and shifting of the PHA, it starts to exhibit paralyzable behaviour. That has nothing to do with extension of the dead time, as the hardware is blind to any pulse pileup and doesn't care (the same as I wrote before).

I agree with this, but then why does Brian say the JEOL WDS system is extending?  How could it be different from the Cameca?  Because of the "enforced" dead time of the Cameca electronics?  I am somewhat confused by these seemingly conflicting statements.

First, I believe the Jeol has an "enforced" dead time - the difference from Cameca is that on Jeol it is cut in stone, whereas on Cameca it is user-settable with a low boundary of 1µs (to prevent the dead time from "floating" due to multiplexing) and a high boundary of 255µs (the max of 8 bits). Anyway, as by default it is set to 3µs and most users don't change it - that will produce less PHA shift than on Jeol. Another reason is that the Jeol gain circuit looks rubbish (sorry), and setting the PHA peak position centrally by changing the bias is not the best idea (countering PHA shift by increasing the bias just increases the very cause of the shift). Lastly - I am not sure about this, but I am nearly ready to test it - I think Jeol has a "pseudo"-integral mode, whereas Cameca has a real integral mode for counting, and that would make the huge difference, introducing the paralyzable behaviour on Jeol with no paralyzable behaviour observed on Cameca.

The biggest confusion comes from mixing the "extending"/"non-extending" and "paralyzing"/"non-paralyzing" terminology.
They are not synonymous; the misunderstanding arises because an extending dead time produces paralyzable behaviour. But it is not the same the other way around!

If it comes to the mathematics: an extendable dead time will turn the input count rate vs. observed count rate curve over at some point, and the observed rate will drop and drop until it reaches 0 output counts at extremely high input count rates - which would be 100% dead time on the EDS.
Clearly this is extendable and paralyzing.
In comparison, on WDS with a non-extendable dead time we can also see paralyzable behaviour at some point - in particular when using diff PHA mode. However, that paralyzable behaviour won't lead to 0 cps at extremely high input count rates - it will never drop that far, as it is only an additional mechanism blocking some, but not all, pulses. It will start dropping, but after some point it will reach a plateau. If paralyzing behaviour is observed on any detector, going above that point is absolutely bad, as it is then impossible to calculate the real count rate (it can lie on either side of the parabolic-like curve). EDS gets away with this because, by tracking the dead time (it measures all incoming pulses), it knows which side of the curve it is on. WDS, not tracking the total number of pulses, is blind, and the ambiguity cannot be resolved.
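The two textbook response curves being contrasted here can be written down directly - a sketch using the standard idealized models, with m = observed rate, n = input rate, and an assumed τ = 3 µs for illustration:

```python
import math

TAU = 3e-6  # assumed 3 µs dead time, for illustration only

def m_non_extending(n, tau=TAU):
    """Non-paralyzable (non-extending): m = n/(1 + n*tau).
    Rises monotonically and saturates toward 1/tau; never turns back down."""
    return n / (1.0 + n * tau)

def m_extending(n, tau=TAU):
    """Paralyzable (extending): m = n*exp(-n*tau).
    Peaks at n = 1/tau (where m = 1/(e*tau)), then falls toward zero."""
    return n * math.exp(-n * tau)
```

With τ = 3 µs the extending curve tops out near 123 kcps observed (at an input of ~333 kcps) and collapses beyond that, while the non-extending curve keeps creeping toward 333 kcps - so for an extending counter, every observed rate below the maximum corresponds to two possible input rates, which is exactly the ambiguity described in the previous paragraph.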

As for the experiment, I have access only to Cameca instruments. As soon as I have something to share I will do so; hopefully someone owning a Jeol probe will feel adventurous and knowledgeable enough (connecting the earth/ground clip of an oscilloscope to the wrong place can instantly fry the boards - be warned!) to do such experiments on a Jeol.

P.S. The "double track" pipelines on EDS described above apply to previous detector generations. The newest generation of EDS detectors most probably no longer does double tracking but resolves the pileups with terrifically beefy digital signal processing on FPGAs (I am aware that some EDS vendors have moved there - the outcome is terrific: You won't see any pulse pileups even at 90% dead time!!!). That is where I would like to go with WDS too.
Title: Re: Generalized dead times
Post by: Probeman on September 25, 2022, 08:53:42 AM
First, I believe the Jeol has an "enforced" dead time - the difference from Cameca is that on Jeol it is cut in stone, whereas on Cameca it is user-settable with a low boundary of 1µs (to prevent the dead time from "floating" due to multiplexing) and a high boundary of 255µs (the max of 8 bits). Anyway, as by default it is set to 3µs and most users don't change it - that will produce less PHA shift than on Jeol. Another reason is that the Jeol gain circuit looks rubbish (sorry), and setting the PHA peak position centrally by changing the bias is not the best idea (countering PHA shift by increasing the bias just increases the very cause of the shift). Lastly - I am not sure about this, but I am nearly ready to test it - I think Jeol has a "pseudo"-integral mode, whereas Cameca has a real integral mode for counting, and that would make the huge difference, introducing the paralyzable behaviour on Jeol with no paralyzable behaviour observed on Cameca.

The biggest confusion comes from mixing the "extending"/"non-extending" and "paralyzing"/"non-paralyzing" terminology. They are not synonymous; the misunderstanding arises because an extending dead time produces paralyzable behaviour. But it is not the same the other way around!

If it comes to the mathematics: an extendable dead time will turn the input count rate vs. observed count rate curve over at some point, and the observed rate will drop and drop until it reaches 0 output counts at extremely high input count rates - which would be 100% dead time on the EDS. Clearly this is extendable and paralyzing.

Thank you for your thoughts on this very complicated topic.  It will be interesting to see further results from your investigations.

We have additional recent PHA data from Anette's JEOL instrument that I will be posting soon.  This data should be compared to the PHA data from my Cameca instrument which is here:

https://probesoftware.com/smf/index.php?topic=1466.msg11271#msg11271

But it would be nice to see some PHA data from your instrument, as well as some constant k-ratio measurements!

As for the fact that Cameca users tend to adjust their PHAs by setting the bias to a fixed number and adjusting the gain to center the PHA peak, while JEOL users tend to do the opposite (set the gain and adjust the bias to center the PHA peak) - I don't know how to modify that behavior, as the gain settings on the JEOL are in 2x increments, which is very coarse.

Anette is also going to post more detailed information on JEOL PHA behavior as she tried centering her PHA peaks at various gain settings.  She has a lot of data to share.
Title: Re: Generalized dead times
Post by: Probeman on September 25, 2022, 09:31:52 AM
Maybe we can test some of these ideas regarding paralyzable/non-paralyzable with just looking at the raw data?  I plotted raw count (observed) rates from my LTAP and Anette's TAPL on Si metal and of course the Cameca is "topping out" at a lower count rate due to its higher nominal dead times (JEOL = ~1.5 usec vs. Cameca = ~3 usec) but perhaps if we keep going...

(https://probesoftware.com/smf/gallery/395_25_09_22_9_22_52.png)

Clearly this graph doesn't take it far enough, but perhaps if we continue to increase the beam current we can see if the behavior at even higher count rates produces a different response?

But remember, one must carefully adjust their PHA settings to keep their PHA peak above the baseline to avoid affecting the measurement.  I suggest performing PHA scans at several beam currents over the range being utilized for these tests.  The JEOL seems to be more susceptible to pulse height depression so it is especially important for that instrument to monitor the PHA peak as the beam current is increased.

Another problem specific to JEOL instruments is that if one is adjusting the bias voltage to compensate for pulse height depression (as is usually the case) we have to ask ourselves, could different bias voltages produce different dead times, or are these effects minor compared to the dead times of the pulse processing electronics?
Title: Re: Generalized dead times
Post by: sem-geologist on September 25, 2022, 09:24:59 PM
Maybe we can test some of these ideas regarding paralyzable/non-paralyzable with just looking at the raw data?  I plotted raw count (observed) rates from my LTAP and Anette's TAPL on Si metal and of course the Cameca is "topping out" at a lower count rate due to its higher nominal dead times (JEOL = ~1.5 usec vs. Cameca = ~3 usec) but perhaps if we keep going...

Now, about "topping out": that is the wrong term - it should be "flattening out".
As far as I remember from other posts, your SX100 has the new type of WDS card. Set the (hardware) DT to 1µs - You will see it gives more benefits in integral mode at high count rates than drawbacks (it is not the same as the old board - not the same experience). It will move the slope-flattening toward much higher currents (and higher count rates) and thus will produce less uncertainty in the 100-200nA range. With 1µs set as the (hardware) DT, at least on our SXFiveFE, I have not seen the count rate go down even going all the way up to 1000nA. The problem is that it starts to be useless at such conditions: due to the very flat slope, recalculating the measured count rate to the input count rate, even with the most perfect equations, carries a very large uncertainty (the calculated input count rate becomes extremely sensitive to the measured raw count rate).
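This flat-slope problem can be put into numbers. For the simple non-extending model m = n/(1 + n·τ), the inverse slope dn/dm = (1 + n·τ)² is the factor by which an error in the measured rate is amplified in the recovered true rate - a quick sketch, assuming that idealized model:

```python
def error_amplification(n, tau):
    """For m = n/(1 + n*tau), the inverse slope is dn/dm = (1 + n*tau)**2:
    the factor by which a small error in the observed rate m is magnified
    in the recovered input rate n."""
    return (1.0 + n * tau) ** 2
```

At an input of 100 kcps with τ = 3 µs the amplification is (1.3)² ≈ 1.7x, but at 1 Mcps it is (4)² = 16x - so even a perfect correction equation returns an increasingly uncertain input rate as the curve flattens.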

Another problem specific to JEOL instruments is that if one is adjusting the bias voltage to compensate for pulse height depression (as is usually the case) we have to ask ourselves, could different bias voltages produce different dead times, or are these effects minor compared to the dead times of the pulse processing electronics?

I often forget that we talk about different dead times. From Your "dead time" position (a single constant including everything) - yes, absolutely. From my point of view, looking at the dead time of "this part" and the dead time of "that part", the answer is - not at all. The missing counts (which make us think the detector is blind) are not due to the bias but due to the PHA baseline; the reduction is due to rejected pulses, not unseen pulses. The PHA is not dead at all when it does this - it does it with complete premeditation. The problem is that increasing the bias allows centering a pulse whose baseline (the bottom of the pulse) has shifted to negative voltages - by increasing the bias, the PHA spectrum is not only shifted but also zoomed in, which enhances the pulse broadening (there are also other causes of pulse broadening, and increasing the bias just makes them more pronounced), and with increasing bias for centering at high count rates, the left side of the PHA distribution gets more and more rejected by the PHA baseline.

"Rejected"... For a moment I thought, "wait a minute, maybe I am wrong about extending vs. paralyzing, as the counter on EDS is also rejecting" - this looks like some opening for the "extension" idea to reappear, but no. After a WDS PHA pulse rejection there is still a follow-up of fixed-time blinding of the counting electronics, whereas on EDS, despite a pulse being rejected, the counting system stays focused on all incoming pulses. So the math for extending dead time (borrowed from EDS) is completely unfit to be tossed into the equation for Jeol WDS counting dead time. It should be something else.
Title: Re: Generalized dead times
Post by: sem-geologist on September 25, 2022, 10:58:15 PM
And we come again to the problem of PHA distribution shift... As far as I could find out, it is caused by at least two processes, where the mild shift can be mitigated up to raw 100kcps (until the other process kicks in and overwhelms it with a real SHIFT of everything). Actually, that correctable (mild) process is not a shift at all - fortunately, it is only a downsizing of the pulse!

I had already posted these before but no one commented... so look:
(https://probesoftware.com/smf/gallery/1607_05_05_22_8_00_50.bmp)
The description under the picture: Auto PHA sets a low gain and a high bias (supposing that gas amplification produces less noise than semiconductor amplification (please, have a bit more faith in modern electrical engineering)). At higher count rates that causes a slower charging step transition (longer delta-t between cascades) at the charge-sensitive preamplifier's feedback capacitor (working closer to its fully charged state), and the signal, further translated by the CR differentiator (OPAMP-based, called the shaping amplifier), appears as a lower-amplitude pulse than it would at lower count rates.

So my mitigation for the PHA downsizing problem is to lower the bias (and increase the gain a lot - look, it is near max (12 bits = 4096)); then, going up to 100kcps (raw counts), I don't need to touch the gain at all:
(https://probesoftware.com/smf/gallery/1607_05_05_22_8_05_38.bmp)

Now I am wondering how far such strategy could work for JEOL...
Title: Re: Generalized dead times
Post by: Probeman on September 26, 2022, 09:34:48 AM
And we come again to the problem of PHA distribution shift... As far as I could find out, it is caused by at least two processes, where the mild shift can be mitigated up to raw 100kcps (until the other process kicks in and overwhelms it with a real SHIFT of everything). Actually, that correctable (mild) process is not a shift at all - fortunately, it is only a downsizing of the pulse!

I see two effects in your plots above when going from low to high count rates, and also in my own data as shown here:

https://probesoftware.com/smf/index.php?topic=1466.msg11271#msg11271

First the downward shift in the PHA peak position at higher count rates and second the broadening of the right side of the peak also at higher count rates. Any idea why we start seeing that "shelf" on the right side at higher count rates?

Note also that I was able to keep both the bias and gain the same on my data from 10 to 200 nA, which for Mn Ka on Spc 2, LPET at 200 nA on Mn metal is 260 kcps!  For the purposes of calibrating the dead times I think we should try to keep both the bias voltage and the gain constant if possible (control our dependent variables!).

So why do you "mitigate" pulse height depression by decreasing the bias and increasing the gain (a lot)? What is your thinking on this? Why not simply keep the bias at a normal value and just increase the gain until the PHA peak is around 3 volts or so at a low count rate?  Then when the count rate is increased up to 260 kcps or more, the peak is still well above the baseline.  At least on Cameca instruments! 

Yes, the JEOL instrument may be unable to keep the bias voltage constant (over a large range of count rates) due to their very coarse gain settings, but why change the bias voltage at all on the Cameca?

I am not using auto PHA, because I want to adjust things manually. May we see your PHA scans at count rates greater than 100 kcps?

Now I am wondering how far such a strategy could work for JEOL...

Good question.  To add some data to our speculations, here are some PHA scans from Anette's most recent run. First here is a normal TAP spectrometer at 10 nA:

(https://probesoftware.com/smf/gallery/395_26_09_22_9_24_10.png)

and now at 120 nA:

(https://probesoftware.com/smf/gallery/395_26_09_22_9_24_40.png)

Not very pretty I know, but if JEOL's integral mode works as expected(?) we are hopefully not losing any counts on the high side. By the way, the above PHA scan was on Si metal and, extrapolating from 10 nA, it should have a predicted count rate of around 334 kcps at 120 nA. 

Now let's look at the TAPL crystal (hold on to your seats!), first at 10 nA:

(https://probesoftware.com/smf/gallery/395_26_09_22_9_24_58.png)

and now at 120 nA:

(https://probesoftware.com/smf/gallery/395_26_09_22_9_25_15.png)

Pretty ugly but at least the PHA peak is above the baseline!  By the way, again extrapolating from 10 nA, the predicted count rate for this TAPL crystal at 120 nA is 894 kcps!!!!
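For reference, the linear extrapolation behind those predicted rates is just a ratio of beam currents. A minimal Python sketch (note: the 10 nA rates below are back-calculated from the predicted values quoted above, so they are illustrative assumptions, not measured numbers):

```python
# Linear extrapolation of an observed low-current count rate to a higher
# beam current, i.e. what the rate would be with zero dead time losses.
def predicted_cps(rate_at_ref_cps, ref_na, target_na):
    """Scale a count rate linearly with beam current."""
    return rate_at_ref_cps * target_na / ref_na

# TAP on Si metal: ~27,800 cps at 10 nA -> ~334 kcps at 120 nA
print(predicted_cps(27_800, 10, 120))   # 333600.0
# TAPL on Si metal: ~74,500 cps at 10 nA -> ~894 kcps at 120 nA
print(predicted_cps(74_500, 10, 120))   # 894000.0
```

Any shortfall of the observed rate below this linear prediction is what the dead time correction has to make up.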
Title: Re: Generalized dead times
Post by: sem-geologist on September 26, 2022, 04:35:55 PM
I see two effects in your plots above when going from low to high count rates, and also in my own data as shown here:

https://probesoftware.com/smf/index.php?topic=1466.msg11271#msg11271

First the downward shift in the PHA peak position at higher count rates and second the broadening of the right side of the peak also at higher count rates. Any idea why we start seeing that "shelf" on the right side at higher count rates?


The shift, broadening and "shelf" are all due to the same process - pulse pileup; I have already discussed this in other posts, so sorry for repeating myself. The "shelf" represents doubly piled-up pulses (thus double energy). The broadening comes from imperfect positive and negative+positive pileups dominating the piled-up fraction of pulses; in my case Ar escape pulses also pile up with every other pulse combination (as it is Ti Ka). The shift is due to the pulses being bipolar (in a bipolar pulse the voltage rises to a positive peak, then falls, overshoots 0 V into a wide negative excursion, and forms a negative tail that slowly returns to 0 V). Because the negative tail is much longer than the positive pulse, with increasing pileup many pulses start on such a negative tail, or on a doubly deep negative tail, or even deeper - thus the PHA sees a shift as a function of increasing pulse density (since the PHA measures the difference between the top of the pulse and 0 V).
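To illustrate that last point (pulses riding on the summed negative tails of earlier pulses), here is a toy Monte Carlo sketch. The pulse height, tail depth and tail time constant are invented for demonstration and are not the instrument's actual values:

```python
import math
import random

# Toy model (assumed numbers, not the real pulse shape): each detected
# pulse has a nominal +5 V peak followed by a long negative tail that
# recovers exponentially toward 0 V. At high count rates a new pulse
# often starts on the summed tails of earlier pulses, so the PHA (which
# measures the pulse top relative to 0 V) sees a reduced, shifted height.

PEAK_V = 5.0       # nominal pulse height (assumption)
TAIL_V = -1.0      # tail depth just after a pulse (assumption)
TAIL_TAU = 20e-6   # tail recovery time constant, seconds (assumption)

def mean_apparent_height(rate_cps, n_pulses=20_000, seed=1):
    """Mean PHA-measured pulse height at a given mean count rate."""
    rng = random.Random(seed)
    t = 0.0
    recent = []      # arrival times of pulses whose tails still matter
    total = 0.0
    for _ in range(n_pulses):
        t += rng.expovariate(rate_cps)        # Poisson arrival times
        recent = [t0 for t0 in recent if t - t0 < 10 * TAIL_TAU]
        baseline = sum(TAIL_V * math.exp(-(t - t0) / TAIL_TAU)
                       for t0 in recent)      # summed negative tails
        total += PEAK_V + baseline            # top measured relative to 0 V
        recent.append(t)
    return total / n_pulses

print(mean_apparent_height(1_000))     # ~5 V: tails rarely overlap
print(mean_apparent_height(200_000))   # well below 5 V: visible PHA shift
```

Even this crude model reproduces the qualitative behavior described above: the apparent peak position drops as the pulse density increases, with no change to the true pulse amplitude.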

So why do you "mitigate" pulse height depression by decreasing the bias and increasing the gain (a lot)? What is your thinking on this? Why not simply keep the bias at a normal value and just increase the gain until the PHA peak is around 3 volts or so at a low count rate?  Then when the count rate is increased up to 260 kcps or more, the peak is still well above the baseline.  At least on Cameca instruments! 


1) This mitigation is very beneficial for differential PHA mode for minimizing higher order lines (elimination is impossible up to the 4th order with current hardware - thus only minimization)! 2) A lower bias will age the counter more slowly! 3) There are actually no "normal" values - in integral mode you get exactly the same number of counts with lower bias and higher gain as with higher bias and lower gain. So the usual values are not normal, they are canonical - settled by a tradition of fear: fear of analog electronics noise. There is, however, a lower threshold to how far the bias can be dropped; below that value the count rate starts to decrease. For high pressure spectrometers it should not go below 1600-1650 V (for Cameca spectrometers); the threshold can be found experimentally by setting the maximum gain and lowering the bias until the raw count rate starts to drop. The PHA demonstration I shared was pushed to an extreme (very near that threshold) to showcase the mitigation of the "pulse amplitude downsizing caused by increased average load on the feedback capacitor of the charge-sensitive preamplifier".
Title: Re: Generalized dead times
Post by: Probeman on September 27, 2022, 09:10:16 AM
I see two effects in your plots above when going from low to high count rates, and also in my own data as shown here:

https://probesoftware.com/smf/index.php?topic=1466.msg11271#msg11271

First the downward shift in the PHA peak position at higher count rates and second the broadening of the right side of the peak also at higher count rates. Any idea why we start seeing that "shelf" on the right side at higher count rates?

The shift, broadening and "shelf" are all due to the same process - pulse pileup; I have already discussed this in other posts, so sorry for repeating myself. The "shelf" represents doubly piled-up pulses (thus double energy). The broadening comes from imperfect positive and negative+positive pileups dominating the piled-up fraction of pulses; in my case Ar escape pulses also pile up with every other pulse combination (as it is Ti Ka). The shift is due to the pulses being bipolar (in a bipolar pulse the voltage rises to a positive peak, then falls, overshoots 0 V into a wide negative excursion, and forms a negative tail that slowly returns to 0 V). Because the negative tail is much longer than the positive pulse, with increasing pileup many pulses start on such a negative tail, or on a doubly deep negative tail, or even deeper - thus the PHA sees a shift as a function of increasing pulse density (since the PHA measures the difference between the top of the pulse and 0 V).

OK, thanks. That sounds quite reasonable. And I note that we see the same sort of "shelving" in the JEOL PHA scans at higher count rates, even though the peaks are much broader for some reason:

https://probesoftware.com/smf/index.php?topic=1489.msg11281#msg11281

Any idea why the JEOL PHA peaks are so broad?

So why do you "mitigate" pulse height depression by decreasing the bias and increasing the gain (a lot)? What is your thinking on this? Why not simply keep the bias at a normal value and just increase the gain until the PHA peak is around 3 volts or so at a low count rate?  Then when the count rate is increased up to 260 kcps or more, the peak is still well above the baseline.  At least on Cameca instruments! 


1) This mitigation is very beneficial for differential PHA mode for minimizing higher order lines (elimination is impossible up to the 4th order with current hardware - thus only minimization)! 2) A lower bias will age the counter more slowly! 3) There are actually no "normal" values - in integral mode you get exactly the same number of counts with lower bias and higher gain as with higher bias and lower gain. So the usual values are not normal, they are canonical - settled by a tradition of fear: fear of analog electronics noise. There is, however, a lower threshold to how far the bias can be dropped; below that value the count rate starts to decrease. For high pressure spectrometers it should not go below 1600-1650 V (for Cameca spectrometers); the threshold can be found experimentally by setting the maximum gain and lowering the bias until the raw count rate starts to drop. The PHA demonstration I shared was pushed to an extreme (very near that threshold) to showcase the mitigation of the "pulse amplitude downsizing caused by increased average load on the feedback capacitor of the charge-sensitive preamplifier".

I get that this would reduce spectral interferences from high order reflections, but why not just use the spectral interference correction in PeakSight?

I know the interference correction in PeakSight is a bit of a pain to use compared to the quantitative interference correction in Probe for EPMA, but at least it provides a full correction for all interferences including first order interferences.

https://probesoftware.com/smf/index.php?topic=69.0

My advice to all my users and students is: adjust your PHA gain so the peak is well centered (maybe slightly to the left of center on a high concentration standard, so there's room for it to shift to the right at lower count rates), and use integral mode and keep the baseline under 0.5 volts.  This way the spectrometer response will be quite linear no matter what the count rate is.

In other words, let all the x-rays in, and correct for spectral interferences as God intended   :D   using a quantitative interference correction:

https://probesoftware.com/smf/index.php?topic=69.msg1189#msg1189

Having a linear spectrometer response is of course essential for the constant k-ratio calibration for dead time:

https://probesoftware.com/smf/index.php?topic=1466.msg11102#msg11102

Have you had a chance to acquire some constant k-ratios on your instruments?
Title: Re: Generalized dead times
Post by: sem-geologist on September 28, 2022, 04:18:58 AM
Any idea why the JEOL PHA peaks are so broad?

Because increasing the bias to keep the main PHA peak at the same position actually performs a "zooming-in" procedure (increasing the gain would do the same zooming magic). Increasing the bias increases the amplification and thus the real amplitude of the pulses. What hides this is that both the Cameca and JEOL PHA circuits measure not the real amplitude of the pulse but the voltage from the top of the pulse relative to 0 V (and at high count rates a pulse often starts at a negative voltage). So let's say a pulse is 5 V high +/- 0.5 V at a low count rate (the values are chosen for demonstration and are not the exact proportions seen on the instruments). At a very high rate such a typical pulse starts at about -2.5 V, and the PHA, measuring its top, sees (and plots) only 2.5 V, while the spread is still about +/- 0.5 V. Now the JEOL user increases the bias to move that 2.5 V PHA peak "back" to 5 V. What happens behind the user's observable PHA graph, and slips by unnoticed, is that the pulse amplitude has just been doubled around 0 V, symmetrically to both sides (+V and -V), which also enlarges the negative after-pulse tails (and thus the whole negative part of the signal). So at that high count rate the pulses no longer start at an average of -2.5 V but at an average of -5 V. The real amplitude has been doubled (from 5 V to 10 V), and the distribution looks broadened because the zooming has also doubled the initial uncertainty of +/- 0.5 V into +/- 1 V. The end result: the user, having no idea that the baseline of the pulses is far down in negative voltage, sees the same centered peak, but much broader: 5 V +/- 1 V. And of course pileups add some more broadening on top of that.
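The arithmetic of that demonstration can be written out explicitly (same illustrative numbers as above, not instrument measurements):

```python
# A 5 V +/- 0.5 V pulse that starts on a -2.5 V tail is measured by the
# PHA (top of pulse relative to 0 V) as only 2.5 V. Re-centering the
# PHA peak at 5 V by raising bias/gain doubles the true amplitude AND
# the spread -- hence the apparent broadening.
true_height = 5.0     # real pulse amplitude, V (demonstration value)
spread = 0.5          # +/- spread, V (demonstration value)
tail_offset = -2.5    # average starting level at high count rate, V

measured = true_height + tail_offset       # 2.5 V seen by the PHA
scale = true_height / measured             # gain factor to re-center: 2.0
new_true_height = true_height * scale      # 10.0 V real amplitude
new_spread = spread * scale                # +/- 1.0 V apparent spread

print(measured, scale, new_true_height, new_spread)  # 2.5 2.0 10.0 1.0
```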

I get that this would reduce spectral interferences from high order reflections, but why not just use the spectral interference correction in PeakSight?

I know the interference correction in PeakSight is a bit of a pain to use compared to the quantitative interference correction in Probe for EPMA, but at least it provides a full correction for all interferences including first order interferences.

https://probesoftware.com/smf/index.php?topic=69.0

The pain? I would argue it is not so much (especially since I have some software, written myself, to manage interference corrections and to construct the set of corrections for a new setup or check validity for a modified one; my interference corrections often reach ~50-100 corrections and above (max 150 for 43 elements)). Yes, PeakSight has some pain with circular corrections, which can be worked around very easily. But my favorite feature in PeakSight 6.5 is its ability to handle negative interference (interference with background measurements), which works remarkably well. So I was on the same boat: "Don't use a narrow diff window, use interference corrections for all orders." However, I came across an experience which changed my mind. This year I adopted measuring Si, Mg, and Al on second order lines (which worked remarkably well at first), but after getting back to work from vacation I found that with the change of season the intensities had dropped by tens of percent (!), while the first order intensities stayed about the same. I then started searching for an answer to whether higher diffraction order intensities depend on some physical factor. I even asked this question on ResearchGate:
https://www.researchgate.net/post/Can_proportion_of_intensities_of_different_orders_from_diffracted_X-rays_depend_on_temperature

The answer is: IT DOES! That would then require frequent recalibration of such interference corrections (for higher order lines), which brings additional hassle, and it would not even be practical since the proportion can change during the day when high precision trace element compositions are being measured. I had wondered why my monazite dating drifted in summer, from older in the evening to younger toward the next morning - I was doing interference corrections of 2nd and 3rd order REE lines for U, Th, Pb. After I ditched that and moved to diff mode for these elements, the analyses became stable and no longer produced any clear daily biases.
Title: Re: Generalized dead times
Post by: Probeman on September 28, 2022, 10:55:36 AM
Any idea why the JEOL PHA peaks are so broad?

Because increasing the bias to keep the main PHA peak at the same position actually performs a "zooming-in" procedure (increasing the gain would do the same zooming magic). Increasing the bias increases the amplification and thus the real amplitude of the pulses. What hides this is that both the Cameca and JEOL PHA circuits measure not the real amplitude of the pulse but the voltage from the top of the pulse relative to 0 V (and at high count rates a pulse often starts at a negative voltage). So let's say a pulse is 5 V high +/- 0.5 V at a low count rate (the values are chosen for demonstration and are not the exact proportions seen on the instruments). At a very high rate such a typical pulse starts at about -2.5 V, and the PHA, measuring its top, sees (and plots) only 2.5 V, while the spread is still about +/- 0.5 V. Now the JEOL user increases the bias to move that 2.5 V PHA peak "back" to 5 V. What happens behind the user's observable PHA graph, and slips by unnoticed, is that the pulse amplitude has just been doubled around 0 V, symmetrically to both sides (+V and -V), which also enlarges the negative after-pulse tails (and thus the whole negative part of the signal). So at that high count rate the pulses no longer start at an average of -2.5 V but at an average of -5 V. The real amplitude has been doubled (from 5 V to 10 V), and the distribution looks broadened because the zooming has also doubled the initial uncertainty of +/- 0.5 V into +/- 1 V. The end result: the user, having no idea that the baseline of the pulses is far down in negative voltage, sees the same centered peak, but much broader: 5 V +/- 1 V. And of course pileups add some more broadening on top of that.

So you are saying that the reason for the apparently broader PHA peaks on JEOL instruments is that the JEOL electronics amplifies the pulse distribution to 0 to 10 V, while Cameca only amplifies to 0 to 5 V?

And are you also saying there is more pulse pileup on JEOL instruments (and therefore more peak broadening)? Could some of that be due to the JEOL spectrometer's larger geometric efficiency from having the smaller focal circle? Or is it mostly that the JEOL electronics is faster and therefore it sees more raw counts?

I posted this comparison between JEOL and Cameca instruments here looking at the raw observed count rates as a function of beam current:

https://probesoftware.com/smf/index.php?topic=1489.msg11277#msg11277

In your response to that post you mentioned setting the hardware (integer) dead times to 1 usec on your Cameca instrument, and seeing many more counts at high beam currents.  But in the past when I've set my hardware (integer) dead times to 1 usec, I still measured dead times around 2 usec or so, which is why I think the Cameca is intrinsically a higher dead time system than the JEOL.

Attached are spreadsheets I did many years ago testing the dead times when the hardware was set to 1 usec for Si ka and Ti ka on Si metal and Ti metal.  The measured dead times were around 2 for both x-rays.

But maybe this is all moot, as we wouldn't want to run without imposed (hardware) dead times of 3 usec, correct?  I believe you also run using a hardware integer dead time of 3 usec? 

That said, I am trying to remember when we upgraded our WDS board, as these tests were performed in 2010!  I think we upgraded our stage and WDS boards when we got v. 4.2 of PeakSight, so when was that released?  Maybe I should re-run some dead time calibrations with the new WDS board using the new constant k-ratio method with the hardware dead times set to 1 usec, just to see if I get similar numbers as before?   

I'll try sneaking into the lab this weekend if the instrument isn't busy!    ;D
Title: Re: Generalized dead times
Post by: sem-geologist on September 29, 2022, 05:52:52 AM
So you are saying that the reason for the apparently broader PHA peaks on JEOL instruments is that the JEOL electronics amplifies the pulse distribution to 0 to 10 V, while Cameca only amplifies to 0 to 5 V?
No, I am not saying that. Where did you get that from? I am saying the broadening is because of the "zooming" effect.

I think you have not yet grasped what a bipolar pulse is and its importance for the artifacts observed with increasing density of pulses (increasing count rates). I have already uploaded an oscilloscope screenshot in a few places, but maybe it is overwhelming.  So I will stop here and can go further with the explanation only once I am sure you understand the bipolar pulse. I am extremely bad at sketching, but please find below a simplified sketch showing how the bipolar pulse is born (in WDS G(F)PC pulse forming).

So look at this (I am also attaching a vector version in the attachments). (PS: only the bipolar pulse presented below has a very precise shape, as it was measured with an oscilloscope; the earlier forms of the pulse in the pipeline are reconstructed from it):
(https://probesoftware.com/smf/gallery/1607_29_09_22_5_51_34.png)
Title: Re: Generalized dead times
Post by: Probeman on September 29, 2022, 08:33:07 AM
So you are saying that the reason for the apparently broader PHA peaks on JEOL instruments is that the JEOL electronics amplifies the pulse distribution to 0 to 10 V, while Cameca only amplifies to 0 to 5 V?
No, I am not saying that. Where did you get that from? I am saying the broadening is because of the "zooming" effect.

I think you have not yet grasped what a bipolar pulse is and its importance for the artifacts observed with increasing density of pulses (increasing count rates). I have already uploaded an oscilloscope screenshot in a few places, but maybe it is overwhelming.  So I will stop here and can go further with the explanation only once I am sure you understand the bipolar pulse. I am extremely bad at sketching, but please find below a simplified sketch showing how the bipolar pulse is born (in WDS G(F)PC pulse forming).

So look at this (I am also attaching a vector version in the attachments). (PS: only the bipolar pulse presented below has a very precise shape, as it was measured with an oscilloscope; the earlier forms of the pulse in the pipeline are reconstructed from it):
(https://probesoftware.com/smf/gallery/1607_29_09_22_5_51_34.png)

OK, then are you saying that the Cameca instrument performs an extra step of converting from monopolar to bipolar pulses, which the JEOL electronics does not?  And that is the reason for the narrower PHA peaks with the Cameca pulse processing electronics?

Sorry if I ask dumb questions, I am not familiar at all with these electronic details.
Title: Re: Generalized dead times
Post by: sem-geologist on September 30, 2022, 05:25:04 AM
OK, then are you saying that the Cameca instrument performs an extra step of converting from monopolar to bipolar pulses, which the JEOL electronics does not?  And that is the reason for the narrower PHA peaks with the Cameca pulse processing electronics?

Sorry if I ask dumb questions, I am not familiar at all with these electronic details.

Those are not dumb questions; they are actually very valid questions, just a bit impatient in this context, like a cart attached before the horse. The answer is: unless someone invites me to peek into the JEOL probe electronics (at least the detector electronics), it is hard to tell.

So if we open the case where the shaping of the pulse takes place, we see this (a kind reminder: DEADLY high voltage is present on part of this board):
(https://probesoftware.com/smf/gallery/1607_30_09_22_4_09_10.png)
The charge-sensitive preamplifier (CSP) and the shaping amplifier (SA) are packed inside a single chip produced by Amptek, the A203. The SA on that chip offers both bipolar (pin 9) and unipolar (pin 8) outputs; however, from visual inspection it is clear that Cameca uses the unipolar output (pin 8 - a clear trace runs from it around the A203 to the decoupling capacitor required by the A203 documentation), and from the components placed on the shielded ground plane (which points to a very careful design for sensitive signal handling) it is clear that Cameca does the 2nd differentiation with its own implementation (with an AD847 op-amp). But why? I believe it is because the A203 cannot drive a terminated (75 ohm) coaxial cable on its own, as its bipolar output is rated for a 2 kohm load - connecting that signal directly to a terminated (75 ohm) coax would lower the amplitude a lot (about 30 times). Thus I think Cameca uses its own implementation of the 2nd differentiation of the monopolar pulses, and (judging from the clearly visible pair of diodes connected with an NPN and PNP transistor pair) the signal after that differentiation goes through a class AB power amplifier (short explanation of what that is: https://www.elprocus.com/class-ab-amplifier/ (https://www.elprocus.com/class-ab-amplifier/)), since the signal needs to be driven through a few meters of terminated coaxial cable to the gain and counting electronics. Just a side note: there are hardly any high speed op-amps which could directly drive such loads at these voltages (+/-15 V), so Cameca's engineering in this regard is top notch.
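The ~30x figure follows from a simple voltage divider. A quick sanity check (treating the A203 bipolar output's 2 kohm rated load impedance as its effective source impedance is an approximation on my part, not a datasheet statement):

```python
# Voltage divider formed by a ~2 kohm source driving a 75 ohm
# terminated coax: the load sees only a small fraction of the
# unloaded amplitude.
R_SOURCE = 2000.0   # ohm (A203 bipolar output rated load, from the post)
R_LOAD = 75.0       # ohm (terminated coaxial cable)

fraction = R_LOAD / (R_SOURCE + R_LOAD)  # fraction of unloaded amplitude
attenuation = 1.0 / fraction
print(round(attenuation, 1))             # 27.7 -> roughly x30
```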

So, to answer the question of whether JEOL "shortcuts" the signal handling and the PHA distribution suffers because of it - that would need a similar inspection of the JEOL hardware. We need to know what CSP and SA it uses (the A203 is unique in integrating both CSP and SA into a single package, but it is possible to use separate CSP and SA chips (e.g. produced by Cremat Inc.) to get similar functionality) and what kind of coaxial cable is used to send the signal from the detector to the counting board. Designing for a few hundred thousand pulses per second is not complicated; however, at a million pulses per second a single weak point in the design can cause the amplitude to drop.

This 75 ohm terminated coaxial cable is giving me a headache for my planned experiment with an external pulse generator. It means I will need to implement some fast class AB power amplifier if I want to simulate the pulses from the circuit presented above.

I am however skeptical that the part of the pipeline described above would produce the observed differences in the severity of PHA shift and broadening on JEOL probes. I introduced the description of the bipolar pulse as a starting point to go further and explain what happens next - when the density of such pulses increases (count rate increases). We also see the PHA shift on Cameca instruments, and the PHA distribution goes straight to hell when going to very high input count rates (like >1 Mcps). The PHA shift, when using normal bias values, is visible already from 20 kcps upward.

So first I want to present how I know the PHA shifts are produced on Cameca WDS hardware, and in particular how the bipolarity of the pulses causes them. My presented mitigation for the "downsizing" (not to be mistaken for the PHA shift) also has a part in this story, and knowing whether the early shift can be mitigated by lowering the bias and increasing the gain can shed some light on why the JEOL PHA shows more severe shifts (and broadening). I guess there is not much difference in how Cameca converts unipolar to bipolar pulses (doing mathematical differentiation with an op-amp); the difference lies rather in how the unipolar pulses look, which differs with the SA and CSP handling used by the two vendors. Unfortunately, everything in the pipeline before the bipolar pulse is more theoretical, as only the bipolar output can be captured with an oscilloscope. Nevertheless, knowing the process, it is possible to reconstruct the earlier shapes in the pipeline and find the likely cause even without opening the case and looking at the physical electronic components.
Title: Re: Generalized dead times
Post by: Probeman on September 30, 2022, 08:28:01 AM
OK, then are you saying that the Cameca instrument performs an extra step of converting from monopolar to bipolar pulses, which the JEOL electronics does not?  And that is the reason for the narrower PHA peaks with the Cameca pulse processing electronics?

Sorry if I ask dumb questions, I am not familiar at all with these electronic details.

Those are not dumb questions; they are actually very valid questions, just a bit impatient in this context, like a cart attached before the horse. The answer is: unless someone invites me to peek into the JEOL probe electronics (at least the detector electronics), it is hard to tell.
...
So, to answer the question of whether JEOL "shortcuts" the signal handling and the PHA distribution suffers because of it - that would need a similar inspection of the JEOL hardware...

OK, so I assume your answer is: we do not know yet...

We would still love to see some constant k-ratio data from your instruments. You saw the data from Jlmaner87's tactis instrument here?

https://probesoftware.com/smf/index.php?topic=1466.msg11238#msg11238

Login to see attachments...
Title: Re: Generalized dead times
Post by: sem-geologist on September 30, 2022, 09:09:40 AM

OK, so I assume your answer is: we do not know yet...

We would still love to see some constant k-ratio data from your instruments. You saw the data from Jlmaner87's tactis instrument here?

https://probesoftware.com/smf/index.php?topic=1466.msg11238#msg11238

Login to see attachments...

Yes, we don't know yet; however, understanding what causes the PHA shifts on Cameca instruments can point to exactly where to look on JEOL probes.

Also, yes, I saw those k-ratio data. Unfortunately our SXFive is heavily booked; hopefully there will soon be a time window for such measurements.
Title: Re: Generalized dead times
Post by: Probeman on September 30, 2022, 09:21:16 AM
Now I need to change my dream instrument somewhat from:

https://probesoftware.com/smf/index.php?topic=1410.msg10366#msg10366

to:

1. JEOL FEG electron column

2. Cameca WDS spectrometers with linear optical encoding

3. Bruker or Thermo SDD EDS

4. Cameca stage with linear optical encoding

5. Cameca light optics

6. Cameca pulse processing electronics with JEOL dead times

7. Polycold cryo-pumped vacuum baffle (100 Kelvin) with turbo pump

 :D
Title: Re: Generalized dead times
Post by: sem-geologist on September 30, 2022, 10:30:21 AM
Now I need to change my dream instrument somewhat from:

https://probesoftware.com/smf/index.php?topic=1410.msg10366#msg10366

to:

1. JEOL FEG electron column

2. Cameca WDS spectrometers with linear optical encoding

3. Bruker or Thermo SDD EDS

4. Cameca stage with linear optical encoding

5. Cameca light optics

6. Cameca pulse processing electronics with JEOL dead times

7. Polycold cryo-pumped vacuum baffle (100 Kelvin) with turbo pump

 :D

1. What is so good about the JEOL FEG, and what is so bad about the Cameca FEG? The Cameca FEG, if run correctly, can be as stable as it gets (<0.5% over 24 h). I have heard plenty of not-so-good rumors about JEOL column stability. If I had to pick a FEG for a dream instrument, my choice would be the Shimadzu one, as they look to be the first to understand the real advantage of a FEG for EPMA (stability and very high current) - or at least they address those advantages correctly in their marketing, stating that they use a somewhat bigger FEG tip than in an SEM, which makes the ultimate stability easier to attain.

6. Also, why do you think you need JEOL dead times? If there are not enough counts, and you also want to see a much bigger PHA position swing, then set the hardware dead time to 1 µs (I suppose you have not redone the test and are using the default 3 µs) and enjoy.
Title: Re: Generalized dead times
Post by: Probeman on September 30, 2022, 10:57:24 AM
1. What is so good about the JEOL FEG, and what is so bad about the Cameca FEG? The Cameca FEG, if run correctly, can be as stable as it gets (<0.5% over 24 h). I have heard plenty of not-so-good rumors about JEOL column stability. If I had to pick a FEG for a dream instrument, my choice would be the Shimadzu one, as they look to be the first to understand the real advantage of a FEG for EPMA (stability and very high current) - or at least they address those advantages correctly in their marketing, stating that they use a somewhat bigger FEG tip than in an SEM, which makes the ultimate stability easier to attain.

That is not what I hear from Cameca FEG owners. I hear that if they try to change the beam current they have to wait a long time for stability to return.  I also hear that the electrostatic beam blanker introduces additional instability.  Anyway, it's all moot now since Cameca is going to stop producing EPMA instruments. At least any instruments that most of us can afford!

6. Also, why do you think you need JEOL dead times? If there are not enough counts, and you also want to see a much bigger PHA position swing, then set the hardware dead time to 1 µs (I suppose you have not redone the test and are using the default 3 µs) and enjoy.

Aren't you using an enforced dead time of 3 usec on your instrument? My view is that the less dead time the better, especially for high speed quantitative mapping using high beam currents.  The smaller the dead time correction, the better is my thinking.  I hope we all agree that SDD detectors/electronics with 1/10th dead times are better than Si(Li)? 

I will try and run a 1 usec enforced test again this weekend using the constant k-ratio method:

https://probesoftware.com/smf/index.php?topic=1466.msg11102#msg11102
Title: Re: Generalized dead times
Post by: sem-geologist on September 30, 2022, 11:57:57 AM

Aren't you using an enforced dead time of 3 usec on your instrument? My view is that the less dead time the better, especially for high speed quantitative mapping using high beam currents.  The smaller the dead time correction, the better is my thinking.  I hope we all agree that SDD detectors/electronics with 1/10th dead times are better than Si(Li)? 

No, I am moving away from 3 µs: I am moving any new calibrations for new analyses to 1 µs when using PHA integral mode, and enlarging the dead time to 5 µs when going for narrow window PHA diff mode (also reducing the bias - not as extreme as in my example above, i.e. just down to 1700 V from 1850 V, and increasing the gain accordingly), since 3 µs is not enough for those bipolar pulses to recover from pulse depression back to 0 V. 3 µs is a safe value (and I am still not going below it on our SX100 with the old WDS board), but it is not optimal for all purposes, especially on newer machines - and the ability left to the user to set it to this or that value gives flexibility. PeakSight has only a single dead time correction model (nothing as fancy as PfS), so moving to 1 µs makes its imperfect dead time correction mess things up less.
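To put numbers on why a smaller hardware dead time leaves less for the software correction to do, here is the classic non-extendable (non-paralyzable) expression discussed earlier in the thread. This is the generic textbook formula, not necessarily the exact model PeakSight implements:

```python
# Classic non-extendable dead time correction: N = n / (1 - n * tau),
# where n is the measured rate and tau the dead time per event.
def true_rate(measured_cps, tau_s):
    """Return the dead-time-corrected (true) count rate."""
    return measured_cps / (1.0 - measured_cps * tau_s)

# At a measured 100 kcps, the size of the correction depends strongly
# on tau: ~11% for 1 us versus ~43% for 3 us.
for tau_us in (1.0, 3.0):
    n = 100_000.0
    factor = true_rate(n, tau_us * 1e-6) / n
    print(f"tau = {tau_us} us -> correction factor {factor:.2f}")
```

The smaller the correction factor, the less damage an imperfect correction model can do, which is the point about running at 1 µs.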

Well, the Cameca FEG... to be honest I was mad about it too in the beginning, as we had different expectations. For sure, stability-wise it is less flexible than the tungsten-based SX. You can't just jump up and down in HV and current, but that can be worked around with better planning of the analytical strategy (i.e. doing 10 nA, then 20 nA, then 200 nA) and merging these after acquisition.

However, the most wicked thing I find with that column is the splash aperture, mounted in such a way that replacement basically requires dismantling the upper part of the column - that is my most hated thing about that design, and IMHO it is the main source of the instabilities observed later even if the FEG is well maintained (like after one year of use; after two years it basically blocks using HV <12 kV, as the contamination on that aperture is able to deflect the beam).

The normal apertures fortunately can be changed easily, as on the SX100. I also found that with the FEG it is much better to let go of the beam regulator (I am not using it at all), as an unregulated beam is much more stable (I saw Theo not long ago asking how to use PfS with a 200 µm unregulated aperture; he probably found out the gotcha too). I can't comment on the stability of the electrostatic lens; actually I have no complaint about it, and I can't see how any proof that it causes instability could be gathered.

On the contrary, I think the problem lies with the instability caused by too heavy a reliance on the single condenser lens, which alone needs to handle very different power loads compared to the set of two lenses on a W/LaB6 column. For example, its (C2) power needs to be decreased by about ~70% of its full range to go from 1 pA to 800 nA (on a single aperture), where a two-lens system changes power only 15-20% for the same span. Thus it is the thermal re-equilibration of the lenses (and their power supply circuit), rather than the change of beam current itself, which causes the instabilities; but that can be mitigated easily by using several apertures, allowing one to stay within a similar range of C2 power (a 70 µm aperture for low beam currents and a 200 µm aperture for high currents). Our current FEG tip has been running for 3.5 years and is still kicking (still able to provide a maximum of 1000 nA with 0.5% stability, i.e. Zr in rutile enjoys these tickling currents, unless it is grains in epoxy: epoxy expands terribly and 1) cracks and 2) the surface goes out of focus)!


Title: Re: Generalized dead times
Post by: Probeman on September 30, 2022, 12:13:18 PM

Aren't you using an enforced dead time of 3 usec on your instrument? My view is that the less dead time the better, especially for high speed quantitative mapping using high beam currents.  The smaller the dead time correction, the better is my thinking.  I hope we all agree that SDD detectors/electronics with 1/10th dead times are better than Si(Li)? 

No, I am moving away from 3 µs: I am moving any new calibrations for new analyses to 1 µs when using PHA integral mode, and enlarging the dead time to 5 µs when going for narrow-window PHA diff mode (also reducing the bias, though not as extremely as in my example above, i.e. just down to 1700 V from 1850 V, and increasing the gain accordingly), as 3 µs is not enough for those bipolar pulses to get back from the pulse depression to 0 V. 3 µs is a safe value (and I am still not going below it on our SX100 with the old WDS board), but not an optimal one for every purpose, especially on new machines, and the ability left to the user to set it to this or that value gives flexibility. PeakSight has only a single dead time correction model (nothing as fancy as in PfS), and thus moving to 1 µs makes even an imperfect dead time correction mess things up less.

OK, that is useful information.

We should both try some constant k-ratio measurements at 1 usec enforced dead times on our instruments this weekend and see what the actual dead times turn out to be.
Title: Re: Generalized dead times
Post by: sem-geologist on October 03, 2022, 05:44:42 AM
Unfortunately I had no time to do that k-ratio test (it is time consuming) and our machine is kept occupied even at weekends (which makes me very happy). Hopefully, there will be a short window in the schedule next week.

However, I was not idle; I have done some more interesting prep and modeling work for future experimentation, and came up with some more ideas (important for dead time) to check in the future. If you remember, I had mentioned an artificial signal generator. I already got a Raspberry Pi Nano (what an incredibly capable small board for a few bucks), and played around with an R2R (resistor ladder) 8-bit DIY DAC, first on a breadboard (which I found is junk) and then on a soldered assembly. The 8-bit DAC works nicely, but before I can connect it to the coaxial cable (with 75 ohm termination) of the WDS signal, I need to amplify and shift the voltage level (this RPi Nano and DAC output 0 V/+3.29 V signals, and I need to scale that to -15 V/+15 V, or at least -12 V/+12 V) and amplify the power. I somehow initially bought a pretty cheap op-amp which is capable only up to 3 MHz. Ideally I should buy the same AD847 (the type used in the original signal processing), but there are supply chain issues, and it is quite expensive. But I am starting to doubt whether that would not be overkill...
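The level shifting described above amounts to a simple linear map; a quick sketch of the gain and offset such an op-amp stage would need (the 0/+3.29 V and -15/+15 V endpoints are from the post; the assumption that the full DAC range should cover the full output range is mine):

```python
# Map the DAC output range (0 V .. +3.29 V) onto the target range (-15 V .. +15 V)
# with a linear stage: v_out = gain * v_in + offset.
dac_lo, dac_hi = 0.0, 3.29
out_lo, out_hi = -15.0, 15.0

gain = (out_hi - out_lo) / (dac_hi - dac_lo)   # ~9.12 V/V
offset = out_lo - gain * dac_lo                # -15.0 V

print(f"gain = {gain:.2f} V/V, offset = {offset:.2f} V")
# Check that the endpoints map correctly:
assert abs(gain * dac_lo + offset - out_lo) < 1e-9
assert abs(gain * dac_hi + offset - out_hi) < 1e-9
```

So roughly a 9x gain stage with a -15 V offset would do the level translation, well within what a fast op-amp can provide.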
Anyway, I was also working on a way to demonstrate as clearly as possible why bipolar pulses and their bipolarity matter (how that is behind the shifting and broadening of the PHA distribution). So I took this picture, which I already showed previously, as a starting point:
(https://probesoftware.com/smf/gallery/1607_17_08_22_1_40_57.bmp)
and prepared it for capturing the waveform (the 12 µs time section spans 300 pixels, thus one pixel = 40 ns). I refined the captured shape (the orange curve) until the integral of the bipolar pulse (which gives the previous, unipolar, form: the blue curve) would begin at y=0 and end at y=0:
(https://probesoftware.com/smf/gallery/1607_03_10_22_4_49_48.png)
As we can see from that picture, the asymmetry (around y=0) of the bipolar pulse is inherited from the asymmetry (around the x position of the maximum) of the unipolar pulse: its left side is shorter than its right side. The unipolar pulse gets its asymmetry from "cascade" pulses and the background of the charge-sensitive preamplifier. Thus the important questions are:
Would the unipolar pulse look the same at very high and at very low count rates? Shouldn't the slope of the background be different, and thus the tail, and so the shape of the bipolar pulse? The longer the tail, the longer and shallower the negative part of the bipolar pulse.
...so, should the unipolar pulse end at y=0 at all count rates, or rather not at all???
edit: after comparing pulses measured at low and high count rates, the answer is: the shape does not change.

I am attaching the digitized form of pulse below as txt.
I found out that it is very easy to model pulse coincidences using Python's numpy library and its np.convolve method.

I am not adding any units (neither the y nor the x units are important; only the pulse proportions matter), because if I introduced U [V] and t [µs], then Probeman would freak out that this has something to do with electronics :D, so I leave these unit-less. Also, my in-construction pulse generator will be using 8-bit values, so I will confuse myself less by keeping it like that.

So first to import the pulse shape we do:
Code: [Select]
import numpy as np
import matplotlib.pyplot as plt

bi_pulse_model = np.loadtxt("SX_pulse_model_40ns.txt")
plt.plot(bi_pulse_model)
plt.show()

to get the previous unipolar form, we need to integrate the signal:
Code: [Select]
uni_pulse_model = np.cumsum(bi_pulse_model)
plt.plot(bi_pulse_model * 10, label="bipolar")  # let's scale for comparison with the unipolar pulse
plt.plot(uni_pulse_model, label="unipolar")
plt.legend()
plt.show()
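The calibration criterion described in the previous post (the integrated bipolar pulse beginning and ending at y=0) can also be checked numerically; since the txt attachment is not embedded here, this sketch uses a synthetic Gaussian-derivative pulse as a stand-in for the digitized SX shape:

```python
import numpy as np

# Synthetic zero-area bipolar pulse (Gaussian-derivative stand-in; the real
# shape comes from the attached "SX_pulse_model_40ns.txt").
t = np.arange(-50, 51)                        # samples (40 ns each)
bi_pulse_stand_in = -t * np.exp(-t**2 / 200.0)

uni = np.cumsum(bi_pulse_stand_in)
# A well-balanced bipolar pulse integrates back to ~0 at its end,
# which is exactly the refinement criterion used for the orange curve.
print("integral at end of pulse:", uni[-1])
```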

But that alone, while interesting, is not so useful.
The really interesting part starts when we define some longer time span, for example 1000 points (the spacing between points in the pulse model is 40 ns; 1000 such points would represent 40 µs):

Code: [Select]
pulse_space = np.zeros(1000)

for example, if we want pulses every 4 µs:
Code: [Select]
pulse_space[0::100] = 1

Then we can convolve the pulses onto it as simply as this:
Code: [Select]
model = np.convolve(pulse_space, bi_pulse_model)
plt.plot(model)
plt.show()
which would produce this (for 250kcps):
(https://probesoftware.com/smf/gallery/1607_03_10_22_5_23_28.png)
And if we start to decrease the gap between pulses, we start to see the amplitude (relative to y=0) shift downward (except for the very first pulse):
Code: [Select]
pulse_space[:] = 0 # reset everything to 0
pulse_space[0::75] = 1 # pulse every 3µs
model = np.convolve(pulse_space, bi_pulse_model)
plt.plot(model)
plt.show()
which would produce this for 3µs (for 333kcps):
(https://probesoftware.com/smf/gallery/1607_03_10_22_5_32_27.png)
then for 2µs after code modification (for 500kcps):
(https://probesoftware.com/smf/gallery/1607_03_10_22_5_32_55.png)
then for 1µs (for 1Mcps):
(https://probesoftware.com/smf/gallery/1607_03_10_22_5_33_20.png)
and finally for 520 ns (for 2 Mcps):
(https://probesoftware.com/smf/gallery/1607_03_10_22_5_33_44.png)
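As a side note, the gap-to-count-rate arithmetic used in the series above (one point = 40 ns) can be wrapped in a small helper; the function name is mine, and note that plain rounding gives 12 points for 2 Mcps rather than the 13 points (520 ns) used above:

```python
SAMPLE_NS = 40  # one point of the digitized pulse = 40 ns

def rate_to_gap_samples(rate_cps):
    """Evenly spaced pulses at rate_cps -> gap expressed in 40 ns points."""
    return round(1e9 / (rate_cps * SAMPLE_NS))

# Reproduce the gaps used in the plot series above:
for rate in (250_000, 333_000, 500_000, 1_000_000):
    print(rate, "cps ->", rate_to_gap_samples(rate), "points")
```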

Well, a few things to note: this is synthetic modeling with even time gaps between pulses, imitating the average gap shortening with increasing count rate. In real life, however, pulses arrive at random. This simulation shows that with evenly spaced pulse trains the PHA shift should start at 250 kcps (input rate); in real life on the Cameca we observe it much earlier, because pulses randomly land close behind other pulses. Still, this agrees quite well with the break point where a clear increase of the PHA shift crosses 250 kcps (input).
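The random arrivals just described can be imitated by drawing exponentially distributed gaps (Poisson arrivals) instead of even spacing; this is only a sketch of the mechanics ahead of the proper random pulse simulation, and the Gaussian-derivative pulse here is a stand-in for the digitized SX pulse:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in bipolar pulse (Gaussian derivative, 40 ns per point as above).
t = np.arange(-50, 51)
bi_pulse_model = -t * np.exp(-t**2 / 200.0)

# Poisson arrivals at 250 kcps: exponential gaps with mean 4 us = 100 points.
mean_gap_samples = 100
gaps = rng.exponential(mean_gap_samples, size=200).astype(int) + 1
positions = np.cumsum(gaps)

pulse_space = np.zeros(positions[-1] + 1)
pulse_space[positions] = 1.0

model = np.convolve(pulse_space, bi_pulse_model)
# With random gaps, some pulses land close behind the previous one and ride
# on its negative tail, depressing their apparent amplitude even though the
# MEAN rate (250 kcps) is the even-spacing break point from the plots above.
print("min gap (points):", gaps.min(), " max amplitude:", model.max())
```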

One more point is that those pulse trains are shifted with the full amplitude information preserved. That becomes clear with the last big negative pulse when compared with y-shifted overlays of the single pulse. In the example below, the previous plot (with gaps of 520 ns) is zoomed in at the end of the pulse train:
(https://probesoftware.com/smf/gallery/1607_03_10_22_7_58_31.png)

Stay tuned for the random pulse simulation.
Title: Re: Generalized dead times
Post by: Probeman on October 03, 2022, 09:04:22 AM
Unfortunately I had no time to do that k-ratio test (it is time consuming) and our machine is kept occupied even at weekends (which makes me very happy). Hopefully, there will be a short window in the schedule next week.

A busy machine is a happy machine!    :)

The lab was available Sunday, so I went in and set up a quick constant k-ratio run. With PFE it's super easy: it took me ~30 minutes to tune up Ti Ka on all 5 spectrometers and then acquire some PHA scans at 10 nA and 200 nA. The Cameca PHA is amazing in that it is extremely stable over that enormous count rate range. 

Then, because PFE can fully automate a constant k-ratio acquisition, I ran 6 points (60 sec on-peak and 10 sec on each off-peak) on Ti metal and TiO2. I ran sample setups at 10, 20, 30, 60, 80, 120, 160 and 200 nA, which took about 3 hours. I left for home after starting the automation and downloaded the run once it had finished!   :D

See Probe for EPMA constant k-ratio procedure attached as a pdf below...

I will post the Ti Ka (20 keV) k-ratio results to the constant k-ratio topic later today...

However, I was not idle; I have done some more interesting prep and modeling work for future experimentation, and came up with some more ideas (important for dead time) to check in the future. If you remember, I had mentioned an artificial signal generator.

This is really interesting stuff.  I look forward to seeing the random pulse generation.
Title: Re: Generalized dead times
Post by: Ben Buse on March 01, 2023, 06:21:03 AM
Does anyone have a copy of this?

https://www.cambridge.org/core/journals/advances-in-x-ray-analysis/article/abs/correction-for-nonlinearity-of-proportional-counter-systems-in-electron-probe-xray-microanalysis/ADA4C17FF92493BE71B614CB808BF902
Title: Re: Generalized dead times
Post by: Probeman on March 02, 2023, 07:54:33 AM
Does anyone have a copy of this?

https://www.cambridge.org/core/journals/advances-in-x-ray-analysis/article/abs/correction-for-nonlinearity-of-proportional-counter-systems-in-electron-probe-xray-microanalysis/ADA4C17FF92493BE71B614CB808BF902

See attached.
Title: Re: Generalized dead times
Post by: Ben Buse on March 07, 2023, 05:38:05 AM
Thank you  :)