Probe Software Users Forum

General EPMA => Discussion of General EPMA Issues => Topic started by: Brian Joy on June 17, 2022, 09:44:45 PM

Title: An alternate means of calculating detector dead time
Post by: Brian Joy on June 17, 2022, 09:44:45 PM
As I noted elsewhere, Heinrich et al. (1966, attached) proposed an alternate method for calculating the non-extendible dead time of proportional X-ray counters; the method does not rely on measurement of beam current.  Their approach, which was adapted from that of Short (1960, Review of Scientific Instruments 31:618-620) for XRD systems, relies on calculation of ratios of apparent count rates of two X-ray lines measured simultaneously on two spectrometers over a range of beam currents.  Two datasets must be collected, with the chosen X-ray lines measured alternately on each spectrometer.  For instance, if Si Kβ and Si Kα are used in turn, then Si Kβ (observed count rate = N'11) is collected on the first spectrometer while Si Kα (N'21) is collected simultaneously on the second.  In the next dataset, Si Kα (N'12) is collected on the first spectrometer while Si Kβ (N'22) is collected on the second.  The dead time for each spectrometer can then be calculated as follows:

spectrometer 1:  τ1 = (m1 - m2)/(b2 - b1)
spectrometer 2:  τ2 = (b2m1 - b1m2)/(b2 - b1)

In each equation, m1 and m2 are the respective slopes on plots of N'11/N'21 versus N'11 and N'12/N'22 versus N'12, while b1 and b2 are the respective intercepts.  Note that the expressions as presented may only be applied in the range in which each plot shows linear behavior.
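
For anyone who wants to check the algebra outside a spreadsheet, a minimal Python sketch of the calculation might look like the following (placeholder array and function names of my own; the inputs are assumed to be already restricted to the linear region of each ratio plot):

[code]
import numpy as np

def heinrich_ratio_dead_times(n11, n21, n12, n22):
    """Dead times (in seconds) for two spectrometers via the ratio method of
    Heinrich et al. (1966).  Inputs are arrays of apparent count rates (cps)
    measured simultaneously over a series of beam currents:
      n11, n21 -- dataset 1 (e.g., Si Kb on spectrometer 1, Si Ka on spectrometer 2)
      n12, n22 -- dataset 2 (the two lines swapped between the spectrometers)
    Only points from the linear portion of each ratio plot should be passed in."""
    m1, b1 = np.polyfit(n11, n11 / n21, 1)   # slope, intercept of N'11/N'21 vs N'11
    m2, b2 = np.polyfit(n12, n12 / n22, 1)   # slope, intercept of N'12/N'22 vs N'12
    tau1 = (m1 - m2) / (b2 - b1)             # spectrometer 1
    tau2 = (b2 * m1 - b1 * m2) / (b2 - b1)   # spectrometer 2
    return tau1, tau2
[/code]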

I’ve attached a spreadsheet that can be used as a template for the calculations (“method 2”).  The dead times calculated for Si Kα/Kβ (elemental Si, uncoated, 15 kV) on my channels 1 and 4 (TAPJ each) are similar to those obtained from the more typical beam current-dependent calculation (“method 1,” modified from Paul Carpenter's spreadsheet).  I haven’t yet compared results of the two methods for other cases.
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 02, 2022, 05:28:04 PM
Rather than worrying about accurate dead time corrections at extraordinarily high count rates, I’d prefer to focus first on accurate characterization of dead time within the range of count rates within which I typically work while running quantitative spot analyses.  I rarely exceed 10 kcps on either standard or unknown and always work within the range in which the measured X-ray count rate divided by the true rate is an essentially linear function of the measured rate.  On the instrument that I operate, I see such linear behavior up to about 70 kcps using the P-10 gas-flow counters and up to at least 80 kcps using the sealed Xe counters.

Picoammeter inaccuracy can be a major obstacle when characterizing dead time using the relation, N’/I = k(1-N’τ) (as well as expansions of it).  On a plot of N’/I versus N’, the intercept of the regression line gives the value of k, and the negative of the slope divided by the intercept gives the dead time, τ.  Obviously, since measurement of current is required for the calculation, inaccuracy in that measured value will produce inaccuracy in the calculated dead time.  In contrast, not only does the “ratio method” not even require measurement of current, it should also minimize the effects of any problem that affects each spectrometer equally or at least roughly so.  This could include, for instance, accumulation of contamination around the beam.  Consequent absorption of electron energy would cause transition metal Kα and Kβ count rates to fall over time, yet the ratios of those count rates measured on different spectrometers would likely not be affected noticeably.
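
For comparison, here is a corresponding sketch of the current-based regression just described, again with placeholder names; τ is simply the negative of the slope divided by the intercept on the plot of N’/I versus N’:

[code]
import numpy as np

def current_based_dead_time(i_beam, n_obs):
    """Dead time (s) from N'/I = k(1 - N'*tau), using only data from the range
    where the plot of N'/I versus N' is linear.
      i_beam -- measured beam currents (nA)
      n_obs  -- apparent count rates (cps) at those currents"""
    slope, intercept = np.polyfit(n_obs, n_obs / i_beam, 1)
    k = intercept             # cps per nA extrapolated to zero count rate
    tau = -slope / intercept  # dead time in seconds
    return tau, k
[/code]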

I’ve used the ratio method to determine dead times using both Cu and Fe metal; I’ve attached a spreadsheet containing my data and plots for Fe.  For this latest set of measurements, I used high purity Fe metal mounted in Buehler Konductomet conductive resin; I did not apply a conductive coat.  I lightly polished (0.25 μm diamond) and thoroughly cleaned the mount surface prior to the two data collection runs.  For each of the two measurement sets for Fe, I counted at either the Kα or Kβ peak position (details below) on three spectrometers simultaneously at 55 values of beam current as measured on the PCD.  The high voltage supply had been on for perhaps 30 hours when I started, and so the beam current was extremely stable.  Each set of measurements at a given beam current was collected at a different spot.  I made all measurements using the JEOL X-ray counter built into PC-EPMA.  I aimed for at least 30000 apparent counts for any given X-ray line and current and adjusted the count time accordingly down to a minimum of 10 seconds.  Collecting the data took me perhaps four hours (for a given metal).  In reflected light, I saw no obvious accumulation of contamination around the beam during the runs (though I’m sure I would have seen some in a secondary electron image – I neglected to check).  I collected the data for Cu on the JEOL 13-element mount in a similar manner.

Prior to making the measurements, I collected PHA scans at the Kα and Kβ peak positions at 200 nA (though the maximum current I used was 320 nA during the second run).  I was particularly concerned with ensuring that the Xe escape peak did not collide with the baseline at high count rate.  Additionally, I had to make certain that any “ghost peaks” produced by my aging counters would remain within the window.  Generally, I opted to increase electronic gain rather than anode bias and centered the 200 nA distribution around 5 volts (with baseline at 0.7 V).  (Also, working at relatively low anode bias should help to minimize shifts in the pulse amplitude distribution and should extend Xe counter lifetime.)  When I decreased the beam current to ~5 nA at these new detector settings, the center of each distribution shifted to between 6 and 6.5 V (roughly), and counts fell to near-zero by 8-8.5 V.

As I noted before, I counted X-rays simultaneously on three spectrometers.  During the first measurement set, I counted at the Kα peak position on channel 2/LIFL and at the Kβ peak position on channel 3/LiF and channel 5/LiFH.  During the second measurement set, I counted at the Kβ peak position on LiFL and at the Kα peak position on LiF and LiFH.  Due to this arrangement, I obtained two values for channel 2 dead time.  For both Fe and Cu, the ratio method produces larger dead time values than the current-based method:

Fe Kα/Kβ ratio method [μs]:
channel 2/LiFL:  1.44, 1.48
channel 3/LiF:  1.41
channel 5/LiFH:  1.41
Current-based method, Fe Kα [μs]:
channel 2/LiFL:  1.32
channel 3/LiF:  1.13
channel 5/LiFH:  1.31
Current-based method, Fe Kβ [μs]:
channel 2/LiFL:  0.97
channel 3/LiF:  0.22
channel 5/LiFH:  0.85

Cu Kα/Kβ ratio method [μs]:
channel 2/LiFL:  1.50, 1.46
channel 3/LiF:  1.45
channel 5/LiFH:  1.38
Current-based method, Cu Kα [μs]:
channel 2/LiFL:  1.37
channel 3/LiF:  1.06
channel 5/LiFH:  1.25
Current-based method, Cu Kβ [μs]:
channel 2/LiFL:  0.96
channel 3/LiF:  -0.12
channel 5/LiFH:  0.76

The following plots illustrate the data collected for Fe.  For the plots constructed using the ratio method, I've omitted data collected at greater than about 85 kcps.  For the plots related to the current-based method, I've included all data collected and, on the Fe Kα plot, have highlighted data used in the regression with black borders.

(https://probesoftware.com/smf/gallery/381_02_07_22_5_04_01.png)

(https://probesoftware.com/smf/gallery/381_02_07_22_5_05_13.png)

A pattern is apparent in the numbers, noting especially that a negative dead time (positive slope) is physically impossible (Cu Kβ/LiF).  On my instrument, when current is high but count rate is relatively low, such as when measuring at the Kβ peak on any spectrometer and when measuring at the Kα peak on channel 3/LiF, the dead time is clearly underestimated using the current-based approach.  The dead time values appear to progress monotonically downward as count rate at given current decreases.  One potential explanation for this is that the picoammeter is reading systematically higher (relative to the true current) with increasing current.  This would either 1) accentuate the departure from linearity on plots of N’/I versus N’ at high current (Kα/channel 5/LIFH/N’32) or 2) give the appearance of a departure from linearity at anomalously low count rate (Kα/channel 3/LiF/N’22).  It could also account for the near-zero slope on the plot for Fe Kβ on channel 3/LiF/N’21 (also true for Cu Kβ).  In the past, I believe that I’ve systematically underestimated the dead time on my channel 3 (LiF/PETJ) due to its low count rate at given current.  Of course, if my idea is correct, then I should obtain a larger apparent dead time value via the current-based method when counting X-rays using PET rather than LiF, and I haven’t tested this yet.  Like I’ve noted before, I always minimize problems due to inaccurate dead time correction by either 1) working at relatively low count rate (<10 kcps) or 2) when count rates are higher, roughly matching the count rate on the standard with that on the unknown at given current. 

As a final note, I haven’t yet propagated counting error through my dead time calculations.  I’ll get to this eventually, as the use of ratios increases the contribution of random error in estimated uncertainty.  As I’ve noted above, though, systematic error appears to be more influential than random error in the current-based calculation.
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 03, 2022, 01:21:17 AM
And here are the corresponding plots for Cu (collected at V = 15 kV):

(https://probesoftware.com/smf/gallery/381_03_07_22_1_40_30.png)

(https://probesoftware.com/smf/gallery/381_03_07_22_1_41_38.png)

Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on July 03, 2022, 10:47:05 AM
After thinking about this a bit, it seems to me that this Heinrich Ka/Kb ratio method is essentially similar to the k-ratio method that John Fournelle and I came up with recently!  Think about it...  they're sort of doing the same thing.

The point being that the ratio of the Ka and Kb lines on a single material should be constant on two spectrometers, just as the (k-)ratio of a single emission line of any two materials with different concentrations should also be constant as a function of beam current on a single spectrometer!  Both methods assume that the ratios of the emission line(s) (at significantly different count rates) remain constant!

The Heinrich method is cool because it doesn't depend on the picoammeter, but neither does the k-ratio method, as I will explain.  Both methods require measurements at fairly high beam currents in order to obtain information on the dead time effects, which will be negligible at low beam currents. The higher the count rates (beam currents), the better the handle we obtain on our dead time constants.

However, the constant k-ratio method can also reveal any hidden problems with ones picoammeter (when plotted differently as mentioned in the next paragraph), which is also important since we would like to utilize different beam currents for quant analysis.

The k-ratio method does in fact have an independence from beam current and picoammeter (mis)calibrations because the k-ratios that one produces, are for a primary standard and a secondary standard both measured at the same beam current!  I neglected to emphasize this enough. The point being that the k-ratio should be the same given that the count rates are significantly different for the two materials (at each of the different beam currents).  Any miscalibration of the picoammeter only reveals itself when plotting the k-ratios for the secondary standards (at multiple beam currents) using a single primary standard measured at a low beam current!  I should have emphasized that point also.  See this post here:

https://probesoftware.com/smf/index.php?topic=1466.msg10972#msg10972

But still, it's interesting that you found that the Ka/Kb ratio method gives such different answers compared to the "current" method. I suspect that this is because you're not using the expanded dead time expression.  Did you try the six term Taylor expansion series expression?  I find the traditional single term expression works well up to 50K cps or so, the two term expression up to about 100K cps, and the six term expression seems good to 200K cps and more.  Why limit our counting rates when we have a better dead time expression now?

The point is that performing these ratio tests using high beam currents is not necessarily because we will actually be running at such high count rates, but rather to ensure that our dead time correction is robust over a large range of beam currents. That is to say, if the dead time correction expression works at very high count rates, it will work even better at low count rates. 

That said, in our lab we often measure minor and trace elements using relatively high beam currents so this expanded dead time expression seems just the ticket to maintain k-ratio accuracy in all possible combinations of high concentrations and/or large area crystals and/or high beam energies and/or high beam currents.

See this post for all three dead time correction expressions:

https://probesoftware.com/smf/index.php?topic=1466.msg10909#msg10909

A question about picoammeters: Cameca has different ranges of adjustment and we are working towards obtaining a high accuracy current source to calibrate our picoammeter, but I heard from someone that JEOL uses a different system for their current measurements. Do you know anything about the JEOL picoammeter electronics and/or its adjustments?

You should also try all three dead time expressions and let's see what you get. I bet the range over which you obtain a linear response from your spectrometers is greatly extended by using the six term expression.
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 03, 2022, 01:32:47 PM
Yes, if all you’re doing is measuring the two materials on the same spectrometer at the same current, then this would help address the problem, but it should also allow you to use an expression for dead time that is independent of current.  When you use the current-based expression to determine the dead time for this case, at the very least you are making an unnecessary calculation.  Heinrich et al. (1966) present the proper approach.

I also emphasized that I like to keep things simple.  I have no interest in using a non-linear expression at high count rate; instead, I’d rather avoid those high count rates.  I do in fact operate at relatively high current (say, 50 or 100 nA) when analyzing for elements in low concentration, but the dead time correction should be simple because the count rate is relatively low.  Also, keep in mind that Jercinovic and Williams (2005, Am Min 90:526-546) pointed out that operating at high current can cause ablation of the carbon coat and subsequent accumulation of static charge.  How closely do you monitor absorbed current when operating at high current?

When you suggest that I try all three of the current-based calculations, you’ve totally missed my point.  In my plots using the simplest (linear) current-based calculation, I show that dead time appears to be calculated incorrectly at low count rate (< 50 kcps) when current is high.  This suggests that systematic error is present in the picoammeter reading (as it cannot be expected to be perfectly accurate).  Adding terms to the current-based dead time expression will not fix this problem and is not necessarily a “better” approach.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on July 03, 2022, 02:44:48 PM
Yes, if all you’re doing is measuring the two materials on the same spectrometer at the same current, then this would help address the problem, but it should also allow you to use an expression for dead time that is independent of current.  When you use the current-based expression to determine the dead time for this case, at the very least you are making an unnecessary calculation.  Heinrich et al. (1966) present the proper approach.

The constant k-ratio method is essentially independent of beam current because the two materials are always measured at the same beam current.  If the beam current reading is off by a few percent, that would merely slide the points along the x-axis slightly. The dead time correction (traditional or expanded!) would handle this automatically because these expressions are *not* based on beam current, they're based on count rate.  You seem to be missing this essential point.

You do realize that the traditional dead time correction is an incomplete mathematical treatment of dead time because it only utilizes the first term of the Taylor expansion series which is an infinite probability series?  The expanded dead time expression merely takes that into account by including a few extra terms to deal with extremely high count rates.

The Heinrich method still requires a dead time correction in order to obtain a linear response for the slope calculation.  By using the expanded form of the dead time correction, you would be able to utilize a wider range of beam currents and therefore higher count rates and still maintain a linear response for your slope calculations.

I also emphasized that I like to keep things simple.  I have no interest in using a non-linear expression at high count rate; instead, I’d rather avoid those high count rates.  I do in fact operate at relatively high current (say, 50 or 100 nA) when analyzing for elements in low concentration, but the dead time correction should be simple because the count rate is relatively low.  Also, keep in mind that Jercinovic and Williams (2005, Am Min 90:526-546) pointed out that operating at high current can cause ablation of the carbon coat and subsequent accumulation of static charge.  How closely do you monitor absorbed current when operating at high current?

Well then you should love the constant k-ratio method because there's nothing simpler in our field than a k-ratio measurement.  Which, by the way, is the essential measurement we make for quantitative analysis. These instruments are k-ratio measurement tools. That's all they are.

I also like the fact that constant k-ratio method is a completely intuitive approach in that one merely observes the variation in the k-ratios as a function of count rate (beam current) and one immediately obtains a quantitative appreciation of the magnitude of these effects.  As for the various dead time correction expressions, see this post here for constant k-ratio measurements on an old JEOL 8200, where their count rates are so low, it doesn't matter which expression they utilize for the dead time correction:

https://probesoftware.com/smf/index.php?topic=1466.msg10943#msg10943

The expanded form gives almost exactly the same results as the traditional expression at low count rates, but it gives improved results at high count rates simply due to being a more complete form of the Taylor expansion probability series. It all comes down to the statistics of how often a photon pulse will be missed due to another photon pulse arriving at the detector within the dead time.

When you suggest that I try all three of the current-based calculations, you’ve totally missed my point.  In my plots using the simplest (linear) current-based calculation, I show that dead time appears to be calculated incorrectly at low count rate (< 50 kcps) when current is high.  This suggests that systematic error is present in the picoammeter reading (as it cannot be expected to be perfectly accurate).  Adding terms to the current-based dead time expression will not fix this problem and is not necessarily a “better” approach.

You need to think about this a bit more.  The constant k-ratio method is not current based. Yes, Paul's spreadsheet method is current based because he's fitting counts vs. beam current, but the constant k-ratio method is not current based. Why?  Because the primary standards and the secondary standards are measured at the same beam current for each k-ratio!  It's all about the count rate differences between the primary and secondary standards, which are more affected by dead time at higher beam currents.

Yes, the traditional form of the dead time expression starts to fall apart around 50K cps, but the wheels could come off at lower count rates on some detectors, particularly sealed Xe detectors, I expect, which have become contaminated or pumped out over time.

Please take a look at this post and maybe it will start to make more sense to you:

https://probesoftware.com/smf/index.php?topic=1466.msg10972#msg10972
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 03, 2022, 05:19:07 PM
You need to think about this a bit more.  The constant k-ratio method is not current based. Yes, Paul's spreadsheet method is current based because he's fitting counts vs. beam current, but the constant k-ratio method is not current based. Why?  Because the primary standards and the secondary standards are measured at the same beam current for each k-ratio!  It's all about the count rate differences between the primary and secondary standards, which are more affected by dead time at higher beam currents.

Yes, the traditional form of the dead time expression starts to fall apart around 50K cps, but the wheels could come off at lower count rates on some detectors, particularly sealed Xe detectors, I expect, which have become contaminated or pumped out over time.

Please take a look at this post and maybe it will start to make more sense to you:

https://probesoftware.com/smf/index.php?topic=1466.msg10972#msg10972

No, you’re the one who needs to think about this a bit more.  All that I’ve emphasized is a need for accurate estimation of dead time for the simplest possible case, and Heinrich et al. (1966) present a better way to do it using measurements made simultaneously.  I don’t really care what happens at excessively high count rates, as interactions between the X-ray counter and counting electronics become more complicated.

Your argument that the “wheels could come off” at lower count rates using Xe counters is weak, as it is unsupported by any actual evidence.  In truth, I see the same pattern with my P-10 gas-flow counters, and you can see it in the data that I posted for Si (for the ratio method).

Also, keep in mind that a precision current source will incorporate the same kinds of inaccuracies (non-linearity, for instance) as a picoammeter, which is a precision circuit due to necessity.  In fact, a current source can be used to construct an ammeter – I’ve done it myself with nice results using the OPA192 op amp from TI.

I’ll post more data and plots soon.  Until then, I’m not commenting further on the subject.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on July 04, 2022, 09:07:26 AM
I will give it a bit more thought!   :)

And the first thought that comes to my mind is that I'm pleased that you find the Heinrich ratio method to work for you.  I myself prefer a method that tests the instrument over the full range of the beam currents I regularly utilize.

By the way, you mentioned that you do use high beam currents for minor and trace elements. I'm sure it has occurred to you that running a primary standard for a trace element, say Ti Ka using Ti metal or TiO2 (for maximum analytical precision), with the same beam current (100-200 nA?) as one's unknowns, will result in a very large dead time correction on the standard intensities, which although not the largest source of error for a trace element in an unknown (Donovan, 2011), is still worth correcting for accurately.  That is where the expanded dead time correction expression really comes into its own.

On the other hand, if one chooses to run their primary standard at a lower beam current (say 30 nA) than one's unknowns, in order to avoid a large dead time correction, then the accuracy issue will be with the picoammeter calibration.  The accuracy issue cannot be avoided either way.

The nice thing about the constant k-ratio dead time constant calibration method is that when one acquires k-ratios using the same beam current for the primary and secondary standard (or unknown), one avoids any dependency on the picoammeter accuracy.  That's the beauty of a k-ratio measured at the same beam current!  And of course the whole point of the constant k-ratio method is that one should obtain the same k-ratio at *any* beam current.  One then simply adjusts the dead time constant until the k-ratios are as consistently constant as possible over the range of beam currents (which could be plotted in any order of beam current, even randomly!), of course using the high precision expanded dead time correction expression for best accuracy.
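
As a rough illustration of that adjustment step (just a sketch with placeholder names, not actual Probe for EPMA code), one can apply a trial dead time correction to the raw intensities of the primary and secondary standards measured at the same nominal currents, form the k-ratios, and pick the dead time that makes them most nearly constant across the current series; the traditional single-term expression is used here only for brevity, and an expanded expression would slot in the same way:

[code]
import numpy as np

def corrected(n_obs, tau):
    """Traditional single-term dead time correction: N = N'/(1 - tau*N')."""
    return n_obs / (1.0 - tau * n_obs)

def k_ratio_spread(tau, n_unk, n_std):
    """Relative spread of k-ratios over the beam current series for a trial tau.
    n_unk and n_std are apparent count rates (cps) on the secondary and primary
    standards, paired element-by-element at the same nominal beam currents."""
    k = corrected(n_unk, tau) / corrected(n_std, tau)
    return np.std(k) / np.mean(k)

def fit_dead_time(n_unk, n_std, taus=np.linspace(0.5e-6, 3.0e-6, 2501)):
    """Grid search for the tau (s) that makes the k-ratios most nearly constant."""
    spreads = [k_ratio_spread(t, n_unk, n_std) for t in taus]
    return taus[int(np.argmin(spreads))]
[/code]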

Not only that, but if one wants to then evaluate their picoammeter accuracy, then one can take their previously acquired constant k-ratio data and simply plot the secondary standard k-ratios against any *single* primary standard, as I showed in the post here:

https://probesoftware.com/smf/index.php?topic=1466.msg10972#msg10972

Try it, you'll like it!   :D
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 04, 2022, 04:01:02 PM
I will give it a bit more thought!   :)

And the first thought that comes to my mind is that I'm pleased that you find the Heinrich ratio method to work for you.  I myself prefer a method that tests the instrument over the full range of the beam currents I regularly utilize.

By the way, you mentioned that you do use high beam currents for minor and trace elements. I'm sure it has occurred to you that running a primary standard for a trace element, say Ti Ka using Ti metal or TiO2 (for maximum analytical precision), with the same beam current (100-200 nA?) as one's unknowns, will result in a very large dead time correction on the standard intensities, which although not the largest source of error for a trace element in an unknown (Donovan, 2011), is still worth correcting for accurately.  That is where the expanded dead time correction expression really comes into its own.

On the other hand, if one chooses to run their primary standard at a lower beam current (say 30 nA) than one's unknowns, in order to avoid a large dead time correction, then the accuracy issue will be with the picoammeter calibration.  The accuracy issue cannot be avoided either way.

The nice thing about the constant k-ratio dead time constant calibration method is that when one acquires k-ratios using the same beam current for the primary and secondary standard (or unknown), one avoids any dependency on the picoammeter accuracy.  That's the beauty of a k-ratio measured at the same beam current!  And of course the whole point of the constant k-ratio method is that one should obtain the same k-ratio at *any* beam current.  One then simply adjusts the dead time constant until the k-ratios are as consistently constant as possible over the range of beam currents (which could be plotted in any order of beam current, even randomly!), of course using the high precision expanded dead time correction expression for best accuracy.

Not only that, but if one wants to then evaluate their picoammeter accuracy, then one can take their previously acquired constant k-ratio data and simply plot the secondary standard k-ratios against any *single* primary standard, as I showed in the post here:

https://probesoftware.com/smf/index.php?topic=1466.msg10972#msg10972

Try it, you'll like it!   :D

Once again, here is one of my objections to your approach, stated slightly differently:  You’ve roughly eliminated current as a variable in your calculation of dead time via collection of Si Ka k-ratio data at given current, yet, for the general case, you still use an expression for dead time that depends on accurate measurement of current.  This makes no sense to me, as your approach is inconsistent.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on July 04, 2022, 07:29:51 PM
I will give it a bit more thought!   :)

And the first thought that comes to my mind is that I'm pleased that you find the Heinrich ratio method to work for you.  I myself prefer a method that tests the instrument over the full range of the beam currents I regularly utilize.

By the way, you mentioned that you do use high beam currents for minor and trace elements. I'm sure it has occurred to you that running a primary standard for a trace element, say Ti Ka using Ti metal or TiO2 (for maximum analytical precision), with the same beam current (100-200 nA?) as one's unknowns, will result in a very large dead time correction on the standard intensities, which although not the largest source of error for a trace element in an unknown (Donovan, 2011), is still worth correcting for accurately.  That is where the expanded dead time correction expression really comes into its own.

On the other hand, if one chooses to run their primary standard at a lower beam current (say 30 nA) than one's unknowns, in order to avoid a large dead time correction, then the accuracy issue will be with the picoammeter calibration.  The accuracy issue cannot be avoided either way.

The nice thing about the constant k-ratio dead time constant calibration method is that when one acquires k-ratios using the same beam current for the primary and secondary standard (or unknown), one avoids any dependency on the picoammeter accuracy.  That's the beauty of a k-ratio measured at the same beam current!  And of course the whole point of the constant k-ratio method is that one should obtain the same k-ratio at *any* beam current.  One then simply adjusts the dead time constant until the k-ratios are as consistently constant as possible over the range of beam currents (which could be plotted in any order of beam current, even randomly!), of course using the high precision expanded dead time correction expression for best accuracy.

Not only that, but if one wants to then evaluate their picoammeter accuracy, then one can take their previously acquired constant k-ratio data and simply plot the secondary standard k-ratios against any *single* primary standard, as I showed in the post here:

https://probesoftware.com/smf/index.php?topic=1466.msg10972#msg10972

Try it, you'll like it!   :D

Once again, here is one of my objections to your approach, stated slightly differently:  You’ve roughly eliminated current as a variable in your calculation of dead time via collection of Si Ka k-ratio data at given current, yet, for the general case, you still use an expression for dead time that depends on accurate measurement of current.  This makes no sense to me, as your approach is inconsistent.

That is incorrect.  No wonder it makes no sense to you!

The expanded expression for dead time correction is the same as the traditional (single term) expression, which also depends *only* on count rate.  The only difference is that instead of ignoring the infinite series of the Taylor expansion, it incorporates probability terms up to the 6th power for vastly improved precision at very high count rates.

Also by measuring the primary and secondary standard intensities using the *same* beam current, we have not merely "roughly eliminated  current as a variable in your calculation of dead time", but rather we have *completely* eliminated current as a variable in our calculation of dead time because we measure each k-ratio using the *same* beam current, at multiple beam currents, which simply acts as a proxy for different count rates.

You will note that like the traditional expression, beam current does not appear as a variable in our expanded dead time expression.  Only count rate and the dead time constant as shown here:

https://probesoftware.com/smf/index.php?topic=1466.msg10909#msg10909
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 04, 2022, 11:42:56 PM
Hi John,

OK, I see what you mean.  I was mistaken.  Sorry about that.

To get the k-ratio, though, you must be measuring Si Ka, for instance, at different times on the primary and secondary standards.  Thus the current might not be exactly the same, and, obviously, more than one material must be analyzed.  One material might be carbon-coated more thickly than the other or might have contamination on the surface.  One material might be more easily beam-damaged than the other or prone to accumulation of static charge…  and so on.

The advantage of the ratio method of Heinrich et al. (1966) lies in the fact that measurements are performed simultaneously on a given material using different spectrometers such that all of the sources of error that I listed above are effectively eliminated completely.  The method also allows one to work exclusively with metals and semi-metals, and so conductivity and beam damage are not issues.  I believe it is the best approach, and I’ll continue to post results.

I’m genuinely not that interested in anything other than a reliable linear expression (on a plot of N’/N versus N’.)  If I want to analyze for very minor amounts of Co, Ni, and As in pyrite, then I calibrate on a pyrite standard (for Fe and S) at 50 or 100 nA and calibrate on Co, Ni, and As standards at 10 or 20 nA.  I try to put as little faith in the dead time correction as possible.  At the very least, the ratio method has demonstrated that my Xe counters show linear behavior up to at least 85 kcps.  On plots of N’/I versus N’, I’ve always had difficulty assessing this.

Brian
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on July 05, 2022, 09:53:57 AM
Hi John,

OK, I see what you mean.  I was mistaken.  Sorry about that.

To get the k-ratio, though, you must be measuring Si Ka, for instance, at different times on the primary and secondary standards.  Thus the current might not be exactly the same, and, obviously, more than one material must be analyzed.  One material might be carbon-coated more thickly than the other or might have contamination on the surface.  One material might be more easily beam-damaged than the other or prone to accumulation of static charge…  and so on.

 8)

OK, no worries, glad it makes sense to you now. 

Yes, with the constant k-ratio method we are not measuring the intensities at exactly the same time, but that is also true of our quantitative measurements, so we do hope our beam currents are stable over the period of a few minutes.  Of course, for these k-ratios utilized in the constant k-ratio method we also measure the beam current and perform a beam drift correction, just as we do for normal quant measurements.  So the beam drift effects should be minimal for two materials acquired one after the other.

In fact, in Probe for EPMA we utilize exactly the same automation methods for this constant k-ratio dead time calibration method, as we do for normal quantitative measurements.  The only exception being that when using multiple (beam current) setups for quant analyses we would normally acquire all the multiple setups one after the other on each sample, but for the constant k-ratio method we need to acquire the multiple (beam current) setups one at a time on each sample (all samples at beam current N1, then all samples at beam current N2, then all samples at beam current N3, etc.).  That is why we added the "one at a time" checkbox in the Multiple Setups dialog in the Automate! window to automate this.

If we were instead automating multiple kilovolt setups (e.g., thin film analysis) it wouldn't have mattered because the software would automatically have figured out which primary standard is associated with each secondary standard at the different keVs, but of course we don't distinguish samples acquired with different beam currents in that manner for quantitative analysis.  So we need to use this new "one at a time" acquisition checkbox in PFE to ensure that the primary and secondary standards are acquired together in time, so the k-ratios are constructed using intensities acquired at the same beam current.

That's kind of the whole point of the constant k-ratio method using different beam currents as a proxy for count rate: the k-ratios at all beam currents should all be the same if our dead time calibrations are correct.   :D

As for different carbon coats on the materials, that's an easy one. Because we are using the same quant methods for this constant k-ratio method as we utilize for normal quantification, we already have the correction for coating material/thickness built into Probe for EPMA:

https://probesoftware.com/smf/index.php?topic=23.0

Though so far I have only utilized materials mounted on the same acrylic mount for these constant k-ratio calibrations:

https://probesoftware.com/smf/index.php?topic=172.msg8991#msg8991

The advantage of the ratio method of Heinrich et al. (1966) lies in the fact that measurements are performed simultaneously on a given material using different spectrometers such that all of the sources of error that I listed above are effectively eliminated completely.  The method also allows one to work exclusively with metals and semi-metals, and so conductivity and beam damage are not issues.  I believe it is the best approach, and I’ll continue to post results.

Sure, that is a nice aspect of the Heinrich method.  It's pretty clever actually, though I have to say, I'm not seeing beam damage effects on my constant k-ratio runs, but I am defocusing the beam somewhat for these beam sensitive materials. As you will see in my next post in the other dead time topic, I have acquired data on benitoite/SiO2 up to 400 nA and it yields consistent k-ratios, though I did defocus the beam to 15 μm for the test in an abundance of caution.

I would prefer to deal with beam damage effects separately using the TDI correction in PFE, which works great for these sorts of issues. In fact, the TDI correction could be applied to the acquisition of the constant k-ratio data because the process is exactly the same as for normal quant runs and can be completely automated.  In fact, I should turn on TDI acquisitions for my next constant k-ratio testing...  good idea!

The other thing that I think users will like is that they will find the constant k-ratio method very easy to use, because it's essentially just a normal probe run, so the process will be very familiar to them. It only takes a few minutes to set it up and then one can just let it acquire lots of data for 8 or 12 hours or more fully automated.

Once the constant k-ratio data is acquired and plotted as a function of beam current, it's very simple (and quite intuitive) to quantitatively evaluate the magnitude of the dead time calibration errors, and adjust the dead time constants accordingly.

I’m genuinely not that interested in anything other than a reliable linear expression (on a plot of N’/N versus N’.)  If I want to analyze for very minor amounts of Co, Ni, and As in pyrite, then I calibrate on a pyrite standard (for Fe and S) at 50 or 100 nA and calibrate on Co, Ni, and As standards at 10 or 20 nA.  I try to put as little faith in the dead time correction as possible.  At the very least, the ratio method has demonstrated that my Xe counters show linear behavior up to at least 85 kcps.  On plots of N’/I versus N’, I’ve always had difficulty assessing this.

Brian

Different beam currents for standards and unknowns is a reasonable strategy as I mentioned previously (though I have to say it surprised me when I first started in microanalysis, and found that the usual scientific precept of controlling for all variables was ignored in this way).

So instead of putting our faith in the dead time correction, we are rather putting our faith in the picoammeter linearity/accuracy.  Checking the accuracy of our picoammeters would be a reasonable next step, because we all utilize different beam currents in runs with major and minor elements. We are obtaining a high accuracy current source (it should arrive today in fact) for testing our picoammeter. I'll keep everyone posted on that process.

But I also think you really should consider the use of the expanded dead time expression as it would further extend your x-ray linearity.  The good news is that our dead time constants can now be accurately calibrated with two different methods without relying on the traditional beam current method. So this is real progress!  Next step: calibrating our picoammeters!    ;D
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 05, 2022, 01:10:20 PM
But I also think you really should consider the use of the expanded dead time expression as it would further extend your x-ray linearity.  The good news is that our dead time constants can now be accurately calibrated with two different methods without relying on the traditional beam current method. So this is real progress!  Next step: calibrating our picoammeters!    ;D

Don’t forget about spectrometer alignment.  Something appears to be amiss with your channel 3.  On LPET, have you verified that you get the maximum count rate for Si Ka and Cr Ka when the stage is in optical focus (after doing a new peak search at each level of focus)?
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on July 05, 2022, 02:00:52 PM
But I also think you really should consider the use of the expanded dead time expression as it would further extend your x-ray linearity.  The good news is that our dead time constants can now be accurately calibrated with two different methods without relying on the traditional beam current method. So this is real progress!  Next step: calibrating our picoammeters!    ;D

Don’t forget about spectrometer alignment.  Something appears to be amiss with your channel 3.  On LPET, have you verified that you get the maximum count rate for Si Ka and Cr Ka when the stage is in optical focus (after doing a new peak search at each level of focus)?

Yes, I also think there's an issue specifically with spectro 3, but we just had Edgar Chavez out three weeks ago and he and our engineer went through this spectro alignment. So it could be an asymmetrical diffraction issue but then why would it show up for both large area crystals on that spectrometer?  Probably worth checking the alignment again...

And with this observation of spectro 3 k-ratios being an outlier compared to the other spectrometers, I feel compelled again to point out the flexibility of the constant k-ratio method. Specifically that it can be utilized for three separate calibration checks: the dead time constants themselves, the picoammeter calibration, and the consistency of k-ratios from one spectrometer to another.
And finally I have to mention, by utilizing the expanded dead time expression for all three tests, as seen in the latest post here, we can obtain consistent k-ratios up to 250 nA (>400K cps on SiO2!) on a large TAP crystal on spectro 2:

https://probesoftware.com/smf/index.php?topic=1466.msg10982#msg10982
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 08, 2022, 10:38:02 PM
Earlier this week, I collected data to determine dead time according to the ratio method of Heinrich et al. (1966) using the Ti Kα and Kβ lines.  As in previous cases, I used uncoated and recently polished Ti metal for the measurements and measured at the Kα or Kβ peak on three spectrometers simultaneously using LiF, LiFL, and LiFH.

When working with Ti using this method in conjunction with a sealed Xe counter, it is important to note that, while the energy of Ti Kα lies below that of the Xe L3 absorption edge, the energy of Ti Kβ falls above it, and so electronic gain, anode bias, and baseline voltage must all be considered especially carefully.  When measuring Ti Kβ, the baseline needs to be set so that it always fully excludes the Xe escape peak when working at current ranging in my case from 5 nA to 540 nA in measurement set 1 and from 5 nA to 700 nA in measurement set 2.  During the first measurement set, I counted simultaneously at the Ti Kα peak position on channel 2/LIFL and at the Kβ position on channels 3/LiF and 5/LiFH.  During the second measurement set, I counted simultaneously at the Ti Kβ peak position on channel 2/LIFL and at the Kα position on channels 3/LiF and 5/LiFH. At IPCD = 700 nA (V = 15 kV), the measured Ti Kα count rate on channel 3/LiF was ~60 kcps; on channel 5/LiFH, it was ~227 kcps.

The obtained values for τ are as follows:

Ti Kα/Kβ ratio method [μs]:
channel 2/LiFL:  1.43, 1.46
channel 3/LiF:  1.37
channel 5/LiFH:  1.42
Current-based method, Ti Kα [μs]:
channel 2/LiFL:  1.25
channel 3/LiF:  1.03
channel 5/LiFH:  1.20
Current-based method, Ti Kβ [μs]:
channel 2/LiFL:  0.81
channel 3/LiF:  -0.06
channel 5/LiFH:  1.04

(https://probesoftware.com/smf/gallery/381_08_07_22_9_35_49.png)

(https://probesoftware.com/smf/gallery/381_08_07_22_9_36_57.png)

Note that 1) the Ti dead time values calculated using the ratio method are essentially the same as those calculated for Cu and Fe and 2) use of the current-based method once again causes underestimation of the dead time, with channel 3/LiF showing the greatest departure.  Combining the results of the ratio method for Cu, Fe, and Ti, for channel 2 (six calculations total), the range of calculated dead time values is 1.43-1.50 μs, with an average of 1.46 μs.  The observed variation likely can be ascribed to counting error.  Dead time values for channel 3 (three calculations) are 1.45, 1.41, and 1.37 μs (Cu, Fe, and Ti, respectively), giving an average of 1.41 μs.  For channel 5 (three calculations), I’ve obtained 1.42, 1.38, and 1.41 μs, producing an average of 1.40 μs.

As an aside, assuming that linear behavior is maintained on a plot of N’/N versus N’ up to 80 kcps and assuming that τ = 1.45 μs, then N’/N = 0.884 (exactly) at that measured count rate, and so true count rate (N) is 90498 s-1.  This represents a roughly 13% correction relative to the measured count rate.  Once again, I prefer to work with dead time corrections smaller than this, and so I’m happy to remain within the linear correction range.
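
A couple of lines of Python reproduce that arithmetic with the linear expression:

[code]
tau, n_obs = 1.45e-6, 80_000.0          # dead time (s), measured rate (cps)
n_true = n_obs / (1.0 - tau * n_obs)    # N = N'/(1 - tau*N')
print(round(n_true))                    # 90498 cps
print(round(n_true / n_obs - 1.0, 3))   # 0.131, i.e. a roughly 13% correction
[/code]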

Returning to the current-based method, an anomaly appears in all plots of N’/I versus N’ collected in both Ti measurement sets.  For instance, in the plot of the data for Ti Kα on channel 2/LiFL (below), the data take a trajectory in the form of an arc, such that initial data (lowest current) plot below the regression line, then subsequently well above it (around 35 kcps), and then, in the high end of the linear range (up to ~85 kcps), most fall slightly below it.  Above 85 kcps, significantly non-linear behavior truly is present, though assessing its onset on the plot is virtually impossible.  Whatever the nature/source of the problem (certainly related to the picoammeter), the anomaly appears to vary over time in magnitude and form.  Compare, for instance with my plots for Cu (anomaly pronounced but different) and Fe (anomaly much less pronounced).  Regardless, since it is present in each set of simultaneous measurements, the error cancels when calculating ratios.  Note that, had I chosen to use only apparent count rates falling below 35 kcps, my calculated dead time would have been 1.01 μs rather than 1.25 μs.

(https://probesoftware.com/smf/gallery/381_08_07_22_9_42_16.png)

The same pattern is easily visible on channel 5/LiFH.  In calculation of dead time, had I considered only count rates below 35 kcps, I would have obtained 0.98 μs rather than 1.20 μs.

(https://probesoftware.com/smf/gallery/381_08_07_22_9_43_10.png)

The following plot gives a more accurate depiction of the magnitude to which departure from linearity affects the dead time correction at high count rates.  I’ve plotted measured Ti Kβ count rate on channel 2/LiFL divided by the measured Ti Kα count rate on channel 5/LiFH versus the measured Ti Kβ count rate on channel 2/LiFL.  The Ti Kβ count rate on LiFL reaches only 33 kcps (at 700 nA), and so all obvious non-linearity should be due to Ti Kα on LiFH.  Above 85 kcps (for Ti Kα), all ratio values fall above the regression line, though only very slightly up to 113 kcps (IPCD = 260 nA), and so I’ll take 85 kcps as my upper limit on channel 5; corresponding plots for Cu and Fe support this limit, which appears to be applicable to the other Xe counters as well.

(https://probesoftware.com/smf/gallery/381_08_07_22_9_44_11.png)

My likely next step, when I get a chance, will be to determine dead time via the ratio method for Ti using my three PET crystals.  I’ll then move on to the spectrometers with gas-flow counters.
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on July 17, 2022, 07:50:36 PM
I’ve recently made determinations of the dead time constant via the ratio method using the Mo Lα and Mo Lβ3 peaks on uncoated, recently cleaned Mo metal in the same manner as in previous runs, though I increased the number of calculated ratios within a given set (60 in the first and 70 in the second).  I chose the Lβ3 peak instead of Lβ1 because I wanted to obtain a ratio of peak count rates between about 15 and 20 on a given spectrometer such that the source of noticeably non-linear correction within each calculated ratio would be accounted for essentially solely by the Mo Lα measurement.  While Si Kα/Kβ might have worked for this purpose just as well, the ratio of peak heights is even greater (so that it would have been unreasonably time-consuming to obtain an adequate number of Si Kβ counts to keep counting error low).  Unfortunately, Ti Kβ cannot be accessed on PETH, which I’d forgotten about.  For the first measurement set on Mo, I collected data at PCD currents ranging from 5 to 600 nA, and, for the second set, from 5 to 700 nA.  The maximum observed count rate (for Mo Lα on PETH at IPCD = 700 nA) was roughly 240 kcps.  On channel 2/PETL at the same current, observed Mo Lβ3 count rate was about 13 kcps.

Notation for the two measurement sets is as follows:

N’11 = measured Mo Lα count rate on channel 2/PETL
N’21 = measured Mo Lβ3 count rate on channel 3/PETJ
N’31 = measured Mo Lβ3 count rate on channel 5/PETH

N’12 = measured Mo Lβ3 count rate on channel 2/PETL
N’22 = measured Mo Lα count rate on channel 3/PETJ
N’32 = measured Mo Lα count rate on channel 5/PETH

I was hoping that my calculated dead times would be virtually identical to those I’ve previously determined, as I no longer believe that X-ray counter dead time varies systematically with X-ray energy.  (But maybe I'm wrong considering the pattern possibly emerging in the calculated dead times.)  Perhaps my PHA settings weren’t quite right?  If so, considering that PHA shifts should be smaller for lower X-ray energies, my most recent determinations on Ti and Mo should be the most accurate.  The age of the counters could be a factor as well, as anomalies certainly are present in the pulse amplitude distributions, at least under certain conditions.  I examined these distributions carefully, though, across a wide range of count rates, and I don’t think I made any serious errors.  Maybe I’ll run through the whole process again on Cu or Fe and see if I get the same results as before.  Currently, I have dead time constants for my channels 2, 3, and 5 set at 1.45, 1.40, and 1.40 μs, respectively, but I might eventually lower the values for channels 2 and 3 a little – channel 5 has been more consistent.  At any rate, the values for Mo aren’t drastically lower and are most similar to those obtained from Ti.  Continuing the same pattern as before, channel 2 gives the largest dead time constant using the ratio method:

Mo Lα/Mo Lβ3 ratio method [μs]:
channel 2/PETL:  1.38, 1.43     (Ti: 1.43, 1.46; Fe: 1.44, 1.48; Cu: 1.50, 1.46)
channel 3/PETJ:  1.33              (Ti: 1.37; Fe: 1.41; Cu: 1.45)
channel 5/PETH:  1.37             (Ti: 1.42; Fe: 1.41; Cu: 1.42)
Current-based method, Mo Lα [μs]:
channel 2/PETL:  1.27
channel 3/PETJ:  1.17
channel 5/PETH:  1.27
Current-based method, Mo Lβ3 [μs]:
channel 2/PETL:  0.18
channel 3/PETJ:  -1.15
channel 5/PETH:  0.20

(https://probesoftware.com/smf/gallery/381_17_07_22_7_03_01.png)

(https://probesoftware.com/smf/gallery/381_17_07_22_7_04_13.png)

The apparent picoammeter anomaly was somewhat less pronounced than it was during my measurements on Ti.  Its somewhat lower magnitude accounts for the slightly larger Mo Lα dead time values (compared to those for Ti Kα) calculated using the current-based method below 85 kcps (measured):

(https://probesoftware.com/smf/gallery/381_17_07_22_7_05_40.png)

Channel 3 shows the ugliness a little better due to its low count rate at high current:

(https://probesoftware.com/smf/gallery/381_17_07_22_7_06_37.png)

As an aside, I’d like to emphasize that operating sealed Xe counters routinely at count rates greater than a few tens of kcps constitutes abuse of them and shortens their useful lifespans, even if anode bias is kept relatively low (~1600-1650 V).

Below I’ve fit both a straight line and a parabola (dashed curve) to the uncorrected data for N’12/N’32 (where N’12 corresponds to Mo Lβ3 on channel 2/PETL and N’32 to Mo Lα on channel 5/PETH).  Although the parabola fits the data very nicely (R2 = 0.9997), easy calculation of τ1 and τ2 relies on use of a correction function linear in τ, such as N’/N = 1 – τN’.  Although the two dead time constants cannot be extracted readily from the second degree equation, since Mo Lβ3 count rate on channel 2/PETL does not exceed 13 kcps (at IPCD = 700 nA), essentially all non-linear behavior is contributed to the ratio by Mo Lα on channel 5/PETH, and so it alone accounts for the quadratic term.  The regression line (R2 = 0.991) that provides the linear approximation at measured count rates below ~85 kcps is roughly tangent to the parabola at N’12 = N12 = 0, and so both intersect the vertical axis at or close to the true ratio, N32/N12 (subject to counting error, of course).  The size of the parabola is such that, below ~85 kcps, it is approximated very well by a straight line (considering counting error).  Further, in truth, the dead time constant determined in the effectively linear correction region should work just as well in equations of higher degree or in any equation in which the true count rate, N, is calculated (realistically) as a function of the measured count rate, N’.  If the dead time constant can be calculated simply and accurately in the linear correction region on a single material (and only at the peak), then there is no need to make the process more complicated if only to avoid additional propagated counting error and potential systematic errors.
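
A minimal sketch of the two fits just described (placeholder array names of my own, not the spreadsheet calculation itself) might look like this:

[code]
import numpy as np

def compare_ratio_fits(n12, n32, linear_limit=85_000.0):
    """Fit the uncorrected ratio N'12/N'32 versus N'12 two ways.
      n12 -- apparent Mo Lb3 rate on channel 2/PETL (cps, the slow channel)
      n32 -- apparent Mo La rate on channel 5/PETH (cps, the fast channel)
    Returns (linear_coeffs, quadratic_coeffs) from numpy.polyfit."""
    ratio = n12 / n32
    quad = np.polyfit(n12, ratio, 2)             # parabola through all points
    mask = n32 < linear_limit                    # ~85 kcps limit on the fast channel
    lin = np.polyfit(n12[mask], ratio[mask], 1)  # line through the linear region only
    return lin, quad
[/code]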

(https://probesoftware.com/smf/gallery/381_24_07_22_10_17_36.png)

And here I’ve magnified the above plot to show the region of relatively low count rates:

(https://probesoftware.com/smf/gallery/381_24_07_22_10_18_03.png)
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on July 18, 2022, 08:40:52 AM
As an aside, I’d like to emphasize that operating sealed Xe counters routinely at count rates greater than a few tens of kcps constitutes abuse of them and shortens their useful lifespans, even if anode bias is kept relatively low (~1600-1650 V).

This comment brings to mind something that Colin MacRae said to me during one of my visits to Australia a while back. He said that xenon detectors will either get contaminated or pumped out after a few years, so that he replaces his xenon detectors every 3 to 4 years on their JEOL instruments at CSIRO.

I was hoping that my calculated dead times would be virtually identical to those I’ve previously determined, as I no longer believe that X-ray counter dead time varies systematically with X-ray energy.  (But maybe I'm wrong considering the pattern possibly emerging in the calculated dead times.)  Perhaps my PHA settings weren’t quite right?

This is also something that I've been looking at, so I started a new topic on this question. I hope we can share data and try to come to a sort of a conclusion on this very interesting question:

https://probesoftware.com/smf/index.php?topic=1475.0
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 09, 2022, 04:51:33 PM
I’ve collected data to determine the dead time constant via the ratio method using Si Kα and Si Kβ diffracted by TAPJ on my channels 1 and 4.  I calculated a total of 80 ratios in each measurement set, with measurements made at PCD currents ranging from 1 to 136 nA.  On channel 1, the Si Kα count rate at 136 nA was 226 kcps; on channel 4, Si Kα count rate at the same current was 221 kcps.

Notation for the two measurement sets is as follows:

N’11 = measured Si Kα count rate on channel 1/TAPJ
N’21 = measured Si Kβ count rate on channel 4/TAPJ

N’12 = measured Si Kβ count rate on channel 1/TAPJ
N’22 = measured Si Kα count rate on channel 4/TAPJ

Results for the dead time constant are as follows:

Si Kα/Si Kβ ratio method [μs]:
channel 1/TAPJ:  1.44
channel 4/TAPJ:  1.07
Current-based method, Si Kα [μs]:
channel 1/TAPJ:  1.38
channel 4/TAPJ:  1.12
Current-based method, Si Kβ [μs]:
channel 1/TAPJ:  0.64
channel 4/TAPJ:  -0.42

Notice that the results for the current-based method using Si Kα are very similar to those for the ratio method.  This is likely because the high count rates attained at relatively low currents minimized the influence of picoammeter nonlinearity.  The Si Kα count rate was in the vicinity of 70 kcps at 30 nA and around 90 kcps at 40 nA.  Using Si Kβ for the same purpose produces the usual anomalously low (or negative) dead time values, with count rates reaching only ~11 kcps at IPCD = 136 nA.

(https://probesoftware.com/smf/gallery/381_09_08_22_3_52_16.png)

(https://probesoftware.com/smf/gallery/381_09_08_22_4_28_21.png)

In the ratio plots above, I’ve propagated counting error in each ratio used in my regressions and have plotted lines at two standard deviations above and below the linear fit (i.e., I am assuming that the line represents the mean).  The propagated error is calculated for the left-hand plot as

σratio = (N'11/N'21) * sqrt( 1/(N'11*t11) + 1/(N'21*t21) )

where each N' carries units of s⁻¹ and t is the count time in seconds.  The small, abrupt shifts in the calculated uncertainty are due to instances in which I changed (decreased) the count time as I increased the beam current.  To be absolutely clear, I've plotted the counting error using the measured number of X-ray counts along with the Si Kα count rate corrected for dead time using the linear model.  Within the effectively linear region, all ratios except one (at N'11 = 15.4 kcps) plot within two standard deviations of the line (noting that one value out of every twenty should be expected to plot outside the envelope).  This suggests that counting error is the dominant source of uncertainty and that the linear model corrects adequately for dead time at measured count rates up to several tens of kcps.  Not a big surprise.
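For anyone who wants to reproduce the envelope, here is a rough sketch of that error calculation (the count rates and count times are made up, standing in for my measurements):

Code: [Select]
# Counting-error sketch for the ratio N'11/N'21 and its 2-sigma envelope.
# Count rates in cps, count times in s; the numbers are hypothetical.
import numpy as np

def sigma_ratio(n11, t11, n21, t21):
    """One-sigma counting error of the ratio n11/n21."""
    r = n11 / n21
    return r * np.sqrt(1.0 / (n11 * t11) + 1.0 / (n21 * t21))

n11, t11 = 60000.0, 30.0   # Si Ka on channel 1 (hypothetical)
n21, t21 = 3000.0, 30.0    # Si Kb on channel 4 (hypothetical)
s = sigma_ratio(n11, t11, n21, t21)
print(f"ratio = {n11/n21:.2f} +/- {2.0*s:.3f} (2 sigma)")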

Ultimately, the goal is for the corrected data to plot along (within counting error of) the horizontal line representing the true ratio, as shown in each plot below.  When I apply the two-term (Willis) and six-term (Donovan et al.) equations, the models can be seen not to fit the data particularly well at high count rates when using the dead time constant determined via the ratio method; further, they produce positive departures in the effectively linear range on the left-hand plot and negative departures on the right-hand plot, which is physically impossible, as values are pushed beyond the limit set by the true ratio.  Further, using the plot of N12/N22 versus N12 (i.e., Si Kβ count rate on channel 1 on the horizontal axis) as an example, if I increase the value of τ2 from 1.07 to 1.20 μs, then the fit (solid line in the second set of plots) deteriorates badly, particularly within the effectively linear region, while the overall fit is only marginally improved (judging by eye).  At this point, the dead time constant has been stripped of all physical meaning and serves simply as a "fudge factor."
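For reference, the test I am applying amounts to the sketch below: correct both channels with a chosen expression (the linear one is shown; the two-term or six-term form can be substituted) and see whether the corrected ratio is constant.  The arrays are placeholders rather than my measurements; the dead time values are the ratio-method results quoted above:

Code: [Select]
# Flatness check: the corrected ratio should scatter about a constant.
# n12 = measured Si Kb on ch 1 (cps), n22 = measured Si Ka on ch 4 (cps); hypothetical values.
import numpy as np

def correct_linear(n_obs, tau):
    """Linear (single-term) dead time correction: N = N'/(1 - tau*N')."""
    return n_obs / (1.0 - tau * n_obs)

n12 = np.array([499.6, 1994.3, 4964.3, 8884.9])
n22 = np.array([9894.0, 38358.0, 90334.0, 150931.0])

tau1, tau2 = 1.44e-6, 1.07e-6   # ratio-method values from above, in seconds
ratio_corr = correct_linear(n12, tau1) / correct_linear(n22, tau2)
print(ratio_corr)   # ideally constant to within counting error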

(https://probesoftware.com/smf/gallery/381_09_08_22_3_54_44.png)

In the plots below, which show the results of modifying τ2, the right-hand plot is an enlargement of the left-hand plot.  Because the measured Si Kβ count rate (N'12) is relatively low, changing the value of τ1 has only a minimal effect on the plot.  (The arrow on each plot, above and below, points at the highest count rate used in the linear regression.)

(https://probesoftware.com/smf/gallery/381_09_08_22_3_55_46.png)

In summary, while the two-term and six-term equations may appear to provide better fits than the linear equation over a very wide range of count rates, the functional forms of the corrections are easily observed to be physically unrealistic.  Graphically, the effect of the added terms is simply to allow the function to bend or pivot in an apparently favorable direction, but at the risk of producing variations in slope that are inexplicable in terms of physical processes.  In contrast, the failure of the linear equation at count rates exceeding several tens kcps is easily rationalized physically and is readily identifiable graphically; the slope of the corrected data steepens progressively with increasing count rate, indicating increasing underestimation of the correction as counted X-rays interfere increasingly with the counting of additional X-rays.  As far as I can tell – and maybe I’m totally wrong – the form of the Willis equation is largely ad hoc.  I have a hard time rationalizing the insertion of a (convergent) power series in the denominator of the correction function, N(N’).  For the “best” correction, why not just let the series converge?  The two-term and six-term equations should be abandoned.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 09, 2022, 07:41:35 PM
...the failure of the linear equation at count rates exceeding several tens kcps is easily rationalized physically and is readily identifiable graphically; the slope of the corrected data steepens progressively with increasing count rate, indicating increasing underestimation of the correction as counted X-rays interfere increasingly with the counting of additional X-rays.  As far as I can tell – and maybe I’m totally wrong – the form of the Willis equation is largely ad hoc.  I have a hard time rationalizing the insertion of a (convergent) power series in the denominator of the correction function, N(N’).  For the “best” correction, why not just let the series converge?  The two-term and six-term equations should be abandoned.

Wow, you are almost there.  Let me see if I can help.  Please note the bolded text from your post above, as this statement of yours is actually correct.

The reason the *traditional* dead time expression is non-physical is because (as you say), it does not account for more than a single co-incident photon.  The Willis expression's extra "squared" term accounts for 2 coincident photons, and so on through the additional terms in the six term expression.

In other words, it is the Willis and six term expressions that are actually *more* physically realistic than the traditional single term expression, because photons are random and the probability of multiple photon coincidences is strictly a function of the count rate and the dead time constant.

So I will be merciful and reveal the integration of an infinite number of photon coincidences that Aurelien Moy derived from this Maclaurin-like series:

(https://probesoftware.com/smf/gallery/395_09_08_22_7_39_33.png)
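In plain text (transcribed from the implementation shown further down in this thread), the logarithmic form is N = N'/(1 + ln(1 − τN')), where N' is the observed count rate and τ is the dead time constant; the denominators of the two term and six term expressions are simply this same quantity with the series for ln(1 − τN') truncated after the second and sixth powers of τN'.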

Pretty cool, huh?  Please note that the six term and log expressions give essentially identical results as expected:

https://probesoftware.com/smf/index.php?topic=1466.msg11041#msg11041

This has been implemented in the Probe for EPMA software for several weeks now.
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 09, 2022, 10:07:39 PM
...the failure of the linear equation at count rates exceeding several tens kcps is easily rationalized physically and is readily identifiable graphically; the slope of the corrected data steepens progressively with increasing count rate, indicating increasing underestimation of the correction as counted X-rays interfere increasingly with the counting of additional X-rays.  As far as I can tell – and maybe I’m totally wrong – the form of the Willis equation is largely ad hoc.  I have a hard time rationalizing the insertion of a (convergent) power series in the denominator of the correction function, N(N’).  For the “best” correction, why not just let the series converge?  The two-term and six-term equations should be abandoned.

Wow, you are almost there.  Let me see if I can help.  Please note the bolded text from your post above, as this statement of yours is actually correct.

The reason the *traditional* dead time expression is non-physical is because (as you say), it does not account for more than a single co-incident photon.  The Willis expression's extra "squared" term accounts for 2 coincident photons, and so on through the additional terms in the six term expression.

In other words, it is the Willis and six term expressions that are actually *more* physically realistic than the traditional single term expression, because photons are random and the probability of multiple photon coincidences is strictly a function of the count rate and the dead time constant.

So I will be merciful and reveal the integration of an infinite number of photon coincidences that Aurelien Moy derived from this Maclaurin-like series:

(https://probesoftware.com/smf/gallery/395_09_08_22_7_39_33.png)

Pretty cool, huh?  Please note that the six term and log expressions give essentially identical results as expected:

https://probesoftware.com/smf/index.php?topic=1466.msg11041#msg11041

This has been implemented in the Probe for EPMA software for several weeks now.

OK, the equation is now written in a nicer form, but it still does not accurately describe the necessary correction to the measured count rates.  Did you not look at my plots above?  I keep looking for a mistake in my spreadsheet, but I can’t find one.  (I’m not joking.)  If your model worked, then it should correct the ratios such that they plot along a horizontal line.

We just had a discussion about hypothesis testing.  You seem to be completely sold on your model.  Why are you not trying to probe it for flaws?  That task should be your primary focus.  In a sense, I’m doing your work for you.

I’ll probably post again in a few days with some more plots  that I hope will make my points more clear.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 10, 2022, 08:31:49 AM
...the failure of the linear equation at count rates exceeding several tens kcps is easily rationalized physically and is readily identifiable graphically; the slope of the corrected data steepens progressively with increasing count rate, indicating increasing underestimation of the correction as counted X-rays interfere increasingly with the counting of additional X-rays.  As far as I can tell – and maybe I’m totally wrong – the form of the Willis equation is largely ad hoc.  I have a hard time rationalizing the insertion of a (convergent) power series in the denominator of the correction function, N(N’).  For the “best” correction, why not just let the series converge?  The two-term and six-term equations should be abandoned.

Wow, you are almost there.  Let me see if I can help.  Please note the bolded text from your post above, as this statement of yours is actually correct.

The reason the *traditional* dead time expression is non-physical is because (as you say), it does not account for more than a single co-incident photon.  The Willis expression's extra "squared" term accounts for 2 coincident photons, and so on through the additional terms in the six term expression.

In other words, it is the Willis and six term expressions that are actually *more* physically realistic than the traditional single term expression, because photons are random and the probability of multiple photon coincidences is strictly a function of the count rate and the dead time constant.

So I will be merciful and reveal the integration of an infinite number of photon coincidences that Aurelien Moy derived from this Maclaurin-like series:

(https://probesoftware.com/smf/gallery/395_09_08_22_7_39_33.png)

Pretty cool, huh?  Please note that the six term and log expressions give essentially identical results as expected:

https://probesoftware.com/smf/index.php?topic=1466.msg11041#msg11041

This has been implemented in the Probe for EPMA software for several weeks now.

OK, the equation is now written in a nicer form, but it still does not accurately describe the necessary correction to the measured count rates.  Did you not look at my plots above?  I keep looking for a mistake in my spreadsheet, but I can’t find one.  (I’m not joking.)  If your model worked, then it should correct the ratios such that they plot along a horizontal line.

We just had a discussion about hypothesis testing.  You seem to be completely sold on your model.  Why are you not trying to probe it for flaws?  That task should be your primary focus.  In a sense, I’m doing your work for you.

I’ll probably post again in a few days with some more plots  that I hope will make my points more clear.

You're funny. I didn't realize that "nicer form" was an important scientific criterion.   :)   What's nice about it is that it actually captures the situation of multiple photon coincidence. As you stated yourself: "the two-term and six-term equations may appear to provide better fits than the linear equation over a very wide range of count rates"!  I personally would call that *more physically realistic* simply because it agrees with the empirical data!  Maybe you've heard about scientific models?

Yes, you mentioned science, and I responded that I know a little bit about testing hypotheses (you can look Donovan up on Google Scholar!). In fact, hypothesis testing is exactly how we arrived at these new dead time correction expressions. We have performed (and are continuing to make) k-ratio measurements over a wide range of beam currents, for a number of emission lines, on a number of primary and secondary standards.

And what we saw (please read from the beginning of the topic to see our hypotheses evolve over time, because you know, science...) was that the traditional expression breaks down (as you have already admitted) at count rates attainable at even moderate beam currents.  So after examining alternative expressions in the literature (Willis, 1993), we realized they could be further extended (six term) and eventually integrated into a simple logarithmic expression (above).

https://probesoftware.com/smf/index.php?topic=1466.0

I don't know what you're doing wrong with your evaluations, as these new (six term and log) expressions are accurately correcting for count rates up to 400K cps.  OK, here's a thought: I suspect that you're using a dead time constant calibrated for the traditional method and trying to apply the same value to these more precise expressions.  As pointed out in this post, using a dead time constant calibrated with the traditional expression will cause one to overestimate the actual dead time constant:

https://probesoftware.com/smf/index.php?topic=1466.msg11013#msg11013

Maybe you should try the constant k-ratio method with these newer expressions and you'll see exactly where you're going wrong?  Plus, as mentioned in the constant k-ratio method topic, you can use the same k-ratio data set not only to find the correct dead time constants, but also to test your picoammeter accuracy/linearity, and finally to check simultaneous k-ratios to be sure your spectrometers/crystals are properly aligned.

It's a very intuitive method, and clearly explained so you can easily follow it. The pdf procedure is attached to this post below.

To summarize: both the six term and the log expression give essentially the same results as expected because they correctly handle cases of multiple photon coincidence at higher count rates.  As you admitted in your previous post, the traditional dead time expression does not handle these high count rate situations.

I'm not sure why you can't grasp this basic physical concept of multiple photon coincidence. Nevertheless I'm certainly not asking you to do any work for me!  That's on you!    ;D

Instead I can only thank you for your comments and criticisms as they have very helpfully propelled our manuscript forward.
Title: Re: An alternate means of calculating detector dead time
Post by: sem-geologist on August 11, 2022, 08:08:21 AM
For starters, I will say you are both wrong  8). I said in another post that this log equation is probably one of the most important advancements (I imagined it would do the right thing), but now, after clearly seeing it in this nice form (yes, a mathematical formula is much nicer to read than someone's code, especially code written in VB; were it in Julia, maybe it could be argued to be readable enough), I have changed my mind. I am starting to be very skeptical.

This problem has been known for ages, as proportional counters are used not only in EPMA, yet no one has come up with a working equation. Could it be useful? Maybe yes, but it needs to be thoroughly tested before being released to the public, and the nomenclature of things in this model needs to be changed, as it currently creates confusion.

Let's start with a definition of what the dead time is (which should have been given at the very beginning of all the posts proposing new methods). If we can't define what it is, then all the rushed calibration efforts are doomed.

I could talk in electronics and signal-processing terms (and I probably will when I find time to continue the thread about counting pipelines), but (from my grim experience) I will instead explain this with the absurd fictional story below, to be sure everyone will understand. (Initially it was dark and gruesome, with deaths (how do you tell a story about dead time without any deaths? Huh?), but I replaced those with some other folk-removal mechanics, like trap doors that send folks down a tube, as in a water amusement park - nothing lethal; no one dies in this story, OK?)

I put my story into a quote, in case someone wants to skip my badly written story.
Quote
In the city of Opium-Sco-Micro, on some very large hills, there were a few gang clubs (there was the "EDS gang", the "WDS gang", and also other gangs which this story will omit). The clubs wanted to keep everything under control and wanted to know exactly how many people came into the club. And so the club entrances have guards armed not with guns but with a remote button, which controls a trap door in the ground under the entry; when activated, it sends the unfortunate folk(s) on a wheeeeeeeeeeeee slide-trip down a water tube to the closest river.

The guard stands perpendicular to the line of folks trying to enter the club. His field of view covers a few meters of folks approaching, the crossing of the entrance, and a few meters inside the club. Unfortunately, guards are not very capable of fast counting, nor do they have perfect sight. Thus there is a sign above the entry saying that people should keep at least a 2 meter distance (or more) in the line if they don't want to be flushed to the river for trying to enter the club without keeping the required distance. Folks arrive at random; however, none of them know how to read, and all of them move at the same constant speed.

How does that work? This is where EDS gang guards and WDS gang guards differ. An EDS gang guard would flush anyone who does not keep those 2 meters from the person in front, even if that person is just a folk who is at that moment being flushed. A WDS gang guard would care only about the last folk who got in, and would measure the 2 meters from that last admitted folk, ignoring flushed folks in between.

As more folks come at random, sometimes two people happen to walk side by side (a row), and the guard, having bad sight, is unable to distinguish whether that is one person or two, and takes the row for a single person (a pile-up). So, if no one is in front of them, both are let in as a single person and the guard notes 1 person passing into the club; otherwise both are flushed to the river.

Now, if a moderately large crowd of folks wanting to enter the club arrives, more combinations appear in which 2, 3, or more folks are side by side, and the bad-sighted guard is unable to identify whether it is a single person or more (oh, these sneaky bastards). If a very large crowd arrives, in which no one keeps the distance, the EDS guard lets in the first row (and counts 1), and then keeps the trap door open, sending the rest of the crowd down to the river (EDS dead time = 100%). Unlike the EDS guards, the WDS gang club guard lets folks/rows in every 2 meters and flushes everyone in between (non-extendible dead time).

Everyone knows that you should not joke with the EDS gang (they have "extendible dead time"), and so people should not be provoked into coming in too-large crowds (keep your dead time below 25%, they say).

It seems that guards from the EDS and WDS gangs flush people a bit differently, and they also count a bit differently.
The WDS guard has a nice counting system in which he pushes a button to add +1 to the total count, whereas the EDS gang has to replace (reset) its guards every nth minute, as they can only count in their heads up to 100 (or some other settable low number); while the guards are being replaced, the trap door is automatically open, so no one can enter - anyone who tries just falls into the trap and is flushed to the river (an additional part of EDS dead time at the "charge-sensitive preamplifier" level, a dead time which WDS does not have). For some internal reason the gang announces that the club can only be entered during a 10 minute window (or 5, or 3, or 7, or... you know); the entrance is shut before and gets completely shut after. Interestingly, the administration would like to know how many people wanted to enter the club in those 10, 5, 3, 7, or however many minutes, and thus is not so much interested in the total number of admitted folks as in the rate of folks coming to the club (e.g., when "Asbestos-Y" was visiting the club, 200.4 folks per minute were entering, but when "Rome Burned & ANSI-111" came to present their new album "Teflon 3", the rate was 216.43 folks per minute); knowing the speed of the folks and the declared "keep distance" spacing, they can calculate that. At least, such was the initial idea.

Anyway, at some point the WDS gang got a bit confused.
Someone noticed that last night there must have been more people inside the club than were counted at the entrance: there were drinks for one thousand people, but all the bottles and barrels emptied quickly, as if there had been two thousand. Some gang members would say, "Maybe our guards have a bad sense of how long those `2 meters` are. You know, I hear that the WDS gang in another city requires only a 1 meter distance - we are probably more affected by this problem because our requirement is stricter." Another gang member, fluent in statistics and physics, would then say: "Maybe the problem is that the guard is unable to judge those 2 meters correctly; we should calibrate his '2 meters' against the number of drinks consumed at the club. Let's do an experiment, moving our club from city to city with the same DJs to get different-sized crowds; from these results we will get a regression line pointing to the correct size of the guard's `2 meters`."

After a few attempts it would become obvious that the results are a bit weird. It would be noticed that the calculations show this guard's "2 meters" shrinking and growing depending on the size of the group. Someone would say, "But at least you see! - all the results indicate that it is inaccurate; in some cases it is less/more than 2 meters. We should at once send a warning to all the other gangs in other cities that they should calibrate their guards' sight and ability to judge distance." A few gang members reported that they think the 2 meters shrinks depending on which band or DJ was sound-checking that day. Some folks thought this was nonsense, as the guards are deaf and that should not influence them at all.

Then an idea would appear: actually, we should not calibrate those "2 meters" against a single event, but against the proportions of two simultaneous happenings (i.e., compare the rate of club visitors when "Asbestos-Y" plays versus the rate when the band "There comes the riot" plays), and in different cities with different populations (that would be k-ratios versus beam current outside this story). After some more investigative experiments, someone would at last correctly notice that the size of the crowd probably interferes with the guard's perception of the "2 meters" because rows form - "oh, those sneaky folks." First, someone took into account the possibility of 1 additional folk sneaking in; later someone enlarged it to a total of 6; but the latest iteration of the equation decided not to discriminate and replaced that with a natural logarithm, to account for rows with an infinite number of folks. The equation itself is then used to find how many multi-person rows would appear depending on the size of the crowd, by tinkering with the "constant" of "2 meters"; if it fits the found size of "2 meters", then it is declared correct. Some voices also raised the idea that the guards go mental with age, and that their "2 meters" grows (or shrinks) as they get older.

This new finding, and the practice of such calibration, would hastily be spread to other gang clubs across countries and cities... while overlooking (or absolutely ignoring) how the guards actually keep those 2 meters. That is, the EDS club guard has three lines drawn on the floor (with very precisely measured distances): the main line directly at the entrance with the trap door to flush the folk(s), one line 2 meters inside the club from the main line, and another 2 meters outside the club. The guard opens the trap the moment a folk (or a row with sneaky folks) passes the main line, and closes it only when both areas (2 m before and 2 m after the entrance) are clear.

The WDS gang club guard has a simplified system with only two lines drawn on the floor: the main line with the flushing trap door is under the entrance, and the other line is 2 meters further inside the club. There is no third line in front of the club. If there are no folks between these two lines, the flushing trap stays closed (a folk can pass).

What the gang statisticians and physicists missed is that the observed discrepancy in the folk count rate cannot depend only on the distance the folks need to keep between them (2 meters); it also depends on the poor eyesight of the guards and on their delay in pushing the button after the first folk crosses the line. They need to delay opening the trap door by some milliseconds, or else the first folk in line would be caught and flushed to the river, and absolutely no one would be able to enter the club. Because the guard has poor eyesight and some delay in reacting and pushing the button, those multiple (piled-up) sneaky folks are able to enter the club while being counted as a single person. The equation did not account for the delay and the resolution of the guard's eyesight, and thus the resulting calibration and formula do not represent what they claim to represent. There is also a risk that it will not work correctly if the guards change the required distance between folks (settable dead time). The "2 meters" in this story corresponds to the dead time constant outside the story. Taking into consideration the constant walking speed of all the folks, the dead time is the fraction of time during which the trap door was open, preventing anyone from entering the club. Let's say the invented formula can predict the real rate of folks trying to get into the club if calibrated "correctly"; even then, the "calibrated value" has nothing in common with the actual distance between the lines drawn at the club entrance, and naturally it should be called something else.

So, all of your formulas (Brian, yours too) try to calculate something while excluding a crucial value (a variable in the case of EDS systems, a static value in WDS systems) - the shaping time of the shaping amplifier, on which the result strongly depends and which is completely independent of the set(table) dead time. Make the formulas work with the preset, known, and declared dead time constants of the instrument (physically measured and precisely tweaked with an oscilloscope in the factory by real engineers); please stop treating instrument producers like some kind of Nigerian scam gang - that is not serious. If these formulas are left for publication as is, please consider replacing the tau letters with something else and stop calling it the dead time constant, as in its current form it is not that at all. I propose calling it "p" - the factor of sneaked-in pile-ups at fixed dead time. I would also be interested in your formula if tau were replaced by the product of the real tau (the settable, OEM-declared dead time) and an experimentally determinable "p" - the factor of sneaked-in pile-ups, which depends on the shaping time. That, I think, would be reasonable, because not everyone is able to open covers, take note of the relevant chips and electronic components, find and read the corresponding datasheets to get the pulse shaping time correctly, or measure it directly with an oscilloscope (which, indeed, is not so hard to do on Cameca SX probes).

So far I have seen no results using the proposed formulas with the dead time (the real hardware dead time) set to anything other than 3 µs. Try your formula with the hardware set to 1 µs, 5 µs, 7 µs, 11 µs... If it works with all of those, then the formula can be considered closer to correct. Oh no - you need to redo the determination of the factor when the hardware dead time changes? You see, that is the reason why tau should be split into two variables: a settable, fully controllable dead time variable (tau) and the proposed constant "p" - so that the formula would work with any hardware settings without needing recalibration. But you are probably not interested in that, as PfS is currently limited to a single hardware dead time per detector and crystal (as far as I can track the advancements and configuration file formats of PfS; please correct me if I am wrong). I even find that, at the moment, my strategy of not crossing 10 kcps (where the simple dead time correction works reasonably) and using Cameca PeakSight gives me more freedom. In PeakSight I can switch between the hardware dead time constants as needed (e.g., if I need better PHA precision because I want to use differential mode, I switch to 7 µs; if I need more throughput and don't care about PHA precision (integral mode), I switch to a 1 µs DT (+0.3 µs in all modes to account for the additional dead time of the "pulse-hold" chips)).

I tried this log formula just for fun, and...
I get negative values if I enter 198,000 cps at 3.3 µs into the log formula. It also greatly underestimates count rates if I change the hardware DT to 1 µs.

Going above 150 kcps (e.g., the unrealistic 400 kcps mentioned) is not very wise, and knowing how the PHA looks above that point (and after seeing real oscilloscope pictures, which give a much better view of what is going on), I really advise not going there. Actually, I strongly urge staying away from anything >100 kcps, as the counting gets really complicated and is hard to model (this oversimplified log formula certainly can't account for count losses from baseline shifting of the shaping amplifier output). Even worse, this calibration process calibrates the factor against a region that should not be used, thereby wrongly biasing the middle, juicy region of the detector (10-100 kcps, or 10-70 kcps at low pressure).


BTW, this is not a Cameca problem (please stop comparing dead time constants between JEOL and Cameca; it is comparing apples and oranges. JEOL is forced to use shorter constants because its WDS electronics have more work to do, needing to push background measurements through the PHA, whereas Cameca instruments have the luxury of not caring about background noise, since it is cleverly cut out and does not block the pipeline. So Cameca engineers could put in a somewhat longer shaping amplifier for more precise PHA while producing similar throughput to JEOL instruments). So pile-up is a general problem of all detectors going BIG (or really, severely, absolutely too big. Why? Because it looks better on paper and sells better? As with everything.). This problem appeared on WDS due to large diffracting crystals, but EDS on the current market is no better at all. We are looking for a new SEM configuration. Jeez, sales people look at me like I just came out of a cave when I ask for an EDS SDD with 10 mm² or even a smaller active area. If I do any EDS standard-based analysis, the dead time should not be more than 2% (as pile-ups will kick in and ruin the analysis), and I still want the BSE image nice and smooth at the same time, not static noise (which is what I would get by using very low beam currents to keep these big EDS SDDs at 2% DT). And this requirement is hard to meet with the minimum offered 30 mm²... Well, they initially offer 70 mm² or 100 mm² (or even more). What the heck? Counting electronics have not improved nearly enough to move even to 30 mm² for precise quantitative analysis. Their main argument: "You can produce these nice colorful pictures in an instant - you will be more productive." Yeah, I guess I will be more productive... producing garbage. OK. ME: "What about pile-ups?" THEY: "We have some built-in correction..." ME: "Which I guess works nicely for an Fe metal alloy..." THEY: "How did you know this very confidential information..." ME: "What about your pile-up correction for this REE mineral?"...
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 11, 2022, 08:49:39 AM
Ha, OK. Whatever.   ::)    Everybody's a contrarian apparently.

You mention an observed count rate of 198K cps and a dead time of 3.3 usec and getting a negative count rate. Of course you do!  Talk about a "non physical" situation! At 3.3 usec, an observed count rate over 190K cps implies an actual count rate of over 10^7 cps!    :o
 
User specified dead time constant in usec is: 3.3
Column headings indicates number of Taylor expansion series terms (nt=log)
obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre   
  190000   509383.3      0.373    1076881   0.1764355    7272173   2.612699E-02   1.374501E+07   0.0138232   
  191000   516635.1     0.3697    1116561   0.171061   1.073698E+07   1.778898E-02   3.869025E+07   4.936644E-03   
  192000   524017.4     0.3664    1158892   0.1656756   2.043884E+07   9.393877E-03   -4.764755E+07   -4.029588E-03   
  193000     531534     0.3631    1204149   0.1602792   2.050771E+08   9.411094E-04   -1.47588E+07   -1.307694E-02   
  194000   539188.4     0.3598    1252647   0.154872   -2.562786E+07   -7.569886E-03   -8736024   -0.0222069   
  195000   546984.6     0.3565    1304750   0.1494539   -1.208202E+07   -1.613968E-02   -6206045   -3.142098E-02   
  196000   554926.4     0.3532    1360876   0.1440249   -7913163   -2.476886E-02   -4813271   -4.072075E-02   
  197000     563018     0.3499    1421510   0.138585   -5887980   -3.345799E-02   -3931522   -5.010782E-02   
  198000   571263.7     0.3466    1487221   0.1331343   -4691090   -4.220768E-02   -3323049   -5.958384E-02   
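For anyone who wants to check these columns, here is a short sketch of the four expressions (the n-term forms use the truncated series 1 − Σ(τN')^k/k in the denominator, the log form uses 1 + ln(1 − τN'), with τ in seconds); running it with 190000 observed cps reproduces the first row of the table to within rounding:

Code: [Select]
# Dead time expressions tabulated above: tau in seconds, n_obs in observed cps.
import math

def predicted(n_obs, tau, nterms=None):
    x = tau * n_obs
    if nterms is None:                           # logarithmic (integrated) form
        denom = 1.0 + math.log(1.0 - x)
    else:                                        # 1-, 2-, or 6-term series form
        denom = 1.0 - sum(x**k / k for k in range(1, nterms + 1))
    return n_obs / denom                         # predicted true count rate

for label, nt in (("1t", 1), ("2t", 2), ("6t", 6), ("nt", None)):
    pred = predicted(190000.0, 3.3e-6, nt)
    print(label, round(pred, 1), round(190000.0 / pred, 7))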

Jesus, Mary and Joseph and the wee donkey they rode in on... can we get back to reality?  Any physically existing proportional detector is completely paralyzed at actual count rates above 400 to 500K cps. So please just stop with the nonsense extrapolations.   :)

Seriously, I mean did you not know that at sufficiently high count rates and sufficiently large dead times, even the *traditional* dead time expression will produce negative count rates?  I posted this weeks ago:

https://probesoftware.com/smf/index.php?topic=1466.msg11032#msg11032

Hint: take a look at the green line in the plot!   

Anette and I (and our other co-authors) are merely looking at the empirical data from *both* JEOL and Cameca instruments and finding that these new dead time expressions allow us to obtain high accuracy analyses at count rates that were previously unattainable using the traditional dead time expression (and of course they all give the same count rates at low count rates). 

In my book that's a good thing.  But you guys can keep on doing whatever it is that you do. Now maybe you coded something wrong, or maybe, as I suspect Brian is doing, you're using the same dead time constant for the different expressions:

https://probesoftware.com/smf/index.php?topic=1466.msg11013#msg11013

The dead time constant *must* change if one is accounting for multiple photon coincidence! In fact, the dead time constant *decreases* with the more precise expressions. Here's the code we are using for the logarithmic dead time expression in case it helps you:

Code: [Select]
' Logarithmic expression (from Moy)
' cps! is the observed count rate; dtime! is the dead time constant (in seconds here,
' so that dtime! * cps! is dimensionless); cps! is replaced by the corrected rate.
' (The Log() argument is guarded, but the denominator itself becomes negative once
' dtime! * cps! exceeds 1 - 1/e.)
If DeadTimeCorrectionType% = 4 Then
    If dtime! * cps! < 1# Then
        temp# = 1# + Log(1 - dtime! * cps!)
        If temp# <> 0# Then cps! = cps! / temp#
    End If
End If

Finally you mention these new expressions should be tested, and that is exactly what we have been doing in this topic:

https://probesoftware.com/smf/index.php?topic=1466.0

There is copious data there for anyone who wants to review it, and we further encourage additional testing by everyone.  In Probe for EPMA this is easy because all four dead time correction expressions are available for selection with a click of the mouse:

(https://probesoftware.com/smf/gallery/395_09_08_22_11_11_07.png)

But hey, if nothing else, I'm sure this is all very entertaining for everyone else.    ;D
Title: Re: An alternate means of calculating detector dead time
Post by: sem-geologist on August 11, 2022, 03:57:30 PM
In post which You had linked there is this:
...
We are actually using the logarithmic equation now as it works at even higher input count rates (4000K cps anyone?).
...
You asked "anyone" and I gave You example with supposedly 2.4Mcps input count measurement (950nA on SXFiveFE, Cr Ka on LPET high pressure P10, partly PHA shift corrected for this experiment). Had you not expected someone would come with 4Mcps like we live in caves or what? Why Would You ask in a first place? If You Want I can go to our SX100 and try putting 1.5µA to get closer to those 4Mcps and report the results.


This is my Python code, which does functionally exactly the same thing as your VB code (without checking for negative values):
Code: [Select]
import numpy as np

def aurelien(raw_cps, dt):
    dt = dt / 1000000  # because I want to enter the dead time in µs
    in_cps = raw_cps / (1 + np.log(1 - raw_cps * dt))
    return in_cps

So at first I just entered my hardware dead time. If it is set to 3 µs, the real dead time can only be a bit larger (due to signaling delays on the traces between chips - e.g., the pulse-hold chip). If your equation and calibration break that, the method simply exposes itself as false (e.g., a dead time of 2.9 µs while the hardware is set to 3 µs), or it does not do what it claims to do...
As I said, call it something different - for example a "factor". Dead time constants are constants, and constants do not change - that is why they are called "constants" in the first place. You can't calibrate a constant, because if its value can be tweaked or influenced by time or setup, then it is not a constant in the first place but a factor or variable. With a 16 MHz digital clock, 1 µs is 16 clock cycles, 2 µs is 32 clock cycles, 3 µs is 48 clock cycles, and so on; it can't randomly drift to 15 or 17 cycles. If you imply that the digital clocks in all our probes can suddenly decalibrate themselves to such a bizarre degree - on machines maintained at stable humidity and temperature, while other technology full of digital clocks roams around outside and shows no such signs - then something is wrong. If digital clocks were failing at the unprecedented rate proposed for all our probes, we would see most 20-year-old ships sinking, planes exploding in the air as their turbines go boom from desynchronized clocks, power plants exploding, satellites falling... but we don't see any of these things, because digital clocks are one of those simple and resilient technological miracles and cornerstones of our technological age. Proposing dead times lower than the set hardware clock cycles is simply unrealistic in any acceptable way whatsoever, since on top of those set clock cycles there should be some additional delay in signaling between chips.

That said, the 3 µs stays there (the red line), and the final dead time (unless we stop calling it that) can't go below that value - it is counting electronics, not a damn time-traveling machine! What about additional hardware time? Cameca PeakSight by default adds 0.3 µs to any set dead time (this can be set to other values by the user) - the pulse-hold chip takes 700 ns, and I had always thought that this additional 0.3 µs was the difference needed to round up and align the counting to the digital clock at 1 µs boundaries. That is why, in the end, I entered 3.3 µs into the equation and got the negative number.
This whole endeavor was very beneficial for my understanding, as I finally realized that it is the pulse-hold chip that decides whether the signal pipeline needs to be blocked from incoming pulses. Thus, in reality, adding +0.3 or +(0.7 + 0.3) to the set integer blanking time constant of 3 µs has no basis, as the pulse holding in the chip works in parallel with the blocking of the pipeline from the incoming signal.

OK, in that case, increasing the dead time to 3.035 µs makes these 198 kcps of output give the expected result (about 2430 kcps of input). So then 3 µs is the blanking time, and 35 ns is the signal travel time from the pulse-hold chip to the FPGA that blocks the pipeline (this is a huge simplification skipping some steps) - considering LVTTL, this timing initially looked a bit short to me, but with a well designed system such a delay, or an even smaller digital signaling delay, is achievable (the new WDS board is more compact than the older board; the traces are definitely shorter and narrower - less capacitance - faster switching time). However, these 35 ns could also represent half a clock cycle. The arrival of a pulse can be read by the FPGA only with clock resolution (the period at 16 MHz is 0.0625 µs), so before the FPGA triggers the 3 µs pipeline blockade, some additional random fraction of a clock cycle must pass, averaging about half a cycle.  We also don't know at what time resolution the FPGA works, as it can multiply the input clock frequency for internal use. Anyway, we now know that, on top of the integer part, the other processing takes 0.035 µs, and the same addition should apply to the other integer time constants. It's time to test this equation with a dead time constant of 1 µs (+0.035).

The measured output count rate was 315 kcps with only the dead time changed to 1 µs. The input count rate stays exactly the same, as the conditions are identical.
Calling the log function with 1.035 µs gave me about 520 kcps of input counts.
OK, let's find which dead time value would satisfy the log formula to give the expected input count rate...
It looks like 1.845 µs would work - which makes no physical sense at all.
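For what it's worth, that search can be done in closed form by inverting the log formula: from N = N'/(1 + ln(1 − τN')) it follows that τ = (1 − exp(N'/N − 1))/N'. A small sketch, using the ~2.43 Mcps input rate assumed above:

Code: [Select]
# Dead time that makes the log formula map a given output rate onto a given input rate.
import math

def tau_from_rates(n_obs, n_true):
    """Both rates in cps; returns tau in seconds."""
    return (1.0 - math.exp(n_obs / n_true - 1.0)) / n_obs

print(tau_from_rates(198000.0, 2.43e6) * 1e6, "us")   # ~3.035 us at the 3 us hardware setting
print(tau_from_rates(315000.0, 2.43e6) * 1e6, "us")   # ~1.845 us at the 1 us hardware setting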

But what should I expect from a model that completely ignores how the hardware works? You say it is based on empirical data? Maybe you have too small a set of empirical data, and thus it fails miserably at different set dead times.

But then it gets really funny, and it explains why the internal workings of the hardware are not considered in this proposed formula:
Jesus, Mary and Joseph and the wee donkey they rode in on... can we get back to reality?  Any physically existing proportional detector is completely paralyzed at actual count rates above 400 to 500K cps. So please just stop with the nonsense extrapolations.   :)

Are these input or output rates? If they are output rates, 198 kcps is well below them, so where is the extrapolation?
In any case, that reply makes no sense, because on Cameca instruments the maximum output rate you can squeeze out of the counting system is just a bit above 320 kcps, at input count rates of more than 2.5 Mcps and with the hardware dead time constant set to 1 µs. And if JEOL's dead time constant is not settable and is fixed at 2 µs, those 400 kcps or 500 kcps output count rates are not achievable in any way. They would be achievable only if the dead time were settable below 1 µs. Again, I am talking about real measurements, not the extrapolations or interpolations you try to smear me with.

But wait a moment - after rereading more of the post, I realized you are indeed talking about input count rate limitations of the proportional counters themselves. Then why bother asking "4 Mcps anyone?" in the linked post in the first place? Is there any proof for that far-fetched claim of a limitation? I am curious, because I have direct physical proof of the contrary - real oscilloscope measurements/recordings, which directly demonstrate that proportional counters (at least those mounted on Cameca WDS) have no paralyzable behavior at all, all the way up to 2.5 Mcps. The output of the shaping amplifier gets a bit mangled from being overcrowded at higher count rates, but I have not noticed any paralyzable behavior of the proportional counter itself. With the atmospheric influence of pressure and humidity dealt with, these are incredible devices - with better counting electronics they would leave the performance of SDDs in the dust. Even for an SDD, the detector chip and charge-sensitive preamplifier have no paralyzable behavior! The amount of X-rays generated on our e-beam instruments is insignificant for producing any real self-inflicted feedback cutoff of pulses in those detectors. It is the EDS pulse counting system that introduces such behavior, and WDS has no such paralyzable mode inside its counting system (the output count rate increases with increasing beam current and never starts decreasing; OK, it increases at a slower and slower pace at higher beam currents, but the curve never reverses, unless in diff mode).

I suggest reading the story in my previous post to understand the differences; I deliberately included a personification of the EDS system there to demonstrate clearly the functional difference from WDS.

Under normal circumstances, the streamer and Geiger regions are totally out of reach for a high-pressure counter (a self-quenched streamer can introduce some dead time, and Geiger events can certainly block detection for hundreds of ms). A low-pressure counter is a slightly different story, and I would probably rather not find out, but if the bias on those is not pushed high, the amount of generated X-rays reaching such a counter is just a scratch and won't affect the performance at all. So where did you get this misinformation about the paralyzing behavior of proportional counters, again?

It would not hurt to get familiar with the hardware before creating math models for the behavior of that hardware; otherwise you have a model far detached from physical reality. Modeling pile-ups from the dead time constant is a poor choice, when it is the pulse shaping time that decides how small the time difference needs to be for sneaky pile-up peaks to be included by the pulse-hold chip in the amplitude measurement. The dead time does not influence that at all, as it is triggered only after the pulse-hold chip has read in the amplitude.

It is often misunderstood: dead time is introduced into counting electronics not for pile-up correction, but mainly to filter out overlapping pulses (which the pulse-hold chip can often still differentiate) for more precise amplitude estimation - which for EDS is crucial. In WDS, dead time without extendable behavior gives very little to no benefit (thus you get the PHA shifts but keep the throughput) and is rather an excuse to cut board costs (shared ADC, buses, etc.).

Any model is normally supported by some empirical data, but is that enough to say that such a math model is true? Especially when it fails in some normal and edge cases and breaks very basic laws? A model with numerous anomalies?
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 11, 2022, 05:14:40 PM
So at first I just entered my hardware dead time. If it is set to 3 µs, the real dead time can only be a bit larger (due to signaling delays on the traces between chips - e.g., the pulse-hold chip). If your equation and calibration break that, the method simply exposes itself as false (e.g., a dead time of 2.9 µs while the hardware is set to 3 µs), or it does not do what it claims to do...

I've already explained that you are assuming that the hardware calibration is accurate.  It may not be.

As I said, call it something different - for example a "factor". Dead time constants are constants, and constants do not change - that is why they are called "constants" in the first place. You can't calibrate a constant, because if its value can be tweaked or influenced by time or setup, then it is not a constant in the first place but a factor or variable.

And once again we fall back to the nomenclature argument.

Clearly it's a constant in the equation, but equally clearly it depends on how the constant is calibrated.  If one assumes that there are zero multiple coincident photons, then one will obtain one constant, but if one does not assume there are zero multiple coincident photons, then one will obtain a different constant. At sufficiently high count rates of course.

But what should I expect from a model that completely ignores how the hardware works? You say it is based on empirical data? Maybe you have too small a set of empirical data, and thus it fails miserably at different set dead times.

Yes, we are using the same equation for both JEOL and Cameca hardware, and the constant k-ratio data from both instruments seem to support this hypothesis.  So if we are correcting for multiple photon coincidence (and that is all we are claiming to correct for), then the hardware details should not matter.  As for detector behavior at even higher count rates, I cannot comment on that.  But if we have extended our useful quantitative count rates from tens of thousands of cps to hundreds of thousands of cps, then I think that is a useful improvement.

I really think you should think about this carefully because it is clear to me that you are missing some important concepts.  For example, the traditional dead time expression is non physical because it does not account for multiple photon coincidence.  Wouldn't it be a good idea to deal with these multiple photon coincidence events since they are entirely predictable (at sufficiently high count rates and sufficiently large dead time constants)?

Perhaps we need to go back to the beginning and ask: do you agree that we should (ideally) obtain the same k-ratio over a range of count rates (low to high beam currents)?  Please answer this question before we proceed with any further discussion.
Title: Re: An alternate means of calculating detector dead time
Post by: sem-geologist on August 12, 2022, 04:12:02 AM
Perhaps we need to go back to the beginning and ask: do you agree that we should (ideally) obtain the same k-ratio over a range of count rates (low to high beam currents)?  Please answer this question before we proceed with any further discussion.

You already know my answer from the other post about matrix correction vs matrix-matched standards. And to repeat that answer: it is absolutely, certainly yes! And yes, I agree both need to be considered: 1) the range of count rates and 2) the range of beam currents, because, e.g., PET will have a different pulse density than LPET at the same beam current. Well, I am even stricter on this, as I would add 3) the same spectrometer (the same proportional counter) with different types of crystals, because the gas pressure of the counter and the take-off angle (including any probable very small tilt of the sample) are then the same.

Don't get me wrong: I am also very aware that the classical dead time correction equation cannot provide a satisfactory estimate of input count rates from observed output count rates, and I absolutely agree that there is a need for better equations.

You are not the first in this field; look, this summarizes some previous attempts nicely: https://doi.org/10.1016/j.net.2018.06.014 (https://doi.org/10.1016/j.net.2018.06.014).
...And your proposed equation is just another non-universal approach, as it fails to address the pulse shaping time, although it has the potential to work satisfactorily for EPMA with some restrictions. That is not bad, and it is certainly much better than continuing to use the classical simple form of the correction.

You talk as if I were clueless about coincidences.
Listen, I was into this before it was even cool!
I saw the need to account for coincidences well before you guys started working on it here.

Look, I even made a Monte Carlo simulation to address this more than a year ago and found that it predicts the observed counts very well (albeit it has 0 downloads; no one ever cared to check it out).

The modeled values agree well with the spectrometer count rate capping at different set dead time values.
E.g., with the dead time set at 1 µs the count rate caps at ~300 kcps, while at 3 µs it caps at about 200 kcps, which this modeling predicts very well.

The only thing is that I failed to make a working equation that would take the raw output counts and return correctly predicted input counts. My attempts to use automatic equation fitting on those MC-generated datasets failed (or I am simply really bad at function fitting).

At that time I was lacking some knowledge and, in particular, important key points, which I have gathered since and which are crucial, as they simplify things a lot. I have some hope of finishing my work in the future and constructing a completely transparent, universal equation for proportional counters and their counting.

So, on to the crucial points I mentioned:

So, you call it multiple photon coincidence, as if that were happening massively in the detector, while I call it pulse pile-up, which happens not at the photon level but at the electronic signal level.

Have a MEME:
(https://i.imgflip.com/6ppsxn.jpg)

This problem of pulse pile-up is a signal-processing problem, not a problem of the proportional counters. Whether two (or more) photons hit the detector with a time difference of 1 ns or 400 ns (which in the first case would look like a pile-up in the raw voltage on the counter wire and in the second like two very well separated pulses) does not matter at all while the shaping amplifier (SA) is set to a 500 ns shaping time - in both cases your counting electronics will see just a single piled-up peak coming from the SA, unless the time difference between the photon arrivals is larger than that shaping time. Only if the shaping amplifier had a shorter shaping time than half the pulse width of the raw proportional counter charge-collection signal would you then need to care about photon coincidence (but seriously, a shaping amplifier in such a configuration would be not just completely useless, but a signal-distorting thing).
Thus, please stop repeating this "multiple photon coincidence" mantra, because what you should be saying is that your log equation addresses the pulse pile-up problem. But it does not actually just solve it; along the way it masquerades several phenomena as a single, simply understood dead time constant, blaming the producers and spitting in the face of physics (electrical engineering in particular) by indirectly stating that digital clocks go completely nuts with time - this is what I am not OK with at all.

Look, I salute you for getting better constant k-ratios across multiple count rates/beam currents and for coming up with a very elegant, simple formula. I am fine with the fact that this will probably be quickly adopted by many PfS users. But I am completely appalled by this masquerading of the dead time constant, where in reality it is a fusion of three things in disguise: the real dead time constant (a digital-clock-based constant), the probability of pile-ups (which depends on the density of events, i.e. the range of readable raw cps, and can change with the age of the detector wire), and the shaping time (a constant value). You call it "calibration", but what I really see is "masquerading" and blaming the original, unchangeable dead time constant.

Another thing: without changing the nomenclature of your "tau", you are stating that you can obtain the probability of coincidence from this dead time constant alone, which is nonsense, as the probability of pile-ups does not depend on the dead time at all. It depends on the shaping amplifier's shaping time and the density of pulses.
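To put a rough number on that: for random (Poisson) arrivals at true rate N, the probability that at least one more pulse arrives within the shaping time T after a given pulse is 1 − exp(−N·T), which contains only the shaping time and the pulse density - the blanking (dead) time appears nowhere. A tiny sketch (the 0.5 µs shaping time is just an assumed example value):

Code: [Select]
# Pile-up fraction for Poisson arrivals: depends on true rate and shaping time only.
import math

ts = 0.5e-6   # assumed shaping time in seconds (example value, not a measured one)
for n in (10e3, 50e3, 100e3, 500e3):       # true count rates in cps
    p = 1.0 - math.exp(-n * ts)
    print(f"{n/1e3:6.0f} kcps -> {100.0*p:5.2f} % of pulses piled up within {ts*1e6:.1f} us")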

Why am I so outraged by wrong nomenclature? Yes is yes, no is no, an apple is an apple, an orange is an orange. We keep science in a maintainable state (sometimes I start to doubt that, especially when seeing things like this) by calling things what they are. We normally try to speak the language of reason. If you suddenly start renaming things, or masquerading something else under the skin of previously well-known and well-defined nomenclature, that is no longer science – that is basically modern politics.
The problem I can see arising is that someone in the future will waste time because they read between the lines of your paper that dead time constants can change with aging. Someone who tries to improve X-ray counting even further and stumbles across your equation will have countless sleepless nights trying to understand what the heck is going on.

Some will pick up this nonsense (a fluid dead time constant) and, without deeper thought, will try to apply it in other metrological fields. I have seen countless times how errors spread like fire through dry grass, while brilliant, precise, accurate and well-thought-out work stays somewhere on the fringes. Sometimes the authors come to their senses and try to correct their initial error later, but guess what – often by that point absolutely no one cares, and everyone keeps spreading the initial error.

If you changed tau in the equation to some other letter (I have already proposed "p", from pile-ups; I also think "f", from "fused", or "c", from "coincidence", would fit) and explained: "Look folks, we came up with this elegant equation. We ditch the hitherto-used strict, deterministic dead time constant (we no longer care about it in this method; we have found we can do without that knowledge) – instead we determine this fused factor, which combines the dead time, the shaping time and the pile-up probability into a single neat value by a simple method. How cool is that? (BTW, we even managed to convince sem-geologist to agree that it actually is pretty cool!) The outstanding feature of our new method is that we can very easily calibrate this combined factor without even knowing the dead time and shaping time of the electronic components – you won't need to open the black box to calibrate it!" – if it were something like that, I would say "Godspeed with your publication!", but it is not.

BTW, I agree that with the current state of counting electronics it is not very wise to wander up to very high beam currents. I am actually stricter about it and would not go above ~100 kcps (high-pressure counters) or ~80 kcps (low-pressure counters) of raw counts in differential mode. In integral mode, however, there should be no problem going up to 1 Mcps (input) with a proper equation, which corresponds to about 305 kcps of raw counts at the 1 µs setting and about 195 kcps of raw counts at the 3 µs setting on Cameca instruments. Beyond that point the uncertainty starts to outgrow the precision benefit of the huge number of counts.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 12, 2022, 09:14:22 AM
(BTW, we even managed to convince sem-geologist to agree that that actually is pretty cool!).

SG: Great post, I very much enjoyed reading it.  Here is a response that I think you will also agree is "pretty cool": 

https://probesoftware.com/smf/index.php?topic=1466.msg11100#msg11100

 :)
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 14, 2022, 07:40:00 PM
So, all of your formulas (Brian, yours too) try to calculate something while excluding a crucial value (a variable in EDS systems, a static value in WDS systems): the shaping time of the shaping amplifier, on which the result strongly depends and which is completely independent of the set(table) dead time. Make the formulas work with the preset, known and declared dead time constants of the instrument (physically measured and precisely tweaked with an oscilloscope in the factory by real engineers); please stop treating the instrument manufacturers like some kind of Nigerian scam gang – that is not serious. If these formulas are kept for publication as-is, please consider replacing the tau letter with something else and please stop calling it the dead-time constant, as in its current form it is not that at all. I propose calling it "p" – the factor of sneaked-in pile-ups at fixed dead time. I would also be interested in your formula if tau were replaced by the product of the real tau (the settable, OEM-declared dead time) and an experimentally determined "p" – the factor of sneaked-in pile-ups, which depends on the shaping time. I think that would be reasonable, because not everyone is able to open the covers, note the relevant chips and electronic components, find and read the corresponding datasheets to get the pulse-shaping time correctly, or measure it directly with an oscilloscope (which, indeed, is not so hard to do on Cameca SX probes).

I get rather weary of your armchair criticisms.  You pontificate on various subjects, yet you provide no data or models.  If you have data or a model that sheds light on the correction for dead time or pulse pileup, then please present it.  A meme is not an adequate substitute.

On the instrument that I operate (JEOL JXA-8230), the required correction to the measured count rate is in the vicinity of 5% at 30 kcps.  Under these conditions, how significant is pulse pileup likely to be?  I have never advocated for application of the correction equation of Ruark and Brammer (1937) outside the region in which calculated ratios plot as an ostensibly linear function of measured count rate.
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 14, 2022, 07:47:25 PM
It’s clear that the linear correction equation, N(N’), of Ruark and Brammer (1937, Physical Review 52:322-324, eq. 2) as used by Heinrich et al. (1966) and many others is not perfectly accurate – I never claimed that it was.  It fails badly at high count rates and actually begins deviating from reality at any count rate greater than zero, yet it still provides a very reasonable approximation of the corrected count rate up to several tens of kcps.  The equation is pragmatically simple, and this is adequate justification for its use.  It does not need to be perfectly accurate to provide valuable results.  This is also true, for instance, when applying any of the modern matrix correction models.
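
For readers following along at home, here is a minimal sketch of the linear correction in the form used throughout this thread, N = N'/(1 − N'τ), with N' the measured rate and N the corrected rate; the τ value is simply the channel 1 regression result quoted further down, used here only to show the size of the correction at various count rates.

def linear_deadtime_correction(n_meas_cps, tau_s):
    """Ruark-Brammer-type linear correction: true rate from measured rate."""
    return n_meas_cps / (1.0 - n_meas_cps * tau_s)

tau = 1.44e-6   # s, illustrative value (channel 1 regression result quoted below)
for n_meas in (5e3, 30e3, 70e3, 150e3):
    n_true = linear_deadtime_correction(n_meas, tau)
    print(f"{n_meas/1e3:6.0f} kcps measured -> {n_true/1e3:7.1f} kcps corrected "
          f"({100.0 * (n_true - n_meas) / n_meas:4.1f} % correction)")

At 30 kcps this gives a correction of roughly 4-5 %, consistent with the figure quoted in the preceding post, while at 150 kcps the correction is already above 25 % – well outside the region where the linear form is claimed to hold.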

In order to determine the dead time constant using the linear model, a linear regression must be performed using data (ratios) that fall within the region of effectively linear correction.  The dead time constant may not be adjusted arbitrarily as was done in construction of the plot in the quote below.  When such an approach is taken, i.e., when it is applied improperly, the linear model can be made to behave poorly.  Notice the minimum that forms at about 60 nA (at whatever corrected count rate that produces):

So we saw in the previous post that the traditional dead time correction expression works pretty well in the plot above at the lowest beam currents, but starts to break down at around 40 to 60 nA, which on Ti metal is around 100K (actual) cps.

If we increase the dead time constant in an attempt to compensate for these high count rates we see this:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_07_33.png)

Once the dead time constant is determined by means of the linear equation within the region of effectively linear behavior, it sets a physical limit that cannot be violated either in the positive or negative direction on the plot.  The statement that the dead time constant needs to be adjusted to suit a particular model is patently false.  It is not a model-dependent quantity but instead constitutes an intrinsic property of the X-ray counter and pulse processing electronics.

Let me present once again a plot of ratios (uncorrected and corrected) calculated from data collected in my measurement set 1 for Si.  In addition to the data plotted as a function of measured count rate, I’ve also plotted the linear correction model, the Willis model, and the Donovan et al. six-term model (as functions of corrected count rate).  For the Donovan et al. model, I’ve also plotted (as a line) the result that is obtained by decreasing the dead time constant to 1.36 μs from 1.44 μs (the latter being the result from the linear regression), which appears to force the model to produce a reasonably accurate correction over a wide range of count rates:

(https://probesoftware.com/smf/gallery/381_14_08_22_7_26_17.png)

In order to examine behavior of the Donovan et al. six-term model more closely, I’ll define a new plotting variable to which I’ll simply refer as “delta.”  For my measurement set 1, it looks like this:

Δ(N11/N21) = (N11/N21)6-term – (N11/N21)linear

Since formation of the difference requires evaluation of the linear correction equation, the delta value should generally be applied within the region of the plot in which the uncorrected ratio, N’11/N’21, plots as an ostensibly linear function of the uncorrected count rate.  Below is a plot of delta for various values of the dead time constant within the linear region:

(https://probesoftware.com/smf/gallery/381_14_08_22_7_30_10.png)

Realistically, delta values should in fact always be positive relative to the linear correction.  This is the case if the dead time constant from the linear regression (1.44 μs) is used.  Unfortunately, use of this dead time constant leads to over-correction via the six-term model.  If instead the function is allowed to pivot downward by decreasing the value of the dead time constant so as to fit the dataset better as a whole, then delta must become negative immediately as count rate increases from zero.  This is not physically reasonable, as it would imply that fewer interactions occur between X-rays as the count rate increases and that a minimum is present – in this case – at ~40 kcps.  Note that, on the previous plot, which extends to higher count rate, the Donovan et al. model is seen to produce a corrected ratio that eventually passes through a maximum.
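
For anyone who wants to reproduce this “delta” diagnostic with their own data, here is a small sketch; correct_linear is the linear form discussed above, while correct_alt is deliberately left as a user-supplied callable standing in for whichever alternative model (six-term, logarithmic, ...) is being tested.  The toy_alt function below is a made-up placeholder, not the published Donovan et al. expression, and the count rates and τ values are illustrative only.

import numpy as np

def correct_linear(n_meas, tau):
    """Linear correction, N = N'/(1 - N'*tau)."""
    return n_meas / (1.0 - n_meas * tau)

def delta_ratio(n1_meas, n2_meas, tau1, tau2, correct_alt):
    """Delta = (N1/N2)_alt - (N1/N2)_linear for two simultaneously measured lines."""
    ratio_alt = correct_alt(n1_meas, tau1) / correct_alt(n2_meas, tau2)
    ratio_lin = correct_linear(n1_meas, tau1) / correct_linear(n2_meas, tau2)
    return ratio_alt - ratio_lin

# made-up stand-in for an alternative (multi-term) model -- NOT the published form
toy_alt = lambda n, tau: n / (1.0 - n * tau * (1.0 + 0.5 * n * tau))

n1 = np.linspace(5e2, 3e3, 6)   # measured rates on spectrometer 1, cps (illustrative)
n2 = 25.0 * n1                  # simultaneous rates on spectrometer 2, cps (illustrative)
print(delta_ratio(n1, n2, tau1=1.44e-6, tau2=1.07e-6, correct_alt=toy_alt))
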

I’ll now turn my attention to my measurement set 2, in which problems with the six-term model are much more obvious.  As I did when I posted my results initially, I’ve plotted results of the six-term model using the dead time constant determined in the linear regression (1.07 μs) and also using a dead time constant (1.19 μs) that allows the six-term model to fit the data better overall:

(https://probesoftware.com/smf/gallery/381_14_08_22_7_29_01.png)

I’ll now define a new “delta” for measurement set 2 as

Δ(N12/N22) = (N12/N22)6-term – (N12/N22)linear

Here is the plot of delta for various values of the dead time constant:

(https://probesoftware.com/smf/gallery/381_14_08_22_9_54_52.png)

On this plot, a negative deviation indicates over-correction of the channel 4 Si Kα count rate.  Note that varying the dead time constant for Si Kβ measured on channel 1 will produce little change in the plot, as the Si Kβ count rate requires only minimal correction.  Even if I use the dead time constant determined in the linear regression, the Donovan et al. model over-predicts the correction.  The situation gets worse as the dead time constant is adjusted so that the model fits the data better.  For τ2 = 1.19 μs, the six-term model intersects the 2σ envelope at Si Kβ count rate roughly equal to 2200 cps, which corresponds to measured Si Kα count rate around 60 kcps.  Obviously, the six-term model does not correct the data accurately.

As a final note, consider the plot in the quote below.  It illustrates a correction performed with the Donovan et al. model obtained by varying the dead time constant arbitrarily.  Although it is clouded somewhat by counting error, it is difficult to escape noticing that a minimum is present on the plot (as I've noted elsewhere).  As I’ve shown above, the presence of such a feature is a likely indication of failure of the model, with pronounced error possibly present even in the region of low count rates, in which region the linear model displays superior performance:

So, we simply adjust our dead time constant to obtain a *constant* k-ratio, because, as we all already know, we *should* obtain the same k-ratios as a function of beam current!  So let's drop it from 1.32 usec to 1.28 usec. Not a big change, but at these count rates the DT constant value is very sensitive:

(https://probesoftware.com/smf/gallery/395_12_08_22_10_33_15.png)

In conclusion, I urge people in the strongest possible terms not to use the model of Donovan et al.  Not only does it produce physically unrealistic corrections at all count rates, but it is substantially out-performed by the linear model of Ruark and Brammer (1937) at the relatively low count rates (no greater than several tens of kcps) at which the linear model is applicable.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 15, 2022, 10:06:48 AM
It’s clear that the linear correction equation, N(N’), of Ruark and Brammer (1937, Physical Review 52:322-324, eq. 2) as used by Heinrich et al. (1966) and many others is not perfectly accurate – I never claimed that it was.  It fails badly at high count rates and actually begins deviating from reality at any count rate greater than zero, yet it still provides a very reasonable approximation of the corrected count rate up to several tens of kcps.  The equation is pragmatically simple, and this is adequate justification for its use.  It does not need to be perfectly accurate to provide valuable results.

Wow, really?  So you're going down with the ship, hey?   :P

Let me get this straight: you're saying that although the logarithmic expression is more accurate over a larger range of count rates, we should instead limit ourselves to the traditional expression because it's, what did you say, "pragmatically simple"?   ::)

Yeah, the geocentric model was "pragmatically simple" too!   ;D

In conclusion, I urge people in the strongest possible terms not to use the model of Donovan et al.  Not only does it produce physically unrealistic corrections at all count rates, but it is substantially out-performed by the linear model of Ruark and Brammer (1937) at the relatively low count rates (no greater than several tens of kcps) at which the linear model is applicable.

I really don't think you understand what the phrase "physically unrealistic" means!  Accounting for multiple photon coincidence is "physically unrealistic"?

As for saying the logarithmic expression "is substantially out-performed by the linear model... at... relatively low count rates", well the data simply does not support your claim at all.

Here is a plot of the Ti Ka k-ratios for the traditional (linear) expression at 1.32 usec and the logarithmic expression at both 1.28 and 1.32 usec, plotting only k-ratios produced from 10 to 40 nA.  You know, the region where you say the logarithmic model is "substantially out-performed"?

(https://probesoftware.com/smf/gallery/395_15_08_22_9_10_21.png)

First of all, what everyone (except you apparently) will notice is that the k-ratios measured at 10 nA and 20 nA are statistically identical for all the expressions!

Next, what everyone (except you apparently) will also notice is that although the linear model at 1.32 usec (red symbols) and the logarithmic model at 1.28 usec (green symbols) are statistically identical at 10 nA and 20 nA, the wheels start to come off the linear model (red symbols) beginning around 30 to 40 nA.

You know, at the typical beam currents we use for quantitative analysis! So much for "is substantially out-performed by the linear model... at... relatively low count rates"!   :o

As for your declaration by fiat that we cannot adjust the dead time constant for use with an expression that actually corrects for multiple photon coincidence, it's just ridiculous.  You do realize (or maybe you don't) that multiple photon coincidence occurs at all count rates, to varying degrees depending on the count rate and dead time value. So an expression that actually corrects for these events (at all count rates) improves accuracy under all conditions.  Apparently you're against better accuracy in science...   ::)

Now here's the full plot showing k-ratios from 10 nA to 140 nA.

(https://probesoftware.com/smf/gallery/395_15_08_22_8_52_10.png)

Now maybe you want to arbitrarily limit your quantitative analyses to 10 or 20 nA, but I think most analysts would prefer the additional flexibility in their analytical setups, especially when performing major, minor and trace element analysis at the same time at moderate to high beam currents, while producing statistically identical results at low beam currents.

I think we should have choices in our scientific models. We have 10 different matrix corrections in Probe for EPMA, so why not 4 dead time correction models?  No scientific model is perfect, but some are better and some are worse.  The data shows us which ones are which.  At least to those who aren't blind to progress.
Title: Re: An alternate means of calculating detector dead time
Post by: sem-geologist on August 15, 2022, 09:35:13 PM

I get rather weary of your armchair criticisms.  You pontificate on various subjects, yet you provide no data or models.  If you have data or a model that sheds light on the correction for dead time or pulse pileup, then please present it.  A meme is not an adequate substitute.

I had already shared some of my own Python Monte Carlo simulation code, which modeled the pile-up events (adding further lost counts on top of those lost to the dead time) and produced results pretty close to the count rates observed when going to very high currents (like >800 nA) on our SXFiveFE.
Here: https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892 (https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892)
The Monte Carlo simulation is much simplified (only 1 µs resolution), and I have plans to remake it in Julia with better resolution and a better pulse model, to make it possible to also take into account pulses that are shifted out of the counting system's PHA window. So yes, I currently have no ready equation to show off, and I could not fit the MC datasets at the time, because I was stuck in the same rabbit hole you are currently in – trying to tackle the system as a single entity instead of subdividing it into simpler, independent units or abstraction levels; I am still learning the engineer's way of "divide and conquer" – but I am working on it.
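
For anyone who wants to play with the idea without digging through the linked thread, here is a minimal toy sketch in the same spirit (my own simplification, not the actual code linked above): photon arrivals are drawn as a Poisson process, arrivals closer together than an assumed pile-up window (taken as 0.5 µs, roughly the shaping-related pulse overlap discussed below) merge into one pulse, and a non-extending dead time (the Cameca-style 3 µs integer setting) then gates the counting.

import numpy as np

rng = np.random.default_rng(42)

def simulate_counts(true_rate_cps, live_time_s=0.5,
                    pileup_window_s=0.5e-6, dead_time_s=3.0e-6):
    """Toy model: Poisson arrivals -> paralyzable pile-up merging -> non-extending dead time."""
    n_photons = rng.poisson(true_rate_cps * live_time_s)
    arrivals = np.sort(rng.uniform(0.0, live_time_s, n_photons))
    # 1) merge arrivals closer than the pile-up window into single (piled-up) pulses
    pulses, last_arrival = [], -np.inf
    for t in arrivals:
        if (t - last_arrival) < pileup_window_s:
            last_arrival = t          # rides on the ongoing shaped pulse: no new pulse
            continue
        pulses.append(t)
        last_arrival = t
    # 2) apply a non-extending dead time to the surviving pulses
    counted, last_counted = 0, -np.inf
    for t in pulses:
        if (t - last_counted) >= dead_time_s:
            counted += 1
            last_counted = t
    return counted / live_time_s

for rate in (10e3, 100e3, 300e3, 1000e3):
    print(f"input {rate/1e3:6.0f} kcps -> counted {simulate_counts(rate)/1e3:6.1f} kcps")

This is only meant to show the structure of such a simulation (two separate mechanisms: pile-up and blanking); the real pulse shape, the PHA window and the exact Cameca gating logic are all missing, so the numbers it prints should not be compared too literally with the MC results linked above.
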


On the instrument that I operate (JEOL JXA-8230), the required correction to the measured count rate is in the vicinity of 5% at 30 kcps.  Under these conditions, how significant is pulse pileup likely to be?  I have never advocated for application of the correction equation of Ruark and Brammer (1937) outside the region in which calculated ratios plot as an ostensibly linear function of measured count rate.

That will depend entirely on the pulse shaping time and on how the pulse is read (the sensitivity of the trigger that senses the transition from the pulse's rising edge to its top). In your case pulse pile-ups could be <0.1% or the whole 5% of it. On the Cameca SX line, where the pulses are shaped inside the AMPTEK A203 (charge-sensitive preamplifier and shaping amplifier in a single package, so we have publicly available documentation for it), the shaping is 500 ns (the full pulse width is 1 µs). According to my initial MC results, on a Cameca SX at 10 kcps that makes 0.5% of counted pulses, so at 30 kcps on a Cameca instrument it would be around 2% of pulses (I should look back at my MC simulation to be more precise). Were the shaping time larger, that number could be twice as big. Unlike on a JEOL probe, on the Cameca SX you can set the dead time to an arbitrary (integer) value, and that has absolutely no impact on the percentage of pile-ups, as the shaping time is fixed (different from EDS, where the output of the charge-sensitive preamplifier can be piped to different pulse shaping amplifiers) and the occurrence of pile-up depends directly only on the pulse density (the count rate) – that is my main sticking point in any critique of you and probeman. Also, I don't know how it is on a JEOL probe, but on Cameca probes there are test pins on the WDS signal distribution board for the signals coming out of the shaping amplifier, and they can be monitored with an oscilloscope – so you can physically catch the pile-up events, not just talk theoretically, and I have spent some time with an oscilloscope at different count rates. It is a bit overwhelming to save and share those oscilloscope figures (it is feature-poor rather than high-end gear), but if there is demand I will prepare something to show (I plan a separate thread about the signal pipeline anyway).
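
As a quick cross-check on those percentages, here is the standard Poisson coincidence estimate of the fraction of pulses that have at least one further photon arriving within a resolving window t_w; taking t_w ≈ 0.5 µs (the shaping-related pulse overlap assumed above), it gives 0.5 % at 10 kcps and ~1.5 % at 30 kcps, i.e. the same order as the MC figures quoted.

import numpy as np

def pileup_fraction(rate_cps, resolving_window_s=0.5e-6):
    """Poisson probability that at least one more photon falls within the window."""
    return 1.0 - np.exp(-rate_cps * resolving_window_s)

for rate in (10e3, 30e3, 100e3):
    print(f"{rate/1e3:5.0f} kcps -> ~{100.0 * pileup_fraction(rate):4.1f} % of pulses affected by pile-up")
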

BTW, looking at the raw signal with the help of an oscilloscope was one of the best self-educational moments of my probe career. It instantly cleared up for me how the PHA works (and does not work), the role of bias and gain, and why the PHA shifts (none of those fairy tales about positive ions crowding around the anode or the anode voltage dropping – there is a much simpler, signal-processing-based explanation), and it made me instantly aware that there are pile-ups and that they are a pretty big problem even at very low count rates. The only condition under which you can be sure there was physically no pile-up is a count rate of 0 cps.

And so, after seeing quintuple (×5) pile-ups (I am not joking), I made the Monte Carlo simulation, as it became clear to me that all hitherto proposed equations completely miss the point of pile-ups and fuse these two independent constants into a single constant (the same is said at https://doi.org/10.1016/j.net.2018.06.014 (https://doi.org/10.1016/j.net.2018.06.014)). I was also initially led to the wrong belief that proportional counters can have a dead time in the counter itself – which I found out is not the case, thus simplifying the system (and thus the equation I am working on).

The meme was my answer to the (to me) ridiculous claim of "photon coincidence" where it should be "pulse pile-up" to make sense, as their equation and method cannot detect anything at the photon level – photon coincidence is a few orders of magnitude shorter than the shaped pulses (which are what is counted, not the photons directly). Yes, Brian, I probably would have done better to make a chart of pulses on a time scale illustrating the differences, but seeing how probeman just ignores all your plots (and units), I went for a meme.

I am starting to understand probeman's stubbornness with nomenclature.

From that shared publication it becomes clear that, historically, two independent processes – the physical dead time (the real, electronic blocking of the pipeline) and pile-up events – were very often (wrongly) fused into a single "tau", and probeman et al. are going to keep up the same tradition.

There is another problem I have with this log method: it would need separate "calibrations" for every hardware integer dead time setting on Cameca SX spectrometers (the default is the integer 3, but it can be set from 1 to 255).
"Whatever," probeman would say, as he does not change the hardware dead times and thus sees no problem.
But wait a minute, what about this?:
I think we should have choices in our scientific models. We have 10 different matrix corrections in Probe for EPMA, so why not 4 dead time correction models?  No scientific model is perfect, but some are better and some are worse.  The data shows us which ones are which.  At least to those who aren't blind to progress.
Is it fair to compare 4 dead time correction models with 10 matrix corrections? With matrix corrections we can get different results from exactly the same input (some would even argue that the MACs should be different and particularly fitted to one or another matrix correction model). With dead time corrections that is not the case, as the methods now included in PfS require the """dead time constant""" to be "calibrated" separately for each method, because these "constants" will take different values depending on the dead time correction method used (i.e. with the classical method probably more than 3 µs, with the probeman et al. log method less than 3 µs, and with the Willis and six-term models somewhere in between). <sarcasm on>So probably the PfS configuration files will address this need and will be enlarged a tiny bit. Is it going to have a matrix of dead time "constants" for 4 methods, and different XTALS, and a few per XTAL for low and high angles...? Just something like 80 to 160 positions to store "calibrated dead time constants" (let's count: 5 spectrometers * 4 XTALS * 4 methods * 2 high/low XTAL positions) – how simple is that?<sarcasm off>
That is the main weakness of all of these dead time corrections when the pile-up correction is not understood and is fused together with the deterministic, pre-fixed signal-blanking dead time.

I do, however, tend to see this log equation as something less wrong, at least demonstrating that "matrix matched" standards are pointless in most cases. I wish the nomenclature were given a second thought, and that probeman et al. would try to undo some of the historical nomenclature confusion rather than repeat and continue that inherited mess from the past. I know it can take a lot of effort and nerves to go against the momentum.

On the other hand, if Brian is staying, and will be staying, in the low count rate ranges, I see no problem – the classical equation will not fail him there. I, however, would not complain if count rates could be increased even to a few Mcps, and I am making some feasible plans (a small hardware project) to get there one day with proportional counters.

It was already mentioned somewhere here that the ability to measure correctly at high currents brings a huge advantage for mapping, where switching conditions is not so practical.


Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 16, 2022, 04:03:49 PM
I had already shared some of my own Python Monte Carlo simulation code, which modeled the pile-up events (adding further lost counts on top of those lost to the dead time) and produced results pretty close to the count rates observed when going to very high currents (like >800 nA) on our SXFiveFE.
Here: https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892 (https://probesoftware.com/smf/index.php?topic=33.msg9892#msg9892)
The Monte Carlo simulation is much simplified (only 1 µs resolution), and I have plans to remake it in Julia with better resolution and a better pulse model, to make it possible to also take into account pulses that are shifted out of the counting system's PHA window. So yes, I currently have no ready equation to show off, and I could not fit the MC datasets at the time, because I was stuck in the same rabbit hole you are currently in – trying to tackle the system as a single entity instead of subdividing it into simpler, independent units or abstraction levels; I am still learning the engineer's way of "divide and conquer" – but I am working on it.

I apologize for criticizing you unduly.  I simply did not remember your post from last year.  It would be helpful if you could expand on your treatment and show tests of your model (by fitting or comparing to data) with in-line illustrations.

As I’ve noted on a variety of occasions, my goal is to find the simplest possible model and method for determination of dead time at relatively low count rates.  It’s clear at this point that our picoammeters are not as accurate as we’d like them to be.  The ratio method of Heinrich et al. relies on as few measurements as possible for each calculated ratio (measurement is required only at each peak position simultaneously), which reduces the impact of counting error; it eliminates the picoammeter as a source of systematic error.  Further, inhomogeneities in the analyzed material do not affect the quality of the data, as both spectrometers (used simultaneously) are used to analyze the same spot.  Even if the equation of Ruark and Brammer (1937) is not fully physically realistic, it still constitutes a useful empirical model, as deviation from linearity in my ratio plots is only visible at count rates in excess of 50 kcps.  Like I noted, the equation is pragmatically simple, and this is nothing to be scoffed at (not that you’ve done this).  Donovan et al. have attempted to create a model that they say is applicable at high count rates, yet, as I’ve shown clearly, the model sacrifices physical reality as well as accuracy at low count rates.  This is simply not acceptable.  So far, I’ve mostly been ridiculed for pointing this out.

If pulse pileup is a serious problem, then this should be revealed at least qualitatively as a broadening of the pulse amplitude distribution with increasing count rate.  Here are some PHA scans collected at the Si Ka peak position using uncoated elemental Si and TAPJ at various measured (uncorrected) count rates.  While the distribution does broaden noticeably progressing from 5 kcps to 50 kcps, deterioration in resolution only becomes visibly severe at higher count rates.  Although this assessment is merely qualitative, the behavior of the distribution is so poor above 100 kcps that I question whether quantitative work is even possible when calibrating at lower count rates.  Further, note the extent to which I had to adjust the anode bias just to keep the distribution centered in the window, let alone the fact that the distribution is clearly truncated at high count rates.

(https://probesoftware.com/smf/gallery/381_16_08_22_3_59_41.png)
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 16, 2022, 06:39:54 PM
The ratio method of Heinrich et al. relies on as few measurements as possible, which reduces the impact of counting error; it eliminates the picoammeter as a source of error.

You have got to be kidding! Fewer data points means lower counting errors?   :o  Sorry, but you're going to have to explain that one to me....

As for the picoammeter, the constant k-ratio method also eliminates the issue of picoammeter accuracy  as you have already admitted in previous posts (because we measure each k-ratio at the same beam current).  So just stop, will you?   ::)

Even if the equation of Ruark and Brammer (1937) is not fully physically realistic, it still constitutes a useful empirical model, as deviation from linearity in my ratio plots is only visible at count rates in excess of 50 kcps. 

Yes, exactly. The traditional/Heinrich (linear) model is "not fully physically realistic", and because of this, the model is only useful at count rates under 50K cps.  The fact that the raw data (even at relatively lower count rates) starts to demonstrate a non-linear response of the counting system means that assuming a linear model for dead time is simply wrong.  It is, in a nutshell, *non physical*.    ;D

You're a smart guy (though just a tiny bit stubborn!)  :) , so if you just thought about the probabilities of photon coincidence for a few minutes, this should become totally clear to you.

Donovan et al. have attempted to create a model that they say is applicable at high count rates, yet, as I’ve shown clearly, the model sacrifices physical reality as well as accuracy at low count rates.  This is simply not acceptable.  So far, I’ve mostly been ridiculed for pointing this out.

Indeed you should be ridiculed because you have shown no such thing. Rather, we have shown repeatedly that the accuracy of the traditional expression and the logarithmic expression are statistically identical at low count rates.

As for "physical reality", how is including the probability of multiple photon coincidence somehow not physical?  I wish you would explain this point instead of just asserting it. Do you agree that multiple photons can be coincident within a specified dead time period?  Please answer this question...

And see this plot for actual data at 10 nA and 20 nA (you know, low count rates):

(https://probesoftware.com/smf/gallery/395_15_08_22_9_10_21.png)

How exactly is this data at 10 and 20 nA "inaccurate"? The data for the linear and non-linear models at 10 nA are almost exactly the same. But at 20 nA we are seeing a slightly larger deviation, which just increases as the count rate increases.  You need to address these observations before you keep on making a fool of yourself. 

And see here for the equations themselves:

(https://probesoftware.com/smf/gallery/395_23_07_22_9_17_47.png)

It sure looks like all the equations converge at low count rates to me.  What do you think?  I'd like to hear your answer on this...

So if your data disagrees, then you've obviously done something wrong in your coding or processing, but I'm not going to waste my time trying to figure it out for you.  I humbly suggest that you give the constant k-ratio method a try, as it also does not depend on picoammeter accuracy, *and* it is much more simple and intuitive than the Heinrich method.

After all, k-ratios are what we do in EPMA.
Title: Re: An alternate means of calculating detector dead time
Post by: sem-geologist on August 16, 2022, 10:54:48 PM

And see here for the equations themselves:

(https://probesoftware.com/smf/gallery/395_23_07_22_9_17_47.png)

It sure looks like all the equations converge at low count rates to me.  What do you think?  I'd like to hear your answer on this...


In all seriousness, I at least have no idea from that plot how big the relative error is at small count rates. Please re-plot it with x and y on log10 scales (with major and minor gridlines on), as that will improve the plot's readability and prove/disprove your claim about perfect convergence.

It looks like you don't get the point of the formulation "as few measurements" – maybe the wording is not clear or was poorly chosen. I think I am starting to understand what Brian is trying to convey to us, and I actually partially agree with it. The problem is this:
After all, k-ratios are what we do in EPMA.
But we don't! We do k-ratios in the software! The software calculates them – and we need reference measurements for that to work. And wait a moment – k-ratios are not simply pulses versus pulses: we need to interpolate the background count rate (again, that depends on our interpretation of how to do it correctly) and remove it from the counts before building the k-ratios. Dead time, pulse pile-up, picoammeter non-linearity, Faraday cup contamination, spectrometer problems, beam charging – these problems do not go away, and they are present in every measurement (including measurements that are not referenced to anything, e.g. a WDS wavescan). EPMA continuously measures two quantities – pulses and time – and, on demand, the beam current (the machine itself monitors and regulates other parameters such as the HV). In any case, k-ratios are calculated mathematical constructs, not direct measurements. We assume the time is measured precisely; the problem arises from the system's inability to count pulses arriving at too high a rate, or too close together, and that is what the dead time correction models address.
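
To spell out what "k-ratios are calculated, not measured" means in practice, here is a bare-bones sketch of the chain from raw count rates to a k-ratio (dead time correction, then background subtraction, then the ratio); the linear correction, the simple two-point background average and all of the numbers are purely illustrative assumptions, not anyone's actual software.

def dead_time_corrected(raw_cps, tau_s):
    """Linear dead time correction of a measured count rate (illustrative only)."""
    return raw_cps / (1.0 - raw_cps * tau_s)

def net_peak(peak_cps, bkg_lo_cps, bkg_hi_cps, tau_s):
    """Correct peak and off-peak rates, then subtract an interpolated background."""
    peak = dead_time_corrected(peak_cps, tau_s)
    bkg = 0.5 * (dead_time_corrected(bkg_lo_cps, tau_s) +
                 dead_time_corrected(bkg_hi_cps, tau_s))   # crude linear interpolation
    return peak - bkg

tau = 1.4e-6   # s, hypothetical
k_ratio = (net_peak(9_000.0, 120.0, 110.0, tau) /    # unknown (hypothetical rates)
           net_peak(12_000.0, 150.0, 140.0, tau))    # standard (hypothetical rates)
print(f"k-ratio = {k_ratio:.4f}")

Each intermediate step can carry its own systematic error before the k-ratio is ever plotted, which is the point being made above.
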

Now don't get me wrong: I agree that k-ratios ideally should be the same at low, low-middle, middle, middle-high, high and ultra-high count rates. What I disagree with is using k-ratios as the starting (and only) point for calibrating the dead time, effectively hiding problems in some of the lower-level subsystems within the standard deviation of such an approach. probeman, we have not seen how your log model, calibrated over this high range of currents, performs at the low currents Brian addresses here – I mean at 1-10 kcps, or at currents from 1 to 10 nA. I know it is going to be a pain to collect a meaningful number of counts at such low count rates. But accuracy at low currents should not be sacrificed, as there are plenty of minerals which are small (no defocusing trick possible) and beam sensitive. It could be that your log equation takes care of that. In particular, I am absolutely not convinced that what you call an anomaly at 40 nA in your graphs is not actually the correct measurement, with your 50-500 nA range being the one that is wrong (picoammeter). Also, in most of your graphs you still get not a straight line but distributions clearly bent one way or the other (visible with the bare eye).

While k-ratio vs count rate plots have their testing merits, I would rather go about it by systematically identifying and fixing the problems in the smaller pieces of the device separately, or at least in independently testable ones. I would start with the picoammeter measurements: collect readings while sweeping through the range of the C1+C2 coils (or only C2 on a field-emission type of tip), then plot the raw counts against the current readings (versus the coil strength value) and look for any steps at 0.5, 5, 50, 500 nA (yes, you need some knowledge about the hardware to know where to look). Only if that is correct is it worth testing the linearity (e.g. to some degree against the EDS total pulse count, since EDS has much more advanced dead time corrections and pile-up compensation). But the linearity could be measured even better with artificial, precise injection of currents into the picoammeter.

Then, and only then, does it make sense to develop new curves which take into account the collisions of piled-up pulses (well, if you are able to measure photons directly with these devices, why couldn't I measure distant galaxies with them?). To illustrate the absurdity of calling it "photon coincidence" instead of the well-recognized, established term for what actually takes place, "pulse pile-up": consider a report of two cars crashing into one another in which someone states that on such-and-such street two safety belts crashed into each other (while omitting any word about the cars). You are excited about the "smallish" final deviation across a huge range of count rates (which indeed is an achievement), but fail to admit, or to prove otherwise, that it can have drawbacks at particular count-rate spots (which from your point of view may look like insignificant corner cases or anomalies not worth attention, but which for someone else can be their daily bread).


Brian, thanks for these PHA scans – they shed a lot of light on how the counting system behaves on the JEOL side compared to Cameca probes. Are these count rates given raw, or dead time corrected?

I think it is constructed very similarly, with (unfortunately for you) some clear cutting of corners. The bias is quite low, and I am very surprised you see this much PHA shifting. Well, actually, I am not. You can replicate such severe PHA shifting at relatively low-to-moderate count rates by forcing the hardware dead time to 1 µs (that is why JEOL probes have shorter dead times – at the cost of severe PHA shifting – and why the default (but user-changeable) dead time on Cameca is 3 µs: to postpone the PHA shifting to higher count rates). The gain on the Cameca SX can be tweaked with a granularity of 4095 values (12 bits), whereas I hear that on JEOL you can only set it to round numbers of 32, 64, 128 (correct me if I am wrong).

The PHA shifting can look unrelated to this problem, but it is actually hiding your pile-ups. You see, 0 V is the baseline of the pulse only at low count rates. At higher count rates the average baseline shifts below 0 V, and by increasing the bias you actually get much larger pulse amplitudes, which measured from 0 V look the same. At higher count rates (and higher bias voltages) the real baseline of the pulse is far to the left of 0 V and the pile-up peaks are far above 10 V – that is why you see only the broadening (thanks to the baseline moving away while the peak centre is held constant by increasing the bias, it makes a kind of "zooming" move into the distribution). If you want to see the pile-ups in the PHA plots you need to: 1) set your PHA peak at something like 3 V, so that double that value is still exposed in the 0-10 V range of the PHA; 2) not compensate the PHA shifting by increasing the bias – that moves the pile-up peak to the right and can push it over 10 V. It is important to understand that what is above 10 V is not magically annihilated – it still blocks the counting of pulses, as such >10 V pulses need to be digitized before being discarded, and they block the system the same as pulses within the range.
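
A trivial sketch of point 1) above, just to show where the nominal n-fold sum peaks would land for a given single-photon PHA peak setting (baseline shift, escape peaks and gain non-linearity all ignored):

def pileup_peak_positions(single_peak_v, max_order=3, window_v=(0.0, 10.0)):
    """Nominal n-photon sum-peak positions and whether each stays inside the PHA window."""
    lo, hi = window_v
    return [(n, n * single_peak_v, lo <= n * single_peak_v <= hi)
            for n in range(1, max_order + 1)]

for setting in (3.0, 5.0):    # volts; 3 V is the setting suggested above
    print(f"peak at {setting} V ->", pileup_peak_positions(setting))

With the peak at 3 V the double and triple sum peaks sit at 6 V and 9 V and remain visible, whereas with the peak at 5 V anything beyond the double is already pushed out of the 0-10 V window.
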

Does your JEOL probe have the ability to pass all pulses (like integral mode on the Cameca SX), including those above 10 V?
I can see the pulse pile-up in the PHA distribution on the Cameca SX; look below at how the PHA peak at ~4 V grows with increasing count rate:
(https://probesoftware.com/smf/gallery/1607_05_05_22_8_05_38.bmp)

The picture above uses a custom bias/gain setup to diminish the PHA shifting. The next picture will give you an idea of how the PHA normally shifts on a Cameca probe. Count rates are raw (not corrected).

BTW, the broadening is going to be more severe when Ar escape peaks are present, as there are then many possible pile-up combinations: main peak + Ar esc, Ar esc + Ar esc, 2× Ar esc + 2× main, ...

Here is the other image, with normal counting using automatic bias and gain settings, where pile-ups are visible in the PHA too:
(https://probesoftware.com/smf/gallery/1607_05_05_22_8_00_50.bmp)
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 17, 2022, 07:31:23 AM

And see here for the equations themselves:

(https://probesoftware.com/smf/gallery/395_23_07_22_9_17_47.png)

It sure looks like all the equations converge at low count rates to me.  What do you think?  I'd like to hear your answer on this...


In all seriousness, I at least have no idea from that plot how big the relative error is at small count rates. Please re-plot it with x and y on log10 scales (with major and minor gridlines on), as that will improve the plot's readability and prove/disprove your claim about perfect convergence.

You have got to be kidding.

(https://probesoftware.com/smf/gallery/395_17_08_22_7_08_24.png)

Does that help at all? 

It looks like you don't get the point of the formulation "as few measurements" – maybe the wording is not clear or was poorly chosen. I think I am starting to understand what Brian is trying to convey to us, and I actually partially agree with it. The problem is this:
After all, k-ratios are what we do in EPMA.
But we don't! We do k-ratios in the software! The software calculates them – and we need reference measurements for that to work. Dead time, pulse pile-up, picoammeter non-linearity, Faraday cup contamination, spectrometer problems, beam charging – these problems do not go away, and they are present in every measurement (including measurements that are not referenced to anything, e.g. a WDS wavescan). EPMA continuously measures two quantities – pulses and time – and, on demand, the beam current (the machine itself monitors and regulates other parameters such as the HV). In any case, k-ratios are calculated mathematical constructs, not direct measurements.

What?  I never said we don't do k-ratios in software. I said we do them in EPMA!    

And EPMA includes both instrumental measurements of raw intensities and software corrections of those intensities. I really don't get your point here.

But let me point out that even if we just plotted up these raw measured intensities with no software corrections, these non-linear dead time effects at high count rates would be completely obvious.

The reason we plot them up as background corrected k-ratios is simply to make any dead time mis-calibration more obvious, since as you have already stated, the k-ratio should remain constant as a function of beam current/count rate!

I wonder if Brian agrees with this statement...  I sure hope so.

We assume the time is measured precisely; the problem arises from the system's inability to count pulses arriving at too high a rate, or too close together, and that is what the dead time correction models address. 

And that is exactly the problem that the logarithmic expression is intended to address!    ::)

Now don't get me wrong: I agree that k-ratios ideally should be the same at low, low-middle, middle, middle-high, high and ultra-high count rates. What I disagree with is using k-ratios as the starting (and only) point for calibrating the dead time, effectively hiding problems in some of the lower-level subsystems within the standard deviation of such an approach. probeman, we have not seen how your log model, calibrated over this high range of currents, performs at the low currents Brian addresses here – I mean at 1-10 kcps, or at currents from 1 to 10 nA. I know it is going to be a pain to collect a meaningful number of counts at such low count rates. But accuracy at low currents should not be sacrificed, as there are plenty of minerals which are small (no defocusing trick possible) and beam sensitive. It could be that your log equation takes care of that. In particular, I am absolutely not convinced that what you call an anomaly at 40 nA in your graphs is not actually the correct measurement, with your 50-500 nA range being the one that is wrong (picoammeter).

First of all, I have not been showing any plots of Cameca k-ratios in this topic, only JEOL.  The 40 nA anomalies were only visible in the SX100 k-ratio data, which were only shown at the beginning of the other topic.  Here I am sticking to plotting only the JEOL data from Anette's instrument, because it does not show these Cameca anomalies.

Also, in most of your graphs you still get not a straight line but distributions clearly bent one way or the other (visible with the bare eye).

Yeah, guess what, these instruments are not perfect. But the "bent one way or the other" distributions you describe are very much within the measurement noise.  Try fitting that data to a regression and you won't see anything statistically significant. 

I have been planning to discuss these very subtle effects in the main topic (and have several plots waiting in the wings), but I want to clear up your and Brian's misunderstandings first. If that is possible!
Title: Re: An alternate means of calculating detector dead time
Post by: sem-geologist on August 17, 2022, 07:48:39 AM
You have got to be kidding.

...

Does that help at all? 

Yes, that helps a lot, and it nails down the point of being OK at low count rates – and I was absolutely not kidding about that. Consider using log scales for the publication, as they show very clearly that it is on par with the classical equation at low count rates. It also would not hurt to enable minor grid lines in the plot (they expose where the linear-like behaviour turns into a curve), in case you are going to publish something like that. I would also swap x and y, as that would more closely resemble the classical efficiency plots of other detectors in the SEM/EPMA and detection-systems literature, or the one I linked a few posts back.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 17, 2022, 07:56:03 AM
You have got to be kidding.

...

Does that help at all? 

Yes, that helps a lot, and it nails down the point of being OK at low count rates!

Well, thank goodness for that!     ;D

Now if only Brian would "see the light" too.     ::)
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 17, 2022, 08:13:23 AM
Since Brian continues to insist these expressions yield significantly different results when working with low count rate data, even though the mathematics of these various expressions clearly shows that their corrections all approach unity at low count rates, let's run through some data for him.

Instead of looking at Anette's Spc3 PETL spectrometer, we'll switch to her Spc2 LIFL spectrometer which produces 1/5 the count rate of the PETL spectrometer.  So, 5 times less count rate, and then plot that up with both the traditional linear expression and the new logarithmic expression:

(https://probesoftware.com/smf/gallery/395_17_08_22_7_34_16.png)

Note that the 10 nA data starts at 4K cps, while the 200 nA data finishes with 80K cps. As measured on the pure Ti metal standard. The TiO2 count rates will be lower of course as that is the whole point of the constant k-ratio dead time calibration method!

Please note that at 10 nA (4K cps on Ti metal) the points using the traditional linear expression and the points using the logarithmic expression are producing essentially identical results.  At lower count rates, they will of course be even more identical.

Could it be any more clear?  OK, I'll make it more clear.  Here are quantitative results for our MgO-Al2O3-MgAl2O4 FIGMAS system, measured at 15 nA, so very typical (moderately low) count rates (9K cps and 12K cps respectively), starting with the traditional linear dead time expression correction:

St 3100 Set   2 MgAl2O4 FIGMAS
TakeOff = 40.0  KiloVolt = 15.0  Beam Current = 15.0  Beam Size =   10
St 3100 Set   2 MgAl2O4 FIGMAS, Results in Elemental Weight Percents
 
ELEM:       Mg      Al       O
TYPE:     ANAL    ANAL    SPEC
BGDS:      EXP     EXP
TIME:    60.00   60.00     ---
BEAM:    14.98   14.98     ---

ELEM:       Mg      Al       O   SUM 
    19  16.866  37.731  44.985  99.582
    20  16.793  37.738  44.985  99.517
    21  16.824  37.936  44.985  99.745

AVER:   16.828  37.802  44.985  99.615
SDEV:     .036    .116    .000    .118
SERR:     .021    .067    .000
%RSD:      .22     .31     .00

And now the same data, but using the new logarithmic dead time correction expression:

St 3100 Set   2 MgAl2O4 FIGMAS
TakeOff = 40.0  KiloVolt = 15.0  Beam Current = 15.0  Beam Size =   10
St 3100 Set   2 MgAl2O4 FIGMAS, Results in Elemental Weight Percents
 
ELEM:       Mg      Al       O
TYPE:     ANAL    ANAL    SPEC
BGDS:      EXP     EXP
TIME:    60.00   60.00     ---
BEAM:    14.98   14.98     ---

ELEM:       Mg      Al       O   SUM 
    19  16.853  37.698  44.985  99.536
    20  16.779  37.697  44.985  99.461
    21  16.808  37.887  44.985  99.681

AVER:   16.813  37.761  44.985  99.559

Is that close enough?   And again, at lower count rates, the results will be even closer together for the two expressions.

The only reason there is any difference at all (in the 3rd or 4th significant digit!) is because these are at 9K and 12K cps, and we still have some small multiple photon coincidence even at these relatively low count rates, which the linear model does not account for!

Now let's go back to the plot above and add some regressions and see where they start at the lowest count rates to make it even more clear:

(https://probesoftware.com/smf/gallery/395_17_08_22_8_05_58.png)

Is it clear now?
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 17, 2022, 11:23:32 AM
You have got to be kidding.

...

Does that help at all? 

Yes, that helps a lot, and it nails down the point of being OK at low count rates – and I was absolutely not kidding about that. Consider using log scales for the publication, as they show very clearly that it is on par with the classical equation at low count rates. It also would not hurt to enable minor grid lines in the plot (they expose where the linear-like behaviour turns into a curve), in case you are going to publish something like that. I would also swap x and y, as that would more closely resemble the classical efficiency plots of other detectors in the SEM/EPMA and detection-systems literature, or the one I linked a few posts back.

The different models (linear versus 2-term or 6-term/log-term) produce slightly different results at low count rates.  How could they not?  These small differences create problems when calculating ratios.  Further, as I’ve noted, the dead time constant cannot be adjusted arbitrarily without producing results that are physically unrealistic.  Please look very closely at my plots and commentary for Si, especially for the more subtle case of measurement set 1.  The “delta” plot is critically important.

I am not posting further on this subject.  I have been belittled repeatedly, and I am sick of it.  If you want to examine my spreadsheets in detail, then e-mail me at brian.r.joy@gmail.com or brian.joy@queensu.ca.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 17, 2022, 12:12:33 PM
You have got to be kidding.

...

Does that help at all? 

Yes, that helps a lot, and it nails down the point of being OK at low count rates – and I was absolutely not kidding about that. Consider using log scales for the publication, as they show very clearly that it is on par with the classical equation at low count rates. It also would not hurt to enable minor grid lines in the plot (they expose where the linear-like behaviour turns into a curve), in case you are going to publish something like that. I would also swap x and y, as that would more closely resemble the classical efficiency plots of other detectors in the SEM/EPMA and detection-systems literature, or the one I linked a few posts back.

The different models (linear versus 2-term or 6-term/log-term) produce slightly different results at low count rates.  How could they not?  These small differences create problems when calculating ratios.  Further, as I’ve noted, the dead time constant cannot be adjusted arbitrarily without producing results that are physically unrealistic.  Please look very closely at my plots and commentary for Si, especially for the more subtle case of measurement set 1.  The “delta” plot is critically important.

Yes, they do produce statistically insignificant differences at low count rates, as they should, since the traditional linear expression cannot correct for multiple photon coincidence – and these multiple photon events do occur even at low count rates, though again, insignificantly. At the lowest count rates, all 4 expressions will produce essentially identical results, as even SEM Geologist now accepts.

The point is, that it is the traditional linear expression which is "physically unrealistic" (as you like to say), because it can only correct for single photon coincidence. Why is it so hard for you to understand this?

You do realize (I hope) that you are fitting your dead time constant to a linear model that doesn't account for multiple photon coincidence, so it is you that is adjusting your dead time to a physically unrealistic model.  Of course the dead time constant can be adjusted to fit a better (more physically realistic) model!   ;D

But the important point of all this, is not that the various expressions all produce similar results at low beam currents, but that the newer expressions (six term and logarithmic) produce much more accurate data at count rates that exceed 50K, 100K and even 300K cps.  As you have already admitted.  Yet you prefer to sulk in the 1930s and stubbornly limit your count rates to 30K or 40K cps.

That is your choice I guess.

I am not posting further on this subject.  I have been belittled repeatedly, and I am sick of it.  If you want to examine my spreadsheets in detail, then e-mail me at brian.r.joy@gmail.com or brian.joy@queensu.ca.

Good, because you need some time to think over where you are going wrong.  I'm not going to fix your mistakes for you!
Title: Re: An alternate means of calculating detector dead time
Post by: sem-geologist on August 18, 2022, 04:43:22 AM
Further, as I’ve noted, the dead time constant cannot be adjusted arbitrarily without producing results that are physically unrealistic. 
That is where I disagree with both of you. First of all, the dead time constants should not be touched or adjusted at all – there should be some other adjustable variable to make the models fit. The dead time is introduced deterministically (by physical, digitally controlled clocks – they cannot skip a beat with age or change their frequency; that is not an option in this world, only in fantasies), and this "tuning" of the dead time constant arose historically from highly imperfect models. Both of you still have an approach from the start of the last century in representing the urge to "tune" the dead time constants. Thus I find it funny when one of you blames the other for ignoring progress and choosing to stay in the last century.  ;D

However, I think probeman et al.'s model is not physically realistic enough, as it accounts for pulse pile-ups too weakly (not too strongly, as Brian's argumentation suggests), and that becomes obvious at higher currents/higher count rates (I don't see those high count rates as an anomaly, but as one of the pivotal points for testing the correctness of the model). But the classical "linear" model does not account for them at all, so in that sense this new log function is much better, as it does so at least partially, and while it is still not perfect, it is a move in the right direction.
I disagree on some minor points of nomenclature, such as "photon coincidence". If this were published in a physics/instrumentation/signal-processing journal, such a coined term would be ridiculed, as the events in question are hidden behind electronic signals that are wider in time than photon coincidence events by nearly a few orders of magnitude (250 ns vs 5-10 ns). Unless probeman has discovered that signals in metal wires are photon- and not electron-movement-based...

What?  I never said we don't do k-ratios in software. I said we do them in EPMA!    

And EPMA includes both instrumental measurements of raw intensities and software corrections of those intensities. I really don't get your point here.
I would say it depends on how we use the abbreviation "EPMA"; it depends on context. If the "A" stands for Analysis, then OK, yes, the software is part of that. However, if the "A" stands for Analyzer, then certainly not! I don't think my personal computer can be part of the instrument; that would be ridiculous. And this is not some far-fetched situation, but exactly the situation with EDS and DTSA-II, where I take an EDS spectrum on the SEM-EDS and process it on another (personal) computer, where I also keep a database of standard measurements. Generally the same could be done by taking raw count measurements and recalculating them with CalcZAF (am I wrong?).
Anyway, it is the "analyzer" that introduces dead time, not the "analysis". And k-ratios are a concept of the "analysis"; they are not inside the "analyzer".

Now, because Brian has some doubts about the physical realism of signal pile-up, I share here a few oscilloscope snapshots with some explanations.
To understand better what is going on in the more complicated snapshots, let's start with the simplest situation: a single, lone pulse with a long stretch of nothing before and after it (a typical situation at 1-10 kcps). Please forgive the highly pixelated images, as that is the resolution this low-cost equipment spits out:

(https://probesoftware.com/smf/gallery/1607_17_08_22_1_40_57.bmp)
The thing to note in this picture is that the width of these pulses is the same for any measured X-ray energy or wavelength; it depends only on the time constant set on the shaping amplifier. It is also a bipolar pulse, since it is shown after the second RC differentiation (RC standing for resistor-capacitor, here replaced by an op-amp equivalent) of the monopolar pulse. As this is a Cameca SX line, I opened the cover and noted that the current-sensitive preamplifier and shaping amplifier in use is an Amptek A203 chip; its datasheet states that the shaping time is 250 ns (which equals 1 sigma for a Gaussian pulse shape, although pulses in spectroscopy are not symmetric, and what we see here is the pulse after the second differentiation). That is clearly visible, as it takes about 450 ns to go from nothing (0 V) to the pulse peak, or ~300 ns FWHM (the math is as follows: a 250 ns shaping time gives 2.4 × 250 = 600 ns FWHM after the first differentiation, which is the monopolar pulse; the bipolar pulse after the second differentiation of that monopolar pulse has half of that, thus 300 ns). However, I will ignore the FWHM, as what matters most is the rising edge of the pulse, since that is where pulse detection and the trigger for pulse-amplitude capture happen. What is important to understand is that the default 3 µs (integer) dead time setting causes the random pulses generated after the counted peak to be ignored, so that we only capture pulses with the correct amplitude. Sounds right? If we are not interested in a precise amplitude but only in the number of detected peaks, we can ignore the default 3 µs, set it to 1 µs, and enjoy a severely increased throughput in integral mode. As you can see, 3 µs does not blank the negative "after-pulse" completely, and if we wanted a better PHA shape (less broadening, for a better differential PHA mode) we could increase the dead time to, e.g., 5 µs.
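
For anyone who wants to plug in a different shaping time, here is a tiny Python sketch of that timing arithmetic (my illustration only, treating the 2.4x rule of thumb and the halving after the second differentiation as given):

```python
# Rough timing arithmetic for a bipolar shaped pulse, using the figures quoted
# above (2.4 x shaping time for the mono-polar FWHM, halved after the second
# differentiation).  These are rules of thumb, not datasheet guarantees.
shaping_time_ns = 250.0                       # Amptek A203 shaping time quoted above
fwhm_monopolar_ns = 2.4 * shaping_time_ns     # ~600 ns after the first differentiation
fwhm_bipolar_ns = fwhm_monopolar_ns / 2.0     # ~300 ns after the second differentiation
print(fwhm_monopolar_ns, fwhm_bipolar_ns)     # 600.0 300.0
```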

Next, let's look at the situation at somewhat higher count rates (a random capture at ~20 kcps input); this shows a more realistic picture of how the hardware dead time works (and also sheds light on one of the causes of PHA broadening and shift, though there are additional mechanisms). Please note that I have added a purple line to highlight how the negative "after-tail" of the bipolar pulses influences the amplitude (relative to 0 V) of the following pulses; the red vertical lines are a manual deconvolution showing where the baseline has shifted, demonstrating that there is no physical loss of amplitude in the gas proportional counter; rather, the amplitude is lost in how it is recorded (as absolute voltages):
(https://probesoftware.com/smf/gallery/1607_17_08_22_2_08_01.bmp)
So let's look at three cases, with the dead time set to 1, 2, or 3 µs:
P.S. With some FPGA-based DSP, all four pulses could be correctly recognized and their correct amplitudes captured. The problem of missed pulses arises from the simple (old-school) way of acquiring pulse information (the tandem of comparator and sample-and-hold chips).

But we are discussing pile-ups here, so let's look at the pile-up situation below:
(https://probesoftware.com/smf/gallery/1607_18_08_22_2_36_56.bmp)
There are three numbers on the plot, but actually there are four pulses. Pulses 1 and 2 can still be distinguished visually, as the separation between them is 440 ns, which is more than the shaping time (250 ns). Pulse 3 is a pile-up of two pulses whose time difference is too small to separate them. With the default 3 µs (integer) dead time, only the first pulse would be counted, and the 2nd and 3rd would be ignored; with a 2 or 1 µs dead time the 3rd pulse would be registered, but because it starts at a negative voltage its amplitude relative to 0 V would be badly underestimated, and thus the pile-up would not appear on the PHA plot at twice the value of the primary peak but somewhere in between. That is why, in PHA plots where pile-ups are observed (my two examples in the previous post), they do not form a nice Gaussian-shaped distribution at 2x the primary PHA peak position, but a washed-out, wide, irregular distribution, heavily smeared between the 2x value and the primary peak position. Why is that? A second pulse is much less likely to land on top of a clean pulse (one preceded by enough random silence), a window of only about 450 ns, than to land somewhere in the negative voltage range, a window of about 2.5 µs, and with increasing count rates the latter becomes ever more favored.
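
To put rough numbers on that comparison, here is a quick Monte Carlo sketch (assuming purely Poisson arrivals and the approximate window widths quoted above; an illustration only, not a simulation of the real electronics):

```python
# Monte Carlo sketch: for Poisson pulse trains, how often does a pulse arrive
# within the ~450 ns rising-edge window of the previous pulse (unresolvable
# pile-up) versus within its ~2.5 us negative after-tail?
import random

def tail_fractions(rate_cps, n_pulses=200_000, pileup_win=450e-9, tail_win=2.5e-6):
    pileup = tail = 0
    for _ in range(n_pulses):
        gap = random.expovariate(rate_cps)  # inter-arrival time in seconds
        if gap < pileup_win:
            pileup += 1
        elif gap < tail_win:
            tail += 1
    return pileup / n_pulses, tail / n_pulses

for rate in (10_000, 20_000, 100_000):
    p, t = tail_fractions(rate)
    print(f"{rate:>7} cps: {p:.2%} pile up within 450 ns, {t:.2%} ride the 2.5 us after-tail")
```

Even at 20 kcps the chance of a genuinely unresolvable coincidence comes out at roughly one percent or less, while landing somewhere on the preceding after-tail is several times more likely, which is the point made above.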
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 19, 2022, 08:00:34 PM
Actually, I do have a few more comments to make on this subject, and so I’ll go ahead and do so.  Feel free not to read if you don’t want to.

Let me list some advantages of the Heinrich et al. (1966) count rate correction method ("ratio method").  Obviously, the treatment could be extended to higher count rates with a different correction model.  The method of Heinrich et al. is very well thought out and eliminates some of the problems that can arise when k-ratios are measured/calculated; the method is actually quite elegant.  Note that I've included the reference and a template in the first post in this topic.

1) Only two measurements are required to form each ratio, as no background measurement is required.  The small number of measurements per ratio keeps counting error low.

2) The method is actually very easy to apply.  Each spectrometer is tuned to a particular peak position, and this is where it sits throughout an entire measurement set.  As long as conditions in the lab are stable (no significant temperature change, for instance), reproducibility of peak positions is eliminated as a source of error.  Differences in effective takeoff angle between spectrometers do not impact the quality of the data, as only the count rates are important.

3) Because measurements are made simultaneously on two spectrometers, beam current is completely eliminated as a source of error.  If the current varies while the measurement is made, then this variation is of no consequence.  This is not the case when measuring/calculating k-ratios.

4) While I’ve used materials of reasonably high purity for my measurements, the analyzed material does not need to be homogeneous or free of surface oxidation.  Inhomogeneity or the presence of a thin oxide film of possibly variable thickness will only shift position of the ratio along the same line/curve when plotted against measured count rate, and so variation in composition does not contribute error.  As I showed in plots of my first and second measurement sets for Si, counting error alone can easily explain most of the scatter in the ratios (i.e., other sources of error have been minimized effectively).

Ideally, in evaluation of the ratio data, the count rate for one X-ray line should be much greater than the other (as is typically seen in Kα-Kβ measurement pairs, especially at relatively low atomic number).  By this means, essentially all deviation from linearity on a given plot of a ratio versus count rate is accounted for by the Kα line.  This makes the plot easier to interpret, as it facilitates simple visual assessment of the magnitude of under- or over-correction.

I’d also like to note that measurements for the purpose of count rate correction are generally made on uncoated, conductive materials mounted in a conductive medium, and I have adhered to this practice.  Metals and semi-metals are not subject to beam damage that could affect X-ray count rates.  Carbon coats of unknown and/or possibly variable thickness will also affect X-ray count rates (even for transition metal K lines due to absorption of electron energy) and contribute error, as would ablation of that coat at high current.  Accumulation of static charge should not be discounted as a potential issue when analyzing insulators.  Variably defocusing the beam could affect X-ray count rates as well.

SEM Geologist has pointed out that pulse pileup can be a serious issue.  In using a one-parameter model, the dead time and pulse pileup cannot be distinguished.  Perhaps it would be better to call this single parameter a “count rate correction constant.”  In a sense, at least at relatively low count rate, the distinction is immaterial(?), as I’ve shown in my various plots of calculated ratios versus a given measured count rate that behavior at low count rate is ostensibly linear.  As such, it presents a limiting case.  What I mean by this is that, if the correction constant is changed to suit a particular model, then the slope of that line (determined by linear regression) must change such that it no longer fits the data at those relatively low count rates.  This is not acceptable, and I'll expand on this below with reference to some of my plots.  Maybe I’m wrong, but it appears that adjustment of the constant by Donovan et al. is being done by eye rather than by minimization of an objective function.  If this is true, then this is also not acceptable.

No mathematical errors are present in my plots; I do not do sloppy work, and the math is not particularly challenging.  Use of the Donovan et al. model results in significant errors in corrected ratios at relatively low count rate, as I’ve shown in my “delta” plots for Si.  Further, keep in mind that counting error is only applicable to the data and not to the model.  As I’ve pointed out, arbitrary adjustment of the correction constant will generally either lead to over-correction or under-correction of the ratio.  In the latter case, which relates to my measurement set 1 for Si, for instance, the Donovan et al. model predicts a decrease in interaction between electronic pulses as count rate increases (as indicated by a negative slope).  Obviously, this is a physical impossibility and indicates that a flaw is present in the model.  Arbitrary adjustment of the constant can produce minima, maxima, and points of inflection in the calculated ratio as a function of corrected count rate, and none of these features is physically realistic.  A one-parameter model can only produce fully physically realistic behavior if the correction constant is equal to that determined by linear regression of data in the low-count-rate region.  (This is simply a restatement of the limit that I mentioned in the paragraph above.)  SEM Geologist has suggested that the Donovan et al. model is a “step in the right direction,” but I respectfully disagree.  This claim is impossible to evaluate considering the model in its present state.  Perhaps a two-parameter model would be better suited to the problem -- I can only guess.

As I’ve described in words and illustrated in a plot, the JEOL pulse amplitude distribution is exceedingly difficult to work with at high measured count rates (say, greater than 100 or 150 kcps).  There is no way to ensure that the distribution will not be truncated by the baseline, and thus some X-ray counts will simply be lost for good.  The situation is complicated by the fact that electronic gain can only be set at one of four values:  16, 32, 64, or 128.  Fine adjustment must be made by varying the anode bias; this situation is not ideal, as increasing the bias exacerbates shifts in the distribution as count rate changes.  Further, operating at high count rates shortens the useful lifespan of a sealed Xe counter, and these are not cheap to replace (~5+ kiloloonies apiece).

At some point – when I get a chance – I am going to bring my new digital oscilloscope into the lab and do the same kind of testing that SEM Geologist has done.  I need to talk to an engineer first, though, as the JEOL schematics provided to me are not particularly easy to work with (certainly by design).  I thank SEM Geologist for leading the way on this.

Finally, notice that, in my criticism above, I have not leveled any personal insults.  I have not told anyone that he/she “deserves to be ridiculed” or “should be ridiculed” or whatever it was that John wrote in bold type and then deleted.  I am not an idiot, nor was I born yesterday, and I am being very pragmatic in my approach to the problem of correction of X-ray count rates.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 20, 2022, 09:11:25 AM
Use of the Donovan et al. model results in significant errors in corrected ratios at relatively low count rate

There is no shame in being wrong.  The shame is in stubbornly refusing to admit when one is wrong.

If you don't want to be laughed at, then agree that all four dead time correction equations produce the same results at low count rates as demonstrated in this graph:

(https://probesoftware.com/smf/gallery/395_17_08_22_7_08_24.png)

Within a fraction of a photon count!

User specified dead time constant in usec is: 1.5
Column headings indicates number of Taylor expansion series terms (nt=log)
obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre   
       0          0          0          0          0          0          0          0          0   
    1000   1001.502     0.9985   1001.503   0.9984989   1001.503   0.9984989   1001.503   0.9984989   
    2000   2006.018      0.997   2006.027   0.9969955   2006.027   0.9969955   2006.027   0.9969955   
    3000   3013.561     0.9955   3013.592   0.9954898   3013.592   0.9954898   3013.592   0.9954898   
    4000   4024.145      0.994   4024.218   0.993982    4024.218   0.993982   4024.218    0.993982   
    5000   5037.783     0.9925   5037.926   0.9924719   5037.927   0.9924718   5037.927   0.9924718   
    6000    6054.49    0.99100   6054.738   0.9909595   6054.739   0.9909593   6054.739   0.9909593   
    7000    7074.28     0.9895   7074.674   0.9894449   7074.677   0.9894445   7074.677   0.9894445   
    8000   8097.166      0.988   8097.756   0.987928   8097.761   0.9879274    8097.761   0.9879274   
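
For anyone who wants to check these numbers, here is a minimal Python sketch (not Probe Software's code) that reproduces the table above; judging from the tabulated values, the n-term denominators are partial sums of (N'τ)^i/i and the logarithmic form is the closed-form sum, N = N'/(1 + ln(1 − N'τ)):

```python
# Minimal sketch reproducing the dead time corrections tabulated above.
# tau is the dead time in seconds; n_obs is the observed count rate in cps.
import math

def predicted(n_obs, tau, n_terms=None):
    """Predicted true count rate from the observed rate.
    n_terms = 1 -> traditional expression, 2 or 6 -> Taylor-series expansions,
    None -> logarithmic (closed form of the full series)."""
    x = n_obs * tau
    if n_terms is None:
        denom = 1.0 + math.log(1.0 - x)
    else:
        denom = 1.0 - sum(x**i / i for i in range(1, n_terms + 1))
    return n_obs / denom

tau = 1.5e-6  # 1.5 usec, as specified above
print("obsv cps     1t pred  1t obs/pre     2t pred  2t obs/pre     6t pred  6t obs/pre     nt pred  nt obs/pre")
for n_obs in range(1000, 9000, 1000):
    cols = []
    for k in (1, 2, 6, None):
        pred = predicted(n_obs, tau, k)
        cols.append(f"{pred:10.3f}  {n_obs / pred:9.7f}")
    print(f"{n_obs:8d}  " + "  ".join(cols))
```

At 8000 cps this gives 8097.166 for the traditional expression and 8097.761 for the logarithmic one, matching the last row of the table above.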
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 20, 2022, 03:57:22 PM
There is no shame in being wrong.  The shame is in stubbornly refusing to admit when one is wrong.

If you don't want to be laughed at, then agree that all four dead time correction equations produce the same results at low count rates as demonstrated in this graph:

Please reread my previous post and please look at my plots.  I am not going to restate my argument.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 20, 2022, 07:05:39 PM
There is no shame in being wrong.  The shame is in stubbornly refusing to admit when one is wrong.

If you don't want to be laughed at, then agree that all four dead time correction equations produce the same results at low count rates as demonstrated in this graph:

Please reread my previous post and please look at my plots.  I am not going to restate my argument.

I am not asking you to restate your argument.

I am asking you to answer the question:  do all four dead time correction equations produce the same results at low count rates as demonstrated in this graph?

(https://probesoftware.com/smf/gallery/395_17_08_22_7_08_24.png)

Within a fraction of a photon count!

User specified dead time constant in usec is: 1.5
Column headings indicates number of Taylor expansion series terms (nt=log)
obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre   
       0          0          0          0          0          0          0          0          0   
    1000   1001.502     0.9985   1001.503   0.9984989   1001.503   0.9984989   1001.503   0.9984989   
    2000   2006.018      0.997   2006.027   0.9969955   2006.027   0.9969955   2006.027   0.9969955   
    3000   3013.561     0.9955   3013.592   0.9954898   3013.592   0.9954898   3013.592   0.9954898   
    4000   4024.145      0.994   4024.218   0.993982    4024.218   0.993982   4024.218    0.993982   
    5000   5037.783     0.9925   5037.926   0.9924719   5037.927   0.9924718   5037.927   0.9924718   
    6000    6054.49    0.99100   6054.738   0.9909595   6054.739   0.9909593   6054.739   0.9909593   
    7000    7074.28     0.9895   7074.674   0.9894449   7074.677   0.9894445   7074.677   0.9894445   
    8000   8097.166      0.988   8097.756   0.987928   8097.761   0.9879274    8097.761   0.9879274   
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 20, 2022, 10:21:32 PM
There is no shame in being wrong.  The shame is in stubbornly refusing to admit when one is wrong.

If you don't want to be laughed at, then agree that all four dead time correction equations produce the same results at low count rates as demonstrated in this graph:

Please reread my previous post and please look at my plots.  I am not going to restate my argument.

I am not asking you to restate your argument.

I am asking you to answer the question:  do all four dead time correction equations produce the same results at low count rates as demonstrated in this graph?

(https://probesoftware.com/smf/gallery/395_17_08_22_7_08_24.png)

Within a fraction of a photon count!

User specified dead time constant in usec is: 1.5
Column headings indicates number of Taylor expansion series terms (nt=log)
obsv cps    1t pred   1t obs/pre    2t pred   2t obs/pre    6t pred   6t obs/pre    nt pred   nt obs/pre   
       0          0          0          0          0          0          0          0          0   
    1000   1001.502     0.9985   1001.503   0.9984989   1001.503   0.9984989   1001.503   0.9984989   
    2000   2006.018      0.997   2006.027   0.9969955   2006.027   0.9969955   2006.027   0.9969955   
    3000   3013.561     0.9955   3013.592   0.9954898   3013.592   0.9954898   3013.592   0.9954898   
    4000   4024.145      0.994   4024.218   0.993982    4024.218   0.993982   4024.218    0.993982   
    5000   5037.783     0.9925   5037.926   0.9924719   5037.927   0.9924718   5037.927   0.9924718   
    6000    6054.49    0.99100   6054.738   0.9909595   6054.739   0.9909593   6054.739   0.9909593   
    7000    7074.28     0.9895   7074.674   0.9894449   7074.677   0.9894445   7074.677   0.9894445   
    8000   8097.166      0.988   8097.756   0.987928   8097.761   0.9879274    8097.761   0.9879274   


In fact, they do produce similar corrections at low count rates, though they only give exactly the same result when the count rate is zero.  But this is not necessarily the source of the problems with your modeling.  Let me respond with another plot with more appropriate scaling; it pertains to my correction constant determined by linear regression for channel 4/TAPJ/Si for the region in which the linear model corrects the data well.  If my point is not clear, then please reread my lengthy post above, especially the long paragraph:

(https://probesoftware.com/smf/gallery/381_20_08_22_10_36_30.png)


Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 21, 2022, 09:25:21 AM
I am not asking you to restate your argument.

I am asking you to answer the question:  do all four dead time correction equations produce the same results at low count rates as demonstrated in this graph?


In fact, they do produce similar corrections at low count rates, though they only give exactly the same result when the count rate is zero. 

Thank you!

In fact at 1.5 usec, they (the traditional vs. logarithmic expressions) produce results that agree to within about 1 part in 1,000,000 at 1000 cps, roughly 1 part in 10,000 at 10K cps and about 1 part in 2,000 at 20K cps.  So much for your claims that the traditional expression "substantially outperforms the logarithmic expression at low count rates"!

And do you know why they start diverging at a few tens of thousands of cps? Because the traditional expression does not handle multiple photon coincidence; Monte Carlo modeling confirms that the divergence is due to these relatively infrequent multiple photon events, even at these relatively low count rates.  And, as you have already admitted, at higher count rates the traditional expression fails even worse.

But this is not necessarily the source of the problems with your modeling.  Let me respond with another plot with more appropriate scaling; it pertains to my correction constant determined by linear regression for channel 4/TAPJ/Si for the region in which the linear model corrects the data well.  If my point is not clear, then please reread my lengthy post above, especially the long paragraph:

Since you don't show us the actual data (why is that?), I can't tell if you are being disingenuous or just honestly not understanding what you are doing.  You apparently want us to accept your claim that the data are "corrected well".  So let's just accept that for now because I'm going to assume you are arguing in good faith.

The real problem is that you show us that both expressions at 1.07 usec yield very similar slopes. Of course they would, wouldn't they, as you finally admitted above.  But then you show us another slope (blue line) using the logarithmic expression at 1.19 usec (though, strangely enough, you don't also show us the traditional expression at 1.19 usec; why is that?).

In fact it's even stranger that you decided to show us the logarithmic expression using a *higher* dead time constant, because if you thought about this for even a minute you would realize that when correcting for both single and multiple photon coincidence (using the logarithmic expression), the dead time constant must be (very) slightly decreased, not increased (compared to the traditional expression)!    >:(   

This is because the traditional expression does not account for multiple photon coincidence, and therefore, when regressing intensity data to a straight line, it becomes biased towards higher dead time values than it should be once count rates above 20 to 30K cps or so are included.  This small fact is what you have been overlooking this whole time.

Please plot the traditional expression at 1.07 usec, and the logarithmic expression at 1.06 usec, you know, 0.01 usec different, and let us know what you see!  Awww nevermind, here it is for you:

(https://probesoftware.com/smf/gallery/395_21_08_22_10_06_39.png)

Note the nearly identical response until we get to 30K cps or so.  So it's strange that you chose not only to change the dead time constant for the logarithmic expression by a huge amount, but also in exactly the *wrong direction*...  so is this an honest mistake or what?   Sorry, but I really have to ask.
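
Here is a minimal sketch of that comparison for anyone who wants to run it themselves (plain Python, not output from any Probe Software code), with the traditional expression at 1.07 usec and the logarithmic expression at 1.06 usec:

```python
# Compare the traditional expression at 1.07 us with the logarithmic
# expression at 1.06 us over a range of observed count rates (cps).
import math

def traditional(n_obs, tau):
    return n_obs / (1.0 - n_obs * tau)

def logarithmic(n_obs, tau):
    return n_obs / (1.0 + math.log(1.0 - n_obs * tau))

for n_obs in (1_000, 10_000, 30_000, 60_000, 100_000):
    t = traditional(n_obs, 1.07e-6)
    g = logarithmic(n_obs, 1.06e-6)
    print(f"{n_obs:>7} cps   traditional = {t:10.1f}   log = {g:10.1f}   log/traditional = {g / t:.4f}")
```

Below about 30K cps the two differ by only a few hundredths of a percent; the divergence only becomes noticeable well above that.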

I still think actual EPMA data show these differences quite well (and especially well using the constant k-ratio method, as I will be writing about next in the constant k-ratio topic). So I will leave you with this plot, which clearly shows that both expressions yield statistically identical results at 10 nA (15K cps on TiO2), but that the traditional method visibly starts to lose accuracy at around 30 nA (45K cps), and the wheels are already coming off around 40 nA (60K cps):

(https://probesoftware.com/smf/gallery/395_15_08_22_8_52_10.png)

Again, please note that the dead time constant must be *reduced*, not increased, when correcting for multiple photon coincidence events, exactly as one would expect. Maybe you need to answer this question next:

Do you agree that one (ideally) should obtain the same k-ratio over a range of beam currents, if the dead time correction is being properly applied?
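
To make the question concrete, here is a minimal sketch of the constant k-ratio bookkeeping; the currents and count rates below are made-up placeholders (chosen only so the corrected k-ratio comes out roughly constant), not measurements:

```python
# Constant k-ratio check: correct observed count rates on a standard and an
# unknown with the same dead time expression, then see whether the k-ratio
# stays flat as the beam current (and therefore the count rate) rises.
import math

def logarithmic(n_obs, tau):
    return n_obs / (1.0 + math.log(1.0 - n_obs * tau))

tau = 1.28e-6  # the slightly reduced dead time quoted above for the log expression

# (beam current in nA, observed cps on the standard, observed cps on the unknown)
# -- hypothetical numbers for illustration only
measurements = [
    (10, 15_000, 7_270),
    (30, 44_000, 21_800),
    (60, 85_000, 43_500),
]

for current_na, std_obs, unk_obs in measurements:
    k = logarithmic(unk_obs, tau) / logarithmic(std_obs, tau)
    print(f"{current_na:>3} nA   k-ratio = {k:.4f}")
```

If the dead time (or the expression) is wrong, the printed k-ratio drifts with beam current instead of staying flat, which is exactly what the plots above are testing.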
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 21, 2022, 01:10:58 PM
I did not show the data on my last plot because they are constrained to lie on a given curve.  Please think about this.

Nowhere have you proven that the correction constant must be lowered to make your model fit the data better.  As I’ve already pointed out repeatedly, the linear fit at low count rate fixes the value of that constant.

Yes, of course, the k-ratio should be constant for given effective takeoff angle.

Below once again is a plot of my data for Si measurement set 2.  I’ll let you imagine how lowering the value of the correction constant will affect the fit, as I’ve already plotted your function for two different values.  The open black circles represent the Willis model.

At this point, we are just going around in circles.  If you post a response that includes disrespectful language or that can be addressed easily by what I have already posted (as above), then I will either not respond to it or will just ask you to reread what I have already written.

(https://probesoftware.com/smf/gallery/381_21_08_22_1_07_11.png)
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 21, 2022, 01:38:26 PM
Nowhere have you proven that the correction constant must be lowered to make your model fit the data better.  As I’ve already pointed out repeatedly, the linear fit at low count rate fixes the value of that constant. 

What?  No, you've assumed that. You are in fact wrong, as I demonstrated with the data!  Here it is again:

(https://probesoftware.com/smf/gallery/395_15_08_22_8_52_10.png)

The data corrected using the traditional expression are wildly wrong (admit it). The data corrected using the logarithmic expression are very slightly over-corrected (right?), due to its ability to handle multiple photon coincidence. So we slightly reduce the value to 1.28 usec and we now have constant k-ratios over a wide range of beam currents; the correction works at low, moderate and high beam currents. 

Voila!

You are stubbornly unable to realize that even at low count rates there are still non-zero probabilities of multiple photon coincidence.  So the linear model is (slightly) biased towards higher dead time constants because of these (non-linear) events.  However, the logarithmic expression properly deals with these probabilities, so it determines a slightly lower dead time.  It's good news that our detectors are slightly faster than we thought, hey?

Yes, of course, the k-ratio should be constant for given effective takeoff angle.

Well thank goodness for that. 

Now please explain why we should not adjust the dead time constant (slightly) to compensate for the fact that the linear expression does not account for multiple photon coincidence.  We make this adjustment because the traditional model is physically unrealistic on exactly this point, and because, as you just stated, the "k-ratio should be constant for given effective takeoff angle".

Below once again is a plot of my data for Si measurement set 2.  I’ll let you imagine how lowering the value of the correction constant will affect the fit, as I’ve already plotted your function for two different values.  The open black circles represent the Willis model.

OK, I see the problem in your plot. You're plotting the logarithmic expression using the same dead time as the traditional expression. Of course it will slightly over-correct the data at moderate count rates (before it gets much better at high count rates).  Just as I showed in the k-ratio plot above.

You've simply assumed that the dead time constant obtained from a linear regression using the traditional expression has to be the correct value.  So you're assuming the point you're trying to prove.   :o

You really don't see that?   Really?
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 21, 2022, 02:20:15 PM
OK, I see the problem in your plot. You're plotting the logarithmic expression using the same dead time as the traditional expression. Of course it will slightly over-correct the data at moderate count rates (before it gets much better at high count rates).  Just as I showed in the k-ratio plot above.

You've simply assumed that the dead time constant obtained from a linear regression using the traditional expression has to be the correct value.  So you're assuming the point you're trying to prove.   :o

You really don't see that?   Really?

As I already stated, I've plotted your correction using two different values for the correction constant.  Please look more closely at my plot.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 21, 2022, 03:23:20 PM
As I already stated, I've plotted your correction using two different values for the correction constant.  Please look more closely at my plot.

Yes, at 1.07 usec and 1.19 usec.  I can think of a few more numbers in between 1.07 and 1.19 usec. Can anyone else?    ;D

I also note in your plot that the new expressions (except for the deliberately over-corrected values at 1.19 usec) provide equal accuracy at low count rates and *better* accuracy at higher count rates.  That is progress, which you deliberately ignore.

What dead time constant are you using for the traditional expression?  You don't show in your plot.  Why don't you plot the data up again, but this time with the logarithmic expression at 1.08 or 1.1 or 1.12 usec, for example?  Or better yet, plot the data with the traditional expression at whatever dead time you determined using that expression, and then plot the logarithmic expression at the same dead time constant and then slowly decrease the dead time by 0.02 usec at a time and plot all those up?  I bet you'll learn something.   :)

Let me ask you this: is there any possibility of multiple photon coincidence events at these 10K or 20K count rates? 

If you answer no, then you are making unphysical assumptions about the random nature of photon emission.

If you answer yes, then only by using an expression that includes these probabilities can the correct dead time constant be determined, simply because the traditional linear expression is biased against multiple photon events and that skews your dead time determinations too high.
Title: Re: An alternate means of calculating detector dead time
Post by: Brian Joy on August 21, 2022, 03:44:24 PM
As I already stated, I've plotted your correction using two different values for the correction constant.  Please look more closely at my plot.

Yes, at 1.07 usec and 1.19 usec.  I can think of a few more numbers in between 1.07 and 1.19 usec. Can anyone else?    ;D

I also note in your plot that the new expressions (except for the deliberately over-corrected values at 1.19 usec) provide equal accuracy at low count rates and *better* accuracy at higher count rates.  That is progress, which you deliberately ignore.

What dead time constant are you using for the traditional expression?  You don't show in your plot.  Why don't you plot the data up again, but this time with the logarithmic expression at 1.08 or 1.1 or 1.12 usec, for example?  Or better yet, plot the data with the traditional expression at whatever dead time you determined using that expression, and then plot the logarithmic expression at the same dead time constant and then slowly decrease the dead time by 0.02 usec at a time and plot all those up?  I bet you'll learn something.   :)

Let me ask you this: is there any possibility of multiple photon coincidence events at these 10K or 20K count rates? 

If you answer no, then you are making unphysical assumptions about the random nature of photon emission.

If you answer yes, then only by using an expression that includes these probabilities can the correct dead time constant be determined, simply because the traditional linear expression is biased against multiple photon events and that skews your dead time determinations too high.

I've shown the results of your model for different correction constants in my "delta" plots for Si.  Please reread my posts and look at my plots.  I'm done conversing with you on this subject, as you aren't mentioning anything that I haven't already addressed.  Further, your tone is demeaning and patronizing.  Feel free to have the last word if you'd like, though.

On a tangential note, I'd like to point out that Ti metal is notorious for rapid development of an oxide film.  It is not a good choice for making k-ratio measurements/calculations.
Title: Re: An alternate means of calculating detector dead time
Post by: Probeman on August 21, 2022, 06:55:36 PM
Let me ask you this: is there any possibility of multiple photon coincidence events at these 10K or 20K count rates? 

If you answer no, then you are making unphysical assumptions about the random nature of photon emission.

If you answer yes, then only by using an expression that includes these probabilities can the correct dead time constant be determined, simply because the traditional linear expression is biased against multiple photon events and that skews your dead time determinations too high.

I've shown the results of your model for different correction constants in my "delta" plots for Si.  Please reread my posts and look at my plots.  I'm done conversing with you on this subject, as you aren't mentioning anything that I haven't already addressed.  Further, your tone is demeaning and patronizing.  Feel free to have the last word if you'd like, though.

I've done that and explained what you're doing wrong, but I can see that you're determined to die on that hill. So be it.

On a tangential note, I'd like to point out that Ti metal is notorious for rapid development of an oxide film.  It is not a good choice for making k-ratio measurements.

This only goes to show how you just don't get it at all.    ::)

The cool thing about the "constant k-ratio" method is that it doesn't matter what the k-ratio is, only that it is constant as a function of beam current!   

We could just as well use two unknown compositions, so long as they contain significantly different concentrations of the element, and are relatively beam stable and homogeneous.