Author Topic: Dead time corrected data  (Read 3216 times)

SXFiveFEJohn

  • Post Doc
  • ***
  • Posts: 12
Dead time corrected data
« on: October 27, 2016, 02:35:22 PM »
My students love your software. They have a question: in the output in the log window of "raw data", are those data dead time corrected? I said yes, you always correct for dead time. But then, what is "raw"? On Cameca instruments there is the electronically enforced dead time, but then PfE supplements that with a more accurate dead time (e.g., Cameca would have a set 3 microsecond DT, but once you work through the Paul Carpenter spreadsheet, you find a different value, e.g., 3.27 microseconds). So is ANY data in PfE NOT dead time corrected?

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3276
  • Other duties as assigned...
    • Probe Software
Re: Dead time corrected data
« Reply #1 on: October 27, 2016, 02:52:58 PM »
Quote from: SXFiveFEJohn on October 27, 2016, 02:35:22 PM
My students love your software. They have a question: in the output in the log window of "raw data", are those data dead time corrected? I said yes, you always correct for dead time. But then, what is "raw"? On Cameca instruments there is the electronically enforced dead time, but then PfE supplements that with a more accurate dead time (e.g., Cameca would have a set 3 microsecond DT, but once you work through the Paul Carpenter spreadsheet, you find a different value, e.g., 3.27 microseconds). So is ANY data in PfE NOT dead time corrected?

Hi John,
Thanks.

Good question.  The data is "raw" only in the sense that it is not matrix corrected.  If you want to see the intensities *not* corrected for dead time, just turn off the "correct for dead time" option in the Analysis Options dialog from the Analytical menu.
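For reference, the dead time correction in question is the standard non-paralyzable expression, which converts the observed count rate into the true count rate as cps_true = cps_obs / (1 − cps_obs·τ). A minimal sketch in Python (the function name here is illustrative, not a PFE routine):

```python
def deadtime_correct(cps_observed, tau_seconds):
    """Classic non-paralyzable dead time correction:
    cps_true = cps_obs / (1 - cps_obs * tau)."""
    return cps_observed / (1.0 - cps_observed * tau_seconds)

# At 100,000 cps with a 3 microsecond dead time, 30% of the
# counting time is dead, so the corrected rate is much higher:
print(round(deadtime_correct(100000.0, 3.0e-6)))  # 142857
```

At typical WDS count rates of tens of kcps the correction amounts to several percent or more, which is why the calibrated dead time value matters.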

Here is a screen shot showing how one can turn on or turn off every correction PFE makes to the intensities:

http://probesoftware.com/smf/index.php?topic=33.msg2155#msg2155

john
« Last Edit: October 27, 2016, 03:26:40 PM by John Donovan »
John J. Donovan, Pres. 
(541) 343-3400

"Not Absolutely Certain, Yet Reliable"

SXFiveFEJohn

  • Post Doc
  • ***
  • Posts: 12
Re: Dead time corrected data
« Reply #2 on: October 28, 2016, 06:43:43 AM »
Thanks, John, for the response. We have a follow-up question: if the "verbose" option under Output is checked, we get a _different_ value labeled 'COUNTS' and we do not understand what it represents. If we sum up the raw counts, this 'COUNTS' number is higher than that sum by about 8%. (Secondarily, we see SQRT1, SQRT2 and SQRT3. SQRT1 might be the square root of 'COUNTS', but what are the others?) Thanks.
« Last Edit: October 28, 2016, 06:49:22 AM by SXFiveFEJohn »

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3276
  • Other duties as assigned...
    • Probe Software
Re: Dead time corrected data
« Reply #3 on: October 28, 2016, 08:49:23 AM »
Hi John,
Wow, you guys are really "drilling down"!  Good for you!

The output you describe is only used to calculate the variances for the on-peak and off-peak intensities. As you know, in order to calculate the variances, the intensities must first be converted back to raw photon counts. Perhaps the best way to explain the output you are seeing is to show the code for those calculations.

Code:
Sub TypeCalculateOneSigma(mode As Integer, ii As Integer, jj As Integer, tAverage As TypeAverage, sample() As TypeSample)
' Adjust square roots because the counts are normalized to cps
' mode = 1 calculate 1 sigmas for on-peak counts
' mode = 2 calculate 1 sigmas for hi-peak counts
' mode = 3 calculate 1 sigmas for lo-peak counts

ierror = False
On Error GoTo TypeCalculateOneSigmaError

Dim i As Integer

Dim onaverage As TypeAverage
Dim hiaverage As TypeAverage
Dim loaverage As TypeAverage

Dim bmaverage As TypeAverage
Dim ctaverage As TypeAverage

' Calculate average count times
If mode% = 1 Then
Call MathArrayAverage(onaverage, sample(1).OnTimeData!(), sample(1).Datarows%, sample(1).LastElm%, sample())
If ierror Then Exit Sub
End If
If mode% = 2 Then
Call MathArrayAverage(hiaverage, sample(1).HiTimeData!(), sample(1).Datarows%, sample(1).LastElm%, sample())
If ierror Then Exit Sub
End If
If mode% = 3 Then
Call MathArrayAverage(loaverage, sample(1).LoTimeData!(), sample(1).Datarows%, sample(1).LastElm%, sample())
If ierror Then Exit Sub
End If

' Calculate average beam currents for each element
If Not sample(1).CombinedConditionsFlag Then
Call MathAverage(bmaverage, sample(1).OnBeamCounts!(), sample(1).Datarows%, sample())
If ierror Then Exit Sub
Else
Call MathArrayAverage(bmaverage, sample(1).OnBeamCountsArray!(), sample(1).Datarows%, sample(1).LastElm%, sample())
If ierror Then Exit Sub
End If

' Calculate average count data
If mode% = 1 Then
Call MathArrayAverage(ctaverage, sample(1).OnPeakCounts!(), sample(1).Datarows%, sample(1).LastElm%, sample())
If ierror Then Exit Sub
End If
If mode% = 2 Then
Call MathArrayAverage(ctaverage, sample(1).HiPeakCounts!(), sample(1).Datarows%, sample(1).LastElm%, sample())
If ierror Then Exit Sub
End If
If mode% = 3 Then
Call MathArrayAverage(ctaverage, sample(1).LoPeakCounts!(), sample(1).Datarows%, sample(1).LastElm%, sample())
If ierror Then Exit Sub
End If

' Denormalize counts for beam drift and deadtime
For i% = ii% To jj%
If UseBeamDriftCorrectionFlag Then
If Not sample(1).CombinedConditionsFlag Then
Call DataCorrectDataBeamDrift2(ctaverage.averags!(i%), bmaverage.averags!(1))      ' de-normalize for beam
If ierror Then Exit Sub
Else
Call DataCorrectDataBeamDrift2(ctaverage.averags!(i%), bmaverage.averags!(i%))     ' de-normalize for beam
If ierror Then Exit Sub
End If
End If

If UseDeadtimeCorrectionFlag And sample(1).CrystalNames$(i%) <> EDS_CRYSTAL$ Then
Call DataCorrectDataDeadTime2(ctaverage.averags!(i%), sample(1).DeadTimes!(i%))  ' de-normalize for deadtime
If ierror Then Exit Sub
End If
Next i%

' Denormalize average counts for average count time
For i% = ii% To jj%
If mode% = 1 Then ctaverage.averags!(i%) = ctaverage.averags!(i%) * onaverage.averags!(i%)
If mode% = 2 Then ctaverage.averags!(i%) = ctaverage.averags!(i%) * hiaverage.averags!(i%)
If mode% = 3 Then ctaverage.averags!(i%) = ctaverage.averags!(i%) * loaverage.averags!(i%)
Next i%

' Type out debug data
If VerboseMode Then
msg$ = vbCrLf & "ELEM: "
For i% = ii% To jj%
msg$ = msg$ & Format$(sample(1).Elsyms$(i%) & " " & sample(1).Xrsyms$(i%), a80$)
Next i%
Call IOWriteLog(msg$)

If mode% = 1 Then msg$ = "AVGON:"
If mode% = 2 Then msg$ = "AVGHI:"
If mode% = 3 Then msg$ = "AVGLO:"
For i% = ii% To jj%
If mode% = 1 Then msg$ = msg$ & MiscAutoFormat$(onaverage.averags!(i%))
If mode% = 2 Then msg$ = msg$ & MiscAutoFormat$(hiaverage.averags!(i%))
If mode% = 3 Then msg$ = msg$ & MiscAutoFormat$(loaverage.averags!(i%))
Next i%
Call IOWriteLog(msg$)

' Type average beam currents
msg$ = "AVGBM:"
For i% = ii% To jj%
If Not sample(1).CombinedConditionsFlag Then
msg$ = msg$ & MiscAutoFormat$(bmaverage.averags!(1))
Else
msg$ = msg$ & MiscAutoFormat$(bmaverage.averags!(i%))
End If
Next i%
Call IOWriteLog(msg$)

' Type current raw counts
msg$ = "COUNTS"
For i% = ii% To jj%
msg$ = msg$ & MiscAutoFormat$(ctaverage.averags!(i%))
Next i%
Call IOWriteLog(msg$)
End If

' Now calculate the square root on the actual raw data
For i% = ii% To jj%
If ctaverage.averags!(i%) > 0# Then
tAverage.Sqroots!(i%) = Sqr(ctaverage.averags!(i%))
Else
tAverage.Sqroots!(i%) = 0#
End If
Next i%

' Type current square roots
If VerboseMode Then
msg$ = "SQRT1:"
For i% = ii% To jj%
msg$ = msg$ & MiscAutoFormat$(tAverage.Sqroots!(i%))
Next i%
Call IOWriteLog(msg$)
End If

' Now normalize to count time
For i% = ii% To jj%
If mode% = 1 And onaverage.averags!(i%) <> 0# Then
tAverage.Sqroots!(i%) = tAverage.Sqroots!(i%) / onaverage.averags!(i%)
End If
If mode% = 2 And hiaverage.averags!(i%) <> 0# Then
tAverage.Sqroots!(i%) = tAverage.Sqroots!(i%) / hiaverage.averags!(i%)
End If
If mode% = 3 And loaverage.averags!(i%) <> 0# Then
tAverage.Sqroots!(i%) = tAverage.Sqroots!(i%) / loaverage.averags!(i%)
End If
Next i%

' Type current square roots
If VerboseMode Then
msg$ = "SQRT2:"
For i% = ii% To jj%
msg$ = msg$ & MiscAutoFormat$(tAverage.Sqroots!(i%))
Next i%
Call IOWriteLog(msg$)
End If

' Now normalize to deadtime and beam
For i% = ii% To jj%
If UseDeadtimeCorrectionFlag And sample(1).CrystalNames$(i%) <> EDS_CRYSTAL$ Then
Call DataCorrectDataDeadTime(tAverage.Sqroots!(i%), sample(1).DeadTimes!(i%))  ' normalize for deadtime
If ierror Then Exit Sub
End If

If UseBeamDriftCorrectionFlag Then
If Not sample(1).CombinedConditionsFlag Then
Call DataCorrectDataBeamDrift(tAverage.Sqroots!(i%), bmaverage.averags!(1))  ' normalize for beam
If ierror Then Exit Sub
Else
Call DataCorrectDataBeamDrift(tAverage.Sqroots!(i%), bmaverage.averags!(i%))     ' normalize for beam
If ierror Then Exit Sub
End If
End If
Next i%

' Type current square roots
If VerboseMode Then
msg$ = "SQRT3:"
For i% = ii% To jj%
msg$ = msg$ & MiscAutoFormat$(tAverage.Sqroots!(i%))
Next i%
Call IOWriteLog(msg$)
Call IOWriteLog(vbNullString)
End If

' (the error handler and End Sub below were cut off in the original post; restored here in PFE's usual style)
Exit Sub

' Errors
TypeCalculateOneSigmaError:
MsgBox Error$, vbOKOnly + vbCritical, "TypeCalculateOneSigma"
ierror = True
Exit Sub

End Sub
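In outline, the verbose output above corresponds to the following stages. This is a simplified Python paraphrase (illustrative names, standard non-paralyzable dead time expression assumed, beam drift de-normalization omitted), not the actual PFE code:

```python
import math

def counts_and_sigmas(cps_corrected, count_time, tau):
    """Recover raw photon counts from dead time corrected cps,
    then compute the one sigma values printed in verbose mode."""
    # De-normalize: undo the dead time correction, then multiply
    # by the count time to get raw photon counts (the 'COUNTS' line)
    cps_observed = cps_corrected / (1.0 + cps_corrected * tau)
    counts = cps_observed * count_time
    # SQRT1: one sigma in raw counts (sigma = sqrt(N) only holds here)
    sqrt1 = math.sqrt(counts)
    # SQRT2: normalized back to count time (sigma in raw cps)
    sqrt2 = sqrt1 / count_time
    # SQRT3: re-normalized for dead time (beam drift omitted here)
    sqrt3 = sqrt2 / (1.0 - sqrt2 * tau)
    return counts, sqrt1, sqrt2, sqrt3
```

The point is that 'COUNTS' is the intensity expressed as actual detected photons, which is the only form in which sigma = sqrt(N) is valid.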

If you see anything amiss please let me know.
john

SXFiveFEJohn

  • Post Doc
  • ***
  • Posts: 12
Re: Dead time corrected data
« Reply #4 on: October 28, 2016, 11:13:06 AM »
John:
We have another question about how you apply the dead time correction to the Cameca raw counts that you read from what Cameca supplies. Cameca, as you know, enforces a hard dead time electronically (e.g., 3 microseconds), but in reality the dead time is not an integer but a real number, so the true dead time might be 3.33 microseconds. In PeakSight, my understanding is that Cameca applies this slight difference (e.g., 0.33) as an addition to (or, if the true value is less than the integer, a subtraction from) the 'raw', 'DT enforced' counts. So my question is: how does PfE deal with the Cameca enforced dead time? Does PfE apply just the difference between the two values?
Thanks.

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3276
  • Other duties as assigned...
    • Probe Software
Re: Dead time corrected data
« Reply #5 on: October 28, 2016, 11:22:46 AM »
Quote from: SXFiveFEJohn on October 28, 2016, 11:13:06 AM
We have another question about how you apply the dead time correction to the Cameca raw counts that you read from what Cameca supplies. Cameca, as you know, enforces a hard dead time electronically (e.g., 3 microseconds), but in reality the dead time is not an integer but a real number, so the true dead time might be 3.33 microseconds. In PeakSight, my understanding is that Cameca applies this slight difference (e.g., 0.33) as an addition to (or, if the true value is less than the integer, a subtraction from) the 'raw', 'DT enforced' counts. So my question is: how does PfE deal with the Cameca enforced dead time? Does PfE apply just the difference between the two values?

Hi John,
I think you asked me about this about a year ago:

http://probesoftware.com/smf/index.php?topic=413.msg2229#msg2229

Yes, the Cameca hardware only accepts an integer for the "enforced" dead time.  As you mentioned, the latest PeakSight software accepts an additional floating point value to account for the fact that the "enforced" integer dead time isn't exactly that integer once it is applied.  In fact it can be quite different from the integer value.

PFE is designed to send the "enforced" integer deadtime value during the data acquisition, but for re-processing it uses a completely separate floating point deadtime value for the software deadtime correction.

Basically one chooses an "enforced" dead time for the hardware based acquisition, and then, after calibrating the actual observed dead time, that observed value is utilized in the software correction (see the SCALERS.DAT file).
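For a concrete sense of why the calibrated value matters, here is an illustrative sketch (deadtime_correct is our helper using the standard non-paralyzable expression, not a PFE routine; 3.27 µs is the example value from earlier in this thread):

```python
def deadtime_correct(cps_observed, tau_seconds):
    # standard non-paralyzable dead time correction
    return cps_observed / (1.0 - cps_observed * tau_seconds)

enforced_tau = 3.00e-6    # integer value sent to the hardware
calibrated_tau = 3.27e-6  # measured value stored in SCALERS.DAT

cps_obs = 50000.0
with_enforced = deadtime_correct(cps_obs, enforced_tau)
with_calibrated = deadtime_correct(cps_obs, calibrated_tau)
# Using the enforced integer instead of the calibrated value
# leaves a systematic error of over 1% at 50 kcps:
print(100.0 * (with_calibrated - with_enforced) / with_calibrated)
```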

This is described in some detail here:

http://probesoftware.com/smf/index.php?topic=33.msg2153#msg2153

A nice summary is here:

http://probesoftware.com/smf/index.php?topic=33.msg103#msg103
« Last Edit: October 28, 2016, 12:26:09 PM by John Donovan »

John Donovan

  • Administrator
  • Emeritus
  • *****
  • Posts: 3276
  • Other duties as assigned...
    • Probe Software
Re: Dead time corrected data
« Reply #6 on: July 27, 2022, 09:06:11 AM »
As previously mentioned in this topic:

https://probesoftware.com/smf/index.php?topic=1466.msg11032#msg11032

Probe for EPMA now contains (in addition to the Willis (1993) two-term and the six-term expanded dead time correction expressions) the new logarithmic correction expression derived by Aurelien Moy from integrating the infinite Maclaurin-like series:

[inline image: the logarithmic dead time correction expression]

This new logarithmic expression for the dead time correction of WDS intensities will handle multiple photon coincidence at high count rates and still provide accurate correction at zero to low count rates.
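For readers viewing this without the image: summing the Maclaurin-like series τ + N·τ²/2 + N²·τ³/3 + … gives −ln(1 − N·τ)/N, which yields the logarithmic form cps_true = cps_obs / (1 + ln(1 − cps_obs·τ)). That is our reading of the series sum; see the linked topic for the authoritative expression. A sketch comparing it with the simple non-paralyzable expression:

```python
import math

def deadtime_linear(cps_obs, tau):
    # classic single-term (non-paralyzable) expression
    return cps_obs / (1.0 - cps_obs * tau)

def deadtime_log(cps_obs, tau):
    # logarithmic expression: sums the infinite series of
    # multiple-coincidence terms (requires cps_obs * tau < 1 - 1/e)
    return cps_obs / (1.0 + math.log(1.0 - cps_obs * tau))

# The two agree at low count rates and diverge at high ones:
for cps in (1000.0, 100000.0, 300000.0):
    print(cps, deadtime_linear(cps, 1.5e-6), deadtime_log(cps, 1.5e-6))
```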