At least 12 bits is the standard. What kind of a**h**** would make a SEM/EPMA EHT with a 10-bit DAC? EHT is not (and can't be) fast-changing like, e.g., the scanning coils - there is absolutely no excuse for using a lower resolution. Even my grandma would have used 12 bits for that...
Wow, are you trying to pick a fight? Any design is a compromise. When you tell your electronics department to make a high-voltage supply that goes from a few kV to 30 or so, their first question will be "how accurate do you want it to be?", and in our case we specified "max 1 V per step, max 1 Vpp ripple". So I suppose we ended up with a 15-bit DAC. But that's just because these components are cheap nowadays (and, as you said, it doesn't need to be fast).
SG sometimes gets a little excited - he means well. ![Smiley :)](https://probesoftware.com/smf/Smileys/default/smiley.gif)
Our effort to find a means of precise beam energy measurement is only the first step. It would be a waste of time if we could not do the second step - a software offset calibration. That is: if we set 15 kV and measure that in reality it is 15.075 kV, we could then have our probe/SEM control software request 14.925 kV whenever 15 kV is asked for, to offset those 75 V of overvoltage. If the DAC is 10-bit with a ~30 V step, we could only set the offset value to either 14.94 kV or 14.91 kV, which would give 15.015 or 14.985 kV in reality. I think if we find an easy way to measure the real energy, the second step is really easy to implement in PfS (I am saying this just as a side observer, aware of the huge list of different calibrations already there). If the DAC is 12-bit we can correct it better, and if it is 14-bit, better still... With a 10-bit DAC our correction would still be kind of OK at 15 kV. But at, e.g., 5 kV, a 30 V step (10-bit DAC) is about 0.6% of the set value - and that is not OK.
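To make the quantization point concrete, here is a minimal Python sketch of the offset arithmetic above. Everything in it is illustrative: it assumes a DAC spanning 0-30 kV linearly and a constant measured offset; the names (`FULL_SCALE_V`, `corrected_setpoint`) are mine, not from any real instrument software.

```python
# Sketch of the software offset calibration idea (illustrative only).
# Assumption: the DAC spans 0-30 kV linearly and the measured beam
# energy error is a constant offset in volts.

FULL_SCALE_V = 30_000  # assumed DAC full-scale output, in volts

def step_size(bits):
    """Smallest voltage increment a DAC of the given bit depth can make."""
    return FULL_SCALE_V / (2 ** bits)

def corrected_setpoint(requested_v, measured_offset_v, bits):
    """Nearest settable value that compensates a known constant offset."""
    target = requested_v - measured_offset_v   # e.g. 15000 - 75 = 14925 V
    step = step_size(bits)
    code = round(target / step)                # quantize to the DAC grid
    return code * step

# With a 10-bit DAC (~29.3 V steps), a 75 V offset at 15 kV can only be
# corrected to within about half a step; with 12 bits the residual shrinks.
for bits in (10, 12, 14):
    set_v = corrected_setpoint(15_000, 75, bits)
    residual = (set_v + 75) - 15_000           # beam energy error left over
    print(f"{bits}-bit: set {set_v:9.2f} V, residual {residual:+6.2f} V")
```

The residual error is bounded by half a DAC step, which is why every extra bit halves how well the offset can be dialed out.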
The choice of DAC (type, bit depth, speed) is often a compromise between speed, price and stability. A high-precision DAC (16 bits and more) needs a much better PCB design, otherwise the electronic noise present on the PCB renders the additional bits of precision useless. Higher-bit-depth DACs also tend to have larger delays, which matters for example for scanning-beam control and image acquisition synchronization (thus for imaging we see 11-14-bit DACs used for scanning-beam control; likewise the ADC for imaging is often not 16-bit but 8-, 10- or 12-bit, for the same speed/real-time reason).

Price-wise, 10-bit and 12-bit DACs from the same vendor with the same technology often differ by only $1-2 (often they come in the same IC package). It makes a lot of sense to choose 10-bit over 12-bit when manufacturing kids' toys: making 1,000,000 toys with the cheaper 10-bit instead of a 12-bit DAC yields a huge profit. However, even taking into account bus width and its buffers (e.g. Jeol and Cameca EPMA use VME, the column-control cards have internal parallel buses, and the DACs use a parallel digital interface; a 10-bit bus is smaller than a 12-bit bus and saves a few mm of PCB space - with a serial interface there is no difference at all), a saving of more or less $10 per unit, for a few to tens of units (costing 0.5 to a few million each) produced per year, is really ******* ridiculous - and I meant it when I called it being AH originally.

The speed argument is non-existent here: HV can't be changed many times per second, and the DAC for HV control basically produces a stable DC signal. So for HV control the choice of DAC (between 10 and 14 bits) has no basis for any compromise; a lower-bit DAC there has no pros, only cons. There is no reason to choose fewer bits in this case.
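The bit-depth numbers being thrown around can be sketched in a few lines of Python. This again assumes an idealized 30 kV full-scale range (my assumption, matching the earlier posts, not any particular instrument); it shows the per-step voltage for 10/12/14 bits, the worst-case relative step at 5 kV, and why the "max 1 V per step over 30 kV" spec quoted above lands on a 15-bit DAC.

```python
import math

FULL_SCALE_V = 30_000  # assumed full-scale HV output, in volts

def step_v(bits):
    """One LSB of an ideal DAC spanning FULL_SCALE_V."""
    return FULL_SCALE_V / (2 ** bits)

# Step size and its relative size at a low accelerating voltage (5 kV),
# where a coarse DAC hurts the most:
for bits in (10, 12, 14):
    print(f"{bits}-bit: step {step_v(bits):6.2f} V, "
          f"one step at 5 kV = {step_v(bits) / 5_000:.2%}")

# Minimum bit depth for "max 1 V per step" over the full 30 kV range:
min_bits = math.ceil(math.log2(FULL_SCALE_V))
print(f"1 V/step over {FULL_SCALE_V} V needs {min_bits} bits")  # 15 bits
```

A 10-bit step is ~29.3 V, i.e. ~0.6% of a 5 kV setting, while 12 bits brings it down to ~0.15% - which is the whole argument in two numbers.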