Ok, I think I might know what is going on.

In x-ray maps, the counting statistics for a single pixel are essentially determined by the pixel dwell time. So increasing the number of pixels doesn't much change the standard deviation (in a homogeneous material).

If you want to see the error decrease as a function of the (increasing) number of pixels, then what you want is the standard error (not the standard deviation) of the pixels.

This is because the standard error describes the error of the average, while the standard deviation describes the error for a single measurement.
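To make that concrete, here is a small sketch (my own illustration, not from any particular mapping software) that simulates a homogeneous material where each pixel's x-ray count is Poisson-distributed with a mean fixed by the dwell time. The standard deviation stays put as the map grows, while the standard error of the mean shrinks like 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_counts = 100  # assumed mean counts per pixel, set by dwell time

for n_pixels in (100, 10_000, 1_000_000):
    counts = rng.poisson(mean_counts, n_pixels)
    sd = counts.std(ddof=1)        # spread of a single pixel's value
    se = sd / np.sqrt(n_pixels)    # uncertainty of the map average
    print(f"{n_pixels:>9} pixels: SD = {sd:6.2f}, SE = {se:.4f}")
```

The SD hovers near sqrt(100) = 10 no matter how many pixels you collect, but the SE of the average keeps dropping as the map gets bigger, which is exactly the behavior described above.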

Now, as to why we (in science) routinely report the average and the standard deviation rather than the average and the standard error: that is a question I have asked several statisticians over the years, and the only answer I've heard is "maybe because the standard deviation is larger and therefore a more conservative estimate"... but maybe one of you more mathematical types can enlighten us?

I myself would sure like to hear why we don't see the average and standard error reported together more often.

In any case, this pixel-averaging situation may be a case in which the standard error is more applicable than the standard deviation?