Effect of self-absorption on the analytical characteristics of an emission source
Having described the effect of self-absorption on the emission line shape, it is now interesting to look at its effect on spectrochemical analysis, i.e. on the shape of the calibration curves. We can model the effect of self-absorption on the calibration curve if we assume that nothing changes in the emission source (discharge) as more material is introduced; in particular, that the spatial distribution of the emitters and absorbers does not change and that their relative abundance remains constant.
Summarising the effect of self-absorption on the line shape:
- In the separated emitter-absorber model, a clear dip develops in the emission line with increasing density of the optically active species.
- In the second case, where emitter and absorber are simultaneously present in the same region, a flat-top profile develops with increasing atom density.
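The two line shapes above can be sketched numerically. The following is a qualitative illustration only, with assumed Gaussian profiles and arbitrary units: in case 1 the emitted line is attenuated by a separate absorbing layer, in case 2 the transfer through a homogeneous emitting-absorbing layer saturates wherever the optical depth is large.

```python
import numpy as np

# Qualitative sketch of the two self-absorbed line shapes.
# phi is the (assumed Gaussian) spectral profile shared by emission
# and absorption; tau0 is the optical depth at the line centre and
# scales with the analyte density.  All values are illustrative.

nu = np.linspace(-3.0, 3.0, 601)   # frequency offset from line centre
phi = np.exp(-nu**2)               # normalised line profile

tau0 = 5.0                         # large optical depth at line centre

# Case 1 -- separated emitter and absorber: the emitted Gaussian is
# attenuated by exp(-tau); a dip (self-reversal) appears at the centre.
dip_profile = phi * np.exp(-tau0 * phi)

# Case 2 -- mixed emitter/absorber: solution of the transfer equation
# through the layer, saturating towards 1 where tau is large; the line
# develops a flat top instead of a dip.
flat_profile = 1.0 - np.exp(-tau0 * phi)
```

Plotting `dip_profile` and `flat_profile` against `nu` reproduces the two qualitative shapes discussed above: a self-reversed line for the separated model and a broadened, flat-topped line for the mixed model.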
To estimate the effect of self-absorption on calibration curves for spectrochemical analysis using a discharge source, we will first look at the evolution of the transmitted light across the discharge source, when the concentration of the analyte species is increased.
We will consider a configuration close to a Grimm-type discharge cell. The sample material is introduced into the discharge cell on one side and the emitted light is observed from the opposite side of the tubular discharge cell.
In case 1 the emitter and absorber are separated in space. In the emitter region the light intensity directed towards the detector increases linearly across the region; once the absorbing region is reached, the intensity decreases again, now exponentially. The light generated in the emitting region, as detectable at the end of that region, increases linearly with the abundance of the analyte species. At the same time, the absorption coefficient describing the absorbing region also increases linearly, but its effect on the transmitted light is not linear, owing to the exponential character of the Lambert-Beer law.
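This spatial behaviour can be written down directly. The sketch below assumes a hypothetical source of length 2L, with emission confined to [0, L] and absorption to [L, 2L]; the emissivity and the absorption coefficient are both taken to scale linearly with the analyte density c (all units arbitrary).

```python
import numpy as np

# Case 1: spatially separated emitter and absorber.
# eps*c is the emissivity per unit length, k*c the absorption
# coefficient; both scale linearly with the analyte density c.
# These parameter values are illustrative assumptions.

def intensity_profile(x, c, eps=1.0, k=1.0, L=1.0):
    """Intensity of light travelling towards the detector at position x."""
    x = np.asarray(x, dtype=float)
    emit = eps * c * x                            # linear growth, 0 <= x <= L
    I_end = eps * c * L                           # intensity leaving the emitter
    absorb = I_end * np.exp(-k * c * (x - L))     # Lambert-Beer decay, x > L
    return np.where(x <= L, emit, absorb)

x = np.linspace(0.0, 2.0, 201)
I_thin = intensity_profile(x, c=0.2)    # optically thin: weak re-absorption
I_thick = intensity_profile(x, c=5.0)   # optically thick: strong re-absorption
```

For the thin case the transmitted intensity at the exit is close to the intensity leaving the emitter; for the thick case most of the emitted light is re-absorbed before it reaches the detector, even though far more light was generated.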
The different behaviour of emission and absorption can be illustrated by choosing reference conditions for the analyte density. We define I_E and T as the emitted light intensity and the transmission coefficient of the absorbing layer, respectively, at a reference density c_r, and we express any analyte density c relative to c_r.
When we double the analyte density, the emitted light is multiplied by a factor of two, while the transmission coefficient, which is smaller than one, is squared. More generally, at a density c the emitted light becomes (c/c_r)·I_E and the transmission coefficient becomes T^(c/c_r).
As a consequence of this non-linear dependence of the transmission coefficient on the species density, the observed integral intensity goes through a maximum and then decreases as yet more material is introduced into the light source: the calibration curve is reversed. For optically thin media the absorption is only a minor effect and the calibration curve is nearly linear; for optically thick layers the absorption effect is significant.
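The reversal can be made explicit with the quantities just defined. The transmitted intensity is I(c) = (c/c_r)·I_E·T^(c/c_r): the linear emission factor competes with the exponentially falling transmission, so the curve peaks and then drops. The numerical values below (I_E, T) are illustrative assumptions.

```python
import numpy as np

# Case 1 calibration curve: linear emission times exponential
# transmission, I(c) = (c/c_r) * I_E * T**(c/c_r).
# I_E and T are the reference-density values defined in the text;
# the numbers chosen here are purely illustrative.

def calibration_case1(c, c_r=1.0, I_E=1.0, T=0.5):
    r = np.asarray(c, dtype=float) / c_r
    return r * I_E * T**r

c = np.linspace(0.01, 10.0, 1000)
I = calibration_case1(c)
c_max = float(c[np.argmax(I)])   # density at which the curve reverses
```

Setting the derivative of I(c) to zero gives a maximum at c/c_r = 1/ln(1/T); beyond that density, adding more analyte actually reduces the detected signal.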
In the second case, where emitting and absorbing species are present in the same region, the line intensity first increases with the growing number of emitting and absorbing species without a significant change in the line shape, and the calibration curve is nearly linear. At a certain point, however, the flat-top profile becomes more pronounced and the increase in integral intensity slows down. The figure below illustrates the build-up of intensity across the emission source for different species densities; we consider here light directed towards the right, where the imaginary detector sits. For optically thin sources the light intensity increases linearly from left to right. For increasing densities, or optical thickness, the saturation effect becomes noticeable. The limit for very large sources does not depend on the analyte species density, but only on the ratio of absorbing and emitting species; the analyte density only determines how quickly this limit is reached. The trend, however, is not reversed: the calibration curve will not show a local maximum.
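The saturation behaviour follows from the radiative transfer equation dI/dx = εc − kcI along the source, whose solution at the exit is I(L) = (ε/k)·(1 − exp(−kcL)). A short sketch, with illustrative per-density emission and absorption coefficients ε and k:

```python
import numpy as np

# Case 2: emitter and absorber mixed in the same region.
# Solving dI/dx = eps*c - k*c*I over a source of length L gives
# I(L) = (eps/k) * (1 - exp(-k*c*L)).
# eps and k are assumed per-density coefficients (illustrative values).

def intensity_case2(c, L=1.0, eps=1.0, k=0.5):
    c = np.asarray(c, dtype=float)
    return (eps / k) * (1.0 - np.exp(-k * c * L))

c = np.array([0.1, 1.0, 10.0, 100.0])
I = intensity_case2(c)
# Low density:  I ~ eps*c*L, nearly linear in c.
# High density: I saturates at eps/k, independent of c -- no reversal.
```

The limiting value ε/k depends only on the ratio of emission to absorption, exactly as stated above; the density c merely sets how quickly the limit is approached, and the intensity never decreases with increasing density.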
When calibrating an optical emission spectrometer we relate the detected line intensity to the atomic concentration. A “normal” spectrometer used for this purpose will not resolve the actual line shape, but detect the integral intensity. As the concentration of the analyte species increases more photons will be emitted, but also more will be absorbed.
If we assume no self absorption at all (no absorbing atoms present in the source), the detected (transmitted) light intensity will increase linearly with the density of analyte species in the discharge.
In case 2, the mixed emitter/absorber, the transmitted line intensity first increases almost linearly with the analyte density, so the calibration curve is expected to be linear. As the analyte density increases the emission line develops a flat-top shape, hardly increasing in peak intensity as the density rises; the integral line intensity, however, still increases slowly, because the line becomes broader. There is no inversion of the calibration curve in this case.

In the first case, the separated emitter/absorber, the line intensity again increases linearly with the analyte density as long as the effect of self-absorption is small, but eventually the calibration curve deviates from linearity. As the dip in the line centre develops, the calibration curve is inverted: the transmitted intensity drops as the analyte density increases. Measuring the transmitted line profile with a "low resolution" spectrometer typically used for analytical purposes will not necessarily show the dip, because the recorded line profile is then determined by the instrumental spectral resolution and not by the characteristics of the emission source.

The situation in a real discharge source is of course more complex. Neither the distribution of emitters and absorbers in the discharge volume nor their relative distribution is constant. The gas temperature varies between the different regions of the discharge: it is relatively high close to the cathode and drops further away from it. It is therefore obviously difficult to predict the exact line shape of the emitted (transmitted) light and, consequently, to derive a general form of the calibration curves, i.e. how they deviate from the ideal linear case without self-absorption towards the non-linear case in its presence.
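Why a low-resolution spectrometer hides the self-reversal dip can be shown by convolving a dipped line profile with a broad instrument function. All profiles and widths below are assumed for illustration:

```python
import numpy as np

# Illustration: a self-reversed emission line convolved with a broad
# (Gaussian) instrument profile.  All widths are illustrative; the
# instrument is assumed much wider than the emitted line.

nu = np.linspace(-5.0, 5.0, 2001)

def gauss(x, w):
    return np.exp(-x**2 / (2.0 * w**2))

# Emitted profile: Gaussian line with a narrow self-reversal dip
# at the line centre (case 1, optically thick).
emitted = gauss(nu, 1.0) * np.exp(-2.0 * gauss(nu, 0.3))

# Broad instrument function, normalised to unit area.
instr = gauss(nu, 2.0)
instr /= instr.sum()

observed = np.convolve(emitted, instr, mode="same")
# 'emitted' has a local minimum at the line centre;
# 'observed' is smooth and peaks at the centre -- the dip is gone.
```

The recorded profile is dominated by the instrument function, so the analyst sees a single smooth peak even when the source itself emits a strongly self-reversed line.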
The purpose of the above figure is to illustrate the different spatial distributions of absorbers (left) and emitters (right) in a discharge cell. The sample is situated in the centre at the bottom of the discharge cell and acts as the discharge cathode; for end-on operation, the photon detector would be situated at the top of the image. The figure shows, in fact, the calculated spatial distribution of ground-state atoms (left) and ions (right) in a discharge cell. The ion-density distribution certainly does not represent the excited-atom (emitter) distribution, but as the recombination of the ions with free electrons in the discharge generates excited atoms, we simply pretend that it does; for this qualitative discussion the assumption is justified. The atom density is highest close to the cathode (sample), whereas the ion density reaches its maximum about 8 mm into the discharge volume. Comparing the two distributions clearly shows that the emitter and absorber densities both vary within the discharge volume, but they vary differently. The expected shape of the calibration curve is therefore somewhere in between the two cases described earlier.
Annemie Bogaerts, Renaat Gijbels, Glen P. Jackson; Modeling of a millisecond pulsed glow discharge: Investigation of the afterpeak; J. Anal. At. Spectrom., 18 (2003) 533–548; DOI: 10.1039/b212606k
First published on the web: 15. 03. 2008.
Author: Thomas Nelis. The text summarises and extends topics presented by Prof. E. B. Steers during the first GLADNET Training Course held in Antwerp, Belgium, in September 2007.