Reprocessing aeromagnetic data using modern semi-automatic interpretation methods
Journal name: First Break
Issue: Vol 37, No 8, August 2019 pp. 103 - 106
Special topic: Legacy Data
Modern airborne geophysical acquisition platforms (fixed-wing aircraft, helicopters, and drones) can acquire large amounts of data quickly. Semi-automatic interpretation methods that provide initial estimates of source location and depth have been in widespread use since the 1970s. The output of these methods is used as the starting point for a detailed interpretation based on forward modelling and inversion. One of the earliest methods in widespread use on computers was Euler deconvolution (Thompson, 1982; Reid et al., 1990):

Δx ∂f/∂x + Δz ∂f/∂z = Nf    (1)

where f is the potential field, Δx and Δz are the distances from the current measurement point to the source in the horizontal and vertical directions, and N is the structural index (SI), which characterizes the rate at which the field intensity decreases with increasing distance from the source (0 ≤ N ≤ 3). For map data, equation 1 becomes

Δx ∂f/∂x + Δy ∂f/∂y + Δz ∂f/∂z = Nf    (2)

where Δy is the corresponding distance in the second horizontal direction. The method was computationally undemanding (for profile datasets) even on late-1980s computer hardware, and had the considerable advantage that it is unaffected by remanent magnetization. In use, N would be specified, and equation 1 solved for Δx and Δz using a moving window of data points. Each window produces a solution, so that there is a solution for almost every data point. Interpretation of the results consists of looking for clusters of solutions, which correspond to the locations of the upper corners of sources such as dykes and contacts. Figure 1b shows the application of Euler deconvolution to a simple synthetic dataset. The method has produced clear clusters at the tops of both dykes in the model, but interference has also resulted in many spurious solutions. The spatial resolution of the results can be improved by applying the method to the derivatives of the field, or to the analytic signal amplitude (Salem and Ravat, 2003), in which case the SI used must be increased by the order of the derivative.
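The moving-window procedure can be sketched in a few lines. The example below is a minimal illustration, not the published implementation: it assumes the horizontal and vertical derivatives fx and fz are already available (in practice they are usually computed in the frequency domain), and it solves the Reid et al. (1990) form of the Euler equation, which adds a regional base level B to the unknowns. The function name and window size are illustrative.

```python
import numpy as np

def euler_profile(x, f, fx, fz, N, window=7):
    """Moving-window Euler deconvolution along a profile.

    For each window, solves the Euler equation in the form of
    Reid et al. (1990), with a regional base level B:
        (x - x0)*fx - h*fz = N*(B - f)
    for source position x0 and depth h (sensor at z = 0, z positive
    down) by linear least squares.  fx and fz are the horizontal and
    vertical derivatives of the field f sampled at positions x.
    Returns one row (x0, h, B) per window.
    """
    half = window // 2
    sols = []
    for i in range(half, len(x) - half):
        s = slice(i - half, i + half + 1)
        # Rearranged per data point for the unknowns [x0, h, B]:
        #   x0*fx + h*fz + N*B = x*fx + N*f
        A = np.column_stack([fx[s], fz[s], N * np.ones(window)])
        b = x[s] * fx[s] + N * f[s]
        sol, *_ = np.linalg.lstsq(A, b, rcond=None)
        sols.append(sol)
    return np.array(sols)
```

Run on a noise-free synthetic anomaly, the windows straddling the source recover its position and depth almost exactly; the cluster of such solutions is what the interpreter looks for.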
Of course, the penalties for using higher-order derivatives are increased sensitivity to noise and decreased sensitivity to deeper sources. One problem with the method is that it produces a large number of solutions that are not associated with any source (see Figure 1), and so many strategies have been employed to remove the invalid solutions (Fitzgerald et al., 2004), although they can never be removed completely. Another problem is that use of an incorrect SI results in source locations that are wrong both horizontally and vertically. When applied to map datasets (Figure 2d,g), the results can be difficult to visualize if several sources are present. Note that because of the invalid solutions, gridding and then contouring Euler solutions is usually a very bad idea.
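One common family of rejection strategies can be illustrated with a statistical acceptance test, in the spirit of Thompson's tolerance criterion: discard any window whose least-squares depth estimate is poorly constrained relative to its value. This is a hedged sketch, not any published algorithm; the function name and the 5% tolerance are illustrative choices.

```python
import numpy as np

def accept_solution(A, b, tol=0.05):
    """Accept or reject one window's Euler solution.

    A and b form the window's least-squares system for the unknowns
    [x0, h, B].  The solution is kept only when the estimated relative
    standard error of the depth h is below `tol`; windows dominated by
    noise or interference yield poorly constrained depths and are
    discarded.
    """
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ sol
    dof = max(len(b) - A.shape[1], 1)
    sigma2 = resid @ resid / dof           # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)  # parameter covariance estimate
    depth_err = np.sqrt(cov[1, 1])         # standard error of the depth
    return sol, bool(depth_err < tol * abs(sol[1]))
```

Even with a filter of this kind, some invalid solutions survive, which is why clustering, rather than contouring, remains the safer way to read the results.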