Abstract
A recently released voxel model quantifying the aggregate resources of the Belgian part of the North Sea includes lithological properties of all Quaternary sediments and modelling-related uncertainty. As the underlying borehole data come from various sources and cover a long time-span, data-related uncertainties should be accounted for as well. By applying a tiered data-uncertainty assessment to a composite lithology dataset with uniform, standardized lithological descriptions and rigorously completed metadata fields, uncertainties were qualified and quantified for positioning, sampling and vintage. The uncertainty in horizontal positioning combines navigational errors, on-board and off-deck offsets and underwater drift. Sampling-gear uncertainty evaluates the suitability of each instrument in terms of its efficiency of sediment yield per lithological class. Vintage uncertainty provides a likelihood of temporal change since the moment of sampling, using the mobility of fine-scale bedforms as an indicator. For each uncertainty component, quality flags from 1 (very uncertain) to 5 (very certain) were defined and converted into corresponding uncertainty percentages meeting the input requirements of the voxel model. Inevitably, an uncertainty-based data-selection procedure aimed at improving the confidence of data products reduces data density. Whether or not this density reduction is detrimental to the spatial coverage of data products will depend on their intended use. At the very least, demonstrable reductions in spatial coverage will help to highlight the need for future data acquisition and to optimize survey plans. By opening up our subsurface model and its associated data uncertainties in a public decision support application, policy makers and other end users are better able to visualize overall confidence and to identify areas where coverage is insufficient for their needs. Engineering geologists and geospatial analysts in particular, who have to work with a borehole dataset that becomes increasingly sparse with depth below the seabed, will profit from a better visualization of data-related uncertainty.
Thematic collection: This article is part of the Mapping the Geology and Topography of the European Seas (EMODnet) collection available at: https://www.lyellcollection.org/cc/EMODnet
Contributing towards a more sustainable society, pan-European data initiatives in the field of geology are on the rise. In order to streamline access to the diverse databases and services involved, the umbrella organization of all geological surveys in Europe, EuroGeoSurveys, piloted the European Geological Data Infrastructure (EGDI). EU co-funded projects include EMODnet (the European Marine Observation and Data network; Martín Míguez et al. 2019) and GeoERA (Establishing the European Geological Surveys Research Area to deliver a Geological Service for Europe; Vidovic et al. 2020).
For the marine realm, high-quality substrate and habitat maps are generated from the resulting databases, underpinning Europe's Blue Growth strategy and its Marine Strategy Framework Directive (MSFD) and supporting sustainable growth in the marine and maritime sectors. A better management of the seabed and its subsurface is needed as the pressures from human activities intensify (Halpern et al. 2008). Seabed-sediment maps of EMODnet Geology, for example, are instrumental in assessing the status of the seabed from a transnational habitat-mapping and MSFD perspective. Each European marine data initiative has the potential to enhance the effectiveness of marine spatial plans covering aggregate extraction, dredging and disposal of sediment, fisheries and windfarm development. Such plans are needed to optimize the assignment of specific zones for each activity and to designate marine protected areas at the most suitable locations (Douvere 2008; Douvere and Ehler 2011). Belgium, a pioneer in science-based spatial planning, is at the forefront of integrating socio-economic, ecological and institutional aspects of human activities at sea (Compendium for Coast and Sea; Devriese et al. 2018).
In all of these initiatives, data and datasets from different origins, time periods and owners are harmonized and merged, but the quality of the supporting data is seldom quantified. However, the applied value of scientific findings on environmental status and seabed-habitat changes may be limited by uncertainties related to metadata and the quality of the underlying geological data (van Heteren and Van Lancker 2015). Traditionally, data uncertainties were neglected, or at least left unquantified, in seabed-substrate and -habitat maps (e.g. the 1:250 000 series of geological maps of the UK continental shelf areas; British Geological Survey 1977–2000). In the latest EMODnet Geology data products, neither data density nor data quality is considered. Instead, the highest confidence score is assigned when both sediment sampling and remote sensing are used to create a seabed-sediment map (Kaskela et al. 2019). Generally, data are not discarded, even when old or of poor quality, since data are usually in short supply.
Dealing with uncertainty is an inherent element of geological interpretation (Bond 2015; Pérez-Díaz et al. 2020), and quantification of the full spectrum of data-related uncertainties therefore requires some additional steps. Quality flagging is the most basic approach to quantifying uncertainty within a dataset and is done by assessing metadata fields. It can be limited to indicating the presence or absence of data, expressed in only a few categories (e.g. 1 to 5, or low to high), or be very complex, with a full spectrum of quantitative error ranges (Bárdossy and Fodor 2001). Modern data products come with indicative measures of confidence (e.g. a combination of methods; Kaskela et al. 2019) or with actions to improve confidence (e.g. the usage of historical data; Stephens et al. 2011). McBreen et al. (2011) combined measurements of uncertainty with information about data quality to produce a confidence map for the seabed-habitat map of the UK. They took into account factors such as age, data density and data-collection techniques. Garlan et al. (2018) took confidence a step further by considering not only these factors, but also data consistency, map scale and positioning precision. In this light, it is no surprise that automated procedures, although helpful in assigning data quality, will always be far from perfect.
Uncertainties in 3D models are even more complicated than those of 2D maps, but can be incorporated into the final data products more easily, as a parameter that can be visualized separately. Interpolation (e.g. Kriging) and simulation (e.g. stochastic) techniques create ‘modelling uncertainty’ that can easily be calculated but may have many different components (Wellmann et al. 2011). Entropy, an overall measure of modelling uncertainty based on probability distributions and calculations (Shannon 1948), is increasingly provided as a model parameter (Stafleu et al. 2011; Lindsay et al. 2012; Wellmann and Regenauer-Lieb 2012; Hademenos et al. 2018).
Particularly challenging for both data and model uncertainties is their effective implementation in user-specific applications (e.g. aggregate-resource quantification, assessments of environmental status and habitat change). Intuitively, end users have confidence in colourful models, whether that confidence is justified or notoriously overrated (e.g. Cowan 2017). Communicating the logic and relevance of uncertainty assessments to end users will remain difficult until convincing evidence can be presented that risks can be reduced, or money saved, by taking uncertainty into account during decision making.
This paper presents a uniform step-by-step approach enabling consistent assessment of data uncertainty for a borehole dataset covering the Quaternary of the Belgian Continental Shelf. Originally, the dataset was used for the creation of a voxel-based aggregate-resource model (TILES consortium 2018a; Van Lancker et al. 2019). Here, we emphasize the methodology of the uncertainty assessment and the creation of confidence maps. By including data uncertainties in any 2D or 3D model, it is possible to visualize the influence of both data-related and model-related uncertainties and to compare calculations made using subsets of data meeting different quality criteria. These visualizations and comparisons can be queried in an associated decision-support tool and are key elements of data-gap analyses, a starting point for further optimization of the proposed workflow.
Study area
The Belgian part of the North Sea (BPNS), only 3455 km2 in size and with a 65 km-long coastline, has the ideal size and borehole-data volume to test methodologies assessing data uncertainty within a composite marine geological dataset. Its shallow-marine environment reaches depths of up to 45 m LAT (Lowest Astronomical Tide) and is dominated by several groups of mostly stable sand banks and associated swales (Van Cauwenberghe 1971; Lanckneus and De Moor 1991). Offshore, these large morphological entities are mostly covered with amalgamating sand waves and megaripples of different sizes. Nearshore, some isolated sand-wave patches occur. In the southern Bight of the North Sea, sand waves typically show oscillatory migration at rates of up to 10 m a−1 offshore and up to 20 m a−1 near the coast (Lanckneus et al. 2001; van Dijk and Kleinhans 2005).
Fine sand occurs predominantly in the nearshore, with extensive mud (clay and silt) fields towards the east, whilst medium to coarse sand is most abundant farther offshore (Verfaillie et al. 2006; Van Lancker et al. 2007). Gravel beds are limited to offshore swales, where the Quaternary cover is thinnest (Le Bot et al. 2005; Van Lancker et al. 2007). Paleogene clay crops out in this same area, where the Quaternary is absent (Mathys 2009). Information on the seabed sediments and the underlying subsurface is now available in a subsurface model of the entire Quaternary (Hademenos et al. 2018; TILES consortium 2018a) (Fig. 1).
View of the subsurface voxel model for the Belgian part of the North Sea (BPNS) (Hademenos et al. 2018).
In the Belgian marine realm, the number of activities affecting the seabed is substantial. Aquaculture, coastal protection, dredging and dumping, fisheries, military use, nature conservation, offshore energy, power and telecommunication cables, sand and gravel extraction and ports have different impacts to different depths, both separately and cumulatively (Compendium for Coast and Sea; Devriese et al. 2018). Various stakeholders are involved, including those related to shipping, tourism, cultural heritage and scientific research, ensuring that tests on data uncertainty can be evaluated by decision makers who will profit directly from better tools for marine spatial planning.
Methodology
Assessing the data uncertainty of geological datasets is complex and requires a tiered approach with a multiple-step workflow (Fig. 2). Following compilation of a standardized and harmonized marine subsurface dataset and corresponding metadata, data uncertainty was scored for horizontal positioning, sampling and vintage. Next, each uncertainty parameter was mapped individually along with measured average data density. This step was repeated for various uncertainty filters, each reducing the number of contributing data points while lowering the uncertainty, thus optimizing the maps for areas with sufficiently high data density. Data uncertainty was incorporated into a voxel model of the subsurface using ordinary kriging. Finally, all uncertainties were made available for querying in a decision support system (DSS; TILES consortium 2018b) so that different combinations of uncertainty could be visualized according to user needs (De Tré et al. 2018).
A tiered approach towards a uniform uncertainty assessment of marine geological datasets.
Geological datasets and their metadata
In the framework of the TILES project (Van Lancker et al. 2019), a lithology dataset was created containing geological descriptions of 1491 sediment cores, 348 grab samples and 30 drillings taken from the Belgian seabed (SediLITHO@SEA; Kint et al. 2020). It complements the sediment-related datasets for grain-size parameters (SediSURF@SEA; Van Lancker et al. 2007) and full particle-size distribution curves (SediCURVE@SEA; Van Lancker et al. 2012). The assembled information merges contributions of science institutes, national geological surveys and universities with a common interest in marine sediments, as well as descriptions from project-based sampling campaigns commissioned by authorities and partly owned by private companies.
Lithological data and associated metadata were harmonized and standardized to facilitate the generation of seamless seabed maps (Van Lancker and van Heteren 2013) following internationally proposed or agreed guidelines (e.g. Geo-Seas for geological and geophysical data (van Heteren 2010), SeaDataNet for oceanographic data, and INSPIRE for spatial information). To ensure machine-readability, interoperability and compatibility of the data, lithological descriptions available as text were transferred to code.
Main lithology was classified according to the Wentworth (1922) scheme; the full lithology including admixtures according to the Folk (1954) classification. Other lithological descriptors in the coded dataset are grain-size range with related mean and median; compositional percentages of clay, silt, mud (all fractions finer than sand), sand, gravel and shell matter; and minor constituents like organic matter and glauconite. Colours were converted into Munsell code listing hue, value and chroma. Details on the coding process are provided in Kint et al. (2020).
Metadata were quality-controlled and completed for borehole identifier; coordinates with geodetic reference datum and type of navigation system; data originator; subcontractor and laboratory; ship or platform; borehole age (or vintage); penetration depth; sampling equipment; and analytical method. The date and time of sampling were traced back from on-board documents and included in Coordinated Universal Time (UTC), a common international standard. Seabed depth was converted to metres below mean sea-level (MSL), as the subsurface voxel model of the BPNS is vertically referenced to that datum (Hademenos et al. 2018). Although not a perfectly uniform reference level, it serves the need for a unified system between Belgium and the Netherlands.
FAIR principles (findability, accessibility, interoperability and reusability) guide the creation of datasets with complete metadata using controlled vocabularies and universal standards (developed by the Open Geospatial Consortium). The lithology dataset complies with the ISO 19115-1:2014 standard, which defines the schema required for describing information and services by means of metadata, and with the GeoSciML standard, a collaborative OGC–CGI product for geological data transfer. Models and digital maps made from the lithological data are visualized in web services (e.g. WFS, WMS).
Data uncertainty
By completing, harmonizing and standardizing borehole data and metadata, and by translating text fields into code, the assignment of uncertainty values to different attributes could be semi-automated in a spreadsheet. Uncertainty attributes were added to the dataset, and associated qualitative or quantitative values were filled in either for entire boreholes or for each interval described. Scores between ‘1’ and ‘5’ were manually tabulated and cover the full range from very uncertain to very certain information. Lost or incomplete metadata were flagged with a ‘0’. Scores were assigned on the basis of reviewed literature, estimated or measured errors, expert knowledge or the use of external data describing the environmental setting.
The uncertainty on the horizontal positioning of boreholes and grab samples concerns navigational accuracy (instrumental error), on-board and off-deck offsets (human error) and underwater drift of used gear (environmental error). The on-board offset is determined by the lengthways and crossways distances between the radio beacon or GPS receiver near the bridge and the location of instrument deployment on deck. This offset, a function of vessel orientation during drilling, is not always reported, incorporated or measured accurately. An extra offset should be included for the outside (safety) operating distances of instruments behind or beside the vessel. Underwater drift is an estimate between the deployment position of gear and its sampling position on the seabed. Lightweight gear is particularly susceptible. Heavy coring equipment can be positioned more accurately and its horizontal offset to the point of deployment is small. Ideally, all of these offsets should be reported and corrected for. It is impossible, however, to perfectly reconstruct offsets for vintage datasets. To obtain an indicative value, the uncertainty of horizontal positioning is estimated from maximum metric errors as (a) reported in literature on the accuracy of the navigation systems, (b) derived from image analysis of known vessels (for the on-board and off-deck offsets) and (c) calculated from underwater drift (a function of gear characteristics, local maximum current velocities and free-fall velocity in seawater).
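The three components can be combined into an indicative maximum metric error. The Python sketch below illustrates the principle under simple assumptions: the drift is estimated as descent time multiplied by the local maximum current velocity, the components are summed directly, and all numerical values are placeholders rather than the exact errors used in the study.

```python
def underwater_drift(water_depth_m, current_speed_ms, freefall_speed_ms):
    """Estimate the horizontal drift of gear between its deployment position
    and its sampling position on the seabed: descent time multiplied by the
    local maximum current velocity (illustrative simplification)."""
    descent_time_s = water_depth_m / freefall_speed_ms
    return current_speed_ms * descent_time_s


def max_positioning_error(nav_error_m, onboard_offset_m, offdeck_offset_m, drift_m):
    """Combine instrumental (navigation), human (on-board and off-deck offsets)
    and environmental (drift) components into an indicative maximum metric
    positioning error; straight summation is an assumption."""
    return nav_error_m + onboard_offset_m + offdeck_offset_m + drift_m


# Hypothetical example: a pre-GPS radio-navigation fix, a 15 m on-board offset,
# a 20 m off-deck operating distance and a lightweight grab in 30 m of water.
drift = underwater_drift(water_depth_m=30.0, current_speed_ms=1.0, freefall_speed_ms=0.5)
print(max_positioning_error(nav_error_m=50.0, onboard_offset_m=15.0,
                            offdeck_offset_m=20.0, drift_m=drift))
```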
Sampling uncertainty reflects the efficiency of each gear type in relation to the seabed substrate that was sampled, as derived from an extensive literature review supplemented by collaborative knowledge. Multiple sources were consulted to provide the best possible information on the advantages and disadvantages of each sampling device. Equipment includes surficial grab samplers (Hamon, Shipek, Van Veen) and subsurface sediment corers (box corer, flush corer, gravity corer, piston corer, vibrocorer and rotary drill). The lithological property used to determine the efficiency of sampling devices combines Wentworth (1922) and Folk (1954) characteristics. The BPNS substrate consists of various amounts of clay, silt, sand, gravel and shell hash (Houbolt 1968; Verfaillie et al. 2006; Kaskela et al. 2019); hence, sampling uncertainty is highly variable.
Assigning uncertainty to vintage, the timestamp of a sample, required a dedicated approach; it is not simply a function of sample age. Lithologies of older borehole samples, for example, may have been described with more care and in more detail than those of more recent samples. The time elapsed since sampling is more critical in areas with large and highly dynamic bedforms than in stable flat areas. In typical sandy shelf environments, erosion and deposition vary over time. Where bedforms, especially sand waves, are highly mobile and show large sedimentological differences from crest to trough (Lanckneus et al. 2001), they introduce uncertainty that impacts sample representativeness. In extreme cases, samples taken in the past may not even be suitable to map today's seabed. To estimate the degree of vintage uncertainty, sample locations were first classified according to a geomorphologically relevant benthic position index (BPI) (Fig. 3). Depending on the bathymetric position of a sample relative to the surrounding seabed, it was assigned to a crest, slope, flat or depression. These four categories were interpreted in terms of seabed stability. The BPI was calculated following the approach of Verfaillie et al. (2009), but using a more recent 20 × 20 m digital terrain model available from Flemish Hydrography and an optimized, more detailed parameterization (Kint et al. 2019). The same bathymetry model was used as the top surface of the voxel model of the Belgian Continental Shelf (Hademenos et al. 2018). In the context of uncertainty assessments, a fine-scale BPI turned out to be most meaningful as it accounts for the most relevant bedforms (sand waves).
Benthic position index reflecting various fine-scale geomorphological features on the BPNS. Numerous dynamic sand waves are superimposed on top of the sand banks, on their slopes and in the most offshore swales. Planation surfaces dominate the coastal zone. Details of the offshore sandwave fields and the location of the Middelkerke Bank are highlighted.
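As an illustration of this flagging step, the following Python sketch maps BPI-derived geomorphological classes onto vintage quality flags. The class names and the exact flag values are assumptions, chosen only to reflect the ordering described in the text (lowest scores for mobile sand-wave crests and slopes, highest for stable flats and swales).

```python
# Illustrative flag values: the study assigns the lowest scores to crests and
# slopes of migrating sand waves and the highest to stable areas; the exact
# numbers per class are assumptions.
VINTAGE_FLAGS = {
    'crest': 1,       # highly mobile sand-wave crest
    'slope': 2,       # sand-wave or sand-bank slope
    'depression': 3,  # small local depression (intermediate certainty)
    'flat': 4,        # intermediate flat
    'swale': 5,       # broad, stable swale
}


def vintage_flag(bpi_class: str) -> int:
    """Return the vintage quality flag (1 = very uncertain, 5 = very certain)
    for a BPI-derived geomorphological class; 0 flags an unknown class."""
    return VINTAGE_FLAGS.get(bpi_class, 0)
```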
Mapping uncertainty parameters
To highlight areas with the highest uncertainties, the uncertainty parameters (positioning, sampling and vintage) need to be mapped separately. Four steps are best taken: determination of the average data density, showing how many data points contribute to each grid cell of a data product and capturing lateral and depth variability; direct mapping of measured or categorized errors and accuracies; transformation of the measured values or categorical quality flags into uncertainty percentages, thus obtaining continuous variables suitable for 3D interpolation; and selection of data subsets on the basis of the uncertainty maps themselves. Repeating these steps is necessary to strike an optimal balance between map quality and coverage. The geographic information system QGIS, a Free and Open Source Software (FOSS) package that supports viewing, editing and analysis of geospatial data, served as the working platform.
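As a minimal illustration of the first step, the Python sketch below counts data points per 200 × 200 m grid cell; the column names and input format are assumptions, not those of the TILES dataset.

```python
import numpy as np
import pandas as pd

CELL = 200.0  # m, matching the horizontal voxel resolution


def data_density(boreholes: pd.DataFrame) -> pd.Series:
    """Count the number of data points per 200 x 200 m grid cell, given
    projected x and y coordinates in metres (column names are assumptions)."""
    ix = np.floor(boreholes['x'] / CELL).astype(int)
    iy = np.floor(boreholes['y'] / CELL).astype(int)
    return boreholes.groupby([ix, iy]).size()
```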
Ordinary block kriging with logarithmic transformation was used as a 2D interpolation technique. A block size of 80 km, overlapping the BPNS, and a cell size of 200 m, corresponding to the horizontal grid size of the voxel model (see below), were chosen. A maximum search distance of 5000 m was needed to find 1 to 10 nearest data points. Neighbouring boreholes from the Netherlands, the UK and France were used to reduce edge effects along the BPNS border.
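The sketch below approximates this interpolation step with the open-source pykrige package rather than the software used in the study; block kriging and the 5000 m search neighbourhood with 1 to 10 nearest points are not reproduced, the logarithmic transformation is applied manually, and the coordinates and values are placeholders.

```python
import numpy as np
from pykrige.ok import OrdinaryKriging

# Placeholder borehole data: projected x, y in metres and, as an example
# variable, the positioning error in metres (log-transformed, as in the text).
x = np.array([480000.0, 485000.0, 492000.0, 501000.0])
y = np.array([5680000.0, 5674000.0, 5690000.0, 5683000.0])
err = np.array([20.0, 150.0, 400.0, 60.0])

ok = OrdinaryKriging(x, y, np.log(err), variogram_model='spherical')

# 200 m cell size, corresponding to the horizontal grid size of the voxel model.
gridx = np.arange(475000.0, 505000.0, 200.0)
gridy = np.arange(5670000.0, 5695000.0, 200.0)
z_log, variance = ok.execute('grid', gridx, gridy)

err_map = np.exp(z_log)  # back-transform from log space to metres
```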
Simple subsets of the lithology dataset were selected to obtain data products with reduced data uncertainty while maintaining acceptable levels of data density so that map coverage was not reduced significantly. The number of boreholes and the average borehole density in the BPNS were quantified for each of the data selections. Within these constraints, examples involved removal of samples with a positioning error of more than 200 m and elimination of boreholes with a penetration depth less than 1 m, both equivalent to the TILES voxel dimensions of 200 × 200 × 1 m. Two-dimensional mapping is only done for positioning accuracy in metres and not for the quality flagging of sampling and vintage.
For the transformation of metric positioning errors into uncertainty percentages, minimum and maximum thresholds were set. Corresponding to acceptable positioning limits for the voxel model, the best accuracy of 0 m was translated into a value of 100%, whilst the worst accuracy, set at 1000 m (5 voxels) or more, was translated into a value of 0%. Intermediate accuracies were assigned a percentage value in between (e.g. 75% for 250 m accuracy).
For the transformation of the categorical quality flags of sampling and vintage into uncertainty percentages, ranges were set from 0 (0%) for very uncertain data to 1 (100%) for the most certain information. Quality flags of ‘1’ were set to 0.2 (20%) and those with ‘5’ to 1 (100%). Overall ranges were defined as <0.1: very uncertain; 0.1–0.33: uncertain; 0.33–0.66: equally (un)certain; 0.66–0.9: certain; and >0.9: very certain.
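The two transformations and the subsequent labelling can be expressed compactly, as in the Python sketch below. The linear rescaling of positioning errors and the classification ranges follow the text, whereas the linear spacing of the intermediate quality flags and the treatment of flag 0 are assumptions.

```python
def positioning_error_to_certainty(error_m, worst_m=1000.0):
    """Linearly rescale a metric positioning error to a certainty percentage:
    0 m -> 100%, >= 1000 m (5 voxels) -> 0%; e.g. 250 m -> 75%."""
    return max(0.0, 100.0 * (1.0 - error_m / worst_m))


def quality_flag_to_certainty(flag):
    """Convert a categorical quality flag (1-5) into a certainty percentage:
    flag 1 -> 20%, flag 5 -> 100%. Linear spacing of the intermediate flags
    and treating flag 0 (missing metadata) as 0% are assumptions."""
    return 20.0 * flag if 1 <= flag <= 5 else 0.0


def certainty_label(fraction):
    """Label a certainty expressed as a fraction (0-1), using the overall
    ranges defined in the text."""
    if fraction < 0.1:
        return 'very uncertain'
    if fraction < 0.33:
        return 'uncertain'
    if fraction < 0.66:
        return 'equally (un)certain'
    if fraction < 0.9:
        return 'certain'
    return 'very certain'
```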
Incorporating uncertainty percentages in 3D geological models
In the Netherlands, Sequential Indicator Simulation (SIS; Goovaerts 1997; Chilès and Delfiner 2012) has been used to obtain 100 statistically equally probable simulations of the distribution of lithological classes in subsurface voxel models (Stafleu et al. 2011). Hademenos et al. (2018) applied this method to the BPNS marine geological dataset, profiting from abundant seismic profiles to constrain bounding surfaces delineating the different lithostratigraphic units. They used co-kriging or block kriging for the geostatistical interpolation of lithology- and stratigraphy-related attributes. The grid resolution (i.e. the size of a single voxel), set to 200 × 200 × 1 m (x; y; z) and adopted in the present study, was chosen on the basis of data density, scale of the observed geological features, and computing time (speed of interpolation). The modelling provided three measures quantifying uncertainty: probabilities of each simulated lithological class (lithoclass), modelling-related uncertainty, and the kriging error in the modelled stratigraphy (Hademenos et al. 2018). Isatis® (Geovariances 2011), a geostatistical modelling software package, was used to perform the simulations.
Data uncertainty for positioning, sampling and vintage has been incorporated in the voxelization process. Three-dimensional modelling of the data-uncertainty percentages was done using ordinary kriging. Although kriging was designed to interpolate measurements of natural phenomena, it has been applied successfully to datasets with non-natural parameters such as uncertainty (Silva and Costa 2016; Samsonova et al. 2018). As such, the TILES subsurface model now includes not only the lithoclass probabilities (for clay, silt, fine–medium–coarse sand and gravel) and the modelling-related uncertainty (entropy), but also the series of data uncertainties (for positioning, sampling and vintage).
Using the uncertainty assessment in the DSS
In principle, all uncertainties could be summed up in a standard way. However, combining all percentages of the uncertainty parameters into one overall uncertainty percentage is neither straightforward nor always valuable, as it masks the origin of the predominant uncertainty component. Additionally, data products serve multiple end users, and each of them may assign different weights to each uncertainty factor depending on the intended objective. Therefore, it was decided to make all uncertainties queryable in a custom-made decision support application that addresses the entire voxel model and allows exports as ASCII XYZ files.
In the DSS, policy makers and other end users have the possibility to produce suitability maps (plan view) and profile plots (cross-sections) of a specific research location in the BPNS. Queries can be made on lithology (most likely lithoclass, associated probabilities and average percentages for all lithoclasses), lithostratigraphy, heterogeneity, data density, modelling-related uncertainty (entropy) and data uncertainties (positioning, sampling and vintage). Key to an optimized, informed use is the translation of data-uncertainty percentages into understandable terminology (very unreliable to near perfect). The DSS is very versatile, offering the decision maker a lot of flexibility, enabling a comparison of scenarios as well as effects of applying quality filters in science-based decision making (Van Lancker et al. 2017, 2019; De Tré et al. 2018).
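As an example of such informed use, the Python sketch below shows how an end user might filter an ASCII XYZ export from the DSS against their own certainty thresholds; the file name and column names are assumptions rather than the actual export format.

```python
import pandas as pd

# Assumed file and column names for an ASCII XYZ export of the voxel model.
cols = ['x', 'y', 'z', 'lithoclass', 'entropy',
        'unc_positioning', 'unc_sampling', 'unc_vintage']
voxels = pd.read_csv('tiles_export.xyz', sep=r'\s+', names=cols)

# Keep only voxels meeting user-defined certainty thresholds, e.g. a query that
# tolerates vintage uncertainty but requires well-positioned, well-sampled data.
reliable = voxels[(voxels['unc_positioning'] >= 66) &
                  (voxels['unc_sampling'] >= 66)]
print(f'{len(reliable)} of {len(voxels)} voxels retained')
```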
Results
Uncertainty parameterization
The main factor in horizontal positioning uncertainty, the navigation system (Table 1), was translated into a coded quality flagging as a function of spatial accuracy (Table 2). Boreholes with older navigational information from before the 1990s (903 boreholes) are slightly more common than recent boreholes with high positioning accuracy (739 boreholes). The other offset attributes were added for this uncertainty assessment, bringing the spatial accuracy to the voxel resolution limit of 200 m. These latter errors are not yet used for uncertainty calculation and visualization in the DSS.
Characteristics of the navigation systems with their operating principles succinctly explained
Time period, quality flag and accuracy for each navigation system
Expert judgment was used to assign sampling-uncertainty scores on a relative scale from 1 (very uncertain) to 5 (very certain) to the various devices used (Table 3). The score of a device depends on the type of sediment being sampled, as derived from the data fields on main and secondary lithology (Table 4).
Characteristics of the main gear used for sampling soft substrata in the southern North Sea, with their advantages and disadvantages depending on sediment type
Quality flags for the sampling gear per sediment type
Quality flagging of vintage uncertainty was based on relating each sampling location to a fine-scale BPI (distinguishing crests, slopes, flats or broad swales, and local depressions) and translating these indices into scores from 1 (high seabed dynamics and low certainty) to 5 (low seabed dynamics and high certainty). The highest certainty corresponds to sand banks and swales, the lowest certainty to crests or slopes of migrating sand waves. Intermediate values were assigned to small depressions and intermediate flats.
Data selection for uncertainty mapping v. data density
To visualize how data selections intended to improve the confidence of data products affect overall quality and coverage, uncertainty maps were created. Figure 4 shows how data subsets with the most accurate (positioning error σ ≤ 200 m) and geologically most valuable (borehole penetration depth d ≥ 1 m) data compare, in terms of positioning accuracy, with the overall mapping using all available data. Both data selections result in reduced uncertainty, but at the cost of reduced coverage. The average borehole density in the BPNS decreases from 0.39 per km2 (1356 boreholes) to 0.29 per km2 (991 boreholes). On the basis of this set of maps, areas with an insufficient density of data fulfilling key uncertainty requirements are easily identified.
Positioning accuracy for (a) all available data, and (b) a subset with the most accurate (σ ≤ 200 m) and geologically most valuable (d ≥ 1 m) data.
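The subset selection behind Figure 4 can be reproduced conceptually as follows; the file name and column names are assumptions, whereas the thresholds, borehole counts and BPNS surface area are taken from the text.

```python
import pandas as pd

BPNS_AREA_KM2 = 3455

boreholes = pd.read_csv('sedilitho_boreholes.csv')  # placeholder file name
subset = boreholes[(boreholes['positioning_error_m'] <= 200) &
                   (boreholes['penetration_depth_m'] >= 1)]

for name, df in [('all data', boreholes), ('filtered subset', subset)]:
    print(f'{name}: {len(df)} boreholes, '
          f'{len(df) / BPNS_AREA_KM2:.2f} boreholes per km2')
# With the study's dataset, this selection reduces the density from
# 0.39 per km2 (1356 boreholes) to 0.29 per km2 (991 boreholes).
```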
Incorporating data uncertainty into 3D geological models
Another subset of the lithology dataset was selected by Hademenos et al. (2018) based on the availability of seismic data. Figure 5 visualizes two types of data uncertainty impacting the subsurface voxel model of the BPNS. Overall, the positioning accuracy is very high; only far offshore and near the French coast is the accuracy significantly lower. Nearshore and around several offshore sand banks, sampling uncertainty is limited. In most areas farther offshore, high-quality sampling is missing. In the well-sampled windfarm area near the border with the Netherlands, data selection (on the basis of quality criteria) or data weighting can be solutions to optimize the model. Data gaps (white patches) represent areas for which an uncertainty parameter cannot be modelled.
Top view of (a) positioning and (b) sampling (un)certainty, both derived from the subsurface voxel model for the BPNS. The uncertainty percentages range between 0% for very uncertain data (blue) and 100% for the highest certainty (red). White patches are data gaps.
Integration of data uncertainty in the DSS
Figure 6 illustrates the sandbank architecture of the well-investigated Middelkerke Bank (De Moor and Lanckneus 1993; Heyse and De Moor 1996), west of the port of Zeebrugge. Two parallel transects are drawn following a sequence of boreholes. The respective cross-sections show a fine-grained sand bank with medium sand on its top and scattered at depth, and a clayey base layer. Positioning data are near perfect. The sampling uncertainty differs between the two cross-sections. Cross-section 1 is based on little-disturbed vibrocores, whilst cross-section 2 relies on mixed borehole samples obtained by counter-flushing. Vintage uncertainty is much higher, reflecting the presence and potential migration of dynamic sand waves on the crest and slope of the sand bank. Overall, the voxel modelling results become increasingly unreliable where the mean borehole penetration is reached and exceeded.
Two cross-sections through the Middelkerke Bank, west of the port of Zeebrugge, showing the most likely lithoclass and associated data uncertainties (DSS; TILES consortium 2018b).
Discussion
Towards a flexible approach of data-uncertainty quantification and visualization
Any parameter of geological information stored in a database can be a source of uncertainty (Bárdossy and Fodor 2001). Whether it is the precise tracing of sample locations from the past or reconstructing which definitions of sand-size fractions were used in legacy borehole descriptions, correcting for all errors will generally be impossible. Crucial metadata may be missing, and known sources of error, such as marine (weather) conditions, may have had non-systematic effects. Even universally automated corrections for anomalies in modern-day borehole data and metadata, made on board at the time of sampling, will be imperfect. As not all sources of error impact the uncertainty of a data product equally, and because the degree of impact also differs per end user, the selection of relevant data uncertainties in a DSS should be adaptable to best fit the decision-making, mapping purpose or research objective. For instance, because the accuracy of the navigation systems is an order of magnitude better than the resolution of the current 200 m voxel model, it will not be a limiting factor in quantifying the spatial variability of aggregate resources (e.g. Hademenos et al. 2018). However, positional anomalies will become more important when assessing local sediment or habitat changes using models with a much smaller cell size (e.g. Cooper et al. 2007; Montereale-Gavazzi et al. 2018). Ideally, an uncertainty framework should be defined and regularly updated, focusing on minimum and maximum thresholds of acceptability.
Aside from data optimization and informed data elimination when needed, assigning uncertainty-based weights per data point or borehole interval will be an essential future endeavour. By implementing data weighting in the interpolation process, the vast majority of data can contribute to each data product, with weight dependent on data quality (low-quality data will receive smaller weights, whilst high-quality data will obtain more decisive weights). Weighting is of particular interest when combining visual borehole descriptions and laboratory measurements, which both have their advantages and disadvantages (van Heteren and Van Lancker 2015). Striking an optimal balance between data reduction and data weighting will be an iterative process aimed at optimal data coverage and minimal data uncertainty.
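One possible form of such weighting is sketched below: an inverse-distance interpolation in which each data point is additionally weighted by its certainty percentage. This is an illustration of the concept only, not the interpolation scheme used in the voxel model.

```python
import numpy as np


def quality_weighted_idw(x0, y0, xs, ys, values, certainty, power=2.0):
    """Inverse-distance interpolation at (x0, y0) in which each data point is
    additionally weighted by its certainty percentage (0-100), so that
    low-quality data still contribute, but with reduced influence."""
    d = np.hypot(xs - x0, ys - y0)
    w = (certainty / 100.0) / np.maximum(d, 1e-6) ** power
    return np.sum(w * values) / np.sum(w)
```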
In this paper, uncertainties were quantified for the field acquisition of lithology data, not for the quality of lithological descriptions, laboratory measurements or sediment-classification systems. A useful next step in data-uncertainty quantification concerns automated quality flagging of these descriptions for each borehole interval. A possible approach, implemented for the dataset of the Dutch subsurface, links quality to the number of key features described. Quality flags for laboratory results, such as particle-size and loss-on-ignition analyses, can be based on the suitability of the devices used to measure different sediment types and fractions. Similar to sampling gear, each analytical technique (laser: Coulter counter, Malvern Instruments; X-ray: sedigraphs; sieving; and settling tubes) has a unique set of benefits and drawbacks. Misalignment of sediment-classification systems or granularities, both between datasets and in relation to intended end use, also needs to be tackled. Apart from Wentworth (1922) and Folk (1954), the most common classification schemes in geology, some original data entries followed industrial norms or national standards (such as BSI (British Standards Institution), NEN (NEderlandse Norm) and ISO (International Organization for Standardization)). Harmonization and standardization efforts introduce additional data uncertainty that should be quantified.
Uncertainty products meeting present-day user needs
As multiple data products can be generated from the same dataset by including data uncertainty, clear communication on how maps or models are made and on the implemented thresholds of data uncertainty is indispensable. End users, and particularly decision makers, need a tool that is both intuitive and well documented. Summing all uncertainty percentages is the most straightforward approach, but it lacks the flexibility needed by each user to generate output matching their purpose and to trace back the predominant uncertainty component. To verify or critically examine the DSS outcome, end users can make use of a user-friendly national data portal (TILES consortium 2018c) that holds, for each borehole or sample: (a) original documents with lithological descriptions and metadata; (b) laboratory results with grain-size data and information on composition; (c) standardized and coded sheets from the originals with added data-quality flags indicating the level of uncertainty on location, gear and vintage; and (d) photographic material of cores and samples. As upcoming updates of standard GIS software will include the possibility of analysing voxel models, our voxel-based uncertainty approach can soon be adopted by offshore engineers and environmental scientists.
Marine habitat mappers are an important user group that will profit from quantified uncertainty assessments. They use sediment type of the upper voxel in the subsurface model for the BPNS (voxels representing the upper 1 m of the seabed) in the context of the European MSFD, which requires monitoring of environmental status and habitat change over a six-yearly evaluation cycle to achieve a good environmental status (GES; e.g. Korpinen et al. 2013). The assessed broad-scale habitats relate directly to the distribution of mud, sand, coarse and mixed substrates (e.g. 1:250 000 seabed substrate map of Europe; European Commission 2019). For Belgian waters, no transitions are allowed from one habitat into another (Belgian State 2012), and ongoing seabed-change assessments focus primarily on this requirement (Van Lancker et al. 2018). The incorporation of data uncertainty assists in distinguishing ‘real’ changes of sediment type compared to apparent or statistically insignificant changes caused by positioning-, sampling-, description- and interpretation-related inconsistencies or other sources of error. In order to ensure the protection of marine biodiversity in gravel-rich areas (Houziaux et al. 2008; Montereale-Gavazzi et al. 2018), it is particularly important to be aware of inadequate or insufficient sampling of the gravel beds.
Engineers stand to profit particularly from the quantification of uncertainty. The design of wind turbine foundations, cable and pipeline infrastructure and radar masts, for example, requires reliable, well-constrained values of geological and geotechnical properties (Hoek 1999; Gkoumas 2010) and thus careful data selection or weighting. When selecting stable repository sites for dumping of dredged material or identifying viable sand and gravel reserves, it is necessary to minimize geological risk (e.g. Hack et al. 2006). Kruiver et al. (2017) showed how a voxel model of the shallow subsurface above the Groningen gas field could be used to provide information for seismic hazard and risk analysis. In attributing the voxel model with shear wave velocity, the uncertainty of the velocity measurements was taken into account. In addition, efforts were made to mitigate the recognized data and model uncertainty. The pioneering study highlights the added value of novel uncertainty assessments that account for geological variability and data uncertainty. Such quantification requires close co-operation between data holders, geologists and geotechnical engineers, combining expert subsurface knowledge and a practical perspective.
Finally, any geospatial analyst, marine or terrestrial, benefits from combining newly created mapping products (2D or 3D) with confidence assessments. The relevance of instrumentation and gear accuracy and precision has long been recognized in satellite remote sensing (e.g. confidence maps in Torbick et al. 2016; Martos et al. 2017), with significant advances being made on the quantification of uncertainty factors, jointly forming uncertainty budgets (Ruddick et al. 2019). Uncertainty flags and percentages are equally suited to combined uncertainty analyses in budgets, and thus show great potential in an increasingly rational future use of marine subsurface datasets.
Conclusion
Harmonized, standardized and coded borehole data and metadata make it possible to automate the assignment of uncertainty values to relevant attributes. Quality flags for positioning, sampling and vintage can easily be converted into corresponding uncertainty percentages meeting the input requirements of existing 3D subsurface models.
Application of uncertainty filters reduces data density, impacting the degree of spatial coverage. Optimization of maps and models is only possible where data density is high enough. Any particular density reduction is not equally detrimental to all intended uses. To balance coverage and map quality, four steps are best taken in an iterative process: determination of the average data density; direct mapping of data quality; transformation of quality information into uncertainty percentages suitable for 3D interpolation; and optimizing the selection of data subsets on the basis of uncertainty maps.
A subsurface model with associated data uncertainties is most powerful when embedded in a decision support system (DSS) with understandable terminology, enabling policy makers and other end users to compare scenarios, visualize overall confidence and provide the feedback needed to fine-tune the model. Summing all uncertainty percentages, although straightforward, is not recommended as it prevents end users from generating dedicated output and from identifying the predominant uncertainty component.
Marine habitat mappers are an important user group that will profit from an intuitive and well-documented decision tool. In Marine Strategy Framework Directive (MSFD)-related monitoring of environmental status and habitat change, uncertainty quantification may help establish the statistical significance of observed seabed-sediment changes. Marine engineers can use data-uncertainty filters to optimize construction and infrastructure designs, and to reduce risk. Reproducible confidence maps of the presented uncertainty indicators will support geospatial analysts in their interpretative findings.
Including the full suite of data uncertainties in subsurface models is a work in progress. Loss of information can be minimized by weighting rather than eliminating data, which is of particular interest when working with visual borehole descriptions as well as laboratory measurements. Automated quality flagging of such uncertainty components is another future challenge.
Acknowledgements
This paper is a contribution to the BRAIN-be project Transnational and Integrated Long-term marine Exploitation Strategies (TILES).
Author contributions
LK: investigation (lead), methodology (lead), writing – original draft (lead); VH: visualization (supporting); RDM: visualization (supporting); JS: visualization (supporting), writing – review & editing (supporting); SH: methodology (supporting), writing – original draft (supporting), writing – review & editing (supporting); VVL: methodology (supporting), supervision (lead), writing – original draft (supporting), writing – review & editing (supporting)
Funding
This paper was funded by the Belgian Science Policy (BELSPO; BR/121/A2/TILES). The TILES research was fully supported by the federal Belgian programme for continuous monitoring of sand and gravel extraction paid from private revenues (ZAGRI). Contributing European Commission projects have been EMODnet Geology (MARE/2008/03; MARE/2012/10; EASME/EMFF/2016/1.3.1.2/Lot 1/SI2.750862; EASME/EMFF/2018/1.3.1.8/Lot1/SI2.811048), Geo-Seas (FP7-INFRASTRUCTURES ID: 238952), ODIP (FP7-INFRASTRUCTURES ID: 312492; H2020-EU.1.4.3.2. ID: 654310) and SeaDataNet/SeaDataCloud (FP6-INFRASTRUCTURES ID: 26212; FP7-INFRASTRUCTURES ID: 283607; H2020-EU.1.4.1.2. ID: 730960).
Data availability
The datasets generated during and/or analysed during the current study are available in the Belgian Marine Data Centre (BMDC) repository, https://www.doi.org/10.24417/bmdc.be:dataset:2161.
Scientific editing by Cherith Moses; Mike Long
© 2020 The Author(s). Published by The Geological Society of London
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 License (http://creativecommons.org/licenses/by/4.0/)