Abstract: |
Over the past decade, many geostatistical algorithms have been developed to generate sets of conditional simulations that better reflect the complexities of a geological deposit. Given that it is difficult to accurately infer high-order spatial statistics from sparse data sets, a large portion of these algorithms depend on training images, or geological analogues, as a source of patterns or statistics. Generating these training images, in particular training images for continuous variables, can be a labour-intensive process without any guarantee that the true high-order statistics of the deposit are accurately represented. For these reasons, there is interest in being able to infer or approximate these high-order statistics from sparse data sets, so that they can be integrated into existing and new classes of stochastic simulation algorithms that do not require training images. This work proposes a decomposition of a high-order statistic (moment) into a set of weighted sums. Using this decomposition, the n-point moment can be approximated by searching for pairs of points and combining the pairs across the various directions in a later step, rather than by searching for replicates of the full n-point template, which is often unreliable for sparse data sets. The approximation is tested with varying amounts of data on a sample data set, and the methods used to generate the values at the unknown locations from the pairs are analysed. Experimental results indicate that the approximations perform much better than the moments computed directly from a sparse sample data set, particularly at shorter lag distances with less lag tolerance. Additionally, the quality of the approximation does not appear to degrade significantly at higher orders, which is generally not true for the moments computed from the sample data.