Basic grade interpolation in Leapfrog
Ron Reid, Jun 27, 2013
In my last post I mentioned that I composite for basic geostatistical reasons. Recently I observed a Leapfrog grade interpolation run on raw gold assays, using a linear variogram, and the result was awful to say the least; in fact it was completely wrong by any measure. From a geostatistical point of view a number of rules were broken. It is not the purpose of this article to go into these in detail, but rather to show how applying some basic rules of thumb will result in a much more robust grade model. Here I will cover the basics of the database, compositing, applying a top cut, approximating a variogram, and finding a "natural" cut for your first grade shell in order to define a grade domain to contain your model. Note that in the forthcoming discussion I refer largely to processes in Leapfrog Mining, it being a more powerful and useful tool than Leapfrog Geo in its current form; however if you are a Geo user the following still applies, the workflow may just be slightly different.
The database

As in all things, the GIGO principle ("garbage in, garbage out") applies in Leapfrog. If your database has not been properly cleaned and validated you will get erroneous results. I have noticed that many LF users will load a drill hole database and not fix the errors flagged by Leapfrog. The most common issue is retaining below detection values as negatives, such as -0.01 for below detection gold. If this is left in the database the interpolation will use this value as an assay and it will lead to errors in the interpolation model. It is better to flag it as a below detection sample and instruct Leapfrog in how to treat it. Where the database uses sentinel values for lost sample or insufficient sample you will get a spectacular fail when you attempt to model it (yes, it does happen). If you only have a few errors it is simple enough to add a few special values through the fix errors option to correct these issues (Figure 1). If you have a large number of errors the fastest way to fix them is to load the data once, export the errors in order to identify them, and then build a special values table that records each error. This is fairly simple to do and should be laid out as shown in Figure 2; you save this as a csv into the same folder as your drill hole data. This file can then be used for every LF Mining project you build, as long as your field names do not change and the particular assays do not vary, although it is not too big a job to adjust the table if need be. You then delete your database from your project and reload it, selecting the special values table at the same time (Figure 3); the database will then load with the assay issues fixed.
If you are a Leapfrog Geo user you cannot do this, as the special values option has been removed; you have to manually correct and validate every error flagged, a process that can become quite tedious in a large project, and frustrating! (Figure 4). Once your assay table has been validated you can move on; technically your whole database should be validated, but I will take it as a given that the process has been completed, as most people understand the issues around drill holes with incorrect coordinates or drill holes that take a right hand bend due to poor quality survey data.
Figure 1. Fixing a simple series of errors in Leapfrog Mining is a simple process, as the file can be adjusted to correct errors. In this case I have two errors: a series of values that represent insufficient sample, and -0.01, which is below detection. I can fix these using the 'Add Special Assay Value' option and selecting Not Sampled or Below Detection.
Figure 2. With Leapfrog Mining you can create a Special Values Table that can be loaded at the database import stage; the Special Values Table should be structured as above.
Figure 3. The top image shows where you can load the Special Values Table (blue arrow); this can only be loaded at the time of loading the database, it cannot be added after the database has been loaded.
Figure 4. Leapfrog Geo does not have a facility to import Special Assay Values; you must manually correct the errors every time you create a new project. Once the rules have been decided you must tick the "These rules have been reviewed" option to get rid of the red cross.
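The below-detection handling described above can be sketched in a few lines. This is a minimal illustration, not Leapfrog behaviour: the sentinel codes, the detection limit, and the half-detection-limit convention are all assumptions you would match to your own database.

```python
# Illustrative sentinel codes - match these to your own laboratory export.
BELOW_DETECTION = -0.01   # below detection gold
LOST_SAMPLE = -99.0       # hypothetical code for lost/insufficient sample
DETECTION_LIMIT = 0.01    # assumed assay detection limit, g/t

def clean_assay(value):
    """Return (usable_value, status) for one assay record."""
    if value == BELOW_DETECTION:
        # a common convention: replace with half the detection limit
        return DETECTION_LIMIT / 2, "below_detection"
    if value == LOST_SAMPLE:
        # lost/insufficient samples should be excluded, not treated as grade
        return None, "lost_sample"
    return value, "ok"

assays = [1.25, -0.01, 3.4, -99.0]
cleaned = [clean_assay(v) for v in assays]
# lost samples drop out; below-detection becomes 0.005 rather than a negative grade
usable = [v for v, status in cleaned if v is not None]
```

The point is simply that sentinel codes never reach the interpolation as if they were real grades.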
Composite your data!

I have not yet come across a drill hole database that consists of regular 1, 2 or 3 metre sampling; there is always a spread of sample lengths, occasionally due to sampling on geological boundaries, through to bulk background composites and unsampled lengths. This leads to a large variation in what is termed support length (Figure 5). It is also common for there to be a correlation between sample length and grade, ie smaller sample lengths where grade is higher (Figure 5). This can lead to problems with the estimation process that are well understood in geostats, perhaps less well understood outside of the resource geology world. Leapfrog's estimation is basically a method of kriging, and so is subject to all the foibles of any kriged estimate; these include issues of excessive smoothing and grade blow outs in poorly controlled areas. Leapfrog has a basic blog article about how Leapfrog's modelling method works on their website. Having multiple small high grade intervals and fewer larger low grade intervals will cause the high grade to be spread around (share and share alike).
A simple way of dealing with this is to composite your data. Note that I sit in the TC (top cut) post-composite camp, for purely practical reasons, as I will explain below.
Figure 5. Graph showing sample interval length with average grade by bin. It is evident that the 1 and 2m intervals have significant grade and should not be split by compositing; the 3m bin only has minor grade and the 6m bin has no grade, so it is probably not a significant issue if these bins are split by compositing. I would probably composite to 4m in this case, as the 4m assay data may still be significant even if the number of samples is not high (and I happen to know the dataset is for an open pit with benches on this order); 2m would also be a possibility that would not be incorrect.
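The Figure 5 style check, mean grade by length bin, is easy to reproduce outside Leapfrog. A minimal sketch with made-up sample data:

```python
from collections import defaultdict

# (length_m, Au_gpt) pairs - invented values purely for illustration
samples = [
    (1.0, 4.2), (1.0, 6.8), (2.0, 2.1), (2.0, 3.3),
    (3.0, 0.4), (4.0, 0.05), (6.0, 0.02),
]

# group grades into whole-metre length bins
bins = defaultdict(list)
for length, grade in samples:
    bins[round(length)].append(grade)

# mean grade per length bin, the quantity plotted in Figure 5
mean_by_bin = {b: sum(g) / len(g) for b, g in sorted(bins.items())}
```

Here the short intervals carry the grade, so a composite length that splits them would smear that grade artificially.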
With respect to regularising the sample length, this has a profound effect on the variability of the samples and will also give you a more robust and faster estimate. Selecting a composite length can be as involved as you want to make it; however there are a couple of rules of thumb. First, your composite length should relate to the type of deposit you have and the ultimate mining method; a high grade underground mine will require a different, more selective sample length to a bulk low grade open pit operation, for example. The other rule of thumb is that you should not "split" samples, ie if most of your samples are 2m, selecting a 1m or even a 0.5m composite will split a lot of samples, spreading the same grade above and below a composite boundary; this gives you a dataset with drastically lower variance than reality (which translates as a very low nugget in the variogram), and results in a poor estimate. If you have 2m samples you should composite at 2, 4 or 6m; if you have quite a few 4m samples then this should be pushed out to 8m if 4m is determined to be too small; the composite should always be a multiple of those below it. This must be balanced against the original intent of the model and practicality: it is no good using 8m composites if your orebody is only 6m wide, and the longer the composite the smoother the estimate, and you are creating the same issue you are trying to avoid by not splitting samples. You will find that there is commonly very little change in the basic statistics once you get past 4-6m, which implies that there is no real reason to go larger from a purely stats point of view; there may be, however, from a practical point of view.
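The no-splitting rule falls out naturally once you length-weight your composites. A minimal sketch of downhole length-weighted compositing (the interval layout and grades are illustrative):

```python
def composite(intervals, comp_len):
    """intervals: list of (from_m, to_m, grade).
    Returns (from_m, to_m, grade) composites, length-weighting each
    sample by the portion falling inside the composite interval."""
    end = max(t for _, t, _ in intervals)
    out = []
    start = 0.0
    while start < end:
        stop = min(start + comp_len, end)
        weighted, total = 0.0, 0.0
        for f, t, g in intervals:
            overlap = min(t, stop) - max(f, start)
            if overlap > 0:
                weighted += g * overlap
                total += overlap
        if total > 0:
            out.append((start, stop, weighted / total))
        start = stop
    return out

# 2m samples composited to 4m: no sample is split across a composite boundary
samples = [(0, 2, 1.0), (2, 4, 3.0), (4, 6, 2.0), (6, 8, 4.0)]
comps = composite(samples, 4.0)
# -> [(0.0, 4.0, 2.0), (4.0, 8.0, 3.0)]
```

With 2m inputs a 4m composite never splits a sample, whereas a 1m composite would split every one of them, artificially deflating the variance.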
"or the sa#e of the argument here let us assume a metre composite !ill suit our reuirements. $aving assessed our ra! data !e find that !e have a data set that has e%treme grades that imply the reuirement for a top cut of say 2gpt gold I !ill stic# to gold in this discussion but the principle applies across the board4, the uestion becomes &should I composite pre or post applying the top cut@A. et;s say the 1m samples that ma#e up a particular composite are 2., .?, 1.>, 12.1, and 1?gpt. -he straight average of this composite !ould be 30.> gpt. If I apply the top cut first I !ould get 2., .?, 1.>, 2.0 and 1?gpt !hich !ill composite to 10.? gpt gold. If I apply the top cut after, my grade for the composite !ill be 2gpt given the original composite grade is 30.> cut to 24. +s you can see by applying the top cu t first !e are potentially !iping a significant amount of metal from the system, also !hen assessing the dataset postcomposite it is sometimes the case that a dataset that reuired top cutting precompositing no longer reuires it post, or that a very different top cut is reuired * sometimes a higher one than indicated in the ra! dataset. If geological sampling has been done !here sample lengths are all over the shop this becomes even more involved as length !eighting has to be involved. Besides, it is a simple process to top cut postcompositing in eapfrog !hich ma#es the decision easy "igure >4. /hy do !e top cut in the first place you might as#, simply because if !e !ere to use the data !ith the very high grade say the 12.1 gpt sample above4 !e !ill find that the very high grades !ill unduly influence the estimate and give you an overly optimistic grade interpolation. +pplying a top cut in leapfrog is a simple process of assessing the data in the histogram "igure >48 -able 1 sho!s the statistics for the gold dataset sho!n in "igure and "igure >, composited by lengths of 2 and 9m. =tatistical purists !ill say the
Figure 6. You can pick a simple top cut that stands up to relatively rigorous scrutiny using the graph option when generating the interpolant. A widely used method is to select where the histogram breaks down; at its most basic this is where the histogram starts to get gaps. Here it is approximately 25g/t for the 2m dataset on the left but 30g/t for the 4m dataset on the right (arrowed in red; the lognormal graph is simply for better definition); you enter this value when generating the interpolant.
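The pre- versus post-cut arithmetic is worth sanity-checking yourself. A small sketch, using illustrative 1m grades with one extreme value and an assumed 25gpt top cut:

```python
# five equal-length (1m) grades, one of them extreme - illustrative values
grades = [2.5, 5.8, 1.6, 125.1, 18.0]
TOP_CUT = 25.0

# cut first, then composite (simple average, since lengths are equal)
cut_then_comp = sum(min(g, TOP_CUT) for g in grades) / len(grades)

# composite first, then cut the composite grade
comp_then_cut = min(sum(grades) / len(grades), TOP_CUT)

print(round(cut_then_comp, 2))  # 10.58
print(round(comp_then_cut, 2))  # 25.0
```

Cutting first strips roughly 14gpt from the 5m composite, which is the metal-wiping effect described above.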
The Variogram

Never run a grade interpolation using a linear variogram; doing so implies that two samples, no matter how far apart, have a direct linear relationship, which is never true in reality and can lead to some very weird results (Figure 7). A basic understanding of sample relationships is essential when running a grade interpolation: namely, that there is always some form of nugget effect, ie two samples side by side will show some difference, and that as you move the samples further apart they lose their relationship to each other, so that at some point the samples bear no relationship at all. In cases where two samples side by side bear no relationship we have a phenomenon known as pure nugget; in this case you may as well take an average of the whole dataset, as it is nigh impossible to estimate a pure nugget deposit, as many companies have found to their cost.
Figure 7. This figure shows the effect of applying a linear isotropic variogram (blue) and a spheroidal 50% nugget variogram (yellow) to the same dataset; each surface represents a 0.3g/t shell. A significant blow out is evident in the linear variogram.
Given that one benefit of Leapfrog is its ability to rapidly assess a deposit, it does not make sense to delve deeply into a geostatistical study of sample distribution and generate complex variograms, especially given Leapfrog's simplistic variogram tools. However a basic understanding of how a variogram should behave for various deposit types will allow you to approximate the variogram for your dataset. For instance, the nugget value for most deposits (assuming few sample errors) will generally be the same across the world: a porphyry gold deposit will have a nugget somewhere between 10-20% of the total variance (call it 15%), epithermal gold deposits tend to sit in the 30-60% range (call it 40%), and lode gold deposits are commonly in the 50-70% range (call it 60%).
A porphyry deposit might have a shallower shoulder and thus an alpha value of say 3, whereas a lode gold deposit may have a very sharp shoulder, making an alpha value of maybe 9 more appropriate. The alpha values are also useful if you know your variogram has several structures; a lower alpha number helps approximate this. Beware if you are a Leapfrog Geo user: this relationship is the reverse. Changes to the way Leapfrog Geo works mean that a LF Geo Alpha 3 = a LF Mining Alpha 9 (software engineers just like to keep us on our toes). Let us say we have a lode gold deposit; we will assume a nugget of 60% of the sill, a range of say 25m, and use an alpha value of 9.
"igure ?. "igure sho!ing the effect of varying the nugget value, -op is a straight isotrop ic linear interpolation /Linear is al%a(s a *o *o0 belo% that is a $> nugget then a $> nugget and finall( a 9$> nugg
"igure . "igure sho!ing the effect of the +lpha Cariable, on the graph on the top is for LF Mining the graph on the botom is for LF eo? *ote that the variable changes bet%e en Mining and eo so that a higher &lpha variable in Mining /eg @0 is eAuivalent to a l o% &lpha in eo /eg
The natural cut

The next step is to define the natural cut of the data. Sometimes when we run an interpolation we find that the lowest cutoff we use creates a solid box within our domain (Figure 10); this is because there are too many samples at that grade that are unconstrained, ie we are defining a background value. The first step in defining a set of shells from our interpolant is to start with one low grade shell, say 0.2gpt. As we are creating just one shell, after the interpolant has been created the shell is quite quick to generate. We may find that 0.2 fills our domain, so generate a shell of 0.3 and rerun; continue doing this until you find the cutoff where you suddenly switch from filling the domain to defining a grade shell. This is the natural cutoff for your data (Figure 10). You can use this as the first shell in your dataset and simply add several more at relevant cutoffs for assessment and viewing, or you can generate a Grade Domain using this cutoff to constrain an additional interpolation that you can then use to select and evaluate a grid of points, effectively generating a Leapfrog block model.
Figure 10. Figure showing the effect of shells above and below a natural cut-off. Brown = 0.2g/t, which is an unconstrained shell; blue = 0.3g/t, which is the constrained shell and defines the natural cut-off of the dataset.
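The iterate-and-rerun search can be mimicked numerically. In the sketch below the fraction of samples above the cutoff stands in for the shell volume, which is an assumption on my part; in Leapfrog you judge the shell visually, as above.

```python
def natural_cut(grades, start=0.2, step=0.1, max_fill=0.5):
    """Step the cutoff up from `start` until fewer than `max_fill` of the
    samples sit above it, ie until the 'shell' stops swallowing the domain."""
    cutoff = start
    while sum(g >= cutoff for g in grades) / len(grades) > max_fill:
        cutoff += step
    return round(cutoff, 2)

# invented grades: lots of ~0.25 background plus a small mineralised tail
grades = [0.25, 0.28, 0.22, 0.31, 0.9, 1.4, 0.26, 2.2, 0.24, 0.27]
cut = natural_cut(grades)
# at 0.2 every sample is "in" (the solid box of Figure 10);
# by 0.3 only the mineralised tail remains, so 0.3 is the natural cut
```

The 50% fill threshold is arbitrary; the point is the sudden switch from background to a constrained shell.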
"ollo!ing this process outlined above !ill vastly improve your grade modelling and lead to better interpolations !ith better outcomes. 'ote I have not spo#en about search ellipses, ma5or, minor or semi minor a%is, orientations of grade etc, this is because this is all dependent upon the
deposit. 6our deposit may reuire an isotopic search, or some long cigarshaped search, depending upon the structural, lithological and geochemical controls acting upon the d eposit at the time of formation and effects post formation. -he average nugget and the range of the variogram !ill generally conform to !hat is common to that deposit type around the !orld. + bit of study and research on the deposit is something that should already have been done as part of the e%ploration process, adding a uic# assessment of common variogram parameters is not an arduous addition to this process. It is not a reuirement to understand the intricacies of variogram modelling, nor the maths behind it, but #no!ing the average nugget percent and range for the deposit type should be an integral part of your investigations, and should inform your eapfrog )rade interpolations. $appy modelingG