What is one example of a problem mine operators face that can be solved with machine learning?
Inefficient material classification results in significant lost revenue during every mine project life cycle stage.
During the exploration stage, the ability to rapidly and accurately classify core intervals for newly drilled boreholes is essential to making real-time decisions about future drillhole location, spacing and depth. Operators typically have limited capability in the field to classify new core intervals and make targeting decisions.
In production and early closure, millions to billions of tons of excavated material require classification (and then segregation). This material represents multiple classes of ore, waste, and construction material. Detailed, expensive classification studies are often conducted, producing relatively small datasets that are difficult to extend to the operational scale.
Common Classification Issues
Advanced drill core studies suggest that drillhole intervals can be classified by distance to mineralization as a function of a series of complex and overprinting mineral alteration assemblages and structural events. Operators have difficulty recognizing this complex sequence and opt instead to rely on simple metal assay, which has a much smaller footprint, for targeting.
Distance to mineralization is successfully predicted by machine learning algorithms using assay data for each borehole interval.
Detailed geometallurgical studies indicate that acid consumption and metal recovery rates in multi-element ore are tied to complex mineral and trace element assemblages. These assemblages are not readily identified at the operational level, and a simpler scheme based on single element grade is adopted instead.
Acid consumption and metal recovery rates can be quantified using borehole assay and a machine learning approach.
Environmental characterization studies indicate that classifying waste by lithology effectively separates materials with high acid rock drainage (ARD) potential from those with low potential. Waste with high ARD potential costs significantly more to dispose and manage. However, operators cannot reliably identify lithology in the field, and thus resort to using far less efficient carbon and sulfur assay for material classification and segregation.
Machine learning algorithms correctly predict lithology based on waste material bulk composition, resulting in far more accurate classification and segregation.
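Each of the cases above follows the same supervised-classification pattern: learn a mapping from assay chemistry to a class label, then apply it to new intervals. The sketch below illustrates the idea with a toy nearest-centroid classifier; all assay values, analytes and lithology names are invented for illustration, and a real project would use a full machine learning library rather than this minimal example.

```python
# Toy sketch: classifying waste lithology from bulk chemistry.
# All numbers and class names below are invented for illustration.

def centroid(rows):
    """Column-wise mean of a list of equal-length assay vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def nearest_centroid_predict(sample, centroids):
    """Return the class whose centroid is closest (squared Euclidean) to the sample."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda cls: dist2(sample, centroids[cls]))

# Hypothetical training data: [SiO2 %, S %, C %] for intervals of known lithology.
training = {
    "argillite": [[55.0, 2.1, 1.8], [57.0, 2.4, 1.5]],  # higher S: higher ARD risk
    "limestone": [[12.0, 0.1, 9.5], [14.0, 0.2, 9.0]],  # carbonate: lower ARD risk
}
centroids = {cls: centroid(rows) for cls, rows in training.items()}

# Classify a new interval from its bulk chemistry alone.
new_interval = [56.0, 2.0, 1.6]
print(nearest_centroid_predict(new_interval, centroids))  # prints: argillite
```

The same skeleton applies whether the target is lithology, distance to mineralization, or geometallurgical behavior; only the training labels and predictor variables change.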
The bottom line
When materials are under-classified or mis-classified, mining projects lose money.
The Machine Learning Solution
How can machine learning help address issues with materials classification?
Significant exploration and production borehole assay data exists for most mining projects, much of which goes largely unused. Machine learning algorithms can be used to establish borehole assay as a proxy for the many classification criteria that are identified by way of detailed scientific study at each project life cycle stage. The advantage of using borehole assay as a proxy for these classification criteria is that it allows the results of detailed classification studies to be extended to every existing and new drillhole for which assay data is available. This means that new exploration borehole intervals can be rapidly classified for target vectoring, and that geometallurgical and environmental behavior criteria can be used for mine planning and extended to the operational scale.
A relatively new type of downhole data, hyperspectral mineralogy, shows great promise as a superior alternative to standard assay for extending the results of detailed classification studies to new and existing boreholes. The advantages of hyperspectral mineralogy are that the data is continuous (i.e. significantly more data is captured), contains ‘organized’ compositional, structural and paragenetic information, and can be presented using a variety of filters. Furthermore, the data are collected via a non-destructive and non-invasive approach. Hyperspectral imagery differs from traditional assay in that the data is unstructured, that is, not organized in such a way that it can be stored in a simple database such as Access or SQL. Unstructured imagery such as hyperspectral mineralogy requires a specialized machine learning approach known as deep learning, which relies on artificial neural networks.
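To give a rough sense of what "relies on artificial neural networks" means in practice, the sketch below pushes one hypothetical pixel spectrum through a tiny, untrained feedforward network to produce per-class probabilities. The band count, layer sizes and weights are all invented placeholders; a real deep-learning model for hyperspectral imagery would be a trained (typically convolutional) network, not random weights.

```python
import numpy as np

# Toy sketch of a neural-network forward pass over hyperspectral data.
# Sizes and weights are invented; trained parameters would replace them.
rng = np.random.default_rng(0)

n_bands, n_hidden, n_classes = 256, 32, 4  # spectral bands in, mineral classes out

# Random, untrained layer weights -- placeholders for learned parameters.
W1 = rng.normal(scale=0.1, size=(n_bands, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))

def forward(spectrum):
    """One forward pass: spectrum -> hidden features -> class probabilities."""
    hidden = np.maximum(0.0, spectrum @ W1)  # ReLU activation
    scores = hidden @ W2
    exp = np.exp(scores - scores.max())      # softmax -> probabilities
    return exp / exp.sum()

pixel_spectrum = rng.random(n_bands)         # one pixel's reflectance curve
probs = forward(pixel_spectrum)              # one probability per mineral class
```

Training adjusts W1 and W2 so that, across many labelled pixels, the predicted probabilities match the known mineral classes.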
Machine Learning Algorithms
How does machine learning work?
A machine learning model takes as input the results of a classification study: for example, classified distance to mineralization, geometallurgical behavior or acid-generating potential for a number of borehole intervals, together with the assay chemistry associated with each of these intervals. The classification criteria are known as ‘target variables’, the properties the machine learning algorithm tries to predict from the corresponding assay chemistry. This dataset is split into a training dataset (typically 75 to 80% of the dataset) and a predictor dataset (the remaining fraction). Using the training dataset, machine learning algorithms attempt to identify the range of assay chemistry associated with each target variable value (target variables can be categorical or continuous), as follows:
- Examples of target variables: distance to mineralization, ore grade, acid consumption, metal recovery rate, acid generation and neutralization potential, metals leaching potential
- Examples of predictor variables: whole rock chemistry (major element oxides / trace elements) and mineralogy
Once the machine learning model has identified the assay characteristics of the target variables in the training dataset, it then tries to predict target variable values in the predictor dataset based on assay chemistry. Since the classification for each borehole interval in the predictor dataset is already known, the accuracy of the machine learning predictions can be compared against the actual classification. If the predictive accuracy is low, additional steps can be taken to improve accuracy. In this sense, the machine learning process is iterative.
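The split-train-score loop described above can be sketched end to end in a few lines. This is a toy illustration only: the intervals are synthetic, the "model" is a simple learned threshold standing in for a real machine learning algorithm, and the class names are invented.

```python
import random

# Toy sketch of the workflow: split labelled intervals ~80/20 into training
# and predictor sets, fit a model on the first, score it on the second.
random.seed(42)

# Synthetic intervals: (assay value, target class), e.g. sulfur % vs ARD class.
values = [round(random.uniform(0, 1), 3) for _ in range(100)]
intervals = [(x, "high_ARD" if x > 0.5 else "low_ARD") for x in values]

random.shuffle(intervals)
split = int(0.8 * len(intervals))                    # ~80% train / 20% predictor
train, predictor = intervals[:split], intervals[split:]

# "Fit": learn a threshold as the midpoint between the two class means.
high = [x for x, y in train if y == "high_ARD"]
low = [x for x, y in train if y == "low_ARD"]
threshold = (sum(high) / len(high) + sum(low) / len(low)) / 2

# Predict on held-out intervals and compare against the known classes.
predictions = ["high_ARD" if x > threshold else "low_ARD" for x, _ in predictor]
accuracy = sum(p == y for p, (_, y) in zip(predictions, predictor)) / len(predictor)
print(f"hold-out accuracy: {accuracy:.2f}")
```

If the hold-out accuracy were low, the iteration described above would follow: add predictor variables, gather more labelled intervals, or change the algorithm, and re-score.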
Life Cycle Geo Expertise
Why Life Cycle Geo?
A typical data science project requires deep expertise in three broad areas:
- Machine learning expertise
- Domain expertise
- IT engineering
Successful execution of a machine learning project requires strength in each of these three areas. Life Cycle Geo has significant experience in all three:
- Data science: LCG has successfully executed numerous projects requiring unsupervised (dimensionality reduction, clustering, data structure analysis) and supervised (classification, regression, anomaly detection) learning techniques for both structured and unstructured datasets, as well as those requiring other advanced statistical, machine learning and data visualization methods.
- Domain expertise: LCG has worked across the project life cycle for numerous clients, and specializes in developing an advanced, site-specific understanding of geologic material characteristics, and effectively applying this understanding to data science projects.
- IT: LCG and its partners are able to deliver solutions to scale as project needs warrant. We have experience working on projects of all sizes, from small Excel-based datasets that are analyzed using simple R or Python scripting platforms to large image databases hosted and evaluated in cloud-based environments such as Microsoft Azure.