Articles | Volume 7, issue 3
https://doi.org/10.5194/gchron-7-265-2025
Research article | 05 Aug 2025

The quantification of down-hole fractionation for laser ablation mass spectrometry

Jarred C. Lloyd, Carl Spandler, Sarah E. Gilbert, and Derrick Hasterok
Abstract

Down-hole fractionation (DHF), a known phenomenon in static spot laser ablation, remains one of the most significant sources of uncertainty for laser-based geochronology. A given DHF pattern is unique to a set of conditions, including material, inter-element analyte pair, laser conditions, and spot geometry. Current modelling methods (simple or multiple linear regression, spline-based regression) for DHF do not readily lend themselves to uncertainty propagation, nor do they allow for quantitative inter-session comparison, let alone inter-laboratory or inter-material comparison.

In this study, we investigate the application of orthogonal polynomial decomposition for quantitative modelling of LA-ICP-MS DHF patterns. We outline the algorithm used to compute the models, apply it to an exemplar U–Pb dataset across a range of materials and analytical sessions, and finally provide a brief interpretation of the resulting data.

In this contribution we demonstrate the feasibility of quantitative modelling and comparison of DHF patterns from multiple materials across multiple sessions. We utilise a relatively new data visualisation method, uniform manifold approximation and projection (UMAP), to help visualise the data relationships in this large dataset while comparing it to more traditional methods of data visualisation.

The algorithm presented in this research advances our capability to accurately model LA-ICP-MS DHF and may facilitate reliable decoupling of the DHF correction for non-matrix-matched materials, lead to improved uncertainty propagation, and facilitate inter-laboratory comparison studies of DHF patterns.

The generalised nature of the algorithm means it is applicable not only to geochronology but also more broadly within the geosciences where predictable linear (x-to-y) relationships exist.

1 Introduction

LA-ICP-MS of geological materials has advanced significantly since its adoption at the end of the 20th century and is today the technique of choice for most mineral geochronology applications. Initial geochronological studies on zircon were only able to produce tens of individual analyses per session due to technical, time, and computing limitations (e.g. Hirata and Nesbitt, 1995). Nowadays high-precision individual analyses on multiple minerals using a range of geochronologic systems (e.g. U–Pb, Rb–Sr, Lu–Hf, Re–Os) can be rapidly and accurately acquired (Chew et al., 2019; Gehrels et al., 2008; Glorie et al., 2023; Hogmalm et al., 2017; Kendall-Langley et al., 2020; Larson et al., 2024; McFarlane, 2016; Mohammadi et al., 2024; Roberts et al., 2020; Simpson et al., 2021; Subarkah et al., 2021; Tamblyn et al., 2024; Zack et al., 2011). However, to achieve accurate, high-quality data, correction procedures need to be implemented, including calibration to reference material(s) (RM), corrections for matrix offsets, and correction of inter-element down-hole fractionation (DHF) (Agatemor and Beauchemin, 2011; Allen and Campbell, 2012; Gilbert et al., 2017; Günther et al., 2001; McLean et al., 2016; Ver Hoeve et al., 2018). As DHF is a geometry-dependent spot-ablation phenomenon unique to each material and (inter-element) analyte ratio pair (e.g. NIST SRM glasses, zircon, 206Pb/238U, 207Pb/235U, 87Rb/87Sr), it cannot be corrected via ICP-MS tuning (Mank and Mason, 1999). However, the magnitude of DHF can be balanced against the overall count rate by adjusting the spot geometry and fluence parameters for analysis of a given material (Guillong and Günther, 2002; Košler et al., 2005). Researchers have proposed several methods for minimising DHF, typically by surface rastering or limiting acquisition times to short intervals; however, these compromise either the spatial or temporal resolution of analysis (Horstwood et al., 2003; Paton et al., 2010).
Alternatively, regression modelling can be used to correct DHF during data reduction, either using a predetermined empirical model or by fitting a model to the observed data (e.g. Horn et al., 2000; Paton et al., 2010). Currently, the latter method, in which a regression model is fitted to the observed DHF pattern of an RM and then applied to unknowns, is used by commercial data reduction software (e.g. LADR: Norris and Danyushevsky, 2018; Iolite: Paton et al., 2011) for DHF correction in LA-ICP-MS workflows. This process uses a matrix-matched RM (or a glass RM if no suitable matrix-matched material is available) and requires the assumption that the unknowns (i.e. target samples) have the same DHF behaviour as the RM. If the modelled RM poorly matches the DHF of the unknown, it will introduce an artefact (user-induced error) that may either increase the uncertainty of the observed ratio of the unknowns if using a time-matched signal, or result in over- or underestimation of the observed ratio if using a subset of the signal (Fig. 1).

https://gchron.copernicus.org/articles/7/265/2025/gchron-7-265-2025-f01

Figure 1. Demonstration of the impact of using an inappropriate material to correct down-hole fractionation (DHF). The two rows are 206Pb/238U ratios for NIST SRM 610 glass (a–c) and GJ-1 zircon (d–f), collected in a single session. The columns from left to right are: not corrected for DHF (grey points, raw ratios), DHF corrected using NIST SRM 610 as the model, and DHF corrected using GJ-1 as the model. Blue points indicate an appropriate DHF correction (e.g. GJ-1 by GJ-1), while orange points indicate an inappropriate DHF correction (e.g. GJ-1 by NIST SRM 610). Black horizontal spans indicate the geometric mean and its asymmetric lower and upper 2 standard errors for each data set. The percent error shown is the larger of the two geometric standard errors (usually the upper uncertainty) at the 2 standard error level. Note the significantly larger error for the inappropriate corrections (orange points) compared to the appropriate (blue) corrections. These uncertainties would then propagate into further processing steps to obtain accurate LA-ICP-MS ratios (e.g. calibration to a known value).


Additionally, these modelling methods (i) do not readily lend themselves to arithmetic propagation of their uncertainties (Paton et al., 2010), (ii) require an arbitrary user choice of fitting method (simple or multiple linear regression polynomial order, or non-linear regression), and (iii) cannot be quantitatively compared between laboratories or even between analytical sessions within a laboratory. Thus, inter-element DHF remains one of the largest sources of uncertainty for LA-ICP-MS analyses and is difficult to compare either qualitatively or quantitatively (Horstwood et al., 2016).

Here we present a technique to numerically quantify processes in the geosciences that can be modelled by a linear combination of basis functions (e.g. DHF) using the coefficients of orthogonal polynomials fit to the signals. We build upon the algorithms for quantitative modelling of rare earth element patterns by O'Neill (2016) and Anenburg and Williams (2022). Our modified implementation of these algorithms uses input uncertainties as weights and can be applied more generally to data where there is a predictable x-to-y relationship. Here we demonstrate the technique's application to modelling DHF in a quantitative manner.

Using an LA-ICP-MS dataset (time, ratio of analytes), an orthogonal decomposition is used to fit a polynomial to the signal data by generalised least squares (Fig. 2). Using orthogonal polynomial decomposition during model fitting enables the quantitative comparison of DHF patterns via their coefficients for data from different analyte ratio pairs, for varied materials, and for inter-session (Fig. 2) and inter-laboratory datasets. Furthermore, this fitting method allows for improved computation of the uncertainty of each coefficient (and thus of the overall fit) and their covariances, which can then be propagated into the final result.

In this research we highlight the utility of the algorithm for numeric comparison of the U–Pb DHF patterns from a range of natural and synthetic RMs used for U–Pb age calibration. We envisage that this algorithm could be used to self-correct individual analyses for DHF (or as spline bases for signals that display zonation, e.g. different age domains), with a fallback to a known RM in the case of particularly noisy (analytical noise) data. This would allow reliable decoupling of the DHF correction for non-matrix-matched materials and propagation of the uncertainty in the down-hole model into the result, thereby improving the accuracy and precision of LA-ICP-MS analyses.

https://gchron.copernicus.org/articles/7/265/2025/gchron-7-265-2025-f02

Figure 2. (a) Centred, point-wise 206Pb/238U ratios for all zircon GJ-1 analyses measured in multiple analytical sessions from 2020 to 2024 that used the same laser parameters. All analyses in this subset of data were collected using a 30 µm spot diameter, 5 Hz repetition rate, and nominal fluence of 2.0 J cm−2. The black line is the fitted orthogonal polynomial, with grey shading indicating the 95 % confidence interval of the fit. The shaded confidence interval is barely visible due to its small absolute range. (b) Individual orthogonal components (i.e. the result of λjφj(x)) of the fit in the upper panel, excluding λ0 (mean ratio, i.e. vertical central tendency) as this is a function of mass-spectrometer tuning and not a result of down-hole fractionation (DHF). φj is the jth orthogonal function evaluated at each xi (i.e. x0, x1, x2, …) for j = 1 to order k. For these aggregated GJ-1 data, different mass-spectrometer tuning parameters and drift are accounted for by centring the data (subtraction of the geometric mean and addition of a constant of 1), resulting in λ0 ≈ 1. Components λ1 and higher represent the increasing shape components (linear slope, quadratic curvature, etc.) of the observed DHF pattern. Note that in this case the magnitude of λ4 is approximately zero, and as such, the result of λ4 is approximately a horizontal line at zero. The sum of all components forms the polynomial in the upper panel.


2 Experiment

To improve our ability to assess, quantify, and compare DHF we implement a method of orthogonal polynomial decomposition to perform multiple linear regression (polynomial) model fitting to time-resolved analytical data from ICP-MS instruments. In keeping with the terminology of O'Neill (2016) and Anenburg and Williams (2022) we use the term lambda (λ) coefficients to denote the polynomial coefficients derived from models using orthogonal decomposition.

Unlike regular linear regression fitting, the implementation of orthogonal polynomial decomposition for regression modelling imparts independence of the lower-order polynomial coefficients from their higher-order counterparts (Bevington and Robinson, 2003). That is, the first coefficient is independent of the second- and higher-order coefficients, the second coefficient is independent of the third- and higher-order coefficients, and so forth. In practice, this means that the values of the polynomial coefficients are stable (one can fit a first- or second-order polynomial to the same data and the value of the first coefficient will not change) and that they have physical meaning in relation to the data. In the case of DHF data, λ0 (the first coefficient) represents the (arithmetic) mean ratio, while λ1 and higher are the shape parameters that represent the DHF pattern (Fig. 2). For example, λ1 represents the linear slope (mean rate of change in ratio), while λ2 represents the quadratic curvature of the DHF pattern (Fig. 2). The independence and physical meaning of the lambda coefficients allow them to be used to quantitatively compare independent fits (e.g. single analyses, materials, analytical sessions, differing laboratories) so long as other parameters (e.g. fluence, spot diameter/volume, laser wavelength) are considered. Further details on the fitting algorithm and the mathematics behind this process are outlined below in Sect. 2.2, detailed in Appendix A, and described in Bevington and Robinson (2003), O'Neill (2016), and Anenburg and Williams (2022). The raw data ingestion and required preprocessing steps are outlined below in Sect. 2.1.
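The independence property can be illustrated with a minimal sketch (in Python rather than the authors' Julia implementation; all names and data here are ours, for illustration only). Because the basis is orthogonalised over the sample points, each coefficient is a simple projection of the data onto one basis vector and does not change when higher-order terms are added:

```python
# Illustrative sketch of coefficient independence under an orthogonal basis
# (Python rather than the authors' Julia implementation; names are ours).
def orthogonal_basis(x, order):
    """Gram-Schmidt orthogonalisation of 1, x, x^2, ... over the sample points."""
    basis = [[1.0] * len(x)]
    for k in range(1, order + 1):
        phi = [xi**k for xi in x]
        for prev in basis:
            proj = sum(p * q for p, q in zip(phi, prev)) / sum(q * q for q in prev)
            phi = [p - proj * q for p, q in zip(phi, prev)]
        basis.append(phi)
    return basis

def lambdas(x, y, order):
    """Each coefficient is an independent projection of y onto one basis vector."""
    return [sum(p * yi for p, yi in zip(phi, y)) / sum(p * p for p in phi)
            for phi in orthogonal_basis(x, order)]

x = [0.0, 1.0, 2.0, 3.0, 4.0]          # e.g. ablation time (s)
y = [1.00, 1.02, 1.05, 1.09, 1.15]     # synthetic centred DHF-like ratios
lam_order1 = lambdas(x, y, 1)
lam_order2 = lambdas(x, y, 2)
# lam_order1 and lam_order2 share identical lambda_0 and lambda_1: adding the
# quadratic term does not change the mean or the linear slope.
```

Here λ0 recovers the arithmetic mean of y and λ1 the linear slope, whichever total order is fitted, which is exactly the stability described above.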

Code to perform the data ingestion, preprocessing, and fitting was written in the Julia (version 1.10, LTS) programming language (Bezanson et al., 2017). The code forms part of an in-development Julia package, GeochemistryTools.jl, that will be formally released in the future. Users who want early access to the package or to the source code for the algorithms outlined in this paper can obtain them via GitHub (see “Code and data availability”). Figures were generated using the Makie.jl plotting library (Danisch and Krumbiegel, 2021) with the “Scientific colour maps” (Crameri et al., 2020) implemented in ColorSchemes.jl.

2.1 Data ingestion and preprocessing

Data were imported from the raw ICP-MS CSV files, in this case from Agilent ICP-MS instruments. The algorithm written for this task processes individual files (which can be a single analysis, a specified sample, or an entire session) and performs several operations, outlined in Fig. 3, on user-specified mass counts-per-second (CPS) columns (i.e. channels, analyte signals).

https://gchron.copernicus.org/articles/7/265/2025/gchron-7-265-2025-f03

Figure 3. Outline of the data ingestion algorithm. CPS = counts per second of analyte (e.g. 238U).


An automated procedure for determining gas blank and signal windows, as well as laser fire time and aerosol arrival, was developed to aid rapid data processing and quality control. This algorithm is outlined at https://doi.org/10.25909/27041821 (Lloyd, 2024a, Fig. S1), with examples of the automatically determined time points and signal windows shown in the same resource (Lloyd, 2024a, Fig. S2).

Gas blank determination uses a geometric mean adapted for zeros (Eq. 1) as per Habib (2012):

(1) G = (n+G+ + n0G0)/N = (n+/N)G+,  for G0 = 0,

where G is the geometric mean, N is the total count of data, xi is the ith input value from i=1 to i=N, G+ is the geometric mean of all xi>0, n+ is the count of xi>0, G0 is the geometric mean of all xi=0 (i.e. 0), and n0 is the count of all xi=0.
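Equation (1) can be sketched as follows (a Python illustration of the published formula, not the authors' Julia code; the function name and example values are ours):

```python
import math

# Minimal sketch of Eq. (1), the geometric mean adapted for zeros
# (after Habib, 2012), as used for the gas blank determination.
def geomean_with_zeros(values):
    positive = [v for v in values if v > 0]
    if not positive:
        return 0.0
    # G+ : geometric mean of the positive values; n+ : their count
    g_plus = math.exp(sum(math.log(v) for v in positive) / len(positive))
    return len(positive) / len(values) * g_plus   # (n+/N) * G+

blank_cps = [0, 0, 120, 80, 100, 0, 90]           # synthetic gas blank CPS
g = geomean_with_zeros(blank_cps)
```

Because the geometric mean of the zero subset, G0, is zero, its term drops out and the result is simply G+ down-weighted by the fraction of positive values.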

As previously outlined (Sect. 2), the nature of orthogonal polynomial decomposition means that the lambda coefficients (and their uncertainties) of order one and higher should not change between a centred and a non-centred dataset for a single analysis. For aggregated data (session and/or sample aggregation), the higher-order lambda coefficients of centred and non-centred datasets may not be equal unless all external factors (fluence, spot diameter, machine drift) have been controlled for. We strongly recommend that centred datasets are used for fitting intra- and inter-session sample-aggregated datasets, as the centring process removes the influences of machine drift and tuning that λ0 is sensitive to. Aggregation of data requires grouping by factors known to influence DHF (see Sect. 3.1) to ensure like-for-like data are being modelled.

Centring of the data aligns the central tendency of each individual analysis to ≈1 while retaining the scale and shape (Fig. 4) of the change in raw ratios over the time-resolved signal, i.e. the DHF. Centring is performed by subtracting the central tendency (here the geometric mean; a user could alternatively use the arithmetic mean or median) from the time-resolved ratios of each individual file (i.e. analysis) and adding a constant value of 1.0 to the result. The constant recentres the values around one to prevent numeric errors during weighted fits when the weights (input errors) are very small (i.e. close to 0). This centring enables visual comparison (Fig. 4) and accurate numeric assessment of DHF behaviours across samples, sessions, spot geometries, and fluences without the need to calibrate data to a known RM ratio for each session or to account for machine drift during a session.
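The centring step above can be sketched in a few lines (a Python illustration rather than the authors' Julia implementation; the example ratios are synthetic):

```python
import math

# Sketch of the centring step: subtract each analysis's geometric mean
# from its time-resolved ratios, then add 1.0 so the central tendency
# sits near unity (avoiding numeric trouble in weighted fits).
def centre(ratios):
    gmean = math.exp(sum(math.log(r) for r in ratios) / len(ratios))
    return [r - gmean + 1.0 for r in ratios]

ratios = [0.090, 0.092, 0.095, 0.099]   # synthetic raw 206Pb/238U ratios
centred = centre(ratios)
# point-to-point differences (the DHF shape and scale) are preserved exactly
```

Because centring is a pure translation, every point-to-point difference in the signal, and hence the DHF shape and scale, is preserved exactly.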

https://gchron.copernicus.org/articles/7/265/2025/gchron-7-265-2025-f04

Figure 4. (a) Point-wise computed 206Pb/238U ratios for each measurement of NIST SRM 610 glass in the dataset. Symbol shapes represent spot diameter, and colour represents date of analytical session. (b) Centred, point-wise 206Pb/238U ratios for each measurement of NIST SRM 610 glass in the dataset (same data as in panel a). Centring (by subtraction of the geometric mean and addition of a constant of 1) preserves the relative positions of each data point but aligns them to a common central tendency. This preserves the shape and scale of the DHF pattern while removing ICP-MS tuning and drift effects, thus allowing accurate comparison between measurements using differing laser parameters, on different materials, and across ICP-MS sessions or laboratories.


2.2 Outline of fitting algorithm

In the following equations, x denotes the independent variable (e.g. time in seconds), and the dependent variable (e.g. analyte ratio) is represented by y for observed values or ŷ for predicted values of some function of x, i.e. f(x). For clarity, we use φ for the orthogonal function terms and λ for the coefficients of the orthogonal polynomial to differentiate them from their regular counterparts (f and β, respectively). Function terms and coefficients use j and l as indices, with k denoting the polynomial order. The index i is used for the ith value of xi, yi, and ŷi for i = 1 to i = N (the total count of data).

Polynomials of order k (Eq. 2) can be orthogonal to each other under some inner product (Eq. 3). The independence of each coefficient also enables efficient computation of the polynomial statistics (errors, variances and covariances, residual sum of squares, etc.) at each order: only the coefficient matrix of the highest order (five terms in our case) needs to be calculated, rather than the entire solution for each polynomial order. Once computed, subsequent calculations can use specific subsets of the operator and coefficient matrices to determine predictions and statistics for each polynomial order.

(2) F(x) = β0 + Σ_{j=1}^{k} βj fj(x)

(3) ⟨fj(x), fl(x)⟩ = 0  for j ≠ l

(4) Φ(x) = λ0 + Σ_{j=1}^{k} λj φj(x)

In this contribution, we apply this to the DHF observed during LA-ICP-MS using the orthogonal property in Eq. (5).

(5) Σ_{i=1}^{N} φj(xi) φl(xi) = 0  for j ≠ l

We construct a fourth-order polynomial function (Eq. 6) using the sum of a set of orthogonal polynomials (Anenburg and Williams, 2022; Bevington and Robinson, 2003; O'Neill, 2016).

(6) ŷ(xi) = λ0 + λ1φ1(xi) + λ2φ2(xi) + λ3φ3(xi) + λ4φ4(xi)

The orthogonal functions are solved for using the methods outlined in Anenburg and Williams (2022), which make use of Vieta's formulas and polynomial root finding. See Anenburg and Williams (2022), Appendix A, or the source code for further detail. Once the orthogonal polynomial forms are computed, the algorithm generates the operator matrix X and uses generalised least-squares regression (Eqs. 7 to 9) to solve for the vector λ and the matrix U, utilising the analytical uncertainties (ϵ) as weights. The vector λ contains the polynomial coefficients λ0…λ4, while the matrix U contains the variances and covariances. Scaled values of ϵ are used to construct the diagonal of the weight matrix Ω.

(7) y = Xλ + ϵ,  with E[ϵ|X] = 0 and Cov[ϵ|X] = σ²Ω⁻¹ = U

(8) λ = (XᵀΩ⁻¹X)⁻¹XᵀΩ⁻¹y

(9) U = (XᵀΩ⁻¹X)⁻¹
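A compact numerical sketch of Eqs. (7) to (9) is given below using NumPy (the published implementation is in Julia; the operator matrix, uncertainties, and data here are synthetic and for illustration only):

```python
import numpy as np

# Numerical sketch of Eqs. (7)-(9): generalised least squares with a
# diagonal weight matrix Omega built from per-point uncertainties eps.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 30.0, 50)                        # ablation time (s)
X = np.column_stack([np.ones_like(x), x - x.mean()])  # order-1 operator matrix
true_lambda = np.array([1.0, 0.002])                  # mean ratio and slope
eps = np.full_like(x, 0.01)                           # per-point 1-sigma errors
y = X @ true_lambda + rng.normal(0.0, eps)            # Eq. (7): y = X lambda + eps

Omega_inv = np.diag(1.0 / eps**2)          # inverse weight matrix from errors
XtWX = X.T @ Omega_inv @ X
lam = np.linalg.solve(XtWX, X.T @ Omega_inv @ y)      # Eq. (8): coefficients
U = np.linalg.inv(XtWX)                    # Eq. (9): variance-covariance matrix
```

With an orthogonal operator matrix, U is (near-)diagonal and its entries give the coefficient variances directly, which is what permits the straightforward uncertainty propagation described above.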

A graphical representation of the individual orthogonal polynomial components and an example of the resulting fit are shown in Fig. 2. Our algorithm allows the user to employ automated outlier removal, with an outlier considered to have a studentised residual ≥ 3. The residuals used in outlier identification are computed from the orthogonal polynomial order with the minimum corrected Akaike information criterion (AICc) value (Eq. 10) (Akaike, 1974; Burnham and Anderson, 2002). If outlier removal is enabled, the algorithm iterates over the input data until no studentised residuals ≥ 3 remain in the fit or 10 iterations have been performed. The minimum AICc is used to determine the “best fit” (i.e. minimum information loss) polynomial order for each dataset.

(10) AICc = n log(rss/n) + 2k + 2k(k + 1)/(n − k − 1),

where k is the order plus two, n is the count of data, and rss is the residual sum of squares of the model. Several measures of fit are computed and accessible to the user via the object created during the fitting algorithm. These measures include the corrected Bayesian information criterion (BICc), the AICc and BICc weights, the normalised root-mean-square error, the reduced chi-squared statistic χr2, the multiple correlation coefficient ρ2 (or R2), and an unbiased Olkin–Pratt-adjusted estimator ρOP2 of ρ2 (Karch, 2020; Olkin and Pratt, 1958). Further details on these measures of fit can be found in Appendix A.
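Equation (10) and the order selection it drives can be sketched as follows (a Python illustration; the rss values are invented for this example and are not from the study's data):

```python
import math

# Sketch of Eq. (10) and minimum-AICc order selection. Here k is the
# polynomial order plus two, as stated in the text, and the rss values
# below are hypothetical.
def aicc(rss, n, order):
    k = order + 2
    return n * math.log(rss / n) + 2 * k + (2 * k * (k + 1)) / (n - k - 1)

rss_by_order = {1: 0.80, 2: 0.35, 3: 0.34, 4: 0.335}   # hypothetical fits
n = 100
best_order = min(rss_by_order, key=lambda o: aicc(rss_by_order[o], n, o))
# the quartic term's small rss improvement does not offset its penalty,
# so the third-order fit minimises information loss in this example
```

The penalty terms grow with k, so the selected order only increases when the reduction in rss genuinely outweighs the cost of the added coefficient.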

An additional consideration in this algorithm is that floating point computation is inherently inexact, has finite precision, and can incur round-off errors (Fernández et al., 2003). While round-off errors are rarely a concern for most geochemical computations, the often small absolute values and the numerous sequential computations performed in this algorithm mean that these errors can lead to inaccuracy in the result. To overcome these potential accuracy issues, we utilise the extended precision library MultiFloats.jl (Zhang, 2024) alongside Julia's native extended precision operations. We also use QR factorisation or singular value decomposition (SVD), depending on the condition number of the operator matrix, to solve for the λ vector and the variance–covariance matrix U, ensuring numeric stability. If, after the final calculation, the absolute value of a coefficient is less than the default machine rounding tolerance for the extended precision type (Float64x4), it is rounded to zero as it is considered unresolvable from zero.
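The condition-number-based choice of solver can be illustrated as follows (a NumPy sketch in standard double precision rather than the extended precision of the published Julia code; the threshold and names are ours):

```python
import numpy as np

# Sketch of the stability guard described above: a QR solve when the
# weighted operator matrix is well conditioned, an SVD pseudoinverse
# otherwise. The condition-number threshold here is illustrative.
def solve_weighted(X, y, w, cond_limit=1e8):
    Xw = X * w[:, None]                   # row-scale by the weights
    yw = y * w
    if np.linalg.cond(Xw) < cond_limit:
        Q, R = np.linalg.qr(Xw)           # QR factorisation path
        return np.linalg.solve(R, Q.T @ yw)
    return np.linalg.pinv(Xw) @ yw        # SVD fallback for ill-conditioned X

x = np.linspace(0.0, 30.0, 40)
X = np.column_stack([np.ones_like(x), x, x**2])
y = 1.0 + 0.01 * x - 0.0002 * x**2        # exact synthetic signal
lam = solve_weighted(X, y, np.ones_like(x))
```

Avoiding the explicit normal-equations inverse in favour of QR or SVD keeps the effective condition number lower, which matters when coefficient magnitudes are as small as those reported in Sect. 3.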

2.3 Analysis of reference materials by LA-ICP-MS

Reference materials for apatite (401, KO, MAD, Durango, Wilberforce), baddeleyite (BADPHE, G15874, G18650), monazite (TS1MNZ, 222, RW1, MAdel, MtGar, Ambat), rutile (R10, R19), titanite (Mt Painter, MKED), xenotime (MG1, BS1), and zircon (Mud Tank, Plešovice, GJ-1, 91500, Temora, Rak17) were analysed for U–Pb isotope ratios and trace element concentrations using optimal methods for the specific mineral (Bockmann et al., 2022; Fletcher et al., 2004; Gain et al., 2019; Glorie et al., 2020; Hall et al., 2018; Horstwood et al., 2016; Liu et al., 2011; Lloyd et al., 2022; Payne et al., 2008; Sláma et al., 2008; Spandler et al., 2016; Thompson et al., 2016; Wiedenbeck et al., 2004; Yang et al., 2024). An additional monazite (Pilbara) and several potential cassiterite (in-house) RMs were also measured. Samples were analysed using a RESOlution-LR 193 nm ArF excimer laser ablation system coupled to an Agilent 8900 ICP-MS/MS. Both instruments are housed at the University of Adelaide within the analytical facilities at Adelaide Microscopy. Full metadata for LA-ICP-MS analysis can be found in Appendix B.

Additional zircon, rutile, and baddeleyite RM data from prior studies were added to supplement the dataset and allow the assessment of the viability of inter-session comparisons (Lloyd et al., 2020, 2022, 2023, 2024; van der Wolff, 2020; Yang et al., 2024). These supplemental data were collected using the same LA system, coupled to either an Agilent 7900 ICP-MS (prior to November 2021) or the Agilent 8900 ICP-MS/MS in single quadrupole mode (from November 2021). Analytical conditions for these additional data can be found in the relevant references.

3 Results and discussion

In total, 5478 analyses (CSV files) were processed and their DHF patterns were modelled with the orthogonal polynomial decomposition outlined above (Sect. 2.2). This results in 5478 analysis fits, 188 session fits (sample aggregated per session), and 58 sample fits (sample aggregated across all sessions) accounting for differences in spot geometry across 29 unique materials (Figs. 5 and 6).

Linear slopes for the sample fits (λ1) range from -5.88×10-5 to 0.0217 s−1, quadratic curvatures (λ2) from -3.06×10-5 to 3.27×10-5 s−2, cubic curvatures (λ3) from -9.76×10-6 to 1.91×10-5 s−3, and quartic curvatures (λ4) from -4.23×10-6 to 3.82×10-7 s−4. Given the small values and the large number of fits, a table of all parameters would be neither feasible to display nor intuitive for the reader. Instead, we provide a visual representation of λ1 plotted against λ2 with their best-fit uncertainties (2 standard error) in Fig. 5, as well as the polynomial fits and their uncertainties for the sample fits in Fig. 6.

https://gchron.copernicus.org/articles/7/265/2025/gchron-7-265-2025-f05

Figure 5. Scatter plot of λ1 (x axis) and λ2 (y axis) for sample-aggregated (a), session-sample-aggregated (b), and individual analysis (c) orthogonal polynomial fits. Symbol shapes represent spot diameter, and symbol colour represents material type. Uncertainty bars are 2 standard error. Fits with a greater positive linear slope plot further to the right, and fits with a greater quadratic curvature plot at higher (positive) or lower (negative) positions on the y axis. In these simple bi-plots it can be readily observed that distinguishing between points becomes difficult or near impossible when many data are plotted or there is large variation in some data.


https://gchron.copernicus.org/articles/7/265/2025/gchron-7-265-2025-f06

Figure 6. Orthogonal polynomial fits of down-hole fractionation (206Pb/238U) grouped by sample material. Note that the baddeleyite panel has a significantly larger y-axis scale due to the steeper linear fractionation component. Shaded areas show the 95 % confidence interval of the individual fit. Increasing line solidity corresponds to increasing spot diameters.


3.1 Factors affecting down-hole fractionation

The scope of this contribution is to provide a tool that can be used for quantitative modelling of DHF, not an assessment of the factors that influence DHF. Nonetheless, it is worth highlighting what the influencing factors are and what considerations are needed to use the proposed method accurately. As stated earlier, DHF is a phenomenon observed in LA-ICP-MS (and other mass-spectrometry methods) where the observed inter-element isotopic ratios change as a function of ablation depth/time, independent of geological heterogeneity. This phenomenon is known to have several contributing factors, and their interactions are complex (Guillong and Günther, 2002; Günther et al., 1997; Hergenröder, 2006a, b; Horn et al., 2000; Košler et al., 2005; Kroslakova and Günther, 2006; Mank and Mason, 1999; McLean et al., 2016; Paton et al., 2010; Sylvester and Ghaderi, 1997; Ver Hoeve et al., 2018). It is known that sample material, spot diameter-depth ratio/geometry, laser wavelength, fluence, laser repetition rate, and analyte volatilities all noticeably influence the observed down-hole inter-element isotopic ratio. Thus, careful consideration of these parameters is needed when performing any modelling of the observed patterns. For example, it would not be sensible to combine and model data from two analyses of the same material acquired at different laser repetition rates, as a priori knowledge dictates that the two analyses would show different DHF patterns over the same time interval. As such, when aggregating data in the current study, we ensure that only like-for-like data are combined, e.g. the same material analysed using the same laser wavelength with the same laser parameters (spot geometry, repetition rate, fluence) within a small tolerance (e.g. nominally 2.0 ± 0.1 J cm−2). Future studies could assess what a suitable tolerance is for each parameter; however, the tolerances used in the current study (e.g. ±0.1 J cm−2) do not appear to cause spurious results.

3.2 Physical interpretation of λ coefficients

Without needing RM calibration, the derived coefficients (λ1 and higher) represent numerical parameters of DHF patterns for any given analysis (Fig. 2).

The first coefficient, λ0, inherits the units of the y variable. In this study the value is unitless, as the y variable is a ratio of CPS to CPS, but it can be interpreted as the mean of y, i.e. the mean ratio. As we centre all data around a value of one to remove the influence of machine calibration and drift, λ0 has limited use in this contribution; however, it could be used to calculate machine drift and the nominal correction value for known-ratio materials (ICP-MS mass-bias correction) during data reduction. The second coefficient, λ1, has units of s−1 and can be interpreted as the average rate of change (overall magnitude of change) of the ratio between two analyte intensities over time. We suggest that this component largely reflects the fractionation occurring as a result of the change in depth-to-radius ratio during ablation. The third coefficient, λ2, has units of s−2 and can be interpreted as the acceleration (or deceleration) of the rate of change of the ratio over time. We propose that fractionation due to differences in the relative volatilities of the analytes is the major control on this component, i.e. the analyte signal intensities decaying at different rates as the depth-to-radius ratio increases.

Higher-order terms have more complex meanings: each is a subsequent derivative of the prior term. The fourth coefficient, λ3, has units of s−3 and can be interpreted as the rate of change of the acceleration (λ2) of the ratio through time. This likely reflects the focussing of the laser-induced plasma as the hole geometry changes, causing a subsequent change in the shape of the ablation plume from the ablation site. It will be represented by an inflection point in the signal (i.e. free surface expansion to increased crater wall interaction). The fifth coefficient, λ4, has units of s−4 and can be interpreted as the rate of change of λ3 over time. We believe this term could incorporate detector effects such as electrical noise, low signal-to-background ratios, and Poisson noise at low signal intensity.

From the above reasoning, we expect that a third-order polynomial (Φ3(x), λ1…3) should account for most of the variation seen in ratios over time during static spot ablation as a function of the ablated material and measured analytes. The fifth term (fourth-order polynomial) should incorporate the remaining contributions to the fit that are likely to be a result of instrument-induced artefacts.

3.3 Data analysis and visualisation

The proposed algorithm can efficiently handle large datasets, generating substantial numeric data. This presents two challenges: (1) analysing the data as it becomes a “big data” problem and (2) visualising the data in a meaningful and interpretable way.

A well-established literature exists for the analysis of large and/or multidimensional datasets. For multivariate data, a transformation from a higher-dimensional space to a lower-dimensional space is often applied to enable univariate and bivariate statistics. For example, methods such as principal component analysis (PCA) may be used to numerically evaluate multivariate datasets.

Visualisation of multidimensional and/or large datasets is challenging. For simple 2-D data, an analyst can produce bi-plots (xy scatter plots) to visualise any relationships in the data. For 3-D and 4-D data, analysts can plot ternary or quaternary diagrams, which require mathematical transformations of the data into a lower dimension (e.g. 3-D to 2-D) that emulates the higher-order space; hence, they are increasingly complex to visualise and interpret. In all cases, large datasets can lead to visual clutter (over-plotting), where the viewer cannot readily distinguish between similarly positioned data points.

In Fig. 5 we have plotted λ1 on the x axis and λ2 on the y axis, with corresponding uncertainty bars (2 SE). These plots are limited to displaying only two of the five coefficients, and they suffer both from visual clutter, where it is difficult to distinguish one point from another, and from scaling issues, where the spread of some data obscures the view of other data.

Alternatively, an analyst can employ dimension reduction methods such as PCA, multidimensional scaling (MDS), or uniform manifold approximation and projection (UMAP) to visualise the higher-dimensional data in a lower-dimensional space. PCA can still incur visual clutter, as multiple reduced components may share the same coordinates, and PCA may not always be able to reduce the data to only two or three principal components that explain the majority of variance in a dataset. Conversely, UMAP will always reduce a dataset to a specified number of dimensions and has hyperparameters that can be tuned to minimise visual clutter.
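The study's tooling is written in Julia; purely as an illustrative sketch of the linear alternative (PCA) mentioned above, a minimal dependency-free projection onto the top two principal components might look like the following. All names here are our own, and power iteration with deflation is just one simple way to obtain the leading eigenvectors:

```python
import math

def pca_2d(data, iters=1000):
    """Project rows of `data` (lists of equal length) onto their top two
    principal components, found by power iteration with deflation."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    # sample covariance matrix (d x d)
    cov = [[sum(xi[a] * xi[b] for xi in x) / (n - 1) for b in range(d)]
           for a in range(d)]
    comps = []
    for _ in range(2):
        v = [1.0 / math.sqrt(d)] * d            # deterministic start vector
        for _ in range(iters):                  # power iteration
            w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
            norm = math.sqrt(sum(wi * wi for wi in w))
            v = [wi / norm for wi in w]
        lam = sum(v[a] * sum(cov[a][b] * v[b] for b in range(d))
                  for a in range(d))            # Rayleigh quotient = eigenvalue
        comps.append(v)
        # deflate: remove the found component from the covariance matrix
        cov = [[cov[a][b] - lam * v[a] * v[b] for b in range(d)] for a in range(d)]
    return [[sum(xi[j] * c[j] for j in range(d)) for c in comps] for xi in x]
```

Unlike UMAP, the resulting coordinates are directly interpretable as variance-ordered linear combinations of the input λ coefficients.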

We utilise UMAP (Fig. 7) to visualise the relative relationships of the higher-dimensional data in a lower-dimensional space (McInnes et al., 2020). UMAP takes the input multidimensional data (5478×4 λ coefficients in this study) and tries to find a common embedding space (using the manifold assumption) that represents the local and global data topology in a lower dimension. UMAP has several variables, called hyperparameters, which control the output of the algorithm. The n-nearest-neighbours hyperparameter is the most important for balancing local (low n-n) and global (high n-n) data structure. We use a value of 19 for the n-nearest-neighbours hyperparameter because the minimum number of analyses for a single material in a session is 20, so we expect there to be 19 related points. We set the minimum distance hyperparameter to 1.0 to push the points apart and aid visualisation.

In simpler terms, UMAP works by constructing a graph in high-dimensional space (e.g. 4-D) that is projected onto a lower-dimensional space (e.g. 2-D), where points are connected based on their closeness in the higher-dimensional space. The shape of the data clusters in UMAP is a manifestation of the spatial relationships between higher-dimensional data with similar attributes and is not directly interpretable in a physical manner like traditional bi-plots or PCA (McInnes et al., 2020).


Figure 7. (a) Uniform manifold approximation and projection (UMAP) of λ coefficients 0 to 4 for all analysis data (non-centred) in the study. (b) Subset of (a) including only zircon analyses. (c, d) UMAP of λ coefficients 1 to 4 for centred analysis data in the study, with (d) being the zircon data subset of (c). In a practical sense, the closer or more clustered the points are to each other, the more similar they are. Axes are omitted, as x–y coordinates are a normalised distance metric with no physical meaning. This method of data visualisation helps an end user understand the relative similarities within large datasets. Panels (a) and (b) are strongly controlled by the λ0 component, while (c) and (d) reflect the shape parameters λ1…4 of the different materials.


3.3.1 Interpreting the (sample-aggregated) data fits

Oxides generally have a greater linear slope component (λ1) than the other materials analysed, cassiterite being an exception (Figs. 5 and 6). The glass (NIST SRM 610), cassiterite, and xenotime samples show the least overall DHF. Of note for the glass (NIST SRM 610), we can clearly see the impact of spot diameter (and thus geometry) on DHF. With decreasing spot diameter, the overall magnitude (linear slope) of DHF increases (Fig. 6), and the complex shape parameters (quadratic and cubic curvatures, etc.) are exacerbated.

There are two distinct groupings (accounting for different spot diameters) for baddeleyite, with one group being the samples from Phalaborwa and the other the Kovdor sample (Fig. 6). The cause of this stark disparity in baddeleyite DHF is unknown, but the Ti concentration is 9–14 times higher in the Phalaborwa samples than in the Kovdor samples. The obvious outliers in the apatite, cassiterite, and monazite panels of Fig. 6 are the Durango and Wilberforce apatite samples, CstT4370 cassiterite, and the Pilbara monazite sample. For the Pilbara monazite, some analyses show considerable variation in their Pb and U concentration (proxied by count rate; see the “Code and data availability” section for signal plots available at figshare). Excluding these analyses from the sample/session-aggregated fit for the Pilbara monazite would reduce the polynomial confidence interval and improve the quality of the fit. However, these data are still analytically relevant and indicate that the Pilbara monazite may not be suitable as an RM due to its variable Pb and U ratio. For the Durango apatite, the flat DHF fit and larger uncertainty are due to low Pb counts; for Wilberforce, the steeper linear DHF component and larger uncertainty are due to several points from some analyses that strongly leverage the fit, even with automated outlier removal applied. For cassiterite, the relatively flat fits and the outlier (CstT4370) are generally due to low Pb and/or U counts and greater scatter in the underlying data. The discrepancies and/or larger uncertainties in the lambda coefficients for materials of the same type provide a way to numerically check for outliers in RM data prior to further data reduction, rather than requiring a user to review all the data graphically. A user should still review their data graphically as well to check for spurious results.

3.3.2 Application of UMAP as a data visualisation aid

For this study, a UMAP diagram (Fig. 7) helps to visualise the relative similarities between the λ coefficients for the 5478 analyses. If there are relationships in the data, we expect them to correlate with the individual materials (e.g. NIST SRM 610 glass, GJ-1 zircon, Plešovice zircon), the type of material (e.g. glass, zircon, xenotime), laser parameters, or a combination of these factors and thus plot in clusters on a UMAP diagram. The resulting map (Fig. 7) clearly differentiates the various materials better than the simple bi-plot (Fig. 5c), which suffers from over-plotting, obscuring any similarities/trends. The UMAP algorithm was not provided prior knowledge about the material, only the λ coefficients. Panels (a) and (b) of Fig. 7 are strongly controlled by the mean ratio (λ0), as the UMAP algorithm was provided with lambda values from non-centred fits to aid the reader in understanding the general principle of these diagrams. Panels (c) and (d) of Fig. 7 are more useful for visualising the relationships of the shape parameters within the DHF models, as the effects of mass bias and drift are removed by providing the algorithm with the lambda values from centred fits.

For small, simple datasets a simple bi-plot may suffice, but for larger datasets, and especially for an inter-laboratory comparison, where there are likely to be hundreds of thousands to millions of individual data points, a method such as UMAP should be employed to visualise the relationships in the data.


Figure 8. (a) Orthogonal polynomial fitting of down-hole fractionation for zircon data in this study. The visualised polynomials represent the best-order fit for the aggregated data of a given sample at a single spot diameter, e.g. GJ-1 at 30 µm, GJ-1 at 19 µm. Shaded areas show the 95 % confidence interval of the individual fit. The inset graph (b) shows the λ1 and λ2 coefficients and their 2-standard-error uncertainty for each of the session-based fits for the seven zircons analysed. Fits with a greater linear slope plot further to the right, and fits with a greater negative quadratic curvature (i.e. the slope flattens at a faster rate) plot toward the bottom.


In Fig. 7, it is apparent that baddeleyite DHF (pale blue-green markers in the upper panels) is different from that of all other materials analysed in this study, as the points form their own clusters (particularly in Fig. 7c). The deviation of baddeleyite is also reflected in Fig. 5, where baddeleyite analyses plot to the far right of each subplot, indicating a much stronger linear slope fractionation component. Also noticeable is the large diversity in the DHF patterns of zircon (Fig. 7b and d), which is partly due to analytical noise, as the individual points represent single analyses and their fits carry greater uncertainty, making them more susceptible to that noise. Nevertheless, in general, 91500 behaves most differently from all other zircons at a given spot diameter (Fig. 8), and Plešovice behaves most like NIST SRM 610 of all the zircons (Fig. 7c and d). We do not suggest that NIST SRM 610 glass is a suitable alternative to correct DHF for zircon (as seen in Figs. 1 and 6), rather that there is significant variation in the DHF of zircon standards; therefore, careful consideration should be given to applying appropriate zircon standards for analytical sessions depending on the unknown zircons to be analysed and the time period used for signal integration (Guillong and Günther, 2002; Hergenröder, 2006a; Košler et al., 2005; Paton et al., 2010).

We can see that spot geometry has a significant impact on DHF (Fig. 6, glass and zircon panels; Figs. 7 and 8), which is a known phenomenon (Horn et al., 2000; Mank and Mason, 1999; Paton et al., 2010). However, all zircons except 91500 have remarkably similar DHF patterns at 29 µm (and 30 µm) (Fig. 8). The linear slope component (λ1) of the DHF pattern appears to be relatively constant between laser sessions for a given zircon and spot diameter, while the quadratic component varies (Fig. 8b) but has greater uncertainty associated with it. The exact mechanism by which 91500 shows greater DHF than the other analysed zircons is unknown and not the focus of this paper; however, prior studies have investigated the potential causes of differing DHF patterns in zircon and glass (Košler et al., 2005), and it is possible that radiation damage plays a role (Allen and Campbell, 2012; Marillo-Sialer et al., 2014; Solari et al., 2015; Thompson et al., 2018).

3.4 Applications

The algorithm defined in this paper offers a way to numerically quantify processes in geoscience that can be modelled with a linear combination of basis functions, building on the work of O'Neill (2016) and Anenburg and Williams (2022) with a more generalised approach. The generalised nature of this algorithm allows it to be used for orthogonal polynomial fitting, up to fourth order, of any data for which it is sensible, i.e. where there exists a polynomial of order k (0…4) that can model the input data. In this contribution we use it to numerically quantify DHF patterns of varied materials during LA-ICP-MS and demonstrate UMAP as a visualisation technique to help understand the structure of the data.

We envisage that this algorithm could be implemented in data reduction software to self-correct the DHF pattern of well-behaved materials, either by fitting a single geochemical zone or by fitting splines to geologically meaningful zonation, with a fallback to a known homogeneous material where the analysis signal is complex (e.g. due to complex zonation or inclusions). In the latter case, the user would need to select an appropriate material, and the correction would likely remain somewhat inaccurate, impacting the accuracy of the final result (Fig. 1). Self-correction would require modelling the individual analysis signal and using the orthogonal model to then correct the data for the DHF observed in that analysis.

Alternatively, the coefficients of the orthogonal models could be used to determine which RM DHF pattern best suits the unknown material DHF pattern. The quantification of the DHF pattern enables numeric assessment of fits to unknowns against fits of knowns, to find the most similar material with respect to DHF at the appropriate model order. For example, the AICc is minimised for the fourth-order polynomial for NIST SRM 610 and the third-order polynomial for most zircon RMs, and we can see this reflected visually in Figs. 2, 4, 6, and 8. Using the measures of fit provides users with a way to choose an appropriate model that accurately reflects the uncertainty and scatter while minimising overfitting and the introduction of artefact errors. Additionally, the quantification done by this algorithm provides a way for laboratories to quantitatively compare the DHF behaviour of their RMs and analytical setup against other laboratories.

4 Conclusions

This contribution generalises the orthogonal polynomial decomposition algorithms of O'Neill (2016) and Anenburg and Williams (2022) to allow for modelling of processes in geoscience that can be represented by a linear combination of basis functions (predictable x-to-y relationships) beyond rare earth element (REE) patterns, while also using input uncertainties as weights for the models. Implementation in the Julia programming language enables fast computation of many thousands of input files from data ingestion to model output.

We demonstrate quantitative modelling of DHF patterns observed during static spot LA-ICP-MS using the developed algorithm by applying it to an exemplar dataset of U–Pb RMs and guide the reader through data visualisation and interpretation of the derived coefficients. The algorithm provides the ability to quantitatively compare DHF for the same RMs across laboratories, differences between RMs, and differences between analytical setups.

Furthermore, this algorithm could be implemented into data reduction programs to numerically assess the similarities and fit qualities of DHF corrections, ensuring more accurate corrections are made. It is possible that the algorithm could be used to self-correct the DHF pattern of a single analysis by first modelling it and then using that model to correct the pattern, so long as the assumption holds that the signal (or a portion of it) reflects a single, geologically meaningful zone. This relaxes the assumption that the DHF pattern of an unknown is the same as that of a specified RM by decoupling the correction from that material, which can lead to improvements in the accuracy and/or precision of the final result. More conservatively, or in cases where the assumption of a single geologically meaningful zone does not hold, models of the individual analyses can be used to quantitatively choose the most similar RM to use as a DHF correction material.

Appendix A: Algorithmic mathematics

LA-ICP-MS count data are strictly positive integers and follow a (discrete) Poisson distribution at low count levels (e.g. gas blank), although they eventually approximate a normal distribution at high count rates, as implied by the central limit theorem (Bevington and Robinson, 2003). Data from a mass spectrometer used for geochronology and/or elemental analysis are generally output in counts per second (CPS), not counts, and violate the integer requirement of a discrete probability distribution. This, combined with the less intuitive and asymmetric scale parameters of geometric means, has led to the use of normal statistics (e.g. arithmetic means) for ratio computations. The use of normal statistics for gas blank measurements generally leads to overestimation (arithmetic mean) or underestimation (median if there are many 0 counts) and to a violation of the equality A/B = (B/A)^−1. This latter violation is commonly seen in U–Pb geochronology, where 206Pb/238U ≠ (238U/206Pb)^−1. Given that elemental count data and elemental ratios are compositional data and are strictly positive real values (i.e. positively skewed) which often follow a log-normal distribution, a geometric mean is a more appropriate measure of the central tendency of the data. In our algorithm we use the geometric mean to compute the gas blank value. While geometric means are often used to address the problems above, we use a modified geometric mean to incorporate valid zero values, which a standard geometric mean cannot. We will first review the standard geometric mean before detailing the modified version. The geometric mean is defined for a set of positive real numbers as Eq. (A1) (Habib, 2012; Feng et al., 2017):

(A1) G = \left( \prod_{i=1}^{N} x_i \right)^{1/N},

where G is the geometric mean, N is the total count of data, and x_i is the ith input value from i = 1 to i = N. Alternatively, it can be calculated by taking the arithmetic mean of the logarithms of the values (Eq. A2) and exponentiating the result (Eq. A3) (Habib, 2012; Kirkwood, 1979; Feng et al., 2017).

(A2) \log G = \frac{1}{N} \sum_{i=1}^{N} \log x_i
(A3) G = e^{\log G}

As computing the product of an arbitrarily large series of numbers can lead to overflow errors (Fernández et al., 2003; Polhill et al., 2006) and therefore inaccurate results, most geometric mean algorithms implement the second form, where the arithmetic mean is calculated from the logarithm-transformed data and the result exponentiated. The logarithmic transformation requires that all x_i > 0. Additionally, as this is a multiplicative mean, if any x_i = 0, then G = 0.
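The overflow motivation for the log-space form is easy to demonstrate. The values below are purely illustrative (not from the study's data):

```python
import math

# Naive product form of the geometric mean overflows 64-bit floats for
# large inputs, while the log-space form (Eqs. A2-A3) stays finite.
xs = [1e300, 1e300, 1e10]

naive = math.prod(xs) ** (1 / len(xs))                      # product -> inf
logged = math.exp(sum(math.log(x) for x in xs) / len(xs))   # finite, 10**(610/3)
```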

To overcome these limitations and obtain a more accurate estimate of the mean gas blank, we implement a geometric mean that accounts for zeros (Eq. A4). This equation is effectively a weighted arithmetic mean of the geometric mean of the values > 0 and the geometric mean of the zeros (i.e. 0), with the weights equal to the number of values in each category (Habib, 2012):

(A4) \text{for } G \ge 0: \quad G = \frac{n_+ G_+ + n_0 G_0}{N} = \frac{n_+}{N} G_+,

where G+ is the geometric mean of all x_i > 0, n+ is the count of x_i > 0, G0 is the geometric mean of all x_i = 0 (i.e. 0), and n0 is the count of all x_i = 0. While this form of the geometric mean has its critics (de la Cruz and Kreft, 2018), we believe it provides a more accurate estimate of the true gas blank, which should lie somewhere between 0 and the arithmetic mean (the latter biasing the result toward values > 0), and it avoids the issue of small constants added to the data causing major changes in the geometric mean (Feng et al., 2017).
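The study's implementation is in Julia; the zero-adjusted mean of Eq. (A4) can be sketched in a few lines of Python (function names are ours):

```python
import math

def geomean_positive(xs):
    """Geometric mean of strictly positive values, computed in log space
    (Eqs. A2-A3) to avoid overflow of the raw product."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def geomean_with_zeros(xs):
    """Zero-adjusted geometric mean (Eq. A4): a weighted arithmetic mean of
    the geometric mean of the positive values (weight n+) and 0 (weight n0)."""
    positive = [x for x in xs if x > 0]
    if not positive:
        return 0.0
    return len(positive) / len(xs) * geomean_positive(positive)
```

For example, `geomean_with_zeros([0, 2, 8])` gives (2/3) × 4 = 8/3, between 0 and the arithmetic mean of 10/3.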

Although they cannot handle values of zero, we also implement a geometric variance (Eq. A5), geometric standard deviation (Eq. A6), and standard error of the geometric mean (Eq. A7) (Kirkwood, 1979; de Carvalho, 2016):

(A5) \text{for } x > 0: \quad \sigma_G^2 = e^{\sigma^2_{\log x}},
(A6) \text{for } x > 0: \quad \sigma_G = e^{\sigma_{\log x}},
(A7) \text{for } x > 0: \quad \mathrm{SE}_G = e^{\mathrm{SE}_{\log x}},

where σ²_log x, σ_log x, and SE_log x are, respectively, the variance, standard deviation, and standard error of the mean of the logarithms of x_i > 0. The σ_log x can be interpreted as the average magnitude of the relative deviation from the geometric mean, and similarly for SE_log x (Chatfield et al., 2025). In geometric statistics, the measures of variance, standard deviation, and standard error are scale parameters. The corresponding uncertainty range of G for these statistics is asymmetric and denoted by G ×/÷ u, i.e. the interval [G/u, G·u], where u is the corresponding statistic (e.g. σ², σ, SE).
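As a sketch of Eqs. (A5)–(A7) (again in Python rather than the study's Julia, with our own names), the scale parameters are simply exponentiated log-space statistics:

```python
import math

def geometric_stats(xs):
    """Geometric mean with its multiplicative scale parameters (Eqs. A5-A7).
    Returns (G, gsd, gse); the 1-sigma range is the asymmetric [G/gsd, G*gsd]."""
    logs = [math.log(x) for x in xs]            # requires all x > 0
    n = len(logs)
    mean_log = sum(logs) / n
    var_log = sum((l - mean_log) ** 2 for l in logs) / (n - 1)  # sample variance
    g = math.exp(mean_log)                      # geometric mean
    gsd = math.exp(math.sqrt(var_log))          # geometric standard deviation
    gse = math.exp(math.sqrt(var_log / n))      # geometric SE of the mean
    return g, gsd, gse
```

Note that gsd and gse are dimensionless factors (≥ 1) that multiply and divide G, rather than symmetric additive offsets.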

A1 Orthogonal polynomial decomposition

We use orthogonal polynomial decomposition to fit a polynomial to analyte ratio data where the coefficients have physical meaning in relation to those data. This decomposition enables quantification of the shape parameters of a down-hole fractionation pattern. Integral to this process is the calculation of the orthogonal polynomials (Eq. A8) to be used to fit the final model.

(A8) \hat{y}(x_i) = \lambda_0 + \lambda_1 \varphi_1(x_i) + \lambda_2 \varphi_2(x_i) + \lambda_3 \varphi_3(x_i) + \lambda_4 \varphi_4(x_i)

Our implementation can fit up to a fourth-order polynomial and requires solving the orthogonal property of Eq. (A9) for each orthogonal function, φ(x).

(A9) \sum_{i} \varphi_j(x_i)\, \varphi_k(x_i) = 0 \quad \text{for } j \ne k

The functions φ1…4 are as follows.

\varphi_1 = x_i - \beta
\varphi_2 = (x_i - \gamma_1)(x_i - \gamma_2)
\varphi_3 = (x_i - \delta_1)(x_i - \delta_2)(x_i - \delta_3)
\varphi_4 = (x_i - \varepsilon_1)(x_i - \varepsilon_2)(x_i - \varepsilon_3)(x_i - \varepsilon_4)

Here β, γn, δn, and εn are predetermined constants calculated prior to fitting the final model. The solution to β is simply the arithmetic mean of the x values (Eq. A10).

(A10) \sum_{i=1}^{N} (x_i - \beta) = 0, \text{ leading to } \frac{1}{N} \sum_{i=1}^{N} x_i = \beta

The solutions to γ1…2, δ1…3, ε1…4 require solving systems of equations with increasing complexity as follows, with a line break between polynomial orders.

\sum_i (x_i - \gamma_1)(x_i - \gamma_2) = 0
\sum_i x_i (x_i - \gamma_1)(x_i - \gamma_2) = 0

\sum_i (x_i - \delta_1)(x_i - \delta_2)(x_i - \delta_3) = 0
\sum_i x_i (x_i - \delta_1)(x_i - \delta_2)(x_i - \delta_3) = 0
\sum_i x_i^2 (x_i - \delta_1)(x_i - \delta_2)(x_i - \delta_3) = 0

\sum_i (x_i - \varepsilon_1)(x_i - \varepsilon_2)(x_i - \varepsilon_3)(x_i - \varepsilon_4) = 0
\sum_i x_i (x_i - \varepsilon_1)(x_i - \varepsilon_2)(x_i - \varepsilon_3)(x_i - \varepsilon_4) = 0
\sum_i x_i^2 (x_i - \varepsilon_1)(x_i - \varepsilon_2)(x_i - \varepsilon_3)(x_i - \varepsilon_4) = 0
\sum_i x_i^3 (x_i - \varepsilon_1)(x_i - \varepsilon_2)(x_i - \varepsilon_3)(x_i - \varepsilon_4) = 0

This system of equations can be solved numerically using an optimisation algorithm, or, as stated in Anenburg and Williams (2022), we can utilise Vieta's formulas to rearrange the complex systems of equations into an analytical solution. The application of Vieta's formulas converts the above systems into simple polynomials whose real roots are the γ1…2, δ1…3, and ε1…4 values. Defining γ1 + γ2 = a and γ1γ2 = b and simplifying, we obtain the following problem in matrix form.

\begin{pmatrix} -\sum x_i & N \\ -\sum x_i^2 & \sum x_i \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} -\sum x_i^2 \\ -\sum x_i^3 \end{pmatrix}

Once solved, a and b are the coefficients used in the quadratic polynomial (Eq. A11) whose real roots are the two parameters γ1 and γ2.

(A11) \gamma^2 - a\gamma + b = 0
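The second-order case is small enough to sketch end to end. The following Python fragment (an illustration of the procedure, not the study's Julia implementation) solves the 2×2 Vieta system for a = γ1 + γ2 and b = γ1γ2 by Cramer's rule, then takes the real roots of Eq. (A11):

```python
import math

def gamma_constants(xs):
    """Second-order orthogonal-polynomial constants gamma1, gamma2: solve the
    2x2 Vieta system for a = g1 + g2 and b = g1 * g2, then return the real
    roots of g^2 - a*g + b = 0 (Eq. A11)."""
    n = len(xs)
    s1 = sum(xs)
    s2 = sum(x ** 2 for x in xs)
    s3 = sum(x ** 3 for x in xs)
    # [[-s1, n], [-s2, s1]] [a, b]^T = [-s2, -s3], solved by Cramer's rule
    det = n * s2 - s1 * s1
    a = (n * s3 - s1 * s2) / det
    b = (s1 * s3 - s2 * s2) / det
    half_disc = math.sqrt(a * a - 4 * b) / 2    # assumes real roots exist
    return a / 2 + half_disc, a / 2 - half_disc
```

For x = (0, 1, 2, 3) this gives a = 3 and b = 1, i.e. γ = (3 ± √5)/2, and the orthogonality sums of the text vanish as required.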

Following the same process, δ1…3 can be solved using Eqs. (A12) and (A13), while ε1…4 can be solved using Eqs. (A14) and (A15).

(A12) \begin{pmatrix} -\sum x_i^2 & \sum x_i & -N \\ -\sum x_i^3 & \sum x_i^2 & -\sum x_i \\ -\sum x_i^4 & \sum x_i^3 & -\sum x_i^2 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} -\sum x_i^3 \\ -\sum x_i^4 \\ -\sum x_i^5 \end{pmatrix}
(A13) \delta^3 - a\delta^2 + b\delta - c = 0
(A14) \begin{pmatrix} -\sum x_i^3 & \sum x_i^2 & -\sum x_i & N \\ -\sum x_i^4 & \sum x_i^3 & -\sum x_i^2 & \sum x_i \\ -\sum x_i^5 & \sum x_i^4 & -\sum x_i^3 & \sum x_i^2 \\ -\sum x_i^6 & \sum x_i^5 & -\sum x_i^4 & \sum x_i^3 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} -\sum x_i^4 \\ -\sum x_i^5 \\ -\sum x_i^6 \\ -\sum x_i^7 \end{pmatrix}
(A15) \varepsilon^4 - a\varepsilon^3 + b\varepsilon^2 - c\varepsilon + d = 0

To account for errors in y (i.e. analyte ratios), we use generalised least squares (Eq. A16) to fit the model, with the errors in y used as weights. As we have not yet implemented input covariances, our model is the special case of generalised least squares known as weighted least squares.

(A16) \text{Let } y = X\lambda + \epsilon, \; \mathrm{E}[\epsilon \mid X] = 0, \text{ and } \mathrm{Cov}(\epsilon \mid X) = \sigma^2 \Omega; \text{ then } \lambda = (X^{\mathsf T} \Omega^{-1} X)^{-1} X^{\mathsf T} \Omega^{-1} y \text{ and } U = (X^{\mathsf T} \Omega^{-1} X)^{-1}

To generate the orthogonal nature of the fit, the design matrix, X, uses the φ1…4 functions from above and is as follows.

X = \begin{pmatrix} 1 & \varphi_1(x_1) & \varphi_2(x_1) & \varphi_3(x_1) & \varphi_4(x_1) \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & \varphi_1(x_n) & \varphi_2(x_n) & \varphi_3(x_n) & \varphi_4(x_n) \end{pmatrix}

Let the individual weights, ω_i, be the absolute error of y_i; then the weight matrix Ω is as follows.

\Omega = \begin{pmatrix} \omega_1^2 & & \\ & \ddots & \\ & & \omega_n^2 \end{pmatrix}

The system can then be solved for the vector of coefficients, λ, using Eq. (A16).

For true generalised least squares, the non-diagonal elements of Ω would be the covariances. Future improvements of the algorithm will implement covariance terms.
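To make the weighted least-squares step concrete, the following Python sketch (our own illustration, restricted to the first-order model with the centred predictor φ1 = x − β and diagonal Ω) solves the 2×2 normal equations XᵀΩ⁻¹Xλ = XᵀΩ⁻¹y directly:

```python
def wls_first_order(xs, ys, errs):
    """Weighted least squares for y = l0 + l1*(x - beta), with
    Omega = diag(err_i^2) as in the text; returns (l0, l1)."""
    beta = sum(xs) / len(xs)                    # Eq. A10: phi1(x) = x - beta
    t = [x - beta for x in xs]                  # centred predictor
    w = [1.0 / e ** 2 for e in errs]            # inverse-variance weights
    sw = sum(w)
    swt = sum(wi * ti for wi, ti in zip(w, t))
    swtt = sum(wi * ti * ti for wi, ti in zip(w, t))
    swy = sum(wi * yi for wi, yi in zip(w, ys))
    swty = sum(wi * ti * yi for wi, ti, yi in zip(w, t, ys))
    # Normal equations [[sw, swt], [swt, swtt]] [l0, l1] = [swy, swty]
    det = sw * swtt - swt * swt
    l0 = (swtt * swy - swt * swty) / det
    l1 = (sw * swty - swt * swy) / det
    return l0, l1
```

The full algorithm extends this to the fourth-order design matrix above; the orthogonal basis keeps the higher-order normal equations well conditioned.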

In the following equations, y_i is the ith value of the observed dependent variable (e.g. analyte ratio), ŷ_i is the ith value of the predicted dependent variable, n is the total number of values, and k is the polynomial order. To assess the quality of fitted models and to assist with choosing an optimal model, we implemented several measures of fit. The standard measures calculated are the residual sum of squares (rss, Eq. A17),

(A17) \mathrm{rss}_k = (\mathbf{y} - \hat{\mathbf{y}}_k)^{\mathsf T}\, \Omega^{-1}\, (\mathbf{y} - \hat{\mathbf{y}}_k),

the mean-square error or reduced chi-squared value in multiple regression (mse or χr2, Eq. A18),

(A18) \chi^2_{r,k} = \mathrm{mse}_k = \frac{\mathrm{rss}_k}{n - (k + 1)},

the root-mean-square error (rmse, Eq. A19),

(A19) \mathrm{rmse}_k = \sqrt{\mathrm{mse}_k},

the normalised root-mean-square error (nrmse, Eq. A20),

(A20) \mathrm{nrmse}_k = \frac{\mathrm{rmse}_k}{\max(y) - \min(y)},

and the multiple regression coefficient (ρ2 or R2, Eq. A21),

(A21) \rho^2 = 1 - \frac{\mathrm{rss}_k}{(\mathbf{y} - \bar{y})^{\mathsf T}\, \Omega^{-1}\, (\mathbf{y} - \bar{y})}.

Additionally, we implement two measures based on Bayesian reasoning and information theory. The first of these is the corrected Bayesian (or Schwarz) information criterion (BICc, Eq. A22) and the corresponding BICc weights (BICcW, Eq. A23) (Burnham and Anderson, 2002; Schwarz, 1978).

(A22) \mathrm{BICc}_k = n \log\frac{\mathrm{rss}}{n} + (k + 2)\log n + \log 2\pi + n
(A23) \mathrm{BICcW}_k = \frac{\exp\left(-0.5\,(\mathrm{BICc}_k - \min \mathrm{BICc})\right)}{\sum_{k=0}^{k_{\max}} \exp\left(-0.5\,(\mathrm{BICc}_k - \min \mathrm{BICc})\right)}

The second is the corrected Akaike information criterion (AICc, Eq. A24) and the corresponding AICc weights (AICcW, Eq. A25) (Akaike, 1974; Burnham and Anderson, 2002).

(A24) \mathrm{AICc}_k = n \log\frac{\mathrm{rss}}{n} + 2(k + 2) + \frac{2\,(k + 2)\,\left((k + 2) + 1\right)}{n - (k + 2) - 1}
(A25) \mathrm{AICcW}_k = \frac{\exp\left(-0.5\,(\mathrm{AICc}_k - \min \mathrm{AICc})\right)}{\sum_{k=0}^{k_{\max}} \exp\left(-0.5\,(\mathrm{AICc}_k - \min \mathrm{AICc})\right)}
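The AICc and its weights are straightforward to compute; the Python sketch below (our notation, with p = k + 2 counting the k + 1 polynomial coefficients plus the variance, and assuming n > p + 1) follows Eqs. (A24)–(A25):

```python
import math

def aicc(n, k, rss):
    """Corrected Akaike information criterion (Eq. A24) for a k-th-order
    polynomial fit to n points; p = k + 2 parameters (coefficients + variance)."""
    p = k + 2
    return n * math.log(rss / n) + 2 * p + 2 * p * (p + 1) / (n - p - 1)

def akaike_weights(scores):
    """Akaike weights (Eq. A25): normalised relative likelihoods of the
    candidate model orders; they always sum to 1."""
    lo = min(scores)
    rel = [math.exp(-0.5 * (s - lo)) for s in scores]
    total = sum(rel)
    return [r / total for r in rel]
```

Because each weight is a monotone transform of the score, the order with the minimum AICc always receives the largest weight, which is what makes the weights convenient for automated model selection.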

Finally, we implement an Olkin–Pratt-adjusted multiple correlation coefficient (ρ̂²_OP, Eq. A26), which is the optimal unbiased estimator of ρ² (Karch, 2020; Olkin and Pratt, 1958).

(A26) \hat{\rho}^2_{\mathrm{OP}}(\rho^2) = 1 - \frac{n - 3}{n - (k + 1) - 1}\,(1 - \rho^2)\; {}_2F_1\left(1, 1; \frac{n - (k + 1) - 1}{2}; 1 - \rho^2\right)

The Olkin–Pratt-adjusted multiple correlation coefficient requires computation of the Gauss hypergeometric function, which is computationally non-trivial; however, Karch (2020) outlines the process, which we implement as Eq. (A27) using the truncated Taylor series expansion (Eq. A28) of the Gauss hypergeometric function (Pearson et al., 2017).

(A27) {}_2F_1(a, b; c; z) = \begin{cases} 0 & \text{if } z = 0 \\ \dfrac{c - 1}{c - 2} & \text{if } z = 1 \\ \sum_{j=0} \dfrac{(a)_j (b)_j}{(c)_j} \dfrac{z^j}{j!} & \text{if } 0 < z < 1 \end{cases}

For the case 0 < z < 1, the Taylor series is truncated either when the ratio of the next term in the series to the current sum of the series is less than or equal to the machine epsilon for a Float64 type or when the number of iterations (and thus terms) reaches 1000 (Eq. A28). This effectively truncates the series when the machine cannot resolve the difference between successive partial sums.

(A28) \text{Let } T_0 = S_0 = 1; \text{ then, for } j = 0, 1, \ldots, \text{ while } \left| \frac{T_{j+1}}{S_j} \right| > \epsilon \text{ and } j < 1000: \quad T_{j+1} = T_j\, \frac{(a + j)(b + j)}{(c + j)}\, \frac{z}{j + 1}, \quad S_{j+1} = S_j + T_{j+1}.
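The series branch of Eq. (A27) with the truncation rule of Eq. (A28) can be sketched as follows (a Python illustration of the recurrence, not the study's Julia code; the z = 0 and z = 1 short-circuits of Eq. A27 are omitted, and the series itself evaluates to 1 at z = 0):

```python
import math
import sys

def hyp2f1_taylor(a, b, c, z, max_terms=1000):
    """Gauss 2F1 via the truncated Taylor series of Eqs. (A27)-(A28) for
    0 <= z < 1: each term is built from the previous one by the ratio
    (a+j)(b+j)/(c+j) * z/(j+1); stop when terms become negligible relative
    to the running sum, or after max_terms terms."""
    term, total = 1.0, 1.0                      # T0 = S0 = 1
    for j in range(max_terms):
        term *= (a + j) * (b + j) / (c + j) * z / (j + 1)
        total += term
        if abs(term) <= abs(total) * sys.float_info.epsilon:
            break
    return total
```

A convenient check is the identity 2F1(1, 1; 2; z) = −ln(1 − z)/z, which the truncated series reproduces to machine precision well inside the unit interval.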

A2 Outlier detection

We implement automated outlier removal based on the studentised residual. An outlier is considered to be any point whose studentised residual has magnitude ≥ 3 in the model with the polynomial order (k) that minimises the AICc. This outlier removal process is computationally intensive, as it requires calculation of the leverages h_ii, which are the diagonal values of the projection matrix. The individual leverages are calculated using Eq. (A29).

(A29) h_{ii} = \left( X\, U\, X^{\mathsf T}\, \Omega^{-1} \right)_{ii}

From the non-studentised residuals (Eq. A30) and the mean square error (mse, Eq. A18), the studentised residuals are calculated using Eq. (A31).

(A30) r_i = y_i - \hat{y}_i
(A31) s_i = \frac{r_i}{\sqrt{\mathrm{mse}\,(1 - h_{ii})}}

If the user chooses this automated outlier removal, the algorithm will loop until either (a) no studentised residuals have magnitude ≥ 3 or (b) the loop has performed 10 iterations.

Appendix B: Laser conditions

Table B1. LA-ICP-MS parameters.


Table B2. Summary of laser conditions and materials analysed in each session.


Code and data availability

The Julia package which implements the above algorithms is in early development; however, it is available to all via GitHub at github.com/jarredclloyd/GeochemistryTools.jl (DOI: https://doi.org/10.5281/zenodo.16624256, Lloyd, 2025). Raw and derived data used in this study are freely available from figshare at https://doi.org/10.25909/26778298 (Lloyd and Gilbert, 2024). Code used to compile the raw data, fit the data, and generate the figures in this paper is freely available from figshare at https://doi.org/10.25909/26779255.v3 (Lloyd, 2024b).

Supplementary Figs. S01 and S02, detailing the automatic signal time algorithm employed in this paper and example data it was tested on, are available at https://doi.org/10.25909/27041821 (Lloyd, 2024a). Additional figures showing the individual analysis signals are also available from figshare at https://doi.org/10.25909/26778592.v1 (Lloyd, 2024c). A plot exists for each sample for each session, with the arbitrary colours of each plot representing individual analyses.

Author contributions

JCL: conceptualisation, data curation, formal analysis, investigation, methodology, resources, software, validation, visualisation, writing (original draft), writing (review and editing). CS: conceptualisation, resources, supervision, writing (review and editing). SEG: formal analysis, methodology, resources, writing (review and editing). DH: methodology, validation, writing (review and editing).

Competing interests

The contact author has declared that none of the authors has any competing interests.

Disclaimer

Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.

Acknowledgements

We acknowledge and pay respects to the Kaurna People, the traditional custodians whose ancestral lands the University of Adelaide is built on, and we work on. We acknowledge the deep feelings of attachment and relationship of the Kaurna people to Country, and we respect and value their past, present, and ongoing connection to the land and cultural beliefs.

The authors also acknowledge the Tate Museum and South Australian Museum for the provision of some sample material used in this study.

We thank the handling editor (Klaus Mezger), handling associate editor (Axel Schmitt), and referees (Noah McLean and one anonymous referee) for their constructive commentary that improved the manuscript.

Paul Olin and Mark Chatfield are thanked for their conversations regarding some concepts presented in this paper.

Financial support

This research was co-funded by the Australian Critical Minerals Research Centre at the University of Adelaide and the Department for Energy and Mining, South Australia.

Review statement

This paper was edited by Axel Schmitt and reviewed by Noah M. McLean and one anonymous referee.

References

Agatemor, C. and Beauchemin, D.: Matrix Effects in Inductively Coupled Plasma Mass Spectrometry: A Review, Anal. Chim. Acta, 706, 66–83, https://doi.org/10.1016/j.aca.2011.08.027, 2011.

Akaike, H.: A New Look at the Statistical Model Identification, IEEE T. Automat. Contr., 19, 716–723, https://doi.org/10.1109/TAC.1974.1100705, 1974.

Allen, C. M. and Campbell, I. H.: Identification and Elimination of a Matrix-Induced Systematic Error in LA–ICP–MS 206Pb/238U Dating of Zircon, Chem. Geol., 332–333, 157–165, https://doi.org/10.1016/j.chemgeo.2012.09.038, 2012.

Anenburg, M. and Williams, M. J.: Quantifying the Tetrad Effect, Shape Components, and Ce–Eu–Gd Anomalies in Rare Earth Element Patterns, Math. Geosci., 54, 47–70, https://doi.org/10.1007/s11004-021-09959-5, 2022.

Bevington, P. R. and Robinson, D. K.: Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, Boston, 3rd edn., ISBN 978-0-07-247227-1, 2003.

Bezanson, J., Edelman, A., Karpinski, S., and Shah, V. B.: Julia: A Fresh Approach to Numerical Computing, SIAM Review, 59, 65–98, https://doi.org/10.1137/141000671, 2017.

Bockmann, M. J., Hand, M., Morrissey, L. J., Payne, J. L., Hasterok, D., Teale, G., and Conor, C.: Punctuated Geochronology within a Sustained High-Temperature Thermal Regime in the Southeastern Gawler Craton, Lithos, 430–431, 106860, https://doi.org/10.1016/j.lithos.2022.106860, 2022.

Burnham, K. P. and Anderson, D. R.: Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach, Springer, New York, 2nd edn., ISBN 978-0-387-95364-9, 2002.

Chatfield, M. D., Marquart-Wilson, L., Dobson, A. J., and Farewell, D. M.: Mean Relative Error and Standard Relative Deviation, Statistica Neerlandica, 79, e70001, https://doi.org/10.1111/stan.70001, 2025.

Chew, D. M., Drost, K., and Petrus, J. A.: Ultrafast, > 50 Hz LA-ICP-MS Spot Analysis Applied to U–Pb Dating of Zircon and Other U-Bearing Minerals, Geostand. Geoanal. Res., 43, 39–60, https://doi.org/10.1111/ggr.12257, 2019.

Crameri, F., Shephard, G. E., and Heron, P. J.: The Misuse of Colour in Science Communication, Nat. Commun., 11, 5444, https://doi.org/10.1038/s41467-020-19160-7, 2020.

Danisch, S. and Krumbiegel, J.: Makie.jl: Flexible High-Performance Data Visualization for Julia, J. Open Source Softw., 6, 3349, https://doi.org/10.21105/joss.03349, 2021.

de Carvalho, M.: Mean, What Do You Mean?, Am. Stat., 70, 270–274, https://doi.org/10.1080/00031305.2016.1148632, 2016.

de la Cruz, R. and Kreft, J.-U.: Geometric Mean Extension for Data Sets with Zeros, arXiv [preprint], https://doi.org/10.48550/ARXIV.1806.06403, 2018.

Feng, C., Hongyue, W., Yun, Z., Yu, H., Yuefeng, L., and Tu, X. M.: Generalized Definition of the Geometric Mean of a Non Negative Random Variable, Commun. Stat. A-Theor., 46, 3614–3620, https://doi.org/10.1080/03610926.2015.1066818, 2017.

Fernández, J.-J., García, I., and Garzón, E. M.: Floating Point Arithmetic Teaching for Computational Science, Future Gener. Comp. Sy., 19, 1321–1334, https://doi.org/10.1016/S0167-739X(03)00090-6, 2003.

Fletcher, I. R., McNaughton, N. J., Aleinikoff, J. A., Rasmussen, B., and Kamo, S. L.: Improved Calibration Procedures and New Standards for U–Pb and Th–Pb Dating of Phanerozoic Xenotime by Ion Microprobe, Chem. Geol., 209, 295–314, https://doi.org/10.1016/j.chemgeo.2004.06.015, 2004.

Gain, S. E. M., Gréau, Y., Henry, H., Belousova, E., Dainis, I., Griffin, W. L., and O'Reilly, S. Y.: Mud Tank Zircon: Long-Term Evaluation of a Reference Material for U-Pb Dating, Hf-Isotope Analysis and Trace Element Analysis, Geostand. Geoanal. Res., 43, 339–354, https://doi.org/10.1111/ggr.12265, 2019.

Gehrels, G. E., Valencia, V. A., and Ruiz, J.: Enhanced Precision, Accuracy, Efficiency, and Spatial Resolution of U-Pb Ages by Laser Ablation-Multicollector-Inductively Coupled Plasma-Mass Spectrometry, Geochem. Geophys. Geosyst., 9, Q03017, https://doi.org/10.1029/2007gc001805, 2008.

Gilbert, S. E., Olin, P., Thompson, J., Lounejeva, E., and Danyushevsky, L.: Matrix Dependency for Oxide Production Rates by LA-ICP-MS, J. Anal. Atom. Spectrom., 32, 638–646, https://doi.org/10.1039/C6JA00395H, 2017.

Glorie, S., March, S., Nixon, A., Meeuws, F., O`Sullivan, G. J., Chew, D. M., Kirkland, C. L., Konopelko, D., and De Grave, J.: Apatite U–Pb Dating and Geochemistry of the Kyrgyz South Tian Shan (Central Asia): Establishing an Apatite Fingerprint for Provenance Studies, Geosci. Front., 11, 2003–2015, https://doi.org/10.1016/j.gsf.2020.06.003, 2020.

Glorie, S., Mulder, J., Hand, M., Fabris, A., Simpson, A., and Gilbert, S.: Laser Ablation (in Situ) Lu-Hf Dating of Magmatic Fluorite and Hydrothermal Fluorite-Bearing Veins, Geosci. Front., 14, 101629, https://doi.org/10.1016/j.gsf.2023.101629, 2023. a

Guillong, M. and Günther, D.: Effect of Particle Size Distribution on ICP-induced Elemental Fractionation in Laser Ablation-Inductively Coupled Plasma-Mass Spectrometry, J. Analyt. Atom. Spectrom., 17, 831–837, https://doi.org/10.1039/B202988J, 2002. a, b, c

Günther, D., Frischknecht, R., Heinrich, C. A., and Kahlert, H.-J.: Capabilities of an Argon Fluoride 193 Nm Excimer Laser for LaserAblation Inductively Coupled Plasma Mass Spectometry Microanalysis ofGeological Materials, J. Analyt. Atom. Spectrom., 12, 939–944, https://doi.org/10.1039/A701423F, 1997. a

Günther, D., v. Quadt, A., Wirz, R., Cousin, H., and Dietrich, V. J.: Elemental Analyses Using Laser Ablation-Inductively Coupled Plasma-Mass Spectrometry (LA-ICP-MS) of Geological Samples Fused with Li2B4O7 and Calibrated Without Matrix-Matched Standards, Microchim. Acta, 136, 101–107, https://doi.org/10.1007/s006040170038, 2001. a

Habib, E. A. E.: Geometric Mean for Negative and Zero Values, International Journal of Research and Reviews in Applied Sciences, 11, 419–432, 2012. a, b, c, d

Hall, J. W., Glorie, S., Reid, A. J., Boone, S. C., Collins, A. S., and Gleadow, A.: An Apatite U–Pb Thermal History Map for the Northern Gawler Craton, South Australia, Geosci. Front., 9, 1293–1308, https://doi.org/10.1016/j.gsf.2017.12.010, 2018. a

Hergenröder, R.: A Model of Non-Congruent Laser Ablation as a Source of Fractionation Effects in LA-ICP-MS, J. Analyt. Atom. Spectrom., 21, 505–516, https://doi.org/10.1039/B600698A, 2006a. a, b

Hergenröder, R.: Hydrodynamic Sputtering as a Possible Source for Fractionation in LA-ICP-MS, J. Analyt. Atom. Spectrom., 21, 517–524, https://doi.org/10.1039/B600705H, 2006b. a

Hirata, T. and Nesbitt, R. W.: U-Pb Isotope Geochronology of Zircon: Evaluation of the Laser Probe-Inductively Coupled Plasma Mass Spectrometry Technique, Geochim. Cosmochim. Ac., 59, 2491–2500, https://doi.org/10.1016/0016-7037(95)00144-1, 1995. a

Hogmalm, K. J., Zack, T., Karlsson, A. K. O., Sjöqvist, A. S. L., and Garbe-Schönberg, D.: In Situ Rb–Sr and K–Ca Dating by LA-ICP-MS/MS: An Evaluation of N2O and SF6 as Reaction Gases, J. Analyt. Atom. Spectrom., 32, 305–313, https://doi.org/10.1039/c6ja00362a, 2017. a

Horn, I., Rudnick, R. L., and McDonough, W. F.: Precise Elemental and Isotope Ratio Determination by Simultaneous Solution Nebulization and Laser Ablation-ICP-MS: Application to U–Pb Geochronology, Chem. Geol., 164, 281–301, https://doi.org/10.1016/S0009-2541(99)00168-0, 2000. a, b, c

Horstwood, M. S. A., Foster, G. L., Parrish, R. R., Noble, S. R., and Nowell, G. M.: Common-Pb Corrected in Situ U–Pb Accessory Mineral Geochronology by LA-MC-ICP-MS, J. Analyt. Atom. Spectrom., 18, 837–846, https://doi.org/10.1039/B304365G, 2003. a

Horstwood, M. S. A., Košler, J., Gehrels, G. E., Jackson, S. E., McLean, N. M., Paton, C., Pearson, N. J., Sircombe, K. N., Sylvester, P., Vermeesch, P., Bowring, J. F., Condon, D. J., and Schoene, B.: Community-Derived Standards for LA-ICP-MS U-(Th-)Pb Geochronology - Uncertainty Propagation, Age Interpretation and Data Reporting, Geostand. Geoanal. Res., 40, 311–332, https://doi.org/10.1111/j.1751-908X.2016.00379.x, 2016. a, b

Karch, J.: Improving on Adjusted R-Squared, Collabra: Psychology, 6, 45, https://doi.org/10.1525/collabra.343, 2020. a, b, c

Kendall-Langley, L. A., Kemp, A. I. S., Grigson, J. L., and Hammerli, J.: U-Pb and Reconnaissance Lu-Hf Isotope Analysis of Cassiterite and Columbite Group Minerals from Archean Li-Cs-Ta Type Pegmatites of Western Australia, Lithos, 352–353, 105231, https://doi.org/10.1016/j.lithos.2019.105231, 2020. a

Kirkwood, T. B. L.: Geometric Means and Measures of Dispersion, Biometrics, 35, 908–909, 1979. a, b

Košler, J., Wiedenbeck, M., Wirth, R., Hovorka, J., Sylvester, P., and Míková, J.: Chemical and Phase Composition of Particles Produced by Laser Ablation of Silicate Glass and Zircon – Implications for Elemental Fractionation during ICP-MS Analysis, J. Anal. At. Spectrom., 20, 402–409, https://doi.org/10.1039/B416269B, 2005. a, b, c, d

Kroslakova, I. and Günther, D.: Elemental Fractionation in Laser Ablation-Inductively Coupled Plasma-Mass Spectrometry: Evidence for Mass Load Induced Matrix Effects in the ICP during Ablation of a Silicate Glass, J. Analyt. Atom. Spectrom., 22, 51–62, https://doi.org/10.1039/B606522H, 2006. a

Larson, K. P., Dyck, B., Shrestha, S., Button, M., and Najman, Y.: On the viability of detrital biotite Rb–Sr geochronology, Geochronology, 6, 303–312, https://doi.org/10.5194/gchron-6-303-2024, 2024. a

Liu, Z., Wu, F., Guo, C., Zhao, Z., Yang, J., and Sun, J.: In Situ U-Pb Dating of Xenotime by Laser Ablation (LA)-ICP-MS, Chinese Sci. Bull., 56, 2948–2956, https://doi.org/10.1007/s11434-011-4657-y, 2011. a

Lloyd, J. C.: Supplementary figures – the quantification of downhole fractionation for laser ablation mass spectrometry, Figshare (The University of Adelaide) [data set], https://doi.org/10.25909/27041821, 2024a. a, b, c

Lloyd, J. C.: Julia scripts – The quantification of downhole fractionation for laser ablation mass spectrometry, Figshare (The University of Adelaide) [software], https://doi.org/10.25909/26779255.v3, 2024b. a

Lloyd, J. C.: Supplementary analyte signal figures – The quantification of downhole fractionation for laser ablation mass spectrometry, Figshare (The University of Adelaide) [data set], https://doi.org/10.25909/26778592.v1, 2024c. a

Lloyd, J. C.: GeochemistryTools.jl: v0.0.3-dev, Zenodo [software], https://doi.org/10.5281/zenodo.16624256, 2025. a

Lloyd, J. C. and Gilbert, S.: Raw and derived data: The quantification of downhole fractionation for laser ablation mass spectrometry, The University of Adelaide [data set], https://doi.org/10.25909/26778298, 2024. a

Lloyd, J. C., Blades, M. L., Counts, J. W., Collins, A. S., Amos, K. J., Wade, B. P., Hall, J. W., Hore, S., Ball, A. L., Shahin, S., and Drabsch, M.: Neoproterozoic Geochronology and Provenance of the Adelaide Superbasin, Precambrian Res., 350, 105849, https://doi.org/10.1016/j.precamres.2020.105849, 2020. a

Lloyd, J. C., Collins, A. S., Blades, M. L., Gilbert, S. E., and Amos, K. J.: Early Evolution of the Adelaide Superbasin, Geosciences, 12, 154, https://doi.org/10.3390/geosciences12040154, 2022. a, b

Lloyd, J. C., Preiss, W. V., Collins, A. S., Virgo, G. M., Blades, M. L., Gilbert, S. E., Subarkah, D., Krapf, C. B. E., and Amos, K. J.: Geochronology and Formal Stratigraphy of the Sturtian Glaciation in the Adelaide Superbasin, Geol. Mag., 160, 1321–1344, https://doi.org/10.1017/S0016756823000390, 2023. a

Lloyd, J. C., Collins, A. S., Blades, M. L., Gilbert, S. E., Mulder, J. A., and Amos, K. J.: Mid- to Late Neoproterozoic Development and Provenance of the Adelaide Superbasin, τeκτoniκa, 2, 85–111, https://doi.org/10.55575/tektonika2024.2.2.65, 2024. a

Mank, A. J. G. and Mason, P. R. D.: A Critical Assessment of Laser Ablation ICP-MS as an Analytical Tool for Depth Analysis in Silica-Based Glass Samples, J. Analyt. Atom. Spectrom., 14, 1143–1153, https://doi.org/10.1039/A903304A, 1999. a, b, c

Marillo-Sialer, E., Woodhead, J., Hergt, J., Greig, A., Guillong, M., Gleadow, A., Evans, N., and Paton, C.: The Zircon `Matrix Effect': Evidence for an Ablation Rate Control on the Accuracy of U–Pb Age Determinations by LA-ICP-MS, J. Analyt. Atom. Spectrom., 29, 981–989, https://doi.org/10.1039/c4ja00008k, 2014. a

McFarlane, C. R. M.: Allanite UPb Geochronology by 193   Nm LA ICP-MS Using NIST610 Glass for External Calibration., Chem. Geol., 438, 91–102, https://doi.org/10.1016/j.chemgeo.2016.05.026, 2016. a

McInnes, L., Healy, J., and Melville, J.: UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, arXiv [preprint], https://doi.org/10.48550/arXiv.1802.03426, 2020. a, b

McLean, N. M., Bowring, J. F., and Gehrels, G.: Algorithms and Software for U-Pb Geochronology by LA-ICPMS, Geochem. Geophys. Geosyst., 17, 2480–2496, https://doi.org/10.1002/2015GC006097, 2016. a, b

Mohammadi, N., Lentz, D. R., Thorne, K. G., Walker, J., Rogers, N., Cousens, B., and McFarlane, C. R. M.: U-Pb and Re-Os Geochronology and Lithogeochemistry of Granitoid Rocks from the Burnthill Brook Area in Central New Brunswick, Canada: Implications for Critical Mineral Exploration, Geochemistry, 84, 126087, https://doi.org/10.1016/j.chemer.2024.126087, 2024. a

Norris, A. and Danyushevsky, L.: Towards Estimating the Complete Uncertainty Budget of Quantified Results Measured by LA-ICP-MS, Goldschmidt, Boston, 2018. a

Olkin, I. and Pratt, J. W.: Unbiased Estimation of Certain Correlation Coefficients, Ann. Math. Stat., 29, 201–211, https://doi.org/10.1214/aoms/1177706717, 1958. a, b

O'Neill, H. S. C.: The Smoothness and Shapes of Chondrite-normalized Rare Earth Element Patterns in Basalts, J. Petrol., 57, 1463–1508, https://doi.org/10.1093/petrology/egw047, 2016. a, b, c, d, e, f

Paton, C., Woodhead, J. D., Hellstrom, J. C., Hergt, J. M., Greig, A., and Maas, R.: Improved Laser Ablation U-Pb Zircon Geochronology through Robust Downhole Fractionation Correction, Geochem. Geophys. Geosyst., 11, Q0AA06, https://doi.org/10.1029/2009GC002618, 2010. a, b, c, d, e, f

Paton, C., Hellstrom, J., Paul, B., Woodhead, J., and Hergt, J.: Iolite: Freeware for the Visualisation and Processing of Mass Spectrometric Data, J. Analyt. Atom. Spectrom., 26, 2508–2518, https://doi.org/10.1039/C1JA10172B, 2011. a

Payne, J. L., Hand, M., Barovich, K. M., and Wade, B. P.: Temporal Constraints on the Timing of High-Grade Metamorphism in the Northern Gawler Craton: Implications for Assembly of the Australian Proterozoic, Austr. J. Earth Sci., 55, 623–640, https://doi.org/10.1080/08120090801982595, 2008. a

Pearson, J. W., Olver, S., and Porter, M. A.: Numerical Methods for the Computation of the Confluent and Gauss Hypergeometric Functions, Numer. Algorithms, 74, 821–866, https://doi.org/10.1007/s11075-016-0173-0, 2017. a

Polhill, J. G., Izquierdo, L. R., and Gotts, N. M.: What Every Agent-Based Modeller Should Know about Floating Point Arithmetic, Environ. Model. Softw., 21, 283–309, https://doi.org/10.1016/j.envsoft.2004.10.011, 2006. a

Roberts, N. M. W., Drost, K., Horstwood, M. S. A., Condon, D. J., Chew, D., Drake, H., Milodowski, A. E., McLean, N. M., Smye, A. J., Walker, R. J., Haslam, R., Hodson, K., Imber, J., Beaudoin, N., and Lee, J. K.: Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) U–Pb carbonate geochronology: strategies, progress, and limitations, Geochronology, 2, 33–61, https://doi.org/10.5194/gchron-2-33-2020, 2020. a

Schwarz, G.: Estimating the Dimension of a Model, Ann. Stat., 6, 461–464, https://doi.org/10.1214/aos/1176344136, 1978. a

Simpson, A., Gilbert, S., Tamblyn, R., Hand, M., Spandler, C., Gillespie, J., Nixon, A., and Glorie, S.: In-Situ Lu Hf Geochronology of Garnet, Apatite and Xenotime by LA ICP MS/MS, Chem. Geol., 577, 120299, https://doi.org/10.1016/j.chemgeo.2021.120299, 2021. a

Sláma, J., Košler, J., Condon, D. J., Crowley, J. L., Gerdes, A., Hanchar, J. M., Horstwood, M. S. A., Morris, G. A., Nasdala, L., Norberg, N., Schaltegger, U., Schoene, B., Tubrett, M. N., and Whitehouse, M. J.: Plešovice Zircon – A New Natural Reference Material for U–Pb and Hf Isotopic Microanalysis, Chem. Geol., 249, 1–35, https://doi.org/10.1016/j.chemgeo.2007.11.005, 2008. a

Solari, L. A., Ortega-Obregón, C., and Bernal, J. P.: U–Pb Zircon Geochronology by LAICPMS Combined with Thermal Annealing: Achievements in Precision and Accuracy on Dating Standard and Unknown Samples, Chem. Geol., 414, 109–123, https://doi.org/10.1016/j.chemgeo.2015.09.008, 2015. a

Spandler, C., Hammerli, J., Sha, P., Hilbert-Wolf, H., Hu, Y., Roberts, E., and Schmitz, M.: MKED1: A New Titanite Standard for in Situ Analysis of Sm–Nd Isotopes and U–Pb Geochronology, Chem. Geol., 425, 110–126, https://doi.org/10.1016/j.chemgeo.2016.01.002, 2016. a

Subarkah, D., Blades, M. L., Collins, A. S., Farkaš, J., Gilbert, S., Löhr, S. C., Redaa, A., Cassidy, E., and Zack, T.: Unraveling the Histories of Proterozoic Shales through in Situ Rb-Sr Dating and Trace Element Laser Ablation Analysis, Geology, 50, 66–70, https://doi.org/10.1130/G49187.1, 2021. a

Sylvester, P. J. and Ghaderi, M.: Trace Element Analysis of Scheelite by Excimer Laser Ablation-Inductively Coupled Plasma-Mass Spectrometry (ELA-ICP-MS) Using a Synthetic Silicate Glass Standard, Chem. Geol., 141, 49–65, https://doi.org/10.1016/S0009-2541(97)00057-0, 1997. a

Tamblyn, R., Gilbert, S., Glorie, S., Spandler, C., Simpson, A., Hand, M., Hasterok, D., Ware, B., and Tessalina, S.: Molybdenite Reference Materials for In Situ LA-ICP-MS/MS Re-Os Geochronology, Geostand. Geoanal. Res., 48, 393–410, https://doi.org/10.1111/ggr.12550, 2024. a

Thompson, J. M., Meffre, S., Maas, R., Kamenetsky, V., Kamenetsky, M., Goemann, K., Ehrig, K., and Danyushevsky, L.: Matrix Effects in Pb/U Measurements during LA-ICP-MS Analysis of the Mineral Apatite, J. Anal. Atom. Spectrom., 31, 1206–1215, https://doi.org/10.1039/C6JA00048G, 2016. a

Thompson, J. M., Meffre, S., and Danyushevsky, L.: Impact of Air, Laser Pulse Width and Fluence on U–Pb Dating of Zircons by LA-ICPMS, J. Analyt. Atom. Spectrom., 33, 221–230, https://doi.org/10.1039/C7JA00357A, 2018. a

van der Wolff, E. J.: Detrital Provenance and Geochronology of the Burra, Umberatana and Wilpena Groups in the Mount Lofty Ranges, Honours Thesis, The University of Adelaide, Adelaide, South Australia, 2020. a

Ver Hoeve, T. J., Scoates, J. S., Wall, C. J., Weis, D., and Amini, M.: Evaluating Downhole Fractionation Corrections in LA-ICP-MS U-Pb Zircon Geochronology, Chem. Geol., 483, 201–217, https://doi.org/10.1016/j.chemgeo.2017.12.014, 2018. a, b

Wiedenbeck, M., Hanchar, J. M., Peck, W. H., Sylvester, P., Valley, J., Whitehouse, M., Kronz, A., Morishita, Y., Nasdala, L., Fiebig, J., Franchi, I., Girard, J. P., Greenwood, R. C., Hinton, R., Kita, N., Mason, P. R. D., Norman, M., Ogasawara, M., Piccoli, P. M., Rhede, D., Satoh, H., Schulz-Dobrick, B., Skår, O., Spicuzza, M. J., Terada, K., Tindle, A., Togashi, S., Vennemann, T., Xie, Q., and Zheng, Y. F.: Further Characterisation of the 91500 Zircon Crystal, Geostand. Geoanal. Res., 28, 9–39, https://doi.org/10.1111/j.1751-908X.2004.tb01041.x, 2004.  a

Yang, C.-X., Santosh, M., Lloyd, J., Glorie, S., Anilkumar, Y., Anoop, K. S., Gao, P., and Kim, S.-W.: Breakup of the Neoarchean Supercontinent Kenorland: Evidence from Zircon and Baddeleyite U-Pb Ages of LIP-related Mafic Dykes in the Coorg Block, Southern India, Geosci. Front., 15, 101804, https://doi.org/10.1016/j.gsf.2024.101804, 2024. a, b

Zack, T., Stockli, D. F., Luvizotto, G. L., Barth, M. G., Belousova, E., Wolfe, M. R., and Hinton, R. W.: In Situ U–Pb Rutile Dating by LA-ICP-MS: 208Pb Correction and Prospects for Geological Applications, Contrib. Mineral. Petr., 162, 515–530, https://doi.org/10.1007/s00410-011-0609-4, 2011. a

Zhang, D. K.: Dzhang314/MultiFloats.Jl, https://github.com/dzhang314/MultiFloats.jl, last access: 17 April 2024. a

Short summary
Laser-based dating of rocks and minerals is invaluable in geoscience. This study significantly advances our ability to model and correct for a process called down-hole fractionation (DHF), which affects the accuracy and uncertainty of dates. We develop an algorithm that quantitatively models DHF patterns and can also be applied to other geoscientific data. The implications are far-reaching: improved accuracy, reduced uncertainty, and easier comparisons between different samples and laboratories.