the Creative Commons Attribution 4.0 License.
A method for quantifying the time of cooling in thermochronometric inversions
Abstract. Reconstructing geological processes and events from thermochronometric data typically requires the interpretation of time-temperature path ensembles calculated by inverse methods. Commonly, this may be as simple as associating heating or cooling in thermal histories with specific geologic events and indirectly "dating" such events by estimating the time of observed heating or cooling. While visual assessments may suffice in the simplest cases, statistical comparison requires quantitative estimations of the time of cooling. This study presents a straightforward methodology wherein we ascertain the time of peak cooling for the entire cooling signal within a thermal history model. The focus is on the time-temperature paths intersecting the half-maximum cooling isotherm, where the full distribution of interpolated model times at that isotherm provides a quantitative metric for the characteristic "time of cooling". We apply this method to thermochronologic inversions of synthetic and natural examples, demonstrating its practicality and functionality. This systematic approach provides an effective means of quantitatively reporting the peak cooling time from thermal history inversions.
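The core operation described in the abstract — interpolating the time at which each time-temperature path in an inversion ensemble crosses the half-maximum cooling isotherm — can be sketched in a few lines. This is a minimal illustration of the idea only, with an assumed array layout (it is not the authors' released code, and the QTQt output format is not reproduced here):

```python
import numpy as np

def fdhm_times(time_grid, temp_paths, t_hot, t_cold):
    """Sketch of the FDHM idea: find when each t-T path crosses the
    half-maximum cooling isotherm.

    Assumed (hypothetical) data layout:
      time_grid  : 1-D array of model times in Ma, ascending from the
                   present (0 Ma) into the past
      temp_paths : 2-D array (n_paths, n_times); each path cools
                   monotonically toward the present, so temperature
                   increases along time_grid
      t_hot, t_cold : temperatures (deg C) bounding the cooling event
    """
    half_max_isotherm = 0.5 * (t_hot + t_cold)  # e.g. 240 -> 40 C gives 140 C
    # Interpolate each path's crossing time at the half-max isotherm
    crossings = np.array([np.interp(half_max_isotherm, path, time_grid)
                          for path in temp_paths])
    # The full distribution of crossing times is the FDHM; its mode or
    # median can be reported as the characteristic "time of cooling".
    return half_max_isotherm, crossings
```

For non-monotonic paths (e.g. reheating before cooling) the crossing would need to be restricted to the cooling segment of interest, which this sketch does not handle.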
Status: final response (author comments only)
RC1: 'Comment on gchron-2024-3', Kendra Murray, 31 May 2024
GENERAL COMMENTS
This manuscript describes a quantitative method for extracting the timing of rock cooling from a QTQt inversion, given a time period of interest that encompasses a simple cooling event. Specifically, this approach documents the timing of the maximum rate of cooling, which is different than other common metrics for characterizing the timing or magnitude of thermal features in inversion results (such as the timing of the start of cooling, or the rate of cooling). This timing of the peak cooling rate is inferred to occur at the midpoint of the cooling event, as tT histories pass through a ‘half-maximum isotherm’. The distribution of times when each thermal history solution passes through this isotherm is termed the Full time-Distribution at Half-Maximum temperature (FDHM), i.e., the timing of peak cooling. The source code is available in GitHub and was originally developed and used to interpret data in a publication from 2022 by the same authors. To demonstrate how this tool works, the authors use several synthetic examples. Then, they discuss how the FDHM could be used to address geological questions using their previously published deep-time inversion result from the Canadian Shield and new modeling of published data that document the recent cooling of the Bergell pluton in the Alps.
There is a need for this kind of contribution, which extracts an inverse model analysis tool developed for use in a recent paper (McDannell and Keller, 2022) and makes it more accessible to the broader community. I think that with moderate revisions, this manuscript will provide a valuable entry point for anyone who wants to add this method to their thermal history modeling toolkit. Below I give an overview of the specific areas where I think revision or additional discussion would strengthen this manuscript’s description and demonstration of the method; in the accompanying PDF, I flag a number of additional areas where the authors might consider rephrasing and revision. I hope the authors find this feedback useful. I look forward to taking this method out for a spin!
Kendra Murray
SPECIFIC COMMENTS (see also PDF)
- What is the geological significance of “the timing of peak cooling”? The mathematical elegance of the FDHM is clear, but that doesn't mean it has inherent or intuitive geological significance. To me, this metric doesn’t intuitively relate to a part of a geological process, and so it is much less compelling than the starting, ending, or rate of cooling; perhaps this is because it is a novel way of thinking about cooling events. Although the geological examples help somewhat, I find the synthetic histories and accompanying discussion significantly less helpful on this front, for a few reasons. Revising how the synthetic examples are leveraged and discussed could support a more compelling argument for how and when to use the FDHM metric.
- Line 155 described the cooling rates chosen in the synthetic tT histories as “arbitrary” (see comment in pdf about clarifying the meaning of this sentence). If that is the intended meaning, choosing ‘arbitrary’ cooling rates is unsatisfying, because it isn’t clear if and how they are relevant to real cooling histories rocks commonly experience in nature, and thus how to connect these synthetic examples to the real world. Synthetic tT histories have the most value if they are designed to mimic something in nature (exactly what depends on the context, of course).
- The synthetic examples in Fig 2 have constant cooling rates during the time interval of interest, so describing the FDHM as the timing of the peak cooling rate during 200˚C of cooling is a bit misleading, because the rate is actually constant during the entire event. In this way, the FDHM value will not always document the timing of the “peak” cooling rate even in the simplest circumstances, like when rates are constant for long periods of time; instead, it is the maximum rate, but also the sustained rate, which has different geological implications. Is there a particular style of tT history that the FDHM approach is well-suited for? If so, the manuscript would be strengthened by more discussion, and perhaps demonstration, of this.
- Figure 1 nicely demonstrates why three cooling curves can be described mathematically by the same FDHM value, and the synthetic tT histories (Fig 2) demonstrate how this works with synthetic data and inversion results. But…how are the FDHM values derived from these models helpful for answering geological questions, if we get the same values for different thermal histories? Don’t we want to be able to clearly distinguish among tT histories with such differences? Let’s take the two synthetic tT histories. Cooling 200˚C over 20 Myr (10˚C/Myr cooling rate) suggests a radically different process, operating over an order of magnitude shorter timescale, compared to cooling 200˚C over 200 Myr (1˚C/Myr). Yet they yield the same FDHM value (and it’s clear why), so I’m left wondering: what do I learn from this FDHM ‘cooling time’?
- As discussed starting at line 60, cooling rates are one of the key ways we relate tT model results to geological processes. So, how is a metric that is agnostic about cooling rate, such as FDHM, helpful? A more compelling pitch would be useful. Arguably the 700 Ma FDHM ‘peak cooling time’ from the synthetic history with 1˚C/Myr cooling rate has little geological significance; it’s just the middle of a protracted, steady, slow cooling event. In contrast, I can see how for more rapid cooling events, this ‘peak cooling time’ could be geologically significant, e.g., be used to relate cooling to some geological event (as in the real examples). However, the sentence at lines 70-72 seems to suggest that the FDHM approach is a good fit for regions that have experienced slow cooling; it is not clear to me why this would be true.
- Addressing some of these questions can be in part accomplished by describing what other thermal information one would need in order to make a robust geological interpretation of the FDHM ‘peak cooling time’, in addition to what is discussed in Section 3.4. The cooling rate? Other metrics for quantifying a cooling event from a model result? Is this only useful for rapid cooling? Doing so would really help future users avoid treating this as a shiny black box that spits out ‘cooling times’.
- Finally, in a case where the FDHM cooling time is geologically significant and one wants to relate this cooling time to some event or process…how exactly should we be thinking about this relationship? The challenge of translating thermal information back into a geological framework has been a core interpretive challenge for low-T thermochron for decades now—cooling start, end, and rate are also not entirely intuitive to relate to processes like erosion, but the community has just spent a lot of time on this problem such that the fundamentals are well established—and readers are going to be accustomed to thinking about this in the common approaches used to characterize thermal events. For example, hypothesis testing with thermochronology can leverage start/stop times of processes that produce cooling signals, for example ‘cooling started after 5 Ma in the model result, and therefore the start of process X, which we know started at 15 Ma from independent information, cannot be responsible’. So, what should we do with ‘peak cooling timing’ information? It seems to require a different approach to hypothesis testing and conceptual models about the thermal consequences of various processes. I think we have much less practice thinking about this, and some guidance from the authors would provide a solid foundation to build upon. Also, importantly, what do the authors think we should not do with FDHM peak cooling timing?
- The way the half-maximum isotherm is determined from an inversion result is not explained, and it also appears to lack the quantitative rigor of the other elements of the FDHM calculation; the consequences of this are not discussed. A key piece of the FDHM approach is the need to quantify the ‘total cooling magnitude’ of the cooling event of interest, in order to identify the half-maximum isotherm that is then used as the reference point for quantifying the timing of cooling through that temperature. This manuscript would be more impactful, and the method more accessible to potential users, if this part of the workflow and its potential limitations were discussed in additional detail. The results of modeling synthetic data produced from known (‘true’) tT paths presented in Figure 2, and the discussion of those results at lines 191-204, exemplify how this part of the workflow could be better explained. Line 196 simply says: “A key feature of the inversions is that the total cooling magnitude was accurately recovered,” but it does not then compare some total cooling magnitude retrieved from the inversion result to the ‘true’ cooling magnitude. Instead, the analysis goes on to use the total cooling magnitude from the ‘true’ tT histories (exactly 200˚C, from 240-40˚C) to determine the isotherm of interest (140˚C). However, if one looks at the actual inversion results, and pretends the true history is unknown, the total cooling magnitude is much less clear and does not seem to be well represented by a single value with no uncertainty. For example, at the end of cooling, the high relative probability region in both examples spans at least 20˚C. At the start of cooling, the slow-cooling example shows a similarly narrow range of high relative probabilities, but the fast-cooling example has no high relative probability region around 240˚C.
Of course, such features are not unique to these results; even the best-constrained inversion result with perfect kinetics and a model design tuned to successfully reproduce the ‘true’ history will not resolve that exact tT history with no uncertainty. So, several questions arise:
- How exactly does one figure out the total cooling magnitude from an inversion result as a part of the FDHM method? Visual estimation? Assessment of the relative probability space shown in Figure 2? Assessment of the ensemble of individual tT histories used to build that relative probability heat map?
- Does a difference of a few tens of degrees matter for the half-max isotherm, and under what circumstances? If the FDHM value isn’t that sensitive to the choice of half-maximum isotherm, then that is important to demonstrate, and to discuss why. In any case, the approach that the authors used to determine the cooling magnitude and isotherms from the inversion results for all the examples in this manuscript needs to be more completely explained so their approach can be understood and replicated by others.
- And, perhaps most critically for the future use of this tool, why is uncertainty on this cooling magnitude not accounted for in this method? The inversion result for the 10˚C/Myr cooling rate would be a natural vehicle for discussing this.
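The sensitivity question raised in the two points above could be probed numerically. As one hedged illustration (not part of the authors' method — a hypothetical check, using the same assumed array layout of ascending model times and monotonically cooling paths), one could recompute the median half-max crossing time over a band of candidate isotherms spanning the uncertainty in total cooling magnitude:

```python
import numpy as np

def isotherm_sensitivity(time_grid, temp_paths, iso_lo, iso_hi, n=11):
    """Hypothetical sensitivity check: how does the median crossing
    time of the ensemble change as the half-maximum isotherm is varied
    across a plausible range (iso_lo to iso_hi, in deg C)?

    Assumes time_grid ascends from the present (0 Ma) into the past and
    each path in temp_paths cools monotonically toward the present.
    """
    isotherms = np.linspace(iso_lo, iso_hi, n)
    medians = np.array([
        np.median([np.interp(iso, path, time_grid) for path in temp_paths])
        for iso in isotherms
    ])
    return isotherms, medians
```

A flat medians curve would support reporting a single FDHM time; a steep one would argue for propagating the cooling-magnitude uncertainty into the reported timing.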
- The importance of paying careful attention to the temperature sensitivities of the chronometers being used to produce the inversion result is nicely emphasized in Section 3.4, which uses one of the synthetic examples to explore what happens to a FDHM assessment when a cooling event starts at temperatures higher than the chronometer(s) are sensitive to. However, the same considerations should apply at the cold, low-temperature end of cooling events, too, and this is not discussed. A similar assessment of a lack of low-T sensitivity could, for example, be demonstrated by only modeling a high-T system in the same synthetic example. More pointedly, I think the Bergell pluton example highlights the potential perils of not exploring this more completely. 6.45 Ma AHe ages are input into the model, and the FDHM approach is used with a 30˚C isotherm to infer peak cooling at ca. 1 Ma. However, 30˚C is well below the temperature sensitivity of the AHe system, and it is not clear how or why the AHe system would be sensitive to the exact shape of the thermal history after the rock exits the PRZ at ~50-40˚C; to me this seems like a function of the QTQt model design, not the data. Although the authors simply say this is an ‘interesting’ result and note the temporal correspondence to previously hypothesized accelerated cooling at a similar time, it is very unclear whether this FDHM is a rigorous result that is capable of supporting such a hypothesis (or rejecting the other proposed times of peak cooling). Some additional discussion of this is warranted as a part of demonstrating this new method.
- Several other aspects of the FDHM method and workflow would benefit from being more completely explained. This information is very succinctly described in lines 129-135, tucked into section 3.1. A few suggestions:
- The text comparing FDHM and “full width at half-maximum” FWHM (lines 120-124) is confusing. How is the FWHM’s “width of a distribution” different from “the full distribution rather than merely a width” used for the FDHM?
- The FWHM inspiration for this approach is an intuitive metric because it is easy to visualize the connection between the shape of a (spectral) curve and the FWHM. I suggest further embracing this ‘spirit’ by using at least one figure to more clearly illustrate the connection between [1] what one sees on an inversion result in tT space (such as a cooling event of interest in an Expected Thermal History plot) and [2] the FDHM results (such as a PDP histogram), as well as [3] what the code (and user) does to get from [1] to [2], emphasizing the role of variables that may need to be “tuned” or chosen by the user vs. what is built into the program.
- Which QTQt outputs are used in this method? How are they loaded into the FDHM code? Certainly not all related details are necessary, but a quick description of the workflow and specific direction about where the reader can find additional documentation is needed.
- “Peak cooling” vs. “time of cooling” vs. “peak cooling time” are all used interchangeably in the title and abstract, but their meaning is vague without a clearer definition at the very beginning of what exactly this approach quantifies. A more explicit and comprehensive description is important because the FDHM approach is quantifying something different than features of tT inversion results that are typically quantified. I suggest re-writing the sentence that starts in line 5 “This study presents a straightforward methodology wherein we ascertain the time of peak cooling for the entire cooling signal within a thermal history model.” to include a clarification of what is meant by ‘peak cooling’ and ‘the entire cooling signal’. Additionally, the authors may consider incorporating this detail into the manuscript title.
- The paragraph starting at line 74 (section 2.2) claims that the commonly used “onset of cooling” for a thermal history is “an unreliable metric,” but provides no clear support (references, model results that demonstrate the specific “unreliability” described, etc) for this characterization. The last sentence in the caption of Figure 1 offers a similarly unsupported assessment, which also doesn’t seem to be related to or demonstrated by Figure 1. I suggest that the authors either clearly support claims about the limitations of the “onset of cooling” (line 80) and other common metrics, or simply remove these claims from the paper and focus instead on better articulating the strengths and limitations of the FDHM method, with specific demonstrations of how FDHM compares to other methods only as needed.
- Although I broadly agree with the sentiments expressed in the paragraph that starts at line 28, I suggest some moderate revisions that would better represent previous work and situate this new contribution in that context.
- I think it would be useful to briefly distinguish between thermal history models (like QTQt and HeFTy, when they are used for single-sample analysis) and thermokinematic models (such as Pecube). The latter have thoroughly discussed methods for quantifying “the timing of cooling within inversions”; see for example the method papers led by Braun cited in line 24.
- “a systematic approach to defining and quantifying the time of cooling within inversions has to our knowledge never been thoroughly discussed in the literature.” It depends what is meant by “thoroughly discussed,” and I suggest tweaking this language to better articulate the problem at hand. Many studies present and use (commonly one-off) quantitative approaches to quantify a particular feature of a tT inverse model result, for a particular scientific question. I absolutely agree that such previous approaches are almost always presented as a part of papers that are focused on geology, not on modeling methods, and are certainly ad hoc in the sense that they are commonly designed for a specific study, not more broad use. As a result there has been little discussion of the merits of various approaches for extracting information from inversion results, how they compare and contrast, etc. And as a community it can feel like we are constantly reinventing the wheel. So, this new manuscript can provide a substantial contribution, by extracting such an approach from a recent paper (McDannell and Keller, 2022) and demonstrating how it works more broadly in a stand-alone manuscript. I think this is fantastic, and we need more of this kind of paper to help document these tT inversion interpretation tools and make them more accessible in our modeling toolkits.
- “The ad hoc method most often used is a visual estimate—where cooling initiation is typically framed with respect to a specific temperature at the time of maximum reheating preceding cooling—in other words, the point where there is a change from heating to cooling.” I agree, many “ad hoc” methods do have a visual component—indeed, a qualitative visual assessment in tT space is a common starting point, and it is also common to illustrate tT features qualitatively in figures (in fact, above I am suggesting this paper do more of this illustration). However, such approaches can be (and in many cases are) accompanied by a quantitative component. The authors may find it useful to refer to Murray, K.E., Goddard, A.L.S., Abbey, A.L., and Wildman, M., 2022, Thermal history modeling techniques and interpretation strategies: Applications using HeFTy: Geosphere, v. 18, p. 1622–1642, doi:10.1130/ges02500.1. (for example, Fig. 8).
TABLES
Table 1: Please report the Ft-corrected He ages here. Corrected ages are the geologically meaningful He ages; they are directly connected to the timing of cooling in the ‘true’ forward paths, the inversion results, and the FDHM timing discussed in the text and presented in the figures. Currently, it is confusing to see a table of AHe ages <500 Ma when these rocks have been colder than 40˚C since 600 Ma. I also suggest reporting the grain sizes for the He systems, since they were not kept constant.
FIGURES
All Figures: please use letters to identify panels in figures that have more than one plot
Figure 2.
- This and other similar plots would be more readable if the legend were expanded to include: a label on the color bar indicating the heat map is for relative probability, and an entry indicating the dotted line is the ‘true’ forward model path
- Consider overlaying the 95% credible intervals on these expected thermal history plots, especially if these CI are used to define the total cooling magnitude or FDHM
RC2: 'Comment on gchron-2024-3', Pieter Vermeesch, 17 Jun 2024
The comment was uploaded in the form of a supplement: https://gchron.copernicus.org/preprints/gchron-2024-3/gchron-2024-3-RC2-supplement.pdf