Articles | Volume 8, issue 1
https://doi.org/10.5194/gchron-8-191-2026
© Author(s) 2026. This work is distributed under the Creative Commons Attribution 4.0 License.
The conflict between sampling resolution and stratigraphic constraints from a Bayesian perspective: OSL and radiocarbon case studies
Download
- Final revised paper (published on 30 Mar 2026)
- Supplement to the final revised paper
- Preprint (discussion started on 12 Mar 2025)
- Supplement to the preprint
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2025-890', Anonymous Referee #1, 20 May 2025
- AC1: 'Reply on RC1', Guillaume Guérin, 19 Sep 2025
- RC2: 'Comment on egusphere-2025-890', Anonymous Referee #2, 28 Jun 2025
- AC2: 'Reply on RC2', Guillaume Guérin, 19 Sep 2025
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
ED: Publish subject to revisions (further review by editor and referees) (29 Sep 2025) by Michael Dietze
AR by Guillaume Guérin on behalf of the Authors (06 Nov 2025)
Author's response
Author's tracked changes
Manuscript
ED: Publish subject to revisions (further review by editor and referees) (13 Nov 2025) by Michael Dietze
ED: Referee Nomination & Report Request started (08 Dec 2025) by Michael Dietze
RR by Anonymous Referee #1 (13 Jan 2026)
RR by Anonymous Referee #2 (21 Jan 2026)
ED: Publish subject to minor revisions (further review by editor) (23 Jan 2026) by Michael Dietze
AR by Guillaume Guérin on behalf of the Authors (30 Jan 2026)
Author's response
Author's tracked changes
Manuscript
ED: Publish as is (09 Feb 2026) by Michael Dietze
ED: Publish as is (05 Mar 2026) by Georgina King (Editor)
AR by Guillaume Guérin on behalf of the Authors (05 Mar 2026)
Manuscript
Dear Editor(s) and Authors,
I have read the manuscript ‘The conflict between sampling resolution and stratigraphic constraints from a Bayesian perspective: OSL and radiocarbon case studies’ in detail. It is a well-structured and well-written manuscript, and its scope fits Geochronology. You address a long-standing issue in age-depth modeling, and your contribution is relevant. While your conclusions are not entirely new, they are based on real and well-selected datasets, and your manuscript is a welcome addition to the scientific literature. I clearly support publication in Geochronology after revisions.
The authors use two case studies to point out challenges with Bayesian age-depth modeling – and rightfully demonstrate that the tested models are not without bias and artifacts. I particularly like – and agree with – the repeated call to apply common sense and to look at the data with the experienced eye of a geochronologist even when applying models, e.g. (quotes from your submission): ‘As frustrating as it may be, in our view none of the tested models can tell us anything better than the actual data themselves’, and ‘when testing any chronological model, it is of utmost importance to compare the model outcome with the input data.’ I fully agree and find this an important lesson: look at the data, know the possible issues – and then consider whether a model may help and/or is of any help.
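To make this point concrete, here is a minimal numerical sketch of my own (not taken from the manuscript; plain NumPy with invented example values) of the kind of artifact at stake: when two measured ages overlap but conflict with depth order, a stratigraphic ordering prior pulls both posteriors away from the raw measurements – exactly the situation where one must compare the model outcome with the input data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two stacked samples, measured ages in ka (illustrative values only).
# The deeper sample appears *younger* than the one above it,
# so the data conflict with stratigraphic order.
mean_upper, sd_upper = 10.0, 0.5   # shallower sample
mean_lower, sd_lower = 9.5, 0.5    # deeper sample (must be older)

# With a flat prior, the unconstrained posteriors are just the likelihoods.
upper = rng.normal(mean_upper, sd_upper, 200_000)
lower = rng.normal(mean_lower, sd_lower, 200_000)

# Impose the stratigraphic prior by rejection sampling: keep only joint
# draws in which the deeper sample is older than the shallower one.
keep = lower > upper
upper_c, lower_c = upper[keep], lower[keep]

print(f"unconstrained means: upper {upper.mean():.2f} ka, lower {lower.mean():.2f} ka")
print(f"constrained means:   upper {upper_c.mean():.2f} ka, lower {lower_c.mean():.2f} ka")
```

Rejection sampling is the crudest way to encode the ordering constraint, but it shows the mechanism: the constrained means are shifted relative to the measurements (the shallow sample is pulled younger, the deep one older) even though nothing about the measurements themselves changed.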
The final statement, ‘Our study shows that this goal [make use of prior observations to refine the precision, accuracy and robustness] is difficult to reach and that using models to correct measurements appears to be dangerous’, is in my opinion too general. Whether this holds really depends on the case and the individual data structure, and such a statement should at least be softened when it is based on only two datasets (why is no reference made to the often-used models BChron and Bacon?) – and on datasets which are indeed challenging.
With this I come to my main criticism of this manuscript: the arbitrary selection of models, seemingly influenced by the authors’ previous work. When speaking of luminescence modeling, I ask you to refer to ADMin (https://www.sciencedirect.com/science/article/pii/S187110141730047X) – probably the model least affected by the spread effect (?), but at the same time slow/unsuitable for large (and these?) datasets. Generally, I disagree with the BChron and Bacon models not even being mentioned, as they are very often used.
Further comments:
References to Ramsey should, in my opinion, be to Bronk Ramsey.
Line 84: ‘event model of Lanos and Philippe (2018)’ – could you please introduce this model? It is less well known than the one by Bronk Ramsey, which you introduce in detail.
Line 115: please explain the ‘Theta matrix’.
160ff: Can BayLum model 14C ages? That would differ from luminescence modeling, because here ‘only’ the 14C age is used?
In Fig. 3 (and others), please include the original ages.
Generally, I find your figures would benefit from clearer explanations in the captions and from systematically placing units on the axes – ideally all would use the same age scale (ka or BC; please don’t mix these).
Abscissa of Fig. 3: a space before the bracket is missing.
Figure 5 and its explanation: the ordinate is unclear. Why was this only done for BayLum?
Line 278: please explain the phase structure here
284: ‘between samples OxA-9893 and OxA-23251’ – please mark these in the figure so that they can easily be found.
286f: I disagree with your statement ‘These two bottom-most samples are PL-980252A, whose age lies outside the calibrated age of all samples above’ – the densities do overlap.
Chapter 3.2.2 and Fig. 8 are limited to the lower 17 samples – was the model run for all samples or only for these?
Line ~322: please highlight where the spread effect is pronounced and why.
Fig. 10: units are missing on both axes – please also include the original dating results, either as distributions or as mean ages.
352ff: given that ChronoModel and OxCal partly do not overlap, praising the larger uncertainty alone seems unjustified.
In chapter 4.1 I find a prominent feature missing: the duration of the sequence when using OxCal is much shorter than when using BayLum or ChronoModel. This is worrying in my opinion, and the OxCal results seem much more similar to the original ages than the BayLum and ChronoModel results. Especially the outer ends of the model seem unrealistically long in BayLum and ChronoModel. The spread effect of the whole sequence therefore seems best captured by OxCal.
In line 415 I suggest referring to:
- https://www.sciencedirect.com/science/article/pii/S0277379103003160
- https://journals.sagepub.com/doi/full/10.1177/0959683616675939
It is really good to see the computer code in the Supplements. Yet I wonder why this is only the case for one of the two examples. Further, the R code would benefit from better documentation; please document it so that colleagues unfamiliar with R can also follow what is done and why.
Further, I would like you to provide the results (the data plotted in the figures) in the Supplements.
I am aware of the issues with suggesting literature during the review process, and I ask the editors to take a critical look at these suggestions – yet I ask you to consider including the information contained in the suggested literature in your manuscript.
Kind Regards,