Principles for automated and reproducible benchmarking

Koskela, T. (ORCID: https://orcid.org/0000-0002-5813-6539), Christidi, I. (ORCID: https://orcid.org/0000-0002-5045-7987), Giordano, M. (ORCID: https://orcid.org/0000-0002-7218-2873), Dubrovska, E. (ORCID: https://orcid.org/0009-0003-8066-5458), Quinn, J. (ORCID: https://orcid.org/0000-0002-0268-7032), Maynard, C. (ORCID: https://orcid.org/0000-0002-6253-9154), Case, D. (ORCID: https://orcid.org/0009-0001-3735-5687), Olgu, K. (ORCID: https://orcid.org/0000-0003-0351-2055) and Deakin, T. (ORCID: https://orcid.org/0000-0002-6439-4171) (2023) Principles for automated and reproducible benchmarking. In: SC-W 2023: Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, 12-17 Nov 2023, Denver, Colorado, pp. 609-618. doi: 10.1145/3624062.3624133 (ISBN: 9798400707858)

Abstract/Summary

The diversity in processor technology used by High Performance Computing (HPC) facilities is growing, and so applications must be written in such a way that they can attain high levels of performance across a range of different CPUs, GPUs, and other accelerators. Measuring application performance across this wide range of platforms becomes crucial, but there are significant challenges in doing so rigorously and in a time-efficient way, while ensuring results are scientifically meaningful, reproducible, and actionable. This paper presents a methodology for measuring and analysing the performance portability of a parallel application, and shares a software framework which combines and extends widely adopted technologies to provide a usable benchmarking tool. We demonstrate the flexibility and effectiveness of the methodology and benchmarking framework by showcasing a variety of benchmarking case studies which utilise national-scale supercomputing resources.
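
The abstract centres on measuring performance portability across platforms. As an illustrative aid only (the paper defines its own methodology, which is not reproduced here), the sketch below computes the widely used harmonic-mean performance portability metric of Pennycook et al., a natural quantity for a benchmarking framework of this kind to report. The function name and the sample efficiency figures are hypothetical, not taken from the paper.

    from typing import Mapping

    def performance_portability(efficiencies: Mapping[str, float]) -> float:
        """Harmonic-mean performance portability (Pennycook et al.).

        `efficiencies` maps each platform in the set H to the application's
        efficiency e_i on that platform (e.g. achieved/peak performance, or
        achieved/best-known performance), with 0 <= e_i <= 1.
        """
        values = list(efficiencies.values())
        # The metric is defined as 0 if the application fails to run
        # (efficiency 0) on any platform in the set.
        if not values or any(e == 0.0 for e in values):
            return 0.0
        return len(values) / sum(1.0 / e for e in values)

    # Hypothetical efficiencies for one application across three platforms.
    example = {"CPU-A": 0.80, "GPU-B": 0.65, "GPU-C": 0.70}
    print(f"PP = {performance_portability(example):.3f}")  # PP = 0.711

The harmonic mean penalises poor efficiency on any single platform, and the metric collapses to zero if the application cannot run everywhere in the platform set, which is why the zero check comes before the mean.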

Item Type: Conference or Workshop Item (Paper)
URI: https://reading-clone.eprints-hosting.org/id/eprint/114121
Identification Number/DOI: 10.1145/3624062.3624133
Refereed: Yes
Divisions: Science > School of Mathematical, Physical and Computational Sciences > Department of Computer Science; Science > School of Mathematical, Physical and Computational Sciences > Department of Meteorology
Publisher: ACM