Karlsruhe School of Elementary Particle and Astroparticle Physics: Science and Technology (KSETA)

Robin Hofsaess

Information

Institutes: ETP/SCC
Room: 8-21 (CS,30.23)/Flexoffice (CN,449)
Email: Robin.Hofsaess#kit.edu

PhD Thesis (preliminary): Jet Energy Calibration for CMS and Workflow Optimization for HEP Jobs on National HPC Centers

Referee: Prof. Dr. Günter Quast
Second Referee: Prof. Dr. Achim Streit
Supervisor: Dr. Manuel Giffels
Jet Energy Calibration for CMS: L3 Residual Z+Jet
The Level 3 (L3) residual jet energy correction is one of the last steps in the full calibration chain. It is a data-driven adjustment applied to jet energies that removes the residual discrepancies between simulation and data which remain after the standard corrections. This correction improves the accuracy of jet measurements (at the percent level), reduces systematic uncertainties, and ensures uniformity across the different regions of the detector. Applying the L3 residual correction is essential for precision measurements and for maintaining consistency between data and simulation in CMS analyses. My contribution was deriving the correction from Z(->ee and ->mumu)+jet events for the full Run II dataset (2016-2018). (CMS internal)
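To sketch the idea behind the method (the standard pT-balance approach; the actual CMS prescription also uses the MPF technique and is more involved): the jet response in Z+jet events is measured as the transverse-momentum balance between the jet and the precisely measured Z boson, and the residual correction applied to data is the ratio of the average response in simulation to that in data:

    R_{\mathrm{bal}} = \frac{p_{T}^{\mathrm{jet}}}{p_{T}^{Z}},
    \qquad
    C_{\mathrm{L3res}} = \frac{\langle R_{\mathrm{bal}} \rangle_{\mathrm{MC}}}{\langle R_{\mathrm{bal}} \rangle_{\mathrm{data}}}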
Workflow Optimization for HEP Jobs on National HPC Centers
In line with the future German HEP computing strategy (LINK), processes and mechanisms need to be adapted to the differences between HPC centers and "classic" dedicated WLCG resources in order to ensure a reliable and efficient operation of such centers as future Grid sites. In this project, we are working on optimizations and improvements for the (currently opportunistic) integration of the HoreKa HPC cluster into the WLCG Tier-1 center GridKa, in particular an XRootD-based mitigation of data-access bottlenecks. With our deployed proof of concept, we show that an efficient integration of HPC resources is possible, with further benefits for reliable site operation (e.g. extended monitoring and debugging capabilities).
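As a minimal illustration of the data-access path (a sketch only: the endpoint and file name are hypothetical, and the actual setup involves a dedicated XCache deployment on the HPC side), jobs read their input through a caching proxy instead of directly from remote storage. With the official XRootD Python bindings this looks like:

    # Minimal sketch: read a file through an XCache endpoint with the official
    # XRootD Python bindings (pip install xrootd). Host and path are hypothetical;
    # on a cache miss, the cache fetches the file from the origin storage and
    # serves (and keeps) it locally, absorbing repeated remote reads.
    from XRootD import client
    from XRootD.client.flags import OpenFlags

    URL = "root://xcache.example.kit.edu:1094//store/data/Run2018A/example.root"

    with client.File() as f:
        status, _ = f.open(URL, OpenFlags.READ)
        if not status.ok:
            raise RuntimeError(status.message)
        status, data = f.read(offset=0, size=1024)  # read the first kilobyte
        print("read {} bytes via the cache".format(len(data)))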

Other Projects

Contributions to the XRootD Community
Through my computing project, I am an XRootD power user and in close contact with the development team. Our rather complex prerequisites on HPC required many adaptations (e.g. running everything in rootless containers). I regularly feed the experience gained in our project back to the development team (general feedback, bug reports, feature requests). Additionally, I develop my own plugins for our HPC site, mainly related to selective caching (see the sketch below), and experiment with XRootD in general (e.g. xrd-interactive).
My newest project is an XRootD knowledge base for sharing experiences within the XRootD community; it will go live soon.
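To give a flavor of what "selective caching" means: each incoming file open triggers a cache/no-cache decision. In XRootD this is implemented as a C++ decision plugin loaded via the pfc.decisionlib directive; the following Python sketch only illustrates the kind of logic involved (the path patterns are hypothetical examples):

    # Conceptual sketch of a selective-caching decision. The real XRootD
    # proxy-file-cache decision plugin is written in C++ and loaded via
    # pfc.decisionlib; the path patterns here are hypothetical.
    def should_cache(lfn: str) -> bool:
        # Skip one-off user files; cache everything else, e.g. widely
        # reused official datasets under /store/mc/ and /store/data/.
        return not lfn.startswith("/store/user/")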
Developments and Enhancements for the CMS Monitoring
For my PhD computing project, it is important to monitor what is happening on the sites. Especially for CPU-efficiency optimizations, monitoring is crucial. However, the CMS monitoring lacks some features, such as access to the logs of successful jobs, which is a pity when trying to identify inefficiencies of the systems or of running workflows. Therefore, I am currently working on tools to make the most out of the available CMS monitoring (collecting the logs of successful jobs, matching different monitoring sources).
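For context, the CPU efficiency referred to above can be computed from standard HTCondor job-ad attributes; a minimal sketch (the example numbers are made up):

    # Minimal sketch: CPU efficiency of a finished batch job, computed from
    # standard HTCondor job-ad attributes.
    def cpu_efficiency(ad: dict) -> float:
        cpu_time = ad["RemoteUserCpu"] + ad["RemoteSysCpu"]  # CPU seconds
        wall_time = ad["RemoteWallClockTime"]                # wall-clock seconds
        cores = ad.get("RequestCpus", 1)
        return cpu_time / (wall_time * cores) if wall_time else 0.0

    # Made-up example: ~6.5 CPU-hours over one hour on 8 cores -> ~81 %
    job = {"RemoteUserCpu": 22000, "RemoteSysCpu": 1400,
           "RemoteWallClockTime": 3600, "RequestCpus": 8}
    print("CPU efficiency: {:.1%}".format(cpu_efficiency(job)))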
Support for CMS CompOps from the Site POV
While optimizing our operations at the HPC center, I often take a close look at what is going on in the Grid. On HPC, a close look at the CMS job logs is often the only way to identify problems. As a result, I have become very experienced in spotting problems with workflows or at other sites. In combination with our powerful meta-monitoring tool HappyFace4, I was able to identify several problems in the past and to support CMS CompOps in fixing them.
Analysis Facility Concept and Prototype for the DARWIN Collaboration
Our computing group at ETP started supporting the DARWIN community some time ago with a concept and a prototype of an analysis facility. I participated in the concept design and ordered the hardware.
Technical Support for the DELight Collaboration
The DELight experiment is a rather new low-mass dark matter search experiment. I provide technical assistance to the young working group and host and maintain their services (LDAP user management, BookStack wiki, full-stack web).
Administration of the IT Infrastructure and User Support
For nearly three years, I was one of the main (Linux) administrators of our institute's infrastructure. This included the hardware (buying and maintaining the servers), the provisioning of services (LDAP, GitLab/Mattermost, BookStack, Ceph, web servers, ...), IT security, and more.

Schools, Conferences, and Talks

2021

Fidium Kickoff Meeting 2021
DPG 2021

2022

JetMET Workshop Florence 2022 (L3Res Z+Jet: Framework Synchronization)
DPG 2022 (Jet Energy Calibration for Ultra Legacy Data with Z+Jet Events at CMS)
FSP CMS 2022 in Aachen

2023

DPG 2023 (Caching in Distributed Computing Infrastructures)
XRootD&FTS Workshop at Ljubljana 2023 (Data-Aware Scheduling for Opportunistic Resources (with XRootD and HTCondor))
FSP CMS 2023 at Hamburg (GridKa Report and R&D Overview at KIT)
Thematic CERN School of Computing on Security (sCSC) 2023 at Split
CMS Week at CERN

2024

DPG 2024 (Workflow Optimization for HEP Jobs on Opportunistic Resources with XRootD)
ACAT 2024 at Stony Brook, Long Island (Paving the Way for HPC: An XRootD-Based Approach for Efficiency and Workflow Optimizations for HEP Jobs on HPC Centers)
Annual Meeting of the BMBF-funded Research Compound "Föderiertes Computing für die ATLAS- und CMS-Experimente am Large Hadron Collider in Run-3" (GridKa Report)
Lecturer at the Inverted CERN School of Computing (iCSC) 2024 (Unraveling Grid Computing: From Basics to WLCG)
WLCG Workshop 2024 at DESY
XRootD&FTS Workshop at Abingdon 2024 (Experience with XCache on HPC in Germany (CMS))
FSP CMS 2024 at Aachen (GridKa Report)
FIDIUM Collaboration Meeting 2024 and FC-AC Kickoff Meeting at Aachen (Optimizations for HEP Jobs on HPC with XRootD Caching)
CHEP 2024 at Kraków (Author: First Deployment of XCache for Workflow and Efficiency Optimizations on Opportunistic HPC Resources in Germany; Co-Author: A Lightweight Analysis & Grid Facility for the DARWIN Experiment)
CMS OnC Week 2024 at CERN (Making the most out of CMS Monitoring)

Publications and Contributions

TODO

Projects, Tools, and Helpful Resources

TODO: ldap, ssh-agent, phd-hacks, xrd-interactive