Author(s):
Balcas, J. ; Bockelman, B. ; Hufnagel, D. ; Hurtado Anampa, K. ; Aftab Khan, F. ; Larson, K. ; Letts, J. ; Marra Da Silva, J. [UNESP] ; Mascheroni, M. ; Mason, D. ; Perez-Calero Yzquierdo, A. ; Tiradani, A.
Date: 2022
Persistent ID: http://hdl.handle.net/11449/220987
Origin: Oasisbr
Description
Made available in DSpace on 2022-04-28 (GMT). Previous issue date: 2017-11-23
U.S. Department of Energy
National Science Foundation
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of running stably at ever-higher scales while introducing new modes of operation such as multi-core pilots, together with the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of LHC Run II, and how they were overcome.
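As an illustration of the multi-core pilot mode mentioned above: in HTCondor-based glidein pools, a multi-core pilot typically advertises a single partitionable slot that is carved into dynamic slots on demand, so one pilot can serve a mix of single-core and multi-core payload jobs. The following is a minimal configuration sketch using standard HTCondor startd knobs; the specific resource split and the eight-core pilot size are illustrative assumptions, not the actual CMS Global Pool configuration.

```
# Sketch: a multi-core pilot advertising one partitionable slot.
# The pilot's total allocation (here assumed to be 8 cores) is
# exposed as a single slot that HTCondor subdivides dynamically.
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1

# Give the slot all of the pilot's CPUs, memory and disk.
SLOT_TYPE_1 = cpus=100%, mem=100%, disk=100%

# Mark the slot as partitionable: the negotiator can match
# several payload jobs into it, each claiming a dynamic sub-slot
# sized to the job's RequestCpus / RequestMemory.
SLOT_TYPE_1_PARTITIONABLE = TRUE
```

With this layout, a pilot that obtains eight cores from a site batch system can, for example, run one four-core reprocessing job alongside four single-core analysis jobs, which is one reason multi-core pilots complicate scheduling and scalability compared to fixed single-core slots.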
California Institute of Technology
University of Nebraska
Fermi National Accelerator Laboratory
University of Notre Dame
National Centre for Physics, Quaid-i-Azam University
University of California San Diego
Universidade Estadual Paulista
Port d'Informació Científica
Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT)