The CMS experiment collects and analyzes large amounts of data coming from high-energy particle collisions produced by the Large Hadron Collider (LHC) at CERN. This involves a huge amount of real and simulated data processing that needs to be handled on batch-oriented platforms. The CMS Global Pool of c...
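As an illustration of how such a pool is typically inspected, the sketch below uses the HTCondor Python bindings to tally the CPU cores advertised by worker slots, grouped by the glideinWMS site attribute. The collector address is a placeholder, not taken from the abstract.

    import htcondor

    # Placeholder collector address; the real CMS Global Pool collector
    # is not named in the abstract.
    coll = htcondor.Collector("collector.example.cern.ch")

    # Query the startd (worker-slot) ads and tally CPUs per site.
    ads = coll.query(
        htcondor.AdTypes.Startd,
        projection=["GLIDEIN_CMSSite", "Cpus", "State"],
    )

    cpus_per_site = {}
    for ad in ads:
        site = ad.get("GLIDEIN_CMSSite", "unknown")
        cpus_per_site[site] = cpus_per_site.get(site, 0) + ad.get("Cpus", 0)

    for site, cpus in sorted(cpus_per_site.items()):
        print(f"{site}: {cpus} cores")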
AsyncStageOut (ASO) is the component of the CMS distributed data analysis system (CRAB) that manages users' transfers in a centrally controlled way using the File Transfer Service (FTS3) at CERN. It addresses a major weakness of the previous, decentralized model, namely that the transfer of the user's out...
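FTS3 ships Python "easy" bindings that convey the idea of a centrally managed transfer of the kind ASO drives: build a transfer, wrap it in a job, submit it to the service, and poll its state. A minimal sketch follows; the endpoint and file URLs are illustrative placeholders.

    import fts3.rest.client.easy as fts3

    # Placeholder endpoint and storage URLs, for illustration only.
    context = fts3.Context("https://fts3.cern.ch:8446")

    transfer = fts3.new_transfer(
        "srm://source.example/store/user/output.root",
        "srm://dest.example/store/user/output.root",
    )
    job = fts3.new_job([transfer], retry=3)

    # Submit the transfer job and check its state.
    job_id = fts3.submit(context, job)
    status = fts3.get_job_status(context, job_id)
    print(job_id, status["job_state"])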
The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to...
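A minimal sketch of how a job can be steered toward one class of resources in such a pool, assuming the glideinWMS convention of matching on an advertised GLIDEIN_CMSSite attribute; the DESIRED_Sites attribute and site names here are illustrative, not taken from the abstract.

    import htcondor

    schedd = htcondor.Schedd()

    # The job carries a list of acceptable sites and only matches slots
    # whose glidein advertises one of them (site names are placeholders).
    sub = htcondor.Submit({
        "executable": "calib_task.sh",
        "log": "calib.log",
        "+DESIRED_Sites": '"T2_CH_CERN_HLT,T3_US_Example"',
        "requirements": "stringListMember(GLIDEIN_CMSSite, MY.DESIRED_Sites)",
    })

    with schedd.transaction() as txn:
        cluster_id = sub.queue(txn)
    print("Submitted cluster", cluster_id)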
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 an...
The CMS experiment at the LHC relies on HTCondor and glideinWMS as its primary batch and pilot-based Grid provisioning systems, respectively. Given the scale of the global queue in CMS, the operators found it increasingly difficult to monitor the pool, identify problems, and fix them. The operators had to ...
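One concrete form such a check can take: query the schedd for held jobs and aggregate them by hold-reason code so that systematic failures stand out at a glance. This sketch uses the HTCondor Python bindings; the specific check is an assumption for illustration, not the monitoring system the paper describes.

    from collections import Counter

    import htcondor

    schedd = htcondor.Schedd()

    # JobStatus == 5 selects held jobs; keep only the hold-reason fields.
    held = schedd.query(
        constraint="JobStatus == 5",
        projection=["HoldReasonCode", "HoldReason"],
    )

    # Count held jobs per hold-reason code, most frequent first.
    by_code = Counter(ad.get("HoldReasonCode", -1) for ad in held)
    for code, count in by_code.most_common():
        print(f"HoldReasonCode {code}: {count} jobs")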
CRAB is a tool used for the distributed analysis of CMS data. Users can submit sets of jobs with similar requirements (tasks) via a single request. CRAB uses a client-server architecture, in which a lightweight client, a server, and ancillary services work together and are maintained by CMS operators at CERN....
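For context, a CRAB task is described to the lightweight client through a small Python configuration file. The sketch below follows the CRAB3 crabConfig.py layout; the dataset, parameter-set, and site names are placeholders.

    from WMCore.Configuration import Configuration

    config = Configuration()

    # Task bookkeeping: name and local working directory.
    config.section_('General')
    config.General.requestName = 'my_analysis_task'
    config.General.workArea = 'crab_projects'

    # What to run: a CMSSW analysis driven by a parameter-set file.
    config.section_('JobType')
    config.JobType.pluginName = 'Analysis'
    config.JobType.psetName = 'pset.py'

    # What to run on, and how to split it into jobs.
    config.section_('Data')
    config.Data.inputDataset = '/SomeDataset/Run2015-v1/AOD'
    config.Data.splitting = 'FileBased'
    config.Data.unitsPerJob = 10

    # Where to stage the output.
    config.section_('Site')
    config.Site.storageSite = 'T2_XX_Example'

The task would then be handed to the server with "crab submit -c crabConfig.py", after which the server-side services take over job creation and tracking.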
The CMS Remote Analysis Builder (CRAB) is a distributed workflow management tool that facilitates analysis tasks by isolating users from the technical details of the Grid infrastructure. Throughout LHC Run 1, CRAB was successfully employed by an average of 350 distinct users each week executing ab...