
Roar System Specifications

Writing a proposal? You can find grant-ready text about our computing systems, data storage, and facilities here.

Our Computing System Location

The Roar supercomputer team works out of the Computer Building on Penn State’s University Park Campus. The Roar cyberinfrastructure, which supports Penn State research computing, is housed in a state-of-the-art data center, also at University Park. This data center provides 2.4 MW of redundant power and 12,000 square feet of environmentally controlled space for our hardware. Approximately 50 percent of the facility’s power and equipment resources are dedicated to supporting the Roar system infrastructure.

About the Roar Cyberinfrastructure

Roar operates more than 30,000 Basic, Standard, and High Memory cores to support Penn State research. Basic and Standard nodes use dual 10- or 12-core Xeon E5-2680 processors, while High Memory nodes use quad 10-core Xeon E7-4830 processors. The Basic, Standard, and High Memory configurations provide 128 GB, 256 GB, and 1 TB of memory per node, respectively. You can view a complete list of Roar computing options on the Roar rate sheet.
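
To make these configurations concrete for proposal planning, the following Python sketch picks the smallest memory configuration that fits a job’s per-node memory need. The core counts and memory sizes are taken from the specifications above; the helper itself is hypothetical and not part of any Roar tooling.

    # Illustrative helper: pick the smallest Roar memory configuration
    # that fits a job's per-node memory requirement. The figures come
    # from the specifications above; the function is hypothetical.
    ROAR_CONFIGS = [
        # (name, memory per node in GB)
        ("Basic", 128),         # dual 10- or 12-core Xeon E5-2680
        ("Standard", 256),      # dual 10- or 12-core Xeon E5-2680
        ("High Memory", 1024),  # quad 10-core Xeon E7-4830
    ]

    def smallest_fit(mem_gb_needed: float) -> str:
        """Return the smallest configuration with enough memory per node."""
        for name, mem_gb in ROAR_CONFIGS:
            if mem_gb >= mem_gb_needed:
                return name
        raise ValueError("Request exceeds the largest (1 TB) configuration")

    print(smallest_fit(200))  # -> Standard
    print(smallest_fit(512))  # -> High Memory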

Operating System

The computing environment runs Red Hat Enterprise Linux 7.
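
As a quick sanity check from a login session, a minimal Python sketch can confirm the release by reading /etc/redhat-release, the standard release file that RHEL provides (this is an illustrative check only, not an official Roar utility):

    # Minimal sketch: confirm the environment is Red Hat Enterprise
    # Linux 7 by reading the standard RHEL release file.
    from pathlib import Path

    release = Path("/etc/redhat-release").read_text().strip()
    print(release)  # e.g. "Red Hat Enterprise Linux Server release 7.x"
    assert "release 7" in release, "Expected a RHEL 7 environment"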

Storage

Roar provides a High Performance Storage Archive (ACI-HPSA) capability that supports users’ processing and research needs for data storage. The storage infrastructure includes the General Parallel File System (GPFS), Clustered Network File System (CNFS), and Common Internet File System (CIFS), interconnected across high-speed Ethernet, InfiniBand, and Fibre Channel network fabrics. The storage architecture contains:

  1. Active storage pools that provide access to Home, Work, Group, and Scratch directories;
  2. Near-line storage pools that provide long-term and archive storage for files that are not needed in real time to support ongoing research efforts;
  3. Data management nodes that enable users to move data between the storage pools. Users can access data in both the Active and Near-line storage pools without administrative support (see the sketch after this list).
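
As an illustration of moving data between pools, the Python sketch below stages a finished result set from an active Scratch directory into a near-line archive path. Both paths are hypothetical placeholders, not actual Roar mount points; consult your account’s directories for the real locations.

    # Hypothetical sketch: stage results from active (Scratch) storage
    # into a near-line archive directory. Paths are placeholders only.
    import shutil
    from pathlib import Path

    scratch = Path("/scratch/your_userid/run_042")  # hypothetical active-pool path
    archive = Path("/archive/your_group/run_042")   # hypothetical near-line path

    archive.parent.mkdir(parents=True, exist_ok=True)
    shutil.copytree(scratch, archive)  # copy the whole results directory
    print(f"Archived {scratch} -> {archive}")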

Network

Our computing and storage capabilities are complemented by a high-speed network that utilizes high-performance Ethernet fabric, InfiniBand, and Fibre Channel network protocols.

The high-performance Ethernet network is built on Brocade VCS Fabric Technology and supports connectivity for all processing nodes that are within the Roar system boundary. A VCS “fabric” provides a flexible interconnecting network between individual switches, creating a virtual cluster of physical switches. The current Roar VCS network fabric includes:

  • SonicWall SuperMassive 9400 firewall appliance that provides 20 Gbps, low-latency IPsec intrusion prevention
  • Brocade VDX 8770-8 enterprise-level switches configured with 10/40/100 Gb network link capacity
  • Brocade VDX 6740 switches that provide 10 Gb link capacity
  • Dell N2024 switches for host iDRAC (integrated Dell Remote Access Controller) remote management over 1 Gb links

Roar uses Mellanox FDR (Fourteen Data Rate) InfiniBand interconnects for high-performance compute and storage network connectivity. FDR InfiniBand provides high-bandwidth, low-latency connections for all Standard and High Memory processor nodes and their associated storage systems.
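
To put “Fourteen Data Rate” in perspective: each FDR lane signals at roughly 14 Gbps, and a typical 4x link aggregates four lanes with 64b/66b encoding. The short arithmetic sketch below uses these standard FDR parameters; they are properties of the InfiniBand specification, not Roar-specific measurements.

    # Back-of-the-envelope FDR InfiniBand bandwidth. The 14.0625 Gbps
    # lane rate and 64b/66b encoding are standard FDR parameters;
    # real-world throughput on any fabric will be somewhat lower.
    LANE_RATE_GBPS = 14.0625  # FDR signaling rate per lane
    LANES = 4                 # typical 4x link width
    ENCODING = 64 / 66        # 64b/66b line-code efficiency

    raw = LANE_RATE_GBPS * LANES  # ~56.25 Gbps on the wire
    effective = raw * ENCODING    # ~54.55 Gbps of usable data
    print(f"raw: {raw:.2f} Gbps, effective: {effective:.2f} Gbps")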