Resources for Researchers

Ready-to-go Text for Your Proposal

You can use these text blocks as is, or select only the details relevant to your proposal. If you need more information, please contact us at icds@psu.edu.

Data Management Plans: Storage and Preservation

Over the course of the research project, research data will be hosted by the Pennsylvania State University’s Institute for Computational and Data Sciences (ICDS) through its Roar supercomputer. Roar provides both active storage (for data that is being worked on, requiring frequent access) and near-line storage (for back-up purposes and data that needs only infrequent access). Active storage is achieved through DDN 12KX40 and GS7K flash storage array systems, while near-line storage utilizes Oracle’s FS1 flash storage appliance and an SL8500 Tape Library. Active storage is backed up to the SL8500 daily.

We will also use Roar to archive the research data for at least three years after the end of the award or after public release of the data, whichever comes later. ICDS provides long-term archival services using the Oracle SL8500. Multiple copies of archived data are created through Oracle Hierarchical Storage Manager (Oracle HSM) to safeguard against data corruption. Oracle HSM generates and maintains metadata on archived files so that the data can be readily accessed. This technology affords us easy retrieval of data, even years after it has been written.

Data Security

ICDS implements various security measures to ensure that data stored on the Roar system remains safe. Roar requires a strong password and two-factor authentication for access, and all access can be audited by ICDS staff. To mitigate the potential for malicious software and security attacks, Roar employs automated weekly scans for identifying and patching software vulnerabilities. Roar provides the capability to encrypt data in-flight (when moving between points) and at rest (while written in storage). Roar login/endpoint nodes are protected by software-based firewalls that only permit Secure Shell (SSH) traffic. By default, ICDS enforces “Least Privilege” access concepts across the system, providing users with only the minimum set of permissions and accesses required to complete their function.

Roar storage is physically protected in Penn State’s Tower Road Data Center (TRDC). Physical access to the systems is limited to systems administration personnel with exceptions controlled by the TRDC’s secure operations center. TRDC requires swipe-card access and is monitored at all times.

Data stored on Roar’s active storage systems is backed up to tape storage for a period of 90 days. Backup data is automatically purged from tape once the 90-day period has elapsed.
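As an illustration of the retention policy above, the date on which a backup becomes eligible for purging can be computed from its creation date (a sketch only; the actual purge mechanism is managed by ICDS and the dates here are hypothetical):

```python
from datetime import date, timedelta

RETENTION_DAYS = 90  # tape retention period for active-storage backups

def purge_date(backup_date: date) -> date:
    """Return the first date on which a backup is eligible for purging."""
    return backup_date + timedelta(days=RETENTION_DAYS)

# Example: a backup written on January 1, 2024 is retained through March 31, 2024.
print(purge_date(date(2024, 1, 1)))  # 2024-03-31
```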

Data Center Facilities

Roar equipment is located in a newly constructed data center facility at Penn State’s University Park campus. This facility operates in compliance with all Penn State IT policies. The facility provides 2.15 MW of power capacity and contains 12,000 square feet of floor space for computing equipment, termed the data center “white space.” The building is powered efficiently and is undergoing LEED certification. The facility is designed to operate with an annualized average Power Usage Effectiveness (PUE) of 1.21.
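PUE is the ratio of total facility power to the power delivered to IT equipment, so the design PUE of 1.21 implies how the 2.15 MW capacity would split between IT load and facility overhead at full draw (an illustrative back-of-the-envelope calculation, not an ICDS figure; actual loads vary with utilization):

```python
# Power Usage Effectiveness (PUE) = total facility power / IT equipment power.
DESIGN_PUE = 1.21
FACILITY_CAPACITY_MW = 2.15

# At full facility draw, the share of power reaching IT equipment:
it_power_mw = FACILITY_CAPACITY_MW / DESIGN_PUE
overhead_mw = FACILITY_CAPACITY_MW - it_power_mw  # cooling, distribution, etc.

print(f"IT load: {it_power_mw:.2f} MW, overhead: {overhead_mw:.2f} MW")
```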

Power (2N configuration):

  • 2 independent utility feeds.
  • Dual 2MW diesel generators, with sufficient fuel capacity to support generators for 48 hours (minimum).
  • All power is backed up by static uninterruptible power supplies (UPS).
  • Power provided in a Tier 1 configuration (single non-redundant distribution path serving the IT equipment; non-redundant capacity components with expected availability of 99.671%) and a Tier 3 configuration (multiple independent distribution paths serving the IT equipment; concurrently maintainable site infrastructure with expected availability of 99.982%).
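The quoted tier availabilities can be translated into expected annual downtime using the standard conversion over an 8,760-hour year (an illustrative calculation based on the Uptime Institute's published tier figures, not an ICDS service commitment):

```python
HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def annual_downtime_hours(availability: float) -> float:
    """Expected hours of downtime per year for a given availability fraction."""
    return (1 - availability) * HOURS_PER_YEAR

# Tier 1 (99.671%) permits roughly 28.8 hours of downtime per year;
# Tier 3 (99.982%) permits roughly 1.6 hours per year.
print(f"Tier 1: {annual_downtime_hours(0.99671):.1f} h/year")
print(f"Tier 3: {annual_downtime_hours(0.99982):.1f} h/year")
```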

Cooling (N+1 configuration):

  • Primary cooling is provided via indirect evaporative means with cold air supplied to the whitespace via a raised floor plenum.
  • All racks exhaust from the rear into a hot-aisle containment system.
  • A select number of racks are fitted with rear-door heat exchangers connected to chilled-water piping, to accommodate the highest rack power densities as needed.

Building control and monitoring:

  • Automation system for mechanical system controls, rack-level monitoring, an electrical power monitoring system, and a data center infrastructure management system.
  • Fire protection includes fire alarms and a Very Early Warning Smoke Detection Apparatus (VESDA).
  • All environmental conditions, systems, and networks are monitored from an operations center that is staffed on a 24x7x365 basis.

Security:

  • All people within the data center must display an authorized Penn State ID at all times, and visitors must check in at a security station to receive a visitor/escort badge.
  • All doors throughout the spaces are electronically locked, with additional authorization required for access to the whitespace and mechanical areas.
  • Video surveillance cameras located inside and outside the facility capture and monitor all activity in protected areas.

Roar Equipment

The Roar high-performance research cloud is composed of hardware that is interconnected over high-speed network fabrics and includes various software offerings and services.

Hardware

Roar currently maintains 23,000 computational cores in four different configurations: high-memory cores (1 TB RAM per server), standard-memory cores (256 GB RAM per server), basic-memory cores (128 GB RAM per server), and GPU cores (using NVIDIA Tesla K80 GPU accelerators). The standard-memory and basic-memory compute cores are housed within high-density Dell M1000E blade server enclosures, while the high-memory and GPU-accelerator cores are in conventional 4U and 2U rack-mount configurations, respectively.

Roar also maintains 20 PB of data storage capacity, comprising 8 PB of active storage pools that provide immediate data access and retrieval and 12 PB of near-line storage for long-term and archival purposes. The active storage operates on DDN 12KX40 and GS7K flash storage array systems, while near-line storage utilizes Oracle’s FS1 flash storage appliance and an SL8500 Tape Library.

The compute and storage hardware is interconnected using Ethernet and InfiniBand network fabrics. The Ethernet network utilizes Brocade VCS fabric technology and currently comprises 1) four aggregation-layer and two core-layer Brocade VDX 8770-8 enterprise-level switches that provide 10, 40, and 100 Gbps link capacity, 2) four Brocade VDX 6740 switches that provide 10 Gbps link capacity, and 3) one Dell N2024 switch per rack for host iDRAC (integrated Dell Remote Access Controller) remote management at 1 Gbps line rate. The InfiniBand network consists of 15 Mellanox SX6025 switches and two Mellanox SX6536 648-port non-blocking SDN switch systems, all operating at 56 Gbps (FDR) line rate.

Software

Roar maintains and regularly updates an expansive software stack. The stack currently contains 240 applications, with more added at regularly scheduled intervals. The applications include security monitoring software (e.g., OSSEC), batch schedulers (e.g., Moab, Torque), compilers, file transfer programs, and parallel programming libraries (e.g., MPI, OpenMP). The system also contains software applications commonly used by researchers, such as MATLAB, COMSOL, R, and Python, as well as programs for performing specialized tasks, such as Abaqus, QuantumWise, and TopHat.

Roar Support

Roar is maintained by the ICDS staff, who provide network monitoring, backup services, software updates, code optimization, and service-desk support. ICDS uses SolarWinds network monitoring software to monitor the health and status of the network, hardware, and storage. Roar is actively monitored during normal business hours (9:00 AM – 5:00 PM), Monday through Friday. Roar also hosts OSSEC, an open-source host-based intrusion detection system, which protects the system by monitoring available logs, alerting administrators to unauthorized system modifications, and providing a mechanism to enforce security requirements. The team uses Nessus Professional to scan the system for vulnerabilities that could enable attacks such as unauthorized access and Denial of Service (DoS).

The ICDS website offers documentation to help users resolve technical issues they may encounter. This support is supplemented by the i-ASK Center, a service desk which supplies expert technical assistance for user problems. In the event of more complex issues, the engineers of the ICDS Technical Support Team provide advanced in-person support to users to ensure that problems are resolved in a timely and professional manner.

Roar Security Information

The Institute for Computational and Data Sciences Roar system implements the following security measures:

  • Electronic Security
  • Physical Security
  • Controls for Servers / Data Access
  • Data Destruction

Electronic Security

The Roar architecture enables electronic security through file access controls and mitigation of software vulnerabilities. Roar provides the capability to audit all system access and requires a strong password and two-factor authentication. To mitigate the potential adverse impacts of malicious software and security attacks, Roar uses automated mechanisms to identify and patch software vulnerabilities.

Physical Security

Roar is deployed in secure data facilities located on University premises. Each data center requires card-swipe and/or PIN access to gain entrance into the physical space. Access is limited to systems administration personnel only, with exceptions controlled by the Information Technology Services (ITS) Secure Operations Center (SOC). The data center has successfully completed a DCAA audit.

Controls for Servers / Data Access

Roar login/endpoint nodes are protected by software-based firewalls which only permit Secure Shell (SSH) traffic. Other connections are immediately dropped. Data and services hosted on Roar are not discoverable from the public internet. By default, Roar enforces Least Privilege access concepts across the system, providing users with only the minimum set of permissions and accesses required to complete their function. File systems are secured with standard POSIX-based Access Control Lists (ACLs) as well as standard Unix directory and file permissions. This enables individual accounts to be organized into groups; a Principal Investigator (PI) may designate specific users in the PI’s group to access certain data. Group access to sensitive data, such as genomic and phenotypic data, is only granted to users with the consent of the responsible PI. Users are only permitted access to data which they have permission to view. For example, a user in one group with access to NIH data is not by default granted access to the NIH data of another group.
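The group-based access model described above rests on standard POSIX permission bits, which can be sketched as follows (a simplified illustration only; Roar additionally layers ACLs on top of these bits, and the directory created here is a hypothetical stand-in for a PI's project space):

```python
import os
import stat
import tempfile

# Hypothetical PI project directory: the owner has full access, members of
# the owning group may read and enter, and all other users are denied
# (mode 0o750), mirroring the least-privilege model described above.
project_dir = tempfile.mkdtemp(prefix="pi_group_data_")
os.chmod(project_dir, 0o750)

mode = stat.S_IMODE(os.stat(project_dir).st_mode)
print(f"{project_dir}: {oct(mode)} (no access for users outside the group)")
```

Finer-grained sharing, such as granting a single collaborator outside the group read access, is where POSIX ACLs (e.g., `setfacl`) extend this basic owner/group/other scheme.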

Data Destruction

Backups of data stored on Roar are retained on tape for a period of 90 days. Backup data is automatically purged from tape once the 90-day period has been exceeded.

All PIs, along with Roar and PSU IT leadership, are required to sign an NIH Compliance document prior to storing any data on Roar.

Roar meets the standards laid out in NIH’s “Security Best Practices for Controlled-Access Data Subject to the NIH Genomic Data Sharing (GDS) Policy” document. Roar is compliant with NIST Special Publication 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations”. Roar is also pursuing FedRAMP certification which, once achieved, will further ensure compliance with the NIST Special Publications, and their embedded security requirements, referenced by agencies such as the National Institutes of Health (NIH).