Poznan Supercomputing and Networking Center


Contact: Krzysztof Kurowski (krzysztof.kurowski@man.poznan.pl)

Phone: +48618582072

Poznan Supercomputing and Networking Center (PSNC) was established in 1993 as a research laboratory of the Polish Academy of Sciences and is responsible for the development and operation of the national optical research network, high-performance computing, and various eScience services and applications in Poland. The optical network infrastructure, called PIONIER, is based on dedicated fibres and DWDM equipment owned by PSNC.

PSNC runs several active computer science research and development groups working on a variety of topics, including innovative HPC applications, portals, digital media services, mobile user support technologies and services, digital libraries, storage management, tools for network management, optical networks and QoS management. As demonstrated in many international projects funded by the European Commission, PSNC experts bring unique IT capabilities to research and e-Science, building on experience gained in the 5th, 6th and 7th Framework Programmes. Active participation in the design and development of high-speed interconnects and fibre-based research and education networks has made PSNC a key member of the pan-European GEANT optical network, which connects 34 countries through 30 national research and education networks (NRENs).

PSNC also participates in the largest scientific experiments, offering access to large-scale computing, data management and archiving services. It has been engaged in PRACE, the European initiative to build a high-performance computing e-Infrastructure that will result in permanent petaflops-class supercomputing installations involving reconfigurable hardware accelerators. PSNC also takes an active role in EUDAT, contributing to the development of sustainable data storage, archiving and backup services. Another branch of PSNC activity is the hosting of high-performance computers, including SGI and SUN systems and clusters of 64-bit PC application servers. PSNC has participated in multiple national and international projects (Clusterix, ATRIUM, SEQUIN, 6NET, MUPBED, GN2 JRA1, GN2 JRA3, GN2 SA3), coordinated pan-European projects such as GridLab, PORTA OPTICA STUDY and PHOSPHORUS, and took an active part in many other EU projects such as HPC-Europa I/II, OMII-Europe, EGEE I/II, ACGT, InteliGrid, QosCosGrid and MAPPER.

Service Portfolio

  • Provider of HPC and cloud resources
  • Provider of fast and reliable networking
  • Provider of reliable storage and backup services
  • Software development and code optimization
  • Performance benchmarking
  • Large-scale simulation consulting and support
  • Training
  • Data analytics on HPC systems
  • Green technologies – energy optimization in IT systems

HPC Resources available

No. of systems: 1
Architecture(s): Eagle PC cluster: Intel Xeon E5-2697, InfiniBand FDR
Performance: 1372.13 TFLOPS
Storage: 3.6 PB Lustre; RAM: 120.6 TB
Test systems for accelerators: none

HPC applications and software available:
Development tools: various compiler and performance analysis tools (not limited to academic use)
ISV codes: Abaqus FEA, AMBER, ACML, CodeAnalyst, CUDA Toolkit, Gaussian, MATLAB, ORCA, PLINK, TBB
Open source: ABINIT, ACML, Bowtie, GROMACS, HMMER, mapDamage, mumax, NAMD, Quantum ESPRESSO, RNA-SeQC, RSEM, SIESTA, Tabix, TopHat, Trinity (trinityrnaseq), VCFtools, Velvet, Vowpal Wabbit, Xenome
Comment: customer-specific software can be installed on request.

Access

Interactive access via SLURM, plus QCG tools (CLI, desktop tool, portal); a sample batch script is sketched below.

Access policies

No restrictions. Non-academic users need a contract.
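For reference, a minimal SLURM batch script for Eagle might look like the sketch below. The partition name, account identifier, module name and application command are placeholders, not PSNC-specific values; the correct ones depend on the user's computing grant and should be taken from the accounting and Applications Catalogue portals.

    #!/bin/bash
    #SBATCH --job-name=my_simulation   # job name shown in the queue
    #SBATCH --nodes=2                  # number of Eagle nodes
    #SBATCH --ntasks-per-node=28       # MPI ranks per node (assuming two 14-core E5-2697 v3 CPUs per node)
    #SBATCH --time=01:00:00            # requested walltime
    #SBATCH --partition=standard       # hypothetical partition name
    #SBATCH --account=my_grant_id      # placeholder for the user's computing grant

    module load openmpi                # hypothetical module name; see the Applications Catalogue
    srun ./my_mpi_application          # placeholder application launched on all allocated cores

Such a script would be submitted with "sbatch job.sh" and monitored with "squeue -u $USER".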

 

No. of systems: 1
Architecture(s): Inula PC cluster: a mix of AMD (Interlagos) and Intel Xeon E5-2697 nodes, InfiniBand QDR
Performance: 303.4 TFLOPS
Storage: 43.7 TB; RAM: 6.56 TB
Test systems for accelerators: 205 x Tesla M2050

HPC applications and software available:
Development tools: various compiler and performance analysis tools (not limited to academic use)
ISV codes: Abaqus FEA, AMBER, ACML, CodeAnalyst, CUDA Toolkit, Gaussian, MATLAB, ORCA, PLINK, TBB
Open source: ABINIT, ACML, Bowtie, GROMACS, HMMER, mapDamage, mumax, NAMD, Quantum ESPRESSO, RNA-SeQC, RSEM, SIESTA, Tabix, TopHat, Trinity (trinityrnaseq), VCFtools, Velvet, Vowpal Wabbit, Xenome
Comment: customer-specific software can be installed on request.

Access

Interactive access via PBS, plus QCG tools (CLI, desktop tool, portal); a sample GPU batch script is sketched below.

Access policies

No restrictions. Non-academic users need a contract.
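As with Eagle, the sketch below shows what a PBS job script requesting the Tesla M2050 test nodes might look like. The queue name, the GPU resource syntax (which differs between Torque and PBS Professional), the module name and the application command are assumptions and should be checked against the local documentation.

    #!/bin/bash
    #PBS -N gpu_test                   # job name
    #PBS -q gpu                        # hypothetical queue name for the Tesla M2050 nodes
    #PBS -l nodes=1:ppn=8:gpus=2       # Torque-style request: 1 node, 8 cores, 2 GPUs (assumed syntax)
    #PBS -l walltime=01:00:00          # requested walltime

    cd "$PBS_O_WORKDIR"                # PBS starts jobs in the home directory; move to the submission directory
    module load cuda                   # hypothetical module name
    ./my_gpu_application               # placeholder for the actual GPU-enabled binary

The script would be submitted with "qsub job.pbs" and monitored with "qstat -u $USER".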

No. of systems: 1
Architecture(s): Chimera SMP: Intel Xeon E7-8837, NUMAlink 5, paired-node 2D torus
Performance: 21.8 TFLOPS
Storage: 1.2 TB; RAM: 16 TB
Test systems for accelerators: none

Access

Interactive

Access policies

No restrictions. Non-academic users need a contract.

 

No. of systems: 1
Architecture(s): Cane PC cluster: AMD (Interlagos), InfiniBand QDR (fat tree, 32 Gb/s)
Performance: 86 TFLOPS
Test systems for accelerators: 128 x Tesla M2050

Access

Interactive

Access policies

No restrictions. Non-academic users need a contract.

 

Other resources available

No. of systems: 2 production-grade OpenStack instances
Architecture(s): Intel Xeon E3/E5 v3/v4
No. of servers: 100-150
Memory: 8-20 TB
Storage: 1 PB as software-defined storage or disk arrays, on spinning drives and SSDs
Virtualization platform: OpenStack Liberty / KVM
Access/VPN: via VPN; management portal and API (an example CLI session is sketched below)
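For illustration, a typical session against one of the OpenStack instances could look like the sketch below, using the standard python-openstackclient. The flavor, image, network and key names are placeholders; the actual values must be taken from the project assigned to the user.

    # list resources visible to the project
    openstack flavor list
    openstack image list

    # launch a VM from a placeholder image on a placeholder private network
    openstack server create \
        --flavor m1.medium \
        --image ubuntu-lts \
        --network private-net \
        --key-name my-key \
        demo-vm

    # check the instance status
    openstack server show demo-vm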

Access policies

No restrictions. Non-academic users need a contract.

 

HPC-related Products

Users can submit jobs directly to the queueing system. To address more sophisticated scenarios and to simplify the submission and control of various types of computational experiments, a set of services and tools is also offered, for example science gateway portals, desktop tools and command-line clients.

Problems are reported, and their resolution tracked, through a dedicated portal-based helpdesk system.

The consumption of computational resources assigned to users’ grants can be monitored in the dedicated accounting portal.

The infrastructure also offers access to specialized hardware and software for remote or in-situ visualization of simulation results.

The list of installed software packages and their versions, together with dynamic information about their current status and planned downtimes, is presented to users in the Applications Catalogue portal.
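On the clusters themselves, similar information can usually be queried from the command line. The sketch below assumes an Environment Modules/Lmod setup, which is typical for such systems but is an assumption here, and uses a hypothetical package name and version.

    module avail                 # list all software packages installed as modules
    module avail gromacs         # search for versions of a specific package (hypothetical name)
    module load gromacs/2016.4   # load a specific version into the environment (placeholder version)
    module list                  # show the modules currently loaded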

Quality management certifications

None at the moment; ISO 9001 and ISO 27001 certifications are planned for the IaaS services.

Collaborations with SMEs

PSNC has cooperated with SMEs for many years, both under direct contracts and in third-party projects.
