Welcome to the web site of the National Center for Supercomputing Applications, Bulgaria!

PRACE Supercomputers, Software and Applications

Last Updated: Monday, 19 August 2013

Partnership for Advanced Computing in Europe
PRACE

Supercomputers, Software and Applications

Prof. DSc. Stoyan Markov

 

PRACE Supercomputers


 

1985 – A technological breakthrough, unthinkable until then: a personal computer capable of performing one million operations per second. (Prof. Dr. Thomas Lippert)

 

Jülich Supercomputing Centre, JUGENE – 2009 (presentation by Prof. Dr. Thomas Lippert)

 

JUQUEEN Blue Gene/Q – 2013: 5,000,000,000,000,000 FLOPS (5 petaflops)

Specification
  • 28 racks (7 rows of 4 racks) – 28,672 nodes (458,752 cores)
  • Rack: 2 midplanes with 16 node boards each (16,384 cores)
  • Node board: 32 compute nodes
  • Node: 16 cores
  • Main memory: 448 TB
  • Overall peak performance: 5.9 petaflops (see the consistency check after the specifications below)
  • Linpack: 5.0 petaflops
  • I/O nodes: 248 (27×8 + 1×32), connected to 2 Cisco switches
IBM BlueGene/Q Specifications:
  • Compute card/processor: IBM PowerPC® A2, 1.6 GHz, 16 cores per node
  • Memory: 16 GB SDRAM DDR3 per node
  • Networks: 5D torus – 40 GB/s; 2.5 µs latency (worst case)
  • Collective network – part of the 5D torus;
  • Global barrier/interrupt – part of the 5D torus; PCIe x8 Gen2 I/O;
  • 1 Gb control network – system boot, debug, monitoring;
  • I/O nodes (10 GbE);
  • 16-way SMP processor; configurable with 8, 16 or 32 I/O nodes per rack;
  • Operating system: compute nodes run CNK, a lightweight proprietary kernel;
  • Power: direct-current voltage (48 V converted to 5 V, 3.3 V and 1.8 V on the node boards);
  • 10–80 kW per rack (estimated); maximum 100 kW per rack;
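The peak figure quoted above can be reproduced from the listed core count and clock rate. The only assumption in this back-of-the-envelope Python sketch is that each A2 core retires 8 floating-point operations per cycle (4-wide QPX SIMD with fused multiply-add):

    cores = 28 * 1024 * 16        # 28 racks x 1024 nodes/rack x 16 cores/node = 458,752 cores
    clock_hz = 1.6e9              # 1.6 GHz PowerPC A2
    flops_per_cycle = 8           # assumption: 4-wide QPX SIMD x 2 for fused multiply-add

    peak = cores * clock_hz * flops_per_cycle
    print(f"{peak / 1e15:.2f} PFLOP/s")   # ~5.87 PFLOP/s, matching the quoted 5.9 petaflops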

Research Centre Jülich, Germany

 

CRAY XE6 (Hermit), Stuttgart, Germany

Specification:
  • Peak performance for the whole system: 1,045,708.8 GFLOP/s
  • Total compute node RAM: 126 TB
  • 3,552 dual-socket G34 compute nodes
  • AMD Opteron 6276 (16 cores/CPU @ 2.3 GHz) processors, 2 per node
  • 32 MB combined L2+L3 cache, of which 16 MB L3 cache
  • 8 channels of DDR3 PC3-12800 memory bandwidth to 8 DIMMs
  • Memory bandwidth: 25.6 GB/s per dual-channel Orochi die, 51.2 GB/s quad-channel per CPU, 102.4 GB/s per node
  • Direct Connect Architecture 2.0 with HyperTransport HT3; supports ISA extensions
  • Peak performance per socket: 147.2 GFLOP/s
  • Peak performance per node: 294.4 GFLOP/s
  • Standard configuration: 32 GB RAM per node
  • 64 GB memory for 480 nodes (13.5%); 4 GB/core for 15,360 cores
  • The STREAM benchmark shows about 65 GB/s data transfer for a node

Automotive Simulation Center Stuttgart – Daimler, Opel, Porsche, Altair, suppliers, FKFS, HLRS, KIT, Karmann, University of Stuttgart, CRAY, IBM, NEC, ABACUS, DYNA, Engineous, INTES, CADFEM, SIT, VDC, ESI

Lamborghini Aventador, 2012

 

SuperMUC Petascale System, LRZ – the computer centre for Munich's universities and for the Bavarian Academy of Sciences and Humanities

Specification:
  • 155,656 processor cores in 9400 compute nodes
  • >300 TB RAM
  • Infiniband FDR10 interconnect
  • 4 PB of NAS-based permanent disk storage
  • 10 PB of GPFS-based temporary disk storage
  • >30 PB of tape archive capacity
  • Powerful visualization systems
  • Highest energy-efficiency

 

CURIE, CEA Très Grand Centre de Calcul, France

Specification:
  • 360 bullx S6010 nodes. Each node: 4 eight-core x86-64 CPUs, 128 GB of memory, 1 local disk of 2 TB. 105 teraflops peak;
  • Processors: 1,440 eight-core Intel® Nehalem-EX X7560 @ 2.26 GHz, a total of 11,520 cores;
  • These nodes rely on a specific Bull Coherent Switch (BCS) grouping nodes 4 by 4;
  • These 'fat' nodes are targeted at hybrid parallel codes (MPI/OpenMP) requiring large memory and/or multithreading capacity, and at pre- and post-processing tasks.
  • Curie hybrid nodes specifications:
  • 16 bullx B chassis with 9 hybrid B505 GPU blades each; per blade, 2 Intel® Westmere® 2.66 GHz + 2 Nvidia M2090 (T20A), a total of 288 Intel® + 288 Nvidia processors. 192 teraflops peak.
  • Curie thin nodes specifications:
  • 5,040 bullx B510 nodes. Each node: 2 eight-core Intel® Sandy Bridge EP (E5-2680) processors at 2.7 GHz, 64 GB of memory, 1 local SSD.
  • Processors: 10,080 eight-core processors, Intel® Xeon® next generation, a total of 80,640 cores. These nodes are targeted at MPI parallel codes.
  • Interconnection network: InfiniBand QDR full fat-tree network.
  • Global file system: 5 PB of disks (100 GB/s bandwidth), 10 PB of tapes, 1 PB of disk cache;
  • Software: Linux, Lustre, SLURM.
  • Bull/CEA software: Shine, ClusterShell, Ganesha, Robinhood.

 

MareNostrum III, Barcelona Supercomputing Center, Spain

Specification:
  • Peak Performance of 1 Petaflops;
  • CPUs: 48,448 Intel SandyBridge-EP E5-2670 cores at 2.6 GHz (3,028 compute nodes);
  • Memory: 94.625 TB of main memory (32 GB/node);
  • Disk storage: 1.9 PB;
  • Interconnection networks: Infiniband, Gigabit Ethernet;
  • Operating System: Linux - SUSE

 

FERMI Blue Gene/Q, CINECA Supercomputer center, Italy

Specification:
  • Architecture: 10 BG/Q Racks;
  • Frame Model: IBM-BG/Q;
  • Processor Type: IBM PowerA2, 1.6 GHz;
  • Computing Cores: 163840;
  • Computing Nodes: 10240;
  • RAM: 1 GB/core;
  • Internal network: 5D torus (network interface with 11 links);
  • Disk Space: 2PByte of scratch space;
  • Peak Performance: 2PFlop/s.

 

CINECA Eurora Supercomputer Prototype

Specification:
  • Architecture: Linux Infiniband Cluster;
  • Processor types: Intel Xeon E5-2658 (8-core Sandy Bridge) 2.10 GHz (compute); Intel Xeon E5-2687W (8-core Sandy Bridge) 3.10 GHz (compute); Intel Xeon E5645 2.4 GHz (login)
  • Number of nodes: 64 Compute + 1 Login;
  • Number of cores: 1024 (compute) + 12 (login);
  • Number of accelerators: 114 nVIDIA Tesla K20 (Kepler) and 14 Intel Xeon Phi (MIC);
  • RAM: 1.1 TB (16 GB/Compute node + 32GB/Fat node);
  • OS: RedHat CentOS release 6.3;

 

Blue Gene/P, Sofia Supercomputing Centre, Bulgaria

Specification:
  • Processor module: four IBM PowerPC 450 cores (850 MHz) integrated on a single chip. Each chip is capable of 13.6 billion operations per second;
  • Double-precision, dual-pipe floating-point acceleration on each core;
  • Two racks, 2048 quad-core PowerPC 450 chips, a total of 8192 cores;
  • A total of 4 TB random access memory;
  • 16 I/O nodes, currently connected via fibre optics to a 10 Gb/s Ethernet switch;
  • Maximal LINPACK performance achieved: Rmax = 23.42 Tflops;
  • Theoretical peak performance: Rpeak = 27.85 Tflops;
System software
  • Operation System: SUSE Linux Enterprise Server (SLES10), Service Pack 1
  • Compilers:
    • IBM XL C/C++ Advanced Edition for Blue Gene/P V13.0;
    • IBM XL Fortran Advanced Edition for Blue Gene/P V11.1;
    • GNU Toolchain (gcc, glibc, binutils, gdb, python).
Libraries
  • Engineering and Scientific Subroutine Library (ESSL);
  • MPI (MPICH2);
  • IBM Tivoli Workload Scheduler;
  • Load Leveler Version 3 Release 4.2 PTF 1.

 

Application software


The Applied Software Packages and Libraries:

Molecular Dynamics

  • GROMACS 6 (Groningen Machine for Chemical Simulations)

GROMACS is an engine to perform molecular dynamics simulations and energy minimization. These are two of the many techniques that belong to the realm of computational chemistry and molecular modeling. Computational chemistry is simply the use of computational techniques in chemistry, ranging from the quantum mechanics of molecules to the dynamics of large, complex molecular aggregates. Molecular modeling is the general process of describing complex chemical systems in terms of a realistic atomic model, with the aim of understanding and predicting macroscopic properties based on detailed knowledge at the atomic scale. Molecular modeling is often used to design new materials, for which the accurate prediction of the physical properties of realistic systems is required. Volume: 53.5 MB; 1.1 million lines of code.

  • NAMD 2.9 (NAnoscale Molecular Dynamics )

NAMD is a parallel, object-oriented molecular dynamics code designed for high-performance simulation of large biomolecular systems. Simulation preparation and analysis are integrated into the visualization package VMD. NAMD pioneered the use of hybrid spatial and force decomposition, a technique used by most scalable programs for biomolecular simulations, including Blue Matter. NAMD not only runs on a wide variety of platforms ranging from commodity clusters to supercomputers, but also scales to thousands of processors. NAMD has been tested on up to 64,000 processors and has several important features. Volume: 32.9 MB

  • LAMMPS 2012 (Large-scale Atomic/Molecular Massively Parallel Simulator)

LAMMPS is a classical molecular dynamics code that models an ensemble of particles in a liquid, solid, or gaseous state. It can model atomic, polymeric, biological, metallic, granular, and coarse-grained systems using a variety of force fields and boundary conditions. It was developed at Sandia National Laboratories, a US Department of Energy facility, with funding from the DOE. LAMMPS integrates Newton's equations of motion for collections of atoms, molecules, or macroscopic particles that interact via short- or long-range forces with a variety of initial and/or boundary conditions. For computational efficiency LAMMPS uses neighbor lists to keep track of nearby particles. The lists are optimized for systems with particles that are repulsive at short distances, so that the local density of particles never becomes too large. On parallel machines, LAMMPS uses spatial decomposition techniques to partition the simulation domain into small 3D subdomains, one of which is assigned to each processor. Processors communicate and store "ghost" atom information for atoms that border their subdomain. LAMMPS is most efficient (in a parallel sense) for systems whose particles fill a 3D rectangular box with roughly uniform density. Volume: 66.6 MB
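The neighbor-list idea mentioned above can be illustrated with a short cell-list sketch in Python. This is illustrative code, not LAMMPS source; the box size, cutoff and particle count are arbitrary:

    import numpy as np

    def build_neighbor_lists(positions, box, cutoff):
        """Toy cell-list neighbor search: bin particles into cells at least as large
        as the cutoff, then compare each particle only with particles in its own and
        the 26 surrounding cells instead of with all N*(N-1)/2 pairs."""
        n_cells = np.maximum((box // cutoff).astype(int), 1)
        cell_size = box / n_cells
        cells = {}
        for i, r in enumerate(positions):
            idx = tuple((r // cell_size).astype(int) % n_cells)
            cells.setdefault(idx, []).append(i)

        neighbors = {i: [] for i in range(len(positions))}
        for idx, members in cells.items():
            candidates = []
            for shift in np.ndindex(3, 3, 3):                     # the 27 surrounding cells
                nb = tuple((np.array(idx) + np.array(shift) - 1) % n_cells)
                candidates.extend(cells.get(nb, []))
            for i in members:
                for j in candidates:
                    if j <= i:
                        continue
                    d = positions[i] - positions[j]
                    d -= box * np.round(d / box)                  # minimum-image convention
                    if np.linalg.norm(d) < cutoff:
                        neighbors[i].append(j)
        return neighbors

    # Example: 1,000 particles at random positions in a 20 x 20 x 20 box, cutoff 2.5
    rng = np.random.default_rng(0)
    box = np.array([20.0, 20.0, 20.0])
    positions = rng.uniform(0.0, 20.0, size=(1000, 3))
    nl = build_neighbor_lists(positions, box, cutoff=2.5)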

  • MMFF94 (Combined protein-ligand force field calculations)

Merck Molecular Force Field (MMFF) is a family of force fields developed by Merck Research Laboratories. They are based on the MM3 force field. MMFF is not optimized for a single use (like simulating proteins or small molecules), but tries to perform well for a wide range of organic chemistry calculations. The parameters in the force field have been derived from computational data. Volume: 14.1 MB

 

Ab initio Molecular Dynamics

  • CP2K 2.4.

CP2K is a suite of modules collecting a variety of molecular simulation methods at different levels of accuracy, from ab-initio DFT to classical Hamiltonians, passing through the semi-empirical NDDO approximation. It is used routinely for predicting energies, molecular structures, vibrational frequencies of molecular systems and reaction mechanisms, and is ideally suited for performing molecular dynamics studies.
CP2K is written in Fortran 95 and performs atomistic and molecular simulations of solid-state, liquid, molecular and biological systems. Volume: 102 MB

 

  • NWChem 6.1. (Northwest Chemistry Laboratory)

NWChem is a computational chemistry package. The program is designed for parallel computer systems including parallel supercomputers and large distributed clusters. It takes advantage of available parallel computing resources and high networking bandwidth. It can perform many molecular calculations including density functional theory, Hartree-Fock, Møller-Plesset, coupled-cluster, configuration interaction, molecular dynamics including the computation of free energies using a variety of force fields (AMBER, CHARMM), mixed quantum mechanics, geometry optimizations, vibrational frequencies, static one-electron properties, relativistic corrections (Douglas-Kroll, Dyall-Dirac, spin-orbit), ab-initio molecular dynamics (Car-Parrinello), extended (solid-state) systems DFT and periodic system modeling. NWChem is scalable, both in its ability to treat large problems efficiently, and in its utilization of available parallel computing resources. Volume: 299.7 MB

  • GAMESS (General Atomic and Molecular Electronic Structure System)

GAMESS's primary focus is on ab initio quantum chemistry calculations. It can also perform density functional theory calculations, semi-empirical calculations (AM1, PM3) and QM/MM calculations.
Types of wave functions: Hartree-Fock (RHF, ROHF, UHF, GVB); CASSCF, CI, MRCI; coupled cluster methods; second-order perturbation theory; MP2 (closed shells); ROMP2 (spin-correct open shells); UMP2 (unrestricted open shells); MCQDPT (CASSCF - MRMP2); localized orbitals (SCF, MCSCF).
Fragment Molecular Orbital Theory (FMO) is a computational method that can compute very large molecular systems with thousands of atoms using ab initio quantum-chemical wave functions. There are two main application fields of FMO: biochemistry and molecular dynamics of chemical reactions in solution. In addition, there is an emerging field of inorganic applications.
In 2005, FMO was applied to the calculation of the ground electronic state of a photosynthetic protein with more than 20,000 atoms. A number of applications of FMO to biochemical problems have been published, including drug design, quantitative structure-activity relationship (QSAR) studies, as well as studies of excited states and chemical reactions of biological systems. In a recent development (2008), the adaptive frozen orbital (AFO) treatment of the detached bonds was suggested for FMO, making it possible to study solids, surfaces and nanosystems, such as silicon nanowires. The FMO method is implemented in the GAMESS and ABINIT-MP software packages.

 

Quantum Chemistry

  • Quantum Espresso

Quantum ESPRESSO can currently perform the following kinds of calculations: ground-state energy and one-electron (Kohn-Sham) orbitals; atomic forces, stresses, and structural optimization; molecular dynamics on the ground-state Born-Oppenheimer surface, also with variable cell; Nudged Elastic Band (NEB) and Fourier String Method Dynamics (SMD) for energy barriers and reaction paths; macroscopic polarization and finite electric fields via the modern theory of polarization (Berry phases). Volume: 43.1 MB; 347,000 lines of code

 

  • DALTON.

Dalton is a powerful molecular electronic structure program, with extensive functionality for the calculation of molecular properties at the HF, DFT, MCSCF, and CC levels of theory. General features: first- and second-order methods for geometry optimizations; robust second-order methods for locating transition states; constrained geometry optimizations (bonds, angles and dihedral angles can be fixed during optimizations); general numerical derivatives that automatically make use of the highest-order analytical derivative available; direct and parallel HF and DFT code using replication of Fock matrices, with either MPI or PVM3 for message passing; effective core potentials (ECPs).

 

  • Qbox.

First-principles molecular dynamics (FPMD) is an atomistic simulation method that combines an accurate description of electronic structure with the capability to describe dynamical properties by means of molecular dynamics (MD) simulations. This approach has met with tremendous success since its introduction by Car and Parrinello in 1985.
Because of its unique combination of accuracy and generality, it is widely used to investigate the properties of solids, liquids, biomolecules, and, more recently, nanoparticles. Corresponding with this success, FPMD has also become one of the most important consumers of computer cycles in many supercomputing centers. This, in turn, has motivated research on the optimization of the algorithms used in FPMD. The success of FPMD can be partially attributed to the fact that, in its original implementation, which was based on Fourier representations of solutions, the method readily benefited from mature and well-optimized numerical algorithms, such as the fast Fourier transform (FFT) and dense linear algebra.
The Qbox project was started in anticipation of the first Blue Gene machine, the IBM Blue Gene/L platform, which was to be installed at the Lawrence Livermore National Laboratory with 65,536 nodes.

 

  • DOCK 6.4.

DOCK addresses the problem of "docking" molecules to each other. In general, "docking" is the identification of the low-energy binding modes of a small molecule, or ligand, within the active site of a macromolecule, or receptor, whose structure is known. A compound that interacts strongly with, or binds, a receptor associated with a disease may inhibit its function and thus act as a drug. Solving the docking problem computationally requires an accurate representation of the molecular energetics as well as an efficient algorithm to search the potential binding modes.
We and others have used DOCK for the following applications: predicting binding modes of small molecule-protein complexes; searching databases of ligands for compounds that inhibit enzyme activity; searching databases of ligands for compounds that bind a particular protein; searching databases of ligands for compounds that bind nucleic acid targets; examining possible binding orientations of protein-protein and protein-DNA complexes; helping guide synthetic efforts by examining small molecules that are computationally derivatized.

 

  • ROSETTA 3 (High-Resolution Protein Structure Prediction Codes).

Rosetta 3 is a library-based, object-oriented software suite which provides a robust system for predicting and designing protein structures, protein folding mechanisms, and protein-protein interactions. The Rosetta3 codes have been successful in the Critical Assessment of Techniques for Protein Structure Prediction (CASP7) competitions.
The Rosetta3 method uses a two-phase Monte Carlo algorithm to sample the extremely large space of possible structures in order to find the most favorable one. The first phase generates a low-resolution model of the protein backbone atoms while approximating the side chains with a single dummy atom. The high-resolution phase then uses a more realistic model of the full protein, along with the corresponding interactions, to find the best candidate for the native structure.
The library contains the various tools that Rosetta uses, such as Atom, ResidueType, Residue, Conformation, Pose, ScoreFunction, ScoreType, and so forth. These components provide the data and services Rosetta uses to carry out its computations. The major library is named "core", which contains conformational representations of polymer structures, their constituents, the capability to alter the conformations of those structures, and the ability to score those structures with the Rosetta scoring algorithms. It also contains the command-line options system.
The next most important library is "protocol". It contains higher-level classes and functions needed by Rosetta subprotocols. All the protocols are wrapped by Mover objects, which unify their interface and integrate them with the job distributor for use on clusters. This Mover interface also allows you to use pre-developed protocols by just calling the corresponding mover, making it easier to create new protocols. Volume: 1.9GB

Areas of application:

  • genomics, proteomics, molecular biology, biochemistry, biophysics, quantum chemistry, cell medicine, materials science;
  • 3D protein structure prediction, 3D mutant protein structure reconstruction on the basis of comparison between normal and mutant DNA;
  • protein-protein, protein-receptor, protein-DNA and protein-RNA interaction simulation;
  • molecular dynamics simulations of protein-antibody and protein-antibody-antigen interactions and of enzyme catalysis processes;
  • computer-aided drug design;
  • polymer structure simulations and materials science: the atomic processes operating at the contact interface between unlike as well as like materials; simulation of oxide-metal, polyimide-metal and metal-metal contacts; detailed, atom-by-atom study of a nanometre-size metallic tip approaching, adhering to, and being withdrawn from a substrate of the same or a different material; molecular dynamics simulation of ceramic, composite and metal-oxide structures; nano-cluster simulation at the atomic level.

 

Seismic and seismic risk

  • SPECFEM3D (Seismic Wave Propagation 3D Model).

Unstructured hexahedral mesh generation is a critical part of the modeling process in the Spectral-Element Method (SEM). We present some examples of seismic wave propagation in complex geological models, automatically meshed on a parallel machine using CUBIT (Sandia National Laboratories), an advanced 3D unstructured hexahedral mesh generator that offers new opportunities for seismologists to design, assess, and improve the quality of a mesh in terms of both geometrical and numerical accuracy.
The main goal is to provide useful tools for understanding seismic phenomena due to surface topography and subsurface structures such as low wave-speed sedimentary basins. The Spectral-Element Method, originally developed in computational fluid dynamics, combines the accuracy of a pseudo-spectral method with the flexibility of a finite-element method: large curved "spectral" finite elements with high-degree polynomial interpolation; a mesh that honors the main discontinuities (velocity, density) and the topography; and very high efficiency on parallel computers, since there is no linear system to invert (diagonal mass matrix).
The seismic method is a geophysical technique for imaging subsurface geologic structures by generating sound waves at a source and recording the reflected components of this energy at receivers. It is the industry standard for locating subsurface oil and gas accumulations.
Wave propagation with spectral elements: geometrically flexible; it is necessary to refine the mesh where the velocity contrast is high; the mesh has to honor the major interfaces.
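The "no linear system to invert" point above comes from the diagonal mass matrix: writing the semi-discrete wave equation in generic textbook form (not SPECFEM3D's exact scheme), the explicit central-difference update needs only element-wise divisions by the diagonal of M:

    M\ddot{u} + Ku = f, \qquad M \ \text{diagonal}

    u^{n+1} = 2u^{n} - u^{n-1} + \Delta t^{2}\, M^{-1}\!\left(f^{n} - K u^{n}\right)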

 

Dynamics of fluids, thermodynamic and hydrodynamic loading in high-power energy facilities

All software packages have been developed by Électricité de France (EDF).

 

  • Computational Fluid Dynamics: Code_Saturne / Syrthes 1.3.2.

Code_Saturne is a system designed to solve the Navier-Stokes equations in the cases of 2D, 2D axisymmetric or 3D flows. Its main module is designed for the simulation of flows which may be steady or unsteady, laminar or turbulent, incompressible or potentially dilatable, isothermal or not. Scalars and turbulent fluctuations of scalars can be taken into account. The code includes specific modules, referred to as "specific physics", for the treatment of Lagrangian particle tracking, semi-transparent radiative transfer, gas, pulverized coal and heavy fuel oil combustion, electrical effects (Joule effect and electric arcs), weakly compressible flows and rotor/stator interaction for hydraulic machines. The code also includes an engineering module, Matisse, for the simulation of nuclear waste surface storage.
Code_Saturne relies on a finite volume discretization and allows the use of various mesh types which may be hybrid (containing several kinds of elements) and may have structural non-conformities (hanging nodes).

Code_Saturne has been under development since 1997 by EDF R&D (Électricité de France). The software is based on a co-located Finite Volume Method (FVM) that accepts three-dimensional meshes built with any type of cell (tetrahedral, hexahedral, prismatic, pyramidal, polyhedral) and with any type of grid structure (unstructured, block structured, hybrid). It is able to handle either incompressible or compressible flows with or without heat transfer and turbulence. Parallel code coupling capabilities are provided by EDF's "Finite Volume Mesh" library (under LGPL license). Since 2007, Code_Saturne has been open source and available to any user.
Several advances have recently been made to Code_Saturne that have significantly improved the petascale performance of the code.

Conjugate gradient solver: the introduction of multigrid methods has markedly improved the overall performance and numerical robustness of the linear solver in the code. Initial benchmark results show that multigrid methods may improve performance by around a factor of three at lower processor counts, though the gains are somewhat reduced at higher processor counts.
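For context, the conjugate gradient iteration that multigrid accelerates (typically as a preconditioner) has the following generic form. This is a textbook sketch in Python, not Code_Saturne's implementation; the Jacobi preconditioner in the toy usage merely stands in for a multigrid V-cycle:

    import numpy as np

    def preconditioned_cg(A, b, precond, tol=1e-8, max_iter=1000):
        """Generic preconditioned conjugate gradient for a symmetric positive
        definite matrix A. `precond(r)` applies the preconditioner, e.g. one
        multigrid V-cycle."""
        x = np.zeros_like(b)
        r = b - A @ x
        z = precond(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = precond(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Toy usage: 2x2 SPD system, Jacobi preconditioner standing in for multigrid
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    x = preconditioned_cg(A, b, precond=lambda r: r / np.diag(A))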

 

  • Code_Aster.

Code_Aster enables structural calculations of thermal, mechanical, thermo-mechanical or coupled thermo-hydro-mechanical phenomena, with linear or non-linear behaviours, as well as closed space acoustics calculations.
The main range of application is deformable solids: this explains the great number of functionalities related to mechanical phenomena. However, the study of the behavior of industrial components requires a prior modeling of the conditions to which they are subjected, or of the physical phenomena which modify their behavior (internal or external fluids, temperature, metallurgic phase changes, electromagnetic stresses, etc.).
For these reasons, Code_Aster can "link" mechanical phenomena with thermal and acoustic phenomena. Code_Aster also provides a link to external software, and includes a coupled thermo-hydro-mechanics kit. Even though Code_Aster can be used for a number of different structural calculation problems (it is a general-purpose code), it has been developed to study the specific problems of components, materials and machines used in the energy production and supply industry. Thus, preference has been given to the modeling of metallic isotropic structures, geo-materials, reinforced concrete structure components and composite material components. Thermal and mechanical nonlinear analyses are the main features of Code_Aster: simple but effective algorithms have been developed to enable quick processing. Note that the creators did not want the algorithms to function merely as independent "black boxes".
For complex projects, it is necessary to understand the operations conducted by the code so that they can be controlled in the most efficient manner: users should refer to the theoretical manuals of the Reference Manual for information about models and methods.
The nonlinear calculations concern material behaviors (plasticity, viscous plasticity, damage, metallurgic effects, concrete hydration and drying), large deformations or rotations, and frictional contact. Users should refer to the presentation chart of the manual for a detailed description of the different features.
Usual research in the industrial sector requires mesh generators and graphical representation tools, which are not included in the code. However, other tools can be used for such tasks through interface procedures integrated within the code.

 

  • SALOMÉ

Over the last decade, the improvements in computer hardware and software have brought significant changes in the capabilities of simulation software in the field of nuclear applications. New computer power has made possible the emergence of simulations that are more realistic (complex 3D geometries being treated instead of 2D ones), more complex (multi-physics and multi-scale effects being taken into account) and more meaningful (with propagation of uncertainties).
The SALOMÉ simulation platform, developed jointly with the CEA and a number of industrial and academic partners, gives access to the fine degree of modeling required. Its design allows for detailed representation of all physical phenomena (mechanical, thermo-hydraulic, neutronic) and their interaction. It provides a common environment (meshing, display, supervision of calculations) which significantly increases the productivity of studies. This technology is providing the basis for new tools developed for nuclear engineering.
SALOMÉ provides modules and services that can be combined to create integrated applications that make the scientific codes easier to use and well interfaced with their environment. SALOMÉ is being actively developed with the support of EURIWARE/Open Cascade, representing 10 years of development effort by a very committed and dedicated team. SALOMÉ is used in nuclear research and industrial studies by CEA and EDF in the fields of nuclear reactor physics, structural mechanics, thermo-hydraulics, nuclear fuel physics, material science, geology and waste management simulation, electromagnetism and radio protection.
Many projects at CEA and EDF now use SALOME, bringing technical coherence to the software suites of these companies with the following purposes:

  • Supporting interoperability between CAD modeling and computation software (CAD-CAE link);
  • Making it easier to integrate new components on heterogeneous systems for numerical computation;
  • Setting the priority on multi-physics coupling between computation software;
  • Providing an integrated environment dedicated to the numerical simulation of physical phenomena;
  • Responding to the specific demands for quality in the context of civil nuclear applications;
  • Enabling elaborate schemes around legacy and state-of-the-art physics codes (workflows, code coupling);
  • Taking advantage of high-performance computing and visualization.

 

Several Supercomputing Applications in Science, Innovations and Industry


Why does the world, including Europe, put such an emphasis on the development of supercomputers and the software for them?

Over the last 70-80 years mathematics, physics, chemistry, biology, medicine, geology, economics, engineering and the social sciences have amassed an enormous volume of thorough knowledge about the world surrounding us, the animate and inanimate nature, about society, the structure of the micro and macro worlds, about the forces that govern them and the laws they are subject to; about their composition, about the essence of the phenomena and processes running through them, and about the causal relations between them.

In other words, we already know relatively well the intimate structure of the macro and micro worlds, the forces and energies that shape them, and the characteristics of the interactions between the objects that build them up. Based on the knowledge accumulated by humanity, models are created that describe accurately enough the processes, phenomena and interdependencies in real life and in our environment as a whole. These models then turn into pieces of software, which make it possible to carry over to high-performance computers significant pieces of often expensive and labor-intensive research, the creation of new products and technologies, and the analysis of the mechanisms that govern society.

Just remember how the human genome was sequenced and the gene functions determined. By using automatic sequencers and supercomputers, John Craig Venter was able to sequence the three billion bases of the genome in only four years. With the help of supercomputers one can solve the big, hard-to-comprehend theoretical models and thus predict events and phenomena, which can then be verified experimentally. The Standard Model is a theory describing the world's evolution since shortly after the Big Bang up to the present day. But is it true? Modeling has enabled one to derive from this theory the existence and qualities of thousands of different processes in the world of elementary particles. That was how Peter Higgs foresaw the existence of the Higgs boson. Experiments at CERN have confirmed with a 99.6% probability that it exists, which means that the theory has now been confirmed.

Supercomputers are the only instrument that combines the enormous computing power of the machines with the huge amount of knowledge, accumulated by humanity. It is that uniqueness of theirs that has substantially changed the scientific research methods, manufacturing, medicine, the media, telecommunications and people’s everyday life. Virtual design of new products, technologies and services is now an engineering practice which significantly decreases the time and costs, necessary for their production and distribution.

But the cardinal benefit of this amalgamation of spirit and matter is the accelerated diffusion of knowledge in society. The software incorporated in supercomputers makes the use of accumulated experience and knowledge much easier for millions of people. Undoubtedly, this will speed up the turn of the spiral of progress. Whether for the better or the worse – nobody knows.

The neutron was discovered in 1932. In 1938 it was experimentally proven that external neutrons divide, "split", the nucleus of uranium-235. This was a staggering discovery. For thousands of years scientists had believed that the atom is undividable, indestructible. The very word "atom" derives from the Greek ἄτομος (indivisible). Even the young Einstein had believed this. On that occasion The Daily Telegraph interviewed the Nobel laureate Paul Dirac. Asked what he thought of future applications of the neutron, he replied: "I believe this discovery is important for physics. I do not see its practical application." Seven years later the first atomic bomb was detonated, and nine years after that the first nuclear power plant was put into operation.

The experiments carried out at CERN have solely to do with elementary particle physics. But we have only a vague idea of how the results obtained may affect the future.
The mission of the specialists united in PRACE is, based on the knowledge available, to create models, to develop mathematical methods for solving them, and to create new software, so that virtual "copies" of the micro and macro world in all its variety can be implanted in computers. And to train people how to use this software.

What could happen eventually – you may look at BBC’s movie A for Andromeda (2006). But that is how life goes. No one can predict the future. Likewise, we do not know what lies in the heads of people around us and of those, who are to come after us.

 

Computer aided drug design

What does it take to discover a new medicine? Just look at the picture below.


Sequence of steps in the process of synthesizing a new drug and its release on the market:


Stages in the process of identifying drug – candidate molecules, their synthesis and experimental verification:

 

The problem with big numbers in pharmacy

About 10^80 is the number of theoretically possible biologically active substances; 10^18 of them could be probable drugs (an assumption with no firm foundation); 10^7 is the number of known chemical compounds; 10^6 are the compounds on sale; another 10^6 compounds are held in databases; 10^5 is the number of chemical substances in pharmaceutical companies' databases; there are about 5x10^4 drugs on the market, among which 10^3 are commercially viable.
It is very difficult to find the suitable molecule in that ocean of possible chemical compounds. And this is where supercomputers and models of molecular interactions come in. But more about this – later on in the presentation.

 

High throughput screening

The most powerful automatic laboratories are capable of analyzing up to 100,000 chemical compounds a day and cost several million dollars. In order to select enough drug candidates one usually needs to analyze tens of millions of compounds (combinatorial chemistry). The price of a single experiment varies between 50 and 600 dollars.


The time it takes to discover, synthesize, produce and release on the market a new drug is between 9 and 26 years, and the inherent costs come to between 150 and 800 million dollars.
Molecular dynamics is the mechanism that protects biomolecules from binding randomly, which could irreversibly impair cellular functions. It ensures accuracy and selectivity in building up genetic networks, and sustainability and repeatability of the chains of biochemical processes. It also presents the main obstacle to drug design. The picture below shows the ejection of a molecule from the active center of a protein and the ligand bound to it.


In order to calculate the force necessary to eject some substance (ligand) bound to the protein, one uses the Steered Molecular Dynamics (SMD) method. SMD induces unbinding of ligands and conformational changes in biomolecules on time scales accessible to molecular dynamics simulations. Time-dependent external forces are applied to a system, and the responses of the system are analyzed. SMD has already provided important qualitative insights into biologically relevant problems, as demonstrated here for applications ranging from identification of ligand binding pathways to explanation of elastic properties of proteins. The NAMD software package features several algorithms that compute this force once the user specifies the atoms to which a force of a certain size is to be applied and the direction in which it is to be applied.
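In the common constant-velocity variant of SMD, the pulled atoms are attached to a virtual spring whose free end moves at constant speed, and the applied force at each step is simply the spring's restoring force. A minimal Python sketch of that force calculation (illustrative only, with arbitrary spring constant and pulling speed; not NAMD source code):

    import numpy as np

    def smd_force(x, x0, n, k, v, t):
        """Constant-velocity steered-MD force on the pulled atom (or centre of mass).
        x  : current position            x0 : initial position
        n  : unit vector of the pulling direction
        k  : spring constant, v : pulling speed, t : elapsed time
        The virtual spring end sits at x0 + v*t*n; the returned force is the
        spring restoring force projected along n."""
        displacement = np.dot(x - x0, n)          # progress along the pulling direction
        return k * (v * t - displacement) * n     # F = k * (v*t - (x - x0).n) * n

    # Example with arbitrary units
    n = np.array([1.0, 0.0, 0.0])
    f = smd_force(x=np.array([1.2, 0.0, 0.0]), x0=np.zeros(3), n=n, k=5.0, v=0.01, t=100.0)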


You may find further popular information on the topic here: www.nsf.gov.


In order to carry out a purposeful search for drug candidates, one needs to trace in detail the way a candidate binds with the biomolecules' active centers. The active center is a specific zone which, as a rule, is concave (like a "pocket") in its 3D structure. For each protein it is a specific configuration of amino acids which binds complementarily with the drug candidate. There are no experimental methods available to observe these phenomena directly. The behavior of biomolecules and the interactions between them are described accurately enough with the help of classical and quantum mechanical models. Designing a drug is a complex process demanding thorough interdisciplinary knowledge and serious research experience.

 

A little bit about the models – Molecular dynamics

In order to trace the dynamics of biological molecules binding and protein behavior under certain conditions (folding, changes in conformation, etc.) it is necessary to calculate the coordinates and velocities, both variable in time, of the atoms in the system. One obtains them by solving the Newton equations for that system:
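In standard form these read (one pair of equations for each atom i):

    m_i \frac{d^2 x_i(t)}{dt^2} = -\nabla_{x_i} V(x), \qquad v_i(t) = \frac{d x_i(t)}{dt}, \qquad i = 1, \ldots, N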

where x(t) and v(t) denote respectively the coordinate vectors and velocities of the atoms in space, m denotes the masses of the atoms, and V(x) the potential energy of molecular interactions.

 

The potential energy could be presented as the sum of the different interactions between atoms:

Vb – potential energy of the valence bonds, Vθ – valence angles, VΦ – torsion angles, Vf – dihedral and pseudo-torsion angles, VHb – hydrogen bonds, Vvw – Van der Waals interactions, Vqq – Coulomb forces.
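Collecting the terms listed above, the total potential is V(x) = Vb + Vθ + VΦ + Vf + VHb + Vvw + Vqq. A commonly used (but by no means the only) set of functional forms for the main terms, written in LaTeX notation, is:

    V_b = \sum_{\text{bonds}} k_b (r - r_0)^2, \qquad
    V_\theta = \sum_{\text{angles}} k_\theta (\theta - \theta_0)^2, \qquad
    V_\Phi = \sum_{\text{torsions}} k_\phi \left[ 1 + \cos(n\phi - \delta) \right]

    V_{vw} + V_{qq} = \sum_{i<j} \left[ 4\varepsilon_{ij} \left( \frac{\sigma_{ij}^{12}}{r_{ij}^{12}} - \frac{\sigma_{ij}^{6}}{r_{ij}^{6}} \right) + \frac{q_i q_j}{4\pi\varepsilon_0 r_{ij}} \right]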


An explicit form of this potential energy function is sketched above; the expressions used in production force fields are considerably more detailed.


Perhaps the model used in DL_POLY best describes the potential energy of molecules. It is presented in a detailed and comprehensive manner in the DL_POLY User Manual 4.01.

 

The Newton equations cannot be solved analytically for biomolecules of several thousand atoms. One uses numerical methods in order to obtain discrete values of the forces acting on each atom and of its velocity. From them, one derives the new coordinates of the atoms and the directions they are moving in over short time intervals (2-3 fs). Thus the system's trajectory is obtained. Because the processes of bound-state formation or changes in molecular configuration take place relatively fast, in most cases it is enough to trace the dynamics of the molecule for time spans ranging from several tens to several hundred nanoseconds (1 ns = 10^-9 s). It takes days for the most powerful supercomputers to construct these trajectories for large systems of atoms.
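The discrete update described here is usually carried out with a velocity-Verlet scheme. A minimal, generic Python sketch (a toy harmonic force stands in for a real force field; this is not the integrator of any particular MD package):

    import numpy as np

    def velocity_verlet(x, v, forces, masses, dt, n_steps):
        """Generic velocity-Verlet integration of Newton's equations, the kind of
        scheme MD codes apply with a ~2 fs time step. `forces(x)` returns the force
        on every atom (i.e. -grad V); units are whatever the caller chooses."""
        f = forces(x)
        trajectory = [x.copy()]
        for _ in range(n_steps):
            x = x + v * dt + 0.5 * (f / masses) * dt**2
            f_new = forces(x)
            v = v + 0.5 * (f + f_new) / masses * dt
            f = f_new
            trajectory.append(x.copy())
        return np.array(trajectory)

    # Toy example: one particle in a harmonic well (not a biomolecule)
    masses = np.array([[1.0]])
    traj = velocity_verlet(x=np.array([[1.0]]), v=np.array([[0.0]]),
                           forces=lambda x: -x, masses=masses, dt=0.05, n_steps=200)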

Computerized drug design demands enormous computational power.

 

Example: calculating the dynamics of a biomolecule of average size – F1-ATPase in water (183,674 atoms) – for a time span of 100 nanoseconds, with a time step of 2 fs.

33 x 10^12 floating-point operations per time step [roughly 1000 x n(n-1) operations per time step]

5 x 10^7 integration steps.

Total: 1.5 x 10^21 operations (1,500 billion billion)

Standard multi-core cluster performing 10 billion operations per second – about 4700 years.

Supercomputing system - 26 x 10^12 FLOPS – about 1.87 years.
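These estimates can be checked with a few lines of Python, using the 1000·n·(n-1) per-step operation count quoted above (the results agree with the figures in the example to within rounding):

    n_atoms = 183_674
    ops_per_step = 1000 * n_atoms * (n_atoms - 1)   # ~3.4e13, i.e. ~33 x 10^12 per step
    n_steps = 5 * 10**7                             # 100 ns at a 2 fs time step
    total_ops = ops_per_step * n_steps              # ~1.7e21 operations

    seconds_per_year = 3600 * 24 * 365
    for name, rate in [("10 GFLOP/s cluster", 1e10), ("26 TFLOP/s system", 26e12)]:
        years = total_ops / rate / seconds_per_year
        print(f"{name}: ~{years:,.0f} years")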

By using well-structured models and appropriate mathematical methods to solve them, the time necessary to calculate the dynamics would "shrink" to 10-15 days.

In living organisms, the biomolecules are placed in a thermodynamic environment, surrounded by water. We assume that the temperature and pressure are constant. Whatever the initial positions of the atoms, within a certain time span the molecule will reach some dynamic equilibrium state. By averaging over the equilibrium trajectories, one can obtain significant functions that characterize it, such as the free energy of the molecule. The time necessary for the molecule to assume an energetically stable state is of the order of milliseconds, because it needs to overcome several significant energy barriers. With classical methods it is not possible to model the dynamics over such time intervals, even on the most powerful computers. Currently, so-called multi-scale, multi-level models and the algorithms to solve them are being intensively developed to tackle the problem.

And so, generally, the dynamics of the transition from an initial state into an equilibrium state can be traced by molecular dynamics (MD) modeling. The modeling accuracy is a function of the accuracy of the potential energy calculation, i.e. of the model's completeness.

For example, the results of solving the simplified models of polypeptide chains with spiral-like and oval conformations significantly deviate from the experimental data. Our advice is to carefully select the molecular modeling software packages.

Another problem is that, due to the large number of calculations, modeling the dynamics of a protein-solvent system can only be performed for a short interval of 10-100 nanoseconds. Nevertheless, protein-ligand interactions can be predicted with the help of MD.

 

Computer modeling of the dynamics of biomolecular binding using the molecular dynamics packages GROMACS and NAMD




The main task of computer-aided synthesis is to determine, among the hundreds of thousands and sometimes millions of compounds, the drug prototypes that are biologically active and specific for the given protein. The procedure is called virtual high-throughput screening and consists of two steps:


And so, docking means placing the ligand in the protein's active center. The energy of the ligand-protein bond is usually generated by their electrostatic fields and van der Waals forces. The task is to find the global minimum of this energy. Even for one simple ligand, the minimum is to be sought in a multidimensional space, due to the many freely rotating valence bonds (degrees of freedom). In the case of the molecule shown below, it is a 21-dimensional space.


If we assume that the active center of the protein has the same number of degrees of freedom, finding the global minimum is a practically unsolvable problem. One possible simplification is to consider the protein as fixed (motionless). Its surface is covered with a finite-element mesh and thus the free energy is obtained. The energy of the bond is the difference in the free energies of the protein and the ligand. In other words, the lock is still, the key is spinning freely in space, and one has to pick the moment in which it is aligned with the lock, so as to enter it. But at that moment it is also pushing water away from the center, the lock's pins get rearranged, and the key also needs to rearrange its teeth. These difficulties are almost insurmountable, but there are models and algorithms with the help of which one can solve the docking problem on supercomputers.
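A minimal sketch of the rigid-receptor ("frozen lock") version of this search: sample random rigid poses of the ligand, score each one against the fixed receptor atoms with a crude Lennard-Jones plus Coulomb energy, and keep the lowest-scoring pose. All coordinates, charges and parameters below are invented for illustration; real docking codes (DOCK, FRED, etc.) use far more elaborate scoring functions, torsional sampling and search strategies:

    import numpy as np

    def interaction_energy(lig_pos, lig_q, rec_pos, rec_q, eps=0.2, sigma=3.0):
        """Crude fixed-receptor score: Lennard-Jones + Coulomb over all
        ligand-receptor atom pairs (one eps/sigma for every pair, illustrative only)."""
        d = np.linalg.norm(lig_pos[:, None, :] - rec_pos[None, :, :], axis=-1)
        lj = 4 * eps * ((sigma / d) ** 12 - (sigma / d) ** 6)
        coulomb = 332.06 * np.outer(lig_q, rec_q) / d      # kcal/mol with e and Angstrom units
        return np.sum(lj + coulomb)

    def random_rigid_pose(lig_pos, centre, spread, rng):
        """Random rigid-body pose: random orthogonal rotation (QR of a Gaussian
        matrix) plus a random translation around the pocket centre."""
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        shift = centre + rng.uniform(-spread, spread, size=3)
        return (lig_pos - lig_pos.mean(axis=0)) @ q.T + shift

    # Toy search: keep the lowest-energy score out of many random rigid poses
    rng = np.random.default_rng(1)
    rec_pos = rng.uniform(0, 20, size=(50, 3)); rec_q = rng.uniform(-0.5, 0.5, 50)
    lig_pos = rng.uniform(0, 3, size=(8, 3));   lig_q = rng.uniform(-0.5, 0.5, 8)
    pocket_centre = rec_pos.mean(axis=0)

    best = min(interaction_energy(random_rigid_pose(lig_pos, pocket_centre, 5.0, rng),
                                  lig_q, rec_pos, rec_q) for _ in range(2000))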


Example: a "frozen" protein cast in a finite-element mesh (Prof. Vladimir Sulimov, Moscow State University).



Using a rigid model for the protein may lead to significant errors. In recent years, docking models that take into account the naturally flexible structure of proteins have been created.

In order to obtain the free binding energy of a "flexible protein-ligand" system accurately enough, one needs to pass through different levels of complexity and precision. One can achieve greater accuracy by combining molecular dynamics (MD) with the solution of the Poisson-Boltzmann (PB) equation for the electrostatic potential and with surface-area models (MM-PBSA) using implicit water.
Despite their usefulness, such schemes do not offer a well-structured algorithm for calculating the free energy. Significantly more precise results can be obtained by modeling the free energy perturbation (FEP). This model is described in the paper "Calculation of absolute protein-ligand binding free energy from computer simulations" by Hyung-June Woo and Benoît Roux.
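The identity at the heart of FEP is the Zwanzig relation, which gives the free-energy difference between two states (in practice split into many small intermediate steps) as an ensemble average taken over simulations of one of them:

    \Delta F = F_1 - F_0 = -k_B T \,\ln \left\langle \exp\!\left( -\frac{U_1(x) - U_0(x)}{k_B T} \right) \right\rangle_{0}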

 

A few examples of supercomputer-aided drug synthesis

Caspase-3 is one of the 11 enzymes in the caspase family. It has been proven in animal tests that molecules blocking the caspase could be used as drugs in different pathological states connected with inadequate progress of the apoptosis processes, such as acute respiratory conditions, liver diseases, nephritis, haemorrhage and heart diseases caused by myocardial infarction. A chemical reaction which causes protein bonds to break plays a crucial role in programmed cell death, or apoptosis. The catalyst for the reaction is caspase-3.
Based on computerized modeling, the 3D structure of caspase-3 has been recovered, and the mechanism by which an alkaloid from red geranium binds in the allosteric center of the enzyme has been researched. Drug candidates have been searched for among the active biomolecules found in herbs used in traditional medicine – tribulus, sumac, red geranium and chicory.
The simulations have been carried out using the software package GROMACS 4.5.
Based on the example of selecting an active molecule to decrease or block the activity of the caspase-3 enzyme, we have perfected the computer-aided procedure for drug candidate search and verification with the help of metadynamics.
The interaction of caspase-3 with taxifolin, which is a drug candidate.


The ribosome is a molecular machine, specialized in synthesizing proteins. It is a compact, tightly wrapped molecular system with a diameter of about 30nm. It is made up of ribosomal RNA (ribonucleic acid) and ribosomal proteins.

The proteins are synthesized in the ribosome's tunnel, where three processes take place. The first one is deciphering the genetic information carried by the messenger RNA (mRNA). The second one is comparing the mRNA codon with the transfer RNA (tRNA) anticodon. tRNA carries the amino acid corresponding to the anticodon. The complementary and anti-parallel correspondence between the mRNA codon and the tRNA anticodon ensures the inclusion of the right amino acid in the synthesized polypeptide chain.

There is a specific area in the tunnel, where the contact between mRNA and tRNA takes place.

The next figure shows a model of the ribosome with the main functional sites – A, P and E – the 50S and 30S subunits, tRNA and mRNA depicted in it. The tunnel, which forms between the 30S and 50S subunits when the ribosome is in a working state, is visible in it. This narrow channel widens significantly at one place, called the "eye of the ribosome". This is the mRNA-tRNA binding center, which we mentioned earlier.

So far, the structures of ribosomes of thermophiles, published in the PDB, have been used in computer modeling. The reason is that they are resistant to X-ray radiation, whereas the E. coli ribosome gets damaged during X-ray structural analysis.
Under the influence of heat the electron density map gets blurred and many monomer chains go missing in the 3D atomic structures thus obtained. At certain spots the chains even look divided into separate, disconnected pieces.

We have created a new algorithm and several programs, which have helped us predict and recover the missing links in the monomers and their positions, and thus recover the 3D structure of the E. coli ribosome. No accurate 3D structure had previously been published in the Protein Data Bank, which is the basis for synthesizing new drugs. The next slide shows the dynamics of the ribosome's complete structure in a working phase, obtained on the BG/P.


The bacterial ribosome is the target of more than 70% of the antibiotics on the market. That is why, as was justified above, one of the main contributions of crystallography to the pharmaceutical industry is the reconstruction of the 3D atomic structures of the complexes which the ribosome forms with different antibiotics, together with the precise determination of the atomic coordinates and the calculation of force fields in the binding zones. Modeling the atomic structures helps one clarify the reasons for the stability of protein synthesis when attacked by antibiotics and carries vital information about the contact sites between the protein being synthesized and the walls of the ribosome tunnel.
Example: Structures of macrolide antibiotics in the tunnel.


Everything said above about supercomputer modeling can be demonstrated with the example of the ligand selection procedure for G-protein-coupled receptors (GPCRs), also known as seven-transmembrane domain (7TM) receptors. GPCRs form the largest class of cell-surface signal receptors. They are activated by a wide range of extracellular signals (such as small bioorganic amines, large protein hormones, neuropeptides, and light) and transduce their signals across the plasma membrane, triggering a cascade of intracellular events leading eventually to the physiological response of the cell to the stimulus. Their structure is shown in the figure below.


GPCRs represent one of the most important families of drug targets in pharmaceutical development. There are over 375 non-chemosensory GPCRs encoded in the human genome, of which 225 have known ligands and 150 are orphan targets with no identified natural ligands and functions. Much pharmaceutical research is being carried out to find novel ligands, functions and pathophysiological mechanisms of such orphan GPCRs. Six of the top ten drugs in the US target GPCRs. Of the top 200 best-selling prescription drugs, more than 20% interact with GPCRs, providing worldwide sales of over $20 billion (2011). Dr. Brian Kobilka of Stanford University School of Medicine and Dr. Robert Lefkowitz of Duke University Medical Center won the 2012 Nobel Prize in Chemistry "for studies of G-protein coupled receptors".
Procedure for selecting ligands that bind to a certain G-receptor, developed by Andrew Binkowski and Michael Kubal

 

Using the AMBER package, one obtains the geometry and force fields of the ligand and of the receptor's active center. DOCK6 and FRED select five thousand out of four million ligands. With the help of molecular dynamics, combined with the grand canonical Monte Carlo method, one then calculates their free binding energy. This is used for modeling the ligand's transition between different potential levels, as well as for calculating the General Solvent Boundary Partition of the water molecules located in the protein's active center during the ligand's entry. Thus, the free energy of the ligand-receptor bond is obtained with a much greater accuracy. The procedure is depicted figuratively in the next figure:


The first step is to take the receptor's 3D structure, obtained by X-ray structural analysis and published in the Protein Data Bank (PDB), and create a new file containing the receptor and all other molecules that surround it and are necessary for its realistic binding with the ligand (the rec.pdb file). Although the ligand interacts only with the protein, one needs to assess whether the water molecules in the receptor's active center, the ions stabilizing its structure, and the low-molecular compounds and monomers that could be perceived as part of the receptor should be taken into account. The rest of the components are then removed from the PDB file.
The rec.pdb file is to be presented in two formats – one for each of the docking programs DOCK6 and FRED. ZINC is a database featuring about 20 million small molecules and their 3D structures.
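The overall funnel described above can be sketched as two-stage glue code in Python. The cut-off of 5,000 candidates comes from the text; the functions dock_score and binding_free_energy are hypothetical placeholders for the expensive external steps (DOCK6/FRED docking and the MD-based free-energy calculation), and the library below is a toy stand-in for a real compound database such as ZINC:

    import random

    def virtual_screen(ligands, dock_score, binding_free_energy,
                       n_shortlist=5_000, n_hits=50):
        # Stage 1: cheap docking score for the whole library, keep the best 5,000
        shortlist = sorted(ligands, key=dock_score)[:n_shortlist]
        # Stage 2: expensive free-energy rescoring for the shortlist only
        rescored = sorted(shortlist, key=binding_free_energy)
        return rescored[:n_hits]

    # Toy usage with random stand-in scores (the real library had ~4 million entries)
    library = [f"ligand_{i}" for i in range(100_000)]
    hits = virtual_screen(library,
                          dock_score=lambda lig: random.random(),
                          binding_free_energy=lambda lig: random.random())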

 

Virtual bio laboratory (Prof. Dr. Thomas Lippert)

 

Medicine

The brain, with its billions of interconnected neurons, is undoubtedly the most complex machine created by nature, and a long time will pass before we are able to understand its organization and functioning. The aim of the Human Brain Project is to integrate all that we know about the brain into computer models, but in a way that would enable one to simulate the brain's activity as a whole. The models will encompass all the brain's hierarchical levels of organization and functioning – from separate neurons up to the whole cortex. At the end of 2012 the European Commission approved a project budget to the tune of 1,190 million euro. http://www.humanbrainproject.eu/.

Prof. Henry Markram is the director of both projects – Blue Brain (in collaboration with IBM) and the Human Brain Project. They are coordinated by the École Polytechnique Fédérale de Lausanne (EPFL).

Prof. Dr. Thomas Lippert is the head of the project's third pillar: "Hierarchical mathematical and physics models, methods and algorithms for their solution and supercomputer modeling". The models of the brain centers will be solved on JUQUEEN at the Jülich Supercomputing Centre.

If you would like to get better acquainted with the aims, results and future tasks of the project, you may look at Henry Markram’s video lecture:



Superficial, raw and inaccurate comparison between the functional capabilities of the Human Brain and JUQUEEN

 

In August 2013 the Okinawa Institute of Science and Technology Graduate University (OIST) in Japan and Forschungszentrum Jülich in Germany carried out the largest general neuronal network simulation to date. The network consisted of 1.73 billion nerve cells connected by 10.4 trillion synapses. Although the simulated network is huge, it represents only 1% of the neuronal network in the brain.

The simulation was made possible by the development of advanced novel data structures for the simulation software NEST.

This new method for describing neural networks was created by the team led by Markus Diesmann, in collaboration with Abigail Morrison from the Institute of Neuroscience and Medicine at Forschungszentrum Jülich.

To realize this feat, the program NEST recruited 82,944 processors of the K computer. It took 40 minutes to complete the simulation of 1 second of neuronal network activity in real, biological time. For more details see:

 

Epidermal growth factor receptor (EGFR)

EGFR exists on the cell surface and is activated by the binding of its specific ligands, including epidermal growth factor and transforming growth factor α. These interactions induce receptor dimerization and tyrosine autophosphorylation and lead to cell proliferation. Mutations, amplifications or misregulations of EGFR or its family members are implicated in about 30% of all epithelial cancers. EGFR is found at abnormally high levels on the surface of many types of cancer cells.

Many therapeutic approaches are aimed at EGFR. The monoclonal antibodies block the extracellular ligand binding domain. With the binding site blocked, signal molecules can no longer attach and activate the tyrosine kinase. Cetuximab and panitumumab are examples of monoclonal antibody inhibitors. Another therapeutic method involves using small molecules to inhibit the EGFR tyrosine kinase on the cytoplasmic side of the receptor. Without kinase activity, EGFR is unable to activate itself, which is a prerequisite for the binding of downstream adaptor proteins.

The 3D structure of EGFR was given to us by Dr. Iliyan Todorov, CCLRC Daresbury Laboratory, Daresbury, UK. We then created a system which includes an EGFR dimer (light blue), placed on a lipid bilayer membrane and surrounded by water molecules. The system contains 465,399 atoms:



The figure shows the result of simulating the molecular dynamics of the receptor's dimer (one part of the dimer is colored green, the other red). It lies horizontally on the cell membrane. This has been confirmed by X-ray structural analysis of the receptor together with the adjacent part of the membrane.



Unlike traditional chemotherapies or radiotherapies, newer treatments aim to specifically target malignant cells. This is done by identifying certain proteins that are differentially expressed in tumorigenic cells as compared to normal cells. One example is the depicted EGFR with 4 of its modules (domains) colored in red, green, black and cyan. EGFR is a key intermediate in the growth signal transduction pathways in cells. Cetuximab works by blocking the messenger molecule's 'landing' domain in EGFR (in black). Cetuximab, designed to bind more strongly than the messenger, outcompetes it, preventing signal transmission and tumor growth. However, due to the high mutation rate of cancer cells, after prolonged treatment with this drug, researchers at the Hospital del Mar in Barcelona are finding new mutations that affect the affinity of cetuximab for EGFR. The tumor cells evolve and become resistant to the drug. The Barcelona Supercomputing Centre will run several simulations measuring the binding affinities of the normal and mutant receptors with the drug. Additionally, they will compare these with the affinities of the 'natural' messenger molecule that outcompetes the drug in the mutant case, and finally they will measure affinities for a couple of other versions of the drug to see if it can be 'fixed'.

 

The annual flu epidemics are thought to result in between 3,000,000 and 5,000,000 cases of severe illness and between 250,000 and 500,000 deaths around the world. Several subtypes of influenza A virus are distinguished according to their major surface antigens, hemagglutinin (H) and neuraminidase (N). There are 16 different types of the H antigen (H1 to H16) and 9 types of the N antigen (N1 to N9). The influenza virus genome is variable, and its changeability helps it survive the attack of the immune system. Genome variability, however, is not unlimited. Although the flu virus genome contains areas which are capable of accumulating a significant number of point mutations, other areas must remain unchanged because they are responsible for viral identity. This in turn means that the flu genome should contain conservative nucleotide sequences common to all influenza viral strains.


Influenza virus A genomic sequences were downloaded from the NCBI Database (http://www.ncbi.nlm.nih.gov/genomes/FLU/), containing more than 178,285 RNA sequences.

In this study we aim to determine the conservative domains in all subgenomic RNAs of all strains and individual isolates of the influenza virus A available in the NCBI database, in order to identify nucleotide/protein sequences common to all of them. To this end, the Parallel ClustalW software package, adapted to the Bulgarian Blue Gene/P and Jülich JUQUEEN supercomputers, was employed for the identification of conservative and variable regions in the full-length RNAs of 154,115 influenza viruses.
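As an illustration of this step, the short Python sketch below scans a multiple alignment for fully conserved, gap-free windows of a minimum length. It assumes the alignment has already been produced (e.g. by Parallel ClustalW) and loaded as a list of equal-length strings; the toy sequences and the helper name conserved_windows are purely illustrative.

```python
# Minimal sketch: locate fully conserved windows (>= 16 nt) in a set of
# aligned influenza RNA sequences.  The alignment is assumed to be loaded
# already as equal-length strings.

def conserved_windows(aligned_seqs, min_len=16):
    """Return (start, end) positions of runs of columns that are identical
    in every sequence and contain no gaps."""
    length = len(aligned_seqs[0])
    conserved = [
        all(seq[i] == aligned_seqs[0][i] and seq[i] != '-' for seq in aligned_seqs)
        for i in range(length)
    ]
    windows, start = [], None
    for i, ok in enumerate(conserved):
        if ok and start is None:
            start = i
        elif not ok and start is not None:
            if i - start >= min_len:
                windows.append((start, i))
            start = None
    if start is not None and length - start >= min_len:
        windows.append((start, length))
    return windows

# Example with toy data (real input would be, e.g., the PB2 alignment):
toy = ["AUGGCGUCAAUGGCGUCAAU",
       "AUGGCGUCAAUGGCGUCAAU",
       "AUGGCGUCAACGGCGUCAAU"]
print(conserved_windows(toy, min_len=5))
```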

Conservative nucleotide regions were searched for amongst 14,194 fragment No 1 (PB2) RNA sequences. Four conservative regions of 16 to 35 nt were identified, representing 3.54% of the fragment length (2400 nt).

The first two conservative regions are represented in viruses circulating from 1918 till now.

All conservative domains can be targeted by micro-RNAs that have no target in the human genome. Therefore all these sequences are potential candidates for the design of small interfering RNAs (mi-RNAs) for gene therapy of influenza.

Results:

  • The lack of conservative nucleotide sequences in the genes of the surface antigens HA and NA demonstrates that manufacturing efficient anti-flu vaccines based on these proteins is impossible.
  • The conservative sequences found in the gene of the surface ion channel M2 protein reveal opportunity for designing a new generation of recombinant/synthetic vaccines effective against all influenza virus A strains.
  • The conservative nucleotide sequences in the RNA segments No 3 (PA), 5 (NP) and 7 (MP, coding for the matrix protein M1) make them appropriate targets for inactivation by synthetic micro-RNAs, to be used as antiviral drugs against all influenza virus A strains without affecting the normal function of the host (human) genome.

 

Localization of the conservative motif VWFC in hemagglutinin of the human influenza virus H1N1:

 

Parallel simulations of cerebral aneurysms for rupture risk evaluation and patient intervention planning

Cerebral aneurysm rupture is one of the most important causes of subarachnoid hemorrhage, which results in significant rates of morbidity and mortality. Little is known about the process by which intracranial (cerebral) aneurysms grow and rupture. Epidemiological evidence from multiple vantage points suggests that a large majority of intracranial aneurysms do not rupture. The current treatment options for unruptured intracranial aneurysms carry a small but significant risk that can exceed the natural risk of aneurysm rupture. It is therefore desirable to identify which unruptured aneurysms are at greatest risk of rupture and lowest risk for repair when considering which ones to treat.

Illustration of brain aneurysms (Source: 2001 eCureMe.com)

For this reason, the assessment of patient-specific cerebral aneurysm rupture risk is of key importance for clinicians’ decision making. The practical essence of cerebral aneurysm simulation is to identify criteria (predictors of aneurysm rupture) by comparing clinical, diagnostic and intraoperative observations with patient-specific CFD (computational fluid dynamics) investigations of ruptured and unruptured aneurysms under the same hemodynamic conditions.

 

Modeling Blood Flow

The computer model is based on a strongly coupled system consisting of time-dependent fluid dynamics (Navier-Stokes) and elasticity partial differential equations. A realistic rheology of the blood and blood vessels is to be taken into account. The problem is highly nonlinear in a rather complicated 3D geometry. Blood flow is mathematically modeled by the unsteady Navier–Stokes equations for an incompressible fluid:
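In standard notation, these read

$$\rho\left(\frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v}\cdot\nabla)\mathbf{v}\right) = -\nabla p + \nabla\cdot\boldsymbol{\tau}, \qquad \nabla\cdot\mathbf{v} = 0,$$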

where ρ is the density, v is the velocity field, p is the pressure, and τ is the deviatoric stress tensor. Although the stress/strain-rate relationship is in general a tensor relation, it is usually expressed as an algebraic equation of the form
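$$\tau = \mu(\dot{\gamma})\,\dot{\gamma},$$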

where µ is the viscosity, and the strain-rate is defined as the second invariant of the strain-rate tensor, which for incompressible fluids is
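$$\dot{\gamma} = \sqrt{2\,\boldsymbol{\varepsilon}:\boldsymbol{\varepsilon}}, \qquad \boldsymbol{\varepsilon} = \tfrac{1}{2}\left(\nabla\mathbf{v}+\nabla\mathbf{v}^{T}\right).$$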

In order to close the system of equations, a constitutive law must be provided to compute the local viscosity of the fluid. The simplest rheological model is a Newtonian fluid, which assumes a constant viscosity: µ = µ0. Typical values used for blood are ρ = 1.105 g/cm3 and µ = 0.04 Poise.

However, blood can be thought of as a suspension of particles (red blood cells) in an aqueous medium (plasma). Thus, it is neither homogeneous nor Newtonian. The rheological properties of blood are mainly dependent on the hematocrit, or the volume fraction of red blood cells in the blood.

One of the most commonly used non-Newtonian fluid models for blood is the model of Casson, which assumes a stress/strain-rate relation of the form
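$$\sqrt{\tau} = \sqrt{\tau_{0}} + \sqrt{\mu_{0}\,\dot{\gamma}},$$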

where τ0 is the yield stress and µ0 the Newtonian viscosity. The existence of a yield stress implies that blood requires a finite stress before it begins to flow, a fact that has been observed experimentally. The apparent viscosity of the Casson model can be written as
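$$\mu(\dot{\gamma}) = \left(\sqrt{\mu_{0}} + \sqrt{\tau_{0}/\dot{\gamma}}\right)^{2}.$$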

 

Mechanical Behaviour of Arteries

Biological materials in the human body, such as vessels, deform under stress. If an artery is subjected to pressure as a mechanical loading or to a compressive force, deformation occurs and stresses develop within the arterial wall. The amount of deformation and strain depends on the history and rate of the applied loading, the temperature, etc., and the deformed shape usually does not return to its original configuration when the load is removed.

When the deformation is very small and adiabatic (i.e., no heat is gained or lost), the most likely condition is that the stress and strain are independent of the loading rate and history, and that the material returns to its original configuration after the compressive force is removed. If deformation is the only factor, the stress in an elastic body can be expressed as a function of the strain as follows:
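In its simplest one-parameter form, consistent with the definitions below,

$$\sigma_{ij} = E\,\varepsilon_{ij},$$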

where σij and εij denote the first-order stress and strain tensors, respectively, and E is Young’s modulus of elasticity. Fluid–structure interaction (FSI) is a technique used in numerical problems to provide an understanding of the impact of the flow on structures, both within and surrounding the flow. Due to the drastic increase in computing power over the last decade, numerical methods are becoming increasingly effective. The governing equations for the motion of an elastic solid are mathematically described by the following equation:
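In a standard form, with u_i denoting the components of the solid displacement,

$$\rho_{w}\,\frac{\partial^{2} u_{i}}{\partial t^{2}} = \frac{\partial \sigma_{ij}}{\partial x_{j}} + F_{i},$$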

where ρw is the solid density and Fi are the components of the body force acting on the solid. A single-layered arterial wall was modeled as a hyperelastic, neo-Hookean elastic solid material, which shows a nonlinear stress–strain behavior for materials undergoing large deformations.
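A commonly used compressible neo-Hookean strain energy, consistent with the definitions below (variants differ mainly in the volumetric term), is

$$W = \frac{G}{2}\left(J^{-2/3}\,I - 3\right) + \frac{k}{2}\left(J - 1\right)^{2}.$$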

W is the strain energy density function and can be used to measure the energy stored in the material as a result of deformation; G and k are the shear and bulk moduli, respectively, representing the material properties; J is the ratio of the deformed elastic volume to the undeformed volume; and I is the first invariant of the left Cauchy–Green deformation tensor.

Afterwards, the marching cubes algorithm and Taubin smoothing are applied to obtain the 3D geometry. In the third stage the main surface is extracted and, after additional smoothing of the domain, the data is stored in STL format. This file is then used in Netgen to create a tetrahedral mesh. The output of Netgen is converted to the file format of the FEM software package Elmer. A preconditioned BiCGStab method with incomplete factorization is used in the parallel numerical tests performed on the Blue Gene/P, as sketched below. The results obtained so far demonstrate good scalability for the stationary problem.
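As an illustration of the solver stage, the sketch below applies BiCGStab with an incomplete LU preconditioner to a generic sparse system using SciPy; the matrix is a toy stand-in for the FEM systems actually assembled by Elmer on the Blue Gene/P.

```python
# Minimal sketch of the linear-solver step: BiCGStab preconditioned with an
# incomplete LU factorization.  The tridiagonal matrix below is only a toy
# stand-in for a real FEM system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
A = sp.diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization used as a preconditioner.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x, info = spla.bicgstab(A, b, M=M, maxiter=500)
print("converged" if info == 0 else f"info = {info}",
      "residual norm:", np.linalg.norm(A @ x - b))
```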

     

Boundary Conditions

Appropriate boundary conditions are imposed on the inflow, the outflow and the walls of the vessels. The computational domain is patient-specific. As a rule, the processed patient geometries of blood vessels with aneurysms are rather complicated. From a set of DICOM images obtained from digital subtraction angiography (DSA), using the software GIMIAS (Graphical Interface for Medical Image Analysis and Simulation), we derive the three-dimensional digital image of the blood vessel wall in which we are interested. In the first step, an appropriate transfer function is applied to the medical data, saved in vtk format, in order to visualize the blood vessels, and a box containing the aneurysm is cropped. Then, binary threshold segmentation is performed with respect to the voxel intensity. The threshold interval is also patient-specific and even case-specific (if there is more than one aneurysm) and has to be additionally tuned to fit the form of the blood vessels more precisely.
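A minimal sketch of the cropping and binary-threshold step is given below, assuming the angiography volume has already been loaded as a 3D NumPy array of voxel intensities; the region bounds and threshold interval are illustrative placeholders for the patient-specific values mentioned above.

```python
# Minimal sketch: crop a region of interest and apply binary threshold
# segmentation to a 3D intensity volume (toy data; the real workflow uses
# GIMIAS on DICOM/vtk data and patient-specific thresholds).
import numpy as np

def crop_box(volume, zmin, zmax, ymin, ymax, xmin, xmax):
    """Crop the region of interest containing the aneurysm."""
    return volume[zmin:zmax, ymin:ymax, xmin:xmax]

def segment_vessels(volume, lo, hi):
    """Return a binary mask of voxels whose intensity falls in [lo, hi]."""
    return (volume >= lo) & (volume <= hi)

rng = np.random.default_rng(0)
toy_volume = rng.integers(0, 255, size=(64, 64, 64))
roi = crop_box(toy_volume, 10, 50, 10, 50, 10, 50)
mask = segment_vessels(roi, lo=120, hi=200)   # case-specific interval
print(mask.sum(), "voxels selected")
```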

The animation shows blood motion in a brain aneurysm, obtained with the OpenFOAM software.

     

Designing and studying nanostructures – quantum mechanical modeling

Ab initio MD simulations

Using the CP2K code, Born–Oppenheimer MD is performed on the Blue Gene/P supercomputer: 20 ps of simulation time with a 1 fs time step, at 300 K for the zeolite-supported clusters and at 450 K for the MgO-supported clusters and Au wires.

Simulated systems:

  • Coalescence of gold sub-nanowires into a nanotube


  • Dynamics of small transition metal clusters in protonic forms of zeolites. Proton transfer from zeolite hydroxyl groups onto a rhodium cluster in the zeolite (the movie is slowed down shortly before and after the proton transfer). Only protons and rhodium atoms are shown as large balls, while the other centers in the zeolite are shown as small balls.


  • Interaction of silver cluster with zeolite hydroxyl groups. Only protons and silver atoms are shown as large balls, while the other centers in the zeolite are shown as small balls.


  • Mobility of rhodium cluster on MgO nanoparticle. The simulation shows the movement of the metal cluster towards the defect on the surface of the nanoparticle – F center (neutral oxygen vacancy).

 

Modeling, at the atomic level, the physical processes that lead to material disintegration (based on the Schrödinger equation for the atom)

This is a main instrument for designing materials with very high strength characteristics.

 

Risk management and risk minimization.

Modeling the motion of large human crowds and crowd behavior – Jülich Institute for Advanced Simulation

 

Flood modeling and assessment of possible damages

Example: The “Topolnitsa” dam break

 

Simulation of a flood under intensive and long-lasting rain

 

Earthquakes


Nuclear Reactors - Benefits of Advanced Simulation

A nuclear plant contains a large number of components, including the fuel assemblies, control assemblies, reflectors and shields, reactor vessel, heat exchangers, pumps, steam converters, and containment. Accurate simulation of a reactor requires adequate models for all of these components.

However, the same degree of fidelity is not required in the representation of each part. In particular, the reactor core region—containing nuclear fuel and control rods—has the most sensitive and complicated physics and dictates many aspects of the overall plant response. It is thus the starting point for the overall plant design process.

One important benefit is the reduction of uncertainties in predicted quantities. Uncertainties arise from myriad sources, ranging from fabrication variability and imperfect knowledge of input data to simplifications in models and solution algorithms.

A second benefit of advanced simulation is the ability to evaluate new designs with reduced dependence on experiment. Typically, reactor designers are forced to stay very close to existing designs, unable to make predictions about the potential effects of new materials or new geometric configurations. Even when this is not the case, an entirely new set of experiments is required as a foundation for calibrating the new factors. An important question is whether representations of the key physics at significantly higher fidelity will allow scientists to bypass or minimize this latter step and enable a much faster pace of innovation.

A multi-institution, multidisciplinary team of scientists is currently designing new tools and carrying out high-fidelity simulations aimed at addressing this question (SciDAC Review, Fall 2008).


Advanced nuclear reactor

 

NURESIM European Reference Simulation Platform for Nuclear Reactors

The NURESIM platform provides an accurate representation of the physical phenomena by promoting and incorporating the latest advances in core physics, two-phase thermal-hydraulics and fuel modelling. It includes multi-scale and multi-physics features, especially for coupling core physics and thermal-hydraulics models for reactor safety, using mesh generators.

Easy coupling of the different codes and solvers is provided through the use of a common data structure and generic functions (e.g., for interpolation between nonconforming meshes).

More generally, the platform includes generic pre-processing, post-processing and supervision functions through the open-source SALOME software, in order to make the codes more user-friendly.

The platform also provides the informatics environment for testing and comparing different codes. For this purpose, it is essential to permit connection of the codes in a standardized way. The standards are being progressively built, concurrently with the process of developing the platform.

The NURESIM platform and the individual models, solvers and codes are being validated through challenging applications corresponding to nuclear reactor situations, and including reference calculations, experiments and plant data. Quantitative deterministic and statistical sensitivity and uncertainty analyses tools are also developed and provided through the platform.

 

SVBR 75/100.

Fourth-generation lead–bismuth fast reactor (Pb 45% – Bi 55%, 125 °C). The prototype shown is that of an Alpha-class submarine. It was put into production by Oleg Deripaska’s Basic Element corporation.

Submarine “Alpha”

has a top underwater speed of 41 knots (76 km/h), reaches its maximum velocity in 1 minute, and makes a U-turn in 42 seconds. Thanks to its dynamic characteristics and its automatic control, surveillance and weapons systems, the “Alpha” was practically invulnerable to the weapons of its time. Five officers on deck.

 

Modeling Liquid Metal Coolant Flow

In addition to high-performance simulation of the heat source, a team at the Argonne National Laboratory is studying the coolant flow in liquid-metal-cooled reactors. Fission heat generated in the fuel rods is transferred out of the main vessel by a coolant of liquid metal. Power output can be increased by increasing the temperature or, at fixed temperature, by increasing the coolant flow rate. Generally, both paths are pursued to increase power output for a given capital cost in order to make the reactors economically viable. The output is ultimately limited by the maximum allowable temperature within the core and by the pumping costs, both of which are governed by the coolant flow. Computational modeling of the coolant flow on leadership-class computers is necessary to understand these limits and to predict and maximize the power output.

To date, the researchers have developed a reactor-specific version of Argonne’s state-of-the-art computational fluid dynamics code Nek5000.

The Nek5000 code is ideally suited for petascale science. The spectral element method on which the code is based yields rapid numerical convergence, which implies that simulations of small-scale features transported over long times and distances incur minimal numerical dissipation and dispersion. In effect, the accuracy per grid point is maximized.

The code also features spectral element multigrid and has a parallel coarse-grid solver that has demonstrated scalability to more than 10,000 processors.

 

Results of modelling the turbulent flow of the cooling liquid metal in the upper chamber of a new generation of fuel recycling fast reactors.

The colors show the velocity of the fluid: the high-speed flows are colored in red and the low-speed flows in blue.

 

BREST-300. Coolant – molten lead. The cladding and structural material is a special stainless steel, EP823.

 

The wheel of a Pelton turbine is propelled thanks to high speed water jets. This image shows most of the geometric model used in the numerical simulation of a 6-jet driven Pelton turbine (with the distributor feeding the wheel) and shows the water jets (colored in blue) hitting the wheel buckets.

 

Computerized hydrodynamics

The Swiss National Supercomputing Centre and Hydro Dynamics have designed a new generation of water turbines with 7% greater efficiency, using the FLUENT code.

 

Highly Parallel Large Eddy Simulation of Multiburner Configuration in Industrial Gas Turbine.

Recent advances in computer science and highly parallel algorithms make Large Eddy Simulation (LES) an efficient tool for the study of complex flows. The resources available today allow us to tackle full complex geometries that cannot be installed in laboratory facilities. The present paper demonstrates that the state of the art in LES and computer science allows simulations of combustion chambers with one, three or all burners, and that the results may differ considerably from one configuration to another. Computational needs and issues for such simulations are discussed. A single-burner periodic sector and a triple-burner sector of an annular combustion chamber of a gas turbine are investigated to assess the impact of the periodicity simplification. Cold flow results validate this approach, while reacting simulations underline differences in the results. The acoustic response of the set-up is totally different in the two cases, so that full geometry simulations seem a requirement for combustion instability studies.

To demonstrate the feasibility and usefulness of multi-burner and full-burner simulations, LES of an annular combustion chamber is performed. The injection system consists of two co-rotating partially premixed swirlers. The swirler vanes are not simulated, and appropriate boundary conditions are set to mimic the vane effects on the flow. LES is carried out with a parallel solver called AVBP, which solves the full compressible Navier–Stokes equations on structured, unstructured or hybrid grids. The sub-grid scale influence is modeled with the standard Smagorinsky model (written out below). A one-step chemical scheme for methane, matching the behavior of the GRI-Mech 3.0 scheme of M. Frenklach under the specific target conditions, is employed to represent the chemistry. The Thickened Flame Model (TFLES) ensures that the flame is properly resolved on the grid. Finally, all simulations employ the Lax–Wendroff numerical scheme. For more details, you may read the following papers: Highly Parallel Large Eddy Simulation of Multiburner Configuration in Industrial Gas Turbine and LES Modeling of Combustors.
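For reference, the standard Smagorinsky closure mentioned above expresses the sub-grid eddy viscosity as

$$\nu_{t} = \left(C_{s}\,\Delta\right)^{2}\left|\bar{S}\right|, \qquad \left|\bar{S}\right| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}},$$

where Δ is the filter width, C_s the Smagorinsky constant and S̄_ij the resolved strain-rate tensor.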

 

You may see the simulation of flame front propagation in the combustion chamber of a helicopter engine in the video file “Flame front propagation in the LES of the ignition of a full helicopter combustor” (CFD TEAM).

 

Computerized design of stealth-technology ships and simulation of their motion using the OpenFOAM software on Blue Gene/P.

Modeling of the flow and turbulence of the water around the stern of the ship at various speeds. The results of the simulations have been compared with experiments carried out in a hydrodynamic pool with a forced flow around a model of the stern. The agreement is good enough to trust OpenFOAM simulations when designing vessels.

 

Aerodynamics - Simulating the flow around an airplane’s wings and parts of the body

 

Direct Numerical Simulation of Separated Low-Reynolds Number Flows around an Eppler 387 Airfoil

Low Reynolds number aerodynamics is important for various applications, including micro-aerial vehicles, sailplanes, leading-edge control devices, high-altitude unmanned vehicles, wind turbines and propellers. These flows are generally characterized by the presence of laminar separation bubbles. These bubbles are generally unsteady and have a significant effect on the overall resulting aerodynamic forces. In this study, time-dependent unsteady calculations of low Reynolds number flows are carried out over an Eppler 387 airfoil in both two and three dimensions. Various instantaneous and time-averaged aerodynamic parameters, including pressure, lift and drag coefficients, are calculated in each case and compared with the available experimental data. An observed anomaly in the pressure coefficient around the location of the separation bubble in two-dimensional simulations is attributed to the lack of spanwise flow due to three-dimensional instabilities.

The numerical simulations are performed using the ARGO code developed at CENAERO. The ARGO code is based on an edge-based hybrid finite element–finite volume discretization defined on unstructured P1 tetrahedral meshes. The original finite element formulation is reformulated into a finite volume formulation for computational efficiency and to allow for convective stability enhancements. In accordance with the finite element discretization, the convective terms use central, kinetic-energy-preserving flux functions; however, this flux is blended with a small amount (typically 5%) of a velocity-based upwind flux for stability; the diffusive fluxes and source terms retain the original finite element formulation at all times.

The time integration method is the three-point backward difference scheme. Since the numerical schemes are implicit, the flow solver must solve at each time-step a system of nonlinear equations.

For this purpose, it relies on a damped inexact Newton method; the resulting linear equations are solved iteratively with the matrix-free (finite difference) GMRES algorithm, preconditioned by the minimum-overlap RAS (restricted additive Schwarz) domain decomposition method.
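As a rough illustration of this solution strategy, the sketch below implements a damped Newton iteration with a matrix-free (finite-difference) GMRES linear solve in SciPy. The residual function, damping rule and tolerances are illustrative, and the RAS preconditioner used by ARGO is omitted.

```python
# Minimal sketch of a damped inexact Newton method with matrix-free GMRES.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_gmres(residual, u0, newton_tol=1e-8, max_newton=20,
                 damping=0.5, eps=1e-7):
    u = u0.astype(float)
    for _ in range(max_newton):
        r = residual(u)
        if np.linalg.norm(r) < newton_tol:
            break
        # Jacobian-vector products approximated by finite differences.
        def jac_vec(v, u=u, r=r):
            return (residual(u + eps * v) - r) / eps
        J = LinearOperator((u.size, u.size), matvec=jac_vec, dtype=float)
        # Linear solve (no preconditioner in this sketch).
        du, _ = gmres(J, -r)
        # Damped update: backtrack until the residual norm decreases.
        alpha = 1.0
        while (np.linalg.norm(residual(u + alpha * du)) >= np.linalg.norm(r)
               and alpha > 1e-3):
            alpha *= damping
        u = u + alpha * du
    return u

# Toy nonlinear system F(u) = u**3 - 1 (component-wise), root at u = 1.
print(newton_gmres(lambda u: u**3 - 1.0, np.full(5, 2.0)))
```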

The solver uses the AOMD (Algorithm Oriented Mesh Database) library for the management of the topological mesh entities across the processors. In addition, it relies on the Message Passing Interface (MPI) for exchanging data between the nodes and on the Autopack library for handling non-deterministic asynchronous parallel communications. You may see the results of modeling the flow around the Eppler 387 wing, designed by NASA, in the video file below.

For more information you may read the paper Direct Numerical Simulation of Separated Low-Reynolds Number Flows around an Eppler 387 Airfoil.

 

Simulation of the air flow around a helicopter’s propeller and calculating the lift using DNS and FLUENT:

 

You may see the effect of the angle of attack on the flow field in the following video file: “Effect of angle of attack on flow field”. Results of the computational fluid dynamics (CFD) analysis of cars are shown in the video file “F1 car CFD analysis”:

 

Astrophysics

The Virgo Consortium is an international group of astrophysicists who have developed the most precise evolutionary model of the Universe so far, from the Big Bang to the present day. The simulation begins with 10 billion objects, in order to trace evolution in a cube with side lengths of 2 billion light years. It took the Max Planck Society’s Supercomputing Centre in Garching more than a month to run it. The output amounted to 25 terabytes. The simulation shows the evolutionary history of more than 20 million galaxies and the emergence of super-quasars, with masses billions of times greater than that of the Sun, in the young Universe. This coincides with the observations made by the Sloan Digital Sky Survey (SDSS). These quasars have transformed into giant black holes. The simulation period covered 12 billion years.

The latest results of theoretical physics and astrophysical observations have shown that 70% of the Universe is made up of an incomprehensible energy field, called Dark Energy, which drives the Universe to expand at a continuously increasing rate. About one fourth is supposed to be made up of new elementary particles, unobserved in the visible universe, called Cold Dark Matter. About 5% is the visible matter. On the scale of 9 billion light years (l.y.) the universe seems to be homogeneous, but on the scale of 300 million l.y. dark matter looks like a web surrounding big empty spaces and bright haloes that are actually clusters of thousands of galaxies. This is the theoretical model of the present day, but it does not give answers to fundamental questions such as:

  1. Why was the universe so hot at its birth?
  2. Why is the universe so homogeneous on large scales, why does it look uniform from all points in space, and why does the cosmic background temperature practically not vary in different directions?

The existing models of the universe’s state right after the Big Bang fail to explain how the different areas leveled out in temperature. Light did not have the time to pass from one distant area to the other and, therefore, they could not have exchanged any information or influence, i.e. they could not have affected each other. Hence, there is no mechanism for temperature leveling, and it could be equal in different areas only if, for some reason, it had been equal from birth!

Why did the universe begin to expand at a speed very close to the critical one which separates the two possible scenarios – gravitational collapse, or rapid expansion and cooling of the Universe, which should by now have turned into a frozen desert? “If the rate of expansion one second after the Big Bang had been smaller by even one part in a hundred thousand million million, the universe would have collapsed before it ever reached its present size.” (Stephen Hawking). If it had been larger by the same order of magnitude, uncontrollable expansion would have occurred.

Out of the clouds of chaotically moving particles, stars and galaxies formed, strictly obeying the laws of gravity and conservation of momentum. And, of course, the laws of quantum interactions between elementary particles, including the formation of quarks and nuclei. And let’s not forget that all objects, from quarks to galaxies, obey the second law of thermodynamics – each body strives to minimize its energy. The boundary condition: temperature equal to absolute zero, no motion, no time, no space. And that is? Absolute death.

The evolution has been modeled using a 3D chemo-dynamical galactic evolution code, featuring dark matter, forming stars and the interstellar dust/gas environment. In addition, it includes the ionized gas molecules radiating in the ultraviolet and infrared spectra. The animation shows only the stellar particles, heated to different temperatures (luminosities), in the visible range.

 

Dark Matter Distribution

We present the Millennium-II Simulation (MS-II), a very large N-body simulation of dark matter evolution in the concordance Λ cold dark matter (ΛCDM) cosmology. The MS-II assumes the same cosmological parameters and uses the same particle number and output data structure as the original Millennium Simulation (MS), but was carried out in a periodic cube one-fifth the size (100 h^-1 Mpc) with five times better spatial resolution (a Plummer equivalent softening of 1.0 h^-1 kpc) and with 125 times better mass resolution (a particle mass of 6.9 × 10^6 h^-1 Msolar). By comparing results at MS and MS-II resolution, we demonstrate excellent convergence in dark matter statistics such as the halo mass function, the subhalo abundance distribution, the mass dependence of halo formation times, the linear and non-linear autocorrelations and power spectra, and halo assembly bias. Together, the two simulations provide precise results for such statistics over an unprecedented range of scales, from haloes similar to those hosting Local Group dwarf spheroidal galaxies to haloes corresponding to the richest galaxy clusters. The ‘Milky Way’ haloes of the Aquarius Project were selected from a lower resolution version of the MS-II and were then resimulated at much higher resolution. As a result, they are present in the MS-II along with thousands of other similar mass haloes. A comparison of their assembly histories in the MS-II and in resimulations of 1000 times better resolution shows detailed agreement over a factor of 100 in mass growth. We publicly release halo catalogues and assembly trees for the MS-II in the same format and within the same archive as those already released for the MS.
