|
HPC Workshop
July 27--29, 2020
IIT Madras
|
|
IIT Madras is organizing an HPC Workshop, primarily for IIT Madras students. The workshop includes a basic HPC session and several research talks. The presentations are aimed at a broad audience.
The event is planned from July 27 to 29 and is open to all IITM students, staff, and employees, as well as to the IITM Research Park.
Registration
Registration is free but mandatory. Please register for the workshop here. More details will be sent to the registered participants.
Program
The event will be held online.
E-meeting details will be mailed to the participants at the e-mail address provided in the registration form.
Each talk is one hour long.
|
[click to download flyer]
|
Date | Time | Speaker and Topic |
July 27 | 10:00 |
| Kameswararao Anupindi (ME) Introduction to MPI Programming [Video, Slides, Codes]
In this talk, the audience will be introduced to the basics of the Message Passing Interface (MPI) and its use in programming. Sample programs will be demonstrated to illustrate several MPI functions. Resources for further learning will also be provided.
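As a flavour of what such a program looks like, here is a minimal MPI "hello world" in C (an illustrative sketch, not taken from the talk's slides or codes); it would typically be compiled with mpicc and launched with, e.g., mpirun -np 4 ./a.out:

    /* Minimal MPI example: each process reports its rank (illustrative sketch). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);               /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* id of this process */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
        printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();                       /* shut down the runtime */
        return 0;
    }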
|
July 27 | 14:30 |
| Krishna Nandivada (CSE) Introduction to (Efficient) OpenMP Programming [Video]
With multi-core systems becoming mainstream, it is essential for application
programmers to learn and use parallel programming. OpenMP is an industry
standard for parallel programming. In this talk, we will introduce some basic
concepts of OpenMP and discuss some performance and correctness implications.
Required background: a basic understanding of C programming.
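As a taste of the programming model, the sketch below parallelizes a simple loop with OpenMP (illustrative only, not from the talk materials); it would be compiled with an OpenMP-enabled compiler, e.g. gcc -fopenmp:

    /* Minimal OpenMP example: a parallel loop with a reduction (illustrative sketch). */
    #include <omp.h>
    #include <stdio.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;
        /* iterations are shared among threads; reduction(+:sum) avoids a data race */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += 1.0 / (i + 1);
        printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }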
|
July 27 | 16:00 |
| Rupesh Nasre (CSE) Introduction to GPU Programming [Video, Slides, Codes]
Graphics Processing Units (GPUs) are now an integral part of accelerating applications from varied domains.
This talk will cover the basics of GPU programming using CUDA C.
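For orientation, a minimal CUDA C vector addition is sketched below (illustrative only, not taken from the talk's codes); it would be compiled with nvcc and uses unified (managed) memory to keep host-device transfers implicit:

    /* Minimal CUDA C example: element-wise vector addition on the GPU (illustrative sketch). */
    #include <stdio.h>

    __global__ void add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  /* one thread per element */
        if (i < n) c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 1 << 20;
        float *a, *b, *c;
        cudaMallocManaged(&a, n * sizeof(float));   /* memory visible to CPU and GPU */
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));
        for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }
        add<<<(n + 255) / 256, 256>>>(a, b, c, n);  /* enough 256-thread blocks to cover n */
        cudaDeviceSynchronize();                    /* wait for the kernel to finish */
        printf("c[0] = %f\n", c[0]);                /* expect 3.0 */
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }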
|
July 28 | 10:00 |
| Jithin John Varghese (CHE) Computational Catalysis and Multiscale Modelling
Catalysis and catalytic reaction engineering form the core of most chemical processes and
process industries. Modelling and simulations for catalysis and reaction engineering
were traditionally restricted to macroscopic phenomena and observables. However,
developments in quantum mechanical theoretical frameworks and high-performance
computing have now made atomistic modelling an integral part of catalysis research. With
the development of tools, approaches, and workflows enabling the investigation of phenomena
from the atomistic to the process scale, multiscale modelling is now the new paradigm in
computational catalysis research. This talk will primarily focus on some of the common
atomistic/molecular modelling and simulation approaches in catalysis research, multiscale
approaches, and the role of HPC systems in this research framework. Examples from research
in the group will be presented on catalytic systems including metals, metal oxides, metal
carbides, and zeolites, and on chemical systems including carbon dioxide, natural-gas
hydrocarbons, biowastes, and biomass derivatives.
|
July 28 | 11:00 |
| Srinivasa Chakravarthy (BT) Simplifying the brain: A vision for Neuroscience Research [Video]
The brain is often touted as "one of the most complex objects in the universe," a perfectly
unscientific statement considering our ignorance of the nature of "objects" found all over
the universe. In this talk, the speaker argues that part of the reason behind this undue
"complexification" of the brain lies in the profoundly descriptive traditions of biology. With
the availability of sophisticated measurement tools and the big-data revolution in the air,
there is currently a worldwide movement to generate mountains of brain data without
making a commensurate effort to develop elegant brain theories that can explain the data.
By separating principles from details, engineers create and master complex systems. The
brain is no different.
As a demonstration of how it is possible to develop simple brain theories/models that can
explain diverse functions of brain systems, the speaker outlines his lab's (CNS Lab) decade-
long work on a brain system called the Basal Ganglia, a part of the brain associated with
Parkinson's disease. Next, the speaker describes the CNS Lab's work on the spatial-navigation
functions of another brain system called the hippocampus. The discovery of the "spatial cells"
of the hippocampus was awarded the Nobel Prize in 2014. The CNS Lab has developed a simple
model that can explain a wide variety of phenomena related to spatial navigation in 2D (in
rats and mice) and in 3D (in bats).
Taking the above work to its logical consummation, the speaker outlines his lab's plans to
build a reduced model of the whole brain called the MESOBRAIN. The MESOBRAIN, once
realized in software and hardware, is expected to have immense applications in medicine
and engineering.
|
July 28 | 15:00 |
| Nagabhusan Vadlamani Rao (AE) High-Fidelity Simulations in Turbomachines using Distributed and Shared Memory Architectures [Video]
Modelling strategies in the gas-turbine world are evolving rapidly with the
advent of High Performance Computing (HPC). The industry is trending
towards multi-objective optimization at the system level rather than
single-objective optimization at the component level. Minimizing the cycle
time from a design idea to an engine prototype (low to high Technology
Readiness Levels) has also been one of the critical targets. However, due
to the non-linear dynamics of fluid flow, the inaccuracies of the
low-order models at the component level accumulate and consequently alter
the system-level optimization. There is a strong need for research into
high-fidelity simulations (LES/DNS). Such simulations (a) could provide
the detailed flow physics and statistics needed to improve the accuracy of
the low-order models and (b) can be coupled with low-fidelity methods to
build accurate frameworks that handle multiple components.
This talk will provide an overview of high-fidelity simulations to
capture turbulent flows in gas turbines using HPC. The importance of
accelerating applications on both distributed-memory (MPI) and shared-memory
(GPU) platforms for high-order schemes will be highlighted.
|
July 28 | 16:00 |
| Narasimhan Swaminathan (ME) High Performance Computing in Molecular Dynamics Simulations [Video]
Molecular Dynamics (MD) simulation is an ideal example of a technique whose progress has
depended critically on the high-performance computing environment. MD was introduced in the
1950s, and realistic systems were explored during the late 1960s and early 1970s. Since
then, the length and time scales of the problems addressed by MD have grown with
computational power. Currently, MD is used for a range of applications in materials science,
biology, and chemistry. In one recent work, MD was used to simulate the behavior of
an entire gene consisting of a billion atoms; these simulations were conducted on an
HPC environment with over a million cores. This talk will discuss some basics of MD and
describe the aspects where HPC is necessary. The talk will then discuss several examples
where MD together with HPC is currently being used.
|
July 29 | 10:00 |
| Tarak Patra (CHE) Accelerating Inverse Materials Design via Gaming AI and High-Performance Computing [Video]
The manipulation of molecular-scale structures and interactions can alter the properties and
performance of materials dramatically and provides enormous opportunities for materials
design. However, due to a general lack of rapid, parallelizable techniques for measuring
materials properties, progress in de novo molecular design is slow and often intractable.
In fact, it is almost impossible to efficiently explore the vast chemical and architectural
parameter space by traditional methods. Recent advancements in big-data analytics and
powerful supercomputers have brought artificial intelligence (AI) and machine learning (ML)
techniques that can address this challenge in accelerating molecular-scale materials design.
To this end, we have developed a design framework in which a gaming AI algorithm is
combined with molecular dynamics (MD) simulations to solve the inverse design problem of
materials. In this framework, molecular simulation is used to establish the structure-property
relationship data set, while the AI algorithm screens the data set very efficiently to identify
optimal structures that correspond to a target property. Moreover, high performance
computing (HPC) enables the rapid measurement and building of large data sets of molecular
properties via MD simulation. Here, we demonstrate the AI-MD scheme for a
representative albeit complex polymer inverse design problem, viz., the design of a
sequence-specific copolymer that corresponds to a user-desired property. The AI-MD scheme
has efficiently explored the sequence space of a copolymer to identify optimal structures
that can be used to improve the thermodynamic and mechanical stability of two immiscible
liquids. In particular, it has identified copolymer structures that nullify the surface
tension between the two immiscible liquids. The data generated during the design cycle
provide new mechanistic insights into the compatibilization of emulsions and other
composite materials.
|
July 29 | 11:00 |
| Chakravarthy Balaji (ME) Numerical simulations of extreme rainfall event in December 2015 over Chennai under Pseudo Global Warming Conditions [Video]
For the last few years, the frequency of extreme weather events in the south Indian region has been
increasing. In the last two consecutive years, Kerala has experienced heavy rainfall and related
floods. Similarly, in 2015, Chennai experienced very severe rainfall. Therefore, it is imperative to study
the changes in these extreme weather events under future climatic conditions, with a view to systematically
predicting the effect of carbon emissions, if any, on the future behavior of the already severe weather events
of the recent past. In this study, numerical experiments have been carried out to study the changes in the
intensity of the Chennai extreme rainfall event that occurred in November-December 2015 in the far future,
i.e., 2075, under the highest emission scenario.
The Weather Research and Forecasting (WRF) model is used for all numerical simulations. Initially, the
simulations are carried out for the Chennai rainfall event of 2015, and the results are compared with the
observed data. The simulations show good agreement with the observations. Later, the event is projected
into the future for a representative concentration pathway of 8.5 (which means that emissions continue at
the same rate until the year 2100). The future climatic conditions are obtained by the pseudo-global warming
(PGW) methodology. In PGW, the mean change of atmospheric variables in the future is calculated from a
global circulation model (GCM); we used the Coupled Climate System Model Version 4.0 GCM data to
calculate the mean changes. The calculated mean changes are added to the initial and boundary conditions
of the current event to obtain future climatic conditions for a similar event. WRF simulations are carried
out using these initial and boundary conditions to obtain the changes in the event in the future. All these
simulations require high computational power; the Virgo computer cluster at IIT Madras was used to carry
out the simulations.
From our study, it is concluded that the intensity of precipitation increases in the future. The peak rainfall
is obtained on December 1 for both the current and future simulations; however, in the future simulation
the peak rainfall increased by 17.37%. The amount of rainfall on the succeeding days is also seen to
increase, by 183.5%, 233.9%, and 70.8%. It is also seen that the atmospheric instability increases under
future climatic conditions compared to the current conditions. The increase in precipitation is likely due
to the increased amount of moisture in the atmosphere because of the increase in atmospheric temperature
due to global warming. However, more studies with more similar events have to be carried out to reach a
concrete conclusion.
|
July 29 | 15:00 |
| Himanshu Goyal (CHE) High Performance Computing for Clean Energy Technologies [Video]
One of the biggest challenges facing humanity is global warming caused by the increase of greenhouse
gases in the atmosphere due to the excessive use of fossil fuels. Today, the need to develop and employ
clean energy technologies is imperative to ameliorate the dangers of global warming and, at the same
time, meet the growing energy demand of society. Examples of clean energy technologies include the
utilization of renewable energy sources (e.g., solar, wind, and biomass), enhanced energy efficiency of
the existing processes, electrification, and carbon capture technologies. Most of these processes are
multiscale (large separation of length scales) and multiphysics (e.g., multiphase flow, chemical kinetics,
and microwave heating) in nature, making their modeling challenging. These issues, along with the
scarcity of computational resources in the past, have led to the widespread use of empirical relations,
which are not reliable and cannot be used for optimizing the process. Recent progress in High
Performance Computing (HPC) provides an opportunity to gain unprecedented insights into emerging
clean energy processes. This talk will demonstrate the use of HPC in multiscale simulations of
thermochemical conversion of biomass into biofuels and microwave heating of multiphase systems. For
thermochemical conversion of biomass, particle-scale and chemical kinetics models are developed and
integrated into an in-house computational fluid dynamics (CFD) code to perform reactor-scale
simulations. For microwave heating, a coarse-graining based multiscale methodology is developed to
perform high-throughput simulations. The focus of the work is on the development of multiscale
computational frameworks and their utilization to develop rational design tools.
|
Organizers
Himanshu Goyal (CHE),
Kameswararao Anupindi (ME),
Ratna Kumar Annabattula (ME),
Rupesh Nasre (CSE),
Santanu Ghosh (AE),
Sunetra Sarkar (AE)
|