High-Performance Computing Meets High-Performance Medicine

About the Workshop

Artificial intelligence (AI) is transforming patient experiences, clinician workflows, biomedical research, and pharmaceutical development across the healthcare sector. In recent decades, technological advancements across scientific and medical disciplines have produced a torrent of diverse, large-scale biomedical datasets, including health records, imaging data, clinical notes, lab test results, and other 'omics data. The dropping costs of genomic sequencing, coupled with advances in computing, offer unprecedented opportunities to understand the effects of genetics on human disease etiologies and have led to the creation of population-level biobanks.

As a consequence, the demand for novel computational methods, computational infrastructure, and algorithmic improvements to efficiently process and derive insights from these datasets, particularly in clinical translational research, has dramatically increased. Beyond handling the sheer size and quantity of biomedical data, newly developed methods must also adapt and employ state-of-the-art AI algorithms that account for the unique complexities of biomedical datasets: sparseness, incompleteness, and noisiness, as well as multidimensionality spanning clinical measurements from electronic health records, prescription drug data, and environmental exposures. These methods must also leverage advances in high-performance computing, such as GPUs, faster interconnects, and fast-access memory, to generate the needed insights at a faster rate.

In this workshop, we have invited leading experts to share their viewpoints on the development and application of artificial intelligence and cutting-edge computing approaches that are driving innovation in precision medicine. We will discuss current breakthroughs in which our speakers are involved and the strengths and limitations of artificial intelligence in medicine.


Workshop Topics

AI in Healthcare

AI can help identify patterns in data that are too subtle for humans to see, making it possible to diagnose diseases earlier and more accurately and to develop new treatments. AI has already been used to create algorithms that predict heart attacks, diabetic retinopathy, and breast cancer with high accuracy. We will discuss the latest AI and deep-learning innovations in healthcare, and examine the obstacles that can lead to inaccurate and biased AI systems.
Genomics in medicine

Genomics is an integral part of precision medicine and of targeted, effective treatments. AI systems enable the identification of patterns and correlations between genomic and clinical data that would otherwise remain hidden. By understanding these patterns, researchers can develop new treatments and therapies that are more targeted and effective. The workshop will cover state-of-the-art AI systems in genomic medicine and their use in disease prevention through techniques that assess the risk of disease development. We will further discuss how these approaches allow researchers to explore different treatment options quickly and efficiently until they find the best possible solution for each individual patient's needs.
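As a small illustration of one risk-assessment technique in this space, the sketch below computes a toy polygenic risk score (PRS): a weighted sum of an individual's risk-allele dosages. The variant IDs and effect sizes are invented for the example; real scores use weights estimated from genome-wide association studies.

```python
# Toy polygenic risk score (PRS): weighted sum of risk-allele dosages.
# Variant IDs and effect sizes below are invented for illustration only;
# real PRS weights come from genome-wide association studies.

effect_sizes = {
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.30,
}

def polygenic_risk_score(genotypes):
    """genotypes maps variant ID -> risk-allele dosage (0, 1, or 2)."""
    return sum(effect_sizes[v] * dose
               for v, dose in genotypes.items()
               if v in effect_sizes)

patient = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_risk_score(patient))  # 2*0.12 + 1*(-0.05) + 0*0.30 ≈ 0.19
```

In practice the raw score is standardized against a reference population so that an individual's risk can be reported as a percentile.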
Drug discovery

The use of artificial intelligence (AI) and high-performance computing (HPC) for drug discovery and drug repurposing is a growing trend in the pharmaceutical industry. AI can be used to identify potential new drugs by screening large databases of chemical compounds, and HPC can be used to simulate the effects of these drugs on biological systems. The workshop will cover topics at the intersection of AI and HPC that have the potential to speed up drug discovery, reduce costs, and improve patient outcomes.
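A minimal sketch of the screening idea mentioned above: rank library compounds by the Tanimoto similarity of their structural fingerprints to a query compound. The fingerprints here are hand-made sets of feature bits and the compound names are invented; a real pipeline would derive fingerprints with a cheminformatics toolkit such as RDKit.

```python
# Toy similarity-based virtual screen. Fingerprints are hand-made bit sets
# and compound names are invented; real fingerprints would be computed
# from molecular structures with a cheminformatics toolkit.

def tanimoto(a, b):
    """Tanimoto similarity of two fingerprint bit sets: |A & B| / |A | B|."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

query = {1, 4, 7, 9}
library = {
    "cmpd_A": {1, 4, 7, 8},
    "cmpd_B": {2, 3, 5},
    "cmpd_C": {1, 4, 7, 9, 12},
}

# Rank the library from most to least similar to the query.
ranked = sorted(library,
                key=lambda name: tanimoto(query, library[name]),
                reverse=True)
print(ranked)  # cmpd_C shares the most feature bits with the query
```

Similarity screening like this is only a first filter; top-ranked hits would then go to physics-based simulation on HPC systems for closer evaluation.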
Exascale computing to advance precision medicine

As biomedical researchers strive to improve our understanding of complex diseases, they are increasingly turning to computation for help. Exascale computing refers to computers capable of performing one billion billion (10^18) calculations per second. This level of power is necessary for running detailed simulations of biological systems, which can involve millions or even billions of individual atoms or molecules. By accurately modeling these systems, we can better understand how they work and what goes wrong when disease occurs. The workshop will provide a broad overview of exascale computing and of the ways this technology can supply the power needed to run detailed simulations of biological systems and to train deep-learning algorithms that identify patterns in data, accelerating the discovery of new drugs and treatments and advancing biomedical research more broadly.
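A back-of-envelope calculation shows why that scale matters for biomolecular simulation. The per-step operation count and step count below are rough, illustrative assumptions, not measurements of any particular code.

```python
# Why exascale matters: idealized runtime of a long molecular-dynamics run.
# The per-step cost and step count are rough, illustrative assumptions.

EXAFLOPS = 1e18       # operations per second at exascale (10^18)
ops_per_step = 1e12   # assumed cost of one MD time step for a large system
steps = 1e9           # assumed number of time steps for a relevant timescale

total_ops = ops_per_step * steps   # 10^21 operations in total
seconds = total_ops / EXAFLOPS     # ideal runtime, assuming perfect parallelism
print(f"{seconds:.0f} s (~{seconds / 3600:.2f} h)")
```

Under these assumptions the whole run takes on the order of a thousand seconds at exascale, whereas a petascale machine (10^15 operations per second) would need a thousand times longer, roughly two weeks.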

Workshop Organizers

Ali Torkamani, PhD is the Director of Genomics and Genome Informatics at the Scripps Research Translational Institute and Professor at The Scripps Research Institute. Dr. Torkamani’s research centers on the use of genomic and informatics technologies to identify the genetic etiology and underlying mechanisms of human disease in order to define health risks and individualized interventions. Major focus areas include human genome interpretation, genomic discovery of novel rare diseases, comprehensive, genetically informed machine- and deep-learning prediction of risk for common diseases, and digital communication of genetically informed disease risk. He has authored over 100 peer-reviewed publications as well as numerous book chapters and Medscape references, and his research has been highlighted in the popular press. Dr. Torkamani’s overall vision is to decipher the genomic code in order to understand and predict interventions that restore diseased individuals to their personal health baseline.

Anurag Verma, PhD is the Associate Director of Clinical Informatics and Genomics for the Penn Medicine BioBank and Instructor in the Department of Medicine at the University of Pennsylvania. His research focuses on the genetic basis of complex diseases using big-data techniques, with emphasis on the genetic architecture of multimorbidity, the phenotypic architecture of common genetic risk, polygenic risk scores, and phenome-wide association studies that identify the complex phenotypic and genomic interactions leading to complex disease. He has biomedical informatics expertise in the integration of genetic data with electronic health records (EHRs) from large biobanks, with extensive experience in analyzing large biobank datasets, including the Penn Medicine BioBank, the Million Veteran Program, Geisinger MyCode, and the eMERGE network.

Jennifer Huffman, PhD is an Assistant Professor in the Department of Medicine at Harvard Medical School and the Scientific Director for Genomics Research within the Center for Population Genomics at the VA Boston Healthcare System. She is currently an investigator with the VA Million Veteran Program, where she leads research investigations into the genetic contributions to cardiovascular risk factors and coordinates and implements several infrastructure programs. This work has also allowed her to participate actively in several collaborations with statisticians and computer scientists to improve methods for analyzing "big data."

Ravi Madduri is a computer scientist in the Data Science and Learning Division at Argonne National Laboratory and a Senior Scientist at the Center of Research Computing at the University of Chicago. He is an innovation fellow at the Polsky Center of Entrepreneurship at the University of Chicago. Ravi co-leads MVP-CHAMPION, a collaboration between the Department of Veterans Affairs and the Department of Energy, with the overall objective of applying advances in high-performance computing and AI to improve veteran health. Additionally, Ravi leads the Globus Genomics project (www.globusgenomics.org), which thousands of researchers worldwide use for genomics, proteomics, and other biomedical computations on the Amazon cloud and other platforms. He architected the Globus Galaxies platform that underpins Globus Genomics and several other cloud-based analytical services, realizing the vision of Science as a Service for creating and maintaining sustainable services for science. Ravi plays an important role in applying large-scale data analysis and deep learning to problems in biology. For his work on the "Cancer Moonshot" project, he received the Department of Energy Secretary award in 2017.