What’s New in HPC Research: Image Classification, Crowd Computing, Genome Informatics & More

In this bimonthly feature, HPCwire highlights newly published research in the high-performance computing community and related domains. From parallel programming to exascale to quantum computing, the details are here.

Classifying images at supercomputer scale

Deep learning is a computationally intensive task, increasingly requiring faster accelerators in larger clusters. Training deep learning models on this new hardware, however, means overcoming algorithmic and software challenges. This paper, written by a team from Google, discusses three systems-related optimizations to overcome these challenges: distributed batch normalization, input pipeline optimizations, and 2-D torus all-reduce. Combining these optimizations, the authors train an image classification deep learning model to 76.3% accuracy in 2.2 minutes.

Authors: Chris Ying, Sameer Kumar, Dehao Chen, Tao Wang and Youlong Cheng.

Streaming production application performance data for system monitoring

Understanding how HPC applications interact with their platforms is necessary for application performance tuning and troubleshooting; however, monitoring typically incurs either delays in feedback or heavy I/O loads. In this paper, researchers from the University of Central Florida and Sandia National Laboratories present an approach to streaming collection of application performance data. Their approach uses application event counters to create and gather performance data streams in a low-overhead way. Their performance analyses demonstrate that the method imposes at most 0.5% CPU usage overhead.

Authors: Ramin Izadpanah, Benjamin A. Allan, Damian Dechev and Jim Brandt.
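The general idea behind counter-based streaming can be illustrated with a minimal sketch. This is not the authors' implementation; the counter names and data shapes here are hypothetical, and the point is only that emitting small per-interval deltas keeps I/O (and thus overhead) far lower than writing full traces:

```python
def stream_counter_deltas(samples):
    """Given successive snapshots of application event counters,
    yield only the per-interval deltas: a small, constant-size
    record per interval rather than a full trace."""
    prev = None
    for snap in samples:
        if prev is not None:
            yield {k: snap[k] - prev[k] for k in snap}
        prev = snap

# Three successive snapshots of two hypothetical counters.
snaps = [
    {"bytes_read": 0,     "msgs_sent": 0},
    {"bytes_read": 4096,  "msgs_sent": 2},
    {"bytes_read": 12288, "msgs_sent": 5},
]
deltas = list(stream_counter_deltas(snaps))
# deltas == [{"bytes_read": 4096, "msgs_sent": 2},
#            {"bytes_read": 8192, "msgs_sent": 3}]
```

A monitoring consumer downstream would then see a steady, lightweight stream of deltas in near real time, rather than waiting for post-run log collection.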
Using FPGA-accelerated machine learning inference as a service for particle physics computing

Large-scale particle physics experiments are demanding more and more high-throughput computing resources, and heterogeneous computing paradigms with increased parallelization (such as FPGAs) offer promising solutions. In this paper, led by a team from the Fermi National Accelerator Laboratory, the authors demonstrate that "machine learning inference as a web service represents a heterogeneous computing solution for particle physics experiments that requires minimal modification to the current computing model." In tests, they demonstrate improvements over traditional CPU inference.

Authors: Javier Duarte, Philip Harris, Scott Hauck, Burt Holzman, Shih-Chieh Hsu, Sergo Jindariani, Suffian Khan, Benjamin Kreis, Brian Lee, Mia Liu, Vladimir Lončar, Jennifer Ngadiuba, Kevin Pedro, Brandon Perez, Maurizio Pierini, Dylan Rankin, Nhan Tran, Matthew Trahms, Aristeidis Tsaris, Colin Versteeg, Ted W. Way, Dustin Werran and Zhenbin Wu.

Enabling sustainable HPC with smartphone crowd computing

Due to supercomputers' enormous energy requirements and substantial e-waste, the environmental impacts of supercomputing are swiftly increasing. In this paper, a research team from the Indian National Institute of Technology and the Bengal Institute of Technology advocates a transition to smartphone-based crowd computing to make the supercomputing industry more sustainable. The authors discuss the likely enablers of such a transition and the feasibility of meaningfully replacing existing supercomputing power.

Authors: Pijush Kanti Dutta Pramanik, Saurabh Pal and Prasenjit Choudhury.

Transitioning numerical astrophysics into the exascale era

Digital sky surveys continue to produce ever larger and more complex datasets for use in numerical astrophysics experiments.
As these authors, a team from Italy and Greece, argue, these ballooning datasets will necessitate the development of "a new generation of high performance data reduction and analysis tools" as part of the transition to the exascale era. The authors discuss the challenges facing the design and development of the appropriate systems and tools, as well as the progress made in Europe in recent years. They conclude by assessing the impact of new computing resources on the ecosystem of numerical codes for astronomy and astrophysics.

Authors: Giuliano Taffoni, Giuseppe Murante, Luca Tornatore, David Goz, Stefano Borgani, Manolis Katevenis, Nikolaos Chrysos and Manolis Marazakis.

Facilitating HPC operation and administration via cloud

Cloud computing for HPC, and by extension HPC as a service (HPCaaS), has finally begun to take off after a long runway. The authors of this paper, a team from China and Slovenia, argue that cloud computing can also serve another purpose: facilitating the operation and administration of deployed HPC systems. They introduce a tool called 'EasyOP' that manages HPC systems through a centralized, unified control platform, demonstrating how the tool can monitor and communicate various aspects of system status and management.

Authors: Chaoqun Sha, Jingfeng Zhang, Lei An, Yongsheng Zhang, Zhipeng Wang, Tomi Ilijas, Nejc Bat, Miha Verlic and Qing Ji.

Accelerating genome informatics on parallel HPC platforms

Genome informatics (GI) is encountering a similar problem: rapidly expanding datasets involving intricately linked subsystems, which require co-designed parallel and accelerated biocomputing with reconfigurable hardware. In this paper, a team from the Indian Institute of Science presents ReneGENE-GI, a GI pipeline designed to optimize GI applications for HPC systems, with modules for GPUs and FPGAs. Comparing it to another GI tool, the authors find a dramatic increase in speed over the competition.
Authors: Santhi Natarajan, Krishna Kumar N., Debnath Pal and S.K. Nandy.

Do you know about research that should be included in next month's list? If so, send us an email at [email protected]. We look forward to hearing from you.