Semantic workflows for benchmark challenges: Enhancing comparability, reusability and reproducibility

Arunima Srivastava1, Ravali Adusumilli2, Hunter Boyce2, Daniel Garijo3, Varun Ratnakar3, Rajiv Mayani3, Thomas Yu4, Raghu Machiraju1, Yolanda Gil3, Parag Mallick2,*


1Computer Science and Engineering, The Ohio State University
2Canary Center for Cancer Early Detection, Stanford University
3Information Sciences Institute, University of Southern California
4Sage Bionetworks
*Corresponding author
Email: srivastava.1@osu.edu, ravali@stanford.edu, hboyce@stanford.edu, dgarijo@isi.edu, varunr@isi.edu, mayani@isi.edu, thomas.yu@sagebionetworks.org, machiraju.1@osu.edu, gil@isi.edu, paragm@stanford.edu

Pacific Symposium on Biocomputing 24:208-219(2019)

© 2019 World Scientific
Open Access chapter published by World Scientific Publishing Company and distributed under the terms of the Creative Commons Attribution (CC BY) 4.0 License.


Abstract

Benchmark challenges, such as the Critical Assessment of Structure Prediction (CASP) and the Dialogue for Reverse Engineering Assessments and Methods (DREAM), have been instrumental in driving the development of bioinformatics methods. Typically, a challenge is posted and competitors generate predictions from blinded test data. Challengers then submit their answers to a central server, where they are scored. Recent efforts to automate these challenges have been enabled by systems in which challengers submit Docker containers, units of software that package code together with all of its dependencies, to be run on the cloud. Despite their considerable value in providing an unbiased test-bed for the bioinformatics community, there remain opportunities to further enhance the impact of benchmark challenges. Specifically, current approaches evaluate only end-to-end performance; it is nearly impossible to directly compare methodologies or parameters. Furthermore, the scientific community cannot easily reuse challengers' approaches, due to a lack of specifics, ambiguity in tools and parameters, and problems with sharing and maintenance. Lastly, the intuition behind why particular steps are used is not captured, as the proposed workflows are not explicitly defined, making it cumbersome to understand how data flow through and are used by each step. Here we introduce an approach to overcoming these limitations based upon the WINGS semantic workflow system. Specifically, WINGS enables researchers to submit complete semantic workflows as challenge submissions. By submitting entries as workflows, it becomes possible to compare not just a challenger's results and performance, but also the methodology employed. This is particularly important when dozens of challenge entries may use nearly identical tools, but with only subtle changes in parameters (and radical differences in results).
WINGS uses a component-driven workflow design and offers intelligent parameter and data selection by reasoning about data characteristics. This proves especially critical in bioinformatics workflows, where default or incorrect parameter values can drastically alter results. Different challenge entries may be readily compared through the use of abstract workflows, which also facilitate reuse. WINGS runs on a cloud-based setup that stores data, dependencies and workflows for easy sharing and reuse. It can also scale workflow executions across distributed computing resources through the Pegasus workflow execution system. We demonstrate the application of this architecture to the DREAM proteogenomic challenge.
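The methodological comparison described above can be illustrated with a small sketch. The data structures and function below are hypothetical, not the WINGS API: representing each submission as an explicit sequence of (component, parameters) steps makes it possible to diff two entries step by step, instead of comparing only their final scores.

```python
# Hypothetical sketch: challenge entries represented as explicit workflows
# (one (component, parameters) pair per step) so that methodologies can be
# compared directly. These structures are illustrative, not the WINGS model.

def diff_workflows(a, b):
    """Compare two workflows step by step, reporting component and parameter differences."""
    diffs = []
    for i, ((comp_a, params_a), (comp_b, params_b)) in enumerate(zip(a, b)):
        if comp_a != comp_b:
            # The two entries use different tools at this step.
            diffs.append((i, "component", comp_a, comp_b))
        else:
            # Same tool: check for subtle parameter changes.
            for key in sorted(set(params_a) | set(params_b)):
                if params_a.get(key) != params_b.get(key):
                    diffs.append((i, f"param:{key}", params_a.get(key), params_b.get(key)))
    return diffs

# Two entries using nearly identical tools that differ in a single parameter
# (component and parameter names are made up for illustration).
entry_1 = [("PeptideSearch", {"tolerance_ppm": 10}), ("FDRFilter", {"threshold": 0.01})]
entry_2 = [("PeptideSearch", {"tolerance_ppm": 20}), ("FDRFilter", {"threshold": 0.01})]

print(diff_workflows(entry_1, entry_2))
# → [(0, 'param:tolerance_ppm', 10, 20)]
```

With end-to-end scoring alone, these two entries would appear as unrelated results; as workflows, the single differing parameter is immediately visible.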

