Machine Learning and Deep Analytics for Biocomputing: Call for Better Explainability
Organizers: Prof. D. Petkovic (San Francisco State University), Prof. L. Kobzik (Harvard University), Prof. C. Re (Stanford University)
Contact: D. Petkovic Petkovic@sfsu.edu
The goals of this workshop are to discuss challenges in the explainability of current Machine Learning and Deep Analytics (MLDA) methods used in biocomputing, and ways to improve it. Explainability in MLDA refers to easy-to-use information, intended for experts and non-experts alike, that explains why and how an MLDA approach made its decisions. As such, explainability can refer to the derived (trained) MLDA model as a whole or to the decisions made on a specific data set. Given the increasing importance and use of MLDA methods in biocomputing, explainability is critical to achieving wider MLDA adoption and to improving its effective usage.
Specifically, we believe that the improved explainability of MLDA in biocomputing will result in the following benefits:
• Increased confidence in adopting MLDA among application and domain experts, who are often key decision makers (and often not ML experts).
• Easier testing, evaluation, and verification of MLDA results, which is critical for formal adoption and approval.
• Improved “maintenance” and facilitation of “human-in-the-loop” operation, where MLDA methods have to be supervised, changed, or tuned to new data or decision needs.
• Discovery of new knowledge and ideas (e.g., by uncovering new patterns and factors that contribute to MLDA decisions).
Nine workshop panelists bring outstanding experience covering all four constituencies in the biocomputing R&D and applications ecosystem: computational researchers who are experts in MLDA and who develop and use the technology; biocomputing practitioners who use MLDA but are not experts; editors and evaluators who must assess MLDA approaches in order to decide what to publish; and members of funding agencies who evaluate research results and use funding to influence the direction of research.
Workshop panelists are:
· Dr. M. Axton (Chief Editor, Nature Genetics)
· Dr. P. Bourne (Stephenson Chair of Data Science, Dir. of the Data Science Inst. and Prof. Dept. of Biomedical Eng., University of Virginia; Formerly Associate Dir. of Data Science, NIH)
· Dr. A. Esteva (Ph.D. Candidate, Stanford Univ.)
· Dr. R. Ghanadan (Program Manager, Defense Sciences Office, DARPA)
· Dr. W. Kibbe (Dir. of NCI Center for Biomedical Informatics and Inf. Technology)
· Dr. B. Percha (Assistant Professor, Mount Sinai; Head of R&D, HD2i)
· Dr. C. Re (Associate Prof., Stanford Univ.)
· Dr. R. Roettger (Assistant Prof., Syddansk Univ., Denmark)
The three-hour workshop is designed and structured to encourage lively discussion, first to understand the problem and then to identify the ideas and next steps needed to improve MLDA explainability.
2) Need for Better Explainability in ML and Deep Analytics – View of “Users”
Panelists who are users but not necessarily experts in or developers of MLDA will outline their experience and their needs for better explainability in MLDA; they will be encouraged to pose specific challenges and goals to developers of MLDA technology for providing better explainability.
3) Achieving Better Explainability in ML and Deep Analytics – View of Researchers
Panelists on the research side of MLDA methods will present the state of the art in MLDA explainability and discuss potential solutions to the challenges outlined by the preceding panelists.
4) Discussion with Panelists and Audience
The goal of this section is to engage the audience in discussion and to develop suggestions and guidance on how to improve MLDA explainability.
We look forward to the panelist presentations and to active audience participation in this workshop!