Introduction
The increase in cancer cases, the democratization of healthcare, and even the recent pandemic all point to a great need for leveraging AI technology in patient care. However, while AI has demonstrated that it can be a valuable companion to practitioners, meeting accuracy targets, removing bias, and increasing diagnostic throughput, its adoption in clinical practice is still slow.
While the problem is complex, in this instructional workshop we will focus on two critical aspects of adoption. AI technologies, notably deep learning techniques, carry inherent risks such as the codification of biases, weak accountability, and limited transparency in their decision-making process. AI technology needs not only to improve the diagnostic power of the data processed but also to provide evidence for its predictions in a way users can understand.
Additionally, AI technologies are largely focused on single modalities, limiting their ability to represent the patient’s overall state. AI technology should empower the user to advance knowledge and understanding, in particular in complex tasks like cancer patient care. Integrating different modalities in an interpretable way enhances cancer understanding and paves the way for personalized patient care.
In this workshop we will focus on understanding the decision process of medical imaging algorithms, in particular for cancer subtyping. We invite researchers to submit abstracts sharing novel methods that address the challenges of multimodal data integration and interpretable AI in cancer patient care. Abstracts will be reviewed, and the selected ones will be presented as spotlight talks highlighting the latest trends. We will then provide hands-on tutorials, aimed at both technical and clinical participants, on implementing explainability and multimodal data integration techniques on a multimodal cohort. Finally, we will discuss the generated models and their potential links.
Participants in this workshop will:
- Obtain an overview of basic methods and the latest trends in multimodal data integration and interpretable AI, and how they have the potential to close the gap between AI technologies and patient care practitioners, with a focus on cancer patient care
- Gain hands-on experience with integrating clinical data and radiology images for cancer subtyping and learn how to interpret the decisions of these models
- Gain hands-on experience with explainable image-based and graph-based classification of tumor tissue in histology images and learn how to interpret the decisions of these models
- Understand how Graph Neural Networks (GNNs) operating on cell graphs can directly incorporate a higher level of transparency in terms of entity importance, which can be interpreted via graph pruning
- Understand the link between clinical data, radiology and pathology images-based predictions through hands-on experiments and their evaluation
- Get access to and learn how to use ready-to-use tools that they will be able to further leverage in their own projects
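The graph-pruning idea above (scoring entity importance by removing nodes and observing the effect on the prediction) can be illustrated with a minimal, purely hypothetical sketch: a toy "cell graph" with random features, one round of untrained mean-neighbor message passing standing in for a trained GNN, and a per-cell importance score computed by pruning each node. None of this reproduces the workshop's actual tools.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "cell graph": 5 cells with 4 features each, plus a symmetric
# adjacency matrix encoding spatial neighborhood (all values hypothetical).
X = rng.normal(size=(5, 4))
A = np.array([
    [0, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

W = rng.normal(size=(4, 2))  # one message-passing layer, 2 output classes

def gnn_predict(X, A):
    """One round of mean-neighbor message passing, then global mean pooling."""
    A_hat = A + np.eye(len(A))                 # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)
    H = np.maximum((A_hat / deg) @ X @ W, 0)   # aggregate, transform, ReLU
    logits = H.mean(axis=0)                    # graph-level readout
    e = np.exp(logits - logits.max())
    return e / e.sum()                         # class probabilities

base = gnn_predict(X, A)

# Entity importance by pruning: drop each cell (node and its edges) and
# measure how much the graph-level prediction shifts.
importance = []
for i in range(len(X)):
    keep = [j for j in range(len(X)) if j != i]
    pruned = gnn_predict(X[keep], A[np.ix_(keep, keep)])
    importance.append(np.abs(base - pruned).sum())

ranking = np.argsort(importance)[::-1]  # most influential cells first
print(ranking)
```

Cells whose removal changes the prediction most are deemed most important, which is the transparency mechanism the tutorial explores on real histology cell graphs.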
The workshop is part of the AMIA 2021 Annual Symposium.
Program
AMIA 2021 Workshop ELAINE: October 30, 13:00-16:30
- Welcome! This workshop consists of 3 parts:
- Spotlights Talks
- Hands on Tutorial
- Clinical Relevance and Discussion
Tentative program
Time | Topic |
---|---|
Spotlight Talks | |
13:00 - 13:45 | Session 1 - Multimodal cancer data integration |
13:45 - 14:30 | Session 2 - Interpretability in cancer data analysis |
14:30 - 14:45 | Break |
Hands-on Tutorials | |
14:45 - 15:05 | Overview of interpretability & multimodality in deep learning, and demos |
15:05 - 16:15 | Hands-on session: participants choose one of three tutorials |
Clinical Relevance and Discussion | |
16:15 - 16:30 | Clinical perspective on the adoption, benefits and limitations of explainable multimodal cancer data integration; discussion of participants’ model outcomes and the links between radiology and pathology image-based predictions |
16:30 | END |
Call for workshop participation
You are kindly invited to submit an abstract and/or attend one of the hands-on tutorials of the AMIA 2021 Annual Symposium workshop:
ELAINE: Explainable muLtimodal AI in caNcer patient carE: how can we reduce the gap between technology and practice
The increase in cancer cases, the democratization of healthcare, and even the recent pandemic all point to a great need for leveraging AI technology in patient care. However, while AI has demonstrated that it can be a valuable companion to practitioners, meeting accuracy targets, removing bias, and increasing diagnostic throughput, its adoption in clinical practice is still slow.
While the problem is complex, in this instructional workshop we will focus on two critical aspects of adoption, namely explainability and multimodal data integration, as these shape our understanding of the decision process of medical imaging algorithms, in particular for cancer subtyping. Integrating different modalities in an interpretable way enhances cancer understanding and paves the way for personalized patient care.
In ELAINE, we aim to bring together clinicians and AI experts to share and review scientific progress and discuss challenges in the development of AI technologies for multimodal data integration and interpretable AI in cancer patient care. The workshop will include the following three sessions:
- Session 1: Multimodal cancer data integration
  How can we compensate for data scarcity? How feasible and important is it to quickly assemble a comprehensive patient cohort for the development of AI technologies in the early days of a pandemic, and what alternatives can technology offer?
- Session 2: Interpretability in cancer data analysis
  Which explainability techniques are most effective for which imaging modalities? Are current solutions meeting clinical practitioners’ needs for understanding AI outcomes? How do we validate them?
- Session 3: Hands-on tutorials
- Overview of interpretability and multimodal data integration in deep learning and demos (20 mins)
- Introduction to common techniques for both radiology and pathology images and ready to use tools and datasets
- Definition of radiology and pathology exercises: cancer subtyping
- Hands-on session (40 mins); participants to choose one of the three:
- Multimodal data integration: radiology images and clinical data
- Interpretability in pathology images’ analysis
- Radiology and pathology-based cancer subtyping explainable models’ evaluation
- Clinical Relevance and discussion (15 mins)
- Clinical perspective on the adoption, benefits and limitations of explainable multimodal cancer data integration (10 mins)
- Discussion of participants’ model outcomes and the links between radiology and pathology image-based predictions (5 mins)
We invite computational and clinical experts to submit abstracts for spotlight talks and/or join one of the hands-on tutorials. Abstracts will be reviewed, and the selected ones will be presented as spotlight talks highlighting the latest trends. Abstracts may be a maximum of one page (text and references, with optional figures and tables). They should be submitted electronically through the CMT system (Spotlight talks track), according to the submission guidelines. The hands-on tutorials, aimed at both technical and clinical participants, will focus on implementing explainability and/or multimodal data integration techniques on a multimodal cohort. To join the hands-on tutorials please register through the CMT system (Hands-on tutorial track).
The ELAINE instructional workshop will be a live event and will be held as part of AMIA 2021 on Oct 30, 2021, in San Diego, CA, USA. Please note that according to AMIA policy all participants have to register at least for the day of the Workshop.
Looking forward to your submission,
Maria Gabrani
Michal Rosen-Zvi
Michal Guindy
Lisa Ann Mullen
Efrat Hexter
Antonio Foncubierta
Guillaume Jaume
Pushpak Pati
Moshiko Raboh
Tal Tlusty
Submission Guidelines & Registration
The Workshop consists of two main parts (see program); spotlight talks and hands-on tutorials.
- The Spotlight talks focus on two main areas:
- Multimodal cancer data integration, and
- Interpretability in cancer data analysis
The Spotlight talks will be selected through peer review of the submitted abstracts (see the Spotlight talk abstract submission guidelines below).
- The hands-on tutorials include 3 options:
- Multimodal data integration: radiology images and clinical data
- Interpretability in pathology images’ analysis
- Radiology and pathology-based cancer subtyping explainable models’ evaluation
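The first tutorial topic, integrating radiology images with clinical data, can be realized in its simplest form as late fusion: normalize each modality and concatenate into one feature vector for a downstream classifier. The sketch below is hypothetical (random arrays stand in for CNN image embeddings and clinical variables; it does not reproduce the tutorial's actual tools or datasets).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-patient features: a pretrained-CNN embedding of the
# radiology scan plus a few clinical variables (e.g. age, grade, biomarker).
n = 8
image_emb = rng.normal(size=(n, 16))  # stand-in for CNN features
clinical = rng.normal(size=(n, 3))    # stand-in for clinical record features

def zscore(M):
    """Standardize each column so modalities contribute on a comparable scale."""
    return (M - M.mean(axis=0)) / (M.std(axis=0) + 1e-8)

# Late fusion: z-score each modality separately, then concatenate.
fused = np.concatenate([zscore(image_emb), zscore(clinical)], axis=1)
print(fused.shape)  # one 19-dimensional vector per patient
```

Per-modality normalization before concatenation is the key design choice here: without it, the higher-dimensional or larger-magnitude modality tends to dominate the fused representation.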
To join the workshop by either submitting a Spotlight Talk abstract and/or request participation in one of the Hands-on tutorials, please use the CMT Tool and the related track.
Spotlight talk abstract submission guidelines
The spotlight talk abstract format allows for the presentation of cutting-edge unpublished research for which the author wishes to reserve publication rights, or of outstanding work previously published in a peer-reviewed journal or conference proceedings in the last year (i.e., published after March 2020). Citation of the previous work is required. Priority will be given to previously unpublished work. The abstracts may be one of the following types:
- Work complete (published as defined above, or unpublished)
- Work in progress, which must be based on performed research and present actual (not expected) results
- Lessons learned
Authors of accepted abstracts will have up to 10 minutes to present their work in a Spotlight talk format at the Workshop, including any questions and discussion. Abstracts will not be indexed in MEDLINE, enabling authors to submit their best work destined for future journal publication.
Abstracts must be submitted as a one-page document (including all text, references, tables, and figures) in U.S. Letter format (8.5 x 11 inch) and may include:
- The names, affiliations, and locations (city, state, and country, if international) of all authors;
- A short background and objective(s) of the study;
- A short description of the approach and outcome measurement;
- Key findings;
- Key conclusions;
- Optional illustrations (figures or tables);
- References.
To submit a Spotlight talk abstract please use the CMT tool and the Spotlight Talks track.
Please note that according to AMIA policy all presenters have to register at least for the day of the Workshop. For registration and venue information please refer to the Registration and venue information section below.
Participation in Hands-on Tutorials
In the first part of the hands-on session, participants will be given an overview of interpretability and multimodal data integration in deep learning, common techniques will be introduced, and demos will be presented. Next, the hands-on session will give participants ready-to-use tools that they can further leverage in their own projects. The session will conclude with a clinical perspective on the adoption, benefits and limitations of explainable multimodal cancer data integration, followed by a discussion.
The hands-on tutorials are open to a general audience, including healthcare researchers, clinicians, informaticists, policy makers, healthcare organization professionals, AI professionals, etc. The expected level of knowledge is:
- Beginner level in interpretability/explainability methods for any participant
- Intermediate level of expertise in ML/DL code development for technology participants
- Any level of expertise in cancer radiology or pathology practice
The hands-on tutorials are relevant to anyone interested in:
- learning more about AI-based multimodal data integration methods in cancer patient data analysis, with a special (but not exclusive) focus on clinical data, radiology and pathology images
- learning basic concepts and skills of AI-based cancer subtyping, leveraging explainable AI technologies
- learning how to interpret AI models based on explainable AI technologies
- linking predictions from clinical data, radiology and pathology images in cancer patient care
To participate in the Hands-on tutorials the below pre-requisites are expected:
- Participants of tutorials (1) and (2):
- Good command in Python
- Familiarity with deep learning frameworks like Keras, PyTorch
- Knowledge of common deep learning tools: Convolutional Neural Networks (CNNs), Fully connected Networks
- Basic knowledge in graph theory
- Own laptop (with a working Conda setup). We recommend setting up the necessary environment for the exercises before the workshop
- Participants of tutorial (3):
- Familiarity with breast/prostate cancer subtyping
- Own laptop
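For tutorials (1) and (2), preparing the Conda environment ahead of time could look roughly like the following; the environment name and package list here are assumptions for illustration, and the definitive list will be provided with the tutorial instructions.

```shell
# Hypothetical environment setup; confirm packages against the tutorial
# instructions before the workshop.
conda create -n elaine-tutorial python=3.8 -y
conda activate elaine-tutorial
pip install torch numpy pandas scikit-learn matplotlib jupyter
```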
To participate in any of the tutorials please indicate your intent through the CMT Tool (Hands-on Tutorials track). We will respond to you with confirmation and additional information.
Please note that according to AMIA policy all presenters have to register at least for the day of the Workshop. For registration and venue information please refer to the Registration and venue information section below.
Registration and venue information
REGISTRATION: Registration for the AMIA 2021 Annual Symposium will open in late June. All presenters (including Spotlight talk presenters) will be required to register (at least for the day of the Workshop) and can do so by visiting https://www.amia.org/amia2021/registration. Author rates apply to all workshop presenters.
HOTEL: The host hotel for the AMIA 2021 Annual Symposium is the Hilton San Diego Bayfront in San Diego: https://www.amia.org/amia2021/venue. AMIA has negotiated a block of rooms at a special room rate. Reservations received after the AMIA block is filled are subject to availability and prevailing rates.
Organizing Committee
- Michal Guindy, MD, MPA, Director of medical imaging and pathology, Head of venture and innovation, Assuta Medical Centers, Israel - affiliated with Ben Gurion University, Tel Aviv, Israel
- Lisa Ann Mullen, MD, Breast Imaging Fellowship Director, Assistant Professor of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, MD
- Maria Gabrani, PhD, Manager, Cognitive Healthcare and Lifesciences, IBM Research, Zurich, Switzerland
- Michal Rosen-Zvi, PhD, Director for Health Informatics at IBM Research and a visiting Professor at the Faculty of Medicine, The Hebrew University, Haifa, Israel
- Efrat Hexter, MS, Manager Medical Imaging Solutions, IBM Research, Haifa, Israel
- Antonio Foncubierta, PhD, Research Staff Member, IBM Research, Zurich, Switzerland
- Guillaume Jaume, MS, PhD student, IBM Research, Zurich, Switzerland
- Pushpak Pati, MS, PhD student, IBM Research, Zurich, Switzerland
- Moshiko Raboh, MS, Research Scientist, IBM Research, Haifa, Israel
- Tal Tlusty, MS, Research Scientist, IBM Research, Haifa, Israel
Reviewers (not yet complete)
Important Dates
Milestone | Date |
---|---|
Circulation of call for papers | July 16, 2021 |
Early registration deadline | August 12, 2021 |
Abstract submission due date | September 14, 2021 |
Notification of spotlight talk acceptance | September 30, 2021 |
Submission of hands-on tutorial information | October 23, 2021 |
Advance registration deadline | October 7, 2021 |
Workshop date | October 30, 2021 |