

Multimodal dialog based speech and facial biomarkers capture differential disease progression rates for ALS remote patient monitoring

M. Neumann, O. Roesler, J. Liscombe, H. Kothare, D. Suendermann-Oeft, J. D. Berry, E. Fraenkel, R. Norel, A. Anvar, I. Navar, A. V. Sherman, J. R. Green and V. Ramanarayanan (2021).

In Proc. of the 32nd International Symposium on Amyotrophic Lateral Sclerosis and Motor Neuron Disease, Virtual, December 2021.

Objective

Identify audiovisual speech markers that are responsive to clinical progression of Amyotrophic Lateral Sclerosis (ALS).
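For readers unfamiliar with how such responsiveness is typically quantified, the sketch below (illustrative only, not the analysis from the paper) estimates a per-participant progression rate as the slope of a speech metric over time; the column names and the speaking-rate metric are hypothetical placeholders.

```python
# Illustrative sketch only (not the authors' analysis): estimate a
# per-participant progression rate for a speech metric as the slope of that
# metric over time, then compare slopes across participants or groups.
import pandas as pd
from scipy.stats import linregress

def progression_rates(df: pd.DataFrame, metric: str = "speaking_rate") -> pd.Series:
    """Per-participant slope of `metric`, in units per day."""
    slopes = {}
    for pid, sessions in df.groupby("participant_id"):
        if len(sessions) < 2:
            continue  # need at least two sessions to fit a slope
        fit = linregress(sessions["days_since_baseline"], sessions[metric])
        slopes[pid] = fit.slope
    return pd.Series(slopes, name=f"{metric}_slope_per_day")

# Toy longitudinal data: one participant declining faster than the other.
sessions = pd.DataFrame({
    "participant_id": ["p01"] * 3 + ["p02"] * 3,
    "days_since_baseline": [0, 30, 60, 0, 30, 60],
    "speaking_rate": [160.0, 150.0, 138.0, 155.0, 154.0, 152.0],  # words/min
})
print(progression_rates(sessions))
```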

Investigating the Utility of Multimodal Conversational Technology and Audiovisual Analytic Measures for the Assessment and Monitoring of Amyotrophic Lateral Sclerosis at Scale

M. Neumann, O. Roesler, J. Liscombe, H. Kothare, D. Suendermann-Oeft, D. Pautler, I. Navar, A. Anvar, J. Kumm, R. Norel, E. Fraenkel, A. Sherman, J. Berry, G. Pattee, J. Wang, J. Green and V. Ramanarayanan (2021).

Accepted at Interspeech 2021, 22nd Annual Conference of the International Speech Communication Association, Brno, Czech Republic, August - September 2021.

 

Abstract

We investigate the utility of audiovisual dialog systems combined with speech and video analytics for real-time remote monitoring of amyotrophic lateral sclerosis (ALS) at scale in uncontrolled environment settings. We collected audiovisual conversational data from participants who interacted with a cloud-based multimodal dialog system, and automatically extracted a large set of speech and vision metrics based on the rich existing literature of laboratory studies. We report on the efficacy of various audio and video metrics in differentiating people with ALS at different levels of disease severity, and discuss the implications of these results for the deployment of such technologies in real-world neurological diagnosis and monitoring applications.
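As a concrete, purely illustrative example of what one such automatically extracted speech metric could look like, the sketch below computes speech and pause durations for a single recorded response using librosa; the file name and silence threshold are assumptions, and the actual platform extracts a far richer set of speech and facial metrics.

```python
# Minimal, hypothetical example of the kind of speech metrics referred to
# above (speech vs. pause time for one recorded response).
import librosa

def pause_metrics(wav_path: str, top_db: float = 30.0) -> dict:
    """Total duration, speech duration, and pause ratio for one recording."""
    y, sr = librosa.load(wav_path, sr=16000, mono=True)
    total_dur = len(y) / sr
    # Non-silent intervals, detected relative to the signal's peak level.
    intervals = librosa.effects.split(y, top_db=top_db)
    speech_dur = sum(end - start for start, end in intervals) / sr
    return {
        "total_duration_s": total_dur,
        "speech_duration_s": speech_dur,
        "pause_ratio": (total_dur - speech_dur) / total_dur if total_dur else 0.0,
    }

print(pause_metrics("participant_response.wav"))  # hypothetical file path
```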

Towards A Large-Scale Audio-Visual Corpus for Research on Amyotrophic Lateral Sclerosis

A. Anvar, D. Suendermann-Oeft, D. Pautler, V. Ramanarayanan, J. Kumm, J. Berry, R. Norel, E. Fraenkel and I. Navar (2021).

In Proc. of AAN 2021, 73rd Annual Meeting of the American Academy of Neurology, Virtual, April 2021.

 

Objective

This presentation describes the creation of a large, open data platform comprising speech and video recordings of people with ALS and healthy volunteers. Each participant is interviewed by Modality.AI's virtual agent, which emulates the role of a neurologist or speech-language pathologist guiding them through speaking exercises [Fig. 1]. The collected data are made available to the academic and research community to accelerate the development of biomarkers, diagnostics, therapies, and fundamental scientific understanding of ALS.
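To make the shape of such a corpus more concrete, here is a hypothetical sketch of how a single recording session might be represented; all field names and task labels are illustrative assumptions rather than the platform's actual schema.

```python
# Hypothetical sketch of one recording session in such a corpus; the fields
# below are illustrative assumptions, not the platform's actual schema.
from dataclasses import dataclass
from datetime import date

@dataclass
class SessionRecord:
    participant_id: str   # pseudonymous participant identifier
    cohort: str           # "ALS" or "healthy volunteer"
    session_date: date
    task: str             # speaking exercise prompted by the virtual agent
    audio_path: str       # recorded speech for the task
    video_path: str       # synchronized facial video

example = SessionRecord(
    participant_id="p001",
    cohort="ALS",
    session_date=date(2021, 4, 1),
    task="passage reading",
    audio_path="sessions/p001/2021-04-01/reading.wav",
    video_path="sessions/p001/2021-04-01/reading.mp4",
)
print(example)
```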

Lessons Learned from a Large-Scale Audio-Visual Remote Data Collection for Amyotrophic Lateral Sclerosis Research

Vikram Ramanarayanan¹,⁷, Michael Neumann¹, Aria Anvar⁵, Oliver Roesler¹, Jackson Liscombe¹, Hardik Kothare¹, David Suendermann-Oeft¹, James D. Berry², Ernest Fraenkel³, Raquel Norel⁴, Alexander V. Sherman²,⁶, Jordan R. Green²,⁶ and Indu Navar⁵

¹Modality.AI, ²MGH Institute of Health Professions, ³Massachusetts Institute of Technology, ⁴IBM Thomas J. Watson Research Center, ⁵EverythingALS, Peter Cohen Foundation, ⁶Harvard University, ⁷University of California, San Francisco