Computational Audiology Network (CAN)
Jan-Willem Wasmann

3 episodes

A global network for the application of data science and AI to hearing (loss) research and technology. CAN will bring together academics, clinicians, industry partners, policymakers, and listeners to innovate hearing health(care).

    A holistic perspective on hearing technology

    In this episode, Brent Edwards from NAL and Stefan Launer from Sonova take us through their careers and share lessons and perspectives on the development of hearing technology. We discuss how technology development has become more holistic, design thinking, standardization, and what is needed to reach new service models and innovation.

    Time index of content:
    Early career learnings - 3:20
    Factors important for career success - 6:40
    Hearing healthcare trends over past 30 years - 9:10
    Design thinking and unmet needs in hearing - 14:00
    Barriers to adoption of hearing innovation - 19:05
    Hearables and alternatives to conventional hearing aids - 25:15
    Hearing health data ownership, sharing and privacy - 28:50
    Hearing manufacturer ecosystems and harmonization - 39:40
    Threats and opportunities with OTC hearing aids - 44:50
    Final points - 55:55
    Advice for people early in their career - 57:05

    • 59 min

    Automated Speech Recognition (ASR) for the deaf

    Automated Speech Recognition (ASR) for the deaf and communication on equal terms regardless of hearing status. Episode 2 with Dimitri Kanevsky, Jessica Monaghan and Nicky Chong-White. Moderator: Jan-Willem Wasmann.
    You are witnessing a recording of an interview that was set up as an experiment using an automated speech recognition (speech-to-text) system. One of the participants, Dimitri Kanevsky, is deaf and needs to read a transcript of what is said in order to follow the discussion; the other participants have normal hearing. We all need time to read the transcript and confirm that we understand each other properly. We are using Google Meet and Google Relate, a prototype system that is not yet publicly released and is trained on Dimitri's speech. In addition, we are in different time zones (16 hours apart), have not met in person before, and English is not everyone's first language. Of course, we hope the internet connection will not fail us. There will be a video recording (YouTube) and an audio-only recording. The video recording includes the transcript of what Dimitri says.

    In order to read the transcript on Dimitri's screen, please watch the audiovisual version on YouTube:
    https://youtu.be/7bvFCo3VXlU
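    The setup described above is, at its core, a streaming speech-to-text loop: capture an utterance, decode it, and show the text on screen. Below is a minimal sketch of such a captioning loop in Python, using the open-source SpeechRecognition library with Google's generic web speech API as a stand-in; Relate itself is an unreleased prototype personalised to Dimitri's speech, so this is an illustration of the general idea, not the system used in the recording.

```python
# Minimal live-captioning sketch. Assumes the SpeechRecognition package
# (and its PyAudio dependency) is installed; the generic Google web API
# here stands in for a personalised recogniser such as Relate.
import speech_recognition as sr

recognizer = sr.Recognizer()

def live_captions():
    """Continuously listen to the microphone and print a rolling transcript."""
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)   # calibrate noise floor once
        print("Listening... (Ctrl-C to stop)")
        while True:
            # Capture one utterance at a time, capped at 10 s per phrase.
            audio = recognizer.listen(source, phrase_time_limit=10)
            try:
                text = recognizer.recognize_google(audio)
                print(f"> {text}")                    # the on-screen caption
            except sr.UnknownValueError:
                print("> [inaudible]")                # recogniser could not decode
            except sr.RequestError as err:
                print(f"> [service error: {err}]")    # e.g. the connection failed

if __name__ == "__main__":
    live_captions()
```

    A production captioner would typically stream partial hypotheses rather than waiting for each utterance to end; the capture-decode-display cycle above is the simplest version of the same idea.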
    Jessica Monaghan works as a research scientist at the National Acoustic Laboratories (NAL, Sydney) with a special interest in machine learning applications in audiology. She studied physics at Cambridge (UK) and received her Ph.D. in Nottingham (UK). She worked as a research fellow in Southampton and at Macquarie University in Sydney. Her work focuses on speech reception and how to improve it for people with hearing loss. Recently she studied the effect of face masks on speech recognition.
    Nicky Chong-White is a research engineer at the National Acoustic Laboratories (NAL, Sydney). She studied electrical engineering at the University of Auckland (NZ) and received a Ph.D. in speech signal processing from the University of Wollongong (AU). She has worked as a DSP engineer with several research organisations, including the Motorola Australian Research Centre and AT&T Labs, and holds 10 patents. She is the lead developer behind NALscribe, a live captioning app designed especially for clinical settings that helps people with hearing difficulties understand conversations more easily. She has a passion for mobile application development and for creating innovative digital solutions that enrich the lives of people with hearing loss.
    Dimitri Kanevsky is a researcher at Google. He lost his hearing in early childhood. He studied mathematics and received a Ph.D. at Moscow State University. Subsequently, Dimitri worked at various research centers, including the Max Planck Institute in Bonn (Germany) and the Institute for Advanced Study in Princeton (USA), before joining IBM in 1986 and Google in 2014. He has worked for over 25 years on developing and improving speech recognition for people with profound hearing loss, leading to Live Transcribe and Relate. Dimitri has also worked on other technologies to improve accessibility. In 2012 he was honored at the White House as a Champion of Change for his efforts to advance access to science, technology, engineering, and math (STEM) for people with disabilities. Dimitri currently holds over 295 patents.
    Quotes from the interview:
    Dimitri: 'There is no data like more data.' (Mercer)
    Jessica: 'Blindness cuts us off from things, but deafness cuts us off from people.' (Helen Keller)
    Nicky: 'Inclusion Inspires Innovation.'
    Jan-Willem: 'Be careful about reading health books. You may die of a misprint.'  (Mark Twain)
    Further reading and exploring:
    https://blog.google/outreach-initiatives/accessibility/impaired-speech-recognition/

    • 1 hr 15 min

    Bayesian Active Learning in Audiology

    Here we discuss the potential of Bayesian active learning in audiology, in medicine, and beyond with Josef Schlittenlacher (ManCAD), Bert de Vries (TU/e) and Dennis Barbour (WashU, St. Louis).

    Quotes from the interview:
    Dennis: 'No Bayesianists are born, they are all converted' (origin unknown)
    Josef: 'The audiogram is the ideal testbed for Bayesian active learning.'
    Bert: 'Everything is the way it is because it got that way.' (D'Arcy Wentworth Thompson, 1860–1948)

    The last quote reflects the idea that everything evolved to where it is now. It is not a quote from the Free Energy Principle literature, but it has everything to do with it: the hearing system evolved to where it is now. To design proper hearing aid algorithms, we should not focus on finding the best algorithm but rather on an adaptation process that converges to better algorithms than before.
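    To make the idea concrete, consider the audiogram case Josef calls the ideal testbed: estimate a listener's hearing threshold from yes/no responses, choosing each next tone level to be maximally informative given the current posterior. The sketch below is a self-contained illustration of that loop under simplifying assumptions (a discrete grid posterior, a logistic psychometric function with a fixed slope, and a simulated listener at one frequency); it is not the model from any of the papers listed below.

```python
# Bayesian active learning for a single hearing threshold (one frequency).
# Simplifying assumptions: logistic psychometric function with known slope,
# a discrete grid posterior over the threshold, and a simulated listener.
import numpy as np

rng = np.random.default_rng(seed=1)

levels = np.arange(-10, 101)                    # candidate tone levels (dB HL)
thresholds = np.arange(-10, 101).astype(float)  # hypothesis grid for the threshold
posterior = np.full(thresholds.size, 1.0 / thresholds.size)  # flat prior
SLOPE = 0.5                                     # assumed psychometric slope

def p_heard(level, threshold):
    """P('heard' response) under a logistic psychometric curve."""
    return 1.0 / (1.0 + np.exp(-SLOPE * (level - threshold)))

def expected_entropy(level, posterior):
    """Expected posterior entropy after presenting a tone at `level`."""
    p_resp = p_heard(level, thresholds)         # P(heard | each hypothesis)
    p_marg = float(np.sum(posterior * p_resp))  # marginal P(heard)
    total = 0.0
    for lik, marg in ((p_resp, p_marg), (1.0 - p_resp, 1.0 - p_marg)):
        if marg > 1e-12:
            post = np.clip(posterior * lik / marg, 1e-12, None)  # Bayes update
            total += marg * -np.sum(post * np.log(post))         # weighted entropy
    return total

true_threshold = 35.0                           # hidden value for the simulated listener
for trial in range(20):
    # Active step: present the level that minimises expected posterior entropy
    # (equivalently, maximises expected information gain).
    scores = [expected_entropy(lv, posterior) for lv in levels]
    level = levels[int(np.argmin(scores))]
    heard = rng.random() < p_heard(level, true_threshold)  # simulated response
    # Bayes update with the observed response.
    likelihood = p_heard(level, thresholds) if heard else 1.0 - p_heard(level, thresholds)
    posterior *= likelihood
    posterior /= posterior.sum()

best = thresholds[int(np.argmax(posterior))]
print(f"estimated threshold: {best:.0f} dB HL (true value: {true_threshold:.0f})")
```

    Extending this from one threshold to a full audiogram, for instance with a Gaussian process over frequency and level as in the papers below, is what makes the approach practical: each tone then informs the estimate at every frequency at once.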

    Further reading and exploring:
    - https://computationalaudiology.com/bayesian-active-learning-in-audiology/
    - https://computationalaudiology.com/for-professionals/

    - Audiogram estimation using Bayesian active learning, https://doi.org/10.1121/1.5047436
    - Online Machine Learning Audiometry, https://pubmed.ncbi.nlm.nih.gov/30358656/
    - Bayesian Pure-Tone Audiometry Through Active Learning Under Informed Priors, https://www.frontiersin.org/articles/10.3389/fdgth.2021.723348/full
    - Digital Approaches to Automated and Machine Learning Assessments of Hearing: Scoping Review, https://www.jmir.org/2022/2/e32581

    • 48 min
