<?xml version="1.0"?>
<!DOCTYPE ArticleSet PUBLIC "-//NLM//DTD PubMed 2.0//EN" "http://www.ncbi.nlm.nih.gov/entrez/query/static/PubMed.dtd">
<ArticleSet>
  <Article>
    <Journal>
      <PublisherName>Quan Tech Quest Ltd.</PublisherName>
      <JournalTitle>Advances in Medical Informatics</JournalTitle>
      <Issn>2819-8298</Issn>
      <Volume>1</Volume>
      <PubDate PubStatus="epublish">
        <Year>2025</Year>
        <Month>06</Month>
        <Day>25</Day>
      </PubDate>
    </Journal>
    <ArticleTitle>How Do Neurological Disorders Affect the Acoustic Features of Speech?</ArticleTitle>
    <FirstPage>7</FirstPage>
    <LastPage>7</LastPage>
    <Language>eng</Language>
    <AuthorList>
      <Author>
        <FirstName>Mohammadjavad</FirstName>
        <LastName>Sayadi</LastName>
        <Affiliation>Department of Computer Engineering, Faculty of Ilam, Technical and Vocational University, Ilam, Iran</Affiliation>
        <Identifier Source="ORCID">0000-0003-1511-8725</Identifier>
      </Author>
      <Author>
        <FirstName>Farhad</FirstName>
        <LastName>Torabinezhad</LastName>
        <Affiliation>Rehabilitation Research Center, Department of Speech Therapy, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran</Affiliation>
      </Author>
      <Author>
        <FirstName>Gholamreza</FirstName>
        <LastName>Bayazian</LastName>
        <Affiliation>ENT and Head &amp; Neck Research Center, The Five Sense Health Institute, Rasoul Akram Medical Complex, Iran University of Medical Sciences, Tehran, Iran</Affiliation>
      </Author>
      <Author>
        <FirstName>Somayeh</FirstName>
        <LastName>Abedian</LastName>
        <Affiliation>Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Canada</Affiliation>
      </Author>
    </AuthorList>
    <History>
      <PubDate PubStatus="received">
        <Year>2025</Year>
        <Month>04</Month>
        <Day>25</Day>
      </PubDate>
      <PubDate PubStatus="accepted">
        <Year>2025</Year>
        <Month>05</Month>
        <Day>30</Day>
      </PubDate>
    </History>
    <Abstract>
Introduction: Neurological disorders often manifest as alterations in speech production, affecting acoustic features such as pitch, rhythm, articulation, and voice quality. These speech changes can serve as non-invasive biomarkers for early diagnosis and monitoring. This study aims to investigate how various neurological disorders affect the acoustic features of speech across multiple speaking styles using a data mining approach.


Material and Methods: We collected speech recordings from 383 participants, comprising 264 individuals diagnosed with various neurological disorders and 119 healthy controls. Participants read a standardized text in four speaking styles: questioning, excited, angry, and happy. A comprehensive set of acoustic features, covering prosodic, spectral, voice quality, formant, and temporal parameters, was extracted. Deep learning models were trained separately for each speaking style to classify neurological status, and feature importance analyses were conducted to identify the key acoustic indicators of neurological impairment.


Results: The deep learning models achieved classification accuracies ranging from 82.7% to 87.9% across speaking styles, with the highest performance observed in the angry speech condition. Prosodic features, particularly fundamental frequency and speaking rate, alongside voice quality measures such as jitter and shimmer, emerged as the most discriminative features. Distinct acoustic profiles were identified for different neurological disorders, and several features correlated significantly with clinical severity scores. Speaking style influenced the detectability of speech impairments, underscoring the value of analyzing diverse speech contexts.


Conclusion: Our findings demonstrate that neurological disorders induce characteristic alterations in acoustic speech features that can be effectively captured using data mining and deep learning techniques. Incorporating multiple speaking styles enhances diagnostic sensitivity. This approach holds promise for developing accessible, non-invasive speech-based biomarkers to support early diagnosis and monitoring of neurological diseases.
</Abstract>
  </Article>
</ArticleSet>
