Detecting stress, emotion or mental states of people from speech
- Multi-modal approaches: using other modes such as video and sensor data in addition to speech
- Relevance of language models for mental state detection
- Cross-corpus detection on non-acted speech databases in multiple languages and realistic environments
Effects of Audio on stress, emotion and mental states of people
- Audio-visual perception of music
- Analysis of brain signal responses to audio and visual stimuli
- Evaluation and applications: augmented reality, art installations, music animations, computer games, etc.
AI, Music and Mental Health
- AI for personalized music therapy for mental health
- AI-driven analysis of music therapy outcomes in mental health
- AI-centric music therapy applications for mental illnesses
- Ethical considerations in AI-driven music therapy for mental illness (addressing privacy, consent, etc.)
- AI-generated music as a therapeutic means for mental wellness
The Impact of Synthesized Speech on Human-Machine Interaction
- Psychological and cognitive effects of interacting with synthetic voices
- Psychological adaptation to human-like machines
- User perceptions of naturalness, trustworthiness, and likability in synthesized speech
- Long-term effects of constant exposure to machine speech
Other topics of interest in the context of stress, emotion and mental states
- Approaches to explainable AI in music and speech
- Sounds at inaudible frequencies
- Novel protocols for assessing mental states or inducing stress or emotion
- Applications related to the above topics
Submission Process
Submitted papers will be reviewed by the Scientific Committee, and each paper will receive at least two reviews. Submissions must be original and must not be simultaneously submitted to another journal or conference. Papers should follow the ICASSP 2026 format: at most 4 pages of content, with at most one additional page containing references only.