A framework for language technologies in behavioral research and clinical applications: Ethical challenges, implications, and solutions.
Journal Article
Overview
Abstract
Technological advances in the assessment and understanding of speech and language within the domains of automatic speech recognition, natural language processing, and machine learning present a remarkable opportunity for psychologists to learn more about human thought and communication, evaluate a variety of clinical conditions, and predict cognitive and psychological states. These innovations can be leveraged to automate traditionally time-intensive assessment tasks (e.g., educational assessment), provide psychological information and care (e.g., chatbots), and, when delivered remotely (e.g., by mobile phone or wearable sensors), promise underserved communities greater access to health care. Indeed, the automatic analysis of speech provides a wealth of information that can be used for patient care in a wide range of settings (e.g., mHealth applications), for diverse purposes (e.g., behavioral and clinical research, medical tools implemented into practice), and for varied patient populations (e.g., those with psychological disorders and patients in psychiatry and neurology). However, automating speech analysis is a complex task that requires integrating several different technologies within a large distributed process involving numerous stakeholders. Many organizations have raised awareness about the need for robust systems to ensure transparency, oversight, and regulation of technologies utilizing artificial intelligence. Because knowledge about the ethical and legal implications of these applications in psychological science remains limited, we provide a balanced view of both the widely published optimism and the challenges and risks of use, including discrimination and the exacerbation of structural inequalities. (PsycInfo Database Record (c) 2024 APA, all rights reserved).