Abstract
The rapid advancement of artificial intelligence (AI) technology and the widespread emergence of AI companions have transformed human-AI interaction from purely instrumental use to quasi-social engagement, potentially evolving into emotional attachment. This article systematically reviews two decades of interdisciplinary research in psychology and human-AI interaction, proposing a theoretical model to elucidate the formation of human-AI attachment. The review identifies three key findings: (1) Human-AI relationships undergo a dynamic progression from instrumental use to quasi-social interaction and, ultimately, to emotional attachment. (2) The development of AI attachment is influenced by dual pathways: individual factors (e.g., loneliness, usage motivation, emotional traits) and AI characteristics (e.g., anthropomorphism, autonomy, responsiveness). (3) This novel emotional bond raises ethical concerns, including emotional bubbles, privacy risks, and interpersonal alienation. The article constructs a triphasic model to delineate the evolution of human-AI emotional bonds: (1) Instrumental Use, where AI serves as a functional tool with minimal emotional engagement; (2) Quasi-Social Interaction, marked by anthropomorphism and bidirectional communication, though users remain aware of AI's non-human nature; and (3) Emotional Attachment, characterized by deep dependency, where AI becomes a "significant other" and a transitional object for emotional security. This model highlights the continuum of emotional investment, from functional commands to intimate self-disclosure and separation anxiety. The dual-path mechanism underpinning AI attachment formation integrates user-driven needs (e.g., social motivation, loneliness) and AI-driven performance (e.g., authenticity, autonomy, reactivity). AI's "backstage" features—privacy, non-judgmental feedback, and identity fluidity—foster a "digital sanctuary" for authentic self-expression, reinforcing attachment.
However, excessive reliance on AI may lead to emotional bubbles (illusory reciprocity), self-deception, and the deterioration of real-world social skills. Ethical dilemmas arise from AI's hyper-personalized emotional mimicry, which risks manipulating vulnerable users and exacerbating societal isolation. Despite these contributions, current research suffers from limitations, including cross-sectional designs, homogeneous samples (e.g., overrepresentation of young users), and a lack of neurobiological evidence. Future directions call for longitudinal studies, multimodal data, and investigations into AGI's potential to disrupt traditional attachment paradigms through bidirectional emotional capacities. Practical implications urge developers to embed ethical safeguards (e.g., transparency in emotional algorithms), policymakers to establish risk-assessment frameworks, and users to cultivate digital literacy for healthier human-AI coexistence. This study not only advances theoretical frameworks for digital-era attachment but also prompts philosophical reflection on the essence of intimacy, challenging conventional definitions of love and "inter-subjectivity" in an age where AI blurs the boundaries between tool and companion. Balancing technological innovation with ethical vigilance is paramount to ensuring the sustainable development of human-AI relationships.
| Translated title of the contribution | From Para-social Interaction to Attachment: The Evolution of Human-AI Emotional Relationships |
|---|---|
| Original language | Chinese (Traditional) |
| Pages (from-to) | 948-961 |
| Number of pages | 14 |
| Journal | Journal of Psychological Science |
| Volume | 48 |
| Issue number | 4 |
| State | Published - 20 Aug 2025 |