Review of research hotspots in affective human-computer interaction in 2024
Received date: 2024-12-03
Published online: 2025-02-10
Yu GU, Fuji REN. Review of research hotspots in affective human-computer interaction in 2024[J]. Science & Technology Review, 2025, 43(1): 132-142. DOI: 10.3981/j.issn.1000-7857.2025.01.00035