Exclusive: Review of Science and Technology Hotspots in 2024

Review of research hotspots in affective human computer interaction in 2024

  • Yu GU,
  • Fuji REN*
  • School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China

Received date: 2024-12-03

Online published: 2025-02-10

Copyright

All rights reserved. Unauthorized reproduction is prohibited.

Cite this article

Yu GU, Fuji REN. Review of research hotspots in affective human computer interaction in 2024[J]. Science & Technology Review, 2025, 43(1): 132-142. DOI: 10.3981/j.issn.1000-7857.2025.01.00035

1
Sabour S, Zhang W, Xiao X Y, et al. A chatbot for mental health support: Exploring the impact of Emohaa on reducing mental distress in China[J]. Frontiers in Digital Health, 2023, 5: 1133987.

2
Qiu H C, He H L, Zhang S, et al. SMILE: Single-turn to multi-turn inclusive language expansion via ChatGPT for mental health support[C]//Findings of the Association for Computational Linguistics: EMNLP 2024. Stroudsburg, PA, USA: ACL, 2024: 615-636.

3
Ryan O, Dablander F, Haslbeck J M B. Toward a generative model for emotion dynamics[EB/OL]. (2024-09-11) [2025-01-03]. https://osf.io/preprints/psyarxiv/x52ns.

4
Deng Y, Liao L Z, Zheng Z H, et al. Towards human-centered proactive conversational agents[C]//Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. New York: ACM, 2024: 807-818.

5
Ren F J, Zhou Y Y, Deng J W, et al. Tracking emotions using an evolutionary model of mental state transitions: Introducing a new paradigm[J]. Intelligent Computing, 2024, 3: 75.

6
Ren F J. Affective information processing and recognizing human emotion[J]. Electronic Notes in Theoretical Computer Science, 2009, 225: 39-50.

7
Sanchez L, Crocker D, Oo T M, et al. Robotic gestures, human moods: Investigating affective responses in public interaction[C]//Proceedings of Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. New York: ACM, 2024: 935-939.

8
Bal B S, Pitti A, Cohen L, et al. Assessing the sense of control during affective physical human robot interaction[C]//Proceedings of the 19th Annual ACM/IEEE International Conference on Human-Robot Interaction (HRI). Boulder: ACM, 2024.

9
Xu W, Gao Z F. Applying HCAI in developing effective human-AI teaming: A perspective from human-AI joint cognitive systems[J]. Interactions, 2024, 31(1): 32-37.

10
Dolan R J. Emotion, cognition, and behavior[J]. Science, 2002, 298(5596): 1191-1194.

11
Picard R W. Affective computing[M]. Cambridge: MIT Press, 2000.

12
Belkaid M, Pessoa L. Modeling emotion to enable intelligent behavior in robots[J]. Intellectica: La revue de l'Association pour la Recherche sur les sciences de la Cognition (ARCo), 2023, 79: 109-128.

13
Ekman P. An argument for basic emotions[J]. Cognition & Emotion, 1992, 6(3/4): 169-200.

14
Wundt W M, Judd C H. Outlines of psychology[M]. Leipzig: W. Engelmann, 1902.

15
Russell J A. A circumplex model of affect[J]. Journal of Personality and Social Psychology, 1980, 39(6): 1161-1178.

16
Bakker I, van der Voordt T, Vink P, et al. Pleasure, arousal, dominance: Mehrabian and Russell revisited[J]. Current Psychology, 2014, 33(3): 405-421.

17
Colombetti G, Kuppens P. How should we understand valence, arousal, and their relation?[M]//Emotion Theory: The Routledge Comprehensive Guide. New York: Routledge, 2024: 599-620.

18
Smith K E, Woodard K, Pollak S D. Arousal may not be anything to get excited about[J]. Emotion Review, 2024: 17540739241303499.

19
Sander D. Is "arousal", as a scientific concept, worse than useless?[J]. Emotion Review, 2024: 17540739241303501.

20
Uddin M T, Yin L J, Canavan S. Spatio-temporal graph analytics on secondary affect data for improving trustworthy emotional AI[J]. IEEE Transactions on Affective Computing, 2024, 15(1): 30-49.

21
Stappen L, Baird A, Schumann L, et al. The multimodal sentiment analysis in car reviews (MuSe-CaR) dataset: Collection, insights and improvements[J]. IEEE Transactions on Affective Computing, 2023, 14(2): 1334-1350.

22
Casado C Á, Cañellas M L, López M B. Depression recognition using remote photoplethysmography from facial videos[J]. IEEE Transactions on Affective Computing, 2023, 14(4): 3305-3316.

23
Huang W C, Wang W L, Li Y Q, et al. FBSTCNet: A spatio-temporal convolutional network integrating power and connectivity features for EEG-based emotion decoding[J]. IEEE Transactions on Affective Computing, 2024, 15(4): 1906-1918.

24
Wickramasuriya D S, Faghih R T. A Bayesian filtering approach for tracking arousal from binary and continuous skin conductance features[J]. IEEE Transactions on Biomedical Engineering, 2020, 67(6): 1749-1760.

25
Kang X, Shi X F, Wu Y N, et al. Active learning with complementary sampling for instructing class-biased multi-label text emotion classification[J]. IEEE Transactions on Affective Computing, 2023, 14(1): 523-536.

26
Ren F J, Liu Z, Kang X. An efficient framework for constructing speech emotion corpus based on integrated active learning strategies[J]. IEEE Transactions on Affective Computing, 2022, 13(4): 1929-1940.

27
Xue F L, Wang Q C, Tan Z C, et al. Vision transformer with attentive pooling for robust facial expression recognition[J]. IEEE Transactions on Affective Computing, 2022, 14(4): 3244-3256.

28
Wang Y, Song W, Tao W, et al. A systematic review on affective computing: Emotion models, databases, and recent advances[J]. Information Fusion, 2022, 83: 19-52.

29
Li S, Deng W H. Deep facial expression recognition: A survey[J]. IEEE Transactions on Affective Computing, 2022, 13(3): 1195-1215.

30
Li L B, Chen T, Ren F J, et al. Bimodal emotion recognition method based on graph neural network and attention[J]. Journal of Computer Applications, 2023, 43(3): 700-705.

31
Zhang W J, Song P, Zheng W M. Joint local-global discriminative subspace transfer learning for facial expression recognition[J]. IEEE Transactions on Affective Computing, 2023, 14(3): 2484-2495.

32
Ren F J, Yu M L, Hu M, et al. Bimodal video emotion recognition fusing facial expressions and BVP physiological signals[J]. Journal of Image and Graphics, 2018, 23(5): 688-697.

33
Gu Y, Zhang X, Yan H, et al. WiFE: WiFi and vision based unobtrusive emotion recognition via gesture and facial expression[J]. IEEE Transactions on Affective Computing, 2023, 14(4): 2567-2581.

34
Hossain M S, Muhammad G. Emotion recognition using deep learning approach from audio-visual emotional big data[J]. Information Fusion, 2019, 49: 69-78.

35
Deng J W, Ren F J. Review of hotspots in the development of generative AI large models in 2023[J]. Science & Technology Review, 2024, 42(1): 266-285.

36
Radford A, Kim J W, Hallacy C, et al. Learning transferable visual models from natural language supervision[C]//Proceedings of the International Conference on Machine Learning. Online: PMLR, 2021: 8748-8763.

37
Miah M S U, Kabir M M, Sarwar T B, et al. A multimodal approach to cross-lingual sentiment analysis with ensemble of transformer and LLM[J]. Scientific Reports, 2024, 14(1): 9603.

38
Xing F. Designing heterogeneous LLM agents for financial sentiment analysis[EB/OL]. [2024-12-15]. https://arxiv.org/abs/2401.05799v1.

39
Venerito V, Iannone F. Large language model-driven sentiment analysis for facilitating fibromyalgia diagnosis[J]. RMD Open, 2024, 10(2): e004367.

40
Hellwig N C, Fehle J, Wolff C. Exploring large language models for the generation of synthetic training samples for aspect-based sentiment analysis in low resource settings[J]. Expert Systems with Applications, 2025, 261: 125514.

41
Li C, Wang J D, Zhu K J, et al. Large language models understand and can be enhanced by emotional stimuli[EB/OL]. [2025-01-03]. https://arxiv.org/abs/2307.11760.

42
Miltiadou M, Savenye W C. Applying social cognitive constructs of motivation to enhance student success in online distance education[J]. AACE Review (formerly AACE Journal), 2003, 11(1): 78-95.

43
Bandura A. Health promotion from the perspective of social cognitive theory[J]. Psychology & Health, 1998, 13(4): 623-649.

44
Bao Y W, Ren F J. Research progress on human brain information processing and brain-inspired intelligence[J]. Science & Technology Review, 2023, 41(9): 6-16.

45
Huang Z, Ren F J, Hu M, et al. Robot facial emotion transfer network based on Transformer architecture and B-spline smoothing constraints[J]. Robot, 2023, 45(4): 395-408.

46
Huang Z, Ren F J, Hu M, et al. Real-time facial expression reproduction method for humanoid robots based on dual-LSTM fusion[J]. Robot, 2019, 41(2): 137-146.

47
Bubeck S, Chandrasekaran V, Eldan R, et al. Sparks of artificial general intelligence: Early experiments with GPT-4[EB/OL]. (2023-03-22) [2025-01-03]. https://arxiv.org/abs/2303.12712.

48
Wang G Q, Pei Y Q, Yang Y, et al. Multimodal trustworthy interaction: From multimodal information fusion to a human-robot-digital human trinity interaction model[J]. Scientia Sinica Informationis, 2024, 54(4): 872-892.

49
Shao L, Shi F. The impact of generative artificial intelligence on social bots and governance countermeasures[J]. Journal of Intelligence, 2024, 43(7): 154-163.

50
Radford A, Narasimhan K, Salimans T, et al. Improving language understanding by generative pre-training[EB/OL]. [2025-01-03]. https://www.semanticscholar.org/paper/Improving-Language-Understanding-by-Generative-RadfordNarasimhan/cd18800a0fe0b668a1cc19f2ec95b5003d0a50-35.

51
Na H. CBT-LLM: A Chinese large language model for cognitive behavioral therapy-based mental health question answering[C]//Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation. Torino, Italy: ELRA and ICCL, 2024: 2930-2940.

52
Liu C X, Xie Z Y, Zhao S R, et al. Speak from heart: An emotion-guided LLM-based multimodal method for emotional dialogue generation[C]//Proceedings of the 2024 International Conference on Multimedia Retrieval. New York: ACM, 2024: 533-542.

53
Hei X X, Yu C, Zhang H, et al. A bilingual social robot with sign language and natural language[C]//Proceedings of Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. New York: ACM, 2024: 526-529.

54
Hu Y H, Chen B Y, Lin J, et al. Human-robot facial coexpression[J]. Science Robotics, 2024, 9(88): eadi4724.

55
Llanes-Jurado J, Gómez-Zaragozá L, Minissi M E, et al. Developing conversational Virtual Humans for social emotion elicitation based on large language models[J]. Expert Systems with Applications, 2024, 246: 123261.

56
Chang C J, Sohn S S, Zhang S, et al. The importance of multimodal emotion conditioning and affect consistency for embodied conversational agents[C]//Proceedings of the 28th International Conference on Intelligent User Interfaces. New York: ACM, 2023: 790-801.

57
Kim C Y, Lee C P, Mutlu B. Understanding large-language model (LLM)-powered human-robot interaction[C]//Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction. New York: ACM, 2024: 371-380.

58
Ashkanasy N M, Daus C S. Rumors of the death of emotional intelligence in organizational behavior are vastly exaggerated[J]. Journal of Organizational Behavior, 2005, 26(4): 441-452.

59
Salovey P, Mayer J D. Emotional intelligence[J]. Imagination, Cognition and Personality, 1990, 9(3): 185-211.

60
MacCann C, Roberts R D. New paradigms for assessing emotional intelligence: Theory and data[J]. Emotion, 2008, 8(4): 540-551.

61
Yang Q, Ye M, Du B. EmoLLM: Multimodal emotional understanding meets large language models[EB/OL]. [2025-01-03]. https://arxiv.org/abs/2406.16442v2.

62
Xia C Y, Xing C, Du J S, et al. FOFO: A benchmark to evaluate LLMs' format-following capability[C]//Proceedings of the Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2024.

63
Mizrahi M, Kaplan G, Malkin D, et al. State of what art? A call for multi-prompt LLM evaluation[J]. Transactions of the Association for Computational Linguistics, 2024, 12: 933-949.
