Special Topic: Frontiers in Science and Technology Ethics

On ethical decision-making in algorithm design for autonomous vehicles——Based on meaningful human control

  • LI Dexin,
  • GONG Zhichao
  • 1. Research Center for Philosophy of Science and Technology, Shanxi University, Taiyuan 030006, China
    2. School of Philosophy and Sociology, Shanxi University, Taiyuan 030006, China

LI Dexin, Associate Professor; research interests: ethics of science and technology, philosophy of science; e-mail: ldx@sxu.edu.cn. GONG Zhichao (co-first author), Ph.D. candidate; research interest: ethics of science and technology; e-mail: madradaist@163.com

Received date: 2022-12-27

Revised date: 2023-02-25

Online published: 2023-04-27

Funding

Major Project of the National Social Science Fund of China (18ZDA030); Humanities and Social Sciences Key Research Base Project of Higher Education Institutions in Shanxi Province (2022J002); Postgraduate Education and Teaching Reform Project of Shanxi Province (2022YJJG031)

Cite this article

LI Dexin, GONG Zhichao. On ethical decision-making in algorithm design for autonomous vehicles——Based on meaningful human control[J]. Science & Technology Review, 2023, 41(7): 47-54. DOI: 10.3981/j.issn.1000-7857.2023.07.005

Abstract

Based on "meaningful human control", a core concept in the ethics of artificial intelligence, this article summarizes the ethical difficulties that arise at the algorithm design stage of autonomous driving and analyzes the feasibility of applying "meaningful human control" to autonomous vehicles. On this basis, starting from the two conditions of "tracking" and "tracing", a "meaningful human control" framework is constructed around "accountability and transparency" and "value-sensitive design", which may provide systematic methodological guidance for the algorithm design of autonomous vehicles.
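As a purely illustrative aid, and not the authors' method, the short Python sketch below shows one way the "tracking" and "tracing" conditions could be recorded as explicit, auditable data in an autonomous-driving decision pipeline; every name in it, such as Decision, satisfies_tracking, satisfies_tracing, and audit, is a hypothetical placeholder.

# Illustrative sketch only: hypothetical names, not taken from the article.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Dict, List, Set


@dataclass
class Decision:
    """One driving decision, annotated for later accountability review."""
    maneuver: str                  # e.g. "emergency_brake", "change_lane_left"
    human_reasons: List[str]       # human values/rules the decision responds to (tracking)
    responsible_agents: List[str]  # designers/operators answerable for the behaviour (tracing)
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


def satisfies_tracking(decision: Decision, endorsed_reasons: Set[str]) -> bool:
    # Tracking condition: the behaviour should be responsive to relevant human
    # moral reasons, approximated here as citing at least one endorsed reason.
    return any(reason in endorsed_reasons for reason in decision.human_reasons)


def satisfies_tracing(decision: Decision) -> bool:
    # Tracing condition: the outcome should be traceable to at least one human
    # agent who understands and accepts responsibility for it.
    return len(decision.responsible_agents) > 0


def audit(decision: Decision, endorsed_reasons: Set[str]) -> Dict[str, object]:
    # Transparency record supporting accountability reviews after the fact.
    return {
        "maneuver": decision.maneuver,
        "tracking_ok": satisfies_tracking(decision, endorsed_reasons),
        "tracing_ok": satisfies_tracing(decision),
        "reasons": decision.human_reasons,
        "accountable": decision.responsible_agents,
        "time": decision.timestamp,
    }


if __name__ == "__main__":
    endorsed = {"protect_vulnerable_road_users", "obey_traffic_law"}
    d = Decision(
        maneuver="emergency_brake",
        human_reasons=["protect_vulnerable_road_users"],
        responsible_agents=["planning-team", "safety-officer"],
    )
    print(audit(d, endorsed))

The sketch deliberately keeps both conditions as recorded data rather than hidden planner state, which is the sense in which value-sensitive design and accountability requirements can be built into the algorithm rather than added after deployment.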
