Towards Explainable Interactive Artificial Intelligence: Motivations, Approaches, and Research Trends
Published: 2021-09-03
Authors: Wu Dan, Sun Guoye (Wuhan University)
Keywords: artificial intelligence; human-computer interaction; explainability; system transparency; users' trust
Abstract:

In recent years, artificial intelligence has become increasingly powerful and its application scenarios ever more widespread. With human-computer interaction and collaboration becoming the norm, people's expectations for AI trustworthiness and interaction experience are rising accordingly; the explainability of machines is highly valued by users, and explainable artificial intelligence (XAI) is becoming an important topic in related fields. Moving towards explainable interactive AI requires designing guiding frameworks and principles, developing sound algorithmic models, studying user needs in depth, building personalized interactive explanation systems, and conducting effective evaluation of explainability. Only through these means can we move towards explainable interactive AI and, ultimately, human-centered AI.

 


 

■ About the authors: Wu Dan, PhD in Management, Professor and doctoral supervisor, School of Information Management, Wuhan University, Wuhan, Hubei 430072; Sun Guoye, doctoral candidate, School of Information Management, Wuhan University.

