The Privacy Pros and Cons of Anthropomorphized AI

Author: Safia Kazi, CSX-F, CIPT
Date Published: 14 September 2023

The rapid growth of generative artificial intelligence (AI) over the last year has raised numerous privacy and ethical issues, including discussion of the ethics and legality of generative AI learning from copyrighted work. For example, some authors have sued OpenAI, which owns ChatGPT, for copyright infringement.1 There is also concern about the ways in which generative AI might use personal information. For example, Italy’s data protection authority temporarily banned ChatGPT due to privacy-related concerns.2

Generative AI that creates text and chats with a user poses a unique challenge because it can lead people to feel as though they are interacting with a human. Anthropomorphism refers to the attribution of human traits or personality to nonhumans.3 People often anthropomorphize AI—especially generative AI—because of the human-like outputs it can create.

This false sense of relationship can be problematic. The ELIZA effect describes the phenomenon of “when a person attributes human-level intelligence to an AI system and falsely attaches meaning, including emotions and a sense of self, to the AI.”4 In one instance, a man shared his climate change anxieties with a chatbot, which provided him with methods of ending his own life. Tragically, he died by suicide, and his wife alleges that he would not have done so had it not been for his perceived relationship with the chatbot.5

Algorithms that produce human-like outputs can be fun to play with, but they raise serious privacy concerns. Anthropomorphized AI has privacy-related pros and cons, and enterprises that leverage consumer-facing generative AI must take certain considerations into account.

The Privacy Pros

One privacy-related benefit associated with anthropomorphizing AI is that it may help users contextualize excessive data collection. When an enterprise or application collects information, the data may seem like a meaningless jumble of ones and zeros. But when what seems like a stranger tells users their own private information, it feels unsettling, and the consequences of sharing data become concrete and tangible.

Apps may be required to include a description of what information they collect in the app store, but users often do not read it because it is complex or because they do not understand exactly what that data collection entails. By contrast, seeing an AI chat put that information to use may be more impactful. Consider Snapchat’s AI chatbot, My AI. Although the Apple App Store informs users that Snapchat can access location data, figure 1 illustrates more concretely what access to location data means. For example, I asked My AI to recommend a good coffee shop nearby.

Figure 1 - My AI and Location Data

App users may understand that an app has access to location data, but having a conversation with a human-seeming chatbot that names specific neighborhood businesses better exemplifies what it means to share location data with an app. This may help people become more aware of privacy issues, which can lead to consumers being more careful about the information they share and taking steps to protect their privacy.

The Privacy Cons

When talking to an AI chatbot, users may feel comfortable sharing more information than they ordinarily would if the chatbot sounds human-like and uses first- or second-person language. It may feel as though the information provided to the chatbot is being shared with a friendly person rather than with an enterprise that may use those data for a variety of purposes. For example, people may chat with a bot for a while and eventually reveal sensitive information (e.g., a health condition they are struggling with). Most chatbots do not warn users when they provide sensitive information.
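The gap described above—that most chatbots do not alert users before sensitive details leave their hands—can be narrowed with a simple input screen on the enterprise side. The sketch below is a minimal, hypothetical example (the keyword patterns and warning text are assumptions, not any vendor's actual implementation); a real deployment would rely on a proper data-loss-prevention or classification service rather than a hand-rolled keyword list:

```python
import re
from typing import Optional

# Illustrative patterns only. A production system would use a dedicated
# DLP/classification service instead of this hypothetical keyword list.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US SSN-like identifier
    re.compile(r"\b(diagnos\w*|prescri\w*|depress\w*|medication)\b", re.I),
]

WARNING = (
    "Heads up: your message may contain sensitive personal information. "
    "It will be sent to a third-party AI service and may be retained."
)

def check_for_sensitive_input(message: str) -> Optional[str]:
    """Return a warning string if the message looks sensitive, else None."""
    if any(pattern.search(message) for pattern in SENSITIVE_PATTERNS):
        return WARNING
    return None
```

A consumer-facing chat UI could call this check before forwarding each message to the model and surface the returned warning, giving the user a chance to reconsider what they share.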

It is possible that an individual’s inputs or an AI platform’s outputs could be used to train future responses. This could mean that sensitive data or secret information shared with a chatbot might be shared with others or influence the outputs others receive. For example, although it has since changed its policy, OpenAI initially trained its models on data provided to ChatGPT.6 Once an AI has been trained on data, it is difficult to untrain it.7

Privacy notices are often difficult to understand, and consumers may be tempted to bypass them in favor of a conversational, easy-to-understand answer about privacy from an AI chatbot. Users may assume that the information the chatbot provides is comprehensive, but someone who does not dig further into a provider’s privacy notice may have an inaccurate idea of what information the provider collects.

For example, I asked Snapchat’s chatbot what information the app collects, and it gave me incomplete information. Figure 2 shows My AI listing the data it collects, but the list is not complete: Figure 1 establishes that the app also collects location data.

Figure 2 - Asking What Data Snapchat Collects

Takeaways for Practitioners

Privacy professionals who work at enterprises that leverage AI chatbots should consider the ways in which the chatbot provides information about privacy practices. Responses to questions about what data are collected, how data are used, and what rights consumers have must be accurate and thorough to avoid misleading data subjects.

Enterprises that use AI chatbots must also consider the ages of their users and ensure that minors are protected. For example, 59% of teens say they use Snapchat,8 but only users who pay for a Snapchat+ subscription can remove the My AI feature.9 This means that minors, and parents of minors, who use the free version of Snapchat cannot remove the My AI function. Teens, who may not understand its flaws or how their information could be used, are turning to My AI for mental health support, which can be dangerous.10 Enterprises that provide AI services to minors should inform them about the consequences of providing sensitive information and reinforce that outputs may not be accurate.

I asked My AI how to deal with a headache, and although it initially declined to provide medical advice, it eventually recommended medications I could take (figure 3). Although the recommendations were relatively harmless, over-the-counter medications, minors may not know whether these recommended medications could interact with any medications they have been prescribed.

Figure 3 - My AI Recommending Medication

Even enterprises that do not leverage generative AI need to explore how it may affect their day-to-day operations. Staff may feel as though asking ChatGPT for help drafting an email is akin to asking a colleague for help, but the two are not comparable: ChatGPT is a third-party service and does not abide by workplace confidentiality norms. It is necessary to have a policy around the use of generative AI, and creating one requires input from a variety of departments, not just the privacy team. Managers should also establish guidelines about what kinds of work can leverage AI tools (e.g., it may be acceptable to use ChatGPT to draft a social media post but unacceptable to use it to draft a confidential email). Not having a policy around the use of GPTs and other AI tools can result in staff misusing them, which could lead to privacy concerns.

When it comes to privacy awareness training, privacy professionals can leverage the anthropomorphic elements of AI to show what can happen if information is improperly disclosed. Nonprivacy staff may not understand why certain data are considered sensitive or why it matters if information is breached, but examples such as Snapchat’s My AI can effectively illustrate the consequences of improperly sharing information such as location data.

Conclusion

Many enterprises leverage human-like AI tools, and many employees may rely on them for their work. From a privacy perspective, the anthropomorphization of AI has made it seem simultaneously more and less trustworthy. Having a policy around AI use and ensuring that consumers understand the implications of sharing data with AI can promote more effective and trustworthy AI tools.

Endnotes

1 Creamer, E.; “Authors File a Lawsuit Against OpenAI for Unlawfully ‘Ingesting’ Their Books,” The Guardian, 5 July 2023
2 Mukherjee, S.; Vagnoni, G.; “Italy Restores ChatGPT After OpenAI Responds to Regulator,” Reuters, 28 April 2023
3 Merriam-Webster, “Anthropomorphize”
4 Xiang, C.; “‘He Would Still Be Here’: Man Dies by Suicide After Talking With AI Chatbot, Widow Says,” Vice, 30 March 2023
5 Ibid.
6 OpenAI, “Enterprise Privacy”
7 Claburn, T.; “Funnily Enough, AI Models Must Follow Privacy Law, Including the Right to Be Forgotten,” The Register, 13 July 2023
8 Vogels, E.; R. Gelles-Watnick; N. Massarat; “Teens, Social Media and Technology 2022,” Pew Research Center, 10 August 2022
9 Snapchat, “How Do I Unpin or Remove My AI From My Chat Feed?”
10 Rudy, M.; “Teens Are Turning to Snapchat’s ‘My AI’ for Mental Health Support—Which Doctors Warn Against,” Fox News, 5 May 2023

Safia Kazi, CSX-F, CIPT

Is the privacy professional practices lead at ISACA®. In this role, she focuses on the development of ISACA’s privacy-related resources, including books, white papers, and review manuals. Kazi has worked at ISACA for nine years, previously working on the ISACA® Journal and developing the award-winning ISACA Podcast. In 2021, she received the AM&P Network’s Emerging Leader Award, which recognizes innovative association publishing professionals under the age of 35.