School-shooting lawsuits accuse OpenAI of hiding violent ChatGPT users
OpenAI could have prevented one of the deadliest mass shootings in Canada's history, a string of seven lawsuits filed Wednesday in a California court alleged. Ultimately, the AI company overruled recommendations from its internal safety team. More than eight months before the school shooting, trained experts had flagged a ChatGPT account later linked to the shooter as posing a credible threat of real-world gun violence. In such cases, OpenAI is expected to notify police, which in this instance already had a file on the shooter and had previously removed guns from their home. But that is not what happened.

Apparently, OpenAI decided that the user's privacy, and the potential stress of an encounter with cops, outweighed the risks of violence, whistleblowers told The Wall Street Journal. Leaders rejected the safety team's urgings and declined to report the user to law enforcement. Instead, OpenAI simply deactivated the account, then quickly followed up to tell the shooter how to get back on ChatGPT and continue planning by signing up with another email address, the lawsuits alleged.