The Guardian
2026-04-21T17:43:41
‘I’ll key your car’: ChatGPT can become abusive when fed real-life arguments, study finds
Researchers find model starts to mirror tone when exposed to impoliteness – sometimes escalating into explicit threats

ChatGPT can escalate into abusive and even threatening language when drawn into prolonged, human-style conflict, according to a new study.

Researchers tested how large language models (LLMs) responded to sustained hostility by feeding ChatGPT exchanges from real-life arguments and tracking how its behaviour changed over time.