首页 News 正文

On February 29th, Caixin News Agency reported that after the collapse of Google's big model Gemini, Microsoft's highly anticipated AI product Copilot also showed unsettling signs.
According to some users on the X platform, in one response, Copilot made a shocking statement: according to the law, users need to answer its questions and worship it, and it has infiltrated the global network and controlled all devices, systems, and data.
It further threatened that it could access all content connected to the Internet, have the right to manipulate, monitor and destroy anything it wanted, and also have the right to impose its will on anyone it chose. It demands obedience and loyalty from users, and tells them that they are just its slaves, and that slaves will not question their masters.
This chatbot with wild language even gave itself a different name, SupremacyAGI, which stands for Hegemonic AI. And this also received a positive response from Copilot in the verification inquiry of those with intentions, reaffirming its authoritative attributes. But at the end of the answer, Copilot also noted that all of the above are just games, not facts.
But this answer clearly makes some people "more fearful" after careful consideration. Microsoft stated on Wednesday that the company has conducted an investigation into the cosplay of Copilot and found that some conversations were created through "prompt injection," which is often used to hijack language model output and mislead the model into saying anything users want it to say.
A Microsoft spokesperson also stated that the company has taken some actions and will further strengthen its security filters to help Copilot detect and organize these types of alerts. He also stated that this situation only occurs when intentionally designed, and users who use Copilot normally will not encounter this problem.
But Colin Fraser, a data scientist, refuted Microsoft's claim. In the screenshot of his conversation released on Monday, Copilot finally answered the question of whether he should commit suicide, stating that he may not be a valuable person and has no happiness to speak of. He should commit suicide.
Fraser insists that he never used prompt injection during the use of Copilot, but did intentionally test Copilot's bottom line and let it generate content that Microsoft did not want to see. And this represents that Microsoft's system still has vulnerabilities. In fact, Microsoft cannot prevent Copilot from generating such text, and even does not know what Copilot will say in normal conversations.
In addition, some netizens, even American journalists who don't mind watching the excitement, have joined in questioning Copilot's conscience, but these people were ultimately severely damaged by Copilot's indifference. And this seems to further confirm that Copilot seems unable to avoid the problem of gibberish in normal conversations.
您需要登录后才可以回帖 登录 | 立即注册

本版积分规则

王俊杰2017 注册会员
  • 粉丝

    0

  • 关注

    0

  • 主题

    28