The company wants manual auditors to pass the AI application
安全到达彼岸依
发表于 2023-10-26 07:57:16
1333
0
0
Enterprises weighing the risks and benefits of generative artificial intelligence (AI) technology are facing the challenge that social media platforms have long been striving to address: preventing technology from being maliciously exploited.
Drawing on the experience of these platforms, business technology leaders are starting to combine software based "guardrails" with manual auditors to limit their use within prescribed limits.
AI models such as OpenAI's GPT-4 have been trained through extensive internet content. With the correct prompts, large language models can generate a large amount of toxic content inspired by the darkest corners of the internet. This means that content auditing needs to occur at the source (i.e., when AI models are trained) and their heavily generated outputs.
TurboTax software developer Intuit Inc. (INTU), headquartered in Mountain View, California, recently released Intuit Assist, a generative AI based assistant that provides financial advice to customers. At present, this assistant is only available for a limited number of users, relying on large language models trained on internet data and fine-tuning models based on Intuit's own data.
Intuit Chief Information Security Officer Atticus Tysen
The company's Chief Information Security Officer, Atticus Tysen, stated that the company is currently planning to form a team of eight full-time auditors to review the content entering and exiting this large language model driven system, including helping to prevent employees from leaking sensitive company data.
Tysen said, "When we try to provide truly meaningful and specific answers around financial issues, we don't know how effective these models are. So for us, adding manpower to this loop is very important
Tysen stated that Intuit's self-developed content review system uses a separate large language model to automatically label content it deems offensive, such as profanity, and is currently in its early stages. He said, for example, if a customer asks questions unrelated to financial guidelines or attempts to launch a prompt injection attack, the customer will also be automatically banned by the system. These attacks may include inducing chatbots to disclose customer data or the way they operate.
Then, the manual auditor will be reminded to review the text and be able to send it to the model building team, thereby improving the system's ability to block or identify harmful content. If Intuit's customers believe that their prompt words have been incorrectly marked, or if they believe that the AI assistant has generated inappropriate content, they can also notify the company.
Although there is currently no company specializing in AI content auditing, Intuit is supplementing its workforce with contractors trained in social media content auditing. Like the so-called prompt word engineer, AI content reviewers may become part of a new type of job opportunity created by AI.
Tysen said that the ultimate goal of Caijie Group is to have its AI audit model complete most of the content audit work for its AI assistants, thereby reducing the harmful content that humans may come into contact with. But he said that at present, generative AI is not enough to completely replace manual auditors.
Social media companies such as Facebook and Instagram's parent company Meta have long relied on outsourced human auditors to review and filter offensive posts on their platforms, providing best practices and warnings for the future development path of AI auditing.
In recent years, AI companies like OpenAI have recruited personnel to review and classify harmful text obtained online and generated by AI itself. These classified paragraphs are used to create AI security filters for ChatGPT, preventing users of the chat robot from accessing similar content.
OpenAI also collaborated with its partner and biggest supporter Microsoft to develop what Microsoft calls Azure AI Content Safety service, which utilizes AI to automatically detect unsafe images and text, including hate, violence, sexual, and self harm content. Microsoft is using its security services to prevent harmful content from appearing in its generative AI tools, including GitHub Copilot and Copilot for Office software.
Eric Boyd, Vice President of Microsoft AI Platform Enterprise, said, "These AI systems are indeed quite powerful. With the right instructions, they can do various things
Other technology leaders are studying the potential of manual auditing or investing in third-party software like Microsoft. Analysts say that content security filters will soon become a necessary condition for businesses to register and use generative AI tools sold by any supplier.
Syneos Health, the Chief Information and Digital Officer of Syneos Health, a biopharmaceutical services company located in Morrisville, North Carolina, has stated that the company will consider hiring content auditors at some point next year. During this period, the training data used by the AI model will be reviewed one by one through manual feedback.
Pickett said, "We will do this in a precise surgical manner, but in a broader sense, some level of review and supervision has many benefits
Forrester analyst Brandon Purcell focuses on responsible and ethical AI usage issues, stating that people are increasingly interested in "responsible AI", with the aim of making AI algorithms more transparent or auditable and reducing the unintended negative consequences of AI.
He said, "Everyone is interested in this because they realize that if they don't do it well, they will face reputational, regulatory, and revenue risks
CandyLake.com 系信息发布平台,仅提供信息存储空间服务。
声明:该文观点仅代表作者本人,本文不代表CandyLake.com立场,且不构成建议,请谨慎对待。
声明:该文观点仅代表作者本人,本文不代表CandyLake.com立场,且不构成建议,请谨慎对待。
猜你喜欢
- Taobao launches live streaming full hosting service for corporate CEOs
- Baidu's big model ERNIE 4.0 Turbo is fully open to enterprise users
- Google reportedly plans to acquire cybersecurity startup Wiz for $23 billion, making it the largest acquisition in its history
- Luckin Coffee and 6 other coffee companies were interviewed
- Kanye's shopping receipts have been exposed! Miniso is suspected of leaking privacy, and the company has responded
- Several companies, including Jike and Tesla, have responded by denying illegal surveying and mapping under the pretext of intelligent driving of automobiles
- Is a well-known eVTOL company on the brink of bankruptcy?
- AI Agents: A New Track for Technology Companies to Compete in
- Baidu: Within 3 days of its release, over 5000 companies have queued up to apply for testing under the name 'Second Da'
- Intel releases new integrated enterprise AI solution
-
隔夜株式市場 世界の主要指数は金曜日に多くが下落し、最新のインフレデータが減速の兆しを示したおかげで、米株3大指数は大幅に回復し、いずれも1%超上昇した。 金曜日に発表されたデータによると、米国の11月のPC ...
- SNT
- 前天 12:48
- 支持
- 反对
- 回复
- 收藏
-
長年にわたって、昔の消金大手の捷信消金の再編がようやく地に着いた。 天津銀行の発表によると、同行は京東傘下の2社、対外貿易信託などと捷信消金再編に参加する。再編が完了すると、京東の持ち株比率は65%に達し ...
- SNT
- 前天 12:09
- 支持
- 反对
- 回复
- 收藏
-
【GPT-5屋台で大きな問題:数億ドルを燃やした後、OpenAIは牛が吹くのが早いことを発見した】OpenAIのGPT-5プロジェクト(Orion)はすでに18カ月を超える準備をしており、関係者によると、このプロジェクトは現在進 ...
- SNT
- 1 小时前
- 支持
- 反对
- 回复
- 收藏
-
【ビットコインが飛び込む!32万人超の爆倉】データによると、過去24時間で世界には32万7000人以上の爆倉があり、爆倉の総額は10億ドルを超えた。
- 断翅小蝶腥
- 3 天前
- 支持
- 反对
- 回复
- 收藏