
According to media reports, OpenAI is preparing to launch a new AI assistant product codenamed "Operator" that can automatically carry out complex operations such as writing code, booking travel, and shopping online. Employees briefed on the matter say OpenAI's leadership plans to release the product in January 2025, initially as a research preview and developer tool, with an API open to developers.
According to the reports, OpenAI has been running several research projects related to intelligent agents. One source said the product closest to completion is a general-purpose tool that executes tasks in a web browser.
An AI agent is an intelligent entity that can perceive its environment, make decisions, and take actions, working toward a given goal step by step through independent reasoning and tool calling. For consumers it enables personalized applications; for businesses it offers cost reduction and efficiency gains. For ordinary users, the core capability of an AI assistant is to operate the phone autonomously and help complete complex, multi-step tasks.
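As a purely illustrative sketch of the perceive-decide-act loop described above (the tool set and the decide() policy here are hypothetical placeholders, not any vendor's actual product):

```python
# Illustrative only: a minimal perceive-decide-act agent loop with tool calling.
from typing import Callable

# Hypothetical tools the agent can call; a real assistant would drive apps or a browser.
TOOLS: dict[str, Callable[[str], str]] = {
    "open_app": lambda arg: f"opened app '{arg}'",
    "search_web": lambda arg: f"search results for '{arg}'",
}

def decide(goal: str, observation: str) -> tuple[str, str] | None:
    """Stand-in for the model's reasoning step: pick the next tool and its argument,
    or return None once the goal is judged complete."""
    if "results" not in observation:
        return ("search_web", goal)
    return None

def run_agent(goal: str, max_steps: int = 5) -> str:
    observation = "initial state"            # perceive
    for _ in range(max_steps):
        action = decide(goal, observation)   # decide
        if action is None:
            break
        tool, arg = action
        observation = TOOLS[tool](arg)       # act via a tool call
    return observation

if __name__ == "__main__":
    print(run_agent("book a high-speed rail ticket"))
```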
OpenAI CEO Sam Altman has already hinted at this direction. A few weeks ago, in an "Ask Me Anything" session on Reddit, he said, "We will have better and better models, but I believe the next major breakthrough will be AI assistants." At a press event before the company's annual developer day last month, Chief Product Officer Kevin Weil said, "I think 2025 will be the year when agent systems finally enter the mainstream."
From OpenAI's perspective, the company faces growing pressure to commercialize: incremental improvements to ChatGPT may not be enough to persuade users to pay higher prices, and executives urgently need a breakthrough product to prove that the enormous investment in AI development is worthwhile.
OpenAI has already open-sourced Swarm, a multi-agent collaboration framework that lets several agents work together to complete tasks more efficiently. Its o1 model strengthens reasoning, making significant progress on complex problem solving and natural user interaction, which makes it better suited to AI agent scenarios.
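For reference, the minimal multi-agent handoff example below closely follows the usage shown in OpenAI's public Swarm README; it assumes the swarm package is installed and an OpenAI API key is set in the environment:

```python
# Minimal Swarm example: agent_a hands the conversation off to agent_b via a function call.
from swarm import Swarm, Agent

client = Swarm()  # uses the OpenAI API key from the environment

def transfer_to_agent_b():
    """Called by the model when it decides agent B should take over."""
    return agent_b

agent_a = Agent(
    name="Agent A",
    instructions="You are a helpful agent.",
    functions=[transfer_to_agent_b],
)

agent_b = Agent(
    name="Agent B",
    instructions="Only speak in haikus.",
)

response = client.run(
    agent=agent_a,
    messages=[{"role": "user", "content": "I want to talk to agent B."}],
)
print(response.messages[-1]["content"])
```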
AI assistants are regarded as a core stepping stone toward AGI, and in an era when hardware makers invoke AI at every turn, AI assistants may become the breakthrough point for on-device intelligence. Yongxing Securities stated that AI agents may capture the new entry point of the mobile internet, and the traffic-distribution landscape is expected to be reshaped; thanks to their strong interactivity and convenience, AI agents may be able to break down the natural barriers between different apps on the same device.
According to incomplete statistics compiled by the Science and Technology Innovation Board Daily, leading domestic and foreign manufacturers are racing to launch AI assistant products:
Microsoft recently quietly open-sourced the AI tool OmniParser, which helps users create personalized agents that operate personal computers. On October 22nd, Microsoft announced the integration of 10 autonomous AI agents into Dynamics 365, supporting OpenAI's latest o1 model, with self-learning capabilities and the ability to automatically execute complex cross-platform business processes. In September, Microsoft launched a benchmark framework called Windows Agent Arena, which also belongs to the field of AI assistant development.
According to The Information, Google plans to preview its large-scale action model "Project Jarvis" in December, which will help users perform tasks such as "collecting research, purchasing products, or booking flights".
On October 22nd, Anthropic added a new capability to its Claude models, Computer Use, which lets the AI operate a computer the way a human does. Claude 3.5 Sonnet is the first model to support computer control, able to simulate human computer operations such as moving the cursor, clicking buttons, and typing text.
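A rough sketch of a single Computer Use request with Anthropic's Python SDK is shown below; it is based on the published beta interface, and in practice the model replies with tool-use actions (mouse moves, clicks, keystrokes) that the caller's own code must execute on a real screen and feed back in a loop, which is omitted here:

```python
# Sketch of one Computer Use request (Anthropic beta, October 2024 interface).
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.beta.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=[{
        "type": "computer_20241022",      # built-in computer-control tool
        "name": "computer",
        "display_width_px": 1024,         # resolution of the screen being controlled
        "display_height_px": 768,
    }],
    messages=[{"role": "user", "content": "Open a browser and search for train tickets."}],
    betas=["computer-use-2024-10-22"],    # opt-in beta flag
)
print(response.content)  # contains the model's proposed actions as tool_use blocks
```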
Apple has chosen to integrate Siri with ChatGPT to enable smarter human-computer interaction. Some netizens have also noticed that Apple quietly released two implementations of Ferret-UI (based on Gemma 2B and Llama 8B respectively), a technology Apple unveiled in May this year that allows AI to understand mobile phone screens.
Huawei has published new research that allows AI to operate mobile phones like humans: its team proposes a phone-control architecture called Lightweight Multi-modal App Control (LiMAC).
Chinese unicorn Zhipu AI has launched AutoGLM, an AI assistant tool that requires no manual operation: users simply speak a command into the phone, and it automatically opens the relevant apps to shop online, order takeout, book high-speed rail tickets, and even send WeChat messages, grab red envelopes, comment on Moments, organize notes, generate guides, and summarize papers.
CITIC Securities stated that on-device AI assistant technologies such as AutoGLM shorten the interaction path; the ability to accept voice commands and complete complex operations automatically will bring great convenience to consumers, and is expected to become a highlight feature of AI terminals that prompts consumers to upgrade their devices.
Huatai Securities likewise stated that the rollout of AI assistants will create industry opportunities at multiple levels; among them, Agent + terminal is expected to drive a transformation of human-computer interaction. Beyond changes in terminal sales volume and pricing, it may have a more profound impact on the business model of terminal applications.