
After the general-purpose model boom, a wave of enterprise-level models may be on the way!
On March 22nd local time, Japan's Hitachi Group announced a partnership with Nvidia that combines Hitachi's operational-technology solutions and its leading position in key industries such as energy, mobility, and connected systems with Nvidia's expertise in generative AI, in order to accelerate social innovation and digital transformation. Through this collaboration, generative AI will now be extended to the industrial field by capturing the large volumes of data generated in operational technology (from sensors, devices, and industrial machines) and applying it to streamline performance, surface insights, and let organizations automate operations, something that was not possible before. In addition, Jensen Huang recently released the fifth version of Nvidia's AI enterprise platform, which features a new technology that Nvidia executives describe as Nvidia Inference Microservices (NIM).
According to foreign media reports, as artificial intelligence (AI) is woven into the fabric of industry, a new contender has emerged and come to dominate the conversation: ChatGPT Enterprise, OpenAI's enterprise subscription service. The plan is OpenAI's brainchild, positioned not merely as a product but as a push to usher enterprises into a new era of cognitive capability. OpenAI has equipped ChatGPT Enterprise with unconventional features that change the way businesses interact with AI.
It is also worth noting that the Enterprise Connect (EC) conference will be held from March 25th to 28th, 2024 at the Gaylord Palms in Orlando, Florida. Microsoft will launch two new Copilot-enabled devices in April: the Surface Pro 10 and Surface Laptop 6. And on Friday, US time, shares of Astera Labs, which supplies connectivity hardware for cloud service providers and enterprise data infrastructure, surged more than 9%.
So, will enterprise-level large models also usher in a wave of their own?
Giants make a collective push

Enterprise-level large models may be where the money comes in fastest.
Nvidia is beginning to push at the enterprise level. On March 22nd local time, Hitachi announced a partnership with Nvidia that combines Hitachi's operational-technology solutions and its leading position in key industries such as energy, mobility, and connected systems with Nvidia's expertise in generative AI, accelerating social innovation and digital transformation. Through this collaboration, generative AI will be extended to the industrial sector by capturing the large volumes of data generated in operational technology and applying it to streamline performance, surface insights, and let organizations automate operations, something that was not possible before.
Recently, Nvidia released the fifth version of its AI enterprise platform, featuring a new technology that Nvidia executives describe as Nvidia Inference Microservices (NIM). NIM is built for portability and control, supporting model deployment across a range of infrastructure, from local workstations to the cloud to on-premises data centers. NIM is part of NVIDIA AI Enterprise and is built on enterprise-grade base containers, providing a solid foundation for enterprise AI software through feature branches, rigorous validation, enterprise support with service-level agreements, and regular security updates. This comprehensive support structure and optimization capability underscores NIM's role as a key tool for deploying efficient, scalable, customized AI applications in production.
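The article itself contains no code, but as a rough illustration of the microservice pattern described above, the following sketch shows what calling a locally deployed, containerized inference service can look like. It assumes the service exposes an OpenAI-compatible chat endpoint on localhost; the endpoint address, API key, and model name are placeholders for illustration, not details taken from Nvidia's documentation.

# Minimal sketch: querying a locally deployed inference microservice that
# exposes an OpenAI-compatible API. The base_url, api_key, and model name
# are illustrative placeholders, not values from any vendor documentation.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local microservice endpoint
    api_key="not-needed-for-local",       # many local deployments ignore the key
)

response = client.chat.completions.create(
    model="example-llm",  # hypothetical model name served by the container
    messages=[
        {"role": "user", "content": "Summarize yesterday's sensor anomalies."}
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)

Because the endpoint follows a widely used API shape, the same client code can point at a workstation, an on-premises cluster, or a cloud deployment by changing only the base URL, which is the portability argument the paragraph above makes.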
As of the close of US trading on the 22nd local time, Nvidia was up 3.12%, with a gain of more than 90% so far this year and a total market value of $2.36 trillion (approximately RMB 17 trillion).
In addition, according to foreign media reports, ChatGPT Enterprise is beginning to dominate the conversation. OpenAI has reportedly started offering customized pricing for the service, adjusted to a company's size and intended use. The report describes it as an auction of intelligence: to start bidding, companies must provide OpenAI with detailed information ranging from headcount to industry sector, and in return OpenAI offers customization, tailoring a set of services for each enterprise.
Analysts predict that many companies will deploy small language models internally so that they can fine-tune them on company data without moving sensitive information to public clouds. In addition, running models in their own data centers is sometimes cheaper than running them in the cloud. Robin Bordoli, Chief Marketing Officer of Weights and Biases, a maker of platforms for training artificial intelligence models, said that having tools that automate model-related processes means traditional software engineers can do the work instead of hard-to-find AI experts.
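As a rough illustration of the on-premises fine-tuning the analysts describe, the sketch below uses the Hugging Face transformers, datasets, and peft libraries with a small open model and low-rank adapters, so that company text never leaves local storage. None of these library choices come from the article, and the model id, file path, and hyperparameters are placeholders.

# Minimal sketch: LoRA fine-tuning of a small open model on internal data,
# entirely on local hardware. Model id, dataset path, and hyperparameters
# are illustrative placeholders, not recommendations from the article.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_id = "distilgpt2"  # stand-in for any small open model
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the base model with low-rank adapters so only a small set of
# parameters is trained while the base weights stay frozen.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Company data stays on local disk; "internal_docs.txt" is a placeholder path.
dataset = load_dataset("text", data_files={"train": "internal_docs.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-local", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("finetuned-local")  # adapters remain inside the data center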
Weights and Biases has integrated its software with Nvidia's inference engine, allowing developers to train and run inference on a platform that supports 30 base models. Bordoli said Weights and Biases currently has 1,000 clients, many of them government agencies and life-science organizations.
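For readers unfamiliar with what a model-training platform like Weights and Biases does during a run, the sketch below shows its basic experiment-tracking pattern: start a run, log metrics, finish the run. The project name and metric values are invented for illustration, and the Nvidia integration mentioned above is not shown.

# Minimal sketch of experiment tracking with the wandb library.
# Project name and metric values are illustrative only.
import random
import wandb

run = wandb.init(project="internal-llm-finetune", config={"lr": 2e-4, "epochs": 1})

for step in range(100):
    fake_loss = 2.0 / (step + 1) + random.random() * 0.05  # stand-in for a real training loss
    wandb.log({"train/loss": fake_loss, "step": step})

run.finish()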
It is worth noting that Microsoft has also made significant moves. According to the latest news, Microsoft will launch two new Copilot-enabled devices in April: the Surface Pro 10 and Surface Laptop 6. The practical application of generative AI has shifted from an abstract future concept to concrete reality in just a few months, and enterprises and organizations of all sizes are busy working out whether, and how, AI can make their employees more productive and efficient. For organizations that use Microsoft software, the application of AI in business settings is being led by the Microsoft Copilot platform. The trend of AI reshaping how business is done seems unstoppable.
The eve of a multimodal explosion

Various signs indicate that, after the rise of large models, multimodality is on the verge of an explosion.
The essence of multimodality is to use perceptual channels richer than language alone, such as vision, hearing, touch, and taste, to mimic the human ability to understand and express information. An ideal multimodal large model can generalize and generate across modalities, which is closer to the way humans perceive the world.
CICC believes multimodality may further raise the ceiling of AI capability. The industry is also actively exploring feasible technical paths for multimodal large models, trying to replicate the success of large language models in the multimodal field, and the possibilities for future industrial development remain wide open.
On the evening of March 20th, AI company 4Paradigm released its first annual results since going public. According to the financial report, 4Paradigm posted revenue of 4.2 billion yuan in 2023, up 36.4% year-on-year; gross profit of 1.98 billion yuan, up 33.2% year-on-year, for a gross margin of 47.1%; and an adjusted net loss of 415 million yuan, 88.88 million yuan less than in 2022 and a year-on-year decrease of 17.6%. The company targets the B-end market, focusing on industry models and differentiation.
At the results briefing that day, Dai Wenyuan, chairman of the board and CEO, said that 4Paradigm's underlying logic is to use AI technology to help industries of all kinds discover more and more patterns and form larger models, namely industry models, so as to improve enterprises' production and operational efficiency. Given the company's positive results, many market participants believe the spring of large models is really arriving.
At a recent media briefing, IBM announced that its 2024 Greater China strategy will continue to focus on hybrid cloud and AI, enterprise-level AI applications, and building an open partner ecosystem.
Zhou Hongyi, a member of the National Committee of the Chinese People's Political Consultative Conference and founder of 360 Group, argued in his proposal on deepening multi-scenario applications of AI to support the vertical and industrial development of large models that 2024 is the first year of large-model application scenarios, and that China can forge a path of large-model development with Chinese characteristics. An important direction for China, he said, is to leverage its advantages in industries and scenarios, combine large models with business processes and product functions, pursue multi-scenario, verticalized, and industrialized deployment, and help accelerate the formation of new quality productive forces.