
On June 28th local time, Google released its Gemma 2 language models to researchers and developers worldwide in two sizes: 9 billion parameters (9B) and 27 billion parameters (27B).
Google stated that the Gemma 2 27B model delivers performance comparable to mainstream models more than twice its size, and that it can run on a single Nvidia H100 Tensor Core GPU or a TPU host, greatly reducing deployment costs.
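For readers who want to try the released weights, below is a minimal sketch of loading the 27B model on a single GPU with the Hugging Face transformers library. The model identifier "google/gemma-2-27b-it", the bf16 precision choice, and the claim that the weights fit in one H100's memory are assumptions for illustration, not details taken from the announcement.

```python
# Minimal sketch: load Gemma 2 27B on a single GPU and generate text.
# Assumes access to the Hugging Face Hub checkpoint "google/gemma-2-27b-it"
# and recent versions of transformers and accelerate installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-27b-it"  # assumed Hub identifier for the instruction-tuned 27B model

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 roughly halves memory vs. fp32, helping fit on one 80 GB H100
    device_map="auto",           # let accelerate place the weights on the available GPU
)

prompt = "Summarize what the Gemma 2 release includes."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```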
Google also plans to release a 2.6-billion-parameter Gemma 2 model in the coming months, which is better suited to artificial intelligence applications on smartphones.