
Do you still remember Mobile ALOHA, the robot with an "eye for housework"?
Today, Google DeepMind teamed up with the Chinese researchers at Stanford to showcase version 2.0 of Mobile ALOHA (hereinafter referred to as ALOHA 2).
Compared with the previous generation, ALOHA 2 refines the hardware (improved grippers, gravity compensation, frame, and cameras) to deliver better performance (a stronger grip and faster response), improved ergonomics, and greater stability.
In other words, the upgraded ALOHA 2 can perform more complex and delicate actions: tossing objects, "stealing" money, putting contact lenses on a doll (for now it only dares to practice on dolls...), opening milk cartons, pouring cola, and sorting toys.
Why optimize the hardware? The research team notes that diverse demonstration datasets have driven major progress in robot learning, but the flexibility and scale of such data can be limited by hardware cost, hardware robustness, and the difficulty of teleoperation. Put differently, better hardware broadens the scenarios in which robots can be used, helps them complete more complex tasks, and allows richer data to be collected, which in turn feeds back into robotics research.
To accelerate research on large-scale bimanual manipulation, all of ALOHA 2's hardware designs have been open-sourced with detailed tutorials, along with a system-identified MuJoCo model of ALOHA 2.
Data has always been a critical weakness in robotics research, and simulated and synthetic data will play a crucial role in solving robot dexterity, and perhaps even computer vision as a whole.
The MuJoCo model is very useful for teleoperation and learning in simulation. Compared with the previously released ALOHA model, it has higher physical accuracy and visual fidelity, allowing simulated data to be collected quickly, intuitively, and at scale.
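To give a sense of how such a model is typically used, below is a minimal sketch of loading an MJCF description with the standard MuJoCo Python bindings and stepping the physics. The file path "aloha2/scene.xml" is a placeholder for illustration only, not the actual filename in the open-source release.

```python
import mujoco

# Load the robot description from an MJCF XML file.
# NOTE: the path below is a hypothetical placeholder; substitute the
# scene file that ships with the open-source ALOHA 2 release.
model = mujoco.MjModel.from_xml_path("aloha2/scene.xml")
data = mujoco.MjData(model)

# Step the simulation for a short rollout and read back the state,
# e.g. to collect simulated trajectories for learning.
for _ in range(1000):
    mujoco.mj_step(model, data)

print("simulated time:", data.time)
print("joint positions:", data.qpos)
```

In practice, a teleoperation or imitation-learning pipeline would wrap a loop like this, applying commanded joint targets each step and recording observations.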