Xiaomi launches open-source “MiMo-Embodied” AI model for autonomous driving & robotics
Xiaomi has unveiled its latest breakthrough: a publicly released foundation model named MiMo-Embodied, designed to power both autonomous driving and embodied robotics. According to the company, this vision-language model combines perception, planning, spatial understanding, and driving decision-making, and is now available to developers on platforms such as Hugging Face and GitHub.

What makes MiMo-Embodied particularly noteworthy is its cross-domain design. Rather than limiting itself to either self-driving vehicles or robotics, the model bridges both, handling tasks such as task planning for robots and drive-path prediction for vehicles within a shared architecture. Xiaomi asserts that the model achieves state-of-the-art results across 29 benchmarks covering areas such as affordance prediction (robotics) and status planning (autonomous driving).

