| # | Title | Summary | Keywords | |
| --- | --- | --- | --- | --- |
| 1 | Teleoperator-Aware and Safety-Critical Adaptive Nonlinear MPC for Shared Autonomy in Obstacle Avoidance of Legged Robots | Proposes an adaptive nonlinear model predictive control approach for obstacle avoidance on quadruped robots. | quadruped, legged robot, legged locomotion | |
| 2 | UnderwaterVLA: Dual-brain Vision-Language-Action architecture for Autonomous Underwater Navigation | Proposes UnderwaterVLA for autonomous underwater navigation, improving task completion in complex waters. | MPC, model predictive control, vision-language-action | |
| 3 | VLA-Reasoner: Empowering Vision-Language-Action Models with Reasoning via Online Monte Carlo Tree Search | VLA-Reasoner: empowers vision-language-action models with reasoning via online Monte Carlo tree search. | manipulation, imitation learning, world model | |
| 4 | MimicDreamer: Aligning Human and Robot Demonstrations for Scalable VLA Training | MimicDreamer: aligns human and robot demonstrations to enable scalable VLA training. | manipulation, dreamer, egocentric | |
| 5 | Learning Multi-Skill Legged Locomotion Using Conditional Adversarial Motion Priors | Proposes a multi-skill quadruped locomotion learning framework based on conditional adversarial motion priors. | quadruped, legged robot, legged locomotion | |
| 6 | VLBiMan: Vision-Language Anchored One-Shot Demonstration Enables Generalizable Bimanual Robotic Manipulation | VLBiMan: a vision-language-anchored one-shot demonstration approach that enables generalizable bimanual robotic manipulation. | manipulation, bi-manual, dual-arm | |
| 7 | Developing Vision-Language-Action Model from Egocentric Videos | Proposes a method for training vision-language-action models from egocentric videos without manual annotation. | manipulation, teleoperation, egocentric | |
| 8 | Action-aware Dynamic Pruning for Efficient Vision-Language-Action Manipulation | Proposes action-aware dynamic pruning (ADP) to improve the efficiency of vision-language-action models for robotic manipulation. | manipulation, vision-language-action, VLA | |
| 9 | Actions as Language: Fine-Tuning VLMs into VLAs Without Catastrophic Forgetting | Proposes VLM2VLA, which fine-tunes VLMs into VLAs while avoiding catastrophic forgetting. | teleoperation, vision-language-action, VLA | |
| 10 | EgoDemoGen: Novel Egocentric Demonstration Generation Enables Viewpoint-Robust Manipulation | EgoDemoGen: generates novel egocentric demonstrations to enable viewpoint-robust robotic manipulation. | manipulation, imitation learning, egocentric | |
| 11 | Pixel Motion Diffusion is What We Need for Robot Control | DAWN: a unified framework for robot control based on pixel motion diffusion. | manipulation, motion diffusion, language conditioned | ✅ |
| 12 | Effect of Gait Design on Proprioceptive Sensing of Terrain Properties in a Quadrupedal Robot | Shows that gait design affects a quadruped robot's proprioceptive sensing of terrain properties and proposes gait designs suited to planetary exploration. | quadruped, legged robot, locomotion | |
| 13 | ARMimic: Learning Robotic Manipulation from Passive Human Demonstrations in Augmented Reality | ARMimic: learns robotic manipulation from passive human demonstrations in augmented reality. | manipulation, teleoperation, imitation learning | |
| 14 | From Watch to Imagine: Steering Long-horizon Manipulation via Human Demonstration and Future Envisionment | Super-Mimic: combines human demonstrations with future envisionment for zero-shot imitation of long-horizon manipulation tasks. | manipulation, physically plausible, multimodal | |
| 15 | An Intention-driven Lane Change Framework Considering Heterogeneous Dynamic Cooperation in Mixed-traffic Environment | Proposes an intention-driven lane-change framework that accounts for heterogeneous dynamic cooperation in mixed-traffic environments. | model predictive control, motion planning, reinforcement learning | |
| 16 | Towards Developing Standards and Guidelines for Robot Grasping and Manipulation Pipelines in the COMPARE Ecosystem | Works toward standards and guidelines for robot grasping and manipulation pipelines within the COMPARE ecosystem. | manipulation, motion planning | |
| 17 | Multi-stage robust nonlinear model predictive control of a lower-limb exoskeleton robot | Proposes multi-stage robust nonlinear model predictive control to improve disturbance rejection in lower-limb exoskeleton control. | MPC, model predictive control | |
| 18 | DemoGrasp: Universal Dexterous Grasping from a Single Demonstration | DemoGrasp: a universal dexterous grasping method that learns from a single demonstration. | manipulation, dexterous hand, reinforcement learning | |
| 19 | RoboView-Bias: Benchmarking Visual Bias in Embodied Agents for Robotic Manipulation | RoboView-Bias: the first benchmark for evaluating visual bias in embodied agents for robotic manipulation. | manipulation | |
| 20 | SAGE: Scene Graph-Aware Guidance and Execution for Long-Horizon Manipulation Tasks | SAGE: a scene-graph-aware guidance and execution framework for long-horizon manipulation tasks. | manipulation | |
| 21 | Robot Learning from Any Images | RoLA: builds interactive, physics-enabled robot environments from arbitrary images, enabling large-scale robot data generation. | humanoid, sim-to-real | ✅ |
| 22 | HELIOS: Hierarchical Exploration for Language-grounded Interaction in Open Scenes | HELIOS: a hierarchical exploration method for language-grounded interaction in open scenes. | manipulation, mobile manipulation | |
| 23 | Empart: Interactive Convex Decomposition for Converting Meshes to Parts | Empart: an interactive convex decomposition tool for region-customized mesh simplification, improving robot simulation efficiency. | motion planning | |