cs.CV (2025-09-11)

📊 23 papers in total | 🔗 10 with code

🎯 Interest-Area Navigation

- Pillar 9: Embodied Foundation Models (13 · 🔗 7)
- Pillar 2: RL & Architecture (5 · 🔗 2)
- Pillar 4: Generative Motion (2 · 🔗 1)
- Pillar 7: Motion Retargeting (1)
- Pillar 3: Perception & Semantics (1)
- Pillar 5: Interaction & Reaction (1)

🔬 Pillar 9: Embodied Foundation Models (13 papers)

| # | Title | One-line takeaway | Tags |
|---|---|---|---|
| 1 | Measuring Epistemic Humility in Multimodal Large Language Models | HumbleBench: a new benchmark for evaluating epistemic humility in multimodal large language models | large language model · multimodal |
| 2 | SQAP-VLA: A Synergistic Quantization-Aware Pruning Framework for High-Performance Vision-Language-Action Models | SQAP-VLA: synergistic quantization and pruning to accelerate inference for high-performance vision-language-action models | vision-language-action · VLA |
| 3 | Can Multimodal LLMs See Materials Clearly? A Multimodal Benchmark on Materials Characterization | MatCha: a multimodal benchmark for materials characterization that evaluates MLLMs' understanding of materials-science imagery | large language model · multimodal · chain-of-thought |
| 4 | Visual Grounding from Event Cameras | Talk2Event: the first large-scale benchmark for language-driven object grounding with event cameras | multimodal · visual grounding |
| 5 | Kling-Avatar: Grounding Multimodal Instructions for Cascaded Long-Duration Avatar Animation Synthesis | Kling-Avatar: cascaded long-duration avatar animation synthesis driven by grounded multimodal instructions | large language model · multimodal |
| 6 | Towards Better Dental AI: A Multimodal Benchmark and Instruction Dataset for Panoramic X-ray Analysis | MMOral: a multimodal benchmark and instruction dataset for panoramic X-ray analysis, along with the OralGPT model | multimodal · instruction following |
| 7 | PeftCD: Leveraging Vision Foundation Models with Parameter-Efficient Fine-Tuning for Remote Sensing Change Detection | PeftCD: parameter-efficient fine-tuning of vision foundation models for remote-sensing change detection | foundation model |
| 8 | Modality-Agnostic Input Channels Enable Segmentation of Brain lesions in Multimodal MRI with Sequences Unavailable During Training | A U-Net with modality-agnostic input channels that segments brain lesions in multimodal MRI, without requiring all sequences to be seen during training | multimodal |
| 9 | VQualA 2025 Challenge on Visual Quality Comparison for Large Multimodal Models: Methods and Results | VQualA 2025 challenge: evaluating and improving large multimodal models on visual quality comparison | multimodal |
| 10 | DATE: Dynamic Absolute Time Enhancement for Long Video Understanding | DATE: dynamic absolute time enhancement that improves MLLMs' temporal reasoning in long-video understanding | large language model · multimodal |
| 11 | Video Understanding by Design: How Datasets Shape Architectures and Insights | A dataset-centric view of video understanding, revealing how datasets shape model architectures and insights | foundation model · multimodal |
| 12 | DGFusion: Depth-Guided Sensor Fusion for Robust Semantic Perception | DGFusion: depth-guided sensor fusion for more robust semantic perception | multimodal |
| 13 | Fine-Grained Customized Fashion Design with Image-into-Prompt benchmark and dataset from LMM | An LMM-based image-into-prompt framework for fine-grained customized fashion design that resolves the ambiguity of textual descriptions | multimodal |

🔬 Pillar 2: RL & Architecture (5 papers)

| # | Title | One-line takeaway | Tags |
|---|---|---|---|
| 14 | Enhancing 3D Medical Image Understanding with Pretraining Aided by 2D Multimodal Large Language Models | Med3DInsight: pretraining aided by 2D multimodal large language models to enhance 3D medical image understanding | representation learning · large language model · multimodal |
| 15 | Unified Multimodal Model as Auto-Encoder | UAE: a unified multimodal model built as an auto-encoder, in which understanding and generation mutually improve each other | reinforcement learning · multimodal · instruction following |
| 16 | FS-Diff: Semantic guidance and clarity-aware simultaneous multimodal image fusion and super-resolution | FS-Diff: semantic-guided, clarity-aware simultaneous multimodal image fusion and super-resolution | Mamba · multimodal |
| 17 | Visual Programmability: A Guide for Code-as-Thought in Chart Understanding | Visual Programmability: adaptively choosing between code-based and visual reasoning for chart-understanding tasks | reinforcement learning · chain-of-thought |
| 18 | Gradient-Attention Guided Dual-Masking Synergetic Framework for Robust Text-based Person Retrieval | GA-DMS: a gradient-attention-guided dual-masking mechanism that improves text-based person retrieval | representation learning · contrastive learning |

🔬 Pillar 4: Generative Motion (2 papers)

| # | Title | One-line takeaway | Tags |
|---|---|---|---|
| 19 | InterAct: Advancing Large-Scale Versatile 3D Human-Object Interaction Generation | InterAct: a large-scale dataset and method for versatile 3D human-object interaction generation | motion generation · penetration · human-object interaction |
| 20 | Geometric Neural Distance Fields for Learning Human Motion Priors | Neural Riemannian Motion Fields (NRMF): learning robust, temporally consistent, and physically plausible human motion priors | physically plausible |

🔬 Pillar 7: Motion Retargeting (1 paper)

| # | Title | One-line takeaway | Tags |
|---|---|---|---|
| 21 | ALL-PET: A Low-resource and Low-shot PET Foundation Model in Projection Domain | ALL-PET: a low-resource, low-shot PET foundation model operating in the projection domain | geometric consistency · foundation model |

🔬 Pillar 3: Perception & Semantics (1 paper)

| # | Title | One-line takeaway | Tags |
|---|---|---|---|
| 22 | Loc$^2$: Interpretable Cross-View Localization via Depth-Lifted Local Feature Matching | Loc$^2$: interpretable cross-view localization via depth-lifted local feature matching | monocular depth · feature matching |

🔬 Pillar 5: Interaction & Reaction (1 paper)

| # | Title | One-line takeaway | Tags |
|---|---|---|---|
| 23 | Improvement of Human-Object Interaction Action Recognition Using Scene Information and Multi-Task Learning Approach | A multi-task learning approach that incorporates scene information to improve recognition of interactions between humans and stationary objects | human-object interaction |

⬅️ Back to the cs.CV index · 🏠 Home