cs.CL (2026-04-01)

📊 21 papers | 🔗 2 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (13 🔗1) · Pillar 2: RL & Architecture (7 🔗1) · Pillar 7: Motion Retargeting (1)

🔬 Pillar 9: Embodied Foundation Models (13 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 1 | Adapting Text LLMs to Speech via Multimodal Depth Up-Scaling | Proposes multimodal depth up-scaling to adapt text LLMs to speech tasks while mitigating degradation of their text capabilities. | large language model, multimodal | |
| 2 | Multimodal Analysis of State-Funded News Coverage of the Israel-Hamas War on YouTube Shorts | Proposes a multimodal analysis pipeline for examining state-funded media coverage of the Israel-Hamas war on YouTube Shorts. | multimodal | |
| 3 | True (VIS) Lies: Analyzing How Generative AI Recognizes Intentionality, Rhetoric, and Misleadingness in Visualization Lies | Analyzes generative AI's ability to recognize intentionality, rhetoric, and misleadingness in visualization lies. | large language model, multimodal | |
| 4 | Speech LLMs are Contextual Reasoning Transcribers | Proposes CoT-ASR, using chain-of-thought to improve contextual-reasoning transcription in speech LLMs. | large language model, chain-of-thought | |
| 5 | LLM REgression with a Latent Iterative State Head | Proposes RELISH, a lightweight iterative state head for text regression with LLMs. | large language model | |
| 6 | Temporal Dependencies in In-Context Learning: The Role of Induction Heads | Reveals temporal dependencies in in-context learning: the role of induction heads in sequence recall. | large language model | |
| 7 | Universal YOCO for Efficient Depth Scaling | Proposes Universal YOCO to address the inference inefficiency of standard Transformers. | large language model | |
| 8 | Uncertainty-Aware Variational Reward Factorization via Probabilistic Preference Bases for LLM Personalization | Proposes VRF, an uncertainty-aware variational reward factorization for LLM personalization. | large language model | |
| 9 | Positional Cognitive Specialization: Where Do LLMs Learn To Comprehend and Speak Your Language? | Proposes CogSym, enabling efficient language transfer and fine-tuning of LLMs through a cognitive-specialization perspective. | large language model | |
| 10 | From Early Encoding to Late Suppression: Interpreting LLMs on Character Counting Tasks | Reveals an "early encoding, late suppression" phenomenon in LLM character counting and identifies interfering negative circuits. | large language model | |
| 11 | More Human, More Efficient: Aligning Annotations with Quantized SLMs | Aligns annotations with quantized SLMs for more human-like and more efficient automatic evaluation and annotation. | large language model | |
| 12 | A Japanese Benchmark for Evaluating Social Bias in Reasoning Based on Attribution Theory | Proposes JUBAKU-v2, a Japanese benchmark for evaluating social bias in reasoning based on attribution theory. | large language model | |
| 13 | Locally Confident, Globally Stuck: The Quality-Exploration Dilemma in Diffusion Language Models | Targets the quality-exploration dilemma in diffusion language models with a Metropolis-Hastings-based decoding method. | large language model | |

🔬 Pillar 2: RL & Architecture (7 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 14 | Brainstacks: Cross-Domain Cognitive Capabilities via Frozen MoE-LoRA Stacks for Continual LLM Learning | Brainstacks: continual LLM learning of cross-domain cognitive capabilities via frozen MoE-LoRA stacks. | DPO, large language model, instruction following | |
| 15 | Agentic Tool Use in Large Language Models | Survey: methods for and evolution of agentic tool use in large language models. | policy learning, large language model | |
| 16 | Embarrassingly Simple Self-Distillation Improves Code Generation | Proposes SSD, a simple self-distillation method that improves code generation without external resources. | reinforcement learning, distillation, large language model | |
| 17 | LangMARL: Natural Language Multi-Agent Reinforcement Learning | LangMARL: a natural-language multi-agent reinforcement learning framework addressing cooperative policy evolution for LLM agents in dynamic environments. | reinforcement learning, large language model | |
| 18 | TR-ICRL: Test-Time Rethinking for In-Context Reinforcement Learning | TR-ICRL: a test-time rethinking framework for in-context reinforcement learning that improves performance on reasoning and knowledge-intensive tasks. | reinforcement learning, large language model | |
| 19 | Agent Q-Mix: Selecting the Right Action for LLM Multi-Agent Systems through Reinforcement Learning | Agent Q-Mix: selects optimal actions for LLM multi-agent systems via reinforcement learning. | reinforcement learning, large language model | |
| 20 | Optimsyn: Influence-Guided Rubrics Optimization for Synthetic Data Generation | Optimsyn: influence-guided rubric optimization for synthetic data generation, improving downstream task performance. | reinforcement learning, large language model | |

🔬 Pillar 7: Motion Retargeting (1 paper)

| # | Title | One-line Summary | Tags | 🔗 |
|---|---|---|---|---|
| 21 | Emotion Entanglement and Bayesian Inference for Multi-Dimensional Emotion Understanding | Proposes the EmoScene benchmark and combines emotion entanglement with Bayesian inference to improve multi-dimensional emotion understanding. | motion prediction, large language model | |
