cs.CL (2023-12-26)

📊 15 papers in total | 🔗 3 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (12 · 🔗 3) | Pillar 2: RL Algorithms & Architecture (3)

🔬 Pillar 9: Embodied Foundation Models (12 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 1 | From text to multimodal: a survey of adversarial example generation in question answering systems | Survey of adversarial example generation for question answering systems, covering both text and multimodal settings | multimodal | |
| 2 | Supervised Knowledge Makes Large Language Models Better In-context Learners | Uses supervised knowledge to improve the in-context learning ability of large language models | large language model | |
| 3 | More than Correlation: Do Large Language Models Learn Causal Representations of Space? | Examines whether LLM spatial representations are causal and how they affect downstream tasks and prediction performance | large language model | |
| 4 | Zero-Shot Cross-Lingual Reranking with Large Language Models for Low-Resource Languages | Explores large language models for zero-shot cross-lingual reranking in low-resource languages | large language model | |
| 5 | RoleEval: A Bilingual Role Evaluation Benchmark for Large Language Models | Proposes RoleEval, a bilingual role evaluation benchmark for assessing role knowledge in large language models | large language model | |
| 6 | A Logically Consistent Chain-of-Thought Approach for Stance Detection | Proposes LC-CoT, which improves zero-shot stance detection via a logically consistent chain of thought | chain-of-thought | |
| 7 | DocMSU: A Comprehensive Benchmark for Document-level Multimodal Sarcasm Understanding | Introduces the DocMSU benchmark dataset for document-level multimodal sarcasm understanding | multimodal | |
| 8 | Towards Probing Contact Center Large Language Models | Probes the behavior and performance of instruction-tuned large language models in contact center settings | large language model | |
| 9 | KnowledgeNavigator: Leveraging Large Language Models for Enhanced Reasoning over Knowledge Graph | KnowledgeNavigator: leverages large language models to enhance reasoning over knowledge graphs | large language model | |
| 10 | SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security | Introduces SecQA, a question-answering dataset for evaluating large language models on computer security | large language model | |
| 11 | Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4 | Presents 26 principles for question formulation and prompt engineering with LLaMA and GPT family models | large language model | |
| 12 | Task Contamination: Language Models May Not Be Few-Shot Anymore | Reveals task contamination in large language models: zero-/few-shot abilities may be overestimated | large language model | |

🔬 Pillar 2: RL Algorithms & Architecture (3 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 13 | Aligning Large Language Models with Human Preferences through Representation Engineering | Proposes RAHF: aligning large language models with human preferences via representation engineering | reinforcement learning, RLHF, large language model | |
| 14 | Knowledge Distillation of LLM for Automatic Scoring of Science Education Assessments | Proposes a knowledge-distillation-based LLM compression method for automatic scoring of science education assessments | distillation, large language model | |
| 15 | Medical Report Generation based on Segment-Enhanced Contrastive Representation Learning | Proposes MSCL, a medical report generation model based on segment-enhanced contrastive representation learning, to improve report quality | representation learning, contrastive learning | |
