| # | Title | Summary | Keywords | ✅ |
|---|-------|---------|----------|----|
| 1 | Do Large Language Models Need Intent? Revisiting Response Generation Strategies for Service Assistant | Examines whether intent recognition is necessary in service-oriented AI, comparing direct-generation and intent-first strategies. | large language model | |
| 2 | HoPE: Hyperbolic Rotary Positional Encoding for Stable Long-Range Dependency Modeling in Large Language Models | Proposes HoPE, a hyperbolic rotary positional encoding for stable long-range dependency modeling. | large language model | |
| 3 | CTCC: A Robust and Stealthy Fingerprinting Framework for Large Language Models via Cross-Turn Contextual Correlation Backdoor | Proposes CTCC, a framework for robust and stealthy fingerprinting of large language models via a cross-turn contextual correlation backdoor. | large language model | ✅ |
| 4 | Creativity Benchmark: A benchmark for marketing creativity for large language models | Proposes a creativity benchmark to evaluate the marketing-creativity capabilities of large language models. | large language model | |
| 5 | A Study of Large Language Models for Patient Information Extraction: Model Architecture, Fine-Tuning Strategy, and Multi-task Instruction Tuning | Studies large language models for patient information extraction, exploring model architecture, fine-tuning strategies, and multi-task instruction tuning. | large language model | |
| 6 | Memorization $\neq$ Understanding: Do Large Language Models Have the Ability of Scenario Cognition? | Proposes a dual-perspective evaluation framework to probe the scenario-cognition abilities of large language models. | large language model | |
| 7 | Evaluating Cognitive-Behavioral Fixation via Multimodal User Viewing Patterns on Social Media | Proposes a multimodal user-behavior analysis framework for evaluating cognitive-behavioral fixation on social media. | multimodal | ✅ |
| 8 | KERAG: Knowledge-Enhanced Retrieval-Augmented Generation for Advanced Question Answering | KERAG: a knowledge-enhanced retrieval-augmented generation framework that improves coverage and accuracy on complex question answering. | large language model, chain-of-thought | |
| 9 | Knowledge Collapse in LLMs: When Fluency Survives but Facts Fail under Recursive Synthetic Training | Reveals the knowledge-collapse phenomenon in LLMs under recursive synthetic training and proposes domain-specific training as a mitigation. | large language model, instruction following | |
| 10 | WildScore: Benchmarking MLLMs in-the-Wild Symbolic Music Reasoning | WildScore: a benchmark for evaluating the symbolic music reasoning of multimodal large language models in real-world settings. | large language model, multimodal | |
| 11 | Code Review Without Borders: Evaluating Synthetic vs. Real Data for Review Recommendation | Uses LLM-generated synthetic data to address the shortage of training data for code-review recommendation in emerging languages. | large language model | |
| 12 | Research on Multi-hop Inference Optimization of LLM Based on MQUAKE Framework | Optimizes multi-hop LLM inference based on the MQUAKE framework, improving answer accuracy on complex questions. | large language model | |
| 13 | The Token Tax: Systematic Bias in Multilingual Tokenization | Reveals systematic bias in multilingual tokenization: the "token tax" imposed on low-resource languages and how to address it. | large language model | |
| 14 | A Lightweight Framework for Trigger-Guided LoRA-Based Self-Adaptation in LLMs | Proposes SAGE, a framework for trigger-guided, LoRA-based self-adaptive knowledge updating during LLM inference. | large language model | |
| 15 | From Staff Messages to Actionable Insights: A Multi-Stage LLM Classification Framework for Healthcare Analytics | Proposes a multi-stage LLM classification framework that extracts actionable healthcare-analytics insights from hospital staff messages. | large language model | |
| 16 | Triadic Fusion of Cognitive, Functional, and Causal Dimensions for Explainable LLMs: The TAXAL Framework | TAXAL: fuses cognitive, functional, and causal dimensions to improve the explainability of agentic LLMs. | large language model | |
| 17 | L1RA: Dynamic Rank Assignment in LoRA Fine-Tuning | L1RA: a dynamic rank-assignment method for LoRA fine-tuning based on L1 regularization. | large language model | |