| # | Title | Summary | Keywords | ✅ |
|---|-------|---------|----------|----|
| 1 | FM4NPP: A Scaling Foundation Model for Nuclear and Particle Physics | Proposes FM4NPP to address sparse experimental data in particle physics | large language model, foundation model | |
| 2 | EGGS-PTP: An Expander-Graph Guided Structured Post-training Pruning Method for Large Language Models | Proposes EGGS-PTP to address the compute and memory challenges of large language models | large language model, foundation model | |
| 3 | Data-Driven Discovery of Interpretable Kalman Filter Variants through Large Language Models and Genetic Programming | Proposes interpretable Kalman filter variants discovered via large language models and genetic programming | large language model | |
| 4 | SYNAPSE-G: Bridging Large Language Models and Graph Learning for Rare Event Classification | Proposes SYNAPSE-G to address data scarcity in rare-event classification | large language model | |
| 5 | Less is More: Learning Graph Tasks with Just LLMs | Proposes a new approach to solving graph tasks using only large language models | large language model, chain-of-thought | |
| 6 | TimeMKG: Knowledge-Infused Causal Reasoning for Multivariate Time Series Modeling | Proposes TimeMKG to address the lack of domain knowledge in multivariate time-series modeling | large language model, multimodal | |
| 7 | DeepFeatIoT: Unifying Deep Learned, Randomized, and LLM Features for Enhanced IoT Time Series Sensor Data Classification in Smart Industries | Proposes DeepFeatIoT to address IoT sensor data classification | large language model, TAMP | |
| 8 | Noise Hypernetworks: Amortizing Test-Time Compute in Diffusion Models | Proposes noise hypernetworks to address inference efficiency in diffusion models | large language model | ✅ |
| 9 | Benchmark-Driven Selection of AI: Evidence from DeepSeek-R1 | Proposes benchmark-driven AI selection to improve reasoning-model performance | large language model | |
| 10 | Pre-trained Transformer-models using chronic invasive electrophysiology for symptom decoding without patient-individual training | Proposes symptom decoding with pre-trained Transformer models to avoid patient-individual training | foundation model | |
| 11 | Constrained Decoding of Diffusion LLMs with Context-Free Grammars | Proposes a constrained decoding method for diffusion LLMs to enforce grammar constraints | large language model | |
| 12 | Dynamic Mixture-of-Experts for Incremental Graph Learning | Proposes a dynamic mixture-of-experts model to address incremental graph learning | TAMP | |
| 13 | Beyond Naïve Prompting: Strategies for Improved Zero-shot Context-aided Forecasting with LLMs | Proposes four strategies to improve zero-shot context-aided forecasting with LLMs | large language model | |
| 14 | Next Edit Prediction: Learning to Predict Code Edits from Context and Interaction History | Proposes next edit prediction to improve the code-editing experience | large language model | ✅ |
| 15 | Modern Neural Networks for Small Tabular Datasets: The New Default for Field-Scale Digital Soil Mapping? | Proposes modern neural networks for soil-property prediction on small tabular datasets | foundation model | |
| 16 | HierMoE: Accelerating MoE Training with Hierarchical Token Deduplication and Expert Swap | Proposes HierMoE to address communication overhead and load imbalance in MoE training | large language model | |
| 17 | Decentralized Rank Scheduling for Energy-Constrained Multi-Task Federated Fine-Tuning in Edge-Assisted IoV Networks | Proposes decentralized rank scheduling for multi-task federated fine-tuning in edge-assisted IoV networks | foundation model | |
| 18 | Enhancing Memory Recall in LLMs with Gauss-Tin: A Hybrid Instructional and Gaussian Replay Approach | Proposes Gauss-Tin to address catastrophic forgetting in large language models | large language model | |
| 19 | NeuronTune: Fine-Grained Neuron Modulation for Balanced Safety-Utility Alignment in LLMs | Proposes NeuronTune to balance safety and utility in large language models | large language model | |