ICLR 2023 (Submissions) | NLP-Related Papers Organized by Topic



MLNLP is a well-known machine learning and natural language processing community in China and abroad, whose audience includes NLP master's and PhD students, university faculty, and industry researchers.
The community's vision is to promote exchange and progress between academia, industry, and enthusiasts in natural language processing and machine learning worldwide, especially the progress of beginners.
Reposted from | RUC AI Box
Author | 都一凡
Affiliation | Gaoling School of Artificial Intelligence, Renmin University of China
Research area | Pre-trained models
This article selects more than 100 NLP-related papers from the ICLR 2023 submissions and organizes them by research topic for reference.
Overview
ICLR is one of the top conferences in artificial intelligence; its topics cover deep learning, statistics, and data science, as well as important applications such as computer vision, computational biology, speech recognition, text understanding, games, and robotics. ICLR 2023 will be held in Kigali, Rwanda, from May 1 to May 5, 2023. Since the official list of accepted papers has not yet been released, this article selects more than 100 NLP-related papers from the submissions and organizes them by research topic for reference. The ICLR 2023 submissions are available at: https://openreview.net/group?id=ICLR.cc/2023/Conference.
Contents
  • Models
  • Text Generation
  • Machine Translation
  • Dialogue and Question Answering
  • Knowledge and Reasoning
  • Multimodality
  • Information Retrieval
  • Code
  • Math
  • Knowledge Distillation
  • Representation Learning
  • Interpretability
  • Robustness
  • Other Tasks
  • Benchmarks

1

『Models』

1.1 Model Architecture
  • EIT: Enhanced Interactive Transformer for Sequence Generation
  • Transformers with Multiresolution Attention Heads
  • SaMoE: Parameter Efficient MoE Language Models via Self-Adaptive Expert Combination
  • Sparse MoE with Random Routing as the New Dropout: Training Bigger and Self-Scalable Models
1.2 Model Training
  • Guess the Instruction! Making Language Models Stronger Zero-Shot Learners
  • LEXA: Language-agnostic Cross-consistency Training for Question Answering Tasks
  • CCT: Cross-consistency training for Clone Detection and Code Search Tasks
  • Large Language Models Can Self-improve
  • Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning
  • PMixUp: Simultaneous Utilization of Part-of-Speech Replacement and Feature Space Interpolation for Text Data Augmentation
  • Self-Consistent Learning: Cooperation between Generators and Discriminators
  • Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
  • Toward Adversarial Training on Contextualized Language Representation
  • ContraGen: Effective Contrastive Learning For Causal Language Model
  • Language Model Pre-training with Linguistically Motivated Curriculum Learning
  • MLM with Global Co-occurrence
  • Improving Language Model Pretraining with Text Structure Information
  • Learning by Distilling Context
  • MAT: Mixed-Strategy Game of Adversarial Training in Fine-tuning
  • Sub-Task Decomposition Enables Learning in Sequence to Sequence Tasks
1.3 Model Usage
  • Prompt Injection: Parameterization of Fixed Inputs
  • Meta-Weighted Language Model Tuning for Augmentation-Enhanced Few-Shot Learning
  • Pre-trained Language Models can be Fully Zero-Shot Learners
  • KnowDA: All-in-One Knowledge Mixture Model for Data Augmentation in Low-Resource NLP
  • Contrastive Novelty Learning: Anticipating Outliers with Large Language Models
  • Model ensemble instead of prompt fusion: a sample-specific knowledge transfer method for few-shot prompt tuning
  • Mass-Editing Memory in a Transformer
  • Zemi: Learning Zero-Shot Semi-Parametric Language Models from Multiple Tasks
  • Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language Models
  • Selective Annotation Makes Language Models Better Few-Shot Learners
  • Generate rather than Retrieve: Large Language Models are Strong Context Generators
  • Ahead-of-Time P-Tuning
  • Can discrete information extraction prompts generalize across language models?


2

『Text Generation』

  • Dynamic Scheduled Sampling with Imitation Loss for Neural Text Generation
  • DiffusER: Diffusion via Edit-based Reconstruction
  • MVP: Multi-task Supervised Pre-training for Natural Language Generation
  • Penalizing the High-likelihood: A Novel Sampling Method for Open-ended Neural Text Generation via Inverse Probability Weighting
  • RainProof: An Umbrella to Shield Text Generator from Out-Of-Distribution Data
  • A Non-monotonic Self-terminating Language Model
  • PromptSum: Planning with Mixed Prompts for Parameter-Efficient Controllable Abstractive Summarization
  • On the Usefulness of Embeddings, Clusters and Strings for Text Generation Evaluation
  • Joint Generator-Ranker Learning for Natural Language Generation
  • Calibrating Sequence likelihood Improves Conditional Language Generation
  • Sequence to sequence text generation with diffusion models
  • Tailoring Language Generation Models under Total Variation Distance
  • Language Models Can See: Plugging Visual Controls in Text Generation
  • Distribution Aware Metrics for Conditional Natural Language Generation
  • PEER: A Collaborative Language Model


3

『Machine Translation』

  • Seq2Seq Pre-training with Dual-channel Recombination for Translation
  • Simple and Scalable Nearest Neighbor Machine Translation
  • Fuzzy Alignments in Directed Acyclic Graph for Non-Autoregressive Machine Translation


4

『Dialogue and Question Answering』

  • Towards Boosting the Open-Domain Chatbot with Human Feedback
  • Learning Locality and Isotropy in Dialogue Modeling
  • Knowledge-Consistent Dialogue Generation with Language Models and Knowledge Graphs
  • Complex-Target-Guided Open-Domain Conversation based on offline reinforcement learning


5

『Knowledge and Reasoning』

  • ReAct: Synergizing Reasoning and Acting in Language Models
  • Language model with Plug-in Knowldge Memory
  • Thrust: Adaptively Propels Large Language Models with External Knowledge
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models
  • DecAF: Joint Decoding of Answers and Logical Forms for Question Answering over Knowledge Bases
  • Least-to-Most Prompting Enables Complex Reasoning in Large Language Models
  • Neuro-Symbolic Procedural Planning with Commonsense Prompting
  • Multimodal Analogical Reasoning over Knowledge Graphs
  • ThinkSum: Probabilistic reasoning over sets using large language models
  • Joint Representations of Text and Knowledge Graphs for Retrieval and Evaluation
  • Rethinking Identity in Knowledge Graph Embedding
  • gGN: learning to represent nodes in directed graphs as low-rank Gaussian distributions
  • Don't Throw Your Old Policies Away: Knowledge-based Policy Recycling Protects Against Adversarial Attacks
  • Measuring and Narrowing the Compositionality Gap in Language Models


6

『Multimodality』

  • CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
  • CLIP model is an Efficient Continual Learner
  • Language Modelling with Pixels
  • Visual Classification via Description from Large Language Models
  • Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning
  • RelationCLIP: Training-free Fine-grained Visual and Language Concept Matching
  • Contrastive Prompt Tuning Improves Generalization in Vision-Language Models
  • Masked Vision and Language Modeling for Multi-modal Representation Learning
  • UNIFIED-IO: A Unified Model for Vision, Language, and Multi-modal Tasks
  • Visually-augmented pretrained language models for NLP Tasks without Images
  • Music-to-Text Synaesthesia: Generating Descriptive Text from Music Recordings
  • VLG: General Video Recognition with Web Textual Knowledge
  • Dynamic Historical Adaptation for Continual Image-Text Modeling
  • From Images to Textual Prompts: Zero-shot VQA with Frozen Large Language Models
  • NÜWA-LIP: Language-guided Image Inpainting with Defect-free VQGAN
  • Universal Vision-Language Dense Retrieval: Learning A Unified Representation Space for Multi-Modal Retrieval
  • Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language
  • Language-Guided Artistic Style Transfer Using the Latent Space of DALL-E
  • Unified Vision and Language Prompt Learning
  • DrML: Diagnosing and Rectifying Vision Models using Language
  • MaPLe: Multi-modal Prompt Learning
  • Prefix Conditioning Unifies Language and Label Supervision
  • Domain-Unified Prompt Representations for Source-Free Domain Generalization
  • Learning to Decompose Visual Features with Latent Textual Prompts
  • Delving into the Openness of CLIP
  • Cali-NCE: Boosting Cross-modal Video Representation Learning with Calibrated Alignment
  • Design of the topology for contrastive visual-textual alignment


7

『Information Retrieval』

  • Multi-Vector Retrieval as Sparse Alignment
  • Augmenting Zero-shot Dense Retrievers With Plug-in Mixture-of-memories
  • CAMVR: Context-Adaptive Multi-View Representation Learning for Dense Retrieval


8

『Code』

  • Language Models Can Teach Themselves to Program Better
  • Repository-Level Prompt Generation for Large Language Models of Code
  • NAPG: Non-Autoregressive Program Generation for Hybrid Tabular-Textual Question Answering
  • A Simple, Yet Effective Approach to Finding Biases in Code Generation
  • Deep Learning-based Source Code Complexity Prediction
  • FixEval: Execution-based Evaluation of Program Fixes for Competitive Programming Problems
  • InCoder: A Generative Model for Code Infilling and Synthesis
  • Code Translation with Compiler Representations
  • CodeT: Code Generation with Generated Tests
  • Multi-lingual Evaluation of Code Generation Models


9

『Math』

  • Learning Math Reasoning from Self-Sampled Correct and Partially-Correct Solutions
  • Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning


10

『Knowledge Distillation』

  • Speed Up Iterative Non-Autoregressive Transformers by Distilling Multiple Steps
  • A comparison of dataset distillation and active learning in text classification
  • Less is More: Task-aware Layer-wise Distillation for Language Model Compression
  • Distilling Text-Image Foundation Models


11

『Representation Learning』

  • RankCSE: Unsupervised Sentence Representations Learning via Learning to Rank
  • Neural Embeddings for Text
  • Ranking-Enhanced Unsupervised Sentence Representation Learning
  • Neural Topic Modeling with Embedding Clustering Regularization
  • Counterfactual Contrastive Learning for Robust Text Classification
  • On The Inadequacy of Optimizing Alignment and Uniformity in Contrastive Learning of Sentence Representations


12

『Interpretability』

  • ORCA: Interpreting Prompted Language Models via Locating Supporting Evidence in the Ocean of Pretraining Data
  • ContraSim -- A Similarity Measure Based on Contrastive Learning


13

『Robustness』

  • Learning from Others: Similarity-based Regularization for Mitigating Artifacts
  • Randomized Smoothing with Masked Inference for Adversarially Robust NLP Systems


14

『Other Tasks』

  • Exploring Methods for Parsing Movie Scripts - Feature Extraction for Further Social Injustice Analysis
  • MSQ-BioBERT: Ambiguity Resolution to Enhance BioBERT Medical Question-Answering
  • Compositional Semantic Parsing with Large Language Models
  • AxBERT: An Explainable Chinese Spelling Correction Method Driven by Associative Knowledge Network
  • BED: Boundary-Enhanced Decoder for Chinese Word Segmentation
  • Semi-connected Joint Entity Recognition and Relation Extraction of Contextual Entities in Family History Records


15

『Benchmarks』

  • GuoFeng: A Discourse-aware Evaluation Benchmark for Language Understanding, Translation and Generation

