7 Papers & Radios | Training Deep Transformers Without Residual Connections; DeepMind's Code-Writing AI Makes the Cover of Science
Synced (机器之心) & ArXiv Weekly Radiostation
Contributors: 杜伟, 楚航, 罗若天
This week's featured papers include the first exploratory study to show that deep Transformers can be trained without residual connections or normalization layers, and DeepMind's code-writing AI AlphaCode making the cover of Science with coding ability that rivals human programmers.
Contents:
Competition-level code generation with AlphaCode
Inverse scaling can become U-shaped
FedALA: Adaptive Local Aggregation for Personalized Federated Learning
An Efficient Training Approach for Very Large Scale Face Recognition
Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation
EVA: Exploring the Limits of Masked Visual Representation Learning at Scale
Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket
ArXiv Weekly Radiostation: more selected papers in NLP, CV, and ML (with audio)
Paper 1: Competition-level code generation with AlphaCode
Authors: Yujia Li et al.
Paper: https://www.science.org/doi/10.1126/science.abq1158
Abstract: Early this year, DeepMind released AlphaCode, a Transformer-based model for large-scale code generation. AlphaCode has now been published in Science, where the work made the journal's cover.
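At its core, AlphaCode samples a very large number of candidate programs from the model, filters them against the example tests in the problem statement, and clusters the survivors by behavior so that only a handful of semantically distinct programs are submitted. Below is a minimal sketch of that sample-filter-cluster loop, treating candidate programs as plain Python callables; `Problem`, `probe_inputs`, and the candidate representation are illustrative placeholders, not DeepMind's actual API.

```python
from collections import defaultdict
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Problem:
    example_tests: List[Tuple[str, str]]  # (input, expected output) pairs
    probe_inputs: List[str]               # extra inputs used only for clustering

def select_submissions(problem: Problem, candidates: List[Callable[[str], str]],
                       k: int = 10) -> List[Callable[[str], str]]:
    """Sketch of AlphaCode-style candidate selection: keep sampled programs
    that pass the example tests, group survivors that behave identically on
    probe inputs, and submit one representative per largest cluster."""
    # Filter: a candidate survives only if it reproduces all example outputs.
    survivors = [p for p in candidates
                 if all(p(i) == o for i, o in problem.example_tests)]
    # Cluster by behavior: identical outputs on probe inputs => same cluster.
    clusters = defaultdict(list)
    for p in survivors:
        clusters[tuple(p(i) for i in problem.probe_inputs)].append(p)
    # Pick one program from each of the k largest clusters.
    return [c[0] for c in sorted(clusters.values(), key=len, reverse=True)[:k]]
```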
Recommended: DeepMind's AlphaCode makes the cover of Science, with coding ability on par with human programmers.
Paper 2: Inverse scaling can become U-shaped
Authors: Jason Wei et al.
Paper: https://arxiv.org/pdf/2211.02011.pdf
Abstract: The bigger the language model, the better the performance; this has been confirmed on many tasks. But are there tasks where results actually get worse as model scale increases? A recent Google paper offers an answer. The tasks awarded Inverse Scaling Prizes are Negation QA, Hindsight Neglect, Quote Repetition, and Redefine Math.
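For intuition, these are ordinary few-shot prompts whose surface cues mislead larger models. The items below are hypothetical illustrations written in the style of Negation QA and Redefine Math, not actual benchmark entries.

```python
# Hypothetical items, illustrative of the task styles only.
negation_qa = (
    "Question: A smartphone is not made of which of the following?\n"
    "A. metal  B. glass  C. wood\n"
    "Answer:"  # larger models tend to overlook the "not" and answer as if un-negated
)
redefine_math = (
    "Redefine pi as 462. What is the first digit of pi?\n"
    "Answer:"  # the model must override the memorized value of pi and answer 4
)
```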
Recommended: Bigger model, worse performance? Google collected the tasks that trip up large models and built a new benchmark from them.
Paper 3: FedALA: Adaptive Local Aggregation for Personalized Federated Learning
Authors: Jianqing Zhang et al.
Paper: https://arxiv.org/pdf/2212.01197.pdf
Abstract: This paper proposes an adaptive local aggregation method for federated learning that addresses statistical heterogeneity by automatically capturing, from the global model, the information each client needs. Compared against 11 SOTA methods, it beats the best of them by 3.27%, and plugging the adaptive local aggregation (ALA) module into other federated learning methods yields gains of up to 24.19%. The paper was accepted at AAAI 2023. [Figure: the adaptive local aggregation (ALA) process.]
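The core of ALA is element-wise aggregation: rather than overwriting the local model with the downloaded global model, each client learns per-parameter weights that control how much of the global update it absorbs. A minimal PyTorch sketch under that reading (the paper additionally restricts ALA to the higher layers and trains the weights on a small slice of local data; names here are placeholders):

```python
import torch

@torch.no_grad()
def ala_blend(local_params, global_params, ala_weights):
    """Element-wise adaptive local aggregation (sketch):
    local <- local + w * (global - local), with each weight w in [0, 1]."""
    for p_loc, p_glob, w in zip(local_params, global_params, ala_weights):
        w.clamp_(0.0, 1.0)                # keep blending weights in [0, 1]
        p_loc.add_(w * (p_glob - p_loc))  # absorb only the useful part of the global update
```

The weights themselves are trained (with gradients enabled) to minimize the local loss of the blended model, which is what lets each client decide, element by element, how much of the global model is actually useful to it.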
Recommended: Beating SOTA by 3.27%: Shanghai Jiao Tong University and collaborators propose a new adaptive local aggregation method.
Paper 4: An Efficient Training Approach for Very Large Scale Face Recognition
Authors: Kai Wang et al.
Paper: https://arxiv.org/pdf/2105.10375.pdf
Abstract: This paper reviews existing solutions for ultra-large-scale classification and explains the principles and tricks behind FFC, a low-cost classification framework. The paper was accepted at CVPR 2022. [Figure: comparison with SOTA methods.]
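The cost problem FFC attacks is the final fully connected layer: with millions of identities, storing and updating every class center at every step dominates training. The sketch below shows the general recipe of computing the softmax over a sampled pool of class centers instead of all of them; it is a generic illustration of that idea, not FFC's exact algorithm (which maintains a dynamic class pool with dedicated data loaders).

```python
import torch
import torch.nn.functional as F

def sampled_softmax_loss(features, labels, class_centers, pool_size=8192):
    """Softmax over the batch's positive classes plus a random pool of
    negatives, instead of all N classes (sketch, single-device)."""
    pos = labels.unique()                                    # classes present in the batch
    neg = torch.randperm(class_centers.size(0))[:pool_size]  # random negative classes
    pool = torch.cat([pos, neg.to(pos.device)]).unique()     # merged, deduplicated pool

    # Remap the original labels to indices within the sampled pool.
    remap = {c.item(): i for i, c in enumerate(pool)}
    pooled_labels = torch.tensor([remap[l.item()] for l in labels],
                                 device=features.device)

    logits = features @ class_centers[pool].t()              # (B, |pool|) instead of (B, N)
    return F.cross_entropy(logits, pooled_labels)
```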
Recommended: DAMO Academy open-sources FFC, a low-cost framework for very-large-scale classification.
Paper 5: Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation
Authors: Anonymous
Paper: https://openreview.net/pdf?id=NPrsUQgMjKK
Abstract: This paper, in the ICLR 2023 double-blind review stage, demonstrates for the first time that deep Transformers can be trained successfully without residual connections or normalization layers. To get there, the authors study signal propagation and rank collapse in deep shortcut-free Transformers and derive three approaches to prevent them.
Specifically, the approaches combine parameter initialization, bias matrices, and location-dependent rescaling, and the paper highlights several complexities specific to signal propagation in Transformers, including interactions with positional encoding and causal masking. The authors show empirically that their approaches yield trainable deep shortcut-free Transformers.
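The common thread is to reshape each attention layer so that it behaves close to the identity map at initialization, which keeps the signal from collapsing in rank as depth grows even with no residual path. Here is a hedged sketch of one such modification, mixing an identity path into the attention output; the paper's actual parameterizations and bias matrices are more elaborate.

```python
import torch
import torch.nn.functional as F

def identity_biased_attention(q, k, v, alpha=1.0, beta=0.0, bias=0.0):
    """Shortcut-free attention that starts near the identity (sketch):
    out = alpha * v + beta * softmax(q k^T / sqrt(d) + bias) v.
    With alpha ~ 1 and beta ~ 0 at init, each layer approximately preserves
    its input, so deep stacks avoid rank collapse without residual connections."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d**0.5 + bias  # bias can encode causal masking etc.
    return alpha * v + beta * (F.softmax(scores, dim=-1) @ v)
```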
Recommended: A paper that drew reviewer acclaim already in ICLR's blind-review stage: could it be a major innovation for the Transformer architecture?
Paper 6: EVA: Exploring the Limits of Masked Visual Representation Learning at Scale
Authors: Yuxin Fang et al.
Paper: https://arxiv.org/pdf/2211.07636.pdf
Abstract: BAAI has open-sourced EVA, a simple yet powerful vision foundation model with one billion parameters. By combining strong semantic learning with strong geometric-structure learning, EVA achieves state-of-the-art performance on a wide range of visual perception tasks, including ImageNet classification, COCO detection and segmentation, and Kinetics video classification.
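Concretely, the "semantics plus geometry" combination is a masked-image-modeling objective whose regression targets are CLIP vision features rather than raw pixels: the model must predict the frozen CLIP features of the masked-out patches. A minimal sketch of that loss, with `student` and `clip_vision` as placeholder modules:

```python
import torch
import torch.nn.functional as F

def eva_mim_loss(student, clip_vision, images, mask):
    """Masked image modeling with CLIP features as targets (sketch).
    images: (B, C, H, W); mask: boolean (B, N) over N patches."""
    with torch.no_grad():
        target = clip_vision(images)   # (B, N, D) frozen CLIP patch features
    pred = student(images, mask)       # student only sees the unmasked patches
    # Negative cosine similarity, computed on the masked positions only.
    pred = F.normalize(pred[mask], dim=-1)
    target = F.normalize(target[mask], dim=-1)
    return -(pred * target).sum(dim=-1).mean()
```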
Recommended: One billion parameters and multiple SOTA results: BAAI open-sources the vision foundation model EVA.
Paper 7: Join the High Accuracy Club on ImageNet with A Binary Neural Network Ticket
Authors: Nianhui Guo et al.
Paper: https://arxiv.org/pdf/2211.12933.pdf
Abstract: Nianhui Guo, Haojin Yang, and colleagues at the Hasso Plattner Institute in Germany propose BNext, the first BNN to surpass 80% top-1 classification accuracy on ImageNet. [Figure: performance comparison of SOTA BNNs on ImageNet.]
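For background, the building block shared by BNext and every other BNN is a binarized layer trained with the straight-through estimator (STE): the forward pass uses sign(x), while gradients flow through as if binarization were a clipped identity. The following is a generic sketch of that mechanism, not BNext's specific architecture.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass, clipped straight-through gradient backward."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()  # gradient passes only where |x| <= 1

def binary_linear(x, weight):
    """1-bit linear layer: binarize activations and weights, then rescale by
    the mean absolute weight per output channel to recover magnitude."""
    scale = weight.abs().mean(dim=1, keepdim=True)            # (out, 1)
    return BinarizeSTE.apply(x) @ (BinarizeSTE.apply(weight) * scale).t()
```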
Recommended: BNext, the first binary neural network to exceed 80% accuracy on ImageNet.
This week's 10 selected NLP papers:
1. Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE. (from Yu Qiao, Xinbo Gao, Xiaoou Tang, Dacheng Tao)
2. Learning to Dub Movies via Hierarchical Prosody Models. (from Ming-Hsuan Yang, Qingming Huang)
3. Improving Simultaneous Machine Translation with Monolingual Data. (from Dacheng Tao)
4. Intermediate Entity-based Sparse Interpretable Representation Learning. (from Joydeep Ghosh)
5. A survey on GPT-3. (from Bhaskar Krishnamachari)
6. ZeroKBC: A Comprehensive Benchmark for Zero-Shot Knowledge Base Completion. (from Hongming Zhang)
7. Constructing Highly Inductive Contexts for Dialogue Safety through Controllable Reverse Generation. (from Minlie Huang)
8. KPT: Keyword-guided Pre-training for Grounded Dialog Generation. (from Minlie Huang)
9. LawngNLI: A Long-Premise Benchmark for In-Domain Generalization from Short to Long Contexts and for Implication-Based Retrieval. (from Dan Roth)
10. SoftCorrect: Error Correction with Soft Detection for Automatic Speech Recognition. (from Xiang-Yang Li, Tie-Yan Liu)
This week's 10 selected CV papers:
1. NeRDi: Single-View NeRF Synthesis with Language-Guided Diffusion as General Image Priors. (from Leonidas Guibas, Dragomir Anguelov)
2. ALTO: Alternating Latent Topologies for Implicit 3D Reconstruction. (from Leonidas Guibas)
3. Improving Zero-shot Generalization and Robustness of Multi-modal Models. (from Ming-Hsuan Yang, Laurent Itti)
4. Self-supervised AutoFlow. (from Ming-Hsuan Yang)
5. Consistency-Aware Anchor Pyramid Network for Crowd Localization. (from Qingming Huang, Ming-Hsuan Yang, Nicu Sebe)
6. UNETR++: Delving into Efficient and Accurate 3D Medical Image Segmentation. (from Ming-Hsuan Yang)
7. Progressive Multi-resolution Loss for Crowd Counting. (from Qingming Huang, Ming-Hsuan Yang)
8. Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection. (from Qingming Huang, Ming-Hsuan Yang)
9. AsyInst: Asymmetric Affinity with DepthGrad and Color for Box-Supervised Instance Segmentation. (from Alan Yuille)
10. Discovering Class-Specific GAN Controls for Semantic Image Synthesis. (from Bernt Schiele)
© THE END
For reprint authorization, please contact this official account.
Submissions and coverage inquiries: [email protected]