Who publishes the most influential AI research? Google leads by a wide margin, and OpenAI far outpaces DeepMind at turning research into impact
We tallied the 100 most-cited AI papers of the past three years, and here is what we found…

Most-cited papers of 2022
1、AlphaFold Protein Structure Database: massively expanding the structural coverage of protein-sequence space with high-accuracy models
Paper link: https://academic.oup.com/nar/article/50/D1/D439/6430488
Institution: DeepMind
Citations: 1372
Topic: Using AlphaFold to augment protein structure database coverage.
2、ColabFold: making protein folding accessible to all
Paper link: https://www.nature.com/articles/s41592-022-01488-1
Citations: 1162
Topic: An open-source and efficient protein folding pipeline.
3、Hierarchical Text-Conditional Image Generation with CLIP Latents
Paper link: https://arxiv.org/abs/2204.06125
Institution: OpenAI
Citations: 718
Topic: DALL·E 2, prompt-conditioned image generation that left most observers in awe
4、A ConvNet for the 2020s
Paper link: https://arxiv.org/abs/2201.03545
Institutions: Meta, UC Berkeley
Citations: 690
Topic: A successful modernization of CNNs at a time when Transformers were booming in computer vision
5、PaLM: Scaling Language Modeling with Pathways
Paper link: https://arxiv.org/abs/2204.02311
Institution: Google
Citations: 452
Topic: Google's mammoth 540B-parameter large language model, a new MLOps infrastructure (Pathways), and how it performs
Most-cited papers of 2021

1、Highly accurate protein structure prediction with AlphaFold
Paper link: https://www.nature.com/articles/s41586-021-03819-2
Institution: DeepMind
Citations: 8965
Topic: AlphaFold, a breakthrough in protein structure prediction using deep learning

2、Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Paper link: https://arxiv.org/abs/2103.14030
Institution: Microsoft
Citations: 4810
Topic: A robust variant of Transformers for vision

3、Learning Transferable Visual Models From Natural Language Supervision
Paper link: https://arxiv.org/abs/2103.00020
Institution: OpenAI
Citations: 3204
Topic: CLIP, learning joint image-text representations from image-text pairs at scale in a self-supervised fashion (a minimal sketch of this contrastive objective appears after the list)

4、On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Paper link: https://dl.acm.org/doi/10.1145/3442188.3445922
Institutions: University of Washington, Black in AI, The Aether
Citations: 1266
Topic: A famous position paper highly critical of the trend toward ever-larger language models, highlighting their limitations and dangers

5、Emerging Properties in Self-Supervised Vision Transformers
Paper link: https://arxiv.org/pdf/2104.14294.pdf
Institution: Meta
Citations: 1219
Topic: DINO, showing how self-supervision on images leads to the emergence of a kind of proto-object segmentation in Transformers
Most-cited papers of 2020

1、An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Paper link: https://arxiv.org/abs/2010.11929
Institution: Google
Citations: 11914
Topic: ViT, the first work showing that a plain Transformer could do great in computer vision (see the patch-embedding sketch after the list)

2、Language Models are Few-Shot Learners
Paper link: https://arxiv.org/abs/2005.14165
Institution: OpenAI
Citations: 8070
Topic: GPT-3; this paper needs no further explanation at this stage

3、YOLOv4: Optimal Speed and Accuracy of Object Detection
Paper link: https://arxiv.org/abs/2004.10934
Institution: Academia Sinica, Taiwan
Citations: 8014
Topic: Robust and fast object detection sells like hotcakes

4、Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Paper link: https://arxiv.org/abs/1910.10683
Institution: Google
Citations: 5906
Topic: A rigorous study of transfer learning with Transformers, resulting in the famous T5

5、Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
Paper link: https://arxiv.org/abs/2006.07733
Institutions: DeepMind, Imperial College
Citations: 2873
Topic: BYOL, showing that negative pairs are not even necessary for representation learning
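For readers curious about the mechanics behind CLIP (entry 3 in the 2021 list above), the following is a minimal NumPy sketch of the symmetric contrastive objective the paper popularized: image and text embeddings from a batch of matched pairs are L2-normalized, all pairwise similarities are scored, and a cross-entropy loss pulls each image toward its own caption and vice versa. This is an illustrative sketch, not code from the paper; the batch size, embedding dimension, temperature, and function names are assumptions.

```python
# Minimal sketch of a CLIP-style symmetric contrastive loss (illustrative only;
# batch size, embedding dimension, and temperature are made-up values).
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Scale each row to unit length so dot products become cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric cross-entropy over the image-text similarity matrix.

    image_emb, text_emb: (batch, dim) arrays where row i of each is a matched pair.
    """
    img = l2_normalize(image_emb)
    txt = l2_normalize(text_emb)
    logits = img @ txt.T / temperature       # (batch, batch) pairwise similarity scores
    labels = np.arange(logits.shape[0])      # the matching pair sits on the diagonal

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)                    # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()                   # -log p(correct pair)

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = rng.normal(size=(8, 512))   # pretend image-encoder outputs
    txts = rng.normal(size=(8, 512))   # pretend text-encoder outputs
    print("loss:", clip_contrastive_loss(imgs, txts))
```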
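Likewise, the core idea behind ViT (entry 1 in the 2020 list above) fits in a few lines: cut the image into fixed-size patches, flatten and linearly project each patch into a token, and feed the resulting sequence to a plain Transformer encoder. The sketch below covers only the patch-to-token step; the random projection matrix and the 224x224 input are placeholder assumptions standing in for a learned layer and real data.

```python
# Minimal sketch of ViT-style patch embedding (illustrative only; the projection
# here is a random matrix standing in for a learned linear layer).
import numpy as np

def image_to_patch_tokens(image, patch_size=16, embed_dim=768, rng=None):
    """Split an (H, W, C) image into non-overlapping patches and project each to a token.

    Returns (num_patches, embed_dim): the token sequence a plain Transformer encoder
    would consume, before positional embeddings are added.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0, "image must tile into patches"

    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (num_patches, p*p*C)
    patches = image.reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch_size * patch_size * c)

    # Stand-in for the learned linear projection of flattened patches.
    projection = rng.normal(scale=0.02, size=(patch_size * patch_size * c, embed_dim))
    return patches @ projection

if __name__ == "__main__":
    img = np.zeros((224, 224, 3))      # a dummy 224x224 RGB image
    tokens = image_to_patch_tokens(img)
    print(tokens.shape)                # (196, 768): 14x14 patches of 16x16 pixels
```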
ChatGPT and Large Model Technology Conference
On March 21, Synced (机器之心) will host the "ChatGPT and Large Model Technology Conference" in Beijing, offering practitioners a professional, serious platform to discuss large model technology and the future of a Chinese ChatGPT from three angles: research, development, and real-world deployment.
Synced will invite well-known scholars and top industry experts in the large model field as guests, who will engage with the on-site audience on large models, a Chinese ChatGPT, and related topics through keynote talks, panel discussions, Q&A sessions, hands-on product demos, and more.
Click "Read the original article" to register now.