7 Papers & Radios | GPT-4 Learns to Reflect; ChatGPT Data Annotation 20x Cheaper Than Humans
Synced (机器之心) & ArXiv Weekly
Contributors: 楚航, 罗若天, 梅洪源
This week's papers include Reflexion, proposed by researchers from Northeastern University, MIT, and other institutions, which equips an agent with dynamic memory and the ability to self-reflect; and work from researchers at the University of Zurich showing that ChatGPT outperforms crowd-work platforms and human research assistants on several annotation tasks, including relevance, stance, topic, and frame detection.
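The Reflexion idea described above — act, get feedback, write a verbal self-reflection into a dynamic memory, and retry conditioned on that memory — can be sketched with a toy trial loop. Note this is an illustrative sketch only: `actor`, `evaluator`, and `reflect` are hypothetical stand-ins for LLM calls, and the guessing task replaces a real benchmark; none of these names come from the paper.

```python
def actor(task: str, memory: list[str]) -> int:
    """Propose an answer, conditioned on past self-reflections.
    Here: start at 0 and raise the guess once per 'too low' note."""
    guess = 0
    for note in memory:
        if "too low" in note:
            guess += 1
    return guess

def evaluator(answer: int, target: int) -> bool:
    """Binary success signal for the trial (stands in for an env reward)."""
    return answer == target

def reflect(answer: int, target: int) -> str:
    """Produce a verbal self-reflection about the failed trial."""
    direction = "too low" if answer < target else "too high"
    return f"guess {answer} was {direction}"

def reflexion_loop(task: str, target: int, max_trials: int = 10):
    """Run repeated trials, appending a reflection to dynamic memory
    after each failure so later attempts can use it."""
    memory: list[str] = []  # dynamic episodic memory of reflections
    for trial in range(max_trials):
        answer = actor(task, memory)
        if evaluator(answer, target):
            return answer, trial + 1
        memory.append(reflect(answer, target))
    return None, max_trials
```

The key design point the sketch mirrors is that feedback is stored as text in memory rather than as gradient updates, so the same frozen model (here, the same `actor` function) improves across trials purely through its changing context.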
1. Fairness-guided Few-shot Prompting for Large Language Models
2. ChatGPT Outperforms Crowd-Workers for Text-Annotation Tasks
3. Blind Video Deflickering by Neural Filtering with a Flawed Atlas
4. Reflexion: an autonomous agent with dynamic memory and self-reflection
5. Disentanglement with Biological Constraints: A Theory of Functional Cell Types
6. Emergence of Maps in the Memories of Blind Navigation Agents
7. Erasing Concepts from Diffusion Models
ArXiv Weekly Radiostation: more selected papers in NLP, CV, and ML (with audio)
Authors: Huan Ma, Changqing Zhang, et al.
Paper: https://arxiv.org/abs/2303.13217
Authors: Fabrizio Gilardi, Meysam Alizadeh, et al.
Paper: https://arxiv.org/abs/2303.15056
Authors: Chenyang Lei, Xuanchi Ren
Paper: https://arxiv.org/pdf/2303.08120.pdf
Authors: Noah Shinn, Beck Labash
Paper: https://arxiv.org/pdf/2303.11366.pdf
Authors: James C. R. Whittington, Will Dorrell
Paper: https://openreview.net/pdf?id=9Z_GfhZnGH
Authors: Erik Wijmans, Manolis Savva, et al.
Paper: https://openreview.net/pdf?id=lTt4KjHSsyl
Authors: Rohit Gandikota, Joanna Materzynska, et al.
Paper: https://arxiv.org/pdf/2303.07345v1.pdf
ArXiv Weekly Radiostation selected NLP papers:
1. A comprehensive evaluation of ChatGPT's zero-shot Text-to-SQL capability. (from Philip S. Yu)
2. ReCOGS: How Incidental Details of a Logical Form Overshadow an Evaluation of Semantic Interpretation. (from Christopher D. Manning, Christopher Potts)
3. Towards Making the Most of ChatGPT for Machine Translation. (from Dacheng Tao)
4. Error Analysis Prompting Enables Human-Like Translation Evaluation in Large Language Models: A Case Study on ChatGPT. (from Dacheng Tao)
5. Language Models can Solve Computer Tasks. (from Pierre Baldi)
6. Training Language Models with Language Feedback at Scale. (from Kyunghyun Cho)
7. Bias or Diversity? Unraveling Semantic Discrepancy in U.S. News Headlines. (from Jiebo Luo)
8. Zero-shot Entailment of Leaderboards for Empirical AI Research. (from Sören Auer)
9. Scaling Expert Language Models with Unsupervised Domain Discovery. (from Noah A. Smith)
10. GPTEval: NLG Evaluation using GPT-4 with Better Human Alignment. (from Yang Liu)
ArXiv Weekly Radiostation selected CV papers:
1. AutoAD: Movie Description in Context. (from Andrew Zisserman)
2. AVFormer: Injecting Vision into Frozen Speech Models for Zero-Shot AV-ASR. (from Cordelia Schmid)
3. PAIR-Diffusion: Object-Level Image Editing with Structure-and-Appearance Paired Diffusion Models. (from Nicu Sebe, Trevor Darrell)
4. SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates. (from Leonidas Guibas)
5. FlexNeRF: Photorealistic Free-viewpoint Rendering of Moving Humans from Sparse Views. (from Larry S. Davis)
6. BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects. (from Dieter Fox, Jan Kautz)
7. Physics-Driven Diffusion Models for Impact Sound Synthesis from Videos. (from Antonio Torralba)
8. Masked Diffusion Transformer is a Strong Image Synthesizer. (from Ming-Ming Cheng, Shuicheng Yan)
9. InceptionNeXt: When Inception Meets ConvNeXt. (from Shuicheng Yan)
10. TimeBalance: Temporally-Invariant and Temporally-Distinctive Video Representations for Semi-Supervised Action Recognition. (from Mubarak Shah)
ArXiv Weekly Radiostation selected ML papers:
1. Ideal Abstractions for Decision-Focused Learning. (from Eric Horvitz)
2. Physics-informed PointNet: On how many irregular geometries can it solve an inverse problem simultaneously? Application to linear elasticity. (from Leonidas J. Guibas)
3. Planning with Sequence Models through Iterative Energy Minimization. (from Joshua Tenenbaum)
4. An EMO Joint Pruning with Multiple Sub-networks: Fast and Effect. (from Licheng Jiao)
5. Federated Learning without Full Labels: A Survey. (from Yang Liu, Kai Chen)
6. Fairness-Aware Data Valuation for Supervised Learning. (from Mário A. T. Figueiredo)
7. Predicting Adverse Neonatal Outcomes for Preterm Neonates with Multi-Task Learning. (from Jiebo Luo)
8. Neural Collapse Inspired Federated Learning with Non-iid Data. (from Deng Cai)
9. Adaptive Riemannian Metrics on SPD Manifolds. (from Nicu Sebe)
10. How Does Attention Work in Vision Transformers? A Visual Analytics Attempt. (from Liang Wang, Kwan-Liu Ma)
© THE END