Musk Signs Open Letter with a Thousand Co-Signatories: GPT-4 Is Too Dangerous, All AI Labs Must Immediately Pause Research!
https://futureoflife.org/open-letter/pause-giant-ai-experiments/
Should we let machines flood our information channels with propaganda and untruth?
Should we automate away all the jobs, including the fulfilling ones?
Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?
Should we risk loss of control of our civilization?
The letter calls for robust AI governance systems, which should at a minimum include:
new and capable regulatory authorities dedicated to AI;
oversight and tracking of highly capable AI systems and large pools of computational capability;
provenance and watermarking systems to help distinguish real from synthetic content and to track model leaks;
a robust auditing and certification ecosystem;
liability for AI-caused harm;
robust public funding for technical AI safety research;
and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.