AI turns dumb? Google's answer to ChatGPT, Bard, flops in its debut, wiping roughly 700 billion RMB off the company's market value overnight (video & abstract attached)
In trying to catch up with Microsoft and OpenAI's first-mover advantage in artificial intelligence, Google fumbled badly. It was a mistake worth $100 billion (roughly 700 billion RMB).
On February 8, at a launch event in Paris, Google showed off Bard, the chatbot it built to counter ChatGPT. According to Google's pitch, Bard would not only answer any question the way ChatGPT does, but would also be more "responsible": a veiled jab at the amount of false information mixed into ChatGPT's answers. That pitch clearly raised expectations for Bard.
Yet in Google's demo, lasting only a few seconds, Bard was asked just one question: "What new discoveries from the James Webb Space Telescope (JWST) can I tell my 9-year-old about?"
Bard's answer was impressive: rich in information, with vivid analogies that genuinely explained JWST's discoveries in terms a child could grasp. But it contained one glaring error. The reply claimed that "JWST took the very first pictures of a planet outside our solar system" (the gray-underlined portion in the image below). In fact, the first photograph of an exoplanet was taken in 2004 by the European Southern Observatory's Very Large Telescope (VLT).
Astronomers suggested the problem may stem from the AI misreading "an ambiguous NASA press release and underestimating past history." Making a mistake this large in its one and only demonstration left Google thoroughly embarrassed, and the company quickly pulled the demo video.
But the mistake had been made, and the cost was unavoidable. Once the news spread, Google's stock plunged, erasing about $102 billion (roughly 693.25 billion RMB) in market value.
Bard is powered by LaMDA (Language Model for Dialogue Applications), Google's large language model. The company said it would open the conversational technology to "trusted testers" before making it more broadly available to the public. Its features include the Bard chatbot itself as well as a new question-and-answer search interface.
Google CEO Sundar Pichai said Bard seeks to combine the breadth of the world's knowledge with the power, intelligence and creativity of large language models, drawing on information from the web to deliver fresh, high-quality responses. Bard's answers, he said, would meet a high bar for quality, safety and groundedness. The application may also provide up-to-date responses that ChatGPT cannot.
↓↓↓ Abstract ↓↓↓
Google owner Alphabet Inc fell by the most in more than three months after a demonstration of its new artificial intelligence chatbot, Bard, sparked concerns that the tech giant has lost ground in the race for the future of internet search.
Google has been under pressure since developer OpenAI launched its wildly popular chatbot, ChatGPT, which many in the tech industry tout as the next generation of search. On Tuesday, Microsoft Corp., which is investing billions in OpenAI, unveiled a new version of its Bing search engine and Edge browser incorporating technology from the AI startup. On Wednesday, Google hosted a news conference in Paris where it shared more details about its progress integrating artificial intelligence into search.
Investors were largely underwhelmed by the demonstration. In one instance, Bard was asked about new discoveries from the James Webb Space Telescope. In one of its responses, Bard said the telescope was used to take the first pictures of a planet outside the Earth’s solar system — but NASA says those were actually taken by a different telescope.
For now, Bard is available only to a limited number of trusted testers, and OpenAI’s ChatGPT has also been found to deliver inaccurate or outdated responses. Yet investors are keenly attuned to any threat to Google’s search business, which remains its lifeblood.
“That is why you see such a reaction, because this is the money generator, the cash cow in Alphabet’s portfolio,” said Mandeep Singh, an analyst with Bloomberg Intelligence.
Google said in a statement that Bard’s response “highlights the importance of a rigorous testing process.” The company said it will combine external feedback with its own internal testing to ensure Bard’s responses “meet a high bar for quality, safety and groundedness in real-world information.”
Alphabet declined 7.4% to $99.67 at 1:09 p.m. New York time on Wednesday. Earlier, the shares fell as much as 8.9%, the biggest drop since Oct. 26.
Last year, Google declared a “code red” in response to ChatGPT’s release — a move akin to pulling a fire alarm internally, which sent the company’s AI engineers scrambling for a response. Bard’s imperfect performance on the public stage suggests the company may have felt pressure to show off the technology before it was ready, Singh said.
“They did this in haste,” Singh said. “You don’t expect it from a company that is so dominant, and really has always been able to fend off any challenges as far as their core search business.”
While Microsoft won the narrative this week, Google’s longstanding investments in artificial intelligence will ultimately pay off, said Gene Munster, co-founder and managing partner at Deepwater Asset Management.
“Right now the snapshot is: advantage Microsoft,” Munster said. “However, we still think the long-term advantage should go to Alphabet given the resources it has put into AI over the past six years.”
↓↓↓ Interview transcript ↓↓↓
Let us start off with the year so far. Are you waking up every morning energized about what's happening in AI, or are you yourself losing sleep because of the potential concerns? When we consider what's happened with ChatGPT over the last few months, we really do need to take stock.
It's about the haste, the speed-before-safety mindset.
That's really what has driven the release of ChatGPT into the world. We know that researchers have easily gotten around ChatGPT's preventative content filters to produce cybersecurity threats, such as new strains of polymorphic malware. They've been able to create fraudulent phishing campaigns. We've also seen the potential for ChatGPT and other large language models to, in a sense, create fire hoses of disinformation and propaganda that can flood the digital public square with lies and alternative facts.
And so, right, I would say there are a lot of exciting things going on, but there's also a lot of reason for us to be concerned.
We're showing on screen just how rapid the rollout is, from research to development to new product launches, across a number of names and platforms in the world today. It's been astonishing tracking the headlines, seeing how quickly new products and new names are born.
Is that pace dangerous, or is that just the reality of innovation? I would hesitate to say it's the reality of innovation. I would say that we really do need to avoid what's been going on, which is this kind of shocking irresponsibility of bringing things to market with increasing speed. We've seen the "move fast and break things" attitude before.
But this attitude is becoming perhaps more of a concern as the scale of the potential consequences becomes wider and deeper across society.
And I think we really need to press the pause button a bit when it comes to the acceleration of these technologies.
So I guess the debate moves to who should police this.
When you reflect on the Alan Turing Institute's work in this area, and your own work in ethics, what's interesting is how many papers have been published by the likes of OpenAI, and how many
guidelines the industry itself has put forward.
Should they be the ones to do that? Or is there a need, very quickly, for a conversation around third-party oversight? We absolutely need third-party oversight when it comes to these systems, especially the highly impactful systems that will affect large populations. Now, that's not to say that the practices of the innovators in designing, developing and deploying these systems shouldn't involve strong ex ante practices of impact assessment, transparency, accountability and bias mitigation.
But these things simply must be subject to the oversight of third-party bodies: regulators and perhaps certification schemes. David, what is the single biggest ethical consideration around the use of AI
in the real world today? I mean, if we're thinking about these large language models and foundation models, we have to recognize that what these systems are doing is merely drawing on
billions of parameters derived from human behavior, behavior that can have discrimination embedded in it, that can be racist and sexist, and that can ultimately be replicated and amplified by these systems. And so we really need to be aware that if we do not approach these systems with sufficient bias-mitigation mechanisms, we will simply replicate and deepen those very patterns.