Sudden breakthroughs in AI could hold the key to digital progress | FT中文网

Sudden breakthroughs in AI could hold the key to digital progress

TikTok’s recommendation algorithm and OpenAI’s language model are exemplars of key tipping points in deep learning

With the steady pace at which the building blocks of computing technology advance, it is easy to be lulled into a belief in the incremental and predictable nature of digital progress. But that doesn’t take account of the disruptive new applications that suddenly become possible along the way.

Few fields make this case as clearly as deep learning, the main technique behind recent advances in AI. This is a technology that has been many years in the making: it was just a case of waiting for computing power to become abundant and cheap enough, and for data to become available in large enough quantities to train the systems. At that point, the algorithms would start to bootstrap themselves.

Two highly visible current examples have shown just how disruptive the results can be when the technology reaches a critical point. The first case, involving data and algorithms, is TikTok. The huge success of the Chinese-owned app at the centre of a political storm in the US can be traced to many things. Among them are its slick automated editing, freeing of “watermarked” videos to travel beyond its own network, and a format that touched a nerve with its target audience.

But the thing that has excited the techies most has been its use of AI to serve up the videos that are most likely to keep its audience hooked. The results of its personalisation technology have been addictive, yielding the heightened engagement that is gold dust to a social media company.
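TikTok has not published how its recommendation engine works, but the general idea behind such personalisation can be sketched in a few lines: build an interest profile from the videos a user has watched, then rank candidate clips by how closely their features match that profile. All names, feature dimensions and data below are hypothetical, for illustration only.

```python
# A minimal sketch (not TikTok's actual system) of engagement-driven
# personalisation: rank candidate videos by cosine similarity between a
# user's interest vector and each candidate's feature vector.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def user_profile(watched):
    """Average the feature vectors of videos the user watched to the end."""
    dims = len(watched[0])
    return [sum(v[i] for v in watched) / len(watched) for i in range(dims)]

def recommend(profile, candidates, k=2):
    """Return the ids of the k candidates predicted to keep the user hooked."""
    ranked = sorted(candidates, key=lambda c: cosine(profile, c[1]), reverse=True)
    return [vid for vid, _ in ranked[:k]]

# Toy data: features might encode topic, pace, music, and so on.
watched = [[1.0, 0.0, 0.8], [0.9, 0.1, 0.7]]           # dance-heavy clips
candidates = [("dance1", [1.0, 0.0, 0.9]),
              ("news1",  [0.0, 1.0, 0.1]),
              ("dance2", [0.8, 0.2, 0.6])]
print(recommend(user_profile(watched), candidates))     # dance clips rank first
```

Each fully watched video reinforces the profile, which in turn shapes what is shown next: the feedback loop that makes engagement compound.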

When deep learning systems first reached the mainstream, there seemed to be a real risk that start-ups would struggle to compete. Big companies with access to masses of data and computing power would be able to train the most effective models, in turn bringing them more users (and data) and ensuring an unassailable lead.

It turns out that a viral app can act as the flywheel. Recommendation systems have been around for years, but TikTok was still able to achieve meaningful lift-off.

Microsoft, with one of the biggest AI research efforts in the world, is now hoping to buy part or all of the upstart, partly to get access to its deep learning insights — though a White House seemingly bent on barring the app from the US could thwart the effort.

The second example of the sudden breakthroughs that have come from steady advances in the building blocks of AI involves hardware, and it also touches on Microsoft. OpenAI — a San Francisco research organisation that received a $1bn investment from the software company last year — recently released a new, large-scale language system, known as GPT-3, to an invited audience.

There is a race on to build ever-larger language models, where massive volumes of text are ingested by systems that use them to try to gain a better understanding of how language works. OpenAI’s own GPT-2 was one of the first to use the technology for automated writing. Google’s version of the technology, called BERT, now works so well that it has been put to work in the company’s search engine, acting invisibly in the background to decipher what searchers mean with their more complex queries.
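The core idea these systems share — ingest text, learn which words tend to follow which, then generate by repeatedly predicting the next word — can be shown in miniature. GPT-3 and BERT use billions of learned parameters; the bigram counter below is only a deliberately tiny sketch of the same principle, with a made-up corpus.

```python
# A toy next-word predictor: count, for each word in a corpus, the words
# that follow it, then generate text by greedily emitting the most likely
# successor at each step. Large language models do this with neural
# networks over vast corpora; the mechanism sketched here is the idea only.
from collections import defaultdict, Counter

def train_bigrams(text):
    """Build a table mapping each word to a Counter of its successors."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

def generate(table, start, length=6):
    """Greedily emit the most frequent next word at each step."""
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the"))   # → "the cat sat on the cat"
```

Scaling this idea up — longer contexts instead of single words, learned representations instead of raw counts — is precisely what the race to ever-larger models is about.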

What would happen if you threw even more computing power at the problem? That is the whole idea behind OpenAI’s research programme, and the reason it took the investment from Microsoft, much of it “in kind” in the form of technology. Earlier this year, the software company revealed that it had built what it claimed was the world’s fifth most powerful supercomputer, to be used exclusively by OpenAI.

The result of all this hardware — along with further adaptations to the algorithms — is an automated writing system that can reportedly do a passable impression of a real person on almost any topic. That may sound like a gimmick with few practical applications, other than spewing out reams of realistic-sounding fake news. But it could eventually lead to the automation of many simple text-based tasks where humans are currently required. By mining the sum of human knowledge, it could also make connections and yield insights that humans haven’t thought of.

The thought experiment involving an infinite number of monkeys, hammering away at an infinite number of typewriters, posits that one of them must eventually write the complete works of Shakespeare. Far more interesting, though, could be the many other things the monkeys would come up with along the way, including the oeuvres of writers who never existed.

It would still take human intelligence to “understand” the systems’ mindless output. But as with TikTok’s recommendation engine, the results, if properly channelled, could be significant.

Copyright notice: The copyright of this article belongs to FT中文网. No organisation or individual may reproduce, copy or otherwise use all or part of this article without permission; infringement will be pursued.
