Pessimistic take: fake news will be unbeatable

First, AI tools have already begun identifying fake news. Second, the same AI tools can fabricate stories that naturally have fewer holes than human-written ones, and can even construct background details.

Who will we believe in the future? Don't even mention the mainstream media; it has already been taken out and doesn't even realize it.


Can AI Win the War Against Fake News?
Developers are working on tools that can help spot suspect stories and call them out, but it may be the beginning of an automated arms race.
It may have been the first bit of fake news in the history of the Internet: in 1984, someone posted on Usenet that the Soviet Union was joining the network. It was a harmless April Fools' Day prank, a far cry from today's weaponized disinformation campaigns and unscrupulous fabrications designed to turn a quick profit. In 2017, misleading and maliciously false online content is so prolific that we humans have little hope of digging ourselves out of the mire. Instead, it looks increasingly likely that the machines will have to save us.

One algorithm meant to shine a light in the darkness is AdVerif.ai, which is run by a startup of the same name. The artificially intelligent software is built to detect phony stories, nudity, malware, and a host of other types of problematic content. AdVerif.ai, which launched a beta version in November, currently works with content platforms and advertising networks in the United States and Europe that don’t want to be associated with false or potentially offensive stories.

The company saw an opportunity in focusing on a product for companies as opposed to something for an average user, according to Or Levi, AdVerif.ai’s founder. While individual consumers might not worry about the veracity of each story they are clicking on, advertisers and content platforms have something to lose by hosting or advertising bad content. And if they make changes to their services, they can be effective in cutting off revenue streams for people who earn money creating fake news. “It would be a big step in fighting this type of content,” Levi says.

AdVerif.ai scans content to spot telltale signs that something is amiss, such as a headline that doesn't match the body or one set in too many capital letters. It also cross-checks each story against its database of thousands of legitimate and fake stories, which is updated weekly. Clients see a report for each piece the system considers, with scores assessing the likelihood that it is fake news, carries malware, or contains anything else they've asked the system to look out for, such as nudity. Eventually, Levi says, he plans to add the ability to spot manipulated images and to offer a browser plug-in.
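As a rough illustration of the kind of surface heuristics described above, here is a minimal Python sketch. The specific checks and thresholds are assumptions for illustration, not AdVerif.ai's actual method:

```python
import re

def headline_signals(headline: str, body: str) -> dict:
    """Flag two of the telltale signs mentioned in the article:
    excessive capitalization in the headline, and low vocabulary
    overlap between headline and body (a crude mismatch proxy)."""
    letters = [c for c in headline if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)

    def tokenize(text):
        return set(re.findall(r"[a-z']+", text.lower()))

    head_words = tokenize(headline)
    overlap = len(head_words & tokenize(body)) / max(len(head_words), 1)

    return {
        # thresholds below are illustrative assumptions
        "excessive_caps": caps_ratio > 0.5,
        "headline_body_mismatch": overlap < 0.2,
    }
```

A real system would combine many such signals with database cross-checks and a learned model; heuristics like these only produce weak individual features.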

Testing a demo version of AdVerif.ai, the AI recognized the Onion as satire (which has fooled many people in the past). Breitbart stories were classified as “unreliable, right, political, bias,” while Cosmopolitan was considered “left.” It could also tell when a Twitter account used a brand's logo but linked to sites unaffiliated with that brand. AdVerif.ai not only found that a story on Natural News with the headline “Evidence points to Bitcoin being an NSA-engineered psyop to roll out one-world digital currency” was from a blacklisted site, but identified it as a fake news story popping up on other blacklisted sites without any references in legitimate news organizations.

Some dubious stories still get through. On a site called Action News 3, a post headlined “NFL Player Photographed Burning an American Flag in Locker Room!” wasn’t caught, though it’s been proved to be a fabrication. To help the system learn as it goes, its blacklist of fake stories can be updated manually on a story-by-story basis.

AdVerif.ai isn’t the only startup that sees an opportunity in providing an AI-powered truth serum for online companies. Cybersecurity firms in particular have been quick to add bot- and fake-news-spotting operations to their repertoire, pointing out how similar many of the methods look to hacking. Facebook is tweaking its algorithms to deemphasize fake news in its newsfeeds, and Google has partnered with a fact-checking site, so far with uneven results. The Fake News Challenge, a competition run by volunteers in the AI community, launched at the end of last year with the goal of encouraging the development of tools that could help combat bad-faith reporting.

Delip Rao, one of the challenge's organizers and the founder of Joostware, a company that creates machine-learning systems, said spotting fake news has so many facets that the challenge will actually be run in multiple steps. The first step is “stance detection”: taking one story and figuring out what other news sites have to say about its topic. This would allow human fact checkers to rely on stories to validate other stories, and spend less time checking individual pieces.
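A heavily simplified sketch of that first step might look like the following. Real stance-detection systems classify agree/disagree/discuss/unrelated with learned models; this toy version (the similarity measure and threshold are assumptions) only separates related from unrelated bodies:

```python
import re
from collections import Counter
from math import sqrt

def stance(headline: str, body: str, threshold: float = 0.1) -> str:
    """Decide whether a body text is even related to a headline,
    using bag-of-words cosine similarity. The threshold value is
    an illustrative assumption, not a tuned parameter."""
    def bag(text):
        return Counter(re.findall(r"[a-z']+", text.lower()))

    h, b = bag(headline), bag(body)
    dot = sum(h[w] * b[w] for w in h)
    norm = sqrt(sum(v * v for v in h.values())) * sqrt(sum(v * v for v in b.values()))
    similarity = dot / norm if norm else 0.0
    return "related" if similarity >= threshold else "unrelated"
```

Filtering out unrelated coverage first lets a fact checker spend time only on stories that actually discuss the claim in question.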

The Fake News Challenge released data sets for teams to use, and 50 teams submitted entries. Talos Intelligence, a cybersecurity division of Cisco, won the challenge with an algorithm that got more than 80 percent correct: not quite ready for prime time, but still an encouraging result. The next challenge may take on images with overlay text (think memes, but with fake news), a format often promoted on social media because it is harder for algorithms to break down and understand.

“We want to basically build the best tools for the fact checkers so they can work very quickly,” Rao said. “Like fact checkers on steroids.”

Even if a system is developed that is effective in beating back the tide of fake content, though, it’s unlikely to be the end of the story. Artificial-intelligence systems are already able to create fake text, as well as incredibly convincing images and video (see “Real or Fake? AI Is Making It Very Hard to Know”). Perhaps because of this, a recent Gartner study predicted that by 2022 the majority of people in advanced economies will see more false than true information. The same report found that even before then, faked content will outpace AI’s ability to detect it, changing how we trust digital information.

What AdVerif.ai and others represent, then, looks less like the final word in the war on fake content than the opening round of an arms race, in which fake content creators get their own AI that can outmaneuver the “good” AIs (see “AI Could Set Us Back 100 Years When It Comes to How We Consume News”). As a society, we may yet have to reevaluate how we get our information.
 
The Trump–Russia collusion affair is a classic example. Haha. With AI assistance, Trump probably wouldn't have stood a chance.
 
Facebook will probably have to pour more money into developing information-vetting AI this time.

Forced into it by politics.
 
As Zhao Benshan put it: sad, truly sad!
 
The Trump–Russia collusion affair is a classic example. Haha. With AI assistance, Trump probably wouldn't have stood a chance.
You're optimistic, seeing the positive side of AI. You could just as well say that if Trump had AI assistance, America and the whole world wouldn't stand a chance. AI is definitely a sharp knife; it all depends on whose hand it's in. Better pray.
 
The knife keeps getting sharper; sooner or later we'll cut ourselves down with it.



You're optimistic, seeing the positive side of AI. You could just as well say that if Trump had AI assistance, America and the whole world wouldn't stand a chance. AI is definitely a sharp knife; it all depends on whose hand it's in. Better pray.
 
Many of the stories on Mitbbs are written by bots.

But current-events news is harder to fabricate.
 