春风吹
桃花仙
- Registered: 2017-02-24
- Messages: 3,255
- Honor points: 1,449
- Reputation points: 223
First, AI tools have already begun to identify fake news; second, the same AI tools can spin stories of their own, with naturally fewer holes than anything a human writes, and they can even fabricate supporting background.
Who will we trust in the future? Don't bother bringing up the mainstream media; they have already been taken out and don't even know it.
Can AI Win the War Against Fake News?
Developers are working on tools that can help spot suspect stories and call them out, but it may be the beginning of an automated arms race.
- by Jackie Snow
- December 13, 2017
One algorithm meant to shine a light in the darkness is AdVerif.ai, which is run by a startup of the same name. The artificially intelligent software is built to detect phony stories, nudity, malware, and a host of other types of problematic content. AdVerif.ai, which launched a beta version in November, currently works with content platforms and advertising networks in the United States and Europe that don’t want to be associated with false or potentially offensive stories.
The company saw an opportunity in focusing on a product for companies as opposed to something for an average user, according to Or Levi, AdVerif.ai’s founder. While individual consumers might not worry about the veracity of each story they are clicking on, advertisers and content platforms have something to lose by hosting or advertising bad content. And if they make changes to their services, they can be effective in cutting off revenue streams for people who earn money creating fake news. “It would be a big step in fighting this type of content,” Levi says.
AdVerif.ai scans content to spot telltale signs that something is amiss—like headlines not matching the body, for example, or too many capital letters in a headline. It also cross-checks each story with its database of thousands of legitimate and fake stories, which is updated weekly. Clients see a report for each piece the system considered, with scores that assess the likelihood that something is fake news, carries malware, or contains anything else they’ve asked the system to look out for, like nudity. Eventually, Levi says, he plans to add the ability to spot manipulated images and to offer a browser plugin.
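To make the scanning step above concrete, here is a minimal sketch in Python of what heuristics like these could look like. It is not AdVerif.ai's actual code: the thresholds, the toy blacklist, and the scoring function are all illustrative assumptions based only on the signals the article names (capitalization, headline/body mismatch, and a database of known fakes).

```python
import re

# Illustrative thresholds -- AdVerif.ai's real rules and weights are not public.
CAPS_RATIO_LIMIT = 0.5   # flag headlines where over half the letters are capitals
OVERLAP_FLOOR = 0.2      # flag stories whose body shares under 20% of headline terms

# Toy stand-in for the weekly-updated database of known fake stories.
FAKE_STORY_BLACKLIST = {
    "nfl player photographed burning an american flag in locker room!",
}

def score_story(headline: str, body: str) -> dict:
    """Return per-signal flags for one story; True means suspicious."""
    letters = [c for c in headline if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)

    # Crude headline/body mismatch check: share of headline words found in the body.
    head_words = set(re.findall(r"[a-z']+", headline.lower()))
    body_words = set(re.findall(r"[a-z']+", body.lower()))
    overlap = len(head_words & body_words) / max(len(head_words), 1)

    return {
        "too_many_capitals": caps_ratio > CAPS_RATIO_LIMIT,
        "headline_body_mismatch": overlap < OVERLAP_FLOOR,
        "known_fake": headline.strip().lower() in FAKE_STORY_BLACKLIST,
    }

if __name__ == "__main__":
    print(score_story(
        "NFL Player Photographed Burning an American Flag in Locker Room!",
        "An unrelated body of text that never mentions the claim at all.",
    ))
    # -> {'too_many_capitals': False, 'headline_body_mismatch': True, 'known_fake': True}
```

A real system would weigh many more signals and output calibrated scores rather than booleans, but the report-per-story shape matches what the article describes.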
Some dubious stories still get through. On a site called Action News 3, a post headlined “NFL Player Photographed Burning an American Flag in Locker Room!” wasn’t caught, though it’s been proved to be a fabrication. To help the system learn as it goes, its blacklist of fake stories can be updated manually on a story-by-story basis.
AdVerif.ai isn’t the only startup that sees an opportunity in providing an AI-powered truth serum for online companies. Cybersecurity firms in particular have been quick to add bot- and fake news-spotting operations to their repertoire, pointing out how similar a lot of the methods look to hacking. Facebook is tweaking its algorithms to deemphasize fake news in its newsfeeds, and Google partnered with a fact-checking site—so far with uneven results. The Fake News Challenge, a competition run by volunteers in the AI community, launched at the end of last year with the goal of encouraging the development of tools that could help combat bad-faith reporting.
Delip Rao, one of its organizers and the founder of Joostware, a company that creates machine-learning systems, said spotting fake news has so many facets that the challenge is actually going to be done in multiple steps. The first step is “stance detection,” or taking one story and figuring out what other news sites have to say about the topic. This would allow human fact checkers to rely on stories to validate other stories, and spend less time checking individual pieces.
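A toy sketch of stance detection, to show the shape of the task. The four labels (agree, disagree, discuss, unrelated) follow the Fake News Challenge's public task definition; the word-overlap scoring and cue lists below are hypothetical stand-ins for the learned models real entries used.

```python
import math
import re
from collections import Counter

# Cue lists are illustrative assumptions, not drawn from any real entry.
AGREE_CUES = {"confirms", "confirmed", "verified", "corroborates"}
DISAGREE_CUES = {"hoax", "fake", "false", "debunked", "denies", "denied"}

def _tokens(text: str) -> list:
    return re.findall(r"[a-z]+", text.lower())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def stance(claim_headline: str, other_body: str) -> str:
    """Classify what another article says about a claim: agree/disagree/discuss/unrelated."""
    sim = _cosine(Counter(_tokens(claim_headline)), Counter(_tokens(other_body)))
    if sim < 0.05:
        return "unrelated"   # the other article isn't about this claim at all
    words = set(_tokens(other_body))
    if words & DISAGREE_CUES:
        return "disagree"    # it covers the claim and pushes back on it
    if words & AGREE_CUES:
        return "agree"       # it covers the claim and backs it up
    return "discuss"         # related, but takes no clear position
```

Given stance labels like these across many outlets, a fact checker can triage: a claim that every other site covers only to dispute is the one worth checking first.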
The Fake News Challenge released data sets for teams to use, and 50 teams submitted entries. Talos Intelligence, a cybersecurity division of Cisco, won the challenge with an algorithm that got more than 80 percent correct—not quite ready for prime time, but still an encouraging result. The next challenge might take on images with overlay text (think memes, but with fake news), a format that is often promoted on social media and is harder for algorithms to break down and understand.
“We want to basically build the best tools for the fact checkers so they can work very quickly,” Rao said. “Like fact checkers on steroids.”
Even if a system is developed that is effective in beating back the tide of fake content, though, it’s unlikely to be the end of the story. Artificial-intelligence systems are already able to create fake text, as well as incredibly convincing images and video (see “Real or Fake? AI Is Making It Very Hard to Know”). Perhaps because of this, a recent Gartner study predicted that by 2022, the majority of people in advanced economies will see more false than true information. The same report found that even before that happens, faked content will outpace AI’s ability to detect it, changing how we trust digital information.
What AdVerif.ai and others represent, then, looks less like the final word in the war on fake content than the opening round of an arms race, in which fake content creators get their own AI that can outmaneuver the “good” AIs (see “AI Could Set Us Back 100 Years When It Comes to How We Consume News”). As a society, we may yet have to reevaluate how we get our information.