Original: Social networks and targeted-delivery technology are catalyzing an era of radicalized worldviews

贵圈
Whether it's social networks, self-media, or social-media services, they all share one trait: targeted delivery. For music, movies, entertainment, and games, this kind of content push works well. In the social and political sphere, though, it narrows people's horizons. And it isn't just the social networking sites; the big news sites are heading the same way. The reason is simple: it sells.

People on the left watch left-wing YouTube videos; people on the right follow right-wing Facebook pages. The platforms then take your initial choices and interests and keep reinforcing them, feeding you ever more of the same kind of viewpoint to satisfy your viewing preferences and appetites. The diversity of society and of opinion gets increasingly ignored. The left grows more left, the right more right. The sharper the topic, the more easily the bots classify it, the more easily targeted delivery blinds you, and the closer you slide toward the extreme.
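
A minimal sketch of that feedback loop, under assumptions of my own (a one-dimensional "lean" score, invented weights and update rule, nothing resembling any platform's real ranking code): an engagement-driven recommender that favors sharper content and nudges the user profile on every click drifts a nearly centrist user toward one pole.

```python
import random

random.seed(1)

# Toy model: each item carries a political "lean" in [-1, 1]. The ranker
# scores an item by (match with the user's profile) x (sharpness), because
# sharper, more extreme content draws more clicks. Every click then drags
# the user's profile toward the item that was shown.

def rank_score(item_lean: float, user_lean: float) -> float:
    match = 1.0 - abs(item_lean - user_lean) / 2.0   # 1.0 = perfect match
    sharpness = abs(item_lean)                       # 0 = centrist, 1 = extreme
    return match * (0.2 + sharpness)

user_lean = 0.1   # a user who starts barely right of center
for step in range(40):
    candidates = [random.uniform(-1.0, 1.0) for _ in range(30)]
    shown = max(candidates, key=lambda c: rank_score(c, user_lean))
    user_lean += 0.15 * (shown - user_lean)   # the click reinforces the profile

print(f"profile after 40 recommendations: {user_lean:+.2f}")  # far from where it began
```

The only ingredients are a match term and a sharpness bonus; no ill intent is needed for the loop to polarize.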

Age and life experience can largely offset the radicalizing pull of these bot-driven feeds. One can actively choose to take in opposing views. The crossfire of a free forum like CFC can also expose people to, and eventually bring them to accept, views from outside their own circle.

Young people and students are comparatively weak in this respect. The education they receive amounts to indoctrination. They lack the life experience to actively take in opposing views, and they are more impulsive and more eager to defend their own positions, so even heated forum debate rarely diversifies their outlook.

Make no mistake: the mainstream effect of the internet is to broaden horizons and to de-radicalize. But the new generation, living fully immersed in social media, has to face the challenge of this narrowing. At the same time, these media companies need to redesign their bots for social and political topics. That can be done. But if even a big news outlet like CNN can barely manage impartiality, can social-media bots do better?

In the end the key is not the technology; it is the human heart.
 
Same as with sexual orientation. :tx:
 
Elections decided mainly by young voters usually turn out this way. Unfortunately.
 
They fall into a vicious circle of constant reinforcement and constant radicalization.
 
"Complex, both technically and philosophically"?
Why does fake news spread at such speed and get taken as true?
Did the social networks really not fan the flames?
In fact, the best weapon against fake news is to push opposing viewpoints and information: meet a crowd with a crowd. Provided, of course, that you are willing to sacrifice commercial interest.
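
A minimal sketch of that counter-push idea, using invented data structures (the Story type, the lean labels, and the disputed flag are assumptions for illustration, not any real platform API): whenever a disputed story appears in the ranked feed, the closest opposing-side item is interleaved right after it.

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    lean: float      # -1 = left ... +1 = right (hypothetical label)
    disputed: bool   # flagged by users or fact-checkers

def build_feed(ranked: list[Story], pool: list[Story]) -> list[Story]:
    """After each disputed story, inject the closest opposing-side story.

    Deliberately spends feed slots on the other side's version of events,
    which is the commercial sacrifice the post mentions.
    """
    feed: list[Story] = []
    for story in ranked:
        feed.append(story)
        if story.disputed:
            opposing = [s for s in pool if s.lean * story.lean < 0]
            if opposing:
                # prefer the mirror image of the disputed story's lean
                counter = min(opposing, key=lambda s: abs(s.lean + story.lean))
                feed.append(counter)
    return feed

ranked = [Story("Celebrity X endorses the candidate!", +0.9, True),
          Story("Local weather report", 0.0, False)]
pool = [Story("Fact check: the endorsement never happened", -0.8, False)]

for s in build_feed(ranked, pool):
    print(s.title)
```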

Facebook hit by fake news: how will Zuckerberg respond?
Image caption: Zuckerberg says he does not want Facebook to become an "arbiter of truth". (Reuters)
Facebook CEO Mark Zuckerberg has announced a plan to tackle the flood of fake news on the site.

Facebook has recently been mired in controversy, with some users arguing that fake news on the platform rewrote the result of the US election.

In a post on Facebook, Zuckerberg said the company "takes misinformation seriously" and described its plans in detail.

Nearly a week earlier, responding to the fake-news issue, Zuckerberg had said only a "small amount" of Facebook's content was fake news and that 99% of it was authentic.

But today he wrote: "We have been working on this problem for a long time and we take this responsibility seriously."

Striking a balance
Zuckerberg said the misinformation problem is "complex, both technically and philosophically". He stressed that Facebook does not want to stop users from sharing opinions, nor to become an "arbiter of truth".

To fight misinformation more forcefully, Zuckerberg wrote, Facebook has launched seven related projects, including stronger detection, verification, and warning labels on false content.

After last week's US presidential election, many criticized Zuckerberg, arguing that fake news on Facebook helped Trump build support and ultimately win. At the time, Zuckerberg called that idea "crazy".

But fake-news sites are indeed multiplying, and one important factor is the revenue online advertising brings them.

Some "fake news" sites originally offered humorous, news-like satire with no intent to deceive. But some of them have gradually begun writing fake news that is more believable and more deceptive, reckoning that such content is more likely to be widely shared.

One fake story widely shared on Facebook after the election claimed that the Black Hollywood star Denzel Washington had praised Trump.

On Monday, Google announced measures to stop fake-news sites from earning money through online advertising. Shortly afterwards, Facebook announced similar restrictions on fake-news sites.
 
Petition to ban "Fake News" on Facebook
 
Hope Zuck keeps a clear head and doesn't start censoring, but improves the way content is pushed instead.
 
How Facebook can cut down on fake news without relying on thousands of humans to decide what is true
The arbiter of truth. (Reuters/Robert Galbraith)

By Josh Horwitz, November 16, 2016

The world’s most powerful news provider has a fake news problem—and it doesn’t appear to know what to do about it.

After Donald Trump secured an unexpected victory in the US presidential election, Facebook’s role in enabling his campaign has come under the spotlight. Not only did the social network create an “echo chamber” where users only see information that reinforces their existing biases, it also disseminated information that was patently false, and which often aided Trump. Content creators looking for easy clicks and ad revenue gamed the system, publishing fake but viral stories like “Pope Francis Forbids Catholics From Voting For Hillary!” which were shared hundreds of thousands of times on Facebook.

Now the company seems divided internally about its next steps. Zuckerberg issued a statement alleging that “more than 99% of what people see is authentic” on the social network, but employees disagree. Dozens of staff members have internally formed a secret task force to combat the problem, according to BuzzFeed. The company had created tools to deal with the problem earlier this year, then deliberately did not deploy them, Gizmodo reports, fearing the reaction from conservative outlets, which were disproportionately targeted (because they had more fake news).

On Monday (Nov. 14) the company did make one major change, banning fake news publishers from its ad network. But it hasn’t introduced new measures to prevent what they publish from appearing on your Facebook News Feed. While this means these companies may make less money from Facebook, users remain concerned by what shows up in News Feed, not outside of it, so the ban may do little to address their grievances.

Zuckerberg framed the issue on Nov. 13 as a problem with truth itself, which isn’t always clear cut, and suggested that an influential private company deciding for readers what is true and what is false risks going down a slippery slope. He wrote:

Identifying the “truth” is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual. I am confident we can find ways for our community to tell us what content is most meaningful, but I believe we must be extremely cautious about becoming arbiters of truth ourselves.

For Facebook, employing a team of human editors to vet for the “truth” might not only be unfeasible given the amount of information it handles, but also undesirable after the company’s Trending News debacle. Here are some concrete, specific measures Facebook could take to make sure links shared on its site don’t spread outright lies, without trespassing into murky ethical issues of restricting freedom of speech.

  • Crack down on Facebook pages that make money by spreading lies
This election cycle saw a surge in Facebook pages that manufactured misleading memes, drove users to outside websites with fake news, and then collected revenue from the traffic they generated. One conservative-leaning page, called Make America Great, was reaching about 1.7 million people daily by sharing exaggerated or fabricated news stories from other sites, the New York Times reported. In July 2016, the page’s founder earned $30,000 per month in revenue, the paper reported.

Facebook should take measures to de-prioritize results coming from pages its algorithm determines to be unreliable.

One way to do this, according to Azeem Azhar, a writer and investor in artificial intelligence companies, involves examining how long pages have been in existence (the longer, the more reliable they are likely to be), where the content they share originates from (Is it a generally credible mainstream news source? Is it well-linked elsewhere on the internet?), and the profile of the people clicking on it (Do they read about SpaceX, or space aliens?).

Pages that distribute information with the right “trust signals” for reliability will see their pieces placed accordingly in the News Feed. Pages that don’t have strong trust signals will see their distribution reduced.

“There are certain extensive trust signals generated over time that are like reputation. If we see those signals attached to a piece of content, it tells us a lot about that content,” says Azhar. “Imagine you have a story about some kind of brain cancer. Then, that story is being shared by a lot of neurosurgeons. Would you look at the veracity or the importance of that story because it’s being shared by neurosurgeons, versus a bunch of celebrities?”
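
A sketch of how signals like these could be folded into one score. The signal names, weights, and saturation curve below are placeholders of my own, not Azhar's or Facebook's actual model:

```python
import math

# Hypothetical trust score built from the three signals Azhar describes:
# how long the page has existed, how credible the sources it shares are,
# and who clicks on it. All weights are invented for illustration.

def trust_score(page_age_days: int,
                source_credibility: float,   # 0..1: share of links to credible, well-linked outlets
                reader_credibility: float    # 0..1: "SpaceX readers" vs "space-alien readers"
                ) -> float:
    age_signal = min(1.0, math.log1p(page_age_days) / math.log1p(3650))  # saturates near 10 years
    return 0.3 * age_signal + 0.4 * source_credibility + 0.3 * reader_credibility

def feed_weight(base_engagement: float, trust: float) -> float:
    """Demote, don't delete: trust scales the engagement score but never zeroes it."""
    return base_engagement * (0.25 + 0.75 * trust)

print(trust_score(30, 0.2, 0.1))    # month-old page, dubious sources -> low trust
print(trust_score(3000, 0.9, 0.8))  # long-lived page, credible sources -> high trust
```

Scaling engagement by trust, rather than hard-blocking, matches the article's framing: low-trust pages are demoted, not silenced.
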
  • Make its existing community tools to combat hoaxes more effective
A feature released in 2015 lets users flag stories as potential hoaxes. When an unspecified number of users label a story as a hoax, it gets “reduced distribution” and is less likely to appear in one’s News Feed. In addition, when a specific popular post has been flagged as inaccurate, Facebook puts a disclaimer above the piece that reads “Many people have reported that this story contains false information” in small, faint grey font.

[Image: “News Feed FYI: Showing Fewer Hoaxes” (Facebook)]

Yet how many times did viewers see this disclaimer over the election cycle? Facebook might consider making this existing system more sensitive to user flags, or making its disclaimers more prominent.
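
A minimal sketch of that flag-then-demote mechanism. Facebook does not publish the flag threshold or the strength of the demotion, so both numbers below are placeholders, as is the story id:

```python
from collections import defaultdict

FLAG_THRESHOLD = 25    # placeholder: the real, unspecified number of user flags
DEMOTION_FACTOR = 0.2  # placeholder strength of "reduced distribution"

flags: defaultdict[str, int] = defaultdict(int)

def report_hoax(story_id: str) -> None:
    """A user flags a story as a potential hoax."""
    flags[story_id] += 1

def distribution_weight(story_id: str) -> float:
    """Stories past the flag threshold get reduced distribution in the feed."""
    return DEMOTION_FACTOR if flags[story_id] >= FLAG_THRESHOLD else 1.0

def disclaimer(story_id: str) -> str:
    """The warning shown above flagged posts, or an empty string."""
    if flags[story_id] >= FLAG_THRESHOLD:
        return "Many people have reported that this story contains false information"
    return ""

for _ in range(30):
    report_hoax("denzel-endorses-trump")

print(distribution_weight("denzel-endorses-trump"))  # 0.2 -> demoted
print(disclaimer("denzel-endorses-trump"))
```

Making the system "more sensitive", as the article suggests, amounts to lowering FLAG_THRESHOLD; making disclaimers "more prominent" is a presentation change on top of the same signal.
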
  • Link to alternate sources of information in “Related Links”
Facebook is under-utilizing a little-noticed feature in News Feed, says Lokman Tsui, a professor at the School of Journalism and Communication at the Chinese University of Hong Kong. Currently, whenever users click on a story that appears in the feed, a tab called “Related Links” opens up below the space where the original story’s link appeared. When bona fide hoaxes or unverifiable stories surface, Facebook might consider opening up Related Links by default, and linking out to sites like Snopes.com or media outlets with opposing viewpoints. This could help keep Facebook users better informed about the likely veracity of the content they see, if they do indeed see information that’s proven false or possibly false.

“You have these fact-checking organizations that verify all kinds of news,” says Tsui. “Facebook could link to the news, and say ‘Here’s the news, now here are some links to credible fact check organizations.’”
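
A sketch of that Related Links behavior with hypothetical story metadata (the story id, the lookup table, and the URLs are invented; Snopes.com is the example the article itself names):

```python
# Hypothetical lookup: story ids mapped to fact-check links.
FACT_CHECKS = {
    "pope-endorses-trump": ["https://www.snopes.com/"],
}

def related_links(story_id: str, default_links: list[str]) -> list[str]:
    """Open Related Links by default for dubious stories, fact checks first,
    as Tsui suggests, instead of only showing the usual related articles."""
    checks = FACT_CHECKS.get(story_id, [])
    return checks + default_links

print(related_links("pope-endorses-trump", ["https://example.com/related"]))
```
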
  • Devise and list a thorough procedure for identifying and managing misinformation
This is perhaps the most important step Facebook can take, and its biggest failure to date. Facebook remains a black box in regard to how its algorithm prioritizes not just news or memes, but nearly everything that’s shared in its main feed. With regard to truthfulness, however, it’s especially lacking. The social network has entire pages devoted to how it deals with harassment and hate speech, and a transparent way for users to report these things. It also has a page where it publishes the number of requests it has received from governments looking to obtain information about its users.

Facebook’s activities in both these areas have been criticized, but at least they exist. There is no comparable, detailed explanation of how it deals with fake information.

Facebook might also consider allowing third parties to occasionally review its algorithms and procedures for how effectively they vet hoaxes (or hate speech, or pornography), and then have them release reports on how well they live up to the standards they set for themselves.

An imperfect but better Facebook
Facebook did not answer questions about the specific measures it has taken to improve its existing technology for detecting hoaxes, or how many people have been devoted to it or will be in the future.

While it clearly needs to do more to wipe out misinformation, no one wants to see Facebook become an “arbiter of truth” the way, for example, heavily censored Chinese social networks remove information that criticizes the government.

The public benefits when it is given broad exposure to an abundance of information and ideas, and left to come to its own conclusions. A myriad of information exists on the internet, some of it good, some of it not, and it’s all one click away from Facebook.

Just as Facebook has perfected a bias towards showing us information that panders to our political beliefs, it can strive to perfect a bias towards presenting the truth—even if it’s an imperfect or inconsistent bias. Tsui says it’s important Facebook doesn’t become “Big Daddy, keeping what they think is true and removing what they think is not true.” Instead, he believes it “can do a lot more and should do a lot more to help [users] make decisions” about the truthfulness of what they read.
 
Zuck is now caught between a rock and a hard place, under a mountain of pressure.
“no one wants to see Facebook become an “arbiter of truth” the way, for example, heavily censored Chinese social networks remove information that criticizes the government.”
Google is under pressure too, and it comes from the outgoing White House.
 
Obama criticizes the spread of fake news on Facebook
A poke from the president
By Casey Newton (@CaseyNewton), Nov 17, 2016, 7:43pm EST

President Obama took time during a press conference today to assail the spread of fake news online, particularly the way it travels on Facebook. “In an age where there’s so much active misinformation and it’s packaged very well and it looks the same when you see it on a Facebook page or you turn on your television,” he said, “if everything seems to be the same and no distinctions are made, then we won’t know what to protect.”

Obama’s remarks come at a time when Facebook has been subject to withering criticism for allowing fake stories and hoaxes to spread to millions of users at a critical time. A BuzzFeed analysis this week found that top-performing fake stories performed better on Facebook during the run-up to this month’s presidential election than accurate stories shared by traditional media sites.

Obama made his remarks during a joint press conference in Germany with its chancellor, Angela Merkel, during a valedictory tour of Europe. He appears to have been thinking about Facebook’s fake-news problem for a while: a profile about his final days as president in The New Yorker today said he had talked “obsessively” about a BuzzFeed report on how Macedonian teens were spamming Facebook with fake Trump news for fun and profit.

Facebook has already said it will ban sites that post fake news from using its advertising network to make money. But CEO Mark Zuckerberg has resisted the idea that the company played a role in influencing the outcome of the US election, calling the idea “crazy.” Still, he said the company would do more to combat the spread of fake news — while saying Facebook would resist becoming an “arbiter of truth.”




 
No sense of responsibility at all. While blaming the non-mainstream media, shouldn't he take a look at how his own mainstream media behave? When the mainstream media were blatantly cheating and playing favorites, why didn't he step up and speak out?
The non-mainstream only got a stage because the mainstream is so rotten. I hope FB/Google hold the line on one side, and improve their content-push mechanisms on the other.
 
Facing American pressure to crack down on fake news, can Facebook hold out?

Judging by the situation, Facebook may not.

"Facebook has been blocked in China for the past seven years, and it looks like Zuckerberg really is preparing on several fronts to win the Chinese market. A software engineer told the New York Times that Facebook has secretly developed a censorship tool that could be used to filter politically sensitive content. The author Finn Mayer-Kuckuk notes that Chinese internet companies must accept censorship, which is why firms like Tencent and Sina pay out of their own pockets to hire censors to delete sensitive content: 'Since 2009 the Chinese government has also required foreign companies such as Google, Twitter, and Facebook to censor themselves. Google pulled out of the Chinese market entirely as a result. Google co-founder Sergey Brin, who was born in the Soviet Union, wanted nothing to do with dictatorship.'"
 