Source: Guardian UK
The activities of users of Twitter and other social media services were recorded and analysed as part of a major project funded by the US military, in a program that covers ground similar to Facebook’s controversial experiment into how to control emotions by manipulating news feeds.
Research funded directly or indirectly by the US Department of Defense’s military research department, known as Darpa, has involved users of some of the internet’s largest destinations, including Facebook, Twitter, Pinterest and Kickstarter, for studies of social connections and how messages spread.
While some elements of the multi-million dollar project might raise a wry smile – research has included analysis of the tweets of celebrities such as Lady Gaga and Justin Bieber, in an attempt to understand influence on Twitter – others have resulted in the buildup of massive datasets of tweets and other types of social media posts.
Several of the DoD-funded studies went further than merely monitoring what users were communicating on their own, instead messaging unwitting participants in order to track and study how they responded.
Shortly before the Facebook controversy erupted, Darpa published a lengthy list of the projects funded under its Social Media in Strategic Communication (SMISC) program, including links to actual papers and abstracts.
The project list includes a study of how activists with the Occupy movement used Twitter, a range of research on tracking internet memes, and work on understanding how influence behaviour (liking, following, retweeting) plays out on popular social media platforms such as Pinterest, Twitter, Kickstarter, Digg and Reddit.
Darpa, established in 1958, is responsible for technological research for the US military. Its notable successes have included no less than Arpanet, the precursor to today’s internet, and numerous other innovations, including onion routing, which powers anonymising technologies like Tor. However, thanks to some of its more esoteric projects, which have included thought-controlled robot arms, city-wide surveillance programs and exo-skeletons, the agency has also become the subject of many conspiracy theories, and a staple in programmes like the X-Files.
Unveiled in 2011, the SMISC program was regarded as a bid by the US military to become better at both detecting and conducting propaganda campaigns on social media.
On the webpage where it has published links to the papers, Darpa states the general goal of the SMISC program is “to develop a new science of social networks built on an emerging technology base”.
“Through the program, Darpa seeks to develop tools to support the efforts of human operators to counter misinformation or deception campaigns with truthful information.”
However, papers leaked by NSA whistleblower Edward Snowden indicate that US and British intelligence agencies have been deeply engaged in planning ways to covertly use social media for purposes of propaganda and deception.
Documents prepared by NSA and Britain’s GCHQ (and previously published by the Intercept as well as NBC News) revealed aspects of some of these programs. They included a unit engaged in “discrediting” the agency’s enemies with false information spread online.
Earlier this year, the Associated Press also revealed the clandestine creation by USAid of a Twitter-like, Cuban communications network to undermine the Havana government. The network, built with secret shell companies and financed through a foreign bank, lasted more than two years and drew tens of thousands of subscribers. It sought to evade Cuba’s stranglehold on the internet with a primitive social media platform.
Of the funding provided by Darpa, $8.9m has been channelled through IBM to a range of academic researchers and others. A further $9.6m has gone through academic hubs such as Georgia Tech and Indiana University.
Facebook, the world’s biggest social networking site, has apologised for the study, which involved secret psychological tests on nearly 700,000 users in 2012, and which prompted outrage from users and experts alike for being “poorly communicated” to the public.
The experiment, which resulted in a scientific paper published in the March issue of Proceedings of the National Academy of Sciences, hid “a small percentage” of emotional words from people’s news feeds, without their knowledge, to test what effect that had on the statuses or “likes” that they then posted or reacted to.
However, it appears that Facebook was involved in at least one other military-funded social media research project, according to the records recently published by Darpa.
The research was carried out by Xuanhuai Wang, an engineering manager at Facebook, as well as Yi Chang, a lead scientist at Yahoo labs, and others based at the Universities of Michigan and Southern California.
The project, which related to how users understood and consumed information on Twitter, at one point analysed the tweets, retweets and other interactions spawned by Lady Gaga (described as “the most popular elite user on Twitter”) and Justin Bieber (“who is extremely popular among teenagers”).
Other studies looked further afield. One, “On the Study of Social Interactions on Twitter”, which was carried out by the University of Southern California, collected tweets from 2,400 Twitter users who had identified themselves as residing in the Middle East. It analysed how often they had interactions with other users and how these were spread.
Several studies related to the automatic assessment of how well different people in social networks knew one another, through analysing frequency, tone and type of interaction between different users. Such research could have applications in the automated analysis of bulk surveillance metadata, including the controversial collection of US citizens’ phone metadata revealed by Snowden.
Studies which received military funding channelled through IBM included one called “Modeling User Attitude toward Controversial Topics in Online Social Media”, which analysed Twitter users’ opinions on fracking.
Discussing the applicability of their research, the study’s authors stated: “For example, a government campaign on Twitter supporting vaccination can engage with followers who are more likely to take certain action (eg spreading a campaign message) based on their opinions.”
“As another example, when anti-government messages are spread in social media, government would want to spread counter messages to balance that effort and hence identify people who are more likely to spread such counter messages based on their opinions.”
A similarly titled project out of the University of Southern California, “The Role of Social Media in the Discussion of Controversial Topics”, studied the behaviour of Twitter users posting about a 2012 vote in California on measures such as raising taxes, genetically modified organisms and the death penalty.
“Our findings suggest Twitter is primarily used for spreading information to like-minded people rather than debating issues,” the authors wrote in their paper on the project.
A study at Georgia Tech, “Cues to Deception in Social Media Communications”, involved an in-laboratory experiment using an experimental social media platform, “FaceFriend”, and 61 paid participants. While past research had investigated “written deception” in communications such as email, the study expanded this into social media, and the researchers concluded: “Breaking news stories and world events – for example, the Arab Spring – are heavily represented in social media, making them susceptible topics for influence attempts via deception.”
Several of the DoD-funded projects went further than simple observation, instead engaging directly with social media users and analysing their responses.
One of multiple studies looking into how to spread messages on the networks, titled “Who Will Retweet This? Automatically Identifying and Engaging Strangers on Twitter to Spread Information”, did just that.
The researchers explained: “Since everyone is potentially an influencer on social media and is capable of spreading information, our work aims to identify and engage the right people at the right time on social media to help propagate information when needed.”
In the paper, which included data gathered through actively engaging 3,761 people on Twitter around the topics of public safety and bird flu, the researchers added: “Unlike existing work, which often uses only social network properties, our feature set includes personality traits that may influence one’s retweeting behaviour.”
In a statement, Darpa defended its funding of the research as essential to US defense interests.
“Social media is changing the way people inform themselves, share ideas, and organize themselves into interest groups, including some that aim to harm the United States,” said a spokesman. “Darpa supports academic research that seeks to understand some of these dynamics through analyses of publicly available discussions conducted on social media platforms.”
Sources said that data was from public streams in social networks, and was collected and stored by academics at institutions conducting the research, not by Darpa itself.
The Guardian approached a number of individuals involved in research, asking them for their views on why they believed the US military may be interested in funding research of this type, and asking about the extent to which consent was sought from people whose social media posts were recorded and analysed.
Among those who replied, Emilio Ferrara, who was involved in the research paper on “The Digital Evolution of Occupy Wall St”, said: “According to federal regulations of human experimentation, for studies that don’t affect the environment of online users, and whereas one can freely gather online data – say, from the public Twitter feed – there is no requirement of informed consent. This is the framework under which our Twitter study was carried out; moreover, all our studies on Twitter look into aggregate collective phenomena and never at the individual level.”
A colleague, Dr Filippo Menczer, added: “In our lab we study all aspects of the diffusion of information in social media.
“This work has broad applications as we strive to understand fundamental mechanisms of social communication, such as how ideas and ‘memes’ compete for our attention, how they sometimes go viral, etc.”