Pro-Russian Bots Got Activated on Social Media and Attacked McCain When He Came Down on Trump
I am very visual, and to understand things better I need a picture in my mind. Here are pixel robot icons, an isolated vector set of 8-bit bots, shown billions of times larger than their actual size. They are cell-sized travelers of the internet, going from computer to computer or standing guard at a site such as Wikipedia, making sure other bots intent on corrupting data, such as visitor counts, don't come in and change it. This blog uses them, and internet companies use them. They turn the thinking of the collective, in this case humans, into counted numbers and graphs. They make their biggest impact on the thinking of the masses wherever you find the largest numbers of humans, such as social media. It is believed they can change elections by posting incorrect data on behalf of the wrong candidates, and that is exactly what just happened and what this story is all about. {adamfoxie.blogspot.com}
After violent protests rocked Charlottesville, Virginia last month, Republican Senator John McCain took to Twitter to condemn hatred and bigotry and urge President Donald Trump to speak out more forcefully.
Pro-Russian bots get activated on social media.
Within hours, an online campaign attacking McCain -- a frequent Trump critic -- began circulating, amplified with the help of automated and human-coordinated networks known as bots and cyborgs, linking to blogs on “Traitor McCain” and the hashtag #ExplainMcCain.
After the 2016 U.S. presidential race was subject to Russian cyber meddling, analysts say the ferocity of more recent assaults is a preview of what could be coming in the 2018 elections, when Republicans will be defending their control of both chambers of Congress.
“They haven’t stood still since 2016,” said Ben Nimmo, a senior fellow in information defense at the Digital Forensic Research Lab at the Atlantic Council in Washington, which tracked the activity. “People have woken up to the idea that bots equal influence and lots of people will be wanting to be influencing the midterms.”
While special counsel and former FBI chief Robert Mueller keeps investigating the 2016 race, Nimmo’s work is among a number of initiatives cropping up at think tanks, start-ups, and even the Pentagon seeking to grasp how bots and influence operations are rapidly evolving. Blamed for steering political debate last year, bots used for Russian propaganda and other causes are only becoming more emboldened, researchers say.
They’re preparing “and sowing seeds of discord” and “potentially laying the groundwork for what they’re going to do in 2018 or 2020,” said Laura Rosenberger, senior fellow and director of the Alliance for Securing Democracy at the German Marshall Fund.
The alliance last month unveiled Hamilton 68, an online dashboard designed to track Russian influence operations on Twitter with the hope of better highlighting sources of information.
The site culls real-time data from 600 Twitter users, analyzing trending hashtags, topics, and links. The dashboard’s developers say the accounts they selected cover those likely controlled by Russian government influence operations. Other accounts are pro-Russia users that may be loosely connected to the government and some are people influenced by the first two groups and who are active in bolstering Russian media themes. Some are bot accounts.
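The dashboard’s methodology isn’t published as code, but as a hedged sketch of the kind of tally it describes, the snippet below counts hashtags and links across a monitored set of accounts. The `tracked_tweets` sample and its field names are hypothetical stand-ins for data already collected from Twitter.

```python
from collections import Counter

# Hypothetical sample of tweets already collected from a tracked set of
# accounts; in practice these would come from the Twitter API.
tracked_tweets = [
    {"account": "example_user_1", "hashtags": ["ExplainMcCain"], "urls": ["http://example.com/blog1"]},
    {"account": "example_user_2", "hashtags": ["ExplainMcCain", "Antifa"], "urls": []},
    {"account": "example_user_3", "hashtags": ["Antifa"], "urls": ["http://example.com/blog1"]},
]

hashtag_counts = Counter()
url_counts = Counter()

# Tally every hashtag and link promoted across the monitored network.
for tweet in tracked_tweets:
    hashtag_counts.update(tweet["hashtags"])
    url_counts.update(tweet["urls"])

# Surface the most-promoted hashtags and links, as a dashboard might.
print("Top hashtags:", hashtag_counts.most_common(5))
print("Top links:", url_counts.most_common(5))
```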
“Our view is that exposure is a really important element of beginning to push back on some of these efforts,” said Rosenberger, who served at the National Security Council and the State Department in the Obama administration.
Cyborgs vs. Bots
Short for “robot,” internet bots come in a couple of forms. There are automated versions in which software pumps out posts from social media accounts, often at rates no human could conceivably match. Others are dubbed cyborgs -- some of their content is automatically spit out, but a person also takes over posting at times. They can also be human-run accounts that are hacked or taken over by a robot.
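As a rough illustration of how researchers flag that kind of inhuman posting rate, the sketch below classifies an account by its average tweets per day. It is only a minimal heuristic under assumed inputs: the timestamps are made up, and the 72-tweets-a-day cutoff is an assumption chosen for the example, not a threshold cited in this article.

```python
from datetime import datetime

def tweets_per_day(timestamps):
    """Average daily posting rate over the span of an account's tweets."""
    if len(timestamps) < 2:
        return float(len(timestamps))
    span_days = (max(timestamps) - min(timestamps)).total_seconds() / 86400
    return len(timestamps) / max(span_days, 1.0)

def classify_account(timestamps, bot_threshold=72.0):
    """Crude heuristic: flag accounts whose sustained rate exceeds the threshold."""
    rate = tweets_per_day(timestamps)
    return "possible bot/cyborg" if rate > bot_threshold else "likely human"

# Hypothetical example: an account posting every 10 minutes around the clock.
example = [datetime(2017, 9, 1, h, m) for h in range(24) for m in range(0, 60, 10)]
print(classify_account(example))  # -> "possible bot/cyborg"
```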
Not all bots are nefarious. Although researchers say pro-Russian operatives exploiting social media have made headlines lately, the use of bots is broadening as they prove they can be influential in moving narratives from niche circles and the fringes of the internet to a wider audience by spreading links to blogs and news sites, as well as popularizing memes and hashtags. That will make them a potentially potent tool for competing interests trying to influence U.S. political debate in 2018 and beyond.
It’s hard to determine where bots originate. Analysts are able to monitor the messaging that bots latch on to, such as advocating for Russian and alt-right narratives or anti-NATO stances. Nation-states or groups helping political campaigns might look to employ bots, given their power to shift debates.
And while many online campaigns are clearly fake, bots are also used in more sophisticated efforts that start from a basis in truth.
Ukraine
Among the top themes users boosted the week after the Charlottesville clashes was “alt-right alarmism” about the left-wing anti-fascist movement, known as Antifa, according to the dashboard findings. The most-tweeted link in the Russian-linked network followed by the researchers was a petition to declare Antifa a terrorist group.
On Twitter, pro-Russian bots and cyborgs helped promote accusations that McCain allied with neo-Nazis in the past, such as during Ukraine’s civil unrest in 2013. At the time, the Arizona Republican, who is known for his tough stance against Russian meddling in Ukraine, met with and appeared on a stage with nationalist leader Oleh Tyahnybok, whose group has neo-Nazi roots.
One Twitter account tracked by Nimmo’s lab, @TeamTrumpRussia, is what the researchers call a “pro-Kremlin cyborg site.” It averages a rate of more than 220 tweets a day, including memes about McCain in the week after the Charlottesville unrest, which left one person dead.
Top Russian officials, including President Vladimir Putin, have repeatedly rejected accusations the country meddled in the U.S. election, a finding at odds with the conclusions of the U.S. intelligence community. In January, the nation’s top intelligence agencies agreed that Russia interfered in the election to discredit Hillary Clinton and boost Trump, who has often appeared reluctant to embrace the findings. Trump’s intelligence chiefs, including CIA Director Mike Pompeo and Director of National Intelligence Dan Coats, have agreed with the conclusions.
Putin told NBC News in June that there’s “no proof” of any involvement by Russia at the “state level.” But he did say that “patriotically minded” Russians could have been behind intrusions into Clinton’s campaign.
The drumbeat of news about Russia’s role in the election has only helped push relations with the U.S. to post-Cold War lows. Nonetheless, analysts say Russia’s longer-term goal is less focused on Trump than on helping disrupt or undermine U.S. democratic institutions -- an effort that has been under way for decades but which now has a more technological edge.
Researchers say Twitter isn’t the only domain for bots. They’re increasingly expanding to other platforms like YouTube, Instagram, and LinkedIn. They even operate interactive “chatbots” on mobile applications available on Facebook, said Nitin Agarwal, an information science professor at the University of Arkansas at Little Rock.
Mimicking Human Behavior
“The level of sophistication among these bots is increasing and becoming more and more advanced to try to evade bot detection and suspension from Twitter and other platforms,” said Agarwal, who’s spent a decade studying the use of social media for influence operations. They’re also trying to “mimic human behavior so that they can gain your trust and they can influence your behaviors,” he said.
Because the use of bots is still new, trying to understand how they operate has become a cutting-edge field. It’s even caught the attention of the Pentagon’s Defense Advanced Research Projects Agency, known as DARPA.
In May, the agency awarded Agarwal and Intelligent Automation Inc., a Rockville, Maryland-based technology company, a contract of up to $1.5 million over three years -- if research milestones are met -- to study the classification of “social bots,” what their intent is and how they’re applied on social media.
For researchers, Twitter is a data gold mine because users’ accounts are usually publicly available. It’s harder to access private content on Facebook.
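As a minimal sketch of that openness, the snippet below pulls a public account’s recent tweets using the tweepy library’s v2 client. The bearer token and the `example_handle` username are placeholders, and real use requires Twitter developer credentials.

```python
import tweepy

# Placeholder credential; a real bearer token is issued through the
# Twitter developer portal.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"

client = tweepy.Client(bearer_token=BEARER_TOKEN)

# Look up a public account by its (placeholder) username and fetch its
# most recent tweets, which are openly available to researchers.
user = client.get_user(username="example_handle")
tweets = client.get_users_tweets(id=user.data.id, max_results=100)

for tweet in tweets.data or []:
    print(tweet.id, tweet.text)
```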
‘Powerful Antidote’
When asked how it was responding to growing sophistication by bots, a Twitter spokeswoman referred to a June 14 blog post by Colin Crowell, the company’s vice president of public policy, government, and corporate philanthropy. Crowell outlined how Twitter is curbing “bots and other networks of manipulation,” including growing its team and resources and working “hard to detect spammy behaviors.”
“Twitter’s open and real-time nature is a powerful antidote to the spreading of all types of false information,” Crowell wrote. “This is important because we cannot distinguish whether every single Tweet from every person is truthful or not. We, as a company, should not be the arbiter of truth.”
Since the election, Twitter and Facebook have taken steps to counter false news and kill off fake accounts. In August, Facebook said it created a software algorithm to flag stories that may be suspicious and send them to third-party fact checkers. But bots are also getting savvier at dodging detection. That poses a challenge to social media companies trying to crack down on fake accounts -- and fake news.
And with bot activity accelerating as the U.S. heads into another election season in 2018, social media companies could face further risks from these networks.
A challenge for social media companies is “how good their algorithms are at weeding out bot strikes,” Nimmo said. “That’s something that they need to be thinking of.”
— With assistance by Sarah Frier