Photos of intimate body parts are bad. But a livestream of a suicide is fine?
Social media analysts say there’s a reason why the death of a US veteran spread so far and fast this week: business bottom lines.
The Facebook livestream in which 33-year-old US veteran Ronnie McNutt took his own life on August 31 was initially deemed not to breach the platform’s community standards.
A day later, the global megacorporation changed its mind.
But, more than a week later, it’s still out there. Unwitting Facebook, Instagram, Twitter and TikTok users are having the graphic footage shoved into their feeds.
RELATED: Warning to keep kids off TikTok today
Why?
It’s all a numbers game, says Flinders University researcher Dr Zac Rogers.
Social media algorithms love such things, he says. They are programmed to capture viewers’ attention. They want engagement. They want clicks. And the artificial intelligence behind them doesn’t care what drives that engagement or what the content contains.
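What Dr Rogers describes can be sketched in a few lines of code. The snippet below is a hypothetical illustration only – the field names, weights and scoring function are assumptions, not any platform’s real system – but it shows the structural problem: a ranker that optimises purely for predicted engagement has no term for what the content actually depicts.

```python
# Illustrative sketch of an engagement-driven feed ranker.
# All field names and weights are hypothetical; real rankers are secret.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_clicks: float      # model's estimated click probability
    predicted_watch_time: float  # expected seconds of attention held
    predicted_shares: float      # expected number of re-shares

def engagement_score(post: Post) -> float:
    # The score rewards attention, clicks and shares.
    # Note what is absent: nothing asks what the content actually is.
    return (2.0 * post.predicted_clicks
            + 0.1 * post.predicted_watch_time
            + 5.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest predicted engagement first: shocking content that
    # drives clicks and shares rises to the top by design.
    return sorted(posts, key=engagement_score, reverse=True)
```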
“The user is, in fact, a resource from which value is extracted via exploitative means,” Dr Rogers says. “Most users of internet platforms have no idea that they can not only be subject to micro-targeting, but that they are also a type of cognitive raw material the internet runs on. And the high-octane version of that fuel is extremism and hate.”
Moderation may be difficult. But it can be done.
We know that even a glimpse of a breastfeeding mother will be quickly removed. So why not tragic suicides like Ronnie McNutt’s?
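Part of the answer – and this is an assumption about how such moderation pipelines generally work, not a documented account of Facebook’s systems – is that per-image nudity classifiers are mature and cheap enough to run on every upload, while a livestream is hours of video with no equally reliable automated signal for self-harm. A toy sketch of that gap:

```python
# Toy sketch of the moderation asymmetry described above.
# Both classifiers and the threshold are hypothetical placeholders.

def nudity_score(image_bytes: bytes) -> float:
    # Stand-in for a mature per-image nudity classifier.
    # Real systems use trained models; this placeholder returns 0.0.
    return 0.0

def moderate_photo(image_bytes: bytes) -> str:
    # Cheap per-upload check: runs instantly on every photo.
    if nudity_score(image_bytes) > 0.8:
        return "removed"
    return "published"

def moderate_livestream(frames: list) -> str:
    # Hours of frames, audio and context, with no single reliable
    # self-harm signal: platforms fall back on user reports and
    # slow human review, which is why such streams stay up.
    return "queued_for_human_review"
```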
AN ALGORITHMIC CRISIS
Mr McNutt’s friend, Josh Steen, told US media he was at a loss over how the crisis had been handled.
“If some woman posts a topless photo, their software will detect that, remove it, and ban their account,” Mr Steen said. “That’s more offensive than my friend killing himself?”
Mr Steen said he and others had attempted to get help for their distressed friend. They called family. They called police. They also repeatedly appealed to Facebook to intervene and close the livestream.
The stream continued for two hours. An hour-and-a-half after his death, someone (or something) from Facebook finally responded. The broadcast did not violate any community guidelines, the statement said.
So the footage remained online.
“Ronnie’s video was up for eight hours, and it had already been shared to a viral level before it was pulled down,” Mr Steen told Forbes. “If Facebook had done their job, this video wouldn’t be public.”
RELATED: PM condemns graphic TikTok suicide video
Then parents began to notice the graphic footage appearing in their teens’ algorithm-driven TikTok “For You” recommendation feeds. Mr McNutt’s family faced a barrage of harassment.
His Facebook profile was flooded with instructions on how to find the footage, and fake fundraisers set up in his name have been targeting his parents for their backing.
“His entire family watched him commit suicide,” Mr Steen said. “Now they’re being forced to watch it over and over again.”
Facebook, Twitter and TikTok have yet to explain how the footage came to be pushed into automated recommendation feeds, or why it was allowed to persist for so long.
It’s not the first time they’ve faced such scrutiny.
THIS ISN’T THE FIRST TIME
Earlier this year, TikTok took three hours to respond to the live-streamed suicide of a 19-year-old.
In 2017, a 12-year-old girl from Georgia live-streamed her own suicide on Facebook after being sexually abused. The footage remained online for up to two weeks.
Days later, an aspiring actor killed himself while live-streaming from a car on a Los Angeles street. Then a 14-year-old from Florida followed suit.
In 2019, Brenton Tarrant live-streamed his New Zealand massacre. Recordings and clips quickly spread.
“We do not allow any organisations or individuals that proclaim a violent mission or are engaged in violence to have a presence on Facebook,” the corporation said in a statement at the time.
Yet US President Donald Trump’s inciting “when the looting starts, the shooting starts” response to protests in Minneapolis in May remains online.
RELATED: Sinister formula behind your Facebook feed
Facebook has been repeatedly accused of actively promoting such content. It’s alleged its artificial intelligence systems have identified such material as the most efficient means of winning viewers’ attention and interaction.
And those wishing to promote particular messages – be they trolls, propagandists or marketers – know how to ‘game’ the systems, Dr Rogers says. Fake profiles. Masses of ‘bot’ accounts. Carefully crafted high-profile ‘influencers’. All are active participants in the race to make particular messages go ‘viral’.
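How that gaming works can itself be sketched. The snippet below is invented for illustration – the ‘velocity’ metric and all the numbers are assumptions – but it shows why a few hundred coordinated bot accounts sharing a post in its first minutes can make it look organically viral to an engagement-driven ranker:

```python
# Hypothetical illustration of coordinated 'bot' amplification.
# The velocity metric and all numbers are invented for the example.

def engagement_velocity(shares: int, minutes_since_post: int) -> float:
    # Rankers often favour posts whose engagement is accelerating,
    # so early shares count for far more than late ones.
    return shares / max(minutes_since_post, 1)

def with_bot_boost(organic_shares: int, bot_accounts: int) -> int:
    # Each coordinated account shares the post within minutes,
    # inflating early velocity so the algorithm promotes it widely.
    return organic_shares + bot_accounts

organic = engagement_velocity(shares=12, minutes_since_post=30)
boosted = engagement_velocity(shares=with_bot_boost(12, 488),
                              minutes_since_post=30)
print(f"organic: {organic:.2f} shares/min, boosted: {boosted:.2f}")
```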
“Their algorithms are failing, whoever is reviewing these things is failing,” Mr Steen said of his friend’s death going viral. “It’s going to continue to happen, it’s going to get worse if something isn’t done. This has got to be the breaking point.”
SECRET ALGORITHMS
Google. Facebook. Twitter. TikTok. Just about every major platform is obsessive about keeping its sorting algorithms secret.
But trolls, lobbyists, influencers and propaganda services have figured them out well enough to bend them to their purposes.
“Many observers agree that it’s long past time to implement more regulation and oversight on the tech sector,” writes Brookings Institution researcher Alex Engler.
“Yet the practices of these companies are obscured to reporters, researchers and regulators.”
It’s about profits, not users, he says.
“Much of it fuels an advertising technology industry that enables gender and racial discrimination in housing and employment advertising, evaluates Airbnb and Grubhub interactions to assign secret consumer scores for individual customers, and fuels dangerous misinformation.”
The attention economy, Dr Rogers says, is one of extraction via exploitation.
Users think they are getting free content tailored to their preferences and needs. But these corporations are not altruistic.
Dr Rogers says users are the free labour that runs internet advertising.
“You are constantly being nudged, steered, and manipulated into acquiescing further. The more attention each of us gives the internet, the more our capacity to reflect on nuance is eroded. As we begin to think in more binary terms, we are further socialised into a system designed for machine efficiency and economic extraction by a handful of corporations.”
Forget fighting machines of glittering metal. AI is taking over the world through social media.
“If that sounds like we are being nudged to become more like machines, it’s because we are,” Dr Rogers says. “Machines see humans as noise. Noise is inefficient. When humans acquiesce, machines seem smarter, but they are nothing of the sort. It’s humans who have been diminished.”
TAMING THE WILD WEST
Increasingly organised campaigns have succeeded in convincing big names such as Coca-Cola, Adidas and Unilever to pull their advertising from Facebook over such behaviour.
But CEO Mark Zuckerberg thinks it’s all a storm in a teacup: “My guess is that all these advertisers will be back on the platform soon enough,” he reportedly told staff in an internal memo.
The company emphatically denies that it profits from hate.
“We don’t benefit from hate,” a Facebook response to US media reads.
“We invest billions of dollars each year to keep our community safe and are in deep partnership with outside experts to review and update our policies.”
But does Facebook’s AI agree?
“None of this is science fiction,” Dr Rogers says.
“The human-computer interface has been a zone in which human cognition has been reverse engineered for forty years. Every quirk, bug, and vulnerability has been studied and reapplied to make computers seem more clever, more complete. But it’s based entirely on deception. The internet scaled this phenomenon. Social media supercharged it. At scale, it is an attempt to construct an entirely prosthetic economy. A terraformed economy based on exploiting the human mind.”
It’s an economy where AI is in control.
“The worst of tech is known only to a rumour mill of in-house data scientists,” Engler writes. “They know the results from database queries that never made it into a memo and the questions not to ask of their own data.”
Now one such in-house source has spoken out.
Software engineer Ashok Chandwaney has just quit Facebook after five years. In his resignation letter, he declared: “I’m quitting because I can no longer stomach contributing to an organisation that is profiting off hate in the US and globally.”
Jamie Seidel is a freelance writer | @JamieSeidel