Getting rid of social media’s recommended content is similar to blocking telemarketers from your phone or cutting junk from your mailbox. It means cleansing your proverbial temple of merchants and snake oil salesmen.

The problem is that social media companies really want our engagement, because it translates into more “watch time,” more ad clicks, and more opportunities to sell us stuff. Sooner or later, however, one realizes that “recommended content” is precisely how those companies get us: by subtly telling us whom to follow, what to watch, and where to focus our attention.

The promises of unbound access and connectivity, delivered by market-tested buzzwords and state-approved Silicon Valley personalities, lose their luster once we find ourselves in environments that constantly fuel our addictions and redirect our attention. This is when we start to wonder who exactly is doing the recommending and to what ends.

If the answer is “profit-seeking companies that exploit our attention for their own benefit,” then we have to ask: what’s the point of using social media platforms in the first place, aside from utilizing them as glorified messengers and content depositories?

While I’ve had some success in cutting recommended videos, ads, and various attention-grabbing modules from popular social media platforms (more on that below), I’ve realized that hiding recommended content is a temporary solution. To dismantle the trap of sponsored and algorithm-driven recommendations, we have to pressure social media companies to fundamentally address their manipulative practices.

Instead of listening to excuses about how hard it is to monitor content, or how sexy it is to “break the rules” or “lean in,” we have to analyze these platforms in terms of how they benefit the public. Judging by what those who engineered such platforms and mentored their creators have to say, it’s easy to see that the ultimate beneficiaries are social media companies’ founders and top-level executives, who have little incentive to deviate from the perception that their companies are GREAT. But a look at what these corporations actually do and how they came about makes their intent clear.

What can we expect from a video-sharing site that has become one of the last refuges for independent journalism (YouTube), a people-rating site that was re-branded as a surveillance company (Facebook), and a “news breaking” site (Twitter) that asks us to express ourselves through blurbs, while providing a platform to its most profitable user — the openly racist President of the United States of America?

The recommended content that gets shuffled, sneaked, and projected to social media users creates an illusion of choice inside glass-walled silos. Those “recommendations” aren’t just filling up space around what we want to see — they give social media companies power over us, as we subjugate ourselves to the whims of mysterious algorithms that are built to exploit, and profit from, our addictions and fears.

YouTube’s Recommendations: Conspiracies, Hate, and Alt-right Propaganda

On January 25, 2019, YouTube announced that it is “changing its algorithms to stop recommending conspiracies and false information.” As an example, The Washington Post singled out how YouTube’s algorithm promoted conspiracies about the mass shooting in Parkland, Florida, that took the lives of 17 people last year.

If you are a YouTube user, you probably don’t need an explanation as to how YouTube recommends conspiracies or other content you didn’t ask for — it does so by placing recommended videos around the content you actually want to engage with.

For the Google-owned company, what matters most is “watch time,” which is why what we see on YouTube increasingly resembles content that interests, and is shared by, those who spend most of their time online.

The best case scenario for Google and YouTube’s executives is for you to do the same.

As with my experience on Facebook, I realized that YouTube’s recommended content didn’t offer me alternative viewpoints; it simply exposed me to media operatives who had found a way to game YouTube’s algorithm or put money behind their posts.

After failing to keep racist, transphobic, corporatist, click-bait, alt-right garbage videos off my YouTube homepage, I removed recommended videos altogether, which made the platform much more tolerable.

To illustrate the change, here’s what the recommended videos looked like on my YouTube homepage before I removed them:

Content that I’ve visited and like, such as Kali Uchis and Eyedea & Abilities’ videos, as well as content I might like, such as a Bertrand Russell video, gets mixed up with transphobic, corporatist garbage from “Syndicon,” “Voice Liberty” and “Bloomberg Markets,” featuring right-wing pundits Jordan Peterson, Ben Shapiro, and Tucker Carlson.

A closer look at the channels that are deemed “recommendable” by YouTube shows what passes as “diversity of opinion” according to the platform’s recommendation algorithm:

Syndicon’s latest videos
Voice Liberty’s latest videos

It’s safe to say YouTube’s algorithm has a thing for right-wing propagandists and sensationalism. Since I don’t have such inclinations, here’s what my YouTube page looks like with a Chrome extension that removes recommendations altogether:

No garbage recommendations on the YouTube homepage.
No garbage recommendations around the single video page.
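
For those curious how such extensions work under the hood: most of them simply hide the recommendation modules with a bit of injected CSS. Below is a minimal sketch of a WebExtension content script in that spirit; it is not the extension I use, and the selectors for YouTube’s homepage grid and “Up next” sidebar are assumptions about the current markup that tend to break whenever the layout changes.

    // content-script.ts: hide YouTube's recommendation modules with injected CSS.
    // The selectors below are assumptions about YouTube's markup and may need updating.
    const HIDDEN_SELECTORS: string[] = [
      "ytd-rich-grid-renderer",                     // homepage recommendation grid (assumed)
      "#related",                                   // "Up next" sidebar on video pages (assumed)
      "ytd-watch-next-secondary-results-renderer",  // recommendations next to the player (assumed)
    ];

    const style = document.createElement("style");
    style.textContent = HIDDEN_SELECTORS
      .map((selector) => `${selector} { display: none !important; }`)
      .join("\n");
    document.documentElement.appendChild(style);

Paired with a manifest that loads it on YouTube pages, that is essentially all a “remove recommendations” extension has to do.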

The decision to scrap recommended content from my “social media” experience came with the realization that 1) there are entire industries dedicated to capturing and selling my attention online, and 2) I can find what I am looking for without having to browse through Silicon Valley’s recommendations.

After reaching those conclusions, it’s hard to believe that a simple “retooling” of YouTube’s algorithms will fix its issues. For me, the off-chance of being pleasantly surprised or intellectually challenged by a recommended video is not worth having to weave through conspiracies and propaganda every time I log in.

To be clear, YouTube’s algorithm problems have been out in the open for years, but the issue resurfaces in the mainstream only when someone uncovers a truly vile corner of the platform. In an echo of Facebook’s reactionary way of dealing with criticism, YouTube’s structural problems are then swept under the rug with promises of more oversight, until the next time a critical mass of people notices something alarming and the PR cycle starts again.

I realize a lot of people don’t mind overlooking YouTube’s algorithm problems as long as they have access to the content they want. Similarly, many are probably OK with being shown Goldman Sachs and Raytheon ads in-between and around the content they are viewing on social media — be it on their phone, their watch, their messenger, or their VR device.

Nevertheless, I have a sneaking suspicion that as soon as someone offers a viable platform that doesn’t profit from our attention and personal data, Facebook, YouTube, and others will quickly go the way of MySpace, which, for what it’s worth, exited the stage much more gracefully.

In the meantime, Facebook is fighting tooth and nail to stay relevant through Instagram and, more recently, “Messenger for Kids.”

Tweet, Re-tweet, and Follow (The People We Suggest)

Using recommended content to increase user engagement is a strategy that is also utilized by Twitter — the place for “breaking news” and discombobulated thoughts.

The way Twitter gets you to see “additional” content is through the accounts you follow. When you follow someone, you are also shown posts and accounts of the users they follow, which eventually transforms your “feed” into an echo chamber of blue-checked “digital influencers.”

Users are constantly followed by Twitter’s criminally titled “Who to follow” section, which features exciting, left-field personalities such as Hillary Clinton, Donald J. Trump, and Ben & Candy Carson.

Similarly to what I did with my YouTube homepage, I decided to remove all extra content (trends, “Who to Follow,” etc.) from my Twitter page through a browser extension. I then “unfollowed” everyone and started to browse Twitter through lists. This cleared my feed of sponsored ads and “third party” content.
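
For the technically inclined, the same hide-it-with-a-script approach works here, with one wrinkle: Twitter’s interface constantly re-renders itself, so an extension has to re-apply its rules as the page changes. A minimal sketch follows, again assuming selectors (the aria-labels on the “Trends” and “Who to follow” modules) that are my guesses and will likely drift with the next redesign.

    // Hide Twitter's "Trends" and "Who to follow" modules, re-applying on every re-render.
    // Both selectors are assumptions about Twitter's markup and tend to change.
    const MODULE_SELECTORS: string[] = [
      '[aria-label="Timeline: Trending now"]',
      '[aria-label="Who to follow"]',
    ];

    function hideModules(): void {
      for (const selector of MODULE_SELECTORS) {
        document.querySelectorAll<HTMLElement>(selector).forEach((el) => {
          el.style.display = "none";
        });
      }
    }

    // Twitter swaps DOM nodes in and out constantly, so watch for changes and re-hide.
    new MutationObserver(hideModules).observe(document.body, { childList: true, subtree: true });
    hideModules();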

However, I suspect that unfollowing everyone also made me a suspicious account in the eyes of Twitter, as I saw a steep decline in my reach and followers. As soon as I started following accounts again, I began gaining followers.

The game of “either follow or remain in an echo chamber” forces new users, who don’t have many followers, to find creative ways to stand out in the stream of never-ending blurbs. This is often achieved by inserting yourself into “trending” conversations, or by re-sharing others’ posts with a unique take and hoping that someone will “re-tweet” you.

Many Twitter users exploit this way of attracting attention by re-sharing tweets with the intent to smear, intimidate, or threaten people on the platform. This is Twitter’s bread and butter, as nothing gets people more excited than projecting their frustrations on strangers online.

Right-wing operative directs his followers to a video he doesn’t like, and then to the presenter’s Twitter account.

Twitter’s solution to this toxic way of gaining attention was to introduce “quality filters.” However, filters hardly address Twitter’s fundamental flaws — they merely ask users to stick their head in the sand.

In addition, the platform has notoriously allowed “tough guy” politicians like Marco Rubio and Donald Trump to post tweets threatening to murder foreign leaders and initiate a nuclear war. There’s an obvious double standard for “power users” like Marco and Donald, whose insights are deemed more important and tolerable than those of the unchecked masses.

Even Twitter’s CEO has admitted that his creation is not a place for “nuanced discussion,” which makes mass media’s efforts to entice anything with a pulse to tweet, or have a hashtag, that much more revealing.

YouTube’s “solutions” have been equally ineffective. It was recently discovered that the platform’s recommendation algorithm makes it easy for pedophiles to find and comment on videos of young children. This prompted a number of companies to pull advertising dollars from YouTube since their ads were being shown on said videos.

YouTube’s solution was to disable comments on “tens of millions of videos that feature minors,” in addition to removing inappropriate comments and the accounts that make them, as reported by WIRED.

However, according to Guillaume Chaslot, an AI researcher who worked on YouTube’s recommendation engine, “It’s an AI problem, not a comment section problem.” In an interview with WIRED, Chaslot said that as long as YouTube’s recommendation algorithm bases its decisions on watch time, regardless of what is being watched, the problem won’t go away.

Similarly to Twitter, where journalists and establishment politicians feel compelled to “one-up” each other through cleverly written haikus, YouTube’s algorithm encourages users to produce content regularly in order to stay relevant. Predictably, the pressure to upload long-form videos daily to appease the Algorithm has caused many popular YouTube content creators to burn out.

On both Twitter and YouTube, the quality of content is sacrificed in the rush to produce the fastest tweet or video with the best take on an issue, the one that just might be recommended to you: the ultimate product.

Social Media is Social Control

It’s not hard to see how social media companies benefit the U.S. oligarchy. Expressing ourselves through bits of text, dividing people into “blue checked” and regulars, and creating controlled environments where outrage can be easily manufactured and amplified in the mainstream media is the perfect way to control a population.

The way social media companies treat their employees is illustrative of their inaction when it comes to harmful content and addiction-encouraging platforms.

In “The Trauma Floor: The secret lives of Facebook moderators in America,” published in The Verge, Casey Newton describes the experiences of Facebook content moderators who experience severe anxiety while still in training and continue to struggle with trauma symptoms long after they leave:

Collectively, the employees described a workplace that is perpetually teetering on the brink of chaos. It is an environment where workers cope by telling dark jokes about committing suicide, then smoke weed during breaks to numb their emotions. It’s a place where employees can be fired for making just a few errors a week — and where those who remain live in fear of the former colleagues who return seeking vengeance.

It’s a place where, in stark contrast to the perks lavished on Facebook employees, team leaders micromanage content moderators’ every bathroom and prayer break; where employees, desperate for a dopamine rush amid the misery, have been found having sex inside stairwells and a room reserved for lactating mothers; where people develop severe anxiety while still in training, and continue to struggle with trauma symptoms long after they leave; and where the counseling that Cognizant offers them ends the moment they quit — or are simply let go.

In a Medium article, ex-Google employee Liz Fong-Jones writes about what she describes as an escalation of harassment, doxxing, and hate speech “targeted at marginalized employees within Google’s internal communications”:

It began as concern trolling and rapidly escalated to leaks of the names, photos, and posts of LGBT+ employees to white supremacist sites. Management silently tolerated it for fear of being labeled as partisan. Employees attempted to internally raise concerns about this harassment through official channels, only to be ignored, stonewalled, or even punished for doing so.

Google can only build the best products for users if its base of employees is truly diverse and management listens to their feedback. Failure to do so creates collateral damage among users when product launches trigger issues that could have been caught earlier in development — as occurred with the Google+ rollout. Users are not guinea pigs, and missteps with regard to privacy and human rights cannot be undone. With vulnerable employees feeling unsafe to exist at work, let alone raise concerns about products, we felt we had no other choice but to speak to the media for the first time last year and highlight Google management’s refusal to identify and discipline the employees behind the harassment.


What kind of innovation can come out of such environments?

Well, have you tried to upload a video from your PC to Instagram? Or to combine an audio clip, an image, and text in one post? Or to reach a lot of people without sponsoring or “boosting” content? A simple newsletter could facilitate all of these actions, which illustrates that “social” media don’t care about complexity or possibilities; they care about your daily, addiction-driven “stories” and “watch time.” These limitations show how such companies restrict our modes of expression, while instilling in us the false need to gather likes, followers, subscribers, and so on.

As the myth of the benevolent social media company wears out, people are bound to embrace organizations that don’t treat their users like products. Such platforms would allow us to communicate without begging digital influencers to re-tweet us, paying money to reach more people, or supporting unhealthy, unethical platforms that treat their employees and users like garbage.

Unfortunately, social media executives are in a filter bubble of their own. According to Roger McNamee, ex-Facebook investor and longtime mentor of Mark Zuckerberg, “If people are in a cult, you cannot cure that. People at Facebook and Google live in a preference bubble. They’re so bought into their vision, and the vision is that code cures literally everything … they created a lot of the polarization in America, but they can’t fix it.”

Social media companies’ omnipresence in modern society mirrors the way they themselves exploit recommended content: they pop up in all aspects of our daily lives (media relations, commercials, movies, products, politics, and so on) and beg us to join the stream, even if it’s contaminated by propaganda, conspiracy theories, and intellectual dishonesty.

What exactly we are joining, and how it affects us, becomes irrelevant as we scroll up and down Silicon Valley’s feeds, curated and moderated by people we don’t know, who follow the orders of the managerial class and its government overlords.

After professionally using such platforms for nearly 10 years, there’s no doubt in my mind that they are a cancer to society. To cut it at its roots, we have to escape the trap of “recommended content” in its many forms and counter Silicon Valley’s hollow justifications and self-serving algorithms with platforms that value our health and social bonds.

Author

I am a Bulgarian American writer and media maker interested in progressive politics, technology, and culture. The Melt Age is a place to share thoughts outside of paywalls and trackers. You can reach me at: info@themeltage.com.
