TikTok - Coda Story (https://www.codastory.com/tag/tiktok/)

Blocking Pornhub and the death of the World Wide Web
https://www.codastory.com/authoritarian-tech/blocking-pornhub-and-the-death-of-the-world-wide-web/
Fri, 24 Jan 2025 13:07:51 +0000

The construction of digital walls, as governments exert more control over access to information, is changing the nature of the once global internet.

It's time to acknowledge an uncomfortable truth. The internet, as we've known it for the last 15 years, is breaking apart. This is not just true in the sense of, say, China or North Korea not having access to Western services and apps. Across the planet, more and more nations are drawing clear lines of sovereignty between their internet and everyone else's. Which means it's time to finally ask ourselves an even more uncomfortable question: what happens when the World Wide Web is no longer worldwide?

Over the last few weeks, the US has been thrown into a tailspin over the impending divest-or-ban law that may block the youth of America from accessing their favorite short-form video app. But if you've only been following the Supreme Court's hearing on TikTok, you may have missed an entirely separate Supreme Court hearing on whether southern states like Texas are constitutionally allowed to restrict porn sites like Pornhub. As of this month, Pornhub is inaccessible in 17 US states, having cut off access rather than comply with "age-verification laws" that would force it to collect users' IDs before they can browse the site, making sensitive personal information vulnerable to security breaches.

But it's not just US lawmakers who are questioning what's allowed on their corner of the web. 

Following a recent announcement that Meta would be relaxing its fact-checking standards, Brazilian regulators demanded a thorough explanation of how the change would impact the country's 100 million users. The Brazilian government is currently "seriously concerned" about these changes. It's almost a verbatim repeat of how Brazilian authorities dealt with X last year, when they banned the platform for almost two months over its handling of misinformation about the country's 2023 attempted coup.

Speaking of X, the European Union seems to have finally had enough of Elon Musk's digital megaphone. It has been investigating the platform since 2023 and has given Musk a February deadline to explain exactly how its algorithm works. To say nothing of the French and German regulators grappling with how to deal with Musk's interference in their national politics.

And though the Chinese Great Firewall has always walled the country's internet users off from the rest of the world, last week there was a breach that Chinese regulators are desperately trying to patch. With a TikTok ban looming, Americans migrated to a competing Chinese app called RedNote, which has now caught the attention of lawmakers in both countries: in China, where regulators are likely to wall off American users from interacting with Chinese users, and in the US, where legislators want to ban it once they finally deal with TikTok.

All of this has brought us to a stark new reality, where we can no longer assume that the internet is a shared global experience, at least when it comes to the web's most visible and mainstream apps. New digital borders are being drawn and they will eventually impact your favorite app. Whether you're an activist, a journalist, or even just a normal person hoping to waste some time on their phone (and maybe make a little money), the spaces you currently call home online are not permanent. 

Time to learn how a VPN works. At least until the authorities restrict and regulate access to VPNs too, as they already do in countries such as China, Iran, Russia and India. 

A version of this story was published in this week’s Coda Currents newsletter. Sign up here.

Silicon Savanna: The workers taking on Africa's digital sweatshops
https://www.codastory.com/authoritarian-tech/kenya-content-moderators/
Wed, 11 Oct 2023 11:11:00 +0000

Content moderators for TikTok, Meta and ChatGPT are demanding that tech companies reckon with the human toll of their enterprise.


This story was updated at 6:30 ET on October 16, 2023

Wabe didn’t expect to see his friends’ faces in the shadows. But it happened after just a few weeks on the job.

He had recently signed on with Sama, a San Francisco-based tech company with a major hub in Kenya's capital. The middleman company was providing the bulk of Facebook's content moderation services for Africa. Wabe, whose name we've changed to protect his safety, had previously taught science courses to university students in his native Ethiopia.

Now, the 27-year-old was reviewing hundreds of Facebook photos and videos each day to decide if they violated the company’s rules on issues ranging from hate speech to child exploitation. He would get between 60 and 70 seconds to make a determination, sifting through hundreds of pieces of content over an eight-hour shift.

One day in January 2022, the system flagged a video for him to review. He opened up a Facebook livestream of a macabre scene from the civil war in his home country. What he saw next was dozens of Ethiopians being “slaughtered like sheep,” he said. 

Then Wabe took a closer look at their faces and gasped. “They were people I grew up with,” he said quietly. People he knew from home. “My friends.”

Wabe leapt from his chair and stared at the screen in disbelief. He felt the room close in around him. Panic rising, he asked his supervisor for a five-minute break. “You don’t get five minutes,” she snapped. He turned off his computer, walked off the floor, and beelined to a quiet area outside of the building, where he spent 20 minutes crying by himself.

Wabe had been building a life for himself in Kenya while back home, a civil war was raging, claiming the lives of an estimated 600,000 people from 2020 to 2022. Now he was seeing it play out live on the screen before him.

That video was only the beginning. Over the next year, the job brought him into contact with videos he still can’t shake: recordings of people being beheaded, burned alive, eaten.

“The word evil is not equal to what we saw,” he said. 

Yet he had to stay in the job. Pay was low — less than two dollars an hour, Wabe told me — but going back to Ethiopia, where he had been tortured and imprisoned, was out of the question. Wabe worked with dozens of other migrants and refugees from other parts of Africa who faced similar circumstances. Money was too tight — and life too uncertain — to speak out or turn down the work. So he and his colleagues kept their heads down and steeled themselves each day for the deluge of terrifying images.

Over time, Wabe began to see moderators as “soldiers in disguise” — a low-paid workforce toiling in the shadows to make Facebook usable for billions of people around the world. But he also noted a grim irony in the role he and his colleagues played for the platform’s users: “Everybody is safe because of us,” he said. “But we are not.”  

Wabe said dozens of his former colleagues in Sama’s Nairobi offices now suffer from post-traumatic stress disorder. Wabe has also struggled with thoughts of suicide. “Every time I go somewhere high, I think: What would happen if I jump?” he wondered aloud. “We have been ruined. We were the ones protecting the whole continent of Africa. That’s why we were treated like slaves.”

The West End Towers house the Nairobi offices of Majorel, a Luxembourg-based content moderation firm with over 22,000 employees on the African continent.

To most people using the internet — most of the world — this kind of work is literally invisible. Yet it is a foundational component of the Big Tech business model. If social media sites were flooded with videos of murder and sexual assault, most people would steer clear of them — and so would the advertisers that bring the companies billions in revenue.

Around the world, an estimated 100,000 people work for companies like Sama, third-party contractors that supply content moderation services for the likes of Facebook’s parent company Meta, Google and TikTok. But while it happens at a desk, mostly on a screen, the demands and conditions of this work are brutal. Current and former moderators I met in Nairobi in July told me this work has left them with post-traumatic stress disorder, depression, insomnia and thoughts of suicide.

These “soldiers in disguise” are reaching a breaking point. Because of people like Wabe, Kenya has become ground zero in a battle over the future of content moderation in Africa and beyond. On one side are some of the most powerful and profitable tech companies on earth. On the other are young African content moderators who are stepping out from behind their screens and demanding that Big Tech companies reckon with the human toll of their enterprise.

In May, more than 150 moderators in Kenya, who keep the worst of the worst off of platforms like Facebook, TikTok and ChatGPT, announced their drive to create a trade union for content moderators across Africa. The union would be the first of its kind on the continent and potentially in the world.

There are also major pending lawsuits before Kenya’s courts targeting Meta and Sama. More than 180 content moderators — including Wabe — are suing Meta for $1.6 billion over poor working conditions, low pay and what they allege was unfair dismissal after Sama ended its content moderation agreement with Meta and Majorel picked up the contract instead. The plaintiffs say they were blacklisted from reapplying for their jobs after Majorel stepped in. In August, a judge ordered both parties to settle the case out of court, but the mediation broke down on October 16 after the plaintiffs' attorneys accused Meta of scuttling the negotiations and ignoring moderators' requests for mental health services and compensation. The lawsuit will now proceed to Kenya's employment and labor relations court, with an upcoming hearing scheduled for October 31.

The cases against Meta are unprecedented. According to Amnesty International, it is the “first time that Meta Platforms Inc will be significantly subjected to a court of law in the global south.” Forthcoming court rulings could jeopardize Meta’s status in Kenya and the content moderation outsourcing model upon which it has built its global empire. 

Meta did not respond to requests for comment about moderators’ working conditions and pay in Kenya. In an emailed statement, a spokesperson for Sama said the company cannot comment on ongoing litigation but is “pleased to be in mediation” and believes “it is in the best interest of all parties to come to an amicable resolution.”

Odanga Madung, a Kenya-based journalist and a fellow at the Mozilla Foundation, believes the flurry of litigation and organizing marks a turning point in the country’s tech labor trajectory. 

“This is the tech industry’s sweatshop moment,” Madung said. “Every big corporate industry here — oil and gas, the fashion industry, the cosmetics industry — have at one point come under very sharp scrutiny for the reputation of extractive, very colonial type practices.”

Nairobi may soon witness a major shift in the labor economics of content moderation. But it also offers a case study of this industry’s powerful rise. The vast capital city — sometimes called “Silicon Savanna” — has become a hub for outsourced content moderation jobs, drawing workers from across the continent to review material in their native languages. An educated, predominantly English-speaking workforce makes it easy for employers from overseas to set up satellite offices in Kenya. And the country’s troubled economy has left workers desperate for jobs, even when wages are low.

Sameer Business Park, a massive office compound in Nairobi’s industrial zone, is home to Nissan, the Bank of Africa, and Sama’s local headquarters. But just a few miles away lies one of Nairobi’s largest informal settlements, a sprawl of homes made out of scraps of wood and corrugated tin. The slum’s origins date back to the colonial era, when the land it sits on was a farm owned by white settlers. In the 1960s, after independence, the surrounding area became an industrial district, attracting migrants and factory workers who set up makeshift housing on the area adjacent to Sameer Business Park.

For companies like Sama, the conditions here were ripe for investment by 2015, when the firm established a business presence in Nairobi. Headquartered in San Francisco, the self-described “ethical AI” company aims to “provide individuals from marginalized communities with training and connections to dignified digital work.” In Nairobi, it has drawn its labor from residents of the city’s informal settlements, including 500 workers from Kibera, one of the largest slums in Africa. In an email, a Sama spokesperson confirmed moderators in Kenya made between $1.46 and $3.74 per hour after taxes.

Grace Mutung’u, a Nairobi-based digital rights researcher at Open Society Foundations, put this into local context for me. On the surface, working for a place like Sama seemed like a huge step up for young people from the slums, many of whom had family roots in factory work. It was less physically demanding and more lucrative. Compared to manual labor, content moderation “looked very dignified,” Mutung’u said. She recalled speaking with newly hired moderators at an informal settlement near the company’s headquarters. Unlike their parents, many of them were high school graduates, thanks to a government initiative in the mid-2000s to get more kids in school.

“These kids were just telling me how being hired by Sama was the dream come true,” Mutung’u told me. “We are getting proper jobs, our education matters.” These younger workers, Mutung’u continued, “thought: ‘We made it in life.’” They thought they had left behind the poverty and grinding jobs that wore down their parents’ bodies. Until, she added, “the mental health issues started eating them up.” 

Today, 97% of Sama’s workforce is based in Africa, according to a company spokesperson. And despite its stated commitment to providing “dignified” jobs, it has caught criticism for keeping wages low. In 2018, the company’s late founder argued against raising wages for impoverished workers from the slum, reasoning that it would “distort local labor markets” and have “a potentially negative impact on the cost of housing, the cost of food in the communities in which our workers thrive.”

Content moderation did not become an industry unto itself by accident. In the early days of social media, when “don’t be evil” was still Google’s main guiding principle and Facebook was still cheekily aspiring to connect the world, this work was performed by employees in-house for the Big Tech platforms. But as companies aspired to grander scales, seeking users in hundreds of markets across the globe, it became clear that their internal systems couldn’t stem the tide of violent, hateful and pornographic content flooding people’s newsfeeds. So they took a page from multinational corporations’ globalization playbook: They decided to outsource the labor.

More than a decade on, content moderation is now an industry that is projected to reach $40 billion by 2032. Sarah T. Roberts, a professor of information studies at the University of California at Los Angeles, wrote the definitive study on the moderation industry in her 2019 book “Behind the Screen.” Roberts estimates that hundreds of companies are farming out these services worldwide, employing upwards of 100,000 moderators. In its own transparency documents, Meta says that more than 15,000 people moderate its content in more than 20 sites around the world. Some (it doesn’t say how many) are full-time employees of the social media giant, while others (it doesn’t say how many) work for the company’s contracting partners.

Kauna Malgwi was once a moderator with Sama in Nairobi. She was tasked with reviewing content on Facebook in her native language, Hausa. She recalled watching coworkers scream, faint and develop panic attacks on the office floor as images flashed across their screens. Originally from Nigeria, Malgwi took a job with Sama in 2019, after coming to Nairobi to study psychology. She told me she also signed a nondisclosure agreement instructing her that she would face legal consequences if she told anyone she was reviewing content on Facebook. Malgwi was confused by the agreement, but moved forward anyway. She was in graduate school and needed the money.

A 28-year-old moderator named Johanna described a similar decline in her mental health after watching TikTok videos of rape, child sexual abuse, and even a woman ending her life in front of her own children. Johanna currently works with the outsourcing firm Majorel, reviewing content on TikTok, and asked that we identify her using a pseudonym, for fear of retaliation by her employer. She told me she’s extroverted by nature, but after a few months at Majorel, she became withdrawn and stopped hanging out with her friends. Now, she dissociates to get through the day at work. “You become a different person,” she told me. “I’m numb.”

This is not the experience that the Luxembourg-based multinational, which employs more than 22,000 people across the African continent, touts in its recruitment materials. On a page about its content moderation services, Majorel's website features a photo of a woman wearing a pair of headphones and laughing. It highlights the company's "Feel Good" program, which focuses on "team member wellbeing and resiliency support."

These resources include 24/7 psychological support for employees "together with a comprehensive suite of health and well-being initiatives that receive high praise from our people," Karsten König, an executive vice president at Majorel, said in an emailed statement. "We know that providing a safe and supportive working environment for our content moderators is the key to delivering excellent services for our clients and their customers. And that's what we strive to do every day."

But Majorel’s mental health resources haven’t helped ease Johanna’s depression and anxiety. She says the company provides moderators in her Nairobi office with on-site therapists who see employees in individual and group “wellness” sessions. But Johanna told me she stopped attending the individual sessions after her manager approached her about a topic she had shared in confidence with her therapist. “They told me it was a safe space,” Johanna explained, “but I feel that they breached that part of the confidentiality, so I do not do individual therapy.” TikTok did not respond to a request for comment by publication.

Instead, she looked for other ways to make herself feel better. Nature has been especially healing. Whenever she can, Johanna takes herself to Karura Forest, a lush oasis in the heart of Nairobi. One afternoon, she brought me to one of her favorite spots there, a crashing waterfall beneath a canopy of trees. This is where she tries to forget about the images that keep her up at night. 

Johanna remains haunted by a video she reviewed out of Tanzania, where she saw a lesbian couple attacked by a mob, stripped naked and beaten. She thought of them again and again for months. “I wondered: ‘How are they? Are they dead right now?’” At night, she would lie awake in her bed, replaying the scene in her mind.

“I couldn’t sleep, thinking about those women.”

Johanna’s experience lays bare another stark reality of this work. She was powerless to help victims. Yes, she could remove the video in question, but she couldn’t do anything to bring the women who were brutalized to safety. This is a common scenario for content moderators like Johanna, who are not only seeing these horrors in real-time, but are asked to simply remove them from the internet and, by extension, perhaps, from public record. Did the victims get help? Were the perpetrators brought to justice? With the endless flood of videos and images waiting for review, questions like these almost always go unanswered.

The situation that Johanna encountered highlights what David Kaye, a professor of law at the University of California at Irvine and the former United Nations special rapporteur on freedom of expression, believes is one of the platforms’ major blindspots: “They enter into spaces and countries where they have very little connection to the culture, the context and the policing,” without considering the myriad ways their products could be used to hurt people. When platforms introduce new features like livestreaming or new tools to amplify content, Kaye continued, “are they thinking through how to do that in a way that doesn’t cause harm?”

The question is a good one. For years, Meta CEO Mark Zuckerberg famously urged his employees to “move fast and break things,” an approach that doesn’t leave much room for the kind of contextual nuance that Kaye advocates. And history has shown the real-world consequences of social media companies’ failures to think through how their platforms might be used to foment violence in countries in conflict.

The most searing example came from Myanmar in 2017, when Meta famously looked the other way as military leaders used Facebook to incite hatred and violence against Rohingya Muslims as they ran “clearance operations” that left an estimated 24,000 Rohingya people dead and caused more than a million to flee the country. A U.N. fact-finding mission later wrote that Facebook had a “determining role” in the genocide. After commissioning an independent assessment of Facebook’s impact in Myanmar, Meta itself acknowledged that the company didn’t do “enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”

Yet five years later, another case now before Kenya’s high court deals with the same issue on a different continent. Last year, Meta was sued by a group of petitioners including the family of Meareg Amare Abrha, an Ethiopian chemistry professor who was assassinated in 2021 after people used Facebook to orchestrate his killing. Amare’s son tried desperately to get the company to take down the posts calling for his father’s head, to no avail. He is now part of the suit that accuses Meta of amplifying hateful and malicious content during the conflict in Tigray, including the posts that called for Amare’s killing.

The case underlines the strange distance between Big Tech behemoths and the content moderation industry that they’ve created offshore, where the stakes of moderation decisions can be life or death. Paul Barrett, the deputy director of the Center for Business and Human Rights at New York University's Stern School of Business who authored a seminal 2020 report on the issue, believes this distance helped corporate leadership preserve their image of a shiny, frictionless world of tech. Social media was meant to be about abundant free speech, connecting with friends and posting pictures from happy hour — not street riots or civil war or child abuse.

“This is a very nitty gritty thing, sifting through content and making decisions,” Barrett told me. “They don't really want to touch it or be in proximity to it. So holding this whole thing at arm’s length as a psychological or corporate culture matter is also part of this picture.”

Sarah T. Roberts likened content moderation to “a dirty little secret. It’s been something that people in positions of power within the companies wish could just go away,” Roberts said. This reluctance to deal with the messy realities of human behavior online is evident today, even in statements from leading figures in the industry. For example, with the July launch of Threads, Meta’s new Twitter-like social platform, Instagram head Adam Mosseri expressed a desire to keep “politics and hard news” off the platform.

The decision to outsource content moderation meant that this part of what happened on social media platforms would “be treated at arm’s length and without that type of oversight and scrutiny that it needs,” Barrett said. But the decision had collateral damage. In pursuit of mass scale, Meta and its counterparts created a system that produces an impossible amount of material to oversee. By some estimates, three million items of content are reported on Facebook alone on a daily basis. And despite what some of Silicon Valley’s other biggest names tell us, artificial intelligence systems are insufficient moderators. So it falls on real people to do the work.

One morning in late July, James Oyange, a former tech worker, took me on a driving tour of Nairobi’s content moderation hubs. Oyange, who goes by Mojez, is lanky and gregarious, quick to offer a high five and a custom-made quip. We pulled up outside a high-rise building in Westlands, a bustling central neighborhood near Nairobi’s business district. Mojez pointed up to the sixth floor: Majorel’s local office, where he worked for nine months, until he was let go.

He spent much of his year in this building. Pay was bad and hours were long, and it wasn’t the customer service job he’d expected when he first signed on — this is something he brought up with managers early on. But the 26-year-old grew to feel a sense of duty about the work. He saw the job as the online version of a first responder — an essential worker in the social media era, cleaning up hazardous waste on the internet. But being the first to the scene of the digital wreckage changed Mojez, too — the way he looks, the way he sleeps, and even his life’s direction.

That morning, as we sipped coffee in a trendy, high-ceilinged cafe in Westlands, I asked how he’s holding it together. “Compared to some of the other moderators I talked to, you seem like you’re doing okay,” I remarked. “Are you?”

His days often started bleary-eyed. When insomnia got the best of him, he would force himself to go running under the pitch-black sky, circling his neighborhood for 30 minutes and then stretching in his room as the darkness lifted. At dawn, he would ride the bus to work, snaking through Nairobi’s famously congested roads until he arrived at Majorel’s offices. A food market down the street offered some moments of relief from the daily grind. Mojez would steal away there for a snack or lunch. His vendor of choice doled out tortillas stuffed with sausage. He was often so exhausted by the end of the day that he nodded off on the bus ride home.

And then, in April 2023, Majorel told him that his contract wouldn’t be renewed.

It was a blow. Mojez walked into the meeting fantasizing about a promotion. He left without a job. He believes he was blacklisted by company management for speaking up about moderators’ low pay and working conditions.

A few weeks later, an old colleague put him in touch with Foxglove, a U.K.-based legal nonprofit supporting the lawsuit currently in mediation against Meta. The organization also helped organize the May meeting in which more than 150 African content moderators across platforms voted to unionize.

At the event, Mojez was stunned by the universality of the challenges facing moderators working elsewhere. He realized: “This is not a Mojez issue. These are 150 people across all social media companies. This is a major issue that is affecting a lot of people.” After that, despite being unemployed, he was all in on the union drive. Mojez, who studied international relations in college, hopes to do policy work on tech and data protection someday. But right now his goal is to see the effort through, all the way to the union’s registry with Kenya’s labor department.

Mojez’s friend in the Big Tech fight, Wabe, also went to the May meeting. Over lunch one afternoon in Nairobi in July, he described what it was like to open up about his experiences  publicly for the first time. “I was happy,” he told me. “I realized I was not alone.” This awareness has made him more confident about fighting “to make sure that the content moderators in Africa are treated like humans, not trash,” he explained. He then pulled up a pant leg and pointed to a mark on his calf, a scar from when he was imprisoned and tortured in Ethiopia. The companies, he said, “think that you are weak. They don’t know who you are, what you went through.”

A popular lunch spot for workers outside Majorel's offices.

Looking at Kenya’s economic woes, you can see why these jobs were so alluring. My visit to Nairobi coincided with a string of July protests that paralyzed the city. The day I flew in, it was unclear if I would be able to make it from the airport to my hotel — roads, businesses and public transit were threatening to shut down in anticipation of the unrest. The demonstrations, which have been bubbling up every so often since last March, came in response to steep new tax hikes, but they were also about the broader state of Kenya’s faltering economy — soaring food and gas prices and a youth unemployment crisis, some of the same forces that drive throngs of young workers to work for outsourcing companies and keep them there.

Leah Kimathi, a co-founder of the Kenyan nonprofit Council for Responsible Social Media, believes Meta’s legal defense in the labor case brought by the moderators betrays Big Tech’s neo-colonial approach to business in Kenya. When the petitioners first filed suit, Meta tried to absolve itself by claiming that it could not be brought to trial in Kenya, since it has no physical offices there and did not directly employ the moderators, who worked for Sama. But a Kenyan labor court saw it differently, ruling in June that Meta, not Sama, was the moderators’ primary employer and that the case against the company could move forward.

“So you can come here, roll out your product in a very exploitative way, disregarding our laws, and we cannot hold you accountable,” Kimathi said of Meta’s legal argument. “Because guess what? I am above your laws. That was the exact colonial logic.”

Kimathi continued: “For us, sitting in the Global South, but also in Africa, we’re looking at this from a historical perspective. Energetic young Africans are being targeted for content moderation and they come out of it maimed for life. This is reminiscent of slavery. It’s just now we’ve moved from the farms to offices.”

As Kimathi sees it, the multinational tech firms and their outsourcing partners made one big, potentially fatal miscalculation when they set up shop in Kenya: They didn’t anticipate a workers’ revolt. If they had considered the country’s history, perhaps they would have seen the writing of the African Content Moderator’s Union on the wall.

Kenya has a rich history of worker organizing in resistance to the colonial state. The labor movement was “a critical pillar of the anti-colonial struggle,” Kimathi explained to me. She and other critics of Big Tech’s operations in Kenya see a line that leads from colonial-era labor exploitation and worker organizing to the present day, one the Big Tech platforms and their outsourcers may have overlooked when they decided to do business in the country.

“They thought that they would come in and establish this very exploitative industry and Kenyans wouldn’t push back,” she said. Instead, they sued.

What happens if the workers actually win?

Foxglove, the nonprofit supporting the moderators’ legal challenge against Meta, writes that the outcome of the case could disrupt the global content moderation outsourcing model. If the court finds that Meta is the “‘true employer’ of their content moderators in the eyes of the law,” Foxglove argues, “then they cannot hide behind middlemen like Sama or Majorel. It will be their responsibility, at last, to value and protect the workers who protect social media — and who have made tech executives their billions.”

But there is still a long road ahead, for the moderators themselves and for the kinds of changes to the global moderation industry that they are hoping to achieve.

In Kenya, the workers involved in the lawsuit and union face practical challenges. Some, like Mojez, are unemployed and running out of money. Others are migrant workers from elsewhere on the continent who may not be able to stay in Kenya for the duration of the lawsuit or union fight.

The Moderator’s Union is not yet registered with Kenya’s labor office, but if it becomes official, its members intend to push for better conditions for moderators working across platforms in Kenya, including higher salaries and more psychological support for the trauma endured on the job. And their ambitions extend far beyond Kenya. The network hopes to inspire similar actions in other countries’ content moderation hubs. According to Martha Dark, Foxglove’s co-founder and director, the industry’s working conditions have spawned a cross-border, cross-company organizing effort, drawing employees from Africa, Europe and the U.S.

“There are content moderators that are coming together from Poland, America, Kenya, and Germany talking about what the challenges are that they experience when trying to organize in the context of working for Big Tech companies like Facebook and TikTok,” she explained.

Still, there are big questions about whether litigation alone can transform the moderation industry. “It would be good if outsourced content reviewers earned better pay and were better treated,” NYU’s Paul Barrett told me. “But that doesn’t get at the issue that the mother companies here, whether it’s Meta or anybody else, is not hiring these people, is not directly training these people and is not directly supervising these people.” Even if the Kenyan workers are victorious in their lawsuit against Meta, and the company is stung in court, “litigation is still litigation,” Barrett explained. “It’s not the restructuring of an industry.”

So what would truly fix the moderation industry’s core problem? For Barrett, the industry will only see meaningful change if companies can bring “more, if not all of this function in-house.”

But Sarah T. Roberts, who interviewed workers from Silicon Valley to the Philippines for her book on the global moderation industry, believes collective bargaining is the only pathway forward for changing the conditions of the work. She dedicated the end of her book to the promise of organized labor.

“The only hope is for workers to push back,” she told me. “At some point, people get pushed too far. And the ownership class always underestimates it. Why does Big Tech want everything to be computational in content moderation? Because AI tools don’t go on strike. They don’t talk to reporters.”

Artificial intelligence is part of the content moderation industry, but it will probably never be capable of replacing human moderators altogether. What we do know is that AI models will continue to rely on human beings to train and oversee their data sets — a reality Sama’s CEO recently acknowledged. For now and the foreseeable future, there will still be people behind the screen, fueling the engines of the world’s biggest tech platforms. But because of people like Wabe and Mojez and Kauna, their work is becoming more visible to the rest of us.

While writing this piece, I kept returning to one scene from my trip to Nairobi that powerfully drove home the raw humanity at the base of this entire industry, powering the whole system, as much as the tech scions might like to pretend otherwise. I was in the food court of a mall, sitting with Malgwi and Wabe. They were both dressed sharply, like they were on break from the office: Malgwi in a trim pink dress and a blazer, Wabe in leather boots and a peacoat. Instead, they talked about how the work had ruined them.

At one point in the conversation, Wabe told me he was willing to show me a few examples of violent videos he snuck out while working for Sama and later shared with his attorney. If I wanted to understand “exactly what we see and moderate on the platform,” Wabe explained, the opportunity was right in front of me. All I had to do was say yes.

I hesitated. I was genuinely curious. A part of me wanted to know, wanted to see first-hand what he had to deal with for more than a year. But I’m sensitive, maybe a little breakable. A lifelong insomniac. Could I handle seeing this stuff? Would I ever sleep again?

It was a decision I didn’t have to make. Malgwi intervened. “Don’t send it to her,” she told Wabe. “It will traumatize her.”

So much of this story, I realized, came down to this minute-long exchange. I didn’t want to see the videos because I was afraid of how they might affect me. Malgwi made sure I didn’t have to. She already knew what was on the other side of the screen.

Why did we write this story?

The world’s biggest tech companies today have more power and money than many governments. This story offers a deep dive on court battles in Kenya that could jeopardize the outsourcing model upon which Meta has built its global empire.

The post Silicon Savanna: The workers taking on Africa’s digital sweatshops appeared first on Coda Story.

Meta cozies up to Vietnam, censorship demands and all https://www.codastory.com/authoritarian-tech/vietnam-censorship-facebook/ Thu, 28 Sep 2023 15:25:58 +0000 https://www.codastory.com/?p=46764 U.S. social media companies have become indispensable partners in Vietnam's information control regime

When Vietnamese Prime Minister Pham Minh Chinh and his delegation visited Meta's Menlo Park headquarters in California last week, they were welcomed with a board reminiscent of Facebook’s desktop interface.

"What's on your mind?" it read at the top. Beneath the standard status update prompt were a series of messages written in Vietnamese that extended a warm welcome to the prime minister, underscoring the collaboration between his government and the social media giant. Sunny statements reportedly dominated the meeting, in which the two sides rhapsodized about bolstering their partnership.

Prime Minister Chinh highlighted the instrumental role American companies, Meta in particular, could play in unlocking the potential of the Comprehensive Strategic Partnership that the U.S. and Vietnam cemented in mid-September. He encouraged Meta to deepen its ties with Vietnamese firms to boost the digital economy. Joel Kaplan, Meta’s vice president for U.S. public policy, indicated willingness to support Vietnamese businesses of all sizes, adding that the company hopes to continue producing “metaverse equipment” in the country.

The warm aura of the meeting obscured an uncomfortable reality for Meta on the other side of the Pacific: It has become increasingly enmeshed in the Vietnamese government's draconian online censorship regime. In a country whose leaders once frowned upon it, Facebook has seen its relationship with the Vietnamese government morph from one of animosity to an unlikely alliance of convenience. No small feat for the social media giant.

Facebook has long been the most popular social media platform in Vietnam. Today, over 70% of Vietnam’s total population of nearly 100 million people use it for content sharing, business operations and messaging.

For years, Facebook’s approach to content policy in Vietnam appeared to be one of caution, in which the company brought some adherence to free speech principles to decision-making when it was faced with censorship demands from the government. But in 2020, it shifted to one of near-guaranteed compliance with official demands, at least in the eyes of Vietnamese authorities. It was in that year that the Vietnamese government claimed that the company went from approving 70% to 75% of censorship requests from the authorities to a staggering 95%. Since then, Vietnamese officials have maintained that Facebook's compliance rate is upwards of 90%.

Meta’s deference to Vietnam’s official line continues today. Last June, an article in the Washington Post quoted two former employees who, speaking on the condition of anonymity, said that Facebook had taken on an internal list of Vietnam Communist Party officials who it agreed to shield from criticism on its platform. The undisclosed list is included in the company’s internal guidelines for moderating online content, with Vietnamese authorities holding significant sway over it, the Post reported. While the Post did not cite the names of the Vietnamese officials on the list, it noted that Vietnam is the only country in East Asia for which Facebook provides this type of white-glove treatment.

Also in June, the government instructed cross-border social platforms to employ artificial intelligence models capable of automatically detecting and removing “toxic” content. A month earlier, in the name of curbing online scams, the authorities said they were gearing up to enforce a requirement that all social media users, whether on local or foreign platforms, verify their identities.

These back-to-back developments are emblematic of the Vietnamese government’s growing confidence in asserting its authority over Big Tech.

Facebook's corporate headquarters in Menlo Park, California. Josh Edelson/AFP via Getty Images.

How has Vietnam reached this critical juncture? Two key factors seem to account for why Vietnamese authorities are able to boss around Big Tech.

The first is Vietnam’s economic lure. Vietnam's internet economy is one of the most rapidly expanding markets in Southeast Asia. According to a report by Google and Singapore's Temasek Holdings, Vietnam's digital economy hit $23 billion in 2022 and is projected to reach approximately $50 billion by 2025, with growth fueled primarily by a thriving e-commerce sector. 

Dangling access to a market of nearly 100 million people, Vietnamese authorities have become increasingly adept at exploiting their economic leverage to browbeat Big Tech companies into compliance. Facebook's 70 million users aside, DataReportal estimates that YouTube has 63 million users and TikTok around 50 million in Vietnam.

Although free speech principles were foundational for major American social media platforms, it may be naive to expect them to adhere to any express ideological value proposition at this stage. Above all else, they prioritize rapid growth, outpacing competitors and solidifying their foothold in online communication and commerce. At the end of the day, it is the companies’ bottom line that has dictated how Big Tech operates across borders.

Alongside market pressures, Vietnam has also gained leverage through its own legal framework. Big Tech companies have recognized that they need to adhere to local laws in the countries where they operate, and the Vietnamese government has capitalized on this, amping up its legal arsenal to tighten its grip on cyberspace, knowing full well that Facebook, along with YouTube and TikTok, will comply. Nowhere is this tactic more manifest than in the crackdown on what the authorities label as anti-state content. 

Over the past two decades, the crackdown on anti-state content has shaped the way Vietnamese authorities deployed various online censorship strategies, while also dictating how a raft of laws and regulations on internet controls were formulated and enforced. From Hanoi’s perspective, anti-state content can undermine national prestige, besmirch the reputation of the ruling Communist Party and slander and defame Vietnamese leaders.

There is one other major benefit that the government derives from the big platforms: it uses them to promote its own image. Like China, Vietnam has since 2017 deployed a 10,000-strong military cyber unit tasked with manipulating online discourse to enforce the Communist Party’s line. The modus operandi of Vietnam’s cyber troops has been to ensure “a healthy cyberspace” and protect the regime from “wrong,” “distorting,” or “false news,” all of which are in essence “anti-state” content in the view of the authorities.

And the biggest companies now readily comply. A majority of online posts that YouTube and Facebook have restricted or removed at the behest of Vietnamese authorities were related to “government criticism” or ones that “oppose the Communist Party and the Government of Vietnam,” according to transparency reports from Google and Facebook.

The latest data disclosed by Vietnam’s Ministry of Information and Communications indicates that censorship compliance rates by Facebook and YouTube both exceed 90%.

In this context, Southeast Asia provides a compelling case study. Notably, four of the 10 countries with the highest number of Facebook users worldwide are also in Southeast Asia: Indonesia, the Philippines, Vietnam and Thailand. Across the region, censorship requests have pervaded the social media landscape and redefined Big Tech-government relations. 

“Several governments in the region have onerous regulation that compels digital platforms to adhere to strict rules over what content is or isn’t allowed to be on the platform,” Kian Vesteinsson, an expert on technology and democracy at Freedom House, told me. “Companies that don’t comply with these rules may risk fines, criminal or civil liability, or even outright bans or blocks,” Vesteinsson said.

But a wholesale ban on any of the biggest social platforms feels highly improbable today. These companies have become indispensable partners in Vietnam’s online censorship regime, to the point that the threat of shutting them down is more of a brinkmanship tactic than a realistic option. In other words, they are too important to Vietnam to be shut down. And the entanglement goes both ways — for Facebook and Google, the Vietnamese market is too lucrative for them to back out or resist censorship demands.

Consider that when Vietnam threatened to block Facebook in 2020 over anti-government posts, the threat never materialized. And Facebook has largely met the demands of Vietnamese authorities ever since.

Last May, TikTok faced a similar threat. Vietnamese authorities launched a probe into TikTok's local operations, warning that any failure to comply with Vietnamese regulations could see the platform shown the door in this lucrative market. While the outcome of the inspection is pending and could be released at any time, there are already signs that TikTok, the only foreign social media platform to have set up shop in Vietnam, will do whatever it takes to get on the good side of Vietnamese authorities. In June, TikTok admitted wrongdoing in Vietnam and pledged to take corrective actions.

The fuss that Vietnamese authorities have made about both Facebook and TikTok has likely masked their real intent: to further strong-arm these platforms into becoming more compliant and answerable to Vietnamese censors. Judging by their playbook, Vietnamese authorities are likely to continue wielding the stick of shutdown as a pretext to tighten the grip on narratives online, fortify state controls on social media and solidify the government's increasing leverage over Big Tech.

Could a different kind of platform emerge in this milieu? Vietnam’s economy of scale would scarcely allow for this kind of development: The prospect of building a more robust domestic internet ecosystem that could elbow out Facebook or YouTube doesn’t really exist. Absent bigger political and economic changes, Hanoi will remain reliant on foreign tech platforms to curb dissent, gauge public sentiment, discover corrupt behavior by local officials and get out its own messages to its internet-savvy population.

The Albanian town that TikTok emptied https://www.codastory.com/authoritarian-tech/albania-tiktok-migration-uk/ Thu, 24 Aug 2023 15:28:36 +0000 https://www.codastory.com/?p=42467 “It’s like the boys have gone extinct,” say women in Kukes. They’ve all left for London, chasing dreams of fast cars and easy money sold on social media


The Albanian town that TikTok emptied

“I once had an idea in the back of my mind to leave this place and go abroad,” Besmir Billa told me earlier this year as we sipped tea in the town of Kukes, not far from Albania’s Accursed Mountains. “Of course, like everybody else, I’ve thought about it.”

The mountains rose up all around us like a great black wall. Across the valley, we could see a half-constructed, rusty bridge, suspended in mid-air. Above it stood an abandoned, blackened building that served during Albania’s 45-year period of communist rule as a state-run summer camp for workers on holiday. 

Since the fall of communism in 1991, Kukes has lost roughly half of its population. In recent years, thousands of young people — mostly boys and men — have rolled the dice and journeyed to England, often on small boats and without proper paperwork. 

Fifteen years ago, people would come to Kukes from all over the region for market day, where they would sell animals and produce. The streets once rang with their voices. Those who’ve lived in Kukes for decades remember it well. Nowadays, it’s much quieter.

Billa, 32, chose not to leave. He found a job in his hometown and stayed with his family. But for a person his age, he’s unusual.

You can feel the emptiness everywhere you go, he told me. “Doctors all go abroad. The restaurants are always looking for bartenders or waiters. If you want a plumber, you can’t find one.” Billa’s car broke down recently. Luckily, he loves fixing things himself — because it’s difficult to find a mechanic.

Besmir Billa playing a traditional Albanian instrument, called the cifteli, in Kukes.

All the while, there is a parallel reality playing out far from home, one that the people of Kukes see in glimpses on TikTok and Instagram. Their feeds show them a highly curated view of what their lives might look like if they left this place: good jobs, plenty of money, shopping at designer stores and riding around London in fast cars. 

In Kukes, by comparison, times are tough. Salaries are low, prices are rising every week and there are frequent power outages. Many families can barely afford to heat their homes or pay their rent. For young people growing up in the town, it’s difficult to persuade them that there’s a future here.

Three days before I met Billa, a gaggle of teenage boys chased a convoy of flashy cars down the street. A Ferrari, an Audi and a Mercedes had pulled into town, revving their engines and honking triumphantly. The videos were uploaded to TikTok, where they were viewed and reposted tens of thousands of times.

Behind the wheel were TikTok stars Dijonis Biba and Aleks Vishaj, on a victory lap around the remote region. They’re local heroes: They left Albania for the U.K. years ago, became influencers with hundreds of thousands of followers, and now they’re back, equipped with cars, money and notoriety.

Vishaj, dubbed the “King of TikTok” by the British tabloids, was reportedly convicted of robbery in the U.K. and deported in 2021. Biba, a rapper, made headlines in the British right-wing press the same year for posting instructions to YouTube on how to enter the U.K. with false documents. Police then found him working in a secret cannabis house in Coventry. He was eventually sentenced to 15 months in prison. 

The pair now travel the world, uploading TikTok videos of their high-end lifestyle: jet skiing in Dubai, hanging out in high-rise hotels, driving their Ferrari with the needle touching 300 kilometers per hour (180 mph) through the tunnel outside Kukes. 

Billa’s nephews, who are seven and 11, were keen to meet him and get a selfie when they came to town, like every other kid in Kukes. 

“Young people are so affected by these models, and they’re addicted to social media. Emigrants come back for a holiday, just for a few days, and it’s really hard for us,” Billa said. 

Billa is worried about his nephews, who are being exposed to luxury lifestyle videos from the U.K., which go against the values that he’s trying to teach them. They haven’t yet said they want to leave the country, but he’s afraid that they might start talking about it one day. “They show me how they want a really expensive car, or tell me they want to be social media influencers. It’s really hard for me to know what to say to them,” he said.

Billa feels like he’s fighting against an algorithm, trying to show his nephews that the lifestyle that the videos promote isn’t real. “I’m very concerned about it. There’s this emphasis for kids and teenagers to get rich quickly by emigrating. It’s ruining society. It’s a source of misinformation because it’s not real life. It’s just an illusion, to get likes and attention.”

And he knows that the TikTok videos that his nephews watch every day aren’t representative of what life is really like in the U.K. “They don’t tell the darker story,” he said.

The Gjallica mountains rise up around Kukes, one of the poorest cities in Europe.

In 2022, the number of people leaving Albania for the U.K. rose dramatically, as did the number seeking asylum, which reached around 16,000, more than triple the previous year. According to the Migration Observatory at the University of Oxford, one reason for the uptick in claims may be that Albanians who lack proper immigration status are more likely to be identified, leading them to claim asylum in order to delay being deported. But Albanians claiming asylum are also often victims of blood feuds — long-standing disputes between communities, often resulting in cycles of revenge — and viciously exploitative trafficking networks that threaten them and their families if they return to Albania.

By 2022, Albanian criminal gangs in Britain were in control of the country’s illegal marijuana-growing trade, taking over from Vietnamese gangs who had previously dominated the market. The U.K.’s lockdown — with its quiet streets and newly empty businesses and buildings — likely created the perfect conditions for setting up new cannabis farms all over the country. During lockdown, these gangs expanded production and needed an ever-growing labor force to tend the plants — growing them under high-wattage lamps, watering them and treating them with chemicals and fertilizers. So they started recruiting. 

Everyone in Kukes remembers it: The price of passage from Albania to the U.K. on a truck or small boat suddenly dropped when Covid-19 restrictions began to ease. Before the pandemic, smugglers typically charged 18,000 pounds (around $22,800) to take Albanians across the channel. But last year, posts started popping up on TikTok advertising knock-down prices to Britain starting at around 4,000 pounds (around $5,000). 

People in Kukes told me that even if they weren’t interested in being smuggled abroad, TikTok’s algorithm would feed them smuggling content — so while they were watching other unrelated videos, suddenly an anonymous post advertising cheap passage to the U.K. would appear on their “For You” feed.

TikTok became an important recruitment tool. Videos advertising “Black Friday sales” offered special discounts after Boris Johnson’s resignation, telling people to hurry before a new prime minister took office, or when the U.K. Home Office announced its policy to relocate migrants to Rwanda. People remember one post that even encouraged Albanians to come and pay their respects to Queen Elizabeth II when she died in September last year. There was a sense of urgency to the posts, motivating people to move to the U.K. while they still could, lest the opportunity slip away. 

The videos didn’t go into detail about what lay just beneath the surface. Criminal gangs offered to pay for people’s passage to Britain, on the condition they worked for them when they arrived. They were then typically forced to work on cannabis farms to pay off the money they owed, according to anti-human trafficking advocacy groups and the families that I met in Kukes. 

Elma Tushi, 17, in Kukes, Albania.

“I imagined my first steps in England to be so different,” said David, 33, who first left Albania for Britain in 2014 after years of struggling to find a steady job. He could barely support his son, then a toddler, or his mother, who was having health problems and couldn’t afford her medicine. He successfully made the trip across the channel by stowing away in a truck from northern France. 

He still remembers the frightened face of the Polish driver who discovered him hiding in the wheel well of the truck, having already reached the outskirts of London. David made his way into the city and slept rough for several weeks. “I looked at everyone walking by, sometimes recognizing Albanians in the crowd and asking them to buy me bread. I couldn’t believe what was happening to me.” 

He found himself half-hoping the police might catch him and send him home. “I was so desperate. But another part of me said to myself, ‘You went through all of these struggles, and now you’re going to give up?’”

David, who asked us to identify him with a pseudonym to protect his safety, found work in a car wash. He was paid 35 pounds (about $44) a day. “To me, it felt like a lot,” he said. “I concentrated on saving money every moment of the day, with every bite of food I took,” he told me, describing how he would live for three or four days on a tub of yogurt and a package of bread from the grocery chain Lidl, so that he could send money home to his family.

At the car wash, his boss told him to smile at the customers to earn tips. “That’s not something we’re used to in Albania,” he said. “I would give them the keys and try to smile, but it was like this fake, frozen, hard smile.”

Like David, many Albanians begin their lives in the U.K. by working in the shadow economy, often at car washes or construction sites where they’re paid in cash. While there, they can be targeted by criminal gangs with offers of more lucrative work in the drug trade. In recent years, gangs have funneled Albanian workers from the informal labor market into cannabis grow houses. 

David said he was careful to avoid the lure of gangsters. At the French border, someone recognized him as Albanian and approached, offering him a “lucky ticket” to England with free accommodation when he arrived. He knew what price he would have to pay — and ran. “You have to make deals with them and work for them,” he told me, “and then you get sucked into a criminal life forever.”

It’s a structure that traps people in a cycle of crime and debt: Once in the U.K., they have no documents and are at the mercy of their bosses, who threaten to report them to the police or turn them into the immigration authorities if they don’t do as they say. 

Gang leaders manipulate and intimidate their workers, said Anxhela Bruci, Albania coordinator at the anti-trafficking foundation Arise, who I met in Tirana, the Albanian capital. “They use deception, telling people, ‘You don’t have any documents, I’m going to report you to the police, I have evidence you have been working here.’ There’s that fear of going to prison and never seeing your family again.” 

Gangs, Bruci told me, will also make personal threats against the safety of their victims’ families. “They would say, ‘I'm going to kill your family. I'm going to kill your brother. I know where he lives.’ So you’re trapped, you’re not able to escape.”

She described how workers often aren’t allowed to leave the cannabis houses they’re working in, and are given no access to Wi-Fi or internet. Some are paid salaries of 600-800 pounds (about $760-$1,010) a month. Others, she added, are effectively bonded labor, working to pay back the money they owe for their passage to Britain. It’s a stark difference from the lavish lifestyles they were promised.

As for telling their friends and family back home about their situation, it’s all but impossible. “It becomes extremely dangerous to speak up,” said Bruci. Instead, once they do get online, they feel obliged to post a success story. “They want to be seen as brave. We still view the man as the savior of the family,” said Bruci, who is herself Albanian.

Bruci believes that some people posting on TikTok about their positive experience going to the U.K. could be “soldiers” for traffickers. “Some of them are also victims of modern slavery themselves and then they have to recruit people in order to get out of their own trafficking situation.”

As I was reporting this story, summer was just around the bend and open season for recruitment had begun. A quick search in Albanian on TikTok brought up a mass of new videos advertising crossings to the U.K. If you typed in “Angli” — Albanian for “England” — on TikTok, the top three videos that appeared all involved people making their way into the U.K. One was a post advertising cheap crossings, and the other two were Albanians recording videos of their journeys across the channel. After we flagged this to TikTok, those particular posts were removed. New posts, however, still pop up every day.

With the British government laser-focused on small boat crossings, and drones buzzing over the beaches of northern France, traveling by truck was being promoted at a reduced price of 3,000 pounds (about $3,800). And a new luxury option was also on offer — speedboat crossings from Belgium to Britain that cost around 10,000 pounds (about $12,650) per person.

Kevin Morgan, TikTok’s head of trust and safety for Africa, Europe and the Middle East, said the company has a “zero tolerance approach to human smuggling and trafficking,” and permanently bans offending accounts. TikTok told me it had Albanian-speaking moderators working for the platform, but would not specify how many. 

In March, TikTok announced a new policy as part of this zero-tolerance approach. The company said it would automatically redirect users who searched for particular keywords and phrases to anti-trafficking sites. In June, the U.K.’s Border Force told the Times that they believed TikTok’s controls had helped lower the number of small boat crossings into Britain. But some videos used deliberate typos to get around TikTok’s controls. As recently as mid-August, a search on TikTok brought up a video with a menu of options to enter Britain — via truck, plane or dinghy.

In Kukes, residents follow British immigration policy with the same zeal as they do TikTok videos from Britain. They trade stories and anecdotes about their friends, brothers and husbands. Though their TikTok feeds rarely show the reality of life in London, some young people in Kukes know all is not as it seems.

“The conditions are very miserable, they don’t eat very well, they don’t wash their clothes, they don’t have much time to live their lives,” said Evis Zeneli, 26, as we scrolled through TikTok videos posted by her friends in the U.K., showing a constant stream of designer shopping trips to Gucci, Chanel and Louis Vuitton.

It’s the same for a 19-year-old woman I met whose former classmate left last year. Going by his social media posts, life looks great — all fast cars and piles of British banknotes. But during private conversations, they talk about how difficult his life really is. The videos don’t show it, she told me, but he is working in a cannabis grow house. 

“He’s not feeling very happy. Because he doesn’t have papers, he’s obliged to work in this illegal way. But he says life is still better over there than it is here,” she said.

“It’s like the boys have gone extinct,” she added. At her local park, which used to be a hangout spot for teenagers, she only sees old people now.

Albiona Thaçi, 33, at home with her daughter.

“There’s this huge silence,” agreed Albiona Thaçi, 33, whose husband traveled to the U.K. nine months ago in a small boat. When he left, she brought her two daughters to the seaside to try to take their minds off the terrifying journey that their father had undertaken. Traveling across the English Channel in a fragile dinghy, he dropped his phone in the water, and they didn’t hear from him for days. “Everything went black,” Thaçi said. Eventually, her husband called from the U.K., having arrived safely. But she still doesn’t know when she’ll see him again. 

In her 12-apartment building, all the men have left. “Now we have this very communal feeling. Before, we used to knock on each others’ doors. Now, we just walk in and out.” But Thaçi’s friends have noticed that when they get together for coffee in the mornings, she’s often checked out of their conversation. “My heart, my mind, is in England,” she said. She plans to join her husband if he can get papers for her and their daughters. 

The absence of men hangs over everything. In the village of Shishtavec, in the mountains above Kukes, five women crowded around the television one afternoon when I visited. It was spring, but it still felt like winter. They were streaming a YouTube video of dozens of men from their village, all doing a traditional dance at a wedding — in London. 

Adelie Molla and her aunt Resmije Molla watch television in Shishtavec.

“They’re doing the dance of men,” said Adelie Molla, 22. She had just come in from the cold, having collected water from the well up by the town mosque. The women told me that the weather had been mild this year. “The winter has gone to England,” laughed Molla’s mother Yaldeze, 53, whose son left for the U.K. seven months ago. Many people in their village have Bulgarian heritage, meaning they can apply for European passports and travel to Britain by plane, without needing to resort to small boats.

The whole family plans to eventually migrate to Britain and reunite. “For better or worse I have to follow my children,” said Yaldeze, who has lived in the village her whole life. She doesn’t speak a word of English. “I’m going to be like a bird in a cage.” 

Around the town, some buildings are falling into disrepair while others are half-finished, their empty window frames covered in plastic sheeting. A few houses look brand new, but the windows are dark. Adelie explained that once people go to the U.K., they use the money they make there to build houses in their villages. The houses lie empty, except when the emigrants come back to visit. And when they do, they drive so that they can show off cars with U.K. license plates — proof they’ve made it. 

“This village is emptying out,” Molla said, describing the profound boredom that had overtaken her life. “Maybe after five years, no one will be here at all anymore. They’ll all be in London.”

The old city of Kukes was submerged beneath a reservoir when Albania’s communist regime built a hydropower dam in the 1970s.

The oldest settlements of Kukes date back to the fourth century. In the 1960s, when Albania’s communist government decided to build a hydropower dam, the residents of Kukes all had to leave their homes and relocate further up the mountain to build a new city, while the ancient city was flooded beneath an enormous reservoir. And in the early 1970s, under Enver Hoxha’s paranoid communist regime, an urban planner was tasked with building an underground version of Kukes, where 10,000 people could live in bunkers for six months in the event of an invasion. A vast network of tunnels still lies beneath the city today. 

“Really, there are three Kukeses,” one local man told me: the Kukes where we were walking around, the subterranean Kukes beneath our feet, and the Kukes underwater. But even the Kukes of today is a shadow of its former self, a town buried in the memories of the few residents who remain.

View of a street in Kukes, Albania.

David was deported from Britain in 2019 after police stopped him at a London train station. He tried to return to the U.K. in December 2022 by hiding in a truck but couldn’t get past the high-tech, high-security border in northern France. He is now back in Kukes, struggling to find work. 

He wanted me to know he was a patriotic person who, given the chance to have a good life, would live in Albania forever. But, he added, “You don’t understand how much I miss England. I talk in English, I sing in English, I cook English food, and I don’t want my soul to depart this earth without going one more time to England.”

He still watches social media reels of Albanians living in the U.K. “Some people get lucky and get rich. But when you see it on TikTok or Instagram, it might not even be real.” 

Besmir Billa, whose nephews worry him with their TikTok aspirations, has set himself a challenge. He showed me his own TikTok account, which he started last summer.

The grid is full of videos showcasing the beauty of Kukes: clips of his friends walking through velvety green mountains, picking flowers and petting wild horses. “I’m testing myself to see if TikTok can be used for a good thing,” he told me. 

“The idea I had is to express something valuable, not something silly. I think this is something people actually need,” he said. During the spring festival, a national holiday in Albania when the whole country pours onto the streets to celebrate the end of winter, he posted a video showing young people in the town giving flowers to older residents. 

At first, his nephews were “not impressed” by their uncle’s page. But then, the older boy clocked the total number of views on the spring festival video: 40,000 and counting. 

 

The post The Albanian town that TikTok emptied appeared first on Coda Story.

How TikTok influencers exploit ethnic divisions in Ethiopia https://www.codastory.com/authoritarian-tech/tktok-ethiopia-ethnic-conflict/ Wed, 14 Jun 2023 13:29:06 +0000 https://www.codastory.com/?p=44479 Social media influencers in Africa’s second-largest country are helping to stoke conflict – and making money along the way

When Ethiopians took to the streets in February in reaction to a highly politicized rift within the country’s Orthodox Tewahedo Church, government authorities temporarily blocked social media platforms. On the outside, it may have seemed like just another blunt force measure by an authoritarian state trying to quell social unrest. But the move was more keenly calculated than that — the rhetoric of social media influencers was having an outsized impact on how Ethiopians, both in the country and in Ethiopia’s politically influential diaspora, perceived what was happening. As in other moments of intense social conflict amid Ethiopia’s civil war, TikTok became ground zero for much of the conflict playing out online.

In early February, three archbishops of the Orthodox Tewahedo Church — one of the oldest churches in Africa that dates back to the 4th century — accused fellow church leaders of discriminating against the Oromo people, who constitute the largest ethnic group in Ethiopia’s population of 120 million. While church members come from a diverse array of ethnic backgrounds, worship services are predominantly conducted in the liturgical language of Ge’ez and in Amharic, which is a language primarily spoken by the Amhara people. Amharic is the dominant language of Addis Ababa, Ethiopia’s capital, and the working language of the federal government. This linguistic predilection underlines the cultural clout of Amharic. 

After the three archbishops — all of Oromo lineage — made their allegations of discrimination public, they were excommunicated by church authorities. They then declared their plans to form a breakaway synod, triggering an instant public outcry. The cleavages underlying Ethiopia’s civil conflict bubbled to the surface and devolved into violent skirmishes, resulting in a combined total of 30 fatalities in the southern Ethiopian town of Shashemene and in Addis Ababa.

But what was a serious political crisis for the church and for the country amounted to a prime opportunity for TikTok influencers seeking to spread their messages and turn a profit along the way.

A quick scroll through live sessions on TikTok reveals heated political discussions in Amharic, Oromo and Tigrinya, in which participants exchange barbs and strategize on how to confront their adversaries. Zemedkun Bekele is prominent among them. A self-proclaimed defender of the Orthodox Tewahedo Church, he is known for his forceful, admonitory videos that are often over an hour long. Bekele began broadcasting threats against the breakaway synod, claiming to have video evidence that its leaders had engaged in homosexual activity and threatening to release the tape to the public. Accusations like this resonate deeply in a nation steeped in conservatism, where homosexuality is viewed with considerable disdain.

A known social media influencer who had already been banned from both Facebook and YouTube in 2020 for violating their policies on hate speech and the promotion of violence, Bekele re-established himself on TikTok in February 2023, just in time to jump into the fray. Since then, he has amassed a dedicated audience of more than 203,000 TikTok followers, most of whom appear to be members of the Amhara ethnic group and followers of the Orthodox Tewahedo Church. 

In the midst of the crisis, Bekele also launched attacks against a senior church teacher, Daniel Kibret, who has become a staunch ally of Prime Minister Abiy Ahmed. Drawing on the fact that the prime minister comes from a mixed religious background (his father is Muslim and his mother is Christian), Bekele made unfounded claims that Kibret had secretly converted to Islam.

In a video with more than 19,000 views, Bekele maintained that he would not relent. “We will not back down without making sure the Ethiopian Orthodox Church is as big as the country itself. We will not back down without toppling Abiy Ahmed,” he said. “We will not back down without hanging Daniel Kibret upside down.” The video was posted on February 4, the same day that the three bishops declared their intentions to secede from the Church.

Another account called TegOromo also saw swift growth surrounding the church controversy. TegOromo has a passionate following and is on the opposite side of the conflict from Bekele. The person who runs the account has expressed support for the Oromo religious leaders who sought to establish the independent synod. The account's moniker fuses the first three letters of “Tegaru” with “Oromo” — a calculated move to represent harmony between the Tigrayan and Oromo ethnic groups.

With more than 60,000 followers, TegOromo’s account is marked by overt threats, inflammatory language and aggressive rhetoric. One TikTok video urged supporters to “chop the Amharas like an onion.” This video was later removed from the platform, but copies of it remain accessible. In a live session, a TegOromo follower called on Oromo people to “kill all Amharas” and even specified that children should not be spared. TegOromo cheered him on, urging other followers to answer the call and take up arms.  

Despite the controversial nature of TegOromo’s content, the influencer's popularity suggests a burgeoning trend. Republishing his material or circulating incisive and satirical clips featuring TegOromo has become a reliable strategy for Ethiopian content creators seeking higher engagement.

In another instance, the spotlight turned toward two emerging TikTok influencers, Dalacha and Betayoo, who garnered attention for their adept use of vitriol. In one video, Dalacha, who identifies as Oromo, launched a barrage of insults and sexual slurs at his rival TikToker, Maareg, who identifies as Amhara. The episode exemplified the depths to which Dalacha was willing to stoop to denigrate Maareg: he used language that reduced the Amhara community to mere cattle, intended only to amplify the prevailing animosity between the two ethnic groups.

In another video, Betayoo, who consistently identifies as Amhara, used similarly troubling language, employing both sexual and ethnic slurs. She directed her insults toward a rival TikToker who identifies as Tigrayan and who has publicly expressed disdain for the Amhara community. Betayoo's actions escalated beyond targeting an individual. She proceeded to insult the entire Tigrayan community, expressing a desire for their eradication. 

Left: Zemedkun Bekele and his co-host celebrate achieving 60 million views. Right: A screenshot from TegOromo's live session, subtitled in Amharic, in which he calls on his Oromo kin to eliminate the Amharas.

The videos I reference above all contain clear violations of TikTok’s terms of service, yet they remain on the platform. TikTok’s Community Guidelines strictly prohibit hate speech or hateful behavior and promise to remove such material from the platform. Accounts that engage in severe or repeated violations of hate speech policies are supposed to be promptly banned. Despite these guidelines, plenty of Ethiopians who have exhibited hateful behavior remain active on the platform and continue to produce content for significant numbers of followers.

When I approached TikTok staff members to alert them about the videos and ask them to comment for this piece, they did not respond. It is difficult to definitively prove that this kind of discourse directly contributes to violence on the ground. But it is clear that discussions of political violence and religious conflicts on TikTok often result in the spread of misinformation and amplify interethnic hatred. Clips containing these influencers’ offensive remarks have also seeped onto other platforms, such as YouTube and Facebook, where reposting or critiquing such content has become a low-effort method for content creators to gain engagement.

Given the sheer volume of such live streams, TikTok's moderation team may be overwhelmed, struggling to monitor these discussions and remove inappropriate content. It is also worth noting that all of these accounts are run primarily in Amharic, Oromo or Tigrinya, languages that are spoken by millions of Ethiopians in and outside of the country but that have historically been underrepresented on major social media platforms. TikTok does not publicly disclose how many staff members or content moderators it employs for reviewing content in these languages.

All this engagement is not driven purely by political vitriol — it is also a pursuit of profit. The TikTok LIVE feature has seen a swift uptick in popularity among Ethiopian users, catalyzing an emergence of politically-minded influencers who reap economic rewards through virtual gifts. These gifts can be converted into TikTok “diamonds,” which are in turn redeemable for actual cash.

Crafting politically-charged clickbait, designed to fan the flames of ethnic and religious discord, is emerging as a common tactic for financial gain. It has had especially strong uptake among individuals in the Ethiopian diaspora. Many of the most impactful Ethiopian TikTok figures are actually located in Western nations. Zemedkun Bekele, for instance, lives in Germany.

Amid the ongoing crisis, Bekele proudly claimed to have received one of the most sought-after TikTok LIVE gifts — the lion, which translates to a little over $400 in real-world currency. He has prominently featured a video on his profile displaying a virtual lion roaring at the screen, serving as both a symbol of his influence and a testament to the economic gains that one can reap through this kind of engagement on TikTok.

In a 2021 essay, former New York Times media critic Ben Smith showed how TikTok's algorithmic recommendation framework has helped to intensify cultural, linguistic and ideological divides among its global user base. The unfolding situation in Ethiopia could serve as a case study for Smith’s argument. With the videos I described — in addition to hundreds of others — the platform's content dissemination strategy appears to inadvertently encourage distinct factions to isolate themselves and push each other to commit hate speech and even physical violence.

The rise of these online strongholds poses significant challenges to the inclusive, cross-cultural understanding that TikTok claims to want to foster. Users now risk becoming trapped in ideological echo chambers, detached from diverse perspectives and viewpoints, and increasingly vulnerable to politically-motivated disinformation.

At the core of the issue lies the question of accountability. What obligation does TikTok, and by extension other social media platforms, have to curtail the spread of divisive content, particularly when it is financially incentivized? And moreover, could the pursuit of profit from politically-charged content inadvertently pave the way for more extreme or hazardous content, potentially triggering threats of violence in real life?

In the end, for onlookers familiar with Ethiopian culture and politics, it is clear that the platforms that invite us to share our lives online are failing to mediate the complexities of the world they seek to engage with.

Can the West curb its addiction to Chinese tech? https://www.codastory.com/authoritarian-tech/chinese-tech-tiktok-ban/ Thu, 06 Apr 2023 13:22:40 +0000 https://www.codastory.com/?p=42327 A U.S. ban on TikTok could open the floodgates for sanctions on any technology made in China. But that’s easier said than done

Calls to ban TikTok are growing ever louder in Washington, D.C. The sensational video-sharing app is in the pockets of 150 million Americans, a figure that explains, at least in part, the grilling of TikTok CEO Shou Zi Chew by U.S. lawmakers at a House Energy and Commerce committee hearing on March 23. Committee chair Cathy McMorris Rodgers said that the social media platform is a “weapon by the Chinese Communist Party to spy on you, manipulate what you see and exploit for future generations.”

While it’s true that ByteDance, TikTok’s Beijing-based parent company, is beholden to Chinese law, there’s no publicly available evidence that the Chinese government itself has spied on people in the U.S. through TikTok. Last year, ByteDance was found to have tracked the IP addresses of journalists covering the company, though the employees who took part in this effort were fired soon after the news came to light. 

TikTok has been under intense regulatory scrutiny since 2019, with policymakers especially concerned about where and how it stores data belonging to U.S. users. As of now, the company says, that data sits in its data centers in Virginia and Singapore. However, it’s also working on “Project Texas”: a $1.5 billion plan to route all U.S. user data through the servers of the Austin-based computing giant Oracle, managed through a separate entity called TikTok U.S. Data Security.

At the hearing, Chew said this would act as a “firewall” to protect U.S. user data. But legislators appear unconvinced. Their security concerns have led to a place where U.S. sanctions on TikTok seem like a real possibility. Legal experts caution that a ban might not have legs, due to free speech protections under the First Amendment. But even if it doesn’t stand up in court, restrictions on the app could have significant ripple effects for other China-owned companies, both in the U.S. and other Western countries.

Political posturing by legislators like McMorris Rodgers makes it look as though Washington is ready to ban all sorts of Chinese technology from U.S. territory. But this is easier said than done. Chinese technology has become deeply embedded in the West, whether in stand-alone products like Hikvision security cameras or in the individual parts used to make networked technologies, ranging from mobile phones to smart speakers.

The case of Huawei might offer some clues. The Chinese telecommunications infrastructure firm was effectively banned by the Trump administration in 2019 over fears of espionage. While the Shenzhen-based company was initially allowed to continue developing 5G networks in Britain, this was swiftly overturned following analysis by the U.K. government’s National Cyber Security Centre — and amid the trade war between the U.S. and China emerging at the time. 

“Many in the U.K. said, ‘Don’t worry, Huawei isn’t a problem,’” explained Sam Olsen, the CEO of the Evenstar Institute, a London-based business strategy and geopolitical think tank. “But the actions of the U.S. created a business case for U.K. companies to move away from Huawei and Chinese technology more broadly.”

So far, many of the U.K.’s bans on Chinese technology have been restricted to government property: TikTok has been banned on civil servants’ work devices and government departments are barred from installing Hikvision surveillance cameras at sensitive sites.

But Olsen believes more sanctions are on the way. He cites cellular components of networked devices commonplace in the home and the office: smart thermostats, refrigerators and connected security systems, often referred to as the Internet of Things, some of which capture copious amounts of user data by design. Key parts of these products are made in China by companies like Quectel and Fibocom. Most have geolocation capabilities. “By their nature, these devices contain sensitive data: a car that’s parked outside a government location could potentially reveal an agent’s identity and behavioral patterns,” he said.

Chinese technologies that can track cross-border metrics are in the firing line, too. China’s dominance in global trade means it possesses immense logistics data. For example, LOGINK, a Chinese transport and logistics platform, links shippers internationally on a closed platform. “The software is integrated into China’s 93 ports in 53 countries, meaning it has access to the visibility of global supply chains,” said Olsen. “From a national security angle, China could divert sensitive cargo thanks to the platform's data.”

Likewise, Beijing-based Nuctech builds security scanning technology used at EU borders and major international sporting events. The U.S. Transportation Security Administration banned the equipment in American airports in 2014, but most Western allies are yet to follow. “The technology is scanning everything going in and out of Europe, giving China huge data to work from,” said Olsen.

According to Olsen, cheap manufacturing and a market of 1.5 billion people meant it initially made business and political sense for Western allies to cozy up to China. However, he says, there has been a collective naivete about the longer-term goals of a country that “never wanted to settle as a bit player in a Western feudal system.”

Now, under Xi Jinping’s leadership, the “wolf warrior diplomacy” style of figures like Zhao Lijian, an ex-foreign ministry spokesman, is on full display: coercion, confrontation and conflict as China asserts its status as a true global superpower. After decades of investment in Chinese technology for its high quality and cheap price, some political forces in Western nations want to backtrack. But this may force them to choose between market pressures and national security interests.

“U.K. government enthusiasm for Chinese investment and trade partnerships has, for 20 years, rubbed up against cybersecurity concerns,” said Tim Stevens, a reader in global security at King’s College London. “But now, the cybersecurity aspects of Chinese state-firm relations are too serious for any government or firm to ignore: There are few, if any, restrictions on what data the Chinese government can obtain from ‘private’ companies, and no independent oversight of those relationships.”

While sanctions on Chinese technology ramp up, a complete disentanglement is unlikely, if not impossible. Chinese tech isn’t only embedded in Western devices — but in its economies, too. Olsen says one way of easing reliance on China is the Biden administration’s CHIPS and Science Act: the $280 billion investment to boost semiconductor manufacturing in the U.S. “If that ran its course, the U.S. would be able to export to its allies, gain self-sufficiency and encourage the likes of Apple to diversify its tech away from China.”

It’s unclear if we’re entering another Cold War, says Richard Harknett, a professor of political science and the director of the School of Public and International Affairs at the University of Cincinnati. However, the nature of technology and cybersecurity means the devices we all use can be leveraged by global superpowers in the name of espionage: digital frontlines in which a live shot is never fired. “Unauthorized data collection, data manipulation and manipulating computer networks are the ways superpowers are trying to gain advantages against each other,” Harknett said. “We’re in a new phase of strategic competition across all countries. Cyberspace is allowing states to use digital means to undermine each other’s economies, militaries and trust in government.”

Through that lens, National Security Agency Director Paul Nakasone’s comparison of TikTok to a “loaded gun” makes sense from a U.S. geopolitical perspective. Olsen, however, says the platform has become more of a totem for the current freezing of U.S.-China relations. “It’s a symbol of Chinese data and influence in the West,” he said.

The occupational hazards of cleaning the internet https://www.codastory.com/authoritarian-tech/reddit-content-moderation-lawsuit/ Tue, 28 Feb 2023 13:34:41 +0000 https://www.codastory.com/?p=40761 A new lawsuit against tech giant Reddit underscores the global struggle of content moderators

Somebody has to get rid of it. Live streams of mass shootings, videos of children being beaten and graphic images of bestiality are all too easy to find on the internet, even though most people do not want to look at or even think about such stuff. It is little wonder that social media platforms employ armies of people to review and remove this material from their networks.

Maya Amerson is one of these people. Amerson began working as a content moderator for Reddit in 2018, reviewing violent and disturbing content and identifying posts that violated the company’s terms of use. After three years on the job, she began suffering from panic attacks and symptoms of post-traumatic stress disorder. She repeatedly sought support from her employer but didn’t get it. Now, she is taking Reddit to court.

Currently pending before the San Francisco Superior Court, Amerson’s lawsuit alleges that the Reddit management ignored her requests to move to a different position with less exposure to this kind of material, even after she returned from a 10-week medical leave following her PTSD diagnosis. The lawsuit also claims that Amerson’s supervisors belittled her after she came back to the office. This led to her resignation in September 2022. She filed the suit three months later. When we asked Reddit for its side of the story, we were told that we could attribute the following to a company spokesperson: “At Reddit, our employees' well-being is a top priority and we offer a robust set of resources for content moderators.”

The allegations in Amerson’s lawsuit are specific to Reddit, a social media giant valued at $6.6 billion that has garnered praise for its unique and decentralized approach to content moderation. But they tell a story that goes well beyond Reddit. A growing number of legal actions are taking aim at tech platforms over their treatment of content moderators and the psychological hazards of the work. These cases highlight the tension inherent in a job that cannot yet be automated and is routinely exported to low-wage contractors overseas.

They also raise an essential and unresolved set of questions: Is vicarious trauma inevitable in the work of content moderation? Or can the tech giants who hire scores of moderators around the world do more to meet the psychological needs of the workforce? If so, are any of the major platforms doing it right?

A brief history of the “front page of the internet”

Founded in 2005 as the aspirational “front page of the internet,” Reddit allows people to create discussion groups, known as subreddits, on topics that interest them. Each of the over 100,000 subreddits across the platform has its own ethos and culture. And each subreddit has its own set of rules that are enforced in addition to the company’s own content policy.

What makes all this work? At the heart of Reddit lies an army of thousands of volunteers who carry out the grueling work of content moderation. Known on the platform as mods, these volunteers oversee each subreddit, removing content and banning accounts that violate their established community rules and norms. Some of the platform’s largest subreddits, like r/politics, have more than 1,000 volunteer mods. The company also employs a paid staff of moderators, like Amerson, to ensure that the content posted on the site does not run afoul of the law or of the company’s own content policy.

Across Reddit, it is these volunteer mods who conduct most of the content moderation on the site. According to Reddit’s most recent transparency report, volunteer moderators were behind 59% of all content removals in 2021, while paid content moderators handled most other removals. 

Reddit characterizes its moderation approach as “akin to a democracy, wherein everyone has the ability to vote and self-organize, follow a set of common rules, and ultimately share some responsibility for how the platform works” and maintains that its bottom-up system “continues to be the most scalable solution we’ve seen to the challenges of moderating content online.” 

In doing this work, however, volunteer mods are providing essential quality control services for Reddit without getting paid a dime. A 2022 study from Northwestern University attempted to quantify the labor value of Reddit’s estimated 21,500 volunteer moderators and concluded that it is worth at least $3.4 million to the company annually. 

While the volunteer moderators do their work for free, many are unaware of the monetary value of the labor they perform for the company. “We wanted to introduce some transparency into this transaction to show that volunteer moderators are doing a very valuable and complex job for Reddit,” said Hanlin Li, the co-author of the report.

Volunteer moderators often describe their work as a “labor of love,” Li explained, but they also have been forthcoming about the need for more institutional support to combat hate speech and harassment. “Over the years we’ve seen a lot of protests by moderators against Reddit, essentially because they thought that they were not supported adequately to do this volunteer job,” she explained. 

This has been a longstanding tension at Reddit, which developed a reputation early on for taking a laissez-faire approach to content moderation and for hosting subreddits rife with bigotry and hate speech. This came to a head with a misogynistic incel community that formed on the platform in 2013 and became ground zero for the 2014 Gamergate online harassment campaign that targeted women in the video game industry. The following year, when Reddit’s then-CEO Ellen Pao announced that the platform would ban five subreddits for harassment, she was inundated with online abuse. Shortly after, she resigned. The incel subreddit itself wasn’t banned until 2017.

Despite its unique reliance on a volunteer-driven content moderation model, the site has flown under the radar of researchers and reporters, generating less attention than other social networks. But the lawsuit could change this.

Lawsuits target tech giants over moderator trauma, mistreatment

Maya Amerson’s case against Reddit is a reminder that the company, despite its unique approach to moderation, is not immune to the same kinds of allegations of worker harm that have formed the basis of lawsuits against the world’s largest tech platforms. 

That includes TikTok, Facebook and YouTube. TikTok was sued by two former content moderators last spring: They alleged that the company failed to create a safe and supportive working environment as they reviewed disturbing material including videos of bestiality, necrophilia and violence against children. Facebook agreed to a $52-million settlement in 2020 to compensate former content moderators who claimed their work caused psychological harm. And YouTube recently settled a $4.3-million lawsuit filed by former moderators who said they developed mental health issues while on the job. 

These suits all focus on a relatively new labor force and speak to an industry-wide concern about the toll of moderators’ work. “Generally speaking, content moderation is kind of the new frontier,” Nicholas De Blouw, one of Amerson’s attorneys, told me. “In 1992, there wasn’t such a thing as content moderation. I think that’s the interesting aspect of these cases, is seeing how the law protects people that are in roles that never really existed.”

Many social media platforms have shifted away from employing moderators in-house and now outsource their jobs to third-party contractors. Vendors like Accenture, CPL and Majorel employ people all over the world, from India to Latvia to the Philippines, typically expecting fast turnaround times on content review but offering relatively low hourly wages in return. In lawsuits, employees allege they’ve been exploited, made to work in substandard conditions and denied adequate mental health support for the psychological effects of their work. 

After TIME magazine showed how moderators in Kenya — working for Sama, a subcontractor of Meta — were paid as little as $2.20 per hour to review violent material while operating in a “workplace culture characterized by trauma,” one worker took both Sama and Meta to court. Sama has since canceled its remaining contracts with Meta. A separate investigation by the Bureau of Investigative Journalism highlighted the plight of a “legion of traumatized” and underpaid Colombian moderators working under grueling conditions for a TikTok contractor. “You have to just work like a computer,” a moderator said of the job in Colombia. “Don’t say anything, don’t go to bed, don’t go to the restroom, don’t make a coffee, nothing.” 

Some social media researchers have suggested that eliminating the outsourced labor model could be a step toward improving conditions for workers. New York University professor Paul M. Barrett, who authored a 2020 report on outsourced content moderation, has called on companies to stop farming out the work and instead employ moderators in-house, where they would theoretically have more access to mental health support, supervision and proper training. 

But moving the job in-house alone won’t resolve all the alleged issues with this line of work, as Amerson’s lawsuit against Reddit makes clear. After all, she was not an outsourced moderator — she was a direct employee of the company. As Barrett explained, directly employing moderators does not guarantee that they will receive everything that employees in these roles need.

“There’s an irreducible aspect to this work that makes it hazardous,” he said. “If you’re doing this work, there’s a danger you’re going to run into very difficult, offensive, and unsettling content. You can’t really completely avoid that. The question is: How well-trained are you, how well-supervised are you, and are there options for you to get the kind of mental and emotional support that would seem to be common sense in connection with this kind of work?” 

Are any companies doing this well? Barrett said he was “unaware of a major platform doing content moderation well in terms of directly employing moderators who receive proper training, supervision and mental health support.”

What would incentivize platforms to implement these changes? While they could reform themselves, put an end to outsourced moderation and provide in-house employees with better benefits, compensation, oversight and training, there’s no reason to expect this to happen. Under Elon Musk’s leadership, Twitter has moved in the opposite direction, gutting content moderation teams. Layoffs at YouTube have left the company with just one employee overseeing global misinformation policy.

In a sector that shows no appetite for self-regulation, actual oversight may be the only solution. Barrett has proposed enhancing the consumer protection authority of the Federal Trade Commission to regulate the social media industry. Such an expansion could give the FTC the authority to ask companies outright about their content moderation practices and to verify whether they live up to their commitments to consumers outlined in their terms of service and community standards guidelines, Barrett explained.

“They make promises just like other companies,” he said. “My argument is that the FTC could ask: So how many moderators do you employ? Do you have enough people to get this done? How do they interact with the automated system? And is all of that adequate, really, to fulfill the promises you make?”

A policy shift like this also would require Congressional action, which is hard to imagine in our gridlocked era of governance. But without major changes to the industry, it’s also hard to imagine lawsuits like Amerson’s — and the concerns that undergird them — going away anytime soon.

The post The occupational hazards of cleaning the internet appeared first on Coda Story.

China is gaining control of the world’s data as the US stands by https://www.codastory.com/surveillance-and-control/data-trafficking-china-us-tiktok/ Thu, 17 Nov 2022 14:45:59 +0000 https://www.codastory.com/?p=36546 Global data trafficking presents security risks that most countries are not prepared to handle, Aynne Kokas argues in her new book

The post China is gaining control of the world’s data as the US stands by appeared first on Coda Story.

There came a point ten years ago when Aynne Kokas realized that she could no longer keep WeChat on her personal phone. She had begun research on what would eventually become her new book, “Trafficking Data: How China is Winning the Battle for Digital Sovereignty,” published this month. 

WeChat is an omnipresent Chinese messaging app, and Kokas, a media studies professor at the University of Virginia, needed it to talk to Chinese sources for her research. But, as Kokas told me, it soon became “a very meta experience.” To have WeChat on her personal phone meant that “you were subjecting yourself to precisely the type of surveillance that you were writing about.”

In the book, Kokas analyzes how Chinese firms and the Chinese government gather data on U.S. citizens for political and commercial gain, putting U.S. national security at risk. China is able to do this, Kokas points out, in part because the U.S. government does not have substantial regulations in place to protect users and their data.

“By tracing how China and the US have shaped the global movement of data, I hope this book empowers citizens around the world to navigate the complex terrain created by Silicon Valley, Washington, and Beijing,” she writes.

I recently spoke with Kokas on the phone. Our conversation has been edited for length and clarity. 

What do “digital sovereignty” and “data trafficking” mean in layman's terms? 

Digital sovereignty is the idea of control over a country’s digital resources. Digital sovereignty is something that we see in countries that are trying to protect their digital domain from oversight from other countries. The Chinese government has a more expansive vision called cyber sovereignty, which is that any digital space that a country touches should be part of their digital domain.

Data trafficking is the movement of data from one country to another without the consent of users and without their understanding of the implications of their data being moved between national data regimes. For example, if I sign up for TikTok here in the U.S. and I find out that my data has been accessed in another country, that would be data trafficking. 

My favorite line in the book is when you write, “Most people are simply not exciting intelligence targets.” So what are the implications of data trafficking for most Americans in their daily lives? 

People are afraid that they are individually going to be targeted, and there are some scary stories, but ultimately the more interesting data for the Chinese government and for Chinese firms is actually at scale. So while you might not personally be interesting, you plus all of your neighbors, or you plus all of the people in your state, yield really rich insights that can enable the tracking and mapping of a whole society.

And while most people aren’t that interesting, there are specific subgroups that face intensive targeting, like Hong Kong democracy activists, as well as Uyghur and Tibetan activists. 

I also think there are other layers that are significant. One is economic risk. U.S. companies can’t gather data in China the same way that Chinese companies can in the United States, and that creates a fundamental asymmetry in the development of the digital economy in ways that will have long-standing implications for the development of products. At a certain point, it’s not necessarily just about spying or surveillance. It’s about what types of products you can build.

The third issue is national security. These platforms are becoming essential in daily life and the functioning of society. For example, TikTok now functions as a form of critical communications infrastructure. Chinese firms have also become involved in gathering and using health data and agricultural data from the United States. If that breaks down or if the Chinese government decides to pull participation from these firms, which they can do, it leads to a fundamental destabilization of key areas in the U.S. and global economy — areas like communication, health, food production. 

That’s not a risk that I think most people want to take.

Do you think the United States is at fault for not better protecting user data? Or is China more at fault for taking advantage of those weaknesses? 

A lot of China’s ability to go into other countries and propose tech platforms that rapidly gather data builds on the fact that U.S.-based companies have already been there. A great example of this is TikTok being officially based in the Cayman Islands. This is a classic move by U.S. firms to escape U.S. government scrutiny. And TikTok adopted this, so while their headquarters are officially in Beijing, they’re domiciled in the Cayman Islands. The other thing that U.S. firms pioneered was a lack of algorithmic transparency. And that’s at the foundation of a lot of these business models from which many Chinese entrepreneurs learn to grow their businesses.

The first and most important thing the U.S. government should do is pass national data regulations that have actual enforcement requirements in place. But there are significant differences within the U.S. government about what is and is not acceptable in terms of government oversight over corporations, as well as oversight over data. And even if laws are passed, enforcement is still really challenging. 

You present these issues as being contested, but it seems that the U.S. isn’t putting up much of a fight. 

The title should be something like, “China is taking over the digital world, and the U.S. kind of agreed to it.” But people I interviewed in the U.S. government and tech corporations would argue that by not heavily regulating the U.S. digital landscape, U.S. platforms are able to grow and compete with China that way. The other aspect is this resistance to changing U.S. data governance policies because that would be “letting China win” by adopting too many aspects of the Chinese model. I don’t fully agree with that framework. 

You wrote that you felt a sense of urgency while working on the book. Why did you feel that way? 

A lot of people outside China haven’t experienced China’s digital control directly, so they don’t understand the seriousness of what it means for that model to be exported and how difficult it is to put the genie back in the bottle once it’s out.

TikTok influencers are dancing, lip-syncing, and posing to promote Russia’s war in Ukraine https://www.codastory.com/disinformation/tiktok-influencers-are-dancing-lip-syncing-and-posing-to-promote-russias-war-in-ukraine/ Fri, 01 Apr 2022 16:28:21 +0000 https://www.codastory.com/?p=31601 Despite TikTok’s ban on uploads in Russia, influencers are using it to spread pro-war propaganda. Others are debunking it

The post TikTok influencers are dancing, lip-syncing, and posing to promote Russia’s war in Ukraine appeared first on Coda Story.

From the first moments of Russia’s military invasion of Ukraine, local TikTok users have played a pivotal role in documenting the war, offering the world a glimpse of what is happening on the front lines. TikTok has had so much influence on the war that President Volodymyr Zelensky has called on TikTokers to help end it. A few weeks ago, the White House briefed top influencers about the war in Ukraine, in an effort to align their messaging with U.S. interests.

TikTok restricted its services in Russia in early March, citing Russia’s "anti-fake news" law, but many users are circumventing the restrictions all the same. And plenty of the platform’s one billion monthly users worldwide continue to comment and report on the war, while others are using the tool to spread related disinformation through commentary, dance challenges, and lip-syncing trends.

Here are some of the widespread trends that social media researchers have uncovered on TikTok:

1. In early March, U.S.-based media watchdog Media Matters published a report by researcher Abbie Richards identifying 180 TikTok users who had posted nearly identical videos showing a person kneeling while holding an English-language sign that condemns “Russophobia” and invokes “info wars.” Captions typically include the hashtags #RussianLivesMatter or #RLM.

Richards notes that the video captions also include strikingly similar typographical errors, indicating that they are part of a highly coordinated effort. Some gave themselves away: In what could only have been an error, some of the video captions included Russian-language instructions, such as “You can publish, description: Russian Lives Matter #RLM.”

2. Media Matters also spotted Russian influencers on TikTok making hand gestures to form the letter “Z” while doing a viral TikTok dance. Z has become a symbol of support for Russia’s military. In a more bizarre trend, young women posed for selfies by making the Z letter with their hands, proclaiming that this is how “real women” take selfies.

3. A report by VICE showed how Russian TikTok influencers have been recruited by an anonymous Telegram channel to post videos with pro-Kremlin messaging about the invasion, in exchange for payment. Operators of the Telegram channel instructed the TikTokers, some of whom have over a million followers, to justify the attack on Ukraine by defending their own people against the government in Kyiv. This aligns with Putin’s false narrative that the Ukrainian government has systematically targeted Russian-speaking people in the ongoing conflict between Russian-backed separatists and the Ukrainian government in the eastern Donbas region (Putin has even referred to this as a genocide).  TikTok influencer Yarra_M observed how people in dozens of these videos appear to be using the exact same script as one another, with some simply reading it from their phones.

https://twitter.com/mikegalsworthy/status/1500118408901365761?s=20&t=5MYhzoJhneH7Ce8zgFPyHw

4. Marieke Kuypers, a Dutch user who describes herself as an “unofficial TikTok fact-checker,” recently noticed TikTok users amplifying Putin’s rhetoric of justifying Ukraine’s invasion by pointing to NATO’s role in the breakup of Yugoslavia in the 1990s and its airstrikes in Kosovo in 1999. These actual events came in response to violence by Serbian forces against ethnic Albanians in Kosovo, following years of conflict over Kosovo’s attempts to secede from Serbia. But in the videos, TikTokers act out a dialogue between Russia and Ukraine, where Ukraine refuses to stop bombing Yugoslavia in 1999 (although this never actually happened) and then in 2022 the roles are reversed, with Ukraine pleading to stop the shelling and Russia refusing to do so.

5. In the days leading up to the invasion, when Russia recognized the eastern territories of Donetsk and Luhansk (both located in the Donbas region) as independent republics, more than 1,000 Russian TikTokers started posting videos that used a mirror effect. Videos featured people fist-bumping their own reflections, pretending to be “two brothers,” Donbas and Russia, and lip-syncing to a Russian song, “Brother for Brother.” The hashtags read: “We don’t leave our own behind,” and “We’re together,” in line with the Kremlin’s misleading message that people in the Donbas need to be saved from the “Nazi” Ukrainian government and that Russia will come to their rescue. 

The videos were first spotted by reporters for the popular independent Russian news site TJournal, which is now blocked in Russia, among other media outlets featuring opinions that dissent from the Kremlin narrative. Soon other TikTokers started using the same trend and filter to mock influencers who had “sold” their videos for government propaganda.

Millennial authoritarianism rises in Brazil as Bolsonaro takes on TikTok https://www.codastory.com/authoritarian-tech/millennial-authoritarianism/ Wed, 19 Jan 2022 14:08:17 +0000 https://www.codastory.com/?p=28204 With his poll numbers falling, President Jair Bolsonaro tries to overhaul the social media strategy that brought him to power

The post Millennial authoritarianism rises in Brazil as Bolsonaro takes on TikTok appeared first on Coda Story.

On June 19, the day Brazil hit 500,000 official Covid deaths, President Jair Bolsonaro posted a TikTok video where he rode a horse and saluted a crowd to the sound of “I Walk the Line” by Johnny Cash. 

There was barely a mask in sight. 

Bolsonaro’s TikTok audience is exploding. His following on the youth-dominated app grew by almost 50% in the past month alone, to more than 340,000 people. Bolsonaro tries to make authoritarianism look cool. On his TikTok profile, created last June, the populist, far-right president posts videos in which he goes on diplomatic missions, visits his mother, plays around with his staff, and engages in the traditional politics of hugging children and giving long motivational speeches. 

Bolsonaro is known as the “Tropical Trump.” Besides their similar governing styles, both leaders rode to power attacking the press as fake news and Big Tech for persecuting them. While Trump was in office, Bolsonaro made no secret of his admiration, and looked to the American for direction. Since Trump’s failure to win re-election, however, Bolsonaro has gone role model shopping. 

He has found what he’s looking for in the young men’s aisle. 

With elections coming up in October, Bolsonaro is adjusting his strategy to mimic the social media tactics of El Salvador’s Nayib Bukele, who calls himself the “world’s coolest dictator.” Salvadoran researcher Manuel Meléndez-Sánchez coined the term “millennial authoritarianism” to explain the rise to power of the 40-year-old Bukele.

Bolsonaro is 66 years old. Still, the term applies to him, too, argues Vitor Machado, a political researcher at the Federal University of Paraná, in southern Brazil. Millennial authoritarianism is a political strategy, says Machado, that encompasses authoritarian behavior, populist appeals, and a modern and youthful personal brand built mainly via social media. Bolsonaro has associated his online identity with his millennial sons –who are themselves politicians– while, says Machado, fine-tuning his social media discourse to resonate with millennials. 

Bolsonaro is 66 years old. Still, the term applies to him, too, argues Vitor Machado, a political researcher at the Federal University of Paraná, in southern Brazil. Millennial authoritarianism, says Machado, is a political strategy that encompasses authoritarian behavior, populist appeals, and a modern and youthful personal brand built mainly via social media. Bolsonaro has associated his online identity with his millennial sons, who are themselves politicians, while fine-tuning his social media discourse to resonate with millennials. 

Speaking the same language as young people has become a key tactic for many Latin American leaders regardless of ideological leanings — from leftists such as newly-elected Gabriel Boric in Chile to authoritarians such as Nicolás Maduro in Venezuela and Juan Orlando Hernández in Honduras. 

For Brazil, where Bolsonaro is widely viewed by political scientists as a threat to the future of democracy, the president’s ability to manipulate youth sentiment with his newfound social media hipness has radically changed the election calculus. 

“I see only three options: prison, death, or victory,” said Bolsonaro when questioned about the upcoming election during a meeting of religious leaders last September. More than once, the president has threatened a military coup if he loses his mandate. Though after recent confrontations with the Supreme Court — which is currently considering five criminal inquiries into the president — he has downplayed his threat. “Who never told a little lie to their girlfriend? If you didn’t, the night wouldn’t end well,” he said to the laughter of an audience of allies. 

From left to right: Jair Renan Bolsonaro, son 04; City councilor Carlos Bolsonaro, son 02; Senator Flavio Bolsonaro, son 01. Coda Story/Getty Images.

The Bolsonaro Family on TikTok 

A search for “Bolsonaro” on TikTok turns up dozens of related hashtags, including “bolsonaro2022” and its less popular counterpart “bolsonarocorrupt.” Posts with the “Bolsonaro” tag have, collectively, more than five billion views. And although TikTok has a delicate relationship with political content because of its moderation guidelines, Bolsonaro does not seem to be dividing opinions. 

The platform appears to be on his side: the first 15 hashtags that pop up are either positive or neutral. 

“Populist discourse is easy to understand and offers easy solutions,” says Veridiana Cordeiro, one of the lead researchers of Digital Sociology and Artificial Intelligence at the University of São Paulo. According to Cordeiro, millennials seek unconventional forms of political and civic engagement, and being active on social media is among them. “Flashy and performative posts are what bring adherence on social networks. Bolsonaro has managed to gain popularity with this type of political strategy.” 

Bolsonaro follows only four people on TikTok: Senator Flávio Bolsonaro, who joined the platform last May and is known in Brazil as “son 01”; city councilor Carlos Bolsonaro, who joined last October and is known as “son 02”; a Wolverine cosplayer; and a Brazilian magician. 

Congressman Eduardo Bolsonaro, “son 03”, does not have a TikTok profile and has even argued in favor of banning the app in Brazil.

Meanwhile, the president does not follow 23-year-old “son 04.” This is puzzling because Jair Renan was the first in his family to create a TikTok account, last March, and has the largest number of followers: almost 430,000. In his posts, he is an ardent proponent of his father’s politics.  

https://www.tiktok.com/@renanbolsonaro/video/6969194634903817477?_d=secCgwIARCbDRjEFSACKAESPgo8rcF%2F6xgRHjdr2lR77g1KIe4hxWC5KgEoZQiy5h17p6qjblYfmD2hmd7fFVFyShWBFrjMbT3h%2BpY9tiTPGgA%3D&checksum=6e864aa36d3b7aa0d93d6a88b3d588961bd48447447e893d2ae1c991424f3b3c&language=pt&preview_pb=0&sec_user_id=MS4wLjABAAAANq_zIh2jglL5JnXBk1vO2fGKgCYMbmmRvs0ykRL8W0OL4KSg7IJYeOzbdS47ncg5&share_app_id=1233&share_item_id=6969194634903817477&share_link_id=19ABF9F3-CA30-4A3F-B387-47D917F8183D&source=h5_m&timestamp=1639228000&tt_from=copy&u_code=dj846j85lb5l37&user_id=6976651076432413702&utm_campaign=client_share&utm_medium=ios&utm_source=copy&_r=1

In one video, he makes fun of products from China and criticizes their quality. In another, he appears in a shooting range, playing with rifles of different models.

“Not only Jair Renan, but the entire family fuels millennial authoritarianism,” explains Machado. Jair Renan’s half-brothers, Flávio and Carlos, also have their share of popular posts. Flávio regularly shows videos of Bolsonaro engaged in “cool activities” such as riding a sports car that belongs to the Federal Police and playing football with Arab sheiks. 

Brazil has 160 million social media users, more than any other non-Asian country except the United States. Brazilians also rank high in time spent on social media, at almost four hours a day, behind only the Philippines and Colombia, according to We Are Social. 

The amount of time that Brazilians spend on social media has helped Bolsonaro in the past. In 2018, the year he won the election, the Supreme Electoral Court gave him only 48 seconds per week of unpaid electoral advertisements on public radio and television. Bolsonaro was affiliated with the Social Liberal Party and, because of the party’s low representation rates, was granted less exposure time than his main opponents — leftist Fernando Haddad, from the Workers’ Party, and centrist Ciro Gomes, from the Democratic Labour Party. 

As a result of these disadvantages on broadcast media, Bolsonaro took his presidential campaign to social media and won. He now has accounts on conservative social networks such as Gettr and Parler. In fact, he is the only world leader active on both outlier platforms. 

These apps have grown rapidly in Brazil by promising a hands-off approach on censorship and the spread of misinformation. According to data company Sensor Tower, downloads of Gettr and Parler in Brazil are the second-highest of any country, just behind the United States.

Still, they are tiny compared to the number of Brazilians using Instagram, WhatsApp, TikTok, and the other major apps in Brazil. TikTok alone has almost 5 million users. Millennial authoritarianism, therefore, has become a crucial component of his re-election bid. 

This puts Bolsonaro in something of a vice, says Issaaf Karhawi, a researcher at the University of São Paulo who specializes in social media. While hostile to the biggest social media platforms, he depends on them to mainstream his online engagement. Karhawi says that Bolsonaro and his family have built a social media juggernaut around themselves — a community that started with 8 million followers and that now, four years later, amounts to over 42 million, almost twice as many as his five main potential opponents in the upcoming election.

Bolsonaro’s brand of politics comes at an opportune time for capturing Brazil’s youth vote. Research suggests that millennials are disillusioned with liberal democracy and increasingly open to non-democratic forms of government. “Unlike their parents who experienced an authoritarian regime, millennials grew up in a democratic government and find themselves politically disillusioned and disengaged,” argues Cordeiro, the digital sociology expert at the University of São Paulo, who says the absence of a living memory of military dictatorship is decisive in Brazil. 

Bolsonaro uses this “foggy memory” to push country-first, socially conservative, and ethnically majoritarian policies and posts, leveraging the divisiveness common to both social networks and populist politics. “If we continue to observe the prevalence of polarized attitudes among millennials, we can increasingly have fertile ground for populist policies,” Cordeiro said. 

Fake news as a strategy 

Discrediting legitimate media reports as “fake news” has been a central component of Bolsonaro’s administration. The president frequently encourages his supporters to follow him on his social media channels so he can bypass the press, control his image, and shape the political narrative around himself while disavowing democratic institutions.

He is also accused of spreading disinformation and misinformation. A Federal Police case is looking into the so-called “Office of Hate”: a pro-Bolsonaro online apparatus allegedly led by Bolsonaro’s sons and a group of young supporters committed to attacking government opponents and journalists. 

In Congress, legislators have tried to find solutions, presenting at least 45 bills aimed at curbing the spread of fake news. The proposed measures are diverse: some would allow users who share fake news to be prosecuted as criminals, while others would pressure tech platforms to ban Bolsonaro, his family, and his supporters — similar to what happened with Trump in early 2021. 

Aware of the possibility of losing his profiles on key social media channels, Bolsonaro is taking countermeasures of his own. In September 2021, he signed a decree forbidding social media platforms from banning users or taking down their content without a court order. It marked the first time social media companies had been stopped by a national government from taking down users’ content from their own platforms. 

The decree was ruled unconstitutional just a few days later but it set Bolsonaro on a path to use all tools and maneuvers at his disposal to protect himself and his allies on social media.

Karhawi also says that even though Bolsonaro has had a good run on social media, his strategy is dangerous. “When we see a president communicating almost exclusively on social media, we slowly observe a disavowal of democratic institutions, more specifically of the media, both traditional media and institutional or governmental media,” she says. “There is no such thing as an individual capable of embodying politics, the media and the truth.”

The physicians debunking the massive misinformation about women’s health https://www.codastory.com/disinformation/women-health/ Fri, 03 Dec 2021 12:49:17 +0000 https://www.codastory.com/?p=27242 From reproductive health to sex-ed, here are five medical specialists debunking myths

The post The physicians debunking the massive misinformation about women’s health appeared first on Coda Story.

The misinformation on reproductive and sexual health flooding social media has profound effects on young women, putting their physical and mental wellbeing under threat. It’s a code-red public health disaster, and it has prompted many doctors to take to social media to share accurate information and bust myths. Here are five physicians who talk facts about everything from menstrual health to contraception to fertility treatment.

1. Jennifer Lincoln, known to her over 2 million TikTok followers as @drjenniferlincoln, is a Portland, Oregon-based obstetrician-gynecologist. Lincoln’s short, humorous videos, grounded in scientific research, cover a wide range of health subjects, busting myths about period pain, pseudoscientific cures for vaginal infections, sexually transmitted infections, and the safety of Covid-19 vaccines. She also uses her platform to discuss pressing issues like the widespread inaccessibility of hygienic menstrual products, birth control, abortion, and how to be an OB-GYN whose practice is inclusive of people with different gender identities.

https://www.tiktok.com/@drjenniferlincoln/video/7026050585329716526?lang=en&is_copy_url=1&is_from_webapp=v1 

2. Alease Daniel, or @aleasetheembryologist on TikTok, is a Raleigh-based embryologist who has introduced her more than 124,000 TikTok followers to her IVF lab. IVF is a method of assisted reproduction in which sperm and eggs are combined outside the body in a laboratory dish. Millions of TikTok viewers have watched her work in the lab as she talks through procedures, from prepping dishes for IVF to retrieving eggs and counting sperm. She also uses her videos to debunk reproductive misconceptions. Daniel has told WRAL that she posts videos because fertility treatment can leave people feeling out of control, and knowing more about the process provides a little peace of mind.

https://www.tiktok.com/@aleasetheembryologist/video/6971150758913887494?lang=en&is_copy_url=1&is_from_webapp=v1 

3. Tanaya Narendra, @dr_cuterus on Instagram, is a gynecologist who uses her social media account to post videos and illustrations about reproductive health, safe sex, body positivity and the safety of human papillomavirus (HPV) vaccines, which protect against some strains of the virus that cause cervical cancer. Her posts in English and Hindi are short, funny and educational, like this video titled “Dude, where’s my vagina?” explaining the anatomy of the uterus using an anatomical model.

4. Ali Rodriguez, also known as The Latina Doc or @alirodmd on TikTok, uses her dancing TikTok videos to answer questions and clear up misconceptions about reproductive health in English and Spanish. In October, she told VerywellHealth that, as a Latina, she understands the stigma and secrecy surrounding reproductive health and contraception, and that her patients from the Latinx community are often exposed to misinformation, or a lack of information, about it.

https://www.tiktok.com/@alirodmd/video/7008157945628298501?referer_url=https%3A%2F%2Fwww.verywellhealth.com%2Fembed&referer_video_id=6965120978821090565&refer=embed&is_copy_url=0&is_from_webapp=v1&sender_device=pc&sender_web_id=7018896146479269377

5. Natalie Crawford, or @nataliecrawfordmd on TikTok, is a Texas-based obstetrician-gynecologist and fertility specialist. Since 2019 she’s been sharing fertility-related information on ovulation, reproductive health and diets. She’s also been posting informative videos about endometriosis, a long-term condition in which tissue that normally lines the inside of the uterus grows outside it, usually causing severe pain and sometimes other issues such as infertility. Endometriosis can be debilitating and can take years to diagnose and treat. Crawford also runs Instagram and YouTube accounts to share information in more depth than she can in TikTok videos lasting just seconds.

https://www.tiktok.com/@nataliecrawfordmd/video/6917728845567118597?lang=en&is_copy_url=1&is_from_webapp=v1 

Eco-anxiety and how to cope with it https://www.codastory.com/climate-crisis/eco-anxiety/ Tue, 09 Nov 2021 12:08:16 +0000 https://www.codastory.com/?p=26320 Around the world, fears about the fate of the environment are having profound effects on mental health — particularly among young people. We spoke to the therapists and influencers helping to tackle the problem

The post Eco-anxiety and how to cope with it appeared first on Coda Story.

With heatwaves, storms, floods and wildfires spreading across the world, the climate crisis is impossible to ignore. So are the concerns of many young people, whether they are protesting outside COP26 sessions or posting their frustrations on social media. In fact, there’s a whole new term for these ever-present worries.

Eco-anxiety is the term being used to describe a deep-seated fear of environmental meltdown now being experienced by a growing number of people. According to psychologists, it can have profound effects on mental health and is particularly prevalent among young people. 

In response, a growing number of mental health professionals are taking a “climate-aware” approach to the treatment of a range of conditions. Young people are also creating online communities to share their experiences and tips on how to deal with feelings of stress related to ecological issues.

As psychotherapist Caroline Hickman explained, the phenomenon “doesn't just stop with anxiety, it extends into depression, despair, frustration, guilt, grief, shame. It's a real combination of emotional responses.”

“It's not just what's happening to the planet,” added Hickman, who is a member of Climate Psychology Alliance and a lecturer at the University of Bath in the U.K. “What we also feel is frustration and abandonment and betrayal, because people in power are failing to act on science.”

Hickman has co-authored a global survey, led by the University of Bath, which will soon be published in the peer-reviewed medical journal The Lancet. Questioning 10,000 people aged between 16 and 25 in 10 countries, it found that more than half of respondents said that stress over climate change was affecting their daily lives. But some go even further than that. 

“One trend that's troubling is the tendency of what's been called doomism: the stance that it's too late, there's nothing we can do, that we're past the point of making a significant impact toward a healthier world,” said psychologist Leslie Davenport, who has written a book for young people titled “All the Feelings Under the Sun: How to Deal with Climate Change.”  

Far from considering it a mental illness, Davenport and Hickman believe that eco-anxiety is a rational and healthy response to what's going on in the world today, but that it should be channeled constructively rather than spilling over into nihilism. 

Fighting climate doomism is precisely what sustainability scientist Alaina Wood, 25, has been doing on TikTok for the past few months. She’s a member of EcoTok, a TikTok account run by a team of environmental educators and activists that provides climate education to over 100,000 followers.

Wood says that, while we all need to be aware of the threats to our world, the non-stop stream of terrifying headlines about the climate crisis can leave people feeling overwhelmed. She believes that shifting focus to possible solutions helps relieve eco-anxiety and gives a sense of agency to people who may have felt powerless to act. 

“The biggest thing that helped me was finding a counselor,” she said, speaking of her own previously debilitating fears. “They recommended that I seek out resources and people who weren't just talking about the doom.”

“It helped my eco-anxiety to know that I could fix things.” 

Along with a sense of pessimism, many young people describe a profound sense of guilt over their individual actions, such as eating meat or driving a car. 

Henry Ferland, a 19-year-old student and TikTok content creator, stresses that individual guilt is largely misplaced — after all, the corporations and governments that have failed to act are the ones to blame. However, he does believe that taking small steps with tangible effects is important both for the environment and for our mental health. 

And that’s exactly what helped him to tackle his own eco-anxiety.

Ferland, known on TikTok as Traashboyyy, tasked himself with picking up litter every day, then ended up setting himself a target of 50,000 pieces of garbage. "Doing little personal actions where you can see the good impact that you're having on the environment really helps me,” he said.

After meeting his goal a month ago, he bumped it up to 500,000 and asked his followers to join in, using the hashtag #trash500k on social media. 

“It's so much fun seeing people clean up in Germany and in Mexico and in Oregon,” he said. 

Like his teammates at EcoTok, Ferland believes that building communities and a wider awareness of eco-anxiety is extremely important. “You are not alone,” he said. “People who know that climate change is real have feelings of stress about it.”

Alaina Wood agrees wholeheartedly. “At the end of the day, it's really about finding somebody you trust, who you can talk to about it and who also understands what it is,” she said. “I was running into a lot of young people saying that their mental health professionals didn't know what eco-anxiety was.”

Vietnamese and Latino micro-influencers fight against vaccine disinformation in San Jose https://www.codastory.com/disinformation/vaccine-micro-influencers/ Wed, 11 Aug 2021 16:19:55 +0000 https://www.codastory.com/?p=23160 A diverse community of Instagram and TikTok stars is the latest weapon in the war against Covid-19

The post Vietnamese and Latino micro-influencers fight against vaccine disinformation in San Jose appeared first on Coda Story.

Mike Morea had just filmed his latest makeup tutorial when I arrived at his home in San Jose, California. In the video, the 26-year-old beauty and lifestyle influencer told his followers, while dabbing lotion onto his cheeks, that his aesthetic goal for the day was a subtle “no makeup look.” He showed me the video shortly after I walked through his door. When I complimented his skin, Morea grinned and opened a cabinet full of his favorite makeup products.

This is the type of interaction that dominates Morea’s social media feeds, where he offers intimate, casual tips in Spanish on everything from home improvement projects to the perfect eyeliner. Originally from Bogotá, Colombia, Morea posts prolifically to his 41,000 followers on Instagram and nearly one million on TikTok. His chatty videos and photos usually cover lifestyle topics, but a few months ago, he took on a subject his followers hadn’t yet seen him engage with: Covid vaccine hesitancy.

On May 19, Morea posted a photo on Instagram; in the picture, he wore a black face mask and stood in the aisle of a pharmacy. “At first, I was a little skeptical about the vaccine, but after listening to experiences and learning more directly from the experts at the department of health, I can assure you that these are just rumors!” he wrote. “Now I can’t wait to schedule my appointment to get vaccinated.”

https://www.tiktok.com/@mikemorea/video/6976687200914164997

Morea told me he was initially hesitant about getting inoculated against Covid-19, but after watching friends receive theirs without complication, he got the shot in June. He broadcast his vaccination online, so his followers got a front-row seat. “I kind of walked them through the whole thing, so it was actually fun,” he said. “A lot of misinformation is going around about Covid-19 vaccines. I wanted the opportunity to spread my voice and let the community know that’s fake news.”

Morea’s foray into the world of pro-vaccine social media messaging is part of an urgent public health effort in San Jose and a handful of other cities across the United States. As the coronavirus pandemic stretches into its 18th month in the U.S., totaling nearly 36 million cases and claiming over 614,000 lives, some local governments are turning to a diverse community of “local micro-influencers” like Morea — with 5,000 to 100,000 followers — to promote vaccination on their platforms. The effort is part of a nationwide push to convince the unvaccinated, about half of the U.S. population, to get immunized against Covid-19. 

Surveys suggest that the estimated 90 million unvaccinated but eligible adults in the U.S. fall into two major categories. The first group is predominantly made up of politically conservative, white, rural, and evangelical Christians, who are explicitly opposed to Covid shots. According to a July survey of unvaccinated U.S. adults by the Kaiser Family Foundation, 65% of unvaccinated white adults polled said they would “definitely not” get a Covid-19 jab, compared with 13% of Latino adults and 13% of Black adults. The second group comprises those who are cautiously open to getting vaccinated but say they want to “wait and see” before taking a shot. This cohort tends to be younger and more racially and politically diverse, according to the survey, including nearly one-third of Latino adults and 15% of Black adults. 

In Santa Clara County, where the city of San Jose is located, Black and Latino residents have the lowest vaccination rates of all demographic groups, despite dying from coronavirus at a higher per-capita rate. City officials, in partnership with the digital marketing agency XOMAD and funded in large part by the Knight Foundation, selected 49 micro-influencers to promote vaccines from May to June. 

Those chosen were paid between $200 and $2,500, compensated based on their number of followers, frequency of posts and level of engagement. XOMAD trawled through tens of thousands of social media profiles to find the right candidates and created an online platform where influencers could communicate with local government and health officials, relay questions from their followers and discuss how to engage with vaccine opponents. Social media posts carried the disclosure “Paid partnership with City of San Jose.”

During the two-month campaign, according to XOMAD, the influencers published 339 posts across Facebook, Twitter, TikTok, and Instagram, yielding 2.5 million total views and impressions.  

Officials selected influencers who mirrored the city’s demographics. San Jose is roughly one-third Latino and is home to the largest Vietnamese population outside of Vietnam, about ten percent of the city’s population. Proponents believe micro-influencers are able to cut through vaccine hesitancy and misinformation by addressing members of their own community on the same digital platforms where viral falsehoods have become widespread. 

“Over 50 percent of our messengers who participated in this campaign had between 1,000 and 10,000 followers on their primary channels,” said Trevor Gould, a senior executive analyst for the City of San Jose who helped spearhead the project. “And so it just has this extra sense of authenticity to it.” 

As a lifestyle influencer, Morea was “shocked” when he was first approached about the project. He signed up despite concerns that opponents might attack him for his involvement. “I knew what I was getting into because a lot of people are anti-vaccine,” he said. After posting his vaccination video, “I got messages like ‘oh, you’re going to get sick,’ ‘now you’ve got a chip in you’,” he said. 

Morea heard comparable comments from people offline. One relative asked if the vaccine would make him seriously ill or implant a foreign object in his body. Morea used these and other similar conversations to poll his followers, asking if any of their family members were anti-vaccine. “People responded, ‘yes, oh my god, my brother.’ It was kind of to relate, like you guys are not the only ones dealing with this.”

Beth Hoffman, a PhD candidate at the University of Pittsburgh Graduate School of Public Health who is currently studying Covid-19 vaccine misinformation on social media, said public health institutions should be thinking more about how to harness local influencers. She pointed to a June 2021 study by researchers with the Public Good Projects, a U.S.-based public health nonprofit, analyzing the success of a micro-influencer campaign promoting the flu shot for Black and Latino U.S. residents during influenza seasons. Researchers concluded that local social media personalities were critical messengers for conveying information related to flu inoculation in at-risk communities with lower vaccination rates. 

“I think what we’ve seen is that the anti-vaccine movement is very skilled at using social media to reach their followers, but public health has really lagged behind. So I think this can be a really valuable way to start doing the outreach that we need to do.” 

Debunking disinformation

Jonny Tran, a Vietnamese American influencer with a perfect cloud of bleached hair and a social media following of 67,000 on Instagram and 200,000 on TikTok, overcame his initial vaccine hesitancy by reading content debunking myths on social media and seeing his peers get the shot. By the time San Jose’s campaign courted him in early May, he was ready to dive in. 

Like Morea, Tran has also come up against vaccine hesitancy in his personal life, and believes sections of Vietnamese media may play a role in promoting skepticism about coronavirus immunization. 

“What I’ve seen within my family and some of the Vietnamese community I know is, a lot of it comes from misinformation from Vietnamese news,” he said. “It sort of instills fear in those who watch those outlets. I’ve seen it with my aunts and uncles who watch certain Vietnamese news, and because of that, they didn’t believe in the vaccine.” 

Jonny Tran, a fashion influencer, encouraged his followers to get vaccinated in partnership with the City of San Jose.

Morea, meanwhile, says the disinformation he’s encountered in the Latino community is predominantly circulated on WhatsApp through forwarded videos and audio messages from anonymous accounts. “It’s a huge way to spread misinformation,” he said. 

Both participants say the reception to their advocacy has been largely positive. “I’ve got random comments or DMs from people saying, ‘oh, I was a little worried about it but now I’m planning to get my first shot.’ That happened a few times,” said Tran.

The campaign between XOMAD, San Jose, and the Knight Foundation is one of a handful of similar partnerships nationwide between the marketing agency, city officials, and local micro-influencers, including in North Carolina, New Jersey, and Oklahoma. “We’ve worked with some of the biggest influencers in the world, but I will tell you that the real impact comes from nano and micro-influencers,” said XOMAD CEO Rob Perry. “They have genuine relationships with their followers.” 

While the White House has enlisted high-profile content creators to spread vaccine awareness, Perry says he hopes to see federal officials turn to more hyper-local names. “The Biden administration is largely focused on macro influencers,” he said. “But in my opinion, what is going to help beat this pandemic the most is tens of thousands of these trusted social media messengers all posting to target communities around the country.”

Texans post conspiracy TikTok videos claiming the snow is “government created” https://www.codastory.com/disinformation/texas-snow-disinformation-conspiracy/ Tue, 23 Feb 2021 17:54:07 +0000 https://www.codastory.com/?p=19985 Some of those affected by last week’s deadly snowstorm have turned to anti-science theories for answers

The post Texans post conspiracy TikTok videos claiming the snow is “government created” appeared first on Coda Story.

Despairing Texans are posting TikTok videos stating that last week’s deadly snowstorm, which plunged the state into chaos, is a “simulation” created by the government or corporate interests. The freak weather conditions have left millions without power and drinking water — and now people are turning to conspiracy theories for answers.

“That’s obviously government-created snow that was made by Joe Biden and the Democrats,” wrote one poster. 

Texas isn’t the only place to be hit with a flurry of snow conspiracies, though. The trend began earlier this month in the UK, when some cities were blanketed with unusual hail-like snowflakes. They set anti-science conspiracy theorists off on wild TikTok rants. “This ain’t real snow. I don’t know what lab cooked this up but it ain’t fooling no one. Just look at it,” said one user named @jasmine.ac2 in a video that attracted 1.3 million views. 

The theories then migrated across the Atlantic. “I’ve lived in Texas for 18 years and never seen snow like this,” says a user named Emily Rosielier. “Like how do they make fake snow look so real?”

One of the most talked-about clips was posted by Houston-based @omgchrissy1980. It shows a mother and her family out in the snow. The woman holds a lighter to a snowball and watches as it turns black. “Thank you Bill Gates for trying to fucking trick us that this is real snow,” she says.

In a second video she and her young daughter conduct another experiment with snow from their backyard. “If I put this shit in the microwave, it’s going to start sparking, because there’s metal in it,” she says. Needless to say, it melts. 

https://www.tiktok.com/@matt_and_omar/video/6931858095169637638?_d=secCgYIASAHKAESMgowYbOhHg9BU%2F0H7Cv9FBVYC%2BxV%2ByDV5aI87J6bf%2FusikJCsDd7xIAQZtV0n8lj21wEGgA%3D&language=en&preview_pb=0&sec_user_id=MS4wLjABAAAAoh5HdiLMeN5X4tejpmUuO0HnbDuH63PFdlD57D_V3cUw6LeNtntoaONHnn9pwmrz&share_item_id=6931858095169637638&share_link_id=649409D0-83F9-4C6F-8BD4-529BBE37DE2E&timestamp=1614100284&tt_from=copy&u_code=dgcbm63g1dcck3&user_id=6915036212622459906&utm_campaign=client_share&utm_medium=ios&utm_source=copy&source=h5_m

The theories have also been roundly dismissed by fact-checkers, who have explained that snow behaves differently to ice when a lighter is held to it. Rather than melting, the frozen water evaporates, while the butane gas that fuels the flame turns the remainder black. 

“There’s nothing strange or artificial about the snow in the videos. It’s probably best to let these claims melt away,” the PolitiFact website stated. 

Snow-based conspiracy theories are not new, either. In 2014, when the city of Atlanta experienced massive snowstorms, similar stories did the rounds on YouTube and Facebook.

“Here we are again, with the same hoax distributed with new tools. Wishful thinking and ignorance of basic science will always trump fact checking,” tweeted Mike Rothschild, a conspiracy debunker and author who is writing a book about QAnon. 

TikTok has not removed the videos flagged to it by Coda Story, though the company said it does “not allow misinformation that causes harm to individuals, our community, or the larger public.”

Other TikTokers are also hitting back against the conspiracists. Canadian influencer Matt Benfield posted a response to the videos to his 750,000 followers, citing his lifelong experience with heavy snowfall. “These conspiracy theories are getting wilder by the day. It’s offensive to the thousands of Texans without power and those that froze to death,” he said.

TikTok’s wellness trends breed misinformation https://www.codastory.com/disinformation/bad-science-on-tiktok/ Tue, 09 Feb 2021 15:37:21 +0000 https://www.codastory.com/?p=19834 Memes and challenges have underpinned TikTok’s meteoric rise — but the platform is also home to a torrent of misinformation

The post TikTok’s wellness trends breed misinformation appeared first on Coda Story.

Amid an endless stream of memes and lighthearted videos, pseudoscience content is rife on TikTok. From ineffective coronavirus cures to anti-vaccination content, the video sharing app has become fertile ground for all manner of disinformation.

Owned by the Beijing-based tech company ByteDance, the app brings together the most scrollable qualities of social media: unlimited content, served up to users by a tireless algorithm, and hundreds of thousands of custom image filters. In just four years, the app’s rise has been meteoric, reaching two billion downloads last October and beating older platforms such as Twitter and Snapchat in terms of total active users.

TikTok’s popularity has highlighted a number of vulnerabilities, including users who share videos that promote unscientific and, in some cases, dangerous medical advice, diets and treatments. In response, it issued an expanded policy on misleading content in early 2020, adding a “misleading information” category to its reporting toolkit for users. In the first half of that year, more than 104 million videos were removed from the app for violations. 

Here are five anti-science trends discovered by Coda Story:

Baking soda and Covid-19

Oleksandr Ignatenko

What could possibly connect Russian-language TikTok, my grandmother and the late scientist and physician Ivan Neumyvakin? The answer can be found in most kitchen cabinets. Before his death in 2018, Neumyvakin was an evangelist for the medicinal properties of baking soda. He loved the stuff so much he wrote an entire book about it. It recommended the ubiquitous white powder to treat all manner of ailments, from hemorrhoids and urinary tract infections to cardiac arrhythmia. Neumyvakin’s ideas were so well known that his name was the first one my grandmother dropped when I told her I was writing on this subject.

So, what does any of that have to do with TikTok? As I wandered through the platform’s labyrinth of bad science surrounding Covid-19, one video stood out. It shows a man who connects the hose of a Soviet gas mask to the spout of a kettle. He adds baking soda to the water inside, brings it to a boil, puts on the mask and inhales the steam. A caption states that doing so will cure a dry cough — one of the main symptoms of coronavirus. The video attracted more than 200,000 views and 3,000 reactions. 

Another clip that does much the same thing has reached 170,000 views. Although these videos do not mention Neumyvakin, this exact procedure can be found on page 31 of his book. One video, involving a milk and baking soda cocktail (a recipe that opens the book’s third chapter), does give him a shout-out, though. 

Baking soda, which is unsurprisingly ineffective against Covid-19, is far from the most bizarre silver bullet being proposed by people during the pandemic. Some have touted chloroform, while others have even promoted camel urine. Its growing visibility as a purported Covid-19 remedy does, however, underline something important. 

My grandmother told me that the widespread medicinal use of household staples in the Soviet Union was prompted by a lack of access to quality state health care. By this logic, such myths should have swiftly died out with the fall of communism. Now, though, citizens of former Soviet nations still have to contend with struggling health systems, a lethal virus is rampaging around the world, and platforms like TikTok have a reach far greater than any book. Repackaged in shiny new wrappers, the same old ideas now seem more prevalent than ever.

Intermittent fasting advice

Mariam Kiparoidze 

With viral challenges, lip syncing and dance moves to pop songs, TikTok has offered millions of people across the globe a sense of community during the coronavirus lockdowns. Staying at home also prompted a slew of recipes, instructions for sourdough starters, workouts and healthy eating videos. But, on that same menu, the platform has also served up a feast of junk science and nutritional misinformation. 

Recently, the respected Twitter account Food Science Babe posted the following message: “There’s SO much false information on TikTok regarding food, I could spend all day every day trying to refute all the videos people tag me in and I wouldn’t even make a small dent. So frustrating if this is where younger generations are getting their info.” 

TikTok, which is wildly popular among teenagers and young people, is flooded with videos carrying hashtags such as #intermittentfasting and #whatIeatinaday.

Intermittent fasting — which involves not eating for a specified amount of time, sometimes up to 24 hours — might prove beneficial to some when recommended by a health professional, but 15-second videos with glittery filters hardly ever go into the details. Meanwhile, the portions in #whatIeatinaday videos can be as small as a single tangerine for breakfast.

This trend worries health experts. Many blame a longstanding toxic dieting culture and believe that the internet has only aided its spread. Now, some registered dietitians have taken to social media to promote healthy eating and provide a positive influence.

“While there are obviously tremendous dangers to anyone at any age to restricting calories like this, the teen years are a particularly sensitive time for undernutrition,” said Abbey Sharp, a dietitian who reviews online nutrition misinformation on a dedicated YouTube channel. According to her, nutritional deficiencies experienced at a young age can cause severe growth retardation and hormonal imbalances — and that’s before you get to the psychological effects and the development of long-term eating disorders. 

Under its community guidelines, TikTok prohibits “content that promotes eating habits that are likely to cause adverse health outcomes,” but many of these videos appear to do exactly that.  

Misinformation about reproductive health

Caitlin Thompson

Young people are increasingly turning to TikTok to learn about sex. With 69% of users between the ages of 13 and 24, that is not surprising. Given the rampant nature of misinformation about reproductive health on the platform, it is, however, troubling. 

Bad science on TikTok ranges from the ridiculous — including testimonials for flavored suppositories — to fake cures, like using yogurt to treat yeast infections. 

According to Dr. Jennifer Lincoln, an OBGYN in Portland, Oregon, the amount of false claims about reproductive health on the platform speaks to the challenges young people face in accessing accurate information in schools across the U.S.

“Comprehensive sex education that’s medically accurate in the United States is completely the exception, not the rule,” she said. 

That means people are finding information elsewhere. This led Lincoln to set up her own TikTok account, in order to provide a relatable evidence-based voice for young people. Now she debunks myths about sexually transmitted infections and periods to her 1.7 million followers. 

Birth control is a hot topic. One stubborn trend of misinformation involves the contraceptive pill. Scrolling through #birthcontrol or #birthcontrolproblems turns up hundreds of videos of women claiming, without evidence, that birth control is toxic and causes infertility. 

https://www.tiktok.com/@dr.staci.t/video/6875745884592688390?_d=secCgYIASAHKAESMgowFj6goZ1AeqHriFTOuPRbxoALmuGAtcIqm9SLSWNj3dqHrWTiF%2FW2Ylk%2BJiMXdaLPGgA%3D&language=en&preview_pb=0&sec_user_id=MS4wLjABAAAAG3QmLehAhYO78aVABiQ9osp4EOR7PY-elyjimvzK8vvwQPnNATLw1Mz8wNn7xwaY&share_item_id=6875745884592688390&share_link_id=212EED65-75E8-473D-AA80-FB33E6310766&timestamp=1611361309&tt_from=copy&u_code=dc8d2d30b08ea0&user_id=6823613851700921349&utm_campaign=client_share&utm_medium=ios&utm_source=copy&source=h5_m

A lot of the information about birth control is sensationalized or presented out of context, explained Lincoln. “What it does is it makes people who are using birth control to not get pregnant feel really uncomfortable, like they're harming their body.” 

https://www.tiktok.com/@drjenniferlincoln/video/6785699701556595973?sender_device=pc&sender_web_id=6901447561358755333&is_from_webapp=v1&is_copy_url=0

TikTok is also full of naturopathy. Often, it’s about selling a product, like “womb detox” — whatever that is — or supplements that claim to help women cleanse after using birth control, which is totally unnecessary.  

Other bad science trends are even more worrying. A search for “do it yourself” abortions turns up dozens of videos of young girls holding clothes hangers — a potentially lethal method of back-alley termination. Others falsely claim that ibuprofen or cinnamon can be used to prompt miscarriage.

“I report those a lot,” said Lincoln. 

"Keeping our community safe is our top priority,” said a TikTok spokesperson, in response to questions for this story. “Our community guidelines make clear we do not permit misinformation that causes harm to individuals, our community, or the larger public, including medical misinformation. We have reviewed the videos that have been brought to our attention and taken appropriate action, including removal, against content that violates these guidelines."

TikTok’s Body Dysmorphia Problem

Isobel Cockerell

A new “challenge” has been sweeping TikTok. Ask your resident Gen-Zer about it, if you have one. It’s called the “Time Warp Scan.” 

“It's, like, this filter where a laser moves down the screen, freezing the video. If you move at the right time, it distorts your body,” my 23-year-old sister told me. You can extend your butt, lengthen your forehead or slim your hips, she explained, like you’re in a hall of mirrors. The top video using the filter has more than 30 million likes. 

But viral filters like these have started a bigger conversation, prompting users to ask why TikTok makes them feel ugly. Another filter, called “invert,” sparked a trend in which users flipped their selfie videos back and forth to show the mirror image of their face. The trick highlights how asymmetrical your face is — and has been upsetting some users. 

“Hopped on TikTok to cheer me up and it just made things worse,” said influencer Abby Price, showing herself bursting into tears as she reversed the image of her face. 

Counter-campaigns have been cropping up on the app, encouraging people to “turn off beauty mode,” which smooths skin and enhances facial features like an instant facelift. 

“Filters don’t help with the negative perception, the unreal way of looking at yourself,” said Sandeep Saib, a mental health activist who campaigns to raise awareness of body dysmorphic disorder (BDD), a mental health condition where individuals are fixated on the perceived flaws of one part or more of their body. “It’s basically fuel to the flame. It’s damaging, and it doesn’t do any justice to someone that’s suffering or trying to recover from BDD. It’s doing exactly what they’re trying to avoid.”

There’s another way TikTok users say the app is affecting their perception of themselves: by showing them a seemingly endless succession of perfect bodies. In March, the app faced criticism after leaked documents showed how it suppressed content posted by users it deemed ugly or disabled. At the time, a TikTok spokesperson said that the document’s instructions were no longer being followed. 

Carlita, 17, from Ontario, Canada, who asked to be identified only by his first name, joined TikTok in August. “I came into the app because I wanted funny videos and not, like, body dysmorphia,” he said. During the first few days, he felt that the recommendation algorithm was trying to figure out his sexual preferences and tastes. After that, it “basically only showed me these perfect guys. That constant barrage of perfect bodies started to influence my perception of myself.” 

TikTok spokesperson Sarah Mosavi said the company is committed to displaying diversity. “Being true to yourself is celebrated and encouraged on TikTok. Our community of creators is vibrant and diverse, and we want each and every one of our community members to feel comfortable and confident expressing themselves exactly as they are,” she said.

But Carlita said TikTok has led him to constantly compare himself with other users. “Eventually it became overwhelming,” he said. He deleted the app a few days ago. 

HPV Vaccine Skepticism and Pseudoscience on TikTok

Erica Hellerstein

It takes about 15 seconds for Dr. Todd Wolynn to give a brief overview of the human papillomavirus, a sexually transmitted infection that can cause cervical cancer. He’s wearing a light pink shirt, skinny black jeans and dancing in a dad-at-a-Bar-Mitzvah sort of way to the Black Eyed Peas. If this sounds more informal than you might expect a physician to be, consider the venue for his lesson of the day: TikTok.

Wolynn is one of several doctors and nurses taking to the video-sharing app to educate teens about HPV and the HPV vaccine. Although the shot has been proven in studies to prevent cervical cancer, skepticism and misinformation about it still abound on social media.

Content casting doubt on the vaccine attracts a large audience on TikTok. One clip, with 11,000 views, advises viewers against taking the Covid-19 and HPV shots. Another, with 2,000, states that the HPV vaccine “ruins lives” and that “unvaxxed kids are far healthier than vaccinated kids.” Others say that it causes death and paralysis. There is no scientific evidence that the HPV vaccine leads to paralysis; the U.S. Centers for Disease Control and Prevention has determined that it is “very safe” and has found no “causal links” between it and reported post-vaccination deaths.

Staci L. Tanouye, a board-certified gynecologist based in Florida who posts informational videos about sexual health on TikTok and other social media platforms, told me that the HPV vaccine is “kind of the OG of misinformation for young people in particular.” 

“It’s basically anecdotal stories of people claiming to be vaccine injured in some way,” she explained, adding that she also sees a lot of “vague accusations” about the shot killing people.

TikTok isn’t the only social media outlet where HPV vaccine skeptics thrive. A recent study, published in the journal Vaccine, analyzing HPV-related content on Facebook found that posts expressing negative attitudes about immunization were prevalent and likely to generate high levels of engagement. Meanwhile, an October 2020 study in the American Journal of Public Health found that, despite the platform’s policy to moderate vaccine-related content, posts about the HPV shot on Pinterest were typically dominated by skeptics and received similarly high levels of attention. 

Tanouye, Wolynn and other doctors are posting snappy and whimsical videos about HPV vaccination and prevention on TikTok. Some have even found their way to the hashtag #hpvvaccineharms, disrupting what may otherwise be a rabbit hole of antivax content with credible medical information. The posters include @pacenurses2020, an account made up of nursing students at the U.S.-based Pace University, dedicated to spreading HPV vaccine awareness. 

Tanouye explained that their presence is necessary — and that there is an appetite among younger users for solid scientific information. “The more we get on there to combat some of this, the more we’re empowering them to watch something that they might have believed before, but now they’re questioning it, or tagging all of the doctors to point it out,” she said. “So it really is making a difference.”

The post TikTok’s wellness trends breed misinformation appeared first on Coda Story.

TikTok sees a surge in anti-protest disinformation in Russia https://www.codastory.com/disinformation/russian-anti-protest-influencers/ Tue, 26 Jan 2021 15:44:15 +0000 https://www.codastory.com/?p=19691 Social media influencers with millions of followers have urged ordinary Russians to stay away from demonstrations in support of Alexey Navalny

The post TikTok sees a surge in anti-protest disinformation in Russia appeared first on Coda Story.

Scores of social media influencers flooded Russian TikTok over the weekend in a coordinated effort to urge young people against attending rallies in support of jailed politician Alexey Navalny.

In the lead up to protests in over 100 cities on Saturday, influencers with up to four million subscribers on the video sharing platform told their followers they could end up in jail for taking part in the demonstrations.

“I don’t think anyone wants to end up in jail so guys, take care of yourselves and your health and think about the consequences,” warned @_Vira__ in a video posted to her more than four million followers on TikTok. 

In another post, user xlazhii reminded his 188,000 followers that Navalny’s own children go to university outside of Russia. “So why should Russia’s youth be coming out for that, what is the point?” 

Both users have since deleted the clips.

Many of the influencers’ posts revealed a coordinated and sometimes clumsy attempt to harness TikTok’s immense popularity in Russia against the opposition figure. A number of influencers and marketing managers said that anonymous social media accounts were soliciting paid, anti-Navalny videos.

Boris Kantorovich, deputy commercial director at the influencer marketing firm Author’s Media in Moscow, wrote about a coordinated effort by TikTokers who tried to dissuade people from attending the protests. Kantorovich attached screenshots of an advertising call posted on popular marketing forums on January 24 and noted that “all of them repeat the same message.” The call outlined a script that can be heard in dozens of videos posted across TikTok the same day. The ad promised users with over 20,000 subscribers about $25 per post.

Russian authorities took a number of measures online to limit attendance. The state censorship agency, Roskomnadzor, forced social networks like Instagram, YouTube and TikTok to remove event pages and posts about the protests. 

While the posts were largely aimed at younger Russians, the group among whom Navalny enjoys his highest approval ratings, it’s not clear how much of a dent the paid videos made in the flood of support for the opposition leader.

Tens of thousands of Russians protested over the weekend from Moscow to remote regions like Yakutsk, where Navalny supporters rallied in minus-60 Fahrenheit weather. The mass gatherings followed Navalny’s return to Russia on January 17, when he was arrested on arrival for violating the terms of a suspended sentence from 2014. The leader had been undergoing treatment in Berlin since last August after he was exposed to the Soviet-era Novichok nerve agent. Western officials have described his poisoning as an assassination attempt by the Russian state, a charge denied by the Kremlin.

Over 3,800 people were arrested in Saturday’s protests, according to OVD-Info, an activist group that monitors political arrests.

Following media reports about the paid posts, dozens of pro-Navalny TikTokers rallied around the politician, instructing their followers to unsubscribe from accounts which had posted warnings about the protests.

“Do you know how much you cost? 2,000 rubles, yea...You can find out who all these paid-for bloggers are by checking out this hashtag,” said user @ideniza, pointing to #norevolution to her 88,500 followers. 

TikTok, which has over 20 million subscribers in Russia, had been largely left untouched by authorities, in sharp contrast to other online spaces which are both heavily censored and subject to paid, pro-government campaigns. Authorities have been slow to adapt to the new platform, which launched in Russia in 2018 and has become a safe haven for activists and outspoken young people. 

That began to change in August last year when a criminal case was opened against an underage TikTok user in Chita in Siberia for “offending the feelings of believers.” The teenager had posted a video of himself lighting a cigarette with a church candle.

Is TikTok’s free-speech honeymoon over in Russia? https://www.codastory.com/authoritarian-tech/tiktok-russia/ Thu, 01 Oct 2020 16:19:19 +0000 https://www.codastory.com/?p=18138 The video-sharing platform has been a safe haven for activists and outspoken young people, but now that is changing

The post Is TikTok’s free-speech honeymoon over in Russia? appeared first on Coda Story.

Since its 2018 launch in Russia, TikTok has established itself as one of the country’s least censored online spaces. Platforms such as Facebook, Instagram and VK are heavily monitored by the authorities, with a number of users being handed lengthy prison terms for posts deemed “extremist” or “offensive.” But, as a relatively new and unfamiliar platform, the video sharing app has largely been left alone by the authorities.

That all began to change in late August, with a video posted from the city of Chita in Siberia. Soundtracked by Billie Eilish’s hit “Bad Guy,” it showed a teenage boy inside a gilded Orthodox church. Looking over his shoulders and crossing himself, he leaned over and lit a cigarette on a church candle. “This will be the crime of the year,” said an off-camera voice. 

The video was widely shared, and a criminal case was opened accusing the boy of “offending the feelings of believers” — a charge carrying a sentence of up to three years in prison. After he published three online apologies, the local branch of the church appealed to the authorities to drop the case.

Later that month came a second court action, in which a TikTok user was fined $250 for encouraging her followers to attend a protest in the city of Yekaterinburg. 

“People being imprisoned for their speech online is hardly news for us. It happens a lot on other platforms,” said Sarkis Darbinyan, founder of the Moscow-based Digital Rights Centre. “In this sense, there is no hope for TikTok. Most likely, it will start to comply and censor everything that Russian-owned platforms already do.”

https://youtu.be/oq6x5LUiO2Y

While TikTok’s free-speech honeymoon may now be over, thousands of videos are still being uploaded and viewed by young Russians every day. As such, it opens a rare window to what this demographic is thinking and talking about.

Much like Twitter or YouTube, TikTok, which has 20 million users in Russia, publishes its trending topics. Lately, protests in Belarus and the poisoning of Russia’s main opposition figure Alexey Navalny have dominated the conversation. TikTok also counts the number of views for trending topics: constitutional reform in Russia has 107 million, while a hashtag in support of a recently incarcerated governor in eastern Russia has a staggering 274 million.

This is especially remarkable considering that TikTok has an aggressive content moderation team, who in the first half of this year removed more than 104 million videos from the platform globally. Last month, reports surfaced of how TikTok specifically censors hashtags related to LGBTQ issues in countries including Jordan, Turkey and Russia. 

The results of the company’s content moderation strategy can be seen in its second-ever transparency report, published on September 22. Between January 1 and June 30, moderators removed more than 700 pieces of content, in response to requests filed by national governments. Russia took the top spot, responsible for nearly 300 requests.

In recent months, TikTok, which is owned by the Beijing-based company ByteDance, has come in for considerable regulatory scrutiny in the EU and U.S. over its collection of user data. Before a September court decision temporarily blocked the move, President Donald Trump also attempted to ban U.S. online stores from hosting the application.

But that hasn’t stopped young Russians posting.

Kiril Fedorov, a 27-year-old LGBTQ activist from Saint Petersburg, believes the chances of facing prosecution under Russia’s “gay propaganda” law are slimmer on TikTok than any other platform. 

“Right now TikTok is saved by its reputation as something apolitical and unserious,” he said. “But there’s really a lot of political content there, a lot about human rights, feminism, LGBT and so on.”

Fedorov, who has 90,000 likes on his account, and other Russians get around the platform’s censorship tools by writing posts and hashtags in Cyrillic instead of the Latin alphabet, or a mix of both. 

By Russian standards, Fedorov’s videos are extreme, including explicit jokes about gay sex and drag impersonations of prominent state officials. In one of his most popular clips, he plays out a scenario in which he runs into an ex-boyfriend in a Gulag camp after being jailed for LGBT activism.

Mainstream opposition figures have also jumped on the TikTok bandwagon, albeit begrudgingly. Alexey Navalny created an account this summer. Counterintuitively, he documented the process on YouTube, posting a video in which he said, “I’m not even sure why I’m doing this.” 

Denis Kaigorodov, a 20-year-old TikToker from Tambov, central Russia, is part of an account named @fear_patriots, along with a group of local amateur comedians. 

His top-performing post so far was made in June. In it, he called for voters to come out against a controversial amendment to Russia’s constitution that would allow President Vladimir Putin to remain in office for life. Although thousands of young Russians flocked to the platform ahead of the polls, the amendment passed with 78% of the vote, prompting widespread allegations of electoral fraud.

Since then, Kaigorodov, who has 240,000 likes on his account, has made a series of comic videos mocking the result. In them, he accuses officials of stuffing ballots and states that members of the electorate cast repeated fraudulent votes.

“Political jokes do really well with audiences,” he explained. “People start to think you possess some kind of special courage if you’re making a joke like that in Russia.”

This reporting was supported by the Russian Language News Exchange.

Welcome to TikTok’s sanitized version of Xinjiang https://www.codastory.com/disinformation/tiktok-xinjiang-sanitized-version/ Fri, 11 Sep 2020 12:17:03 +0000 https://www.codastory.com/?p=17785 TikTok’s Xinjiang hashtag is cleansed of content about the Uyghur humanitarian crisis, according to a new report

The post Welcome to TikTok’s sanitized version of Xinjiang appeared first on Coda Story.

TikTok is showing users a “politically convenient” and curated version of life in Xinjiang, flooding hashtag #Xinjiang with positive messages about the region while cleansing criticism of China’s repression of Muslim minorities, according to a new report published this week. 

Published by the Australian Strategic Policy Institute, the report describes how China’s official line on the humanitarian crisis has been promoted to TikTok users around the world. Of the top 20 videos under TikTok’s Xinjiang hashtag, only one is critical of the Chinese Communist Party.

The report, which also looked at how the video-sharing platform’s LGBTQ hashtags in the Middle East and Russia are being shadow banned, studied how the app treated the program of oppression, surveillance and control currently underway in China’s northwest region.  It described TikTok as “a powerful political actor with a global reach,” with the ability to covertly control flows of information on its platform. 

While the subject of Xinjiang and China’s oppression of Muslim minorities is widely discussed across other social media platforms such as Facebook and Twitter, the Xinjiang hashtag on TikTok, owned by tech giant ByteDance, is noticeably free of criticism. A scroll through TikTok’s #Xinjiang videos shows users glossy propaganda videos and happy vlogs made by state media-linked accounts, Chinese influencers and production companies, while content that’s critical of the regime is relegated to the bottom of the feed. 

When researchers at the Australian institute analyzed all 444 videos on the Xinjiang hashtag, they found that only 5.6% of videos were critical of the crackdown on the Uyghurs. Almost half of the top 100 videos were either propaganda, presenting Xinjiang in a highly idealized light, or outwardly pro-CCP. 

“Do not rely on the view of the world that TikTok provides you, because it is very distorted,” said Fergus Ryan, an analyst at ASPI and the report’s lead researcher. “The CCP has enormous leverage over this company, and their temptation to then use that leverage to subtly tweak the algorithms is going to be irresistible.”

ByteDance is subject to China’s security, intelligence, counter-espionage and cybersecurity laws. “ByteDance itself works hand in hand with public security bureaus in China to produce and disseminate propaganda,” said Ryan, describing how ASPI’s research showed TikTok’s feeds being flooded with content from Douyin, its ByteDance-owned Chinese counterpart, which operates behind the country’s firewall. TikTok denied cross-posting content from Douyin, but said that its users may be doing so.

Last year, TikTok was criticized for censoring discussions of Uyghur oppression after the company banned an Afghan-American political activist who posted a video discussing Xinjiang. 

“TikTok remains committed to creating a fun, authentic, and safe place for our users,” a TikTok spokesperson said in an emailed statement. TikTok did not answer questions about being used as a platform for Chinese state propaganda. 

As reporting from Coda Story pointed out in August, the Xinjiang hashtag feed is flooded with videos hailing the region as a tourist destination and featuring happy Uyghurs living an idyllic rural lifestyle. 

Following up on Coda’s coverage, the ASPI report identified several characters within the Xinjiang hashtag. One particularly prominent user, Jessica Zang, has more than 16,000 TikTok fans, is an employee of the state-owned China Global Television Network and is a member of China’s ruling Communist Party (CCP). “Have no idea that my video were on top under #Xinjiang… that’s new to me,” said Zang in a Facebook message, adding that she wanted to use her English skills to post on the international version of the app. “I just wanted to share the beauty and fun things in China.” Her videos proliferate on the #Xinjiang hashtag, showing snippets from a recent trip to the region during the pandemic. 

Another account on the hashtag is @guanvideo, a Chinese production company that also produces videos for Beijing-owned Global Times.

The Xinjiang hashtag also features videos from Uyghurs within China, such as a young woman called @aygul_uyghur, whose account describes her as “just a simple girl from Xinjiang.” Her most popular video has racked up more than 100,000 views. In the past, Uyghurs living in exile have pointed to social media accounts like these as clearly propaganda-oriented, for the simple fact that they are allowed to post to western audiences. 

For many Uyghur citizens in Xinjiang, using a virtual private network (a necessity to post on TikTok) is a shortcut to arrest and detention, though there appear to be exceptions for users who promote the Chinese party line on the video-sharing platform. 

“The result, even for TikTok users perusing the topic, is a depiction of Xinjiang that glosses over the human rights tragedy unfolding there and instead provides a more politically convenient version for the CCP,” the report said.

President Trump has given ByteDance a deadline of September 20 to sell its U.S. TikTok business and destroy all its copies of U.S. user data, or be banned in the U.S. In India, the app – which dominated the country’s social media space – was banned in June as part of an ongoing standoff between Delhi and Beijing, costing ByteDance 120 million monthly users.

An analysis by ASPI of ByteDance’s careers page suggests that the company is continuing to hire workers in China to monitor international TikTok content. 

“Even if hypothetically they were able to completely sever the content moderation from China, that doesn't get to the core problem: What is happening in the algorithm,” said Ryan. “The algorithm is a total black box.”

Egyptian TikTok influencers’ funds frozen under their conviction https://www.codastory.com/authoritarian-tech/egypt-tiktok-influencers/ Wed, 19 Aug 2020 12:33:37 +0000 https://www.codastory.com/?p=17348 Last week, Mat Nashed reported on how the Egyptian authorities are targeting young female social media influencers. Mawada Eladhm and Hanin Hossam are just two of at least nine young women in the country to have recently received prison terms and heavy fines for posting videos on platforms including TikTok and Instagram. All were charged

The post Egyptian TikTok influencers’ funds frozen under their conviction appeared first on Coda Story.

Last week, Mat Nashed reported on how the Egyptian authorities are targeting young female social media influencers.

Mawada Eladhm and Hanin Hossam are just two of at least nine young women in the country to have recently received prison terms and heavy fines for posting videos on platforms including TikTok and Instagram.

All were charged with “violating family values” and inciting debauchery under a controversial cybercrime bill passed by the government of President Abdel Fattah el-Sisi in August 2018. This week, the Egyptian state went a step further: the prosecutor general and a Cairo criminal court ordered Eladhm and Hossam’s funds to be frozen.

“When it comes to Egypt, the issue is the cybercrime law that is extending and legalizing the oppression of the state towards its own citizens from the physical space to online,” said Mohamad Najem, executive director of the Middle East and North Africa digital rights NGO SMEX.

Najem explained via WhatsApp that the law is notoriously vague, including provisions related to the “protection of family values” and “breaching public morals.” He added that such clauses “are being mainly applied against women.”

Eladhm and Hossam’s videos of themselves dancing and lip-syncing to pop songs may seem simple and innocent, but they attracted hundreds of thousands of followers. These large audiences are easy to monetize, allowing influencers to earn large sums from their content. The Egyptian authorities, however, consider that such online activities attack the fabric of traditional society, promoting immorality and encouraging prostitution.

This position is consistent with a wider effort to tighten the state’s grip on social media. In addition to accusing TikTok of spreading immorality, some politicians have called for it to be blocked entirely. A Cairo administrative court will also decide on September 20 whether to ban YouTube, one of the country’s most popular online platforms. 

Najem explained that while nations such as Saudi Arabia and the United Arab Emirates also use broad legislation to censor and criminalize the online activities of residents, Egypt’s TikTok trials stand out because of the large number of cases and the apparent focus on one gender. 

“Having single women controlling their own bodies outside the usual norm and definition might be challenging for the system, therefore it’s easier to criminalize them than to accept them as part of society,” he said.

Egypt’s TikTok crackdown targets young female influencers https://www.codastory.com/authoritarian-tech/egypt-cybercrime-bill/ Wed, 12 Aug 2020 15:57:22 +0000 https://www.codastory.com/?p=17164 A vaguely defined cybercrime bill has seen the government convict Egyptian social media stars with millions of followers

The post Egypt’s TikTok crackdown targets young female influencers appeared first on Coda Story.

When Mawada Eladhm began posting videos on TikTok, she had no idea that being an online influencer in Egypt was so perilous. Despite having three million followers, she became the target of frequent derogatory comments on the popular video-sharing platform. In April, she appeared to acknowledge the situation, posting a clip showing her with dyed blue hair and lip-synching to a melancholic song from an old Egyptian TV series. The lyrics seemed to convey how she felt about her attackers: “This is a time when people have monsters deep inside their hearts.”

The next month, the nation’s Ministry of Interior issued a warrant for the 22-year-old’s arrest, accusing her of publishing videos and photographs that violated family values. Eladhm, who is the daughter of a retired policeman, fled her home in Cairo, but officers eventually found her in a suburb of the city by tracking her cellphone. She was sentenced in late July to two years in prison and fined nearly $19,000. 

Even Eladhm’s lawyer believes her to be guilty. “The police only arrested girls that misused apps,” Ahmed al-Bokheir told me during a telephone interview. “For example, girls are now using TikTok for online prostitution. These are the kind of girls that are being arrested.” 

Most of Eladhm’s videos feature her mouthing the words to pop songs or dancing to Arabic electronic music in fashionable dresses and crop tops. That wouldn’t be a crime in most countries, but in conservative Egypt she has become one of at least nine female TikTok users prosecuted in recent months on charges related to inciting debauchery and prostitution. 

The women are all from middle- or working-class backgrounds, and some monetized their followings to earn thousands of dollars. While their content did not violate the app’s community standards, Egyptian authorities have enforced their own red lines, without clearly demarcating them. 

Like Eladhm, the other women have been charged with “violating family values” – a vaguely defined clause from a controversial cybercrime bill that was passed in August 2018. Reporters Without Borders warned that the bill would legalize President Abdel Fatah el-Sisi’s broader war against online dissent, which has resulted in the blocking of at least 500 news websites and the jailing of numerous Egyptians for posts on Twitter and Facebook. 

Culture wars

Since toppling the Muslim Brotherhood’s Mohammad Morsi — Egypt’s only democratically elected leader — in July 2013, Sisi’s regime has cracked down on individuals who challenge the nation’s deeply entrenched social norms. In recent years, women have been jailed for speaking out against sexual harassment online, and the LGBTQ community has been targeted with raids on public gatherings, arrests and the torture of detainees. 

Now, the government is tightening its control of social media. Just last month, a Cairo administrative court said that it will decide on September 20 whether to block YouTube, the second-most-used social media platform in Egypt. The reasons for these deliberations have not been disclosed to the public. In June, the nation’s Supreme Administrative Court ordered authorities to block the site for one month over its refusal to remove a video it deemed insulting to the Prophet Mohammad. 

However, even that decision failed to generate the same level of attention as the TikTok trials, which have become a key battleground in a wider culture war. In addition to the courts, Egypt’s parliament has also accused TikTok of spreading immorality with some lawmakers demanding that the government suspend the app.

TikTok, which is owned by the Beijing-based company ByteDance, has come under intense scrutiny in a number of countries over concerns that the Chinese government could use it to spy on users. On August 6, President Donald Trump signed an executive order banning any U.S. transactions with ByteDance. 

In Egypt, controversy flared up around social media platforms in April, when influencer Hanin Hossam uploaded a video on Instagram from her Cairo bedroom. In the clip, Hossam, 20, wore a red headscarf, matching lipstick and a grey sweater. With her phone held casually in front of her, she told her 746,000 followers that she was recruiting young women to work as influencers for a new video app named Likee, a rival to TikTok. 

The clip circulated online for three weeks before it was seen by Nashaat al-Dihy, an anchor for the popular satellite channel TeN TV. During his broadcast on April 19, Dihy played snippets of the footage before accusing Hossam of encouraging prostitution. Two nights later, intelligence officers showed up at Hossam’s home and arrested her in front of her family. She too was found guilty of violating family values and sentenced in late July to two years in prison and a fine of nearly $19,000. 

“May God punish Dihy,” said a close relative, who asked that her name not be published, for fear of reprisals from authorities. “He manipulated her video to make it look like she promoted immoral behavior.” 

Neither the Ministry of Interior nor police authorities responded to requests for comment for this story. Emails sent to TikTok and Likee also received no reply.

Hossam’s lawyer, Mahmoud Heidar, places the bulk of the blame on celebrities like Dihy for encouraging the state to pursue such cases. He added that Egypt’s countrywide coronavirus lockdown, which came into effect in mid-March and was lifted in late June, prompted people to spend much more time on social media than they had previously.

According to Heidar, many newcomers followed men who had criticized and bullied Hossam online for singing along to Egyptian pop songs and posing in fashionable outfits while wearing a traditional Muslim headscarf. 

One, who goes by the name of Naser Hekaia, told his 447,000 YouTube subscribers that “Hossam disrespects the veil she wears.” 

During a telephone conversation about the verdict against his client, Heidar complained that “our society convicted Hossam before the court did.” 

Double Standard 

The current clampdown on TikTok and other social media platforms highlights the Egyptian government’s inconsistent attitude towards digital spaces. Last month, dozens of women used Instagram to post detailed accounts of sexual assaults allegedly carried out by a 21-year-old Cairo student. Within days, police had arrested a man named Ahmed Bassam Zaki and launched an investigation into the allegations. This swift action prompted a brief wave of optimism that the authorities were finally ready to take such cases seriously. 

However, these hopes are hard to square with the arrests of the young women on TikTok. The most troubling case of all, though, has been the state’s reaction to a video posted on TikTok in May by 17-year-old Menna Abdel Aziz from Cairo. In it, she appeared with a swollen face and accused Mazen Ibrahim, 25, and three female accomplices of assault and rape.

“If the government is watching this video, then get me justice,” she said. 

Police later confirmed that one of the alleged accomplices filmed and uploaded footage of the attack online. The most widely shared video, viewed tens of thousands of times on YouTube and TikTok, shows Aziz being slapped across the face while attempting to put on her trousers. But, rather than support Aziz, police arrested her and the alleged attackers on charges of inciting debauchery.  

Women’s rights advocates succeeded in lobbying for Aziz to be transferred from prison to a rehabilitation center, following a police investigation that confirmed her assault and rape. However, the charges against her remain. 

A number of activists have come together to launch campaigns to raise awareness about the ordeal facing women accused of promoting immorality on social media. 

“The government has created a distinction between ‘good’ and ‘bad’ women,” said Mozn Hassan, founder of the non-profit women’s rights group Nazra for Feminist Studies. In 2016, Hassan and her NGO were charged with receiving foreign funds for the purpose of “harming national security.” Her assets were frozen and she was banned from traveling.

Hassan now believes that Sisi’s regime is targeting women online because the internet has become the last public space available to them. “The bad women are activists, human rights defenders and the TikTok girls,” she said. 

Mohammad Hamarsha contributed additional reporting.

The post Egypt’s TikTok crackdown targets young female influencers appeared first on Coda Story.

]]>
TikTok apologizes after being accused of censoring #blacklivesmatter posts https://www.codastory.com/authoritarian-tech/tiktok-censors-blacklivesmatter/ Wed, 03 Jun 2020 14:12:13 +0000 https://www.codastory.com/?p=14703

The post TikTok apologizes after being accused of censoring #blacklivesmatter posts appeared first on Coda Story.

]]>
We don’t just follow stories, we follow up. Isobel Cockerell has been reporting on how TikTok censors people – and how users get around it – over the past nine months.   

Chinese-owned social media giant TikTok has apologized for censoring posts about protests which have rocked hundreds of cities across the U.S. in the last week. 

Earlier this week, the company responded to complaints from Black users who said that videos they posted using the hashtags #blacklivesmatter and #GeorgeFloyd had received zero views. The company said the platform had experienced a “technical glitch” and on Tuesday issued a lengthy apology entitled “a message to our Black community.”

“We understand that many assumed this bug to be an intentional act to suppress the experiences and invalidate the emotions felt by the Black community,” wrote Vanessa Pappas, TikTok’s U.S. General Manager, and Kudzi Chikumbu, Director of Creator Community.

As social media platforms like Facebook and Instagram went dark with #blackouttuesday posts on Tuesday, TikTok users were greeted with a banner that read #BlackLivesMatter and #theshowmustbepaused. It was the first week on the job for TikTok’s new CEO, Kevin Mayer, who joined the company from Disney. He said in a post: “I invite our community to hold us accountable for the actions we take over the coming weeks, months, and years.”

It’s not the first time TikTok, which has been downloaded more than two billion times, has been criticized for suppressing and censoring posts – in particular when they relate to China.

https://twitter.com/XLHawkins/status/1267750892339683328

In September, it was revealed that TikTok was censoring videos that might be displeasing to Beijing, such as those mentioning Tiananmen Square or footage from the Hong Kong protests.

Coda Story has previously reported on how ethnic Uyghurs circumvent the censors on the Chinese version of TikTok, which hide content exposing the mass detention and oppression of Muslim minorities in the northwestern region of Xinjiang. In November, TikTok suspended the account of an American user who posted a viral makeup video in which she discussed the oppression of the Uyghurs in Xinjiang. The company blamed the suspension on “a human moderation error.”


]]>
Xinjiang’s TikTok wipes away evidence of Uyghur persecution — Coda Follows Up https://www.codastory.com/authoritarian-tech/xinjiang-china-tiktok-uyghur/ Fri, 24 Jan 2020 13:41:38 +0000 https://www.codastory.com/?p=11102

The post Xinjiang’s TikTok wipes away evidence of Uyghur persecution — Coda Follows Up appeared first on Coda Story.

]]>
We don’t just follow stories, we follow up. Six months ago, our reporter Isobel Cockerell wrote a story about an international group of Uyghurs who trawled the Chinese version of TikTok for evidence of China’s mass crackdown on its Muslim minorities. Some spent every waking hour of their day on Douyin — the Chinese name for TikTok, which is digitally walled off from its international counterpart.

In the months since, TikTok has come under fire for taking down a video in which a young woman discussed the Xinjiang concentration camps while curling her eyelashes. The platform later apologized for a “human moderation error.”

“TikTok does not moderate content due to political sensitivities,” a TikTok spokesman told me at the time.

Since then, Xinjiang’s Douyin space has become an all-singing, all-dancing propaganda platform.  

When I first spoke to him, Sydney-based Uyghur activist Alip Erkin, 41, was trawling Douyin every day for evidence of China’s persecution in Xinjiang. In recent months, he believes the app has been wiped of the most compromising information about Xinjiang. 

“I feel that nowadays the videos that I would hope to see on Douyin have decreased in number,” said Erkin. “I think Douyin has gathered experience of how they can best censor people.”

Now when Erkin logs on, he’s greeted by a wall of videos showing a sunny, smiling Xinjiang. “The visuals are very reflective of the facade of the situation and the fake acts of being happy and dancing and singing in public,” Erkin said.

Erkin has noticed how the Uyghur language – which is Turkic in origin and uses Arabic script – is being wiped away from Douyin. “Most Uyghurs are using Mandarin now for their captions and in their videos,” he said. 

https://twitter.com/Gheribjan/status/1215886779439448064

Erkin tried an experiment: “The other day I used a Uyghur-language name to set up an account. About a minute later a notification came in saying: ‘The information you put in is not accepted by our rules.’”

When Erkin changed it to Latin letters, the account name was approved. 

In November, a Douyin video of a young girl complaining about being censored for using Uyghur language went viral on social media. “I would like to ask Douyin, why are my videos suspended every time I do them in Uyghur?”

https://twitter.com/BrightDestinee/status/1193972576835362816

Aliye Yasin, whose name has been changed to protect her family, used to spend hours every day trying to trick the app’s algorithm into showing her content the Chinese government didn’t want her to see. For a while, it worked. But spending so much time on the app can take its toll. “I stopped digging,” she told me. Now, she says, whenever she logs on, “the algorithm just gives me propaganda again.”

Last week, state-owned media outlet Global Times published an article about how a hashtag, #charmofthexinjiangpeople, had gone viral on Xinjiang Douyin.

https://twitter.com/globaltimesnews/status/1217747844129378306

The article described how the hashtag featured “beautiful Xinjiang people in gorgeous ethnic costumes, and, of course, their brightest smiles.”

The footage was in stark contrast to some of the video content that has previously leaked out of Xinjiang via Douyin, including images of religious buildings being destroyed, or long lines of people waiting to be scanned at one of many security checkpoints. 

“ByteDance collaborates with public security bureaus across China, including in Xinjiang where it plays an active role in disseminating the party-state’s propaganda,” observed a November report by the Australian Strategic Policy Institute.

“It wasn’t that long ago that Uyghurs were using Douyin (TikTok) to shine a light on the brutal surveillance state in Xinjiang,” tweeted Fergus Ryan, an analyst at ASPI. “Looks like ByteDance has got that under control. Only ‘positive energy’ now.”


]]>
How TikTok opened a window into China’s police state https://www.codastory.com/surveillance-and-control/tiktok-uyghur-china/ Wed, 25 Sep 2019 11:05:54 +0000 https://www.codastory.com/?p=8667 Uyghurs are gaming TikTok’s algorithm to find a loophole in Xinjiang’s information lockdown

The post How TikTok opened a window into China’s police state appeared first on Coda Story.

]]>
Every evening after getting back from his studies, Alip Erkin sits at home in Sydney, Australia, and opens up the video-sharing app TikTok on his Android phone. He’s looking for something in particular: videos from Xinjiang in northwest China, which he left for the last time in 2012. Since then, Xinjiang has been rapidly transformed into a vast police state, where Uyghurs, the mostly Muslim ethnic group native to the region, are systematically targeted and surveilled, with more than a million held in concentration camps and detention centers.

Scrolling videos of home from thousands of miles away is strange, nostalgic, and bittersweet for anyone. But for Erkin, 41, it is a glimpse of a place the Chinese government doesn’t want him to see. 

Upon opening the TikTok app, Erkin is greeted by walls of auto-playing videos, most of which mean nothing to him: surreal comedy bits, cheery musical singalongs. “I get a cup of coffee and browse,” he said. “Most of the videos are worthless in terms of the political situation.”

But every so often, Erkin comes across a video that reveals something about the realities of  China’s mass surveillance crackdown and brainwashing policy in Xinjiang: a video of a propaganda rally, with Uyghurs singing songs praising the Communist Party of China; footage from inside a Uyghur orphanage for children with parents in detention; crowds of Uyghurs chanting in Mandarin rather than their native Uyghur language; a mosque being demolished. Often he’ll find shots of the deserted streets of once-bustling Kashgar, his now empty home city. 

Videos have surfaced on TikTok and other Chinese apps that appear to show the destruction of traditional Uyghur and other Muslim buildings and mosques. The video on the far right is thought to show a Hui mosque.

Most valuable of all is evidence of life under Xinjiang’s oppressive surveillance regime, such as footage of a long line of Uyghurs waiting to pass through a security checkpoint or heavily armed Xinjiang police in training. 

Often while browsing TikTok, Erkin spots smaller, unfamiliar details: Uyghur Muslim women without headscarves; men without beards. He notices his native Uyghur language gradually disappearing from street signs. “These parts of Uyghur culture are gone, and when I see that, I get depressed and angry,” Erkin said. “But I try to continue browsing even if it makes me unhappy, just for the purpose of useful information.” As soon as he finds a video of something interesting, Erkin downloads and archives it before it’s censored. He then reposts it to his Twitter page, Uyghur Bulletin. 

Xinjiang is in information lockdown. The Chinese government claims it is cracking down on terrorism and extremism in the region. Thanks to a de facto ban on contact with foreign numbers, Uyghurs are afraid to message or call their loved ones who live outside China. Other ways of getting information out are also closely controlled: Foreign journalists visiting Xinjiang are tightly monitored, their cameras and phones routinely wiped at the checkpoints they have to pass through each day. For activists like Erkin, the totalitarian control of information creates a problem: The distinct lack of striking visuals makes it hard for people around the world to grasp the daily realities of Xinjiang’s humanitarian crisis. The lack of a coordinated, coherent response from the international community is perhaps testament to that. While the Trump administration has made noises about imposing sanctions against China for its treatment of the Uyghurs, plans to do so appear to have melted away as trade talks between the two powers continue. All the while, concrete information about the ongoing crackdown is in short supply.

Enter TikTok, the hugely popular, Chinese-built video-sharing app, owned by tech giant ByteDance. It’s currently used by around a billion people across the world. Thousands of videos from Xinjiang, filmed by both Uyghur and Han Chinese users, are uploaded every day to Chinese TikTok and other copycat video-sharing apps. The sheer volume of videos makes it difficult for the authorities to censor everything. “They’re plugging the gaps,” said Darren Byler, a scholar who has extensively researched Uyghurs and technology. “But it’s done in a piecemeal way. The internet is a big place and it’s hard to police it.” And sometimes, compromising content slips through the net.

Erkin is part of a global network of Uyghurs who catch that content, continually trawling TikTok for visual clues about the situation in Xinjiang. Aliye Yasin, whose name has been changed to protect her family, began using the app in August. She opens up TikTok during her dinner time. “It’s midnight in China, and most people are asleep,” she said. Including, she believes, the authorities who monitor Xinjiang social media. 

One of the most striking videos Yasin found was posted by a Xinjiang police department’s official account. Armed police officers, who appear to be mostly Uyghur, declare their loyalty to the Party. The caption: “Swear to the motherland to protect the people’s peace.” 

Another official video shows mostly Uyghur police dancing before the Chinese flag. A sign above them suggests it’s part of a group psychological support session — a telling detail, given Uyghur police are routinely required to arrest and detain their own people. 

This video on the Yarkant Police TikTok channel shows what appear to be mostly Uyghur officers dancing before the Chinese flag and the Communist symbol. The sign above them suggests it is a psychological support session.

Erkin, Yasin, and the other Uyghur online activists are using TikTok to challenge China’s claims that it has de-escalated its surveillance and detention activities in Xinjiang. In July, a Xinjiang official stated that “most people” had been released from the “re-education camps.” This sparked a furious online campaign by Uyghurs across the globe who posted pictures of their still-missing relatives, asking the authorities to release them. 

Human rights groups said they’d seen no evidence of any large-scale releases and called China’s statement “deceptive and unverifiable.” Last week, drone footage from August 2018 emerged of police leading hundreds of shackled and blindfolded men from a train in what is believed to be southwest Xinjiang. It is thought the men in the video were being shuttled into large detention centers.

“It remains imperative that UN human rights investigators, independent observers and the media be given unrestricted access to the region as a matter of urgency,” said Nicholas Bequelin, Amnesty International’s Asia regional director, in response to the Xinjiang official’s statement.

For Yasin, the government’s statement was partly what inspired her mission to start scouring TikTok to expose the reality of life in Xinjiang. “That news fed my anger. They activated me by telling that lie,” she said. “I set up a new Douyin [TikTok] account and started to look.”

It’s difficult work: TikTok videos are far from an ideal form of evidence. Most are all of nine seconds long, vary in quality, and are difficult to geolocate. “There’s no way of verifying their accuracy,” said Timothy Grose, an assistant professor at Rose-Hulman Institute of Technology in Terre Haute, Indiana. But, he said, given how difficult it is to go to the region right now, “any kind of visual evidence is important in building a more vivid and comprehensive picture of the situation.”

Mining TikTok for the sole purpose of seeing inside a police state is an unorthodox use of the app. TikTok, which did not respond to requests for comment for this article, shot to the top of Apple’s most downloaded list last year, and stayed there — hitting an estimated one billion installs in February. Most people know it as an app popular with teenagers in the US, Europe, and China for its user-generated, snappy, gimmicky short videos cut to pop music. But soon, the app was co-opted by people who understood its potential.

A video appears to show a long line of Uyghurs waiting to be scanned at a checkpoint in Xinjiang.

Earlier this year, I reported on the story of Kalbinur Tursun, a woman who managed to flee Xinjiang but was forced to leave her children behind. While casually browsing social media from her home in Istanbul, Tursun saw a TikTok video of her 6-year-old daughter, Aisha, filmed in what appears to be a Chinese orphanage for Uyghur children. It was the first time in years she had seen her daughter’s face.

Though Tursun’s story seems an astonishing coincidence, she’s not the only Uyghur to have discovered news of her missing family by chance through TikTok. In February, Business Insider reported on the story of Abdurahman Tohti, who lives in Turkey and had not heard from his family since they disappeared after leaving for Xinjiang on vacation in 2016. “While scrolling through Douyin...he saw a familiar sight: big, black eyes, and round, rosy cheeks,” reporter Alexandra Ma wrote. “It was his 4-year-old son, Abduleziz.” In the video, an off-camera voice asks: “What’s the name of the Fatherland?” “The People’s Republic of China!” the little boy yells. 

Tohti’s story was a turning point for Alip Erkin, the Uyghur activist in Australia. “I realized Douyin [TikTok] was one of the few platforms that people overseas can get some valuable information from,” he said. 

The Uyghurs who do this work need to use special tactics to get into Douyin, the Chinese version of TikTok, which can only be accessed with a Chinese phone.

China’s firewall — originally designed to keep Chinese people from accessing foreign websites — now appears to be also stopping foreigners from seeing in. “It looks like they’re creating a reverse great firewall, and Douyin is a perfect example. They want to keep TikTok outside and Douyin inside; there’s an intentionality there that has an element of censorship about it,” said James Leibold, associate professor in politics and Asian studies at La Trobe University, Melbourne. Day by day, he said, it’s becoming more difficult to access online content from Xinjiang. The solution, he believes, is to be ever more innovative and methodical. 

Once they’ve got around the firewall and accessed TikTok, the international Uyghur activists then have to “teach” the app’s algorithm to show them the videos they want to see. “You have to train it in a certain way,” Yasin said. “You can’t really search, because they cleaned up all the location-based search results. Anything that uses Xinjiang keywords is censored.” On Wednesday, the Guardian revealed that TikTok’s moderators are told to censor videos that make mention of politically sensitive subjects, such as Tiananmen Square and Tibetan independence. The revelation follows a report by the Washington Post examining how TikTok content from Hong Kong depicted a peaceful and “politically convenient” vision of the city, hardly alluding to the current protests. The Guardian report, based on leaked documents, said the app “limits its distribution through TikTok’s algorithmically-curated feed” for less potent infringements of the guidelines.

TikTok uses algorithms to “serve” users the content it thinks they will like, based on their reactions and responses to each video. 

“To make my feed more relevant, I don’t ‘like’ or comment on content other than that about Uyghurs or East Turkestan [the preferred Uyghur name for Xinjiang],” Erkin said. “I only like what I want to see.” That way, he’ll see more videos like it.

It’s a strangely satisfying process, Yasin said. “That’s the beauty of it. Sometimes the algorithm will recommend me something recently posted, not super popular — and it’s what I’m looking for.”

Two months ago, a Uyghur exile escaped Xinjiang and arrived in the United States. She brought her Chinese phone with her — a precious commodity. Using her old phone and Chinese sim card, she now works alongside a group of Uyghur students to mine TikTok. Mehmet Jan, a Uyghur graduate student in the US, helps run the project. “I categorize the videos into four groups,” he said, sorting them according to whether they show testimonials, surveillance, destruction of mosques, or cultural annihilation. 

The students are intent on collecting proof that Xinjiang is gradually being re-programmed into a state built in Beijing's image. The government claims it is trying to stamp out terrorism in the region. “This is no targeted response to violent extremism, but a concerted campaign to hollow out a whole culture,” scholar Rachel Harris wrote in an article for the Guardian in April. 

A recent flurry of TikTok footage of weddings between Uyghur women and Han Chinese men has been a source of distress for Uyghurs in the diaspora, who see the videos as evidence of forced racial assimilation. According to a report by Radio Free Asia’s Uyghur service, in 2017 the Xinjiang government introduced a “Uyghur-Han Marriage and Family Incentive Strategy,” which offered 10,000 yuan ($1,400) to Uyghur and Han Chinese couples who intermarried. 

Mixed marriages are a rarity in China: According to the 2010 census, just 0.2 percent of Uyghurs married Han people. James Leibold, the scholar in Australia, has also uncovered video evidence of the inter-marriage program. In April, he tweeted that the Chinese internet was “awash with short videos promoting Han-Uyghur inter-marriage.” Leibold explained how “there is a long history of this colonial strategy — using inter-ethnic marriage as a tool for national integration.”

Beijing’s treatment of the Uyghurs is part of a far-reaching campaign that outwardly attempts to rid the Muslim minority of certain aspects of its identity and convey an idealized image of a Xinjiang closer to the dominant Han Chinese culture. Through razing Uyghur mosques, destroying traditional Uyghur architecture, discouraging the use of Uyghur language, and incentivizing intermarriage, there is a deep concern that many parts of Uyghur life have been lost forever. Following international outcry over China’s imprisonment of more than a million Uyghurs, China appears to have ramped up its efforts to shield its activities in Xinjiang using firewalling, heightened censorship, and misleading official statements.

Meanwhile, in a further attempt to stifle outside criticism, Beijing-owned media has been flooded with pictures and stories depicting a bustling, vibrant Xinjiang. And government propagandists appear to be working in overdrive to control the region’s image — often posting TikTok videos of their own. “Here I am, lost in the wondrously chaotic night market in Xinjiang,” Eva Zheng, a Twitter user who appears to work for the Chinese government and regularly posts TikTok videos, wrote in August. Timothy Grose, the professor at Rose-Hulman, is frustrated by content like this. “It doesn’t help that we have competing videos,” he said. “Those who are unconvinced there’s anything going on are being bombarded with counter-narratives.” 

Crowds wave Chinese flags at what appears to be a propaganda concert in Yopurga County, near Kashgar, Xinjiang.

In an attempt to combat the onslaught of state propaganda, Alip Erkin was at one stage spending every waking hour mining for online information about Xinjiang. “It took a huge psychological toll on me, so I’m starting to consciously reduce screen time,” he said. 

The mental health repercussions of fighting this digital campaign are a source of worry for Uyghur-Australian activist Arslan Hidayat, who often gets sent TikTok videos by other Uyghurs over WhatsApp. “I’ve talked to people who’ve been psychologically distraught,” he said. Mehmet Jan, the student in the U.S., admits: “It is mentally exhausting. I have a feeling I spend too much time online. I can’t concentrate on other stuff.”

The TikTok miners are also concerned about the Uyghurs in Xinjiang who posted the videos in the first place. “It will be treated like a crime — like revealing state secrets,” Jan said.

In August, something unusual began to happen on Chinese TikTok. One by one, dozens of Uyghurs from inside Xinjiang began posting mute videos of themselves in front of pictures of their relatives. Many were crying. “I think there was a mastermind behind it — one or two creative people,” Hidayat said. It was a wordless, digital flashmob; each video copying the last. “People caught on without getting together, without having to explain the concept. It was completely coded,” Hidayat said. “But every Uyghur understood those videos.” In one video, a girl sits before a picture of four men, holding up four fingers. Slowly, she makes a fist. 

In August, videos of Uyghurs standing mutely before pictures of their relatives began appearing on TikTok. Uyghurs around the world saw the videos as a silent protest against China’s ongoing detention of Muslim minorities.

Outside China, the international Uyghur digital diaspora quickly mobilized. They saw the videos as a silent uprising against the mass detention centers and reposted them all over Facebook and Twitter. Hidayat gathered the videos into a thread, creating the hashtag #WeHearU. News outlets soon began picking up the story.

Uyghurs and China-watchers across the world continue to be struck by the sheer bravery of the videos. “You know when you see those movies where people throw a message in a bottle into the ocean?” Hidayat said. “It was like that. The videos didn’t really address anybody. It wasn’t a protest, because they cannot protest against the Chinese government. It’s literally just a message.”

A few days after the wave of silent videos, Yasin searched TikTok for some of the Uyghur users who had posted them. Amid the color and jangling noise of the video app, their feeds had become dark grids of broken grey video icons. Some people’s accounts had gone completely. “Your search results are empty, and no related content could be found,” the app informed her. 



]]>
WhatsApp, TikTok and the question of the chicken and the egg https://www.codastory.com/authoritarian-tech/authoritarian-tech-whatsapp-tiktok-and-the-question-of-the-chicken-and-the-egg/ Fri, 23 Aug 2019 07:50:23 +0000 https://www.codastory.com/?p=8336

The post WhatsApp, TikTok and the question of the chicken and the egg appeared first on Coda Story.

]]>
You’ve probably heard of rural India’s WhatsApp problem: hate speech, forwarded rumors, and sometimes incited lynch mobs. Both the press and the Indian government have blamed the platform for this, and Coda has tracked some of India’s attempts to legislate the problem away.

But others have suggested these weren’t really WhatsApp problems — they were problems associated with social and religious issues in India that manifested on WhatsApp. “Technology is what we make of it,” wrote Indian economist Mihir Sharma. “If we in India choose to use convenient messaging to form lynch mobs, that tells us more about India than it does about WhatsApp.”

A big Wired story this week seems to vindicate this approach — that WhatsApp was the facilitator, not the cause. India’s latest scapegoat is the Chinese video app TikTok, which has been hosting caste-based hate videos. “TikTok is fueling India's deadly hate speech epidemic,” reads the article headline. The actual body of the article is more equivocal, talking about centuries-old, entrenched caste issues as “massive problems [TikTok] faces in the country,” thereby granting that TikTok did not exactly create India’s caste system.

Most technology scholars I’ve talked to fall pretty squarely on the entrenched-problem side of things: That is, they think hate and violence issues that manifest online ultimately reflect real-world issues. To be fair, it’s hard to find people who disagree with that statement.

But for journalists, platform-based analysis is low-hanging fruit: You find a few instances of hate speech that weren’t taken down, interview a gullible user or two, quote a few concerned experts, and you have an exposé of a platform unable to contain the spread of digital violence. It’s not wrong to criticize these platforms, of course, but we see these kinds of articles over and over. This summer, we have seen this done for YouTube in Brazil and YouTube in the US, and one publication did a deep dive essentially listing YouTube channels it thinks should have been deleted but weren’t. (They were probably right — but you could write these pieces regularly.) These articles all point to the same conclusion: A platform is censoring things, but it should be censoring more things.

Again, it may well be true that the platforms aren’t censoring enough, and criticism is healthy. But the Wired story I mention here quotes a government official who said TikTok is “degrading culture.” Does anyone consider such a statement to be remotely true?

OTHER NEWS

  • This interview with a data scientist offers one of the best explanations I’ve seen of the current era of Artificial Intelligence. It connects the exaggerated hype around AI to the shady practices of those who use it. And it offers an amazingly intuitive and morally clear explanation of what exactly it means for AI to be “racist.” Highly recommended. (Logic Magazine)
  • We’ve written before about the threat posed by “cyber sovereignty” to the free internet. Some have even warned of a “splinternet,” where the web bifurcates into a restricted authoritarian network and a “free” internet. Now, a new Foreign Affairs op-ed argues that the West should welcome and even accelerate this split. That is, authoritarian countries should be kicked off the free web. It’s a strange argument, but worth reading. (Foreign Affairs)
  • Speaking of cyber sovereignty, Russia’s attempts to create a sovereign internet could make things harder for its infamously brazen hackers. (The Register)
  • Arzu Geybullayeva, a previous Coda contributor, reports on how the Turkish government is abusing Twitter’s own rules to get its enemies banned (Global Voices). That reminds us of a similar phenomenon Umer Ali reported for us from Pakistan (Coda Story).
  • Is tech addictive? I think so, but Vox’s Ezra Klein debates an author who thinks that’s not a useful way to talk about it. Though his ideas and books seem interesting, his dismissal of tech addiction strikes me as unpersuasive. (Vox)
  • A ranking of the world’s most-surveilled cities. The top ones are in China. It’s worth taking a look just to get a sense of how many cameras are involved — the numbers are staggering. (Comparitech)
  • Once again, a massive database of private information is found to be badly secured. (The Guardian)

The post WhatsApp, TikTok and the question of the chicken and the egg appeared first on Coda Story.
