Authoritarian Technology - Coda Story
https://www.codastory.com/authoritarian-tech/

When I’m 125?
https://www.codastory.com/authoritarian-tech/when-im-125/
Thu, 03 Apr 2025
What it means to live an optimized life and why Bryan Johnson’s Blueprint just doesn’t get it

I grew up in rural Idaho in the late 80s and early 90s. My childhood was idyllic. I’m the oldest of five children. My father was an engineer-turned-physician, and my mother was a musician — she played the violin and piano. We lived in an amazing community, with great schools, dear friends and neighbors. There was lots of skiing, biking, swimming, tennis, and time spent outdoors. 

If something was very difficult, I was taught that you just had to reframe it as a small or insignificant moment compared to the vast eternities and infinities around us. It was a Mormon community, and we were a Mormon family, part of generations of Mormons. I can trace my ancestry back to the early Mormon settlers. Our family was very observant: going to church every Sunday, and deeply faithful to the beliefs and tenets of the Mormon Church.

There's a belief in Mormonism: "As man is, God once was. As God is, man may become." And since God is perfect, the belief is that we too can one day become perfect. 

We believed in perfection. And we were striving to be perfect—realizing that while we couldn't be perfect in this life, we should always attempt to be. We worked for excellence in everything we did.

It was an inspiring idea to me, but growing up in a world where I felt perfection was always the expectation was also tough. 

In a way, I felt like there were two of me. There was this perfect person that I had to play and that everyone loved. And then there was this other part of me that was very disappointed by who I was—frustrated, knowing I wasn't living up to those same standards. I really felt like two people.

This perfectionism found its way into many of my pursuits. I loved to play the cello. Yo-Yo Ma was my idol. I played quite well and had a fabulous teacher. At 14, I became the principal cellist for our all-state orchestra, and later played in the World Youth Symphony at Interlochen Arts Camp and in a National Honors Orchestra. I was part of a group of kids who were all playing at the highest level. And I was driven. I wanted to be one of the very, very best.

I went on to study at Northwestern, near Chicago, and played there too. I was the youngest cellist in the studio of Hans Jensen, and was surrounded by these incredible musicians. We played eight hours a day, time filled with practice, orchestra, chamber music, studio, and lessons. I spent hours and hours working through the tiniest movements of the hand, individual shifts, weight, movement, repetition, memory, trying to find perfect intonation, rhythm, and expression. I loved that I could control things, practice, and improve. I could find moments of perfection.

I remember one night being in the practice rooms, walking down the hall, and hearing some of the most beautiful playing I'd ever heard. I peeked in and didn’t recognize the cellist. They were a former student now warming up for an audition with the Chicago Symphony. 

Later on, I heard they didn’t get it. I remember thinking, "Oh my goodness, if you can play that well and still not make it..." It kind of shattered my worldview—it really hit me that I would never be the very best. There was so much talent, and I just wasn't quite there. 

I decided to step away from the cello as a profession. I’d play for fun, but not make it my career. I’d explore other interests and passions.

As I moved through my twenties, my relationship with Mormonism started to become strained. When you’re suddenly 24, 25, 26 and not married, that's tough. Brigham Young [the second and longest-serving prophet of the Mormon Church] said that if you're not married by 30, you're a menace to society. It just became more and more awkward to be involved. I felt like people were wondering, “What’s wrong with him?” 

Eventually, I left the church. And I suddenly felt like a complete person — it was a really profound shift. There weren’t two of me anymore. I didn’t have to put on a front. Now that I didn’t have to worry about being that version of perfect, I could just be me. 

But the desire for perfection was impossible for me to kick entirely. I was still excited about striving, and I think a lot of this energy and focus then poured into my work and career as a designer and researcher. I worked at places like the Mayo Clinic, considered by many to be the world’s best hospital. I studied in London at the Royal College of Art, where I received my master’s on the prestigious Design Interactions course exploring emerging technology, futures, and speculative design. I found I loved working with the best, and being around others who were striving for perfection in similar ways. It was thrilling.

One of the big questions I started to explore during my master's studies in design, and I think in part because I felt this void of meaning after leaving Mormonism, was “what is important to strive for in life?” What should we be perfecting? What is the goal of everything? Or in design terms, “What’s the design intent of everything?”

I spent a huge amount of time with this question, and in the end I came to the conclusion that it’s happiness. Happiness is the goal. We should strive in life for happiness. Happiness is the design intent of everything. It is the idea that no matter what we do, no matter what activity we undertake, we do it because we believe doing it or achieving the thing will make us better off or happier. This fit really well with the beliefs I grew up with, but now I had a new, non-religious way in to explore it.

The question then became: What is happiness? I came to the conclusion that happiness is chemical—an evolved sensation that indicates when our needs in terms of survival have been met. You're happy when you have a wonderful meal because your body has evolved to identify good food as improving your chances of survival. The same is true for sleep, exercise, sex, family, friendships, meaning, purpose–everything can be seen through this evolutionary happiness lens. 

 So if happiness evolved as the signal for survival, then I wanted to optimize my survival to optimize that feeling. What would it look like if I optimized the design of my life for happiness? What could I change to feel the most amount of happiness for the longest amount of time? What would life look like if I lived perfectly with this goal in mind?

I started measuring my happiness on a daily basis, and then making changes to my life to see how I might improve it. I took my evolutionary basic needs for survival and organized them in terms of how quickly their absence would kill me as a way to prioritize interventions. 

Breathing was first on the list — we can’t last long without it. So I tried to optimize my breathing. I didn’t really know how to breathe or how powerful breathing is—how it changes the way we feel, bringing calm and peace, or energy and alertness. So I practiced breathing.

The optimizations continued: diet, sleep, exercise, material possessions, friends, family, purpose, along with a shedding of any behavior or activity that I couldn’t see meaningfully improving my happiness. For example, I looked at clothing and fashion, and couldn’t see any real happiness impact. So I got rid of almost all of my clothing, and have worn the same white t-shirts and grey or blue jeans for the past 15 years.

I got involved in the Quantified Self (QS) movement and started tracking my heart rate, blood pressure, diet, sleep, exercise, cognitive speed, happiness, creativity, and feelings of purpose. I liked the data. I’d go to QS meet-ups and conferences with others doing self experiments to optimize different aspects of their lives, from athletic performance, to sleep, to disease symptoms.

I also started to think about longevity. If I was optimizing for happiness through these evolutionary basics, how long could one live if these needs were perfectly satisfied? I started to put on my websites – “copyright 2103”. That’s when I’ll be 125. That felt like a nice goal, and something that I imagined could be completely possible — especially if every aspect of my life was optimized, along with future advancements in science and medicine.

In 2022, some 12 years later, I came across Bryan Johnson. A successful entrepreneur, also ex-Mormon, optimizing his health and longevity through data. It was familiar. He had come to this kind of life optimization in a slightly different way and for different reasons, but I was so excited by what he was doing. I thought, "This is how I’d live if I had unlimited funds."

He said he was optimizing every organ and body system: What does our heart need? What does our brain need? What does our liver need? He was optimizing the biomarkers for each one. He said he believed in data, honesty and transparency, and following where the data led. He was open to challenging societal norms. He said he had a team of doctors, had reviewed thousands of studies to develop his protocols. He said every calorie had to fight for its life to be in his body. He suggested everything should be third-party tested. He also suggested that in our lifetime advances in medicine would allow people to live radically longer lives, or even to not die. 

These ideas all made sense to me. There was also a kind of ideal of perfect and achieving perfection that resonated with me. Early on, Bryan shared his protocols and data online. And a lot of people tried his recipes and workouts, experimenting for themselves. I did too. It also started me thinking again more broadly about how to live better, now with my wife and young family. For me this was personal, but also exciting to think about what a society might look like when we strived at scale for perfection in this way. Bryan seemed to be someone with the means and platform to push this conversation.

I think all of my experience to this point was the setup for, ultimately, my deep disappointment in Bryan Johnson and my frustrating experience as a participant in his BP5000 study.

In early 2024 there was a callout for people to participate in a study to look at how Bryan’s protocols might improve their health and wellbeing. He said he wanted to make it easier to follow his approach, and he started to put together a product line of the same supplements that he used. It was called Blueprint – and the first 5000 people to test it out would be called the Blueprint 5000, or BP5000. We would measure our biomarkers and follow his supplement regime for three months and then measure again to see its effects at a population level. I thought it would be a fun experiment, participating in real citizen science, moving from n=1 to n=many. We had to apply, and there was a lot of excitement among those of us who were selected. We were a mix of people who had done a lot of self-quantification, nutritionists, athletes, and others looking to take first steps into better personal health. We each had to pay about $2,000 to participate, covering Blueprint supplements and the blood tests, and we were promised that all the data would be shared and open-sourced at the end of the study.

The study began very quickly, and there were red flags almost immediately around the administration of the study, with product delivery problems, defective product packaging, blood test problems, and confusion among participants about the protocols. There wasn’t even a way to see if participants died during the study, which felt weird for work focused on longevity. But we all kind of rolled with it. We wanted to make it work.

We took baseline measurements, weighed ourselves, measured body composition, uploaded Whoop or Apple Watch data, did blood tests covering hundreds of biomarkers, and completed a number of self-reported surveys on things like sexual health and mental health. I loved this type of self-measurement.

Participants connected over Discord, comparing notes, and posting about our progress. 

Right off, some effects were incredible. I had a huge amount of energy. I was bounding up the stairs, doing extra pull-ups without feeling tired. My joints felt smooth. I noticed I was feeling bulkier — I had more muscle definition as my body fat percentage started to drop.

There were also some strange effects. For instance, I noticed in a cold shower, I could feel the cold, but I didn’t feel any urgency to get out. Same with the sauna. I had weird sensations of deep focus and vibrant, vivid vision. I started having questions—was this better? Had I deadened sensitivity to pain? What exactly was happening here?

Then things went really wrong. My ears started ringing — high-pitched and constant. I developed tinnitus. And my sleep got wrecked. I started waking up at two, three, four AM, completely wired, unable to turn off my mind. It was so bad I had to stop all of the Blueprint supplements after only a few weeks.

On the Discord channel where we were sharing our results, I saw Bryan talking positively about people having great experiences with the stack. But when I or anyone else mentioned adverse side effects, the response tended to be: “Wait until the study is finished and see if there’s a statistical effect to worry about.”

So positive anecdotes were fine, but when it came to negative ones, suddenly, we needed large-scale data. That really put me off. I thought the whole point was to test efficacy and safety in a data-driven way. And the side effects were not ignorable.

Many of us were trying to help each other figure out which interventions in the stack were driving different side effects, but we were never given the “1,000+ scientific studies” that Blueprint was supposedly built upon, which would have included side-effect reporting. We struggled even to get a complete list of the interventions in the stack from the Blueprint team, with the number evolving from 67 to 74 over the course of the study. It was impossible to tell which ingredient in which product was doing what to people.

We were told to stop discussing side effects in the Discord and to email Support with any issues instead. I was even kicked off the Discord at one point for “fear mongering” because I was encouraging people to share the side effects they were experiencing.

The Blueprint team was also making changes to the products mid-study, changing protein sources and allulose levels, leaving people with months’ worth of expensive, essentially defective products, and surely impacting the study results.

When Bryan then announced they were launching the BP10000, allowing more people to buy his products, even before the BP5000 study had finished, and without addressing all of the concerns about side effects, it suddenly became clear to me and many others that we had just been part of a launch and distribution plan for a new supplement line, not participants in a scientific study.

To this day, a year later, Bryan has still not released the full BP5000 data set to the participants as he promised to do. In fact he has ghosted participants and refuses to answer questions about the BP5000. He blocked me on X recently for bringing it up. I suspect that this is because the data is really bad, and my worries line up with reporting from the New York Times, where leaked internal Blueprint data suggests many of the BP5000 participants experienced some negative side effects, with some even having serious drops in testosterone or becoming pre-diabetic.

I’m still angry today about how this all went down. I’m angry that I was taken in by someone I now feel was a snake oil salesman. I’m angry that the marketing needs of Bryan’s supplement business and his need to control his image overshadowed the opportunity to generate some real science. I’m angry that Blueprint may be hurting some people. I’m angry because the way Bryan Johnson has gone about this grates on my sense of perfection.

Bryan’s call to “Don’t Die” now rings in my ears as “Don’t Lie” every time I hear it. I hope the societal mechanisms for truth will be able to help him make a course correction. I hope he will release the BP5000 data set and apologize to participants. But Bryan Johnson feels to me like an unstoppable marketing force at this point — full A-list influencer status — and sort of untouchable, with no use for those of us interested in the science and data.

This experience has also had me reflecting on and asking bigger questions of the longevity movement and myself.

We’re ignoring climate breakdown. The latest indications suggest we’re headed toward three degrees of warming. These are societal collapse numbers, in the next 15 years. When there are no bees and no food, catastrophic fires and floods, your Heart Rate Variability doesn’t really matter. There’s a sort of “bunker mentality” prevalent in some of the longevity movement, and wider tech — we can just ignore it, and we’ll magically come out on the other side, sleep scores intact. 

I’ve also started to think that calls to live forever are perhaps misplaced, and that in fact we have evolved to die. Death is a good thing. A feature, not a bug. It allows for new life—we need children, young people, new minds who can understand this context and move us forward. I worry that older minds are locked into outdated patterns of thinking, mindsets trained in and for a world that no longer exists, thinking that destroyed everything in the first place, and which is now actually detrimental to progress. The life cycle—bringing in new generations with new thinking—is the mechanism our species has evolved to function within. Survival is and should be optimized for the species, not the individual.

I love thinking about the future. I love spending time there, understanding what it might look like. It is a huge part of my design practice. But as much as I love the future, the most exciting thing to me is the choices we make right now in each moment. All of that information from our future imaginings should come back to help inform current decision-making and optimize the choices we have now. But I don’t see this happening today. Our current actions as a society seem totally disconnected from any optimized, survivable future. We’re not learning from the future. We’re not acting for the future.

We must engage with all outcomes, positive and negative. We're seeing breakthroughs in many domains happening at an exponential rate, especially in AI. But, at the same time, I see job displacement, huge concentration of wealth, and political systems that don't seem capable of regulating or facilitating democratic conversations about these changes. Creators must own it all. If you build AI, take responsibility for the lost job, and create mechanisms to share wealth. If you build a company around longevity and make promises to people about openness and transparency, you have to engage with all the positive outcomes and negative side effects, no matter what they are.

I’m sometimes overwhelmed by our current state. My striving for perfection and optimizations throughout my life have maybe been a way to give me a sense of control in a world where at a macro scale I don’t actually have much power. We are in a moment now where a handful of individuals and companies will get to decide what’s next. A few governments might be able to influence those decisions. Influencers wield enormous power. But most of us will just be subject to and participants in all that happens. And then we’ll die.

But until then my ears are still ringing.

This article was put together from interviews Isobel Cockerell and Christopher Wylie did with J. Paul Neeley, as part of their reporting for CAPTURED, our new audio series on how Silicon Valley’s AI prophets are choosing our future for us. You can listen now on Audible.

This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?

Captured: how Silicon Valley is building a future we never chose
https://www.codastory.com/authoritarian-tech/captured-silicon-valley-future-religion-artificial-intelligence/
Thu, 03 Apr 2025
AI’s prophets speak of the technology with religious fervor. And they expect us all to become believers.

In April last year I was in Perugia, at the annual international journalism festival. I was sitting in a panel session about whether AI marked the end of journalism, when a voice note popped up on my Signal. 

It came from Christopher Wylie. He’s a data scientist and the whistleblower who cracked open the Cambridge Analytica scandal in 2018. I had just started working with him on a new investigation into AI. Chris was supposed to be meeting me, but he had found himself trapped in Dubai in a party full of Silicon Valley venture capitalists.

“I don’t know if you can hear me — I’m in the toilet at this event, and people here are talking about longevity, how to live forever, but also prepping for when people revolt and when society gets completely undermined,” he had whispered into his phone. “You have in another part of the world, a bunch of journalists talking about how to save democracy. And here, you've got a bunch of tech guys thinking about how to live past democracy and survive.”

A massive storm and a once-in-a-generation flood had paralyzed Dubai when Chris was on a layover on his way to Perugia. He couldn’t leave. And neither could the hundreds of tech guys who were there for a crypto summit. The freakish weather hadn’t stopped them partying, Chris told me over a frantic Zoom call. 

“You're wading through knee-deep water, people are screaming everywhere, and then…  What do all these bros do? They organize a party. It's like the world is collapsing outside and yet you go inside and it's billionaires and centimillionaires having a party,” he said. “Dubai right now is a microcosm of the world. The world is collapsing outside and the people are partying.”

Chris and I eventually managed to meet up. And for over a year we worked together on a podcast that asks what is really going on inside the tech world.  We looked at how the rest of us —  journalists, artists, nurses, businesses, even governments — are being captured by big tech’s ambitions for the future and how we can fight back. 

Our reporting took us around the world from the lofty hills of Twin Peaks in San Francisco to meet the people building AI models, to the informal settlements of Kenya to meet the workers training those models.

One of these people was Mercy Chimwani, who we visited in her makeshift house with no roof on the outskirts of Nairobi. There was mud beneath our feet, and above you could see the rainclouds through a gaping hole where the unfinished stairs met the sky. When it rained, Mercy told us, water ran right through the house. It’s hard to believe, but she worked for Meta. 

Mercy was a content moderator, hired by the middlemen Meta used to source employees. Her job was to watch the internet’s most horrific images and video –  training the company’s system so it can automatically filter out such content before the rest of us are exposed to it. 

She was paid around a dollar an hour for work that left her so traumatized that she couldn’t sleep. And when she and her colleagues tried to unionize, she was laid off. Mercy was part of the invisible, ignored workforce in the Global South that enables our frictionless life online for little reward. 

Of course, we went to the big houses too — where the other type of tech worker lives. The huge palaces made of glass and steel in San Francisco, where the inhabitants believe the AI they are building will one day help them live forever, and discover everything there is to know about the universe. 

In Twin Peaks, we spoke to Jeremy Nixon, the creator of AGI House San Francisco (AGI for Artificial General Intelligence). Nixon described an apparently utopian future, a place where we never have to work, where AI does everything for us, and where we can install the sum of human knowledge into our brains. “The intention is to allow every human to know everything that’s known,” he told me. 

Later that day, we went to a barbecue in Cupertino and got talking to Alan Boehme, once a chief technology officer for some of the biggest companies in the world, and now an investor in AI startups. Boehme told us how important it was, from his point of view, that tech wasn’t stymied by government regulation. “We have to be worried that people are going to over-regulate it. Europe is the worst, to be honest with you,” he said. “Let's look at how we can benefit society and how this can help lead the world as opposed to trying to hold it back.”

I asked him whether regulation wasn’t part of the reason we have democratically elected governments: to ensure that all people are kept safe, and that some aren’t left behind by the pace of change. Shouldn’t the governments we elect be the ones deciding whether we regulate AI, and not the people at this Cupertino barbecue?

“You sound like you're from Sweden,” Boehme responded. “I'm sorry, that's social democracy. That is not what we are here in the U.S. This country is based on a Constitution. We're not based on everybody being equal and holding people back. No, we're not in Sweden.”

As we reported for the podcast, we came to a gradual realization – what’s being built in Silicon Valley isn’t just artificial intelligence, it’s a way of life — even a religion. And it’s a religion we might not have any choice but to join. 

In January, the Vatican released a statement in which it argued that we’re in danger of worshiping AI as God. It's an idea we'd discussed with Judy Estrin, who worked on building some of the earliest iterations of the internet. As a young researcher at Stanford in the 1970s, Estrin was building some of the very first networked connections. She is no technophobe, fearful of the future, but she is worried about the zealotry she says is taking over Silicon Valley.

“If you worship innovation, if you worship anything, you can't take a step back and think about guardrails,” she said about the unquestioning embrace of AI. “So we, from a leadership perspective, are very vulnerable to techno populists who come out and assert that this is the only way to make something happen.” 

The first step toward reclaiming our lost agency, as AI aims to capture every facet of our world, is simply to pay attention. I've been struck by how rarely we actually listen to what tech leaders are explicitly saying about their vision of the future. 

There's a tendency to dismiss their most extreme statements as hyperbole or marketing, but what if they're being honest? What if they truly believe humans, or at least most humans, are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us right now. 

In our series, we explore artificial intelligence as something that affects our culture, our jobs, our media and our politics. But we should also ask what tech founders and engineers are really building with AI, or what they think they’re building. Because if their vision of society does not have a place for us in it, we should be ready to reclaim our destiny – before our collective future is captured.

Our audio documentary series, CAPTURED: The Secret Behind Silicon Valley’s AI Takeover is available now on Audible. Do please tune in, and you can dig deeper into our stories and the people we met during the reporting below.

This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.

Who owns the rights to your brain?
https://www.codastory.com/authoritarian-tech/who-owns-the-rights-to-your-brain/
Thu, 03 Apr 2025
Soon technology will enable us to read and manipulate thoughts. A neurobiologist and an international lawyer joined forces to propose ways to protect ourselves.

Jared Genser and Rafael Yuste are an unlikely pair. Yuste, a professor at Columbia University, spends his days in neuroscience labs, using lasers to experiment on the brains of mice. Genser has traveled the world as an international human rights lawyer representing prisoners in 30 countries. But when they met, the two became fast friends. They found common ground in their fascination with neurorights – in “human rights,” as their foundation’s website puts it, “for the Age of Neurotechnology.” 

Together, they asked themselves — and the world – what happens when computers start to read our minds? Who owns our thoughts, anyway? This technology is being developed right now — but as of this moment, what happens to your neural data is a legal black box. So what does the fight to build protections for our brains look like? I sat down with Rafael and Jared to find out.

This conversation has been edited for length and clarity.

Q: Rafael, can you tell me how your journey into neurorights started?

Rafael: The story starts with a particular moment in my career. It happened about ten years ago while I was working in a lab at Columbia University in New York. Our research was focused on understanding how the cerebral cortex works. We were studying mice, because the mouse  brain is a good model for the human brain. And what we were trying to do was to implant images into the brains of mice so that they would behave as if they were seeing something, except they weren't seeing anything.

Q: How did that work? 

Rafael: We were trying to take control of the mouse’s visual perception. So we’d implant neurotechnology into a mouse using lasers, which would allow us to record the activity of the part of the brain responsible for vision, the visual cortex, and change the activity of those neurons. With our lasers, we could map the activity of this part of the brain and try to control it. 

These mice were looking at a screen that showed them a particular image, of black and white bars of light that have very high contrast. We used to talk, tongue-in-cheek, about playing the piano with the brain. 

We trained the mice to lick from a little spout of juice whenever they saw that image. With our new technology, we were able to decode the brain signals that corresponded to this image in the mouse and — we hoped — play them back to trick the mice into seeing the image again, even though it wasn’t there.

Q: So you artificially activated particular neurons in the brain to make it think it had seen that image?

Rafael: These are little laboratory mice. We make a surgical incision and we implant in their skull a transparent chamber so that we can see their brains from above with our microscope, with our lasers. And we use our lasers to optically penetrate the brain. We use one laser to image, to map the activity of these neurons. And we use a second laser, a second wavelength, to activate these neurons again. All of this is done with a very sophisticated microscope and computer equipment. 

Q: So what happened when you tried to artificially activate the mouse’s neurons, to make it think it was looking at the picture of the black and white bars? 

Rafael: When we did that, the mouse licked from the spout of juice in exactly the same way as if it was looking at this image, except that it wasn't. We were putting that image into its brain. The behavior of the mouse when we took over its visual perception was identical to when it was actually seeing the real image.

Q: It must have been a huge breakthrough

Rafael: Yes, I remember it perfectly. It was one of the most salient days of my life. We were actually altering the behavior of the mice by playing the piano with their cortex. We were ecstatic. I was super happy in the lab, making plans.

 And then when I got home, that's when it hit me. I said, “wait, wait, wait, this means humans will be able to do the same thing to other humans.”

I felt this responsibility, like it was a double-edged sword. That night I didn't sleep, I was shocked. I talked to my wife, who works in human rights. And I decided that I should start to get involved in cleaning up the mess.

Q: What do you mean by that?

Rafael: I felt the responsibility of ensuring that these powerful methods that could decode brain activity and manipulate perception had to be regulated to ensure that they were used for the benefit of humanity.

Q: Jared, can you tell me how you came into this? 

Jared: Rafael and I met about four years ago. I'm an international human rights lawyer based in Washington and very well known globally for working in that field. I had a single hour-long conversation with Rafa when we met, and it completely transformed my view of the human rights challenges we’ll face in this century. I had no idea about neurotechnologies, where they were, or where they might be heading. Learning how far along they have come and what’s coming in just the next few years — I was blown away. I was both excited and concerned as a human rights lawyer about the implications for our common humanity.

Q: What was your reaction when you heard of the mouse experiment?

Jared: Immediately, I thought of The Matrix. He told me that what can be done in a mouse today could be done in a chimpanzee tomorrow and a human after that. I was shocked by the possibilities. While implanting images into a human brain is still far off, there’s every reason to expect it will eventually be possible.

Q: Can you talk me through some of the other implications of this technology? 

Jared: Within the next few years, we’re expected to have wearable brain-computer interfaces that can decode thought to text at 75–80 words per minute with 90 percent accuracy.

That will be an extraordinary revolution in how we interact with technology. Apple is already thinking about this—they filed a patent last year for the next-generation AirPods with built-in EEG scanners. This is undoubtedly one of the applications they are considering.

In just a few years, if you have an iPhone in your pocket and are wearing earbuds, you could think about opening a text message, dictating it, and sending it—all without touching a device. These developments are exciting. 

Rafael: I imagine that we'll be hybrid. And part of our processing will happen with devices that will be connected to our brains, to our nervous system. And this could enhance our perception. Our memories — you would be able to do the equivalent of a web search mentally. And that's going to change our behavior. That's going to change the way we absorb information.

Jared: Ultimately, there's every reason to expect we’ll be able to cure chronic pain disease. It’s already being shown in labs that an implantable brain-computer interface can manage pain for people with chronic pain diseases. By turning off misfiring neurons, you can reduce the pain they feel.

But if you can turn off the neurons, you can turn on the neurons. And that would mean you'll have a wearable cap or hat that could torture a person simply by flipping a switch. In just a few years, physical torture may no longer be necessary because of brain-computer interfaces. 

And if these devices can decode your thoughts, that raises serious concerns. What will the companies behind these technologies be able to do with your thoughts? Could they be decoded against your wishes and used for purposes beyond what the devices are advertised for? Those are critical questions we need to address.

Q: How did you start thinking about ways to build rights and guardrails around neurotechnology?

Rafael: I was inspired by the Manhattan Project, where scientists who developed nuclear technology were also involved in regulating its use. That led me to think that we should take a similar approach with neurotechnology — where the power to read and manipulate brain activity needs to be regulated. And that’s how we came up with the idea of the Neurorights Foundation.

So in 2017, I organized a meeting at Columbia University’s Morningside campus of experts from various fields to discuss the ethical and societal implications of neurotechnology. And this is where we came up with the idea of neurorights — rights that would protect the brain and brain data.

Jared:  If you look at global consumer data privacy laws, they protect things like biometric, genetic, and biological information. But neural data doesn't fall under any of these categories. Neural data is electrical and not biological, so it isn't considered biometric data.

There are few, if any, safeguards to protect users from having their neural data used for purposes beyond the intended function of the devices they’ve purchased.

So because neural data doesn't fit within existing privacy protections, it isn't covered by state privacy laws. To address this, we worked with Colorado to adopt the first-ever amendment to its Privacy Act, which defines neural data and includes it under sensitive, protected data.

Rafael: We identified five areas of concern where neurotechnology could impact human rights:

The first is the right to mental privacy – ensuring that the content of our brain activity can't be decoded without consent.

The second is the right to our own mental integrity so that no one can change a person's identity or consciousness.

The third is the right to free will – so that our behavior is determined by one's own volition, not by external influences, to prevent situations like what we did to those mice.

The fourth is the right to equal access to neural augmentation.  Technology and AI will lead to human augmentation of our mental processes, our memory, our perception, our capabilities. And we think there should be fair and equal access to neural augmentation in the future.

And the fifth neuroright is protection from bias and discrimination – safeguarding against interference in mental activity, as neurotechnology could both read and alter brain data, and change the content of people's mental activity.

Jared: The Neurorights Foundation is focused on promoting innovation in neurotechnologies while managing the risks of misuse or abuse. We see enormous potential in neurotechnologies that could transform what it means to be human. At the same time, we want to ensure that proper guardrails are in place to protect people's fundamental human rights.

This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?

In Kenya’s slums, they’re doing our digital dirty work
https://www.codastory.com/authoritarian-tech/the-hidden-workers-who-train-ai-from-kenyas-slums/
Mon, 31 Mar 2025
Big Tech makes promises about our gleaming AI future, but its models are built on the backs of underpaid workers in Africa.

This article is an adapted extract from CAPTURED, our new podcast series with Audible about the secret behind Silicon Valley’s AI Takeover. Click here to listen.  

We’re moving slowly through the traffic in the heart of the Kenyan capital, Nairobi. Gleaming office blocks have sprung up in the past few years, looming over the townhouses and shopping malls. We’re with a young man named James Oyange — but everyone who knows him calls him Mojez. He’s peering out the window of our 4x4, staring up at the high-rise building where he used to work. 

Mojez first walked into that building three years ago, as a twenty-five-year-old, thinking he would be working in a customer service role at a call center. As the car crawled along, I asked him what he would say to that young man now. He told me he’d tell his younger self something very simple:

“The world is an evil place, and nobody's coming to save you.”

It wasn't until Mojez started work that he realised what his job really required him to do. And the toll it would take.


It turned out, Mojez's job wasn't in customer service. It wasn't even in a call center. His job was to be a “Content Moderator,” working for social media giants via an outsourcing company. He had to read and watch the most hateful, violent, grotesque content released on the internet and get it taken down so the rest of us didn’t have to see it. And the experience changed the way he thought about the world. 

“You tend to look at people differently,” he said, talking about how he would go down the street and think of the people he had seen in the videos — and wonder if passersby could do the same things, behave in the same ways. “Can you be the person who, you know, defiled this baby? Or I might be sitting down with somebody who has just come from abusing their wife, you know.”

There was a time – and it wasn’t that long ago – when things like child pornography and neo-Nazi propaganda were relegated to the darkest corners of the internet. But with the rise of algorithms that can spread this kind of content to anyone who might click on it, social media companies have scrambled to amass an army of hidden workers to clean up the mess.

These workers are kept hidden for a reason. They say if slaughterhouses had glass walls, the world would stop eating meat. And if tech companies were to reveal what they make these digital workers do, day in and day out, perhaps the world would stop using their platforms.

This isn't just about “filtering content.” It's about the human infrastructure that makes our frictionless digital world possible – the workers who bear witness to humanity's darkest impulses so that the rest of us don't have to.

Mojez is fed up with being invisible. He's trying to organise a union of digital workers to fight for better treatment by the tech companies. “Development should not mean servitude,” he said. “And innovation should not mean exploitation, right?” 

We are now in the outskirts of Nairobi, where Mojez has brought us to meet his friend, Mercy Chimwani. She lives on the ground floor of the half-built house that she rents. There's mud beneath our feet, and above you can see the rain clouds through a gaping hole where the unfinished stairs meet the sky. There’s no electricity, and when it rains, water runs right through the house. Mercy shares a room with her two girls, her mother, and her sister. 

It’s hard to believe, but this informal settlement without a roof is the home of someone who used to work for Meta. 

Mercy is part of the hidden human supply chain that trains AI. She was hired by what’s called a BPO, or a Business Process Outsourcing company, a middleman that finds cheap labour for large Western corporations. Often people like Mercy don’t even know who they’re really working for. But for her, the prospect of a regular wage was a step up, though her salary – $180 a month, or about a dollar an hour – was low, even by Kenyan standards. 

She started out working for an AI company – she did not know the name – training software to be used in self-driving cars. She had to annotate what’s called a “driveable space” – drawing around stop signs and pedestrians, teaching the cars’ artificial intelligence to recognize hazards on its own. 

And then, she switched to working for a different client: Meta. 

“On the first day on the job it was hectic. Like, I was telling myself, like, I wish I didn't go for it, because the first image I got to see, it was a graphic image.” The video, Mercy told me, is imprinted on her memory forever. It was a person being stabbed to death. 

“You could see people committing suicide live. I also saw a video of a very young kid being raped live. And you are here, you have to watch this content. You have kids, you are thinking about them, and here you are at work. You have to like, deal with that content. You have to remove it from the platform. So you can imagine all that piling up within one person. How hard it is,” Mercy said. 

Silicon Valley likes to position itself as the pinnacle of innovation. But what they hide is this incredibly analogue, brute force process where armies of click workers relentlessly correct and train the models to learn. It’s the sausage factory that makes the AI sausage. Every major tech company does this – TikTok, Facebook, Google and OpenAI, the makers of ChatGPT. 

Mercy was saving to move to a house that had a proper roof. She wanted to put her daughters into a better school. So she felt she had to carry on earning her wage. And then she realised that nearly everyone she worked with was in the same situation as her. They all came from the very poorest neighborhoods in Nairobi. “I realised, like, yo, they're really taking advantage of people who are from the slums,” she said.

After we left Mercy’s house, Mojez took us to the Kibera informal settlement. “Kibera is the largest urban slum area in Africa, and the third largest slum in the entire world,” he told us as we drove carefully through the twisting, crooked streets. There were people everywhere – kids practicing a dance routine, whole families piled onto motorbikes. There were stall holders selling vegetables and live chickens, toys and wooden furniture. Most of the houses had corrugated iron roofs and no running water indoors.

Kibera is where the model of recruiting people from the poorest areas to do tech work was really born. A San Francisco-based organization called Sama started training and hiring young people here to become digital workers for Big Tech clients including Meta and OpenAI.

Sama claimed that they offered a way for young Kenyans to be a part of Silicon Valley’s success. Technology, they argued, had the potential to be a profound equalizer, to create opportunities where none existed.

Mojez has brought us into the heart of Kibera to meet his friend Felix. A few years ago Felix heard about the Sama training school - back then it was called Samasource. He heard how they were teaching people to do digital work, and that there were jobs on offer. So, like hundreds of others, Felix signed up.

“This is Africa,” he said, as we sat down in his home. “Everyone is struggling to find a job.” He nodded his head out towards the street. “If right now you go out here, uh, out of 10, seven or eight people have worked with SamaSource.” He was referring to people his age – Gen Z and young millennials – who were recruited by Sama with the promise that they would be lifted out of poverty. 

And for a while, Felix’s life was transformed. He was the main breadwinner for his family, for his mother and two kids, and at last he was earning a regular salary.

But in the end, Felix was left traumatized by the work he did. He was laid off. And now he feels used and abandoned. “There are so many promises. You’re told that your life is going to be changed, that you’re going to be given so many opportunities. But I wouldn't say it's helping anyone, it's just taking advantage of people,” he said.

When we reached out to Sama, a PR representative disputed the notion that Sama was taking advantage and cashing in on Silicon Valley’s headlong rush towards AI. 

Mental health support, the PR insisted, had been provided, and the majority of Sama’s staff were happy with the conditions. “Sama,” she said, “has a 16-year track record of delivering meaningful work in Sub-Saharan Africa, lifting nearly 70,000 people out of poverty.” Sama eventually cancelled its contracts with Meta and OpenAI, and says it no longer recruits content moderators. When we spoke to OpenAI, which has hired people in Kenya to train its models, they said that they believe data annotation work needs to be done humanely. The efforts of the Kenyan workers were, they said, “immensely valuable.”

You can read Sama’s and OpenAI’s responses to our questions in full below. Meta did not respond to our requests for comment.

Despite their defense of their record, Sama is facing legal action in Kenya. 

“I think when you give people work for a period of time and those people can't work again because their mental health is destroyed, that doesn't look like lifting people out of poverty to me,” said Mercy Mutemi, a lawyer representing more than 180 content moderators in a lawsuit against Sama and Meta. The workers say they were unfairly laid off when they tried to lobby for better conditions, and then blacklisted.

“You've used them,” Mutemi said. “They're in a very compromised mental health state, and then you've dumped them. So how did you help them?” 

As Mutemi sees it, the result of recruiting from the slum areas is that you have a workforce of disadvantaged people, who’ll be less likely to complain about conditions.

“People who've gone through hardship, people who are desperate, are less likely to make noise at the workplace because then you get to tell them, ‘I will return you to your poverty.’ What we see is again, like a new form of colonization where it's just extraction of resources, and not enough coming back in terms of value whether it's investing in people, investing in their well-being, or just paying decent salaries, investing in skill transfer and helping the economy grow. That's not happening.” 

“This is the next frontier of technology,” she added, “and you're building big tech on the backs of broken African youth.”

At the end of our week in Kenya, Mojez takes us to Karura forest, the green heart of Nairobi. It’s an oasis of calm, where birds, butterflies and monkeys live among the trees, and the rich red earth has that amazing, just-rained-on smell. He comes here to decompress, and to try to forget about all the horrific things he’s seen while working as a content moderator. 

Mojez describes the job he did as a digital worker as a loss of innocence. “It made me think about, you know, life itself, right? And that we are alone and nobody's coming to save us. So nowadays I've gone back to how my ancestors used to do their worship — how they used to give back to nature.” We're making our way towards a waterfall. “There's something about the water hitting the stones and just gliding down the river that is therapeutic.”

For Mojez, one of the most frightening things about the work he was doing was the way that it numbed him, accustomed him to horror. Watching endless videos of people being abused, beheaded, or tortured - while trying to hit performance targets every hour - made him switch off his humanity, he said.

A hundred years from now, will we remember the workers who trained humanity’s first generation of AI? Or will these 21st-century monuments to human achievement bear only the names of the people who profited from their creation?

Artificial intelligence may well go down in history as one of humanity’s greatest triumphs.  Future generations may look back at this moment as the time we truly entered the future.

And just as ancient monuments like the Colosseum endure as a lasting embodiment of the values of their age, AI will embody the values of our time too.  

So, we face a question: what legacy do we want to leave for future generations? We can't redesign systems we refuse to see. We have to acknowledge the reality of the harm we are allowing to happen. But every story – like that of Mojez, Mercy and Felix – is an invitation. Not to despair, but to imagine something better for all of us rather than the select few.

Christopher Wylie and Becky Lipscombe contributed reporting. Our new audio series on how Silicon Valley’s AI prophets are choosing our future for us is out now on Audible.

This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?

DeepSeek shatters Silicon Valley’s invincibility delusion
https://www.codastory.com/authoritarian-tech/deepseek-shatters-silicon-valleys-invincibility-delusion/
Wed, 29 Jan 2025
A lean Chinese startup’s AI breakthrough has exposed years of American hubris.

This week, as DeepSeek, a free AI-powered chatbot from China, embarrassed American tech giants and panicked investors, sending global markets tumbling, investor Marc Andreessen described its emergence as "AI's Sputnik moment." That is, the moment when self-belief and confidence tip over into hubris. It was not just stock prices that plummeted. The carefully constructed story of American technological supremacy also took a deep plunge.

But perhaps the real shock should be that Silicon Valley was shocked at all.

For years, Silicon Valley and its cheerleaders spread the narrative of inevitable American dominance of the artificial intelligence industry. From the "Why China Can't Innovate" cover story in the Harvard Business Review to the breathless reporting on billion-dollar investments in AI, U.S. media spent years building an image of insurmountable Western technological superiority. Even this week, when Wired reported on the "shock, awe, and questions" DeepSeek had sparked, the persistent subtext seemed to be that technological efficiency from unexpected quarters was somehow fundamentally illegitimate. 

“In the West, our sense of exceptionalism is truly our greatest weakness,” says data analyst Christopher Wylie, author of Mindf*ck, who famously blew the whistle on Cambridge Analytica in 2018.

That arrogance was on full display just last year when OpenAI's Sam Altman, speaking to an audience in India, declared: "It's totally hopeless to compete with us. You can try and it's your job to try but I believe it is hopeless." He was dismissing the possibility that teams outside Silicon Valley could build substantial AI systems with limited resources.

There are still questions over whether DeepSeek had access to more computing power than it is admitting. Scale AI chief executive Alexandr Wang said in a recent interview that the Chinese company had access to thousands more of the highest-grade chips than people know about, despite U.S. export controls. What's clear, though, is that Altman didn't anticipate that a competitor would simply refuse to play by the rules he was trying to set and would instead reimagine the game itself.

By developing an AI model that matches—and in many ways surpasses—American equivalents, DeepSeek challenged the Silicon Valley story that technological innovation demands massive resources and minimal oversight. While companies like OpenAI have poured hundreds of billions into massive data centers—with the Stargate project alone pledging an “initial investment” of $100 billion—DeepSeek demonstrated a fundamentally different path to innovation.

"For the first time in public, they've provided an efficient way to train reasoning models," explains Thomas Cao, professor of technology policy at Tufts University. "The technical detail is that they've come up with a way to do reinforcement learning without supervision. You don't have to hand-label a lot of data. That makes training much more efficient."


For the American media, which has drunk the Silicon Valley Kool-Aid, the DeepSeek story is a hard one to stomach. For a long time, Wylie argues, even as countries in Asia made massive technological breakthroughs, the story told to the American people remained one of American tech exceptionalism. 

An alternative approach, Wylie says, would be to see and “acknowledge that China is doing good things we can learn from without meaning that we have to adopt their system. Things can exist in parallel.” But instead, he adds, the mainstream media followed the politicians down the rabbit hole of focusing on the "China threat." 

These geopolitical fears have helped Big Tech shield itself from genuine competition and regulatory scrutiny. The narrative of a Cold War style “AI race” with China has also fed the assumption that a major technological power can be bullied into submission through trade restrictions. 

That assumption has also crumpled. The U.S. has spent the past two years attempting to curtail China's AI development through increasingly strict controls on advanced semiconductors. These restrictions, which began under Biden in 2022 and were significantly expanded last week under Trump, were designed to prevent Chinese companies from accessing the most advanced chips needed for AI development. 

DeepSeek developed its model using older generation chips stockpiled before the restrictions took effect, and its breakthrough has been held up as an example of genuine, bootstrap innovation. But Professor Cao cautions against reading too much into how export controls have catalysed development and innovation at DeepSeek. "If there had been no export control requirements,” he said, “DeepSeek could have been able to do things even more efficiently and faster. We don't see the counterfactual." 

DeepSeek is a direct rebuke to both Western assumptions about Chinese innovation and the methods the West has used to curtail it. 

As millions of Americans downloaded DeepSeek, making it the most downloaded app in the U.S., OpenAI’s Steven Heidel peevishly claimed that using it would mean giving away data to the Chinese Communist Party. Lawmakers too have warned about national security risks and dozens of stories like this one echoed suggestions that the app could be sending U.S. data to China. 

Security concerns aside, what really sets DeepSeek apart from its Western counterparts is not just the efficiency of the model, but also the fact that it is open source. Which, counter-intuitively, makes a Beijing-funded app more democratic than its Silicon Valley predecessors. 

In the heated discourse surrounding technological innovation, "open source" has become more than just a technical term—it's a philosophy of transparency. Unlike proprietary models where code is a closely guarded corporate secret, open source invites global scrutiny and collective improvement.


At its core, open source means that a program's source code is made freely available for anyone to view, modify, and distribute. When a technology is open source, users can download the entire code, run it on their own servers, and verify every line of its functionality. For consumers and technologists alike, open source means the ability to understand, modify, and improve technology without asking permission. It's a model that prioritizes collective advancement over corporate control. Already, for instance, the Chinese tech behemoth Alibaba has released a new version of its own large language model that it says is an upgrade on DeepSeek.

Unlike ChatGPT or any other Western AI system, DeepSeek can be run locally without giving away any data. "Despite the media fear-mongering, the irony is DeepSeek is now open source and could be implemented in a far more privacy-preserving way than anything offered by Meta or OpenAI," Wylie says. “If Sam Altman open sourced OpenAI, we wouldn’t look at it with the same skepticism, he would be nominated for the Nobel Peace Prize."
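
To make that distinction concrete, here is a minimal sketch of what "running it locally" can look like in practice. It assumes the open-source Hugging Face transformers library and a small, openly published DeepSeek distilled checkpoint; the exact model ID is illustrative, and any open-weights model could be substituted. The point is simply that once the weights are downloaded, nothing you type has to leave your own machine.

```python
# A minimal, illustrative sketch of running an open-weights model locally.
# Assumes `pip install transformers torch` and that the model ID below is
# available on the Hugging Face Hub; substitute any open-weights checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # illustrative model ID

# The weights are fetched once and cached; after that, generation happens
# entirely on your own hardware, with no prompt or output sent to any server.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain, in one paragraph, why open-source software matters."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because both the code and the weights are inspectable, anyone can audit what such a model does with their data, which is precisely the kind of scrutiny a closed, hosted system does not allow.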

The open-source nature of DeepSeek is a huge part of the disruption it has caused. It challenges Silicon Valley's entire proprietary model and upends our collective assumptions about both AI development and global competition. Not surprisingly, part of Silicon Valley’s response has been to complain that Chinese companies are using American companies’ intellectual property, even as their own large language models have been built by consuming vast amounts of information without permission.

This counterintuitive strategy of openness coming from an authoritarian state also gives China a massive soft power win that it will translate into geopolitical brownie points. Just as TikTok's algorithms outmaneuvered Instagram and YouTube by focusing on accessibility over profit, DeepSeek, which is currently topping iPhone downloads, represents another moment where what's better for users—open-source, efficient, privacy-preserving—challenges what's better for the boardroom.

We are yet to see how DeepSeek will reroute the development of AI, but just as the original Sputnik moment galvanized American scientific innovation during the Cold War, DeepSeek could shake Silicon Valley out of its complacency. For Professor Cao, the immediate lesson is that the U.S. must reinvest in fundamental research or risk falling behind. For Wylie, the takeaway of the DeepSeek fallout in the U.S. is more meta: there is no need for a new Cold War, he argues. “There will only be an AI war if we decide to have one.”

Additional reporting by Masho Lomashvili.

Blocking Pornhub and the death of the World Wide Web https://www.codastory.com/authoritarian-tech/blocking-pornhub-and-the-death-of-the-world-wide-web/ Fri, 24 Jan 2025 13:07:51 +0000 https://www.codastory.com/?p=53843 The construction of digital walls, as governments exert more control over access to information, is changing the nature of the once global internet

It's time to acknowledge an uncomfortable truth. The internet, as we've known it for the last 15 years, is breaking apart. This is not just true in the sense of, say, China or North Korea not having access to Western services and apps. Across the planet, more and more nations are drawing clear lines of sovereignty between their internet and everyone else's. Which means it's time to finally ask ourselves an even more uncomfortable question: what happens when the World Wide Web is no longer worldwide?

Over the last few weeks the US has been thrown into a tailspin over the impending divest-or-ban law that might possibly block the youth of America from accessing their favorite short-form video app. But if you've only been following the Supreme Court's hearing on TikTok, you may have totally missed an entirely separate Supreme Court hearing on whether or not southern American states like Texas are constitutionally allowed to impose age-verification requirements on porn sites like Pornhub. As of this month, Pornhub has blocked access in 17 US states rather than adhere to "age-verification laws" that would force it to collect users' IDs before they browse the site, thus making sensitive, personal information vulnerable to security breaches. 

But it's not just US lawmakers that are questioning what's allowed on their corner of the web. 

Following a recent announcement that Meta would be relaxing its fact-checking standards, Brazilian regulators demanded a thorough explanation of how this would impact the country's 100 million users. Currently the Brazilian government is "seriously concerned" about these changes. Which itself is almost a verbatim repeat of how Brazilian lawmakers dealt with X last year, when they banned the platform for almost two months over how the platform handled misinformation about the country's 2023 attempted coup.

Speaking of X, the European Union seems to have finally had enough of Elon Musk's digital megaphone. They've been investigating the platform since 2023 and have given Musk a February deadline to explain exactly how the platform's algorithm works. To say nothing of the French and German regulators grappling with how to deal with Musk's interference in their national politics.

And though the aforementioned Chinese Great Firewall has always blocked the rest of the world from the country's internet users, last week there was a breach that Chinese regulators are desperately trying to patch. Americans migrated to a competing app called RedNote, which has now caught the attention of both lawmakers in China, who are likely to wall off American users from interacting with Chinese users, and lawmakers in the US, who now want to ban it once they finally deal with TikTok.

All of this has brought us to a stark new reality, where we can no longer assume that the internet is a shared global experience, at least when it comes to the web's most visible and mainstream apps. New digital borders are being drawn and they will eventually impact your favorite app. Whether you're an activist, a journalist, or even just a normal person hoping to waste some time on their phone (and maybe make a little money), the spaces you currently call home online are not permanent. 

Time to learn how a VPN works. At least until the authorities restrict and regulate access to VPNs too, as they already do in countries such as China, Iran, Russia and India. 

A version of this story was published in this week’s Coda Currents newsletter. Sign up here.

Musk, Zuck and the business of chaos https://www.codastory.com/authoritarian-tech/musk-zuck-and-the-business-of-chaos/ Thu, 09 Jan 2025 14:09:45 +0000 https://www.codastory.com/?p=53609 Why interfering in European politics and abandoning fact-checks are about the bottom line

A Coda Story from this week's Coda Currents newsletter

Elon Musk isn't just inserting himself into national conversations in democracies around the world; he's taking a flamethrower to them. "Who would have imagined," asked French president Emmanuel Macron this week, "that the owner of one of the world's largest social networks would be supporting a new international reactionary movement and intervening directly in elections?"

The question encapsulated the growing concern among European leaders about Musk's increasingly aggressive intervention in European politics. But what appears to be Musk’s penchant for spreading digital chaos may actually be a calculated business strategy.

European Leaders React

Norway's Prime Minister Jonas Gahr Støre finds it "worrying that a man with enormous access to social media and huge economic resources involves himself so directly in the internal affairs of other countries. This is not," tutted Støre, "the way things should be between democracies and allies."

Germany's Olaf Scholz says he is trying to "stay cool" despite being labeled "Oaf Schitz," as Musk openly cheers for a far-right, pro-Putin party before next month's federal elections. "The rule is," Scholz told Stern magazine, "don't feed the troll."

Britain's Keir Starmer has had to deal for days with an onslaught of inflammatory posts about historical sexual abuse cases, with Musk using his platform to resurrect decades-old stories about grooming gangs in northern England. He finally bit back, declaring that those “spreading lies and misinformation” were “not interested in victims,” but “interested in themselves.”

But Italy's Giorgia Meloni broke ranks with her counterparts, praising Musk as a "great figure of our times" while negotiating a $1.6 billion SpaceX deal - after a telling weekend visit to Trump's Mar-a-Lago.

Following the Money

Musk’s targeted invective against European leaders isn't just digital trolling; it's a business strategy. He is courting right wing parties, whatever their particular ideologies and rhetorical excesses, because he sees them as less likely to impose regulation or to rein in Big Tech. Despite the concerns of European leaders, though, as long as Musk appears to have president-elect Trump's ear, they will continue to walk on eggshells around him. They will have noted how the resignation of the outgoing Canadian prime minister Justin Trudeau has been celebrated by the global right as an early triumph of the coming Trump-Musk world order. Musk derided Trudeau as an "insufferable tool" just last month and rubbed it in after the latter stepped down. “2025,” Musk announced on X this week, “is looking good.” 

Musk's influence over global discourse, heavily reliant on distortion and half-truths, will likely grow. The question is: who will dare to challenge him? Not Mark Zuckerberg, who is abandoning fact-checking to pivot to X-style “community notes”. 

It is true that fact-checking organizations have long been working against impossible odds, swimming against a tidal wave of digital sewage. Meta’s third party fact-checking system was akin, in the words of one content moderator, to “putting a beach shack in the way of a massive tsunami and expecting it to be a barrier.” But the system's destruction still signifies a refusal to take even token responsibility for how social media platforms are used. Where once misinformation was a problem to be solved, it is now the primary mechanism of cultural exchange and political discourse.

“I don’t think Meta’s fact-checking program was particularly good; it certainly didn’t seem very successful,” says Bobbie Johnson, media strategist and former editor with MIT Technology Review. “BUT the speed at which Zuckerberg has publicly bent the knee to the incoming regime is still remarkable.”

While, as Johnson points out, Big Tech is only too happy to bow down before Trump, it appears the incoming president is in turn putting the interests of Big Tech at the heart of his second term. Ironically, some of the pushback, at least in the case of “first buddy” Elon Musk, may come from within Trump’s MAGA movement. Musk was recently called out for his support of the H-1B visa for skilled immigrants, which many of Trump’s base have described as a program that takes American jobs and suppresses American wages. Musk’s response was to deride his critics as “hateful racists.” For Musk, a committed race-baiter, spreading racist tropes is only a problem when it interferes with business.

The global battle to control VPNs https://www.codastory.com/authoritarian-tech/the-global-battle-to-control-vpns/ Thu, 21 Nov 2024 12:42:13 +0000 https://www.codastory.com/?p=52941 By targeting proxy connections, authoritarian governments are policing their citizens’ internet usage and blocking access to information

This week, the clerics of Pakistan’s Council of Islamic Ideology, a constitutional body, declared Virtual Private Networks to be effectively un-Islamic. VPNs are typically used by individuals to bypass government restrictions on particular websites and to avoid surveillance.

Pakistan is the latest in a series of countries – from Türkiye to the UAE – seeking to clamp down on or outright ban VPNs. In Russia, Apple has been actively aiding this censorship effort by removing over 60 VPN services from its app store between July and September alone. Reports show that Apple has removed nearly 100 VPN services from its app store in Russia without explanation, while Russian authorities claim they have asked for the removal of only 25 such services.

Restricting VPN services is increasingly becoming a vital tool of state control. In September, it was reported that Russia has budgeted roughly $660 million over the next five years to expand its capacity to censor the internet. The Kremlin, while not banning VPNs, has worked to block them off and curtail their use. VPNs are only banned outright in a handful of countries, including North Korea, Iraq, Oman, Belarus and Turkmenistan. But in several others, such as China, Russia, Türkiye and India, governments must approve VPN services, enabling the monitoring and surveillance of users.

Last month, the Washington D.C.-based Freedom House published its annual Freedom of the Net report, concluding that “global internet freedom declined for the 14th consecutive year.” The report named Myanmar (alongside China) as having the “world’s worst environment for internet freedom.” It specifically noted that the country’s military regime had “imposed a new censorship system that ratcheted up restrictions on virtual private networks (VPNs).” In desperation, anti-regime forces have tried to set up Starlink systems in areas under their control, though the Elon Musk-owned service isn’t licensed in Myanmar.

VPN use typically surges in countries which seek to control access to the internet. In Mozambique, for example, demand for VPNs grew over 2,000% in just the week up to November 5, following a ban on social media in the wake of a disputed election. And in Brazil, demand for VPNs grew over 1,000% in September, after the country’s Supreme Court formally blocked access to X. Posting on X, owner Elon Musk called for Brazilians to use VPNs, and millions did, even at the risk of incurring thousands of dollars in fines each day. Brazil’s Supreme Court also called on Apple and Google to drop VPNs from their app stores before backing away from that demand, though there were allegations that Apple had already begun to comply.

The United Nations has described universal access to the internet as a human right rather than a privilege, which means countries seeking to deny citizens access to information are denying them their fundamental rights. For people in countries beset by crisis or controlled by authoritarian governments, VPNs are a “lifeline,” as one young Bangladeshi wrote after the government cut off the internet and began to violently suppress protests in July.

In September, the White House met with Big Tech representatives, including Amazon, Google, Microsoft, and Cloudflare, and urged them to make more server bandwidth available to VPN services partially funded by the U.S. government through the Open Technology Fund. The OTF says users of the VPNs it funds, particularly in Iran and Russia, have grown by the tens of millions since 2022 and that it is struggling to keep up with demand.

With governments around the world now eager to keep tabs on and control VPN use, many internet security and freedom advocates back Mixnet technology, which hides user identities within a chain of proxy servers, as a more effective means to evade snooping. But in a world that appears to be turning towards more authoritarian governments and leaders, can internet freedom continue to escape the clutches of determined censors?

Back in Pakistan, VPN services will now have to be registered with the government by November 30 or be considered illegal. It is a decision that the jailed former prime minister Imran Khan described from his cell as “a direct assault on the rights of people.” Ironically, on November 6, when the current Pakistani prime minister, Shehbaz Sharif, congratulated Donald Trump on his election win, he did it on X. Something he could have only done, as Pakistanis around the world scornfully pointed out, if he used a VPN.

This story was originally published as a newsletter. To get Coda’s stories straight into your inbox, sign up here


The artwork for this piece was developed during a Rhode Island School of Design course taught by Marisa Mazria Katz, in collaboration with the Center for Artistic Inquiry and Reporting.

Does Trump need Taiwan to make America great again? https://www.codastory.com/authoritarian-tech/does-trump-need-taiwan-to-make-america-great-again/ Thu, 14 Nov 2024 12:59:29 +0000 https://www.codastory.com/?p=52887 As the White House changes hands, bipartisan support for Taiwan might be wavering

In the before-times, a few days before the election that saw Donald Trump comfortably secure a triumphant return to the White House, the Wall Street Journal published a scoop detailing Elon Musk’s secret chats with Vladimir Putin. One particular nugget stood out for China watchers: the allegation that Putin asked Musk to never activate his internet satellite constellation, Starlink, over Taiwan.

Think pieces and blogs across Chinese state media hailed the conversation as yet more evidence that Putin backs China’s claims over Taiwan — which in turn bolsters his own expansionism. 

“Putin is very good at helping China teach a lesson to its rebellious son. He made demands on Musk and hit Taiwan's weakest points,” wrote one Chinese military commentator to his 300,000 followers following the revelation. 

SpaceX responded to the allegation by saying that Starlink doesn’t operate over Taiwan because Taiwan won’t grant the company a license. The island democracy doesn’t want Starlink having majority ownership control over any satellite connection, so it’s been racing to build its own independent satellite internet service, free of Elon Musk’s grip.

Musk said last year, to Taiwan’s fury, that he believes Taiwan to be an “integral part of China,” comparing it to Hawaii. So it makes sense that the self-ruled island doesn’t want the billionaire in control of its satellite internet. 

Nonetheless, satellite internet is something Taiwan urgently needs. Its undersea fiber optic cables connecting the island to the internet are vulnerable, easily severed by ships in the South China Sea. It’s happened 27 times in the last five years. And as the Chinese military stages almost daily “war games” and drills around the island, including simulating a blockade of the island’s ports — an exercise it carried out most recently in October — it feels more urgent than ever that Taiwan has some way of accessing the internet via satellite. But it doesn’t want Starlink having the power to turn on – or off – that connection.  

What would Trump do if Xi Jinping imposed a blockade on Taiwan? “Oh, very easy,” he told a Wall Street Journal reporter last month. “I would say: If you go into Taiwan, I’m sorry to do this, I’m going to tax you at 150% to 200%,” meaning he would impose tariffs. When asked if he would use military force against a blockade, Trump replied “I wouldn’t have to, because he respects me and knows I’m fucking crazy.” 

Our colleagues at the China Digital Times collected and translated a series of responses to this statement that are worth a read. It was “intriguing”, wrote Hong Kong professor Ding Xueliang, that this was Trump’s only response. 

Chairman Rabbit, a nationalist WeChat blogger with more than two million followers, went further: “Trump has absolutely no interest in Taiwan or the South China Sea, and has no intention of becoming embroiled in a conflict with China,” he wrote. 

Since the Musk-Putin revelations, Taiwan’s government has said it welcomes applications from all satellite internet services, including Starlink, “provided they comply with Taiwanese laws.” 

The irony is that manufacturers in Taiwan actually make some key bits of hardware for Starlink satellite systems, like circuit boards and semiconductor chips. 

Taiwan supplies 90% of the world’s most advanced chips, and Trump wants to slap tariffs on those too. He has said in the past, without providing much evidence, that Taiwan “stole our chip business.” 

But Taiwan’s politicians say Trump needs Taiwan just as much as Taiwan needs Trump. Francois Wu, the country’s Deputy Foreign Minister, told reporters this week that "without Taiwan, he cannot make America great again. He needs the semiconductors made here."

On election day in the U.S., it was revealed that Starlink had asked its Taiwanese suppliers to shift manufacturing off the island, citing “geopolitical risks.” The report sparked fury in Taiwan, with talk of boycotting Tesla, and viral praise for Musk’s “foresight” across Chinese social media.

This story was originally published as a newsletter. To get Coda’s stories straight into your inbox, sign up here

Legendary Kenyan lawyer takes on Meta and Chat GPT https://www.codastory.com/authoritarian-tech/mercy-mutemi-meta-lawsuit/ Tue, 22 Oct 2024 13:09:27 +0000 https://www.codastory.com/?p=52322 Mercy Mutemi has made headlines all over the world for standing up for Kenya’s data annotators and content moderators, arguing the work they are subjected to is a new form of colonialism

Tech platforms run from Silicon Valley, and the handful of men behind them, often seem and act invincible. But a legal battle in Kenya is setting an important precedent for disrupting Big Tech's strategy of obscuring and deflecting attention from the effect their platforms have on democracy and human rights around the world.

Kenya is hosting unprecedented lawsuits against Meta Inc., the parent company of Facebook, WhatsApp, and Instagram. Mercy Mutemi, who made last year’s TIME 100 list, is a Nairobi-based lawyer who is leading the cases. She spends her days thinking about what our consumption of digital products should look like in the next 10 years. Will it be extractive and extortionist, or will it be beneficial? What does it look like from an African perspective? 

The conversation with Mercy Mutemi has been edited and condensed for clarity.

Isobel Cockerell: You’ve described this situation as a new form of colonialism. Could you explain that?  

Mercy Mutemi: From the government side, Kenya’s relationship with Big Tech, when it comes to annotation work, is framed as a partnership. But in reality, it’s exploitation. We’re not negotiating as equal partners. People aren’t gaining skills to build our own internal AI development. But at the same time, you're training all the algorithms for all the big tech companies, including Tesla, including the Walmarts of this world. All that training is happening here, but it just doesn't translate into skill transfer. It’s broken up into labeling work without any training to broaden people’s understanding of how AI works. What we see is, again, like a new form of colonization where it's just extraction of resources, with not enough coming back in terms of value, whether it's investing in people, investing in their growth and well-being, just paying decent salaries and helping the economy grow, for example, or investing in skill transfer. That's not happening. And when we say we're just creating jobs in the thousands, even hundreds of thousands, if the jobs are not quality jobs, then it's not a net benefit at the end of the day. That's the problem.

IC: Behind the legal battle with Meta are workers and their conditions. What challenges do they face in these tech roles, particularly content moderation?  

MM: Content moderators in Kenya face horrendous conditions. They’re often misled about the nature of the work, not warned that the work is going to be dangerous for them. There’s no adequate care provided to look after these workers, and they’re not paid well enough. And they’ve created this ecosystem of fear — it’s almost like this special Stockholm syndrome has been created where you know what you're going through is really bad, but you're so afraid of the NDA that you just would rather not speak up.  

If workers raise issues about the exploitation, they’re let go and blacklisted. It’s a classic “use and dump” model.

IC: What are your thoughts on Kenya being dubbed the “Silicon Savannah”?  

MM: I do not support that framing, just because I feel like it’s quite problematic to model your development after Silicon Valley, considering all the problems that have come out of there. But that branding has been part of Kenya's mission to be known as a digital leader. The way Silicon Valley interprets that is by seeing Kenya as a place where they can offload work they don’t want to do in the U.S. Work that is often dangerous. I’m talking about content moderation work, annotation work, and algorithm training, which in its very nature involves a lot of exposure to harmful content. That work is dumped on Kenya. Kenya says it’s interested in digital development, but what Kenya ends up getting is work that poses serious risks, rather than meaningful investment in its people or infrastructure.

IC: How did you first become interested in these issues?  

MM: It started when I took a short course on the law and economics of social media giants. That really opened my eyes to how business models are changing. It’s no longer just about buying and selling goods directly—now it’s about data, algorithms, and the advertising model. It was mind-blowing to learn how Google and Meta operate their algorithms and advertising models. That realization pushed me to study internet governance more deeply.

IC: Can you explain how data labeling and moderation for a large language model – like an AI chatbot – works?  

MM: When the initial version of ChatGPT was released, it had lots of sexual violence in it. So to clean up an algorithm like that, you just teach it all the worst kinds of sexual violence. And who does that? It's the data labelers. So for them to do that, they have to consume it and teach it to the algorithm. So what they needed to do is consume hours of text of every imaginable sexual violence simulation, like a rape or a defilement of a minor, and then label that text. Over and over again. So then, what the algorithm knows is, okay, this is what a rape looks like. That way, if you ask ChatGPT to show you the worst rape that could ever happen, there are now metrics in place that tell it not to give out this information because it’s been taught to recognize what it’s being asked for. And that’s thanks to Kenyan youth whose mental health is now toast, and whose life has been compromised completely. All because ChatGPT had to be this fancy thing that the world celebrated. And Kenyan youth got nothing from it.  

This is the next frontier of technology, and they’re building big tech on the backs of broken African youth, to put it simply. There's no skill transfer, no real investment in their well-being, just exploitation.

IC: But workers aren’t working directly for the Big Tech companies, right? They’re working for these middlemen companies that match Big Tech companies with workers — can you explain how that works?  

MM: Big Tech is not planting any roots in the country when it comes to hiring people to moderate content or train algorithms for AI. They're not really investing in the country in the sense that there’s no actual person to hold liable should anything go south. There's no registered office in Kenya for companies like Meta, TikTok, OpenAI. And really, it’s important that companies have a presence in a country so that there can be discussions around accountability. But that part is purposely left out.  

Instead, what you have are these middlemen. They’re called Business Process Outsourcing, or BPOs, that are run from the U.S., not run locally, but they have a registered office here, and a presence here. A person that can be held accountable. And then what happens is big tech companies negotiate these contracts with the business. So for example, I have clients who worked for Meta or OpenAI through a middleman company called Sama, or who worked for Meta through another called Majorel, or those who worked for Scale AI but through a company called RemoTasks.  

It’s almost like they're agents of big tech companies. So they will do big tech's bidding. If the big tech says jump, then they jump. So we find ourselves in this situation where these companies purely exist for the cover of escaping liability.  

And in the case of Meta, for example, when recruitments happen, the advertisements don't come from Meta, they come from the middleman. And what we've seen is purposeful, intentional efforts to hide the client, so as not to disclose that you're coming to do work for Meta… and not even being honest or upfront about the nature of the work, not even saying that this is content moderation work that you're coming to do.

Kenyan lawyer Mercy Mutemi (C) speaks to the media after filing a lawsuit against Meta at Milimani Law Courts in Nairobi on December 14, 2022. Yasuyoshi Chiba/AFP via Getty Images.

IC: What are the repercussions of this on workers?  

MM: Their mental health is destroyed – and there are often no measures in place to protect their well-being or respect them as workers. And then it's their job to figure out how to get out of that rut because they still are a breadwinner in an African context, and they still have to work, right? And in this community where mental health isn't the most spoken-about thing, how do you explain to your parents that you can't work?  

I literally had someone say that to me—that they never told their parents what work they do because how do you explain to your parents that this is what you watch, day in, day out? And that's why it's not enough for the government to say, “yes, 10,000 more jobs.” You really do have to question what the nature of these jobs is and how we are protecting the people doing them, how we are making sure that only people who willingly want to do the job are doing it.

IC: You said the government and the companies themselves have argued that this moderation work is bringing jobs to Kenya, and there’s also been this narrative that — almost like an NGO – these companies are helping lift people out of poverty. What do you say to that?  

MM: I think when you give people work for a period of time and those people can't work again because their mental health is destroyed, that doesn't look like lifting people out of poverty to me. That looks like entrenching the problem further because you've destroyed not just one person, but everybody that relies on that person and everybody that's now going to be roped in, in the care of that one person. You've destroyed a bigger community that you set out to help.

IC: Do you feel alone in this fight?

MM: I wouldn’t say I’m alone, but it’s not a popular case to take at this time. Many people don’t want to believe that Kenya isn’t really benefiting from these big tech deals.  It’s not a narrative that Kenyans want to believe, and it's just not the story that the government wants at the end of the day. So not enough questions are being asked. No one's really opening the curtain to see what is this work?  Are our local companies benefiting out of this? Nobody's really asking those questions. So then in that context, imagine standing up to challenge those jobs. 

IC: Do you think it’s possible for Kenya to benefit from this kind of work without the exploitation?

MM: Let me just be very categorical. My position is not that this work shouldn't be coming into Kenya. But it can’t be the way it is now, where companies get to say “either you take our work and take it as horrible as it is with no care, and we exploit you to our satisfaction, or we leave.” No. You can have dangerous work done in Kenya, but with an appropriate level of care, with respect, and upholding the rights of these workers. It’s going to be a long journey to achieve justice. 

IC: In September, the Kenyan Court of Appeal made a ruling — that Meta, a U.S. company, can be sued in Kenya. Can you explain why this is important?

MM: The ruling by the Court of Appeal brings relief to the moderators. Their case at the Labour Court had been stopped as we awaited the decision by the Court of Appeal on whether or not Meta can be sued in Kenya by former Facebook Content Moderators. The Court of Appeal has now cleared the path for the moderators to present their evidence to the court against Meta, Sama and Majorel for human rights violations. They finally get a chance at a fair hearing and access to justice. 

The Court of Appeal has affirmed the groundbreaking decision of the Labour Court that in today's world, digital workspaces are adequate anchors of jurisdiction. This means that a court can assume jurisdiction based on the location of an employee working remotely. That is a timely decision, as the nature of work and workspaces has changed drastically. 

What this means for Meta is that they now have a chance to fully participate in the suit against them. What we have seen up to this point is constant dismissiveness of the authority of Kenyan courts over Meta claiming they cannot be sued in Kenya. The Court of Appeal has found that they not only can be sued but are properly sued in these cases. We look forward to participating in the legal process fully and presenting our clients' case to the court for a fair determination. 

Correction: This article has been updated to reflect that the Court of Appeal ruling was in regard to the case of 185 former Facebook content moderators, not a separate case of Mutemi's brought by two Ethiopian citizens.

Why did we write this story?

The world’s biggest tech companies today have more power and money than many governments. Court battles in Kenya could jeopardize the outsourcing model upon which Meta has built its global empire.

To dive deeper into the subject, read Silicon Savanna: The workers taking on Africa’s digital sweatshops

In September, the Kenyan Court of Appeal ruled that Meta could be sued in Kenya, and that the case of 185 former Facebook content moderators, who argue that they were unlawfully fired en masse, can proceed to trial in a Kenyan court. Meta has argued that as a U.S.-registered company, any claims against the company should be made in the U.S. The ruling was a landmark victory for Mutemi and her clients. 

Global Crises, Local Consequences: How Silicon Valley Shapes Our World https://www.codastory.com/authoritarian-tech/global-crises-local-consequences-how-silicon-valley-shapes-our-world/ Wed, 09 Oct 2024 12:56:51 +0000 https://www.codastory.com/?p=52301 Whether you live in Beirut, Lebanon or Buffalo, NY, the underlying cause of your local problems are increasingly informed by the same global currents we track here at Coda: viral disinformation, systemic inequity, and the abuse of technology and power.   These currents connect the crises happening in different parts of the world into a global

Whether you live in Beirut, Lebanon or Buffalo, NY, the underlying causes of your local problems are increasingly informed by the same global currents we track here at Coda: viral disinformation, systemic inequity, and the abuse of technology and power.  

These currents connect the crises happening in different parts of the world into a global web of intricately connected problems. It may not be obvious, but Silicon Valley is right at the very heart of this web. Home to some of the richest and most powerful men on earth, Silicon Valley is the birthplace of the technology that has given us so much convenience and also taken so much away from us. 

The world may be on fire, but things are going well for Silicon Valley’s most powerful men. Consider Sam Altman, the CEO of OpenAI, which is now officially worth $157 billion, and Mark Zuckerberg, whose $72 billion wealth surge this year could make him the richest person on earth. 

Both are in a position to address some of the world’s greatest problems, yet both avoid any responsibility, choosing instead to obscure and deflect. 

Take AI-powered disinformation in this election, for example. It’s rampant, scary and consequential for American democracy. Sam Altman’s response? He wants us to be patient. In a recent letter worthy of a techno-optimism medal, Altman argues that it would be a “mistake to get distracted by any particular challenge. Deep learning works and we will solve the remaining problems”.  

Zuckerberg says he wants Meta to be remembered for “building big,” not safe. Meta no longer even engages in a whack-a-mole game of fact-checking and content moderation. Along with Google, Amazon and X, Meta has essentially dismantled its Trust and Safety team, which at least tried to mitigate the real-life damage caused by the algorithmic promotion of hateful content. Zuckerberg wore a shirt reading “Aut Zuck Aut Nihil” (“Either Zuck or Nothing”) as he presented his latest metaverse at the company’s annual developer conference. As for life in this world, he is apparently done with politics. 

It takes a very special kind of privilege to ask for patience in the face of a major, life-threatening, world-changing crisis. The attitude is familiar to anyone who has seen authoritarianism up close: the goal of an authoritarian is to secure a monopoly on money and power. Maintaining a monopoly on the narrative is the way of achieving that. Human suffering may not be the objective, but if that’s what it takes to achieve the desired outcome, then it’s just collateral damage. 

I spent a lot of my week speaking to people who could be considered “collateral damage”: people in Beirut, where unprecedented escalation of violence between Israel and Iran is wreaking havoc on millions of lives. Friends in Ukraine, where Russia is making territorial gains while continuing to bomb, kill and maim civilians.  

As well as my own family in Georgia, where the Kremlin is making political gains: the Russian state propaganda machine is now openly backing an autocratic, populist government that is about to use a democratic tool–elections–to pull the country deeper into its autocratic orbit. The government’s campaign strategy resembles blackmail. “If you don’t vote for us, Russia will do to you what it did to Ukraine,” is literally the message of the election billboards the Georgian government put up this week.  

The roots of each of these crises are buried deep in the history of individual places, but so much of the journalism we do at Coda brings us back to Silicon Valley. 

The valley is the modern-day equivalent of the heart of the Roman empire: a place of extreme abundance, fantastic innovation and terrifying detachment from the rest of the world.  

For this reason, it has never been more important to connect the dots between the patterns that weave into the web of our modern life.

WHY DID WE WRITE THIS STORY?

We are tracking how the super rich are changing the world for the rest of us. It’s not, of course, just Silicon Valley. In this investigation, we dig deep into the sanctioned lives of Russia’s richest men. 

Climate Disinformation Worth Millions https://www.codastory.com/authoritarian-tech/climate-disinformation-worth-millions/ Fri, 27 Sep 2024 12:48:02 +0000 https://www.codastory.com/?p=52188 Google placed advertisements alongside articles by The Epoch Times, which generated close to $1.5 million in combined revenue

Google’s billion-dollar advertising business is financing and earning revenue from articles that challenge the existence of climate change and question its severity, according to a new investigation by Global Witness. The articles in question ran on The Epoch Times, a vastly successful and influential conservative news organization powered by Falun Gong, a religious group persecuted in China, which originally launched The Epoch Times as a free propaganda newsletter two decades ago to oppose the Chinese Communist Party.  

Global Witness’ investigation found that Google placed advertisements alongside articles by The Epoch Times, which are estimated to have generated close to $1.5 million in combined revenue for Google and the website owners over the last year. Global Witness believes that some of these articles breached Google’s own publishing policies that do not allow “unreliable and harmful claims” that “contradict authoritative scientific consensus on climate change”. Is it possible to have accountability in AdTech? I spoke to Guy Porter, senior investigator on the digital threat team at Global Witness and author of their latest investigation. Porter works in the climate disinformation unit, which leads investigations linking climate denial and disinformation to big tech and the platforms. 

NJ: Why is this investigation important?

Guy Porter: We think this is really important both because of scale and the apparatus that supports disinformation: Facebook advertising, Google monetization. Google commands the largest share of the digital advertising market and is helping to fund – and making money from – what we believe is opportunistic and dangerous information. Additionally, The Epoch Times is a big media empire. In 2019, it was one of the leading spenders on pro-Trump ads on Facebook. We're talking about big money. Its publisher, the Epoch Times Association, reported a revenue of $128 million in 2022.

NJ: In response to your investigation in May 2024, which looked at Epoch Times spreading disinformation via Meta’s advertising platforms, the media organization responded saying that science around climate change, like anything else, was always a matter up for debate and that scientists often have differing opinions. How do you respond to that dizzying combination of free speech absolutism and climate change denial?

GP: The free speech argument is an unhelpful tactic that helps to delay climate action. These articles present these fringe views that are not peer reviewed as a growing consensus of scientific fact. We welcome people debating climate solutions. And we think that's really important to tackling the urgent climate crisis. But there are changes that need to be made to tackle monetization of this kind of content.

NJ: One of the changes you're hoping for is ad tech regulation. What would that look like? 

GP: Both the UN and the EU Commission are looking at this really closely. As we put forward in this investigation, advertisers are also suffering from limited transparency around AdTech. While there are tools that Google supplies to assure advertisers where their ads will appear, the system is opaque and advertisers rely on Google to stick to its own policies on climate denial.

NJ: One of the places where these ads denying climate change are running is in Brazil, where the impact of climate change has been relentless and devastating. Much of the climate disinformation is not disseminated in English: is that also why we need to pay attention to it?

GP: Absolutely. We also know from previous research by ProPublica that Google's performance on non-English-language websites is not great. The 2025 UN climate change conference, COP30, is being held in Brazil – and we know that disinformation is rife around these meetings. We believe it’s crucial to protect the media ecosystem there.

Sinister Tech: When Pagers Explode https://www.codastory.com/authoritarian-tech/sinister-tech-when-pagers-explode/ Thu, 26 Sep 2024 12:34:39 +0000 https://www.codastory.com/?p=52196 Outside the realm of geopolitics, we should all be alarmed about the larger implications of turning everyday tech into weapons of destruction

Cross-border hostilities between Hezbollah and Israel have been ongoing since October 7, but Israel’s latest airstrikes in Lebanon have been horrific in their targeting of civilians. Hospitals and streets in Lebanon are overrun with injured and terrified civilians trying to escape war.

Meanwhile, it seems apparent that Operation Exploding Pagers on September 18 marked the beginning of Israel’s military escalation in Lebanon and Syria. Netanyahu has been losing credibility internationally and in Israel over Gaza, but his Likud party is seeing a resurgence in popularity following the attacks on Lebanon. Outside the realm of geopolitics, we should all be alarmed about the larger implications of turning everyday tech into weapons of destruction.

Israel is yet to claim responsibility for the pager explosions in Lebanon but the country has a history of turning tech devices into explosives. In 1973, Israel assassinated PLO leader Mahmoud Hamshari in Paris by hiding explosives in the marble stand of his phone. In 1996, Shin Bet, Israel’s internal security wing, assassinated Hamas’s chief bomb-maker, Yahya Ayyash, with a small explosive in his mobile phone that was then remotely detonated. In 2008, in collaboration with the CIA, then led by Michael Hayden, Israel killed the terrorist Imad Mugniyeh by placing a bomb in the spare wheel compartment of his SUV in Damascus, Syria.

Much of the fear around personal devices being turned into remote-controlled explosives is twofold: Could any of our devices and appliances be turned into bombs? And what does this mean for international supply chain contamination? Writing about Hezbollah, Kim Ghattas notes that mothers in Lebanon turned off baby monitors out of fear for their children’s lives.

To begin with, it’s important to understand why Hezbollah relies on low tech like pagers and landlines. Reuters reported earlier this year that Hezbollah switched to low tech to counter Israel’s sophisticated surveillance tactics. Pagers also run on a different wireless network than mobile phones, which makes them more resilient in times of emergency.

The AR-924 pagers that turned into explosive devices on September 18 were believed to have been made by Gold Apollo, a Taiwanese firm. Since the terror attack, Gold Apollo’s CEO has confirmed that it authorized another company, Budapest-based BAC Consulting, to use its brand name for product sales in certain regions. Gold Apollo has denied any links with BAC’s manufacturing operations. In turn, Hungarian authorities have reported that BAC Consulting was only an intermediary, with no manufacturing or production facilities in Hungary. They claim that Hezbollah bought its pager stock from a company registered in Bulgaria, Norta Global. The trail grows ever more complex, with Bulgarian authorities confirming that no customs records prove the existence of such goods being exported through the country. The Japanese company that was initially believed to have manufactured the walkie-talkies that blew up in the second attack in Lebanon has also released a statement saying it discontinued making the devices in question ten years ago. 

An Indian man and a Hungarian woman who were connected to the companies implicated in manufacturing the devices are reported to have gone missing. 

Media coverage has both praised Israel for its tactical genius in targeting Hezbollah and described the attack as an act of terrorism — but it is important to remember that Israel is not the only country to have planted explosives in unexpected places. From the 1960s up until the 2000s, the US and the CIA used multiple methods, including exploding cigars and seashells, in their attempts to assassinate Fidel Castro. Contaminating supply chains is also an old intelligence tactic, according to Emily Harding, a veteran of the CIA and the U.S. National Security Council, who told Kevin Collier at NBC that these stories are often kept from the public: “Supply chain compromises are tried and true in intelligence work,” said Harding. “I literally cannot think of a single example that is unclassified.”

Stop Drinking from the Toilet! https://www.codastory.com/authoritarian-tech/stop-drinking-from-the-toilet/ Tue, 10 Sep 2024 13:02:17 +0000 https://www.codastory.com/?p=51640 We have systems to filter our water. Now we need systems to filter our tech



Judy Estrin has been thinking about digital connectivity since the early days of Silicon Valley. As a junior researcher at Stanford in the 1970s, she worked on what became the Internet. She built tech companies, became Cisco’s Chief Technology Officer, and served on the boards of Disney and FedEx. Now, she’s working to build our understanding of the digital systems that run our lives.

We can’t live without air. We can’t live without water. And now we can’t live without our phones. Yet our digital information systems are failing us. Promises of unlimited connectivity and access have led to a fractionalization of reality and levels of noise that undermine our social cohesion. Without a common understanding and language about what we are facing, we put at risk our democratic elections, the resolution of conflicts, our health and the health of the planet. In order to move beyond just reacting to the next catastrophe, we can learn something from water. We turn on the tap to drink or wash, rarely considering where the water comes from–until a crisis of scarcity or quality alerts us to a breakdown. As AI further infiltrates our digital world, a crisis in our digital information systems necessitates paying more attention to its flow.

Water is life sustaining, yet too much water, or impure water, makes us sick, destroys our environment, or even kills us. A bit of water pollution may not be harmful but we know that if the toxins exceed a certain level the water is no longer potable. We have learned that water systems need to protect quality at the source, that lead from pipes leaches into the water, and that separation is critical–we don’t use the same pipes for sourcing drinking water and drainage of waste and sewage.

Today, digital services have become the information pipes of our lives. Many of us do not understand or care how they work. Like water, digital information can have varying levels of drinkability and toxicity–yet we don’t know what we are drinking. Current system designs are corroded by the transactional business models of companies that neither have our best interests in mind, nor the tools that can adequately detect impurities and sound the alarm. Digital platforms, such as Instagram, TikTok, or YouTube, don’t differentiate between types of content coming into their systems and they lack the equivalent of effective water filters, purification systems, or valves to stop pollution and flooding. We are both the consumers and the sources of this ‘digital water’ flowing through and shaping our minds and lives. Whether we want to learn, laugh, share, or zone-out, we open our phones and drink from that well. The data we generate fuels increasingly dangerous ad targeting and surveillance of our online movements. Reality, entertainment, satire, facts, opinion, and misinformation all blend together in our feeds. 

Digital platforms mix “digital water” and “sewage” in the same pipes, polluting our information systems and undermining the foundations of our culture, our public health, our economy, and our democracy. We see the news avoidance, extremism, loss of civility, reactionary politics, and conflicts. Less visible are other toxins, including the erosion of trust, critical thinking, and creativity. Those propagating the problems deny responsibility and ignore the punch line of Kranzberg’s first law which states, “technology is neither good nor bad; nor is it neutral." We need fundamental changes to the design of our information distribution systems so that they can benefit society and not just increase profit to a few at our expense.

To start, let us acknowledge the monetary incentives behind the tech industry’s course of action that dragged the public down as they made their fortunes. The foundational Internet infrastructure, developed in the 1970s and 80s, combined public and private players, and different levels of service and sources. Individual data bits traveled in packets down a shared distributed network designed to avoid single points of failure. Necessary separation and differentiation were enforced by the information service applications layered on top of the network. Users proactively navigated the web by following links to new sites and information, choosing for themselves where they sourced their content, be it their favorite newspaper or individual blogs. Content providers relied heavily on links from other sites, creating interdependence that incentivized more respectful norms and behaviors, even when there was an abundance of disagreements and rants.

Then the 2000s brought unbridled consolidation as the companies that now make up BigTech focused on maximizing growth through ad-driven marketplaces. As with some privatized water systems, commercial incentives were prioritized above wellness. This was only amplified by the product design around the small screen of mobile phones, social discovery of content, and cloud computing. Today, we drink from a firehose of endless scrolling that has eroded our capacity for any differentiation or discernment. Toxicity is amplified and nuance eliminated by algorithms that curate our timelines based on an obscure blend of likes, shares, and behavioral data. As we access information through a single feed, different sources and types of content–individuals, bots, hyperbolic news headlines, professional journalism, fantasy shows, whether human- or AI-generated–all begin to feel the same.

Social media fractured the very idea of truth by taking control of the distribution of information. Now, generative AI has upended the production of content through an opaque mixing of vast sources of public and private, licensed, and pirated data. Once again, an incentive for profit and power is driving product choices towards centralized, resource-intensive Large Language Models (LLMs). The LLMs are trained to recognize, interpret, and generate language in obscure ways and then spit out often awe-inspiring text, images, and videos on demand. The artificial sweetener of artificial intelligence entices us to drink, even as we know that something may be wrong. The social media waters are already muddied by algorithms and agents, and we are now seeing the “enshittification” (a term aptly coined by Cory Doctorow) of platforms as well as the overall internet, with increasing amounts of AI-generated excrement in our feeds and searches.

We require both behavioral change and a new, more distributed digital information system–one that combines public and private resources to ensure that neither our basic ‘tap’ water nor our fancy bottled water will poison our children. This will require overcoming two incredibly strong sets of incentives. The first is a business culture that demands dominance through maximizing growth by way of speed and scale. The second is our prioritization of convenience and a boundless desire for a frictionless world. The fact that this is truly a “wicked problem” does not relieve us of the responsibility to take steps to improve our condition. We don’t need to let go entirely of either growth or convenience. We do need to recommit to a more balanced set of values.

As with other areas of public safety, mitigating today’s harms requires broad and deep education programs to spur individual and collective responsibility. We have thrown out the societal norms that tell us not to spit in the proverbial drink of the other, or piss in the proverbial pool. Instead of continuing to adapt to the lowest common denominator of decency, we need digital hygiene to establish collective norms for kids and adults. Digital literacy must encourage critical thinking and navigation of our digital environments with discernment; in other words, with a blend of trust and mistrust. In the analog world, our senses of smell and taste warn us when something is off. We need to establish the ability to detect rotten content and sources–from sophisticated phishing to deepfakes. Already awash in conspiracy theories and propaganda, conversational AI applications bring new avenues for manipulation as well as a novel set of emotional and ethical challenges. As we have learned from food labeling or terms of service, transparency only works when backed by the education to decipher the facts.

Mitigation is not sufficient. We need entrepreneurs, innovators, and funders who are willing to rethink systems and interface design assumptions and build products that are more proactive, distributed, and reinforcing of human agency. Proactive design must incorporate safety valves or upfront filters. Distributed design approaches can use less data and special-purpose models, and the interconnection of diverse systems can provide more resilience than consolidated, homogeneous ones. We need not accept the inevitability of general-purpose, brute-force data beasts. Designs for human agency would break with current design norms. The default to everything looking the same leads to homogeneity and flattening. Our cars would be safer if they didn’t distract us like smartphones on wheels. The awe of discovery is healthier than the numbing of infinite scrolls. Questioning design and business model assumptions requires us to break out of our current culture of innovation, which is too focused on short-term transactions and rapid scaling. The changes in innovation culture have influenced other industries and institutions, including journalism, which is too often hijacked by today’s commercial incentives. We cannot give up on a common understanding and knowledge, or on the importance of trust and common truths.

We need policy changes to balance private and public sector participation. Many of the proposals on the table today lock in the worst of the problems, with legislation that reinforces inherently bad designs, removes liability, and/or targets specific implementations (redirecting us to equally toxic alternatives). Independent funding for education, innovation, and research is required to break the narrative and value capture of the BigTech ecosystem. We throw around words like safe, reliable, or responsible without a common understanding of what it means to really be safe. How can we ensure our water is safe to drink? Regulation is best targeted at areas where leakage leads to the most immediate harm–like algorithmic amplification and lack of transparency and accountability. Consolidation into single points of power inevitably leads to broad-based failure. A small number of corporations have assumed the authority of massive utilities that act as both public squares and information highways–without any of the responsibility.

Isolation and polarization have evolved from a quest for a frictionless society with extraordinary systems handcrafted to exploit our attention. It is imperative that we create separation, valves, and safeguards in the distribution of and access to digital information. I am calling not for a return to incumbent gatekeepers, but instead for the creation of new distribution, curation, and facilitation mechanisms that can be scaled for the diversity of human need. There is no single answer, but the first step is to truly acknowledge the scope and scale of the problem. The level of toxicity in our ‘digital waters’ is now too high to address reactively by trying to fix things after the fact or by lashing out in the wrong way. We must question our assumptions and embrace fundamental changes in both our technology and our culture in order to bring toxicity back down to a level that does not continue to undermine our society.

Why This Story?

We are fully immersed in the digital world, but most of us have very little idea what we’re consuming, where it’s coming from, and what harm it may be doing. In part, that’s because we love the convenience that tech brings and we don’t want to enquire further. It’s also because the companies that provide this tech, by and large, prioritize commercial incentives over wellness.

The post Stop Drinking from the Toilet! appeared first on Coda Story.

Elon Musk vs The Defender of Democracy https://www.codastory.com/authoritarian-tech/elon-musk-vs-the-defender-of-democracy/ Fri, 06 Sep 2024 17:16:14 +0000 https://www.codastory.com/?p=51984 How far must we go in the fight against the far-right? Elon Musk’s trials in Brazil raise crucial questions

When tech titans run into trouble with governments, they make impassioned claims about being defenders of free speech, and Musk is no different. Time and again, the billionaire has claimed he is a “free speech absolutist” – but feelings are not facts, and Musk’s self-assessment is far from accurate. Since he took over X (formerly Twitter), Musk has capitulated 80% of the time when asked by different governments to take down tweets, block accounts and suspend users. Musk has also cooperated in stifling free speech with right-wing governments in India under Prime Minister Narendra Modi and in Turkey under Erdogan — so what is the real reason he is suddenly championing free speech in Brazil?

CONTEXT

The struggle between the right to free speech and curbing disinformation has a long history in Brazil, which has the world’s fifth largest digital population. 

Since 2015, Brazil’s government has, on separate occasions, arrested employees of Facebook and shut down WhatsApp for not complying quickly enough with government orders. Then in 2018, Brazil’s government handed its police force the power to police social media platforms.

In 2021, the “fake news law” in Brazil mandated that social media services reveal the identities and personal details of users who shared anything decreed to be fake news or that threatened national security in any way. It also granted the government the power to shut down dissenting voices in any part of the internet. And in 2022, before the election between then-President Jair Bolsonaro and Luiz Inácio Lula da Silva, Brazil’s government granted itself further censorship powers to curb the use of disinformation during election campaigns.

ENTER ELON MUSK

Much of Musk’s ire at present is directed towards one particular judge in Brazil, Alexandre de Moraes, a Supreme Court justice who has been described by the Brazilian press as “the defender of democracy” and “Xandão,” Portuguese for “Big Alex”, for his wide-ranging investigations and quick prosecution of those he deems to be a threat to Brazil’s institutions. 

Musk and de Moraes began to butt heads after far-right supporters of former President Jair Bolsonaro rioted in Brasília in January 2023. De Moraes asked X to purge far-right voices linked to the uprising, and Musk, who has frequently aligned himself with right-wing figures like Donald Trump and Jair Bolsonaro, accused de Moraes of censorship and stifling free speech.

Last month, Musk ignored a 24-hour deadline from the Supreme Court to name a new legal representative for X, after the platform’s local office in Brazil was shut down in mid-August.

Soon after, de Moraes accused Musk of treating X like a “land without a law”, a place where misinformation, hate speech and propaganda thrive with no repercussions. Musk has responded with a characteristic tantrum (mantrum?) on X — he posted an AI-generated image of de Moraes behind bars, another image of a dog’s scrotum and called the judge “Voldemort”.

MUTUAL HYPOCRISY

Both free speech and democracy deserve better advocates in Brazil. While de Moraes is widely considered to be the man who saved Brazil’s democracy from the far right, disinformation and electoral interference, his unquestioned authority is cause for concern. Meanwhile, Musk’s haste in obeying right-wing governments in countries like India completely contradicts his claims of being a “free speech absolutist”.

According to the New York Times, de Moraes has “jailed people without trial for posting threats on social media; helped sentence a sitting congressman to nearly nine years in prison for threatening the court; ordered raids on businessmen with little evidence of wrongdoing; suspended an elected governor from his job; and unilaterally blocked dozens of accounts and thousands of posts on social media, with virtually no transparency or room for appeal…His orders to ban prominent voices online have proliferated, and now he has the man accused of fanning Brazil’s extremist flames, Bolsonaro, in his cross hairs. Last week, de Moraes included Bolsonaro in a federal investigation of the riot, which he is overseeing, suggesting that the former president inspired the violence.”

A report from Rest of World says Musk has complied with 80% of the requests from governments to take down tweets — roughly 30 percentage points more than X (then Twitter) agreed to under its previous leadership.

In India for instance, X blocked posts by journalists, celebrities and publications at the behest of the Modi government. The platform not only geo-blocked tweets in regions the government claimed social media was sparking public unrest during the farmer protests, but also globally banned accounts tweeting about the riots, including those of Canadian MP Jagmeet Singh and poet Rupi Kaur.

This article was originally published as our weekly newsletter, where we dissect the news beyond the broad strokes. Sign up here.

Guide to Pavel Durov https://www.codastory.com/authoritarian-tech/hope-fear-and-the-internet/ Fri, 30 Aug 2024 07:22:33 +0000 https://www.codastory.com/?p=51726 The Tech Mogul Under French Investigation and the Global Implications of His Unregulated Empire

Headlines around the world have described Pavel Durov as Russia’s Mark Zuckerberg or Elon Musk but also the Robin Hood of the internet. These descriptions struggle to tell us anything of note because they attempt to reduce something non-American into Americanisms.

First, let us skim the similarities: Like Zuckerberg and Musk, Durov is a tech-bro with a massive social media and messaging platform that has run into trouble with different governments. Like them, he is insanely wealthy, obsessed with freedom of speech, and loves free markets, capitalism and posting hot takes on his favorite app. Durov rarely gives interviews, choosing instead to post updates, vacation photos and thirst traps with meandering captions to his 11 million followers on Telegram. Like many tech-bros, he has a fascination with his own virility and recently claimed to have fathered over a hundred children across the world via his “high quality donor material”. In 2022, he also made paper planes out of 5,000-ruble notes (approximately $70 at the time) and, Henry Sugar-like, flung them into a crowd of people from his window.

But unlike the American heroes of Silicon Valley, Durov is a man fashioning his own legend as an international man of mystery. His arrest is a striking example of how a tech billionaire’s monopoly over global information infrastructure gives them–as individuals–incredible geopolitical influence. 

Initial reactions from Russia have framed Durov’s arrest as an instance of Western hypocrisy on free speech. Russians (including voices from within the Russian government) are urging the Kremlin to intervene on his behalf. Access is tricky, but military blogs show deep anxiety as to what his arrest means for the Russian military–which relies on Telegram as one of its primary means of communication in the war with Ukraine.

Durov’s arrest and reactions from Moscow have once again raised a question about his links to the Russian government. The Kremlin’s position continues to be firmly aligned with NSA whistleblower Edward Snowden (now based in Moscow), who described the arrest as “an assault on the basic human rights of speech and association”, and Elon Musk, who has compared the arrest to being executed for liking a meme in 2030.

In a rare interview four months before his arrest, Durov described leaving Russia as a young child and moving to Italy with his family. His first experience with free markets, as he described it, convinced him that this was the way to live. His brother Nikolai was already a mathematical prodigy at school, and although Pavel struggled with English at first, his teachers’ dismissive attitude towards him spurred him to becoming the “best student”.

“I realized I liked competition,” he said with a smile.

The Durovs moved back to Russia when Pavel was a teenager, after the collapse of the Soviet Union. Pavel’s father, a scholar of ancient Roman literature, had a new job, and the family was able to bring their IBM computer back with them from Italy. Nikolai and Pavel continued to thrive at school—they were now learning six foreign languages each, along with advanced mathematics and chemistry. In his spare time, Pavel was writing code and building websites for his fellow students. It was at this time that he built VKontakte, an early social network that soon became the biggest platform of its kind across several post-Soviet countries. At the time, VKontakte had a single employee: Pavel Durov himself.

The story of Durov’s run-ins with Russia’s government is better known: in 2011 and again in 2013, the government asked VKontakte to share private data belonging to Russian protestors and Ukrainian citizens. When Durov refused, he was given “two sub-optimal options”: he could either comply, or he could sell his stake in the company, resign and leave the country. He chose the latter. In 2014 Durov sold his shares in the company and left Russia, announcing his departure with an image post of dolphins and an immortal line from The Hitchhiker's Guide to the Galaxy: “So long, and thanks for all the fish.”

This is also where Durov’s story begins to differ from the smooth narrative turns of the American tech broligarchy. Nikolai and Pavel created Telegram, a new platform offering groups of up to 200,000 people, multimedia messaging, self-destructing texts and the ability to hold secrets. Durov traveled the world looking for a place to set up an office and rejected London, Singapore, Berlin and even San Francisco. “In the EU it was too hard to bring the people I wanted to employ from across the world,” he told Tucker Carlson. “In San Francisco, I drew too much attention.” (The only time Durov has ever been mugged was in San Francisco, he said, when he left Jack Dorsey’s house and phone snatchers attempted to take his phone as he was tweeting about the meeting. Durov says he fought them off and kept his phone.)

“I’d be eating breakfast at 9 am and the FBI would show up,” he said. “It made me realize that perhaps this was not the right place for me.”

Durov became a citizen of the UAE and of France. In 2022, he was named the wealthiest man in the UAE. His current net worth is $15.5 billion.

In July 2024, Telegram had 950 million active users, placing it just after WhatsApp, WeChat and Facebook Messenger. Telegram isn’t just one of the most popular messenger apps in Russia and other post-Soviet countries; as digital freedoms shrink, the app’s popularity is growing across the world. The platform began to be used increasingly during COVID lockdowns, when disinformation was rife and platforms like Facebook were allegedly under pressure from governments to censor posts about the pandemic.

Telegram’s popularity has also grown through political crises and protests in Egypt, Iran, Hong Kong, Belarus, Russia and India—Telegram provides a secure means of communication and organization for protesters, but while calls for violence are explicitly forbidden on the app, little else is.

“Telegram is a neutral platform for all voices, because I believe the competition between different ideas can result in progress and a better world for everyone,” Durov told Carlson. But this glib take does little to address the very real concern about child pornography, revenge porn and deepfakes that are able to thrive on the app because of its lack of moderation.

In his telling, competition and freedom are the twin motivations behind all of Durov’s decisions. It’s always one or the other that will explain why he does what he does, whether that’s living in the UAE, resisting content moderation on Telegram, or refusing to invest in real estate and private jets. 

“Millions of people have been signing up and sharing content on Telegram in the last hour while Instagram and Facebook were down,” he posted after a Meta outage in March. “Telegram is more reliable than these services—despite spending several times less on infrastructure per user. We also have about 1000 times (!) fewer full-time employees than Meta, but manage to launch new features and innovate faster. Throughout 2023, Telegram was unavailable for a total of only 9 minutes out of the year’s 525,600 minutes. That’s a 99.9983% uptime!” 
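For what it’s worth, the uptime figure in that post is easy to verify from the numbers Durov himself cites; a minimal sketch in Python:

```python
# Checking the arithmetic in Durov's post: 9 minutes of downtime
# out of the 525,600 minutes in a non-leap year.
minutes_per_year = 365 * 24 * 60        # 525,600
downtime_minutes = 9
uptime = (minutes_per_year - downtime_minutes) / minutes_per_year
print(f"{uptime:.6%}")                  # 99.998288%, i.e. roughly 99.9983%
```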

Since his arrest and interrogation, prosecutors have said that the judge in Durov’s case sees grounds to formally investigate the charges against him. Durov has been released from custody, but is banned from leaving France. He paid bail of €5 million and must present himself at a police station twice a week.

Durov’s arrest has also raised questions about whether tech titans can personally be held responsible for what users do on their platforms. In India, Narendra Modi’s government has already said that it will also be investigating Telegram, while the Indian press has been agog with details about Durov’s personal life, fixating on his virility and the blonde woman who has reportedly been missing since Durov’s arrest. Durov’s brother, the once-child prodigy Nikolai, is also wanted by French authorities, and warrants for both brothers were issued as early as March. Toncoin, the cryptocurrency closely tied to Telegram, has crashed since news of his detention. What remains to be seen is whether Pavel will fall prey to the cult of his own personality or regain that which he claims to value above all else—his freedom.

WHY DID WE WRITE THIS STORY?

It’s hard to imagine a product of any other industry that holds this much sensitive information about so many people, wields such vast influence over lives and geopolitics, and remains this unregulated. Telegram, which claims to have as few as 30 engineers, is led by one capricious 39-year-old man who is now under investigation in France. Pavel Durov, who posted €5 million bail, cannot leave France and has to report to a police station twice a week, while authorities investigate him for a range of crimes including possessing and distributing child pornography, drug trafficking and criminal association.

YouTube slows down in Russia Amid News of Ukrainian Offensive https://www.codastory.com/authoritarian-tech/youtube-slows-down-in-russia-amid-news-of-ukrainian-offensive/ Thu, 08 Aug 2024 12:12:46 +0000 https://www.codastory.com/?p=51608 By forcing Russian YouTubers to Russian platforms, state agencies gain control over their content and control the trickle-down of news on the Russian internet

YouTube is facing a major slowdown in Russia amid rumors of the platform closing down altogether, as part of a growing effort by the country to isolate its internet from the rest of the world. Coda spoke with Sarkis Darbinyan, managing partner of the Digital Rights Center and co-founder of Roskomsvoboda, the first Russian public organization operating in the field of digital rights protection and digital empowerment.

Coda: Russian authorities announced last week that YouTube's performance would be slowed down up to 70%. Today, it is almost inaccessible in Russia without a VPN, and uploading a short video can take hours. What's happening?

Darbinyan: YouTube is being slowed down across the country. This is done centrally through DPI (Deep Packet Inspection) equipment, via providers. If a provider sees that a user is connecting to a YouTube server, it starts throttling the traffic: the speed drops, and 4K videos either start buffering or YouTube switches them to low resolution. This contradicts the authorities’ claims that outdated Google servers, which haven’t been updated for two years, are to blame. Server degradation doesn’t happen overnight. Here, we see interference in the traffic by Roskomnadzor (the Federal Service for Supervision of Communications, Russia’s internet censor).
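To make the mechanism Darbinyan describes a little more concrete, here is a purely illustrative sketch in Python of how provider-side throttling of a single service can work in principle. The domain suffixes, rates and matching rule below are hypothetical placeholders, not a description of Roskomnadzor’s actual equipment.

```python
# Toy illustration of selective throttling at an ISP middlebox: flows whose TLS
# SNI (the server name the client asks for) matches a target list get a
# bandwidth cap low enough to ruin video, while everything else passes at full
# speed. The suffixes and rates below are made up for illustration.
THROTTLED_SUFFIXES = ("youtube.com", "googlevideo.com")
FULL_RATE_KBPS = 100_000
THROTTLED_RATE_KBPS = 128

def rate_limit_for(sni: str) -> int:
    """Return the per-flow bandwidth cap chosen for a given server name."""
    return THROTTLED_RATE_KBPS if sni.endswith(THROTTLED_SUFFIXES) else FULL_RATE_KBPS

print(rate_limit_for("rr1---sn-example.googlevideo.com"))  # 128
print(rate_limit_for("example.org"))                       # 100000
```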

C: Why are they slowing down YouTube and why now?

D: This has developed gradually. There have been many concerns about YouTube, not political ones related to social protests, but rather technical issues. How to block it? And how to block it without affecting other Google services, which, of course, could turn most Android devices into bricks. It apparently took them some time to figure this out. 

Currently, the blockage is not complete. YouTube is still the number one video platform in Russia in terms of users. This means that if it were completely blocked, most Russians would access it through VPNs and cross-border channels. This could potentially bring down the entire internet, as the load on cross-border channels would immediately increase when users connect to servers located abroad instead of their provider's server. Roskomnadzor is currently measuring and observing how the YouTube slowdown affects the load on cross-border channels. If the load increases, the blockages may be relaxed, but if the loads are small, they might push for a 100% blockage.

C: Is the goal to reorient users to Russian networks, like RuTube and VKontakte (the most popular Russian network, controlled by the state)?

D: I think so. What we see is a change in the Kremlin’s strategy. Instead of a harsh blockade, like the one that befell Instagram and Facebook, the task now is to worsen the quality of video in order to intensify user migration to Russian alternatives. This might work, as not everyone has access to VPN services, which have become significantly limited. Not everyone is ready to use them. If this continues for many months, it will certainly encourage users to gradually move to other platforms.

C: What are the consequences for bloggers moving to Russian YouTube alternatives?

D: The authorities will definitely moderate and censor the content. Some videos might be deleted entirely, or an entire channel might be taken down. By moving to Russian platforms, a blogger becomes entirely dependent on Roskomnadzor and its will, losing control over their content. This will be more severe than dealing with YouTube's moderation team.

C: Is there a scenario in which they won't have to move to these platforms?

D: It depends on the resistance from users and content creators. If they say they are not ready to part with YouTube and arm themselves with VPNs, all of Roskomnadzor's actions will be in vain. But this situation will allow some of the audience to be lured away.

C: Besides VPNs, are there other ways to bypass these blockages?

D: Well, VPNs are, of course, the most robust tool not only for restoring access to information but also for restoring speed. Therefore, a good VPN channel will solve the problem of waiting for a YouTube video to load. Other tools like Tor can also help. I would like to remind you that Roskomnadzor has worked hard over the past six months to significantly narrow the choice of tools available to Russians.

C: Do you think this is a step towards something bigger for Roskomnadzor, in terms of internet blocking and increasing the so-called sovereignty of the internet?

D: Roskomnadzor and Russian censorship have distinctive features that set them apart from other countries, such as China. While it is becoming more like the Chinese model, it is still very different from the models in Iran or Turkmenistan, where the censorship system is even more severe. The key difference is that in those countries all allocated IP addresses are divided into three lists: white, or allowed, addresses, which belong to national state-owned companies; gray addresses, used by foreigners and foreign companies; and everything else, which goes onto the blacklist. With such a model, VPNs do not work at all because almost all addresses, except for the allowed ones, are blocked. However, for such countries, there are tools like Psiphon, which is not quite a VPN but rather a combination of proxy servers and proprietary development, which, in my opinion, is the only one that works under such total censorship conditions.
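A rough sketch of the default-deny logic Darbinyan describes for the most severe censorship models; the list names and IP addresses below are invented for illustration, not real allocations.

```python
# Toy model of the "three lists" scheme: traffic is allowed only to whitelisted
# (state-approved) or graylisted (tolerated foreign) destinations; every other
# address, including most VPN endpoints, is blocked by default.
WHITELIST = {"198.51.100.10"}   # hypothetical state-owned services
GRAYLIST = {"203.0.113.25"}     # hypothetical tolerated foreign addresses

def is_reachable(dest_ip: str) -> bool:
    return dest_ip in WHITELIST or dest_ip in GRAYLIST

print(is_reachable("198.51.100.10"))   # True
print(is_reachable("192.0.2.77"))      # False: default-deny catches everything else
```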

C: Why hasn't Russia implemented this yet?

D: Because Russia still has ambitions to trade with the whole world. Russia still sees itself as part of the international economic community. It wants to trade with India, China, Latin America, and Africa, unlike Turkmenistan. Therefore, trade is impossible without the internet. Implementing such a model would significantly limit the possibilities of foreign economic activity for state-owned companies and Russian legal entities.

Sovereign internet is essentially a barrier between Russian cyberspace and the global one. It has gateways that are, in one way or another, controlled by Roskomnadzor. But it is not only about censorship; it is also about active import substitution: replacing services, protocols, and cryptography, which Russian authorities are striving for.

How tech design is always political https://www.codastory.com/authoritarian-tech/tech-design-ai-politics/ Thu, 29 Feb 2024 18:29:23 +0000 https://www.codastory.com/?p=50026 Social media companies have made many mistakes over the past 15 years. What if they’re repeated in the so-called AI revolution?

Facebook has a long-maligned yet still active feature called “People You May Know.” It scours the network’s data troves, picks out the profiles of likely acquaintances, and suggests that you “friend” them. But not everyone you know is a friend.

Anthropologist Dragana Kaurin told me this week about a strange encounter she had with it some years back.

“I opened Facebook and I saw a face and a name I recognized. It was my first grade teacher,” she told me. Kaurin is Bosnian and fled Sarajevo as a child, at the start of the war and genocide that took hundreds of thousands of lives between 1992 and 1995. One of Kaurin’s last memories of school life in Sarajevo was of that very same teacher separating children in the classroom on the basis of their ethnicity, as if to foreshadow the ethnic cleansing campaign that soon followed.

“It was widely rumored that our teacher took up arms and shot at civilians, and secondly, that she had died during the war,” she said. “So it was like seeing a ghost.” Now at retirement age, the teacher’s profile showed her membership in a number of ethno-nationalist groups on Facebook. 

Kaurin spent the rest of that day feeling stunned, motionless. “I couldn’t function,” she said.

The people who designed the feature probably didn’t anticipate that it would have such effects. But even after more than a decade of journalists like The New York Times’ Kashmir Hill showing various harms it could inflict — Facebook has suggested that women “friend” their stalkers, sex workers “friend” their clients, and patients of psychiatrists “friend” one another — the “People You May Know” feature is still there today.

From her desk in lower Manhattan, Kaurin now runs Localization Lab, a nonprofit organization that works with underrepresented communities to make technology accessible through collaborative design and translation. She sees the “People You May Know” story as an archetypical example of a technology that was designed without much input from beyond the gleaming Silicon Valley offices in which it was conceived.

“Design is always political,” Kaurin told me. “It enacts underlying policies, biases and exclusion. Who gets to make decisions? How are decisions made? Is there space for iterations?” And then, of course, there’s the money. When a feature helps drive growth on a social media platform, it usually sticks around.

This isn’t a new story. But it is top of mind for me these days because of the emerging consensus that many of the same design mistakes that social media companies have made over the past 15 years will be repeated in the so-called “AI revolution.” And with its opaque nature, its ability to manufacture a false sense of social trust and its ubiquity, artificial intelligence may have the potential to bring about far worse harms than what we’ve seen from social media over the past decade. Should we worry?

“Absolutely,” said Kaurin. And it’s happening on a far bigger, far faster scale, she pointed out.

Cybersecurity guru Bruce Schneier and other prominent thinkers have argued that governments should institute “public AI” models that could function as a counterweight to corporate, profit-driven AI. Some states are already trying this, including China, the U.K. and Singapore. I asked Kaurin and her colleague Chido Musodza if they thought state-run AI models might be better equipped to represent the interests of more diverse sets of users than what’s built in Silicon Valley.

Both researchers wondered who would actually be building the technology and who would use it. “What is the state’s agenda?” Kaurin asked. “How does that state treat minority communities? How do users feel about the state?”

Musodza, who joined our conversation from Harare, Zimbabwe, considered the idea in the southern African context: “When you look at how some national broadcasters have an editorial policy with a political slant aligned towards the government of the day, it’s likely that AI will be aligned towards the same political slant as well,” she said.

She’s got a point. Researchers testing Singapore’s model found that when asked questions about history and politics, the AI tended to offer answers that cast the state in a favorable light.

“I think it would be naive for us to say that even though it’s public AI that it will be built without bias,” said Musodza. “It’s always going to have the bias of whoever designs it.”

Musodza said that for her, the question is: “Which of the evils are we going to pick, if we’re going to use the AI?” That led us to consider that a third way might be possible, depending on a person’s circumstances: to simply leave AI alone.

This piece was originally published as the most recent edition of the weekly Authoritarian Tech newsletter.

How Big Tech let down Navalny https://www.codastory.com/authoritarian-tech/russia-navalny-big-tech/ Wed, 21 Feb 2024 19:40:24 +0000 https://www.codastory.com/?p=49931 Silicon Valley was meant to be a boon to the Russian opposition, helping spread democratic ideas. Until the platforms bowed before a Kremlin crackdown

As if the world needed another reminder of the brutality of Vladimir Putin’s Russia, last Friday we learned of the untimely death of Alexei Navalny. I don’t know if he ever used the term, but Navalny was what Chinese bloggers might have called a true “netizen” — a person who used the internet to live out democratic values and systems that didn’t exist in their country.

Navalny’s work with the Anti-Corruption Foundation reached millions using major platforms like YouTube and LiveJournal. But they built plenty of their own technology too. One of their most famous innovations was “Smart Voting,” a system that could estimate which opposition candidates were most likely to beat out the ruling party in a given election. The strategy wasn’t to support a specific opposition party or candidate — it was simply to unseat members of the ruling party, United Russia. In regional races in 2020, it was credited with causing United Russia to lose its majority in state legislatures in Novosibirsk, Tambov and Tomsk.
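The core logic behind a tactical-voting tool of this kind is simple to sketch; the candidates and figures below are invented purely for illustration, and the real system’s estimates drew on far richer data.

```python
# Toy version of the idea: in each race, recommend whichever non-ruling-party
# candidate is estimated to be strongest. All figures here are made up.
estimated_support = {
    "United Russia candidate": 0.34,
    "Opposition candidate A": 0.29,
    "Opposition candidate B": 0.22,
    "Opposition candidate C": 0.15,
}
RULING_PARTY_CANDIDATE = "United Russia candidate"

recommendation = max(
    (name for name in estimated_support if name != RULING_PARTY_CANDIDATE),
    key=estimated_support.get,
)
print(recommendation)  # Opposition candidate A
```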

The Smart Voting system was pretty simple — just before casting a ballot, any voter could check the website or the app to decide where to throw their support. But on the eve of national parliamentary elections in September 2021, Smart Voting suddenly vanished from the app stores for both Google and Apple. 

After a Moscow court banned Navalny’s organization for being “extremist,” Russia’s internet regulator demanded that both Apple and Google remove Smart Voting from their app stores. The companies bowed to the Kremlin and complied. YouTube blocked select Navalny videos in Russia and Google, its parent company, even blocked some public Google Docs that the Navalny team published to promote names of alternative candidates in the election. 

We will never know whether or not Navalny's innovative use of technology to stand up to the dictator would have worked. But Silicon Valley's decision to side with Putin was an important part of why Navalny’s plan failed.  

Navalny’s team felt so abandoned by the companies at that moment that they compared it to the U.S. withdrawal from Afghanistan. At the time, photos of U.S. planes taking flight and leaving desperate Afghans behind on the runways of the Kabul airport were dominating global media.

“It felt like we’re people running alongside a plane that’s taking off. And here we are, being left behind,” Ivan Zhdanov told my colleagues investigating the fallout of the Smart Voting story for “Undercurrents: Tech, Tyrants and Us,” Coda’s podcast about the role of technology in the rise of global authoritarianism. 

“We rely on YouTube, on Google Docs, on all these other tools, to spread ideas of freedom, of democracy. But right now we are in a game that has no rules,” he said at the time.

Why did these Big Tech behemoths, which claimed to support baseline human rights, bow down to the Kremlin? Neither company ever spoke publicly about the decision. The companies told Navalny’s organization that they were acting on a legal order. But what legitimacy does a legal order have when it’s clearly been written to target the government’s top adversary? 

This is the shaky ground on which these companies operate. If they want to keep doing business in a given country, they have to follow or at least pay lip service to the laws of the land. In a case like this one, it meant undermining the interests of regular Russians and democracy itself.

And then, just months later, the tables turned again. When Russia launched its full-scale invasion of Ukraine, companies across Silicon Valley put out statements declaring their support for Ukraine and their intentions to go after Russian state propaganda on their platforms. Both Meta and Twitter (now X) were banned in Russia, and companies like Apple and TikTok began blocking select services within the country. Tacit signs of support for the opposition also popped up. The Smart Voting app even reappeared in the App Store. Whatever rationale had led the company to remove the app suddenly evaporated.

This week, I caught up with Tanya Lokot and Marielle Wijermars, two internet policy scholars who specialize in the region, to ask their reflections on how things have evolved since that time, especially in the wake of Navalny’s death.

“It may be a bit too deterministic to say that his team’s dependence on tech platforms was ‘their downfall,’” they wrote in a joint response, noting that Navalny’s organization had “accounted for the restrictions and possible censorship and built alternative infrastructures to support their work.” They also talked about how building this kind of resilience has become more difficult since the start of the war. 

“It is getting harder and harder to find these alternatives, as more and more platforms are exiting Russia and users are relying on VPNs and other circumvention tools,” they wrote. Pressure from sanctions and an overall lack of technology is compounding the issue and isolating Russians further. And they noted that for Navalny’s organization, which now works mainly in exile, there are new challenges around getting information into the country. While the last few years have offered new lessons on the promise and perils of using technology to try to bring about change, Lokot and Wijermars made it clear that these are all mere battles in a much longer war.

Just yesterday, another tech company became the site of the latest battle — X briefly suspended the account of Navalny’s widow, Yulia Navalnaya. The company cited “automated security protocols” as the reason for the error.

After years avoiding the spotlight, Navalnaya came out this week with a gut-wrenching speech in which she declared her intention to seize the torch and keep fighting “harder, more desperately and more fiercely than before.” But with its tools decimated and its ultimate netizen gone, the fight now may be more brutal and more dangerous than ever.

This piece was originally published as the most recent edition of the weekly Authoritarian Tech newsletter.

Russia’s transformation into a full digital dictatorship that ultimately killed its most prominent critic did not happen overnight. Listen to this episode of “Undercurrents: Tech, Tyrants and Us” to understand how it unfolded and what role Western technology companies played in strengthening Putin’s regime.

Taiwan confronts China’s disinformation behemoth ahead of vote https://www.codastory.com/authoritarian-tech/taiwan-election-disinformation-china/ Fri, 05 Jan 2024 11:44:53 +0000 https://www.codastory.com/?p=49252 China is using disinformation and propaganda to try to influence Taiwan’s election. A scrappy coalition of civil society organizations are fighting back

On a sunny morning in Taipei last August, I joined a few dozen other people at the headquarters of the Kuma Academy for an introductory course in civil defense. We broke into groups to introduce ourselves. As our group leader presented us to the room, she mistakenly called me a “war correspondent.”

“No, no, that’s not right,” I interjected. “I’m here because I precisely don’t want to become a war correspondent in the future.” 

The Kuma Academy, established in September 2022, trains citizens in the basic skills they might need to survive and help their compatriots in the event of an attack. Civil defense has been on many people’s minds in Taiwan since Russia’s full-scale invasion of Ukraine in 2022. “If China Attacks,” a book covering potential scenarios for a Chinese invasion — co-written by Kuma Academy co-founder Puma Shen — has become a bestseller. 

Many of the attendees at the academy seem like regular office workers or homemakers. The youngest person I talk to is a high school student. A great deal of the curriculum is practical — basic medical training, contingency planning for an invasion, even what kind of material you should hide behind to protect yourself from gunfire. But a lot of the training is less about skills and more about shoring up the sense of agency that regular people feel: making them understand that they have the power to resist.  

In the face of Chinese propaganda and disinformation, that could be as important as weapons drills and first aid. Taiwan holds elections this month, pitting the pro-autonomy Democratic Progressive Party (DPP) against the more pro-Beijing KMT. The outcome of the vote has huge consequences for relations across the Taiwan Strait and for the future of an autonomous Taiwan, whose independence Beijing has vehemently opposed — and threatened to violently reverse — since the island first began to govern itself in 1949. Successfully interfering in the democratic process using what the Taiwanese government calls “cognitive warfare” could be a way for Beijing to achieve its goals in Taiwan without firing a shot. 

Despite — or because of — the stakes, Taiwan’s response to the challenge of Chinese election interference isn’t siloed in government ministries or the military. Just as civil resistance has to be embedded in society, the responsibility of defending the information space has been entrusted to an informal network of civil society organizations, think tanks, civilian hackerspaces and fact-checkers. 

“We’re often asked by international media if Taiwan has an umbrella organization for addressing disinformation-related issues. Or if there is a government institution coordinating these kinds of responses,” said Chihhao Yu, one of the co-founders of Information Environment Research Center (IORG), a think tank in Taiwan that researches cognitive warfare. “But first, there’s no such thing. Second, I don’t think there should be such an institution — that would be a single point of failure.”

A girl learns how to do CPR during an event held by Taiwanese civil defense organization Kuma Academy, in New Taipei City on November 18, 2023, to raise awareness of natural disaster and war preparedness. I-Hwa Cheng/AFP via Getty Images.

Disinformation from China is hardly new in Taiwan. During the Cold War, before the term “disinformation” was in the common lexicon, the Chinese Communist Party injected propaganda into the public sphere, trying to instill the idea that reunification was inevitable and that it was futile to resist. This was spread through many channels, including newspapers, magazines and radio. But, as in the rest of the world, social media has made it easier to reach a wide audience and spread falsehoods more rapidly and with greater deniability. Disinformation now circulates on international platforms including Facebook, Instagram, X and the South Korean-owned messaging app Line, which is popular in Taiwan, as well as on local forums such as PTT and Dcard.

Disinformation from China used to be easy to spot. Its creators would use terms that weren't part of the local Taiwanese lexicon or write with simplified Chinese characters, the standard script in mainland China — Taiwan uses a traditional set of characters instead. However, this is changing, as information operations become more sophisticated and better at adapting language for the target audience. “Grammar, terms, and words are more and more similar to that of Taiwan in Chinese disinformation,” said Billion Lee, co-founder of the fact-checking organization Cofacts.

With the election approaching, the Chinese government has increased its efforts to localize its propaganda, recruiting social media influencers to spread its messaging and allegedly buying influence at the grassroots level by subsidizing trips to China for local Taiwanese politicians and their constituents. Over 400 trips took place in November and nearly 30% of Taipei’s borough chiefs — the lowest level of elected officials — have participated in them. 

The medium used to spread propaganda and disinformation has evolved as well. Cofacts started out in 2016 by building a fact-checking chatbot on Line, focusing on text-based falsehoods. Now, it has to work across multiple platforms and formats, including TikTok reels, Instagram stories, YouTube shorts and podcasts.

The aim of this election disinformation is often fairly obvious — boosting Beijing’s preferred candidates and discrediting those it considers hostile. 

In late November, 40 people were detained by Taiwanese authorities on voting interference charges. A separate investigation found a web of accounts across Facebook, YouTube and TikTok that worked to prop up support for the pro-China KMT. The so-called “Agitate Taiwan” network also attacked third-party candidate Ko Wen-je, whose party favors closer relations with China, but whose candidacy may divide the vote in a way that leads to a victory for the historically independence-leaning DPP. 

Other themes, Lee said, include trying to undermine the DPP leadership and casting them as inept by insinuating, falsely, that they failed to secure vaccines during the Covid-19 pandemic, and alleging that the DPP only pushed for the development of Taiwan’s domestically produced vaccine, Medigen, because it had made illicit investments in the company. Messaging also often targets Taipei’s relationship with the U.S., suggesting that America would abandon Taiwan in the event of a war.

These overtly political messages intersect with other influence operations and more traditional espionage. In November, 10 Taiwanese military personnel were arrested after allegedly making online videos pledging to surrender in the event of a Chinese invasion. One of those charged, a lieutenant colonel, was allegedly offered $15 million by China to fly a Chinook helicopter across the median line of the Taiwan Strait to a waiting Chinese aircraft carrier. Such defections and public promises not to resist, weaponized and spread on social media, are clearly aimed at undermining public morale in Taiwan. 

Those efforts can be oddly targeted. In May, Cynthia Yang, the deputy secretary-general of a nonprofit in Taiwan, received a series of calls from people with mainland Chinese accents after she ordered a copy of “If China Attacks” from the Taiwanese bookseller Eslite. The callers claimed to be from customer service, but they questioned Yang about her “ideologically problematic” purchase. It seemed to be an effort at psychological intimidation. After the incident was reported on by Taiwanese media, the book’s co-author Puma Shen quipped on social media that his next book would be titled “If China Calls.”

Fighting back against this full-spectrum influence campaign is hard. Chinese disinformation tactics have fed into a broader polarization in Taiwan, which is fragmenting the internet.  “Everyone uses a different internet these days,” Lee said. There's increasing recognition online that people inhabit echo chambers comprising their peers, which are difficult to break out of. 

It means that the organizations — mainly civil society groups — arrayed against a superpower keen on undermining Taiwan's democratic processes face a complex task.  Often these groups are small and scrappy, run by volunteers or just a handful of staff. They’re in an arms race that they can’t win — or at least, that they can’t win alone.

To compete, they’re collaborating. “Even if we don’t know each other, we can work together without directly cooperating,” said Yu from the Information Environment Research Center. “To use Cofacts as an example, we don’t directly coordinate with Cofacts. But because Cofacts has an open database with an open license, we can use their datasets of rumors and community fact-checking to conduct research, and we continue to do so.”

Cofacts has emerged as an important piece of infrastructure for Taiwan’s fact-checking ecosystem. The organization has used its Line bot as a way to build an enormous database of disinformation spotted in the wild, which it makes available to other groups via an application programming interface. Crucially, the bot allows users to collect disinformation that wasn’t circulating on open social media, such as Facebook or Twitter, but in closed-door messaging apps such as Line or Facebook Messenger. 

Systematically collecting that data allows other organizations to conduct more sophisticated analysis, spot patterns and respond strategically, rather than chasing down every lie and fact-checking it.
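As a rough illustration of why pooled data matters, even trivial aggregation over a shared set of reports shows which rumors are circulating most widely; the sample records below are invented, and Cofacts’ real database and API are considerably richer.

```python
# Toy example: counting how often the same reported message appears lets
# researchers prioritize the fastest-spreading rumors instead of fact-checking
# every item one by one. The sample reports are made up for illustration.
from collections import Counter

reports = [
    {"text": "Rumor about vaccines", "source": "line"},
    {"text": "Rumor about vaccines", "source": "line"},
    {"text": "Rumor about ballots", "source": "facebook"},
    {"text": "Rumor about vaccines", "source": "line"},
]

top_rumors = Counter(r["text"] for r in reports).most_common(2)
print(top_rumors)  # [('Rumor about vaccines', 3), ('Rumor about ballots', 1)]
```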

This collaborative approach can be traced back to g0v, the influential civic hacker community, from which a number of innovative initiatives have emerged in the past decade — from digitizing historical documents significant to contemporary Taiwanese politics to gamifying the identification of satellite images to find illegal factories on farmland. 

The g0v community runs decentralized hackathons for developing project ideas, taking place in classrooms and offices and bringing together anywhere from a few dozen to a few hundred people. Not all ideas make it to fruition, but some of the projects that come out of g0v — including those that tackle disinformation — may begin with just a small breakout group huddled in the corner of a hackathon.

It is these small civil society groups that Taiwan relies on to stay ahead of Chinese innovations in disinformation, with the hope that by being nimble and adaptable, they can hold back the tide. Bigger threats are coming. The rise of generative artificial intelligence, which can quickly create text, images, videos and more at scale, could allow China to increase the volume of propaganda it produces and make it seem more authentic by accurately using Taiwanese idioms and references. Certainly, there is no shortage of materials produced out of Taiwan’s open and free Internet for generative AI to learn from. 

Still, the solution may be precisely in the decentralized and networked nature of these efforts to combat Chinese disinformation campaigns. After all, a set-up in which a number of differing solutions emerge at once, often organically and spontaneously, has no single point of failure, to borrow Yu’s words.

“We wanted to connect people who wrote code and people concerned with society to work together,” Lee said, when asked about why she and her collaborators began Cofacts. Perhaps it’s faith in society to know for itself what’s best that keeps such groups going. And this may be the best weapon against authoritarianism — the belief that the connections between people can be enough to deal with a much larger enemy. The fight is on.

CORRECTION [01/12/2024 09:52AM EST]: The original version of this story stated that 40 people were detained by Taiwanese authorities on voting interference charges in connection to the Agitate Taiwan network. The detentions were not directly related to the network.

Why did we write this story?

Taiwan is a pioneer in digital defense and tech-enabled civil society. How it handles an onslaught of Chinese disinformation could set the standard for other democracies.

On British soil, foreign autocrats target their critics with impunity https://www.codastory.com/authoritarian-tech/on-british-soil-foreign-autocrats-target-their-critics-with-impunity/ Tue, 19 Dec 2023 14:08:39 +0000 https://www.codastory.com/?p=49038 Canada and the US have criticized the Modi government in India for pursuing its critics overseas. But in the UK, where tensions between diaspora communities are rising, the government has been silent

Death threats are pretty routine for British Sikh journalist Jasveer Singh. When he posts stories on social media about his community, they’re often met with abuse. He’s been called a terrorist, as have the subjects of his stories. His accounts have been reported en masse for allegedly posting offensive comments, prompting the platforms to suspend them. “It does descend into direct threats,” Singh said. “‘We’re coming for you next… We’re going to shut you up.’ That’s a daily occurrence.”

It’s never entirely clear who is behind the campaigns, or if they’re actively being coordinated. But the abuse tends to flare up during moments of political scandal in India. The country’s deepening ethnic and religious divisions under the Hindu nationalist government of Prime Minister Narendra Modi are plain to see in the digital realm. Trolling of minorities by supporters of Modi’s Bharatiya Janata Party is commonplace. India has used diplomatic channels to brand diaspora groups as terrorists, and has used digital channels to harass and disrupt potential opponents. Singh and other prominent Sikhs in the U.K. have received messages from X — the platform formerly known as Twitter — telling them that Indian authorities have demanded their accounts be blocked.

“I think most people have got fairly thick-skinned about these threats,” said Dabinderjit Singh, a prominent British Sikh activist and advisor to the Sikh Federation U.K., a lobby group. But then the killings began, and the threats got harder to ignore. In Pakistan, two prominent Sikh separatists were gunned down, one in January, the second in May. A third, Hardeep Singh Nijjar, was killed in June in Vancouver, Canada, in what the Canadian government alleges was a state-sponsored assassination. A fourth plot was allegedly foiled by the FBI in the U.S. “Perhaps the situation is somewhat different now that those threats appear to be potentially real,” Dabinderjit Singh said.

Adding to the sense of fear is the mysterious death of Avtar Singh Khanda, a Sikh activist based in the U.K. Khanda, who had spoken publicly about receiving threats from the Indian authorities, died after a short illness in June. His family and colleagues are convinced he was poisoned and are demanding that the British authorities investigate his death.

British Sikhs are just the latest group to raise the alarm over the import of repression into the U.K. Uyghur exiles from China and democracy advocates who have fled Hong Kong have been aggressively targeted by people they believe work for the Chinese government. Iranian exile groups and media have been hit with cyberattacks and physical threats. Opponents of the Saudi and Emirati governments have been surveilled and harassed online. The multitude of cases show how authoritarian regimes are more willing than ever to reach across borders to target opponents living in western Europe and North America — and how much easier that has become in the digital era. 

Democratic governments have struggled to deal with these abuses, but perhaps none more so than the U.K., which is diplomatically diminished post-Brexit, gripped by constant crises, and increasingly authoritarian in its own politics. While the Canadian and U.S. governments have been vocal in their criticism of India’s transnational abuses, and worked to reassure the Sikh communities in their respective countries that they will be protected, the U.K. government has been deafeningly quiet. 

“Do one or two people have to be killed in the U.K. before our government says something?” Dabinderjit Singh said.

A mourner wears a t-shirt bearing a photograph of murdered Sikh community leader Hardeep Singh Nijjar, in Surrey, British Columbia. Darryl Dyck/The Canadian Press via AP.

Transnational repression on British soil appears to be rising just as the U.K. navigates a world in which its exit from the European Union has left its economic and diplomatic powers seriously diminished. The government, now stacked with Brexit hardliners, is desperately seeking new commercial and political partners to help it deliver on the promised benefits of severing ties with the world’s largest trading bloc. 

All this has led to some uncomfortable compromises. It’s difficult to stand up to superpowers (see China) or petrostates (see Saudi Arabia) when you know you may need to rely on them for investment and trade. 

The U.K.’s particular vulnerability overlaps with an uptick in transnational repression globally, partly because technology has made attacks much easier to procure and to get away with. Lives lived increasingly online leave many openings for attack. Emails, social media accounts or cloud services can be hacked. Online profiles can be cloned or impersonated. Repression can now be performed remotely and systematically in a way that wasn’t possible back when intimidating exiles meant you had to physically infiltrate their spaces. It is also a lot harder to hold perpetrators to account. Online harassment campaigns can be dismissed as the actions of the crowd, and can be hard to definitively track back to a government actor. Perpetrators of digital surveillance too can be notoriously difficult to pinpoint.

These less visible components of transnational repression work in concert with more overt actions, often using international legal mechanisms, such as arrest warrants and Interpol red notices, to put pressure on people, limiting their ability to travel or access finances. To give themselves cover, authoritarian countries have often co-opted the West’s obsession with national security, echoing the excuses made by the U.S. and U.K. to justify their own adventurism. 

“The availability of the rhetoric around extremism and terrorism, which arose as part of the War on Terror, gives countries a common language to talk about people who are dangerous or undesirable,” Yana Gorokhovskaia, a research director at NGO Freedom House, said. “It’s a way of catching someone in a web that everyone understands as bad.”

Uyghur communities in the U.K. have long complained about abuse from abroad. They say their online accounts have been hacked, they’ve received threatening messages over WhatsApp and WeChat, and their family homes back in Xinjiang have been raided by police. As revelations about the Chinese Communist Party’s massive “reeducation” camps and forced labor facilities in Xinjiang have emerged, these threats have increased. 

China’s reach into the U.K. became even more intrusive in 2021, after the CCP’s crackdown on pro-democracy movements in Hong Kong, which was a British colony until 1997. The U.K. government — which in 2015 declared a “golden era” of Sino-British relations — failed to prevent the Chinese government from unwinding the “one country, two systems” principle that gave Hong Kong its democratic freedoms. But it did offer an escape route for Hong Kongers, more than 160,000 of whom immigrated to the U.K. on special visas. Among them were many prominent democracy campaigners and activists. 

Former Hong Kong politicians and activists now living in the U.K. told me that they have had their emails and social media accounts hacked and that they have been doxxed and, they believe, followed by Chinese agents. U.K.-based activists, including the prominent labor campaigner Christopher Mung and the former protest leader Finn Lau, have been put on a wanted list under Hong Kong’s National Security Law, with bounties of HK$1 million ($128,000) offered for information that leads to their arrest.

In April, NGO Safeguard Defenders alleged that the Chinese government was running unsanctioned “police stations” in British cities. Those allegations were picked up by the influential right-wing media as violations of British sovereignty, which seemingly prompted the government to start talking in more robust terms about Chinese interference in the U.K. 

But the response — under a U.K. government scheme called the Defending Democracy Task Force — is mostly focused on tackling the obvious national security challenges presented by transnational repression.

What it doesn’t address is core human rights issues, like protecting people’s rights to free speech, free association and freedom from harassment, said Andrew Chubb, a senior lecturer in Chinese politics and international relations at Lancaster University who researches transnational repression. Security agencies don’t have a mandate to deal with human rights violations on British soil, unless they present a risk to the state — meaning that victims aren’t necessarily treated as victims, but as “potential threat vectors,” Chubb said. People facing human rights issues need to take their cases individually to court.

Framing the response in terms of sovereignty and national security means that victims of transnational repression — and whether or not their rights are protected — are subject to the U.K.’s diplomatic interests. 

“India is important to the U.K.’s future strategy in the Indo-Pacific. And Saudi Arabia is important in the Middle East and as a buyer of weapons,” Chubb said. “There's a very strong interest to overlook human rights issues where they concern these countries, which have not been deemed to pose national security threats.”

Simply put, this means that if you’re being targeted by a country that hasn’t yet crossed the boundary from trading partner to geopolitical rival, you’re largely on your own.

Hong Kong activists Finn Lau and Christopher Mung, who have had bounties placed on their heads by Chinese authorities. James Manning/PA Images via Getty Images.

The concerns of the Sikh community in the U.K. wouldn’t have reached a wider audience were it not for a brazen attack in Canada. On June 18, two hooded men shot dead Hardeep Singh Nijjar, a Canadian citizen and Sikh nationalist, in a Vancouver parking lot. Nijjar had supported the establishment of a Sikh homeland called Khalistan — an idea that the Modi government aggressively opposes — and he was known to be on an Indian government wanted list. In October, Canadian Prime Minister Justin Trudeau publicly accused India of masterminding Nijjar’s death. The Indian government responded forcefully, expelling Canadian diplomats and denying its involvement. But a month later, the U.S. announced that it had foiled a plot to assassinate another supporter of Khalistan independence: Gurpatwant Singh Pannun, a dual U.S.-Canadian citizen. The murder-for-hire scheme had been directed, U.S. Federal prosecutors say, by an Indian government official.

A week before Nijjar’s murder, Avtar Singh Khanda went into the hospital in Birmingham, U.K., feeling unwell. Khanda, like Nijjar, was a vocal supporter of Khalistan independence, and his name was reported to have been included in a dossier of supposedly high-risk individuals that was handed to then-U.K. Prime Minister David Cameron by Modi in 2015.

Two days after Khanda was admitted to hospital, he was diagnosed with leukemia, complicated by blood clots. He died two days later. The coroner didn’t record the death as suspicious, but Khanda’s family and community couldn’t help but suspect foul play — acute myeloid leukemia, the form of blood cancer he was diagnosed with, can be caused by poisoning. For Khanda’s supporters, it was hard not to think of Russians like Alexander Litvinenko, who was assassinated with a lethal dose of polonium in 2006, or Sergei and Yulia Skripal, who were dosed with a nerve agent in Salisbury in 2018. 

“If it was a Russian that lived in Surrey or London, then the first thing people would think about was poison,” said Michael Polak, a barrister and human rights activist who is representing Khanda’s family. 

Polak says local police didn’t investigate the circumstances around Khanda’s death, despite his family’s pleas — something some Sikh activists say shows how little attention British authorities have paid to India’s adoption of the authoritarian playbook. 

Dabinderjit Singh, the activist, said the U.K. has been too quick to entertain the Indian government’s narrative that Khalistan separatists are terrorists and extremists. After the dossier that Modi reportedly gave to Cameron, a study into Sikh extremism was commissioned for the U.K. government-funded Centre for Research and Evidence on Security Threats. It found that there was “no threat to the British state or to the wider British public from Sikh activism.” But the idea of Sikh extremism nevertheless began to appear in government studies and news stories. In 2018, British police raided the homes of five Sikh activists in London and the West Midlands, a county to the west of London centered around the U.K.’s second city, Birmingham. West Midlands Police said at the time, in a tweet, that the raids were part of a counter-terrorism operation “into allegations of extremist activity in India and fraud offenses.” No one was prosecuted on terrorism charges as a result of the raids.

While Indian media and the Indian government openly amped up the supposed threat of Khalistan separatism in the diaspora, there were covert efforts to discredit the movement. In November 2021, the Centre for Information Resilience, a London-based research organization, uncovered a network of fake accounts, “the RealSikh Network,” on Facebook, Instagram and Twitter (now X), which pushed out messages portraying supporters of Khalistan as extremists. The aim of the network, the center said, was to “stoke cultural tensions within India and international communities.”

These tensions are rising in the U.K. Jasveer Singh said he has tracked what he believes are other attempts to drive wedges between Sikhs and Muslims in the Indian diaspora in the U.K. — social media disinformation that plays on lurid conspiracies about Muslim men grooming Sikh girls, and vice versa.

There are also signs that Modi’s Hindu nationalism is spreading to other countries with alarming consequences. Rising support for Hindu nationalism and the online demonization of minorities has already led to violence in Australia. In September 2022, Muslims and Hindus clashed in the U.K. city of Leicester. Analysts and academics have suggested the deterioration of relations between the two communities was partly due to the growing influence of right-wing Hindutva ideologies within the diaspora. Supporters of Hindu nationalism have routinely demonized Muslims in India, and tried to portray them as not really being Indian. 

The South Asian Muslim community in Leicester is largely of Indian origin. After the clashes in the city, the Indian High Commission in London issued a statement condemning “the violence against Indian Community in Leicester and vandalization of premises and symbols of Hindu religion,” making no mention of the violence against Muslims.

With an election coming in India, these kinds of tensions are only going to grow, Jasveer Singh said. “It's only a matter of time before we see serious incidents in the U.K., unfortunately.”

Singh said he feels that the Sikh community is a “political football,” being sacrificed to allow the U.K. to pursue its geopolitical aims. “We’re well aware this is tied up in trade,” he said. “It is kind of frustrating and suspicious that the U.K. government is keeping such a distance from saying anything, especially after we've seen massive floodgates opened by Trudeau and Biden. It’s like, now or never. So I guess it’s never.”

Why did we write this story?

Technology and a global authoritarian shift are making transnational repression easier than ever. The U.K., weakened by Brexit and political chaos, is uniquely vulnerable. Sikh groups are the latest to accuse the government of allowing human rights violations on British soil.

The post On British soil, foreign autocrats target their critics with impunity appeared first on Coda Story.

When deepfakes go nuclear https://www.codastory.com/authoritarian-tech/ai-nuclear-war/ Tue, 28 Nov 2023 14:01:33 +0000 https://www.codastory.com/?p=48430 Governments already use fake data to confuse their enemies. What if they start doing this in the nuclear realm?

Two servicemen sit in an underground missile launch facility. Before them is a matrix of buttons and bulbs glowing red, white and green. Old-school screens with blocky, all-capped text beam beside them. Their job is to be ready, at any time, to launch a nuclear strike. Suddenly, an alarm sounds. The time has come for them to shoot their deadly weapon.

With the correct codes input, the doors to the missile silo open, pointing a bomb at the sky. Sweat shines on their faces. For the missile to fly, both must turn their keys. But one of them balks. He picks up the phone to call their superiors.

That’s not the procedure, says his partner. “Screw the procedure,” the dissenter says. “I want somebody on the goddamn phone before I kill 20 million people.” 

Soon, the scene — which opens the 1983 techno-thriller “WarGames” — transitions to another set deep inside Cheyenne Mountain, a military outpost buried beneath thousands of feet of Colorado granite. It exists in real life and is dramatized in the movie. 

In “WarGames,” the main room inside Cheyenne Mountain hosts a wall of screens that show the red, green and blue outlines of continents and countries, and what’s happening in the skies above them. There is not, despite what the servicemen have been led to believe, a nuclear attack incoming: The alerts were part of a test sent out to missile commanders to see whether they would carry out orders. All in all, 22% failed to launch.

“Those men in the silos know what it means to turn the keys,” says an official inside Cheyenne Mountain. “And some of them are just not up to it.” But he has an idea for how to combat that “human response,” the impulse not to kill millions of people: “I think we ought to take the men out of the loop,” he says. 

From there, an artificially intelligent computer system enters the plotline and goes on to cause nearly two hours of potentially world-ending problems. 

Discourse about the plot of “WarGames” usually focuses on the scary idea that a computer nearly launches World War III by firing off nuclear weapons on its own. But the film illustrates another problem that has become more pressing in the 40 years since it premiered: The computer displays fake data about what’s going on in the world. The human commanders believe it to be authentic and respond accordingly.

In the real world, countries — or rogue actors — could use fake data, inserted into genuine data streams, to confuse enemies and achieve their aims. How to deal with that possibility, along with other consequences of incorporating AI into the nuclear weapons sphere, could make the coming years on Earth more complicated.

The word “deepfake” didn’t exist when “WarGames” came out, but as real-life AI grows more powerful, it may become part of the chain of analysis and decision-making in the nuclear realm of tomorrow. The idea of synthesized, deceptive data is one AI issue that today's atomic complex has to worry about.

You may have encountered the fruits of this technology in the form of Tom Cruise playing golf on TikTok, LinkedIn profiles for people who have never inhabited this world or, more seriously, a video of Ukrainian President Volodymyr Zelenskyy declaring the war in his country to be over. These are deepfakes — pictures or videos of things that never happened, but which can look astonishingly real. It becomes even more vexing when AI is used to create images that attempt to depict things that are indeed happening. Adobe recently caused a stir by selling AI-generated stock photos of violence in Gaza and Israel. The proliferation of this kind of material (alongside plenty of less convincing stuff) leads to an ever-present worry that any image presented as fact might actually have been fabricated or altered.

It may not matter much whether Tom Cruise was really out on the green, but the ability to see or prove what’s happening in wartime — whether an airstrike took place at a particular location or whether troops or supplies are really amassing at a given spot — can actually affect the outcomes on the ground. 

Similar kinds of deepfake-creating technologies could be used to whip up realistic-looking data — audio, video or images — of the sort that military and intelligence sensors collect and that artificially intelligent systems are already starting to analyze. It’s a concern for Sharon Weiner, a professor of international relations at American University. “You can have someone trying to hack your system not to make it stop working, but to insert unreliable data,” she explained.

James Johnson, author of the book “AI and the Bomb,” writes that when autonomous systems are used to process and interpret imagery for military purposes, “synthetic and realistic-looking data” can make it difficult to determine, for instance, when an attack might be taking place. People could use AI to gin up data designed to deceive systems like Project Maven, a U.S. Department of Defense program that aims to autonomously process images and video and draw meaning from them about what’s happening in the world.

AI’s role in the nuclear world isn’t yet clear. In the U.S., the White House recently issued an executive order about trustworthy AI, mandating in part that government agencies address the nuclear risks that AI systems bring up. But problem scenarios like some of those conjured by “WarGames” aren’t out of the realm of possibility. 

In the film, a teenage hacker taps into the military's system and starts up a game he finds called "Global Thermonuclear War." The computer displays the game data on the screens inside Cheyenne Mountain, as if it were coming from the ground. In the Rocky Mountain war room, a siren soon blares: It looks like Soviet missiles are incoming. Luckily, an official runs into the main room in a panic. “We’re not being attacked,” he yells. “It’s a simulation!”

In the real world, someone might instead try to cloak an attack with deceptive images that portray peace and quiet.

Researchers have already shown that the general idea behind this is possible: Scientists published a paper in 2021 on “deepfake geography,” or simulated satellite images. In that milieu, officials have worried about images that might show infrastructure in the wrong location or terrain that’s not true to life, messing with military plans. Los Alamos National Laboratory scientists, for instance, made satellite images that included vegetation that wasn’t real and showed evidence of drought where the water levels were fine, all for the purposes of research. You could theoretically do the same for something like troop or missile-launcher movement.

AI that creates fake data is not the only problem: AI could also be on the receiving end, tasked with analysis. That kind of automated interpretation is already ongoing in the intelligence world, although it’s unclear specifically how it will be incorporated into the nuclear sphere. For instance, AI on mobile platforms like drones could help process data in real time and “alert commanders of potentially suspicious or threatening situations such as military drills and suspicious troop or mobile missile launcher movements,” writes Johnson. That processing power could also help detect manipulation because of the ability to compare different datasets. 

But creating those sorts of capabilities can help bad actors do their fooling. “They can take the same techniques these AI researchers created, invert them to optimize deception,” said Edward Geist, an analyst at the RAND Corporation. For Geist, deception is a “trivial statistical prediction task.” But recognizing and countering that deception is where the going gets tough. It involves a “very difficult problem of reasoning under uncertainty,” he told me. Amid the generally high-stakes feel of global dynamics, and especially in conflict, countries can never be exactly sure what’s going on, who’s doing what, and what the consequences of any action may be.

There is also the potential for fakery in the form of data that’s real: Satellites may accurately display what they see, but what they see has been expressly designed to fool the automated analysis tools.

As an example, Geist pointed to Russia’s intercontinental ballistic missiles. When they are stationary, they’re covered in camo netting, making them hard to pick out in satellite images. When the missiles are on the move, special devices attached to the vehicles that carry them shoot lasers toward detection satellites, blinding them to the movement. At the same time, decoys are deployed — fake missiles dressed up as the real deal, to distract and thwart analysis. 

“The focus on using AI outstrips or outpaces the emphasis put on countermeasures,” said Weiner.

Given that both physical and AI-based deception could interfere with analysis, it may one day become hard for officials to trust any information — even the solid stuff. “The data that you're seeing is perfectly fine. But you assume that your adversary would fake it,” said Weiner. “You then quickly get into the spiral where you can’t trust your own assessment of what you found. And so there’s no way out of that problem.” 

From there, it’s distrust all the way down. “The uncertainties about AI compound the uncertainties that are inherent in any crisis decision-making,” said Weiner. Similar situations have arisen in the media, where it can be difficult for readers to tell if a story about a given video — like an airstrike on a hospital in Gaza, for instance — is real or in the right context. Before long, even the real ones leave readers feeling dubious.

Ally Sheedy and Matthew Broderick in the 1983 MGM/UA movie "WarGames." Hulton Archive/Getty Images.

More than a century ago, Alfred von Schlieffen, a German war planner, envisioned the battlefield of the future: a person sitting at a desk with telephones splayed across it, ringing in information from afar. This idea of having a godlike overview of conflict — a fused vision of goings-on — predates both computers and AI, according to Geist.

Using computers to synthesize information in real-time goes back decades too. In the 1950s, for instance, the U.S. built the Continental Air Defense Command, which relied on massive machines (then known as computers) for awareness and response. But tests showed that a majority of Soviet bombers would have been able to slip through — often because they could fool the defense system with simple decoys. “It was the low-tech stuff that really stymied it,” said Geist. Some military and intelligence officials have concluded that next-level situational awareness will come with just a bit more technological advancement than they previously thought — although this has not historically proven to be the case. “This intuition that people have is like, ‘Oh, we’ll get all the sensors, we’ll buy a big enough computer and then we’ll know everything,’” he said. “This is never going to happen.”

This type of thinking seems to be percolating once again and might show up in attempts to integrate AI in the near future. But Geist’s research, which he details in his forthcoming book “Deterrence Under Uncertainty: Artificial Intelligence and Nuclear Warfare,” shows that the military will “be lucky to maintain the degree of situational awareness we have today” if they incorporate more AI into observation and analysis in the face of AI-enhanced deception. 

“One of the key aspects of intelligence is reasoning under uncertainty,” he said. “And a conflict is a particularly pernicious form of uncertainty.” An AI-based analysis, no matter how detailed, will only ever be an approximation — and in uncertain conditions there’s no approach that “is guaranteed to get an accurate enough result to be useful.” 

In the movie, with the proclamation that the Soviet missiles are merely simulated, the crisis is temporarily averted. But the wargaming computer, unbeknownst to the authorities, is continuing to play. As it keeps making moves, it displays related information about the conflict on the big screens inside Cheyenne Mountain as if it were real and missiles were headed to the States. 

It is only when the machine’s inventor shows up that the authorities begin to think that maybe this could all be fake. “Those blips are not real missiles,” he says. “They’re phantoms.”

To rebut fake data, the inventor points to something indisputably real: The attack on the screens doesn’t make sense. Such a full-scale wipeout would immediately prompt the U.S. to total retaliation — meaning that the Soviet Union would be almost ensuring its own annihilation. 

Using his own judgment, the general calls off the U.S.’s retaliation. As he does so, the missiles onscreen hit the 2D continents, colliding with the map in circular flashes. But outside, in the real world, all is quiet. It was all a game. “Jesus H. Christ,” says an airman at one base over the comms system. “We’re still here.”

Similar nonsensical alerts have appeared on real-life screens. Once, in the U.S., alerts of incoming missiles came through due to a faulty computer chip. The system that housed the chip sent erroneous missile alerts on multiple occasions. Authorities had reason to suspect the data was likely false. But in two instances, they began to proceed as if the alerts were real. “Even though everyone seemed to realize that it’s an error, they still followed the procedure without seriously questioning what they were getting,” said Pavel Podvig, senior researcher at the United Nations Institute for Disarmament Research and a researcher at Princeton University. 

In Russia, meanwhile, operators did exercise independent thought in a similar scenario, when an erroneous preliminary launch command was sent. “Only one division command post actually went through the procedure and did what they were supposed to do,” he said. “All the rest said, ‘This has got to be an error,’” because it would have been a surprise attack not preceded by increasing tension, as expected. It goes to show, Podvig said, “people may or may not use their judgment.” 

You can imagine in the near future, Podvig continued, nuclear operators might see an AI-generated assessment saying circumstances were dire. In such a situation, there is a need “to instill a certain kind of common sense,” he said, and to make sure that people don’t just take whatever appears on a screen as gospel. “The basic assumptions about scenarios are important too,” he added. “Like, do you assume that the U.S. or Russia can just launch missiles out of the blue?”

People, for now, will likely continue to exercise judgment about attacks and responses — keeping, as the jargon goes, a “human in the loop.”

The idea of asking AI to make decisions about whether a country will launch nuclear missiles isn’t an appealing option, according to Geist, though it does appear in movies a lot. “Humans jealously guard these prerogatives for themselves,” Geist said. 

“It doesn't seem like there’s much demand for a Skynet,” he said, referencing another movie, “Terminator,” where an artificial general superintelligence launches a nuclear strike against humanity.

Podvig, an expert in Russian nuclear goings-on, doesn’t see much desire for autonomous nuclear operations in that country. 

“There is a culture of skepticism about all this fancy technological stuff that is sent to the military,” he said. “They like their things kind of simple.” 

Geist agreed. While he admitted that Russia is not totally transparent about its nuclear command and control, he doesn’t see much interest in handing the reins to AI.

China, of course, is generally very interested in AI, and specifically in pursuing artificial general intelligence, a type of AI which can learn to perform intellectual tasks as well as or even better than humans can.

William Hannas, lead analyst at the Center for Security and Emerging Technology at Georgetown University, has used open-source scientific literature to trace developments and strategies in China’s AI arena. One big development is the founding of the Beijing Institute for General Artificial Intelligence, backed by the state and directed by former UCLA professor Song-Chun Zhu, who has received millions of dollars of funding from the Pentagon, including after his return to China. 

Hannas described how China has shown a national interest in “effecting a merger of human and artificial intelligence metaphorically, in the sense of increasing mutual dependence, and literally through brain-inspired AI algorithms and brain-computer interfaces.”

“A true physical merger of intelligence is when you're actually lashed up with the computing resources to the point where it does really become indistinguishable,” he said. 

That’s relevant to defense discussions because, in China, there’s little separation between regular research and the military. “Technological power is military power,” he said. “The one becomes the other in a very, very short time.” Hannas, though, doesn’t know of any AI applications in China’s nuclear weapons design or delivery. Recently, U.S. President Joe Biden and Chinese President Xi Jinping met and made plans to discuss AI safety and risk, which could lead to an agreement about AI’s use in military and nuclear matters. Also, in August, regulations on generative AI developed by China’s Cyberspace Administration went into effect, making China a first mover in the global race to regulate AI.

It’s likely that the two countries would use AI to help with their vast streams of early-warning data. And just as AI can help with interpretation, countries can also use it to skew that interpretation, to deceive and obfuscate. All three tasks are age-old military tactics — now simply upgraded for a digital, unstable age.

Science fiction convinced us that a Skynet was both a likely option and closer on the horizon than it actually is, said Geist. AI will likely be used in much more banal ways. But the ideas that dominate “WarGames” and “Terminator” have endured for a long time. 

“The reason people keep telling this story is it’s a great premise,” said Geist. “But it’s also the case,” he added, “that there’s effectively no one who thinks of this as a great idea.” 

It’s probably so resonant because people tend to have a black-and-white understanding of innovation. “There’s a lot of people very convinced that technology is either going to save us or doom us,” said Nina Miller, who formerly worked at the Nuclear Threat Initiative and is currently a doctoral student at the Massachusetts Institute of Technology. The notion of an AI-induced doomsday scenario is alive and well in the popular imagination and also has made its mark in public-facing discussions about the AI industry. In May, dozens of tech CEOs signed an open letter declaring that “mitigating the risk of extinction from AI should be a global priority,” without saying much about what exactly that means. 

But even if AI does launch a nuclear weapon someday (or provide false information that leads to an atomic strike), humans still made the decisions that led us there. Humans created the AI systems and made choices about where to use them. 

And, besides, in the case of a hypothetical catastrophe, AI didn’t create the environment that led to a nuclear attack. “Surely the underlying political tension is the problem,” said Miller. And that is thanks to humans and their desire for dominance — or their motivation to deceive. 

Maybe the humans need to learn what the computer did at the end of “WarGames.” “The only winning move,” it concludes, “is not to play.”

Why did we write this story?

AI-generated deepfakes could soon begin to affect military intelligence communications. In line with our focus on authoritarianism and technology, this story delves into the possible consequences that could emerge as AI makes its way into the nuclear arena.

The post When deepfakes go nuclear appeared first on Coda Story.

In Africa’s first ‘safe city,’ surveillance reigns https://www.codastory.com/authoritarian-tech/africa-surveillance-china-magnum/ Wed, 08 Nov 2023 13:33:21 +0000 https://www.codastory.com/?p=48029 Nairobi boasts nearly 2,000 Huawei surveillance cameras citywide. But in the nine years since they were installed, it is hard to see their benefits.

Nairobi purchased its massive traffic surveillance system in 2014 as the country was grappling with a terrorism crisis.
Today, the city boasts nearly 2,000 Huawei surveillance cameras citywide, all sending data to the police.
On paper, the system promised the ultimate silver bullet: It put real-time surveillance tools into the hands of more than 9,000 police officers. But do the cameras work?

In Africa’s first ‘safe city,’ surveillance reigns

Lights, cameras, what action? In Nairobi, the question looms large for millions of Kenyans, whose every move is captured by the flash of a CCTV camera at intersections across the capital.

Though government promises of increased safety and better traffic control seem to play on a loop, crime levels here continue to rise. In the 1990s, Nairobi, with its abundant grasslands, forests and rivers, was known as the “Green City in the Sun.” Today, we more often call it “Nairobbery.”

I see it every time I venture into Nairobi’s Central Business District. Navigating downtown Nairobi on foot can feel like an extreme sport. I clutch my handbag, keep my phone tucked away and walk swiftly to dodge “boda boda” (motorbike) riders and hawkers whose claim on pedestrian walks is quasi-authoritarian. Every so often, I’ll hear a woman scream “mwizi!” and then see a thief dart down an alleyway. If not that, it will be a motorist hooting loudly at a traffic stop to alert another driver that their vehicle is being stripped of its parts, right then and there.

Every city street is dotted with cameras. They fire off a blinding flash each time a car drives past. But other than that, they seem to have little effect. I have yet to hear of or witness an incident in which thugs were about to rob someone, looked up, saw the CCTV cameras then stopped and walked away.

Nairobi launched its massive traffic surveillance system in 2014 as the country was grappling with a terrorism crisis. A series of major attacks by al-Shabab militants, including the September 2013 attack at Nairobi’s Westgate shopping complex in which 67 people were killed, left the city reeling and politicians under extreme pressure to implement solutions. A modern, digitized surveillance system became a national security priority. And the Chinese tech hardware giant Huawei was there to provide it. 

A joint contract between Huawei and Kenya’s leading telecom, Safaricom, brought us the Integrated Urban Surveillance System, and we became the site of Huawei’s first “Safe City” project in Africa. Hundreds of cameras were deployed across Nairobi’s Central Business District and major highways, all networked and sending data to Kenya’s National Police Headquarters. Nairobi today boasts nearly 2,000 CCTV cameras citywide.

On paper, the system promised the ultimate silver bullet: It put real-time surveillance tools into the hands of more than 9,000 police officers to support crime prevention, accelerated responses and recovery. Officials say police monitor the Kenyan capital at all times and quickly dispatch first responders in case of an emergency.

But do the cameras work? Nine years since they were installed, it is hard to see the benefits of these electronic eyes that follow us around the city day after day.

Early on, Huawei claimed that from 2014 to 2015, crime had decreased by 46% in areas supported by their technologies, but the company has since scrubbed its website of this report. Kenya’s National Police Service reported a smaller drop in crime rates in 2015 in Nairobi, and an increase in Mombasa, the other major city where Huawei’s cameras were deployed. But by 2017, Nairobi’s reported crime rates surpassed pre-installation levels.

According to a June 2023 report by Coda’s partners at the Edgelands Institute, an organization that studies the digitalization of urban security, there has been a steady rise in criminal activity in Nairobi for nearly a decade.

So why did Nairobi adopt this system in the first place? One straightforward answer: Kenya had a problem, and China offered a solution. The Kenyan authorities had to take action and Huawei had cameras to sell. So they made a deal.

Nairobi’s surveillance apparatus today has become part of the “Digital Silk Road” — China’s quest to wire the world. It is a central component of the Belt and Road Initiative, an ambitious global infrastructure development strategy that has spread China’s economic and political influence across the world. 

This hasn’t been easy for China in the industrialized West, with companies like Huawei battling sanctions by the U.S. and legal obstacles both in the U.K. and European Union countries. But in Africa, the Chinese technology giant has a quasi-monopoly on telecommunications infrastructure and technology deployment. Components from the company make up around 70% of 4G networks across the continent.

Chinese companies also have had a hand in building or renovating nearly 200 government buildings across the continent. They have built secure intra-governmental telecommunications networks and gifted computers to at least 35 African governments, according to research by the Heritage Foundation.

Grace Bomu Mutung’u, a Kenyan scholar who studies IT policy in Kenya and across Africa, currently working with the Open Society Foundations, sees this as part of a race to develop and dominate network infrastructure, and to use this position to gather and capitalize on data that flows through networks.

“The Chinese are way ahead of imperial companies because they are approaching it from a different angle,” she told me. She posits that for China, the Digital Silk Road is meant to set a foundation for an artificial intelligence-based economy that China can control and profit from. Mutung’u derided African governments for being so beholden to development that their leaders keep missing the forest for the trees. “We seem to be caught in this big race. We have yet to define for ourselves what we want from this new economy.”

The failure to define what Africa wants from the data-driven economy and an obsession with basic infrastructure development projects is taking the continent through what feels like another Berlin scramble, Mutung’u told me, referring to the period between the 19th and early 20th centuries that saw European powers increase their stake in Africa from around 10% to about 90%.

“Everybody wants to claim a part of Africa,” she said. “If it wasn’t the Chinese, there would be somebody else trying to take charge of resources.” Mutung’u was alluding to China’s strategy of financing African infrastructure projects in exchange for the continent’s natural resources.

A surveillance camera in one of Nairobi's matatu buses.

Nairobi was the first city in Africa to deploy Huawei’s Safe City system. Since then, cities in Egypt, Nigeria, South Africa and a dozen other countries across the continent have followed suit. All this has drawn scrutiny from rights groups who see the company as a conduit in the exportation of China’s authoritarian surveillance practices. 

Indeed, Nairobi’s vast web of networked CCTV cameras offers little in the way of transparency or accountability, and experts like Mutung’u say the country doesn’t have sufficient data protection laws in place to prevent the abuse of data moving through surveillance systems. When the surveillance system was put in place in 2014, the country had no data protection laws. Kenya’s Personal Data Protection Act came into force in 2019, but the Office of the Data Protection Commissioner has yet to fully implement and enforce the law.

In a critique of what he described at the time as a “massive new spying system,” human rights lawyer and digital rights expert Ephraim Kenyanito argued that the government and Safaricom would be “operating this powerful new surveillance network effectively without checks and balances.” A few years later, in 2017, Privacy International raised concerns about the risks of capturing and storing all this data without clear policies on how that data should be treated or protected.

There was good reason to worry. In January 2018, an investigation by the French newspaper Le Monde revealed that there had been a data breach at the African Union headquarters in Addis Ababa following a hacking incident. Every night for five years, between 2012 and 2017, data downloaded from AU servers was sent to servers located in China. The Le Monde investigation alleged the involvement of the Chinese government, which denied the accusation. In March 2023, another massive cyber attack at AU headquarters left employees without access to the internet and their work emails for weeks.

The most recent incident brought to the fore growing concerns among local experts and advocacy groups about the surveillance of African leaders as Chinese construction companies continue to win contracts to build sensitive African government offices, and Chinese tech companies continue to supply our telecommunication and surveillance infrastructure. But if these fears have had any effect on agreements between the powers that be, it is not evident.

As the cameras on the streets of Nairobi continue to flash, researchers continue to ponder how, if at all, digital technologies are being used in the approach to security, coexistence and surveillance in the capital city.

The Edgelands Institute report found little evidence linking the adoption of surveillance technology and a decrease in crime in Kenya. It did find that a driving factor in rising crime rates was unemployment. For people under 35, the unemployment rate has almost doubled since 2015 and now hovers at 13.5%.

In a 2022 survey by Kenya’s National Crime Research Centre, a majority of respondents identified community policing as the most effective method of crime reduction. Only 4.2% of respondents identified the use of technology such as CCTV cameras as an effective method.

The system has meanwhile raised concerns among privacy-conscious Kenyans about potential infringements on their right to privacy and about the technical capabilities of these technologies, including AI facial recognition. The secrecy often surrounding this surveillance, the Edgelands Institute report notes, complicates trust between citizens and the state.

It may be some time yet before the lights and the cameras lead to action.

Photographer Lindokuhle Sobekwa's portable camera obscura uses a box and a magnifying glass to take images for this story.

The post In Africa’s first ‘safe city,’ surveillance reigns appeared first on Coda Story.

When AI doesn’t speak your language https://www.codastory.com/authoritarian-tech/artificial-intelligence-minority-language-censorship/ Fri, 20 Oct 2023 14:07:03 +0000 https://www.codastory.com/?p=47275 Better tech could do a lot of good for minority language speakers — but it could also make them easier to surveil

If you want to send a text message in Mongolian, it can be tough – it’s a script that most software doesn’t recognize. But for some people in Inner Mongolia, an autonomous region in northern China, that’s a good thing.

When authorities in Inner Mongolia announced in 2020 that Mongolian would no longer be the language of instruction in schools, ethnic Mongolians — who make up about 18% of the region’s population — feared the loss of their language, one of the last remaining markers of their distinctive identity. The news, and then plans for protest, flowed across WeChat, China’s largest messaging service. Parents were soon marching by the thousands in the streets of the regional capital, demanding that the decision be reversed.

With the remarkable exception of the so-called Zero Covid protests of 2022, demonstrations of any size are incredibly rare in China, partially because online surveillance prevents large numbers of people from openly discussing sensitive issues in Mandarin, much less planning public marches. But because automated surveillance technologies have a hard time with Mongolian, protesters had the advantage of being able to coordinate with relative freedom.

Most of the world's writing systems have been digitized under a centralized standard known as Unicode, but the Mongolian script was encoded so sloppily that it is barely usable. Instead, people use a jumble of competing, often incompatible programs when they need to type in Mongolian. WeChat has a Mongolian keyboard, but it’s unwieldy, and users often prefer to send each other screenshots of text instead. The constant exchange of images is inconvenient, but it has the unintended benefit of being much more complicated for authorities to monitor and censor.
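
A minimal sketch of why screenshots slip past automated monitoring, assuming a simple keyword-based text filter (the kind of string matching described here, not any platform's actual system): typed Mongolian text, which Unicode assigns to the block U+1800 to U+18AF, can be scanned and matched, while an image of the same words gives the filter nothing to search. The "blocked" term below is an arbitrary run of Mongolian-block code points chosen purely for illustration.

# Illustrative sketch only, not any platform's real moderation code (Python).
# The blocked term is an arbitrary sequence of Mongolian-script code points
# (Unicode block U+1800-U+18AF) with no intended meaning.
BLOCKED_TERMS = ["\u182a\u1820\u1822\u182f"]

def flag_message(text: str) -> bool:
    """Return True if any blocked term appears in the message text."""
    return any(term in text for term in BLOCKED_TERMS)

print(flag_message("\u182a\u1820\u1822\u182f \u1828\u1820\u1837"))  # True: typed Mongolian text can be string-matched
print(flag_message("IMG_2041.jpg"))  # False: a screenshot is just pixels; the filter sees only a filename

Making images searchable would require a model that can read the script off pictures, which is exactly the capability the state-funded research described later in this story would provide.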

All but 60 of the world’s roughly 7,000 languages are considered “low-resource” by artificial intelligence researchers. Mongolian belongs to the vast majority of languages barely represented online, whose speakers deal with many challenges resulting from the predominance of English on the global internet. As technology improves, automated processes across the internet — from search engines to social media sites — may start to work a lot better for under-resourced languages. This could do a lot of good, giving those language speakers access to all kinds of tools and markets, but it will likely also reduce the degree to which languages like Mongolian fly under the radar of censors. The tradeoff for languages that have historically hovered on the margins of the internet is between safety and convenience on one hand, and freedom from censorship and intrusive eavesdropping on the other.

Back in Inner Mongolia, when parents were posting on WeChat about their plans to protest, it became clear that the app’s algorithms couldn’t make sense of the jpegs of Mongolian cursive, said Soyonbo Borjgin, a local journalist who covered the protests. The images and the long voice messages that protesters would exchange were protected by the Chinese state’s ignorance — there were no AI resources available to monitor them, and overworked police translators had little chance of surveilling all possibly subversive communication. 

China’s efforts to stifle the Mongolian language within its borders have only intensified since the protests. Keen on the technological dimensions of the battle, Borjgin began looking into a machine learning system that was being developed at Inner Mongolia University. The system would allow computers to read images of the Mongolian script, after being fed and trained on digital reams of printed material that had been published when Mongolian still had Chinese state support. While reporting the story, Borjgin was told by the lead researcher that the project had received state money. Borjgin took this as a clear signal: The researchers were getting funding because what they were doing amounted to a state security project. The technology would likely be used to prevent future dissident organizing.

First-graders on the first day of school in Hohhot, Inner Mongolia Autonomous Region of China in August 2023. Liu Wenhua/China News Service/VCG via Getty Images.

Until recently, AI has only worked well for the vanishingly small number of languages with large bodies of texts to train the technology on. Even national languages with hundreds of millions of speakers, like Bangla, have largely remained outside the priorities of tech companies. Last year, though, both Google and Meta announced projects to develop AI for under-resourced languages. But while newer AI models are able to generate some output in a wide set of languages, there’s not much evidence to suggest that it’s high quality. 

Gabriel Nicholas, a research fellow at the Center for Democracy and Technology, explained that once tech companies have established the capacity to process a new language, they have a tendency to congratulate themselves and then move on. A market dominated by “big” languages gives them little incentive to keep investing in improvements. Hellina Nigatu, a computer science PhD student at the University of California, Berkeley, added that low-resource languages face the risk of “constantly trying to catch up” — or even losing speakers — to English.

Researchers also warn that even as the accuracy of machine translation improves, language models miss out on important, culturally specific details that can have real-world consequences. Companies like Meta, which partially rely on AI to review social media posts for things like hate speech and violence, have run into problems when they try to use the technology for under-resourced languages. Because they’ve been trained on just the few texts available, their AI systems too often have an incomplete picture of what words mean and how they’re used.

Arzu Geybulla, an Azerbaijani journalist who specializes in digital censorship, said that one problem with using AI to moderate social media content in under-resourced languages is the “lack of understanding of cultural, historical, political nuances in the way the language is being used on these platforms.” In Azerbaijan, where violence against Armenians is regularly celebrated online, the word “Armenian” itself is often used as a slur to attack dissidents. Because the term is innocuous in most other contexts, it’s easy for AI and even non-specialist human moderators to overlook its use. She also noted that AI used by social media platforms often lumps the Azerbaijani language together with languages spoken in neighboring countries: Azerbaijanis frequently send her screenshots of automated replies in Russian or Turkish to the hate speech reports they’d submitted in Azerbaijani.

But Geybulla believes improving AI for monitoring hate speech and incitement in Azerbaijani will lock in an essentially defective system. “I’m totally against training the algorithm,” she told me. “Content moderation needs to be done by humans in all contexts.” In the hands of an authoritarian government, sophisticated AI for previously neglected languages can become a tool for censorship. 

According to Geybulla, Azerbaijani currently has such “an old school system of surveillance and authoritarianism that I wouldn't be surprised if they still rely on Soviet methods.” Given the government’s demonstrated willingness to jail people for what they say online and to engage in mass online astroturfing, she believes that improving automated flagging for the Azerbaijani language would only make the repression worse. Instead of strengthening these easily abusable technologies, she argues that companies should invest in human moderators. “If I can identify inauthentic accounts on Facebook, surely someone at Facebook can do that too, and faster than I do,” she said. 

Different languages require different approaches when building AI. Indigenous languages in the Americas, for instance, show forms of complexity that are hard to account for without either large amounts of data — which they currently do not have — or diligent expert supervision. 

One such expert is Michael Running Wolf, founder of the First Languages AI Reality initiative, who says developers underestimate the challenge of American languages. While working as a researcher on Amazon’s Alexa, he began to wonder what was keeping him from building speech recognition for Cheyenne, his mother’s language. Part of the problem, he realized, was computer scientists’ unwillingness to recognize that American languages might present challenges that their algorithms couldn’t understand. “All languages are seen through the lens of English,” he told me.

Running Wolf thinks Anglocentrism is mostly to blame for the neglect that Indigenous languages have faced in the tech world. “The AI field, like any other space, is occupied by people who are set in their ways and unintentionally have a very colonial perspective,” he told me. “It's not as if we haven't had the ability to create AI for Indigenous languages until today. It's just no one cares.” 

American languages were put in this position deliberately. Until well into the 20th century, the U.S. government’s policy position on Indigenous American languages was eradication. From 1860 to 1978, tens of thousands of children were forcibly separated from their parents and kept in boarding schools where speaking their mother tongues brought beatings or worse. Nearly all Indigenous American languages today are at immediate risk of extinction. Running Wolf hopes AI tools like machine translation will make Indigenous languages easier to learn to fluency, making up for the current lack of materials and teachers and reviving the languages as primary means of communication.

His project also relies on training young Indigenous people in machine learning — he’s already held a coding boot camp on the Lakota reservation. If his efforts succeed, he said, “we'll have Indigenous peoples who are the experts in natural language processing.” Running Wolf said he hopes this will help tribal nations to build up much-needed wealth within the booming tech industry.

The idea of his research allowing automated surveillance of Indigenous languages doesn’t scare Running Wolf so much, he told me. He compared their future online to their current status in the high school basketball games that take place across North and South Dakota. Indigenous teams use Lakota to call plays without their opponents understanding. “And guess what? The non-Indigenous teams are learning Lakota so that they know what the Lakota are doing,” Running Wolf explained. “I think that's actually a good thing.”

The problem of surveillance, he said, is “a problem of success.” He hopes for a future in which Indigenous computer scientists are “dealing with surveillance risk because the technology's so prevalent and so many people speak Chickasaw, so many people speak Lakota or Cree, or Ute — there's so many speakers that the NSA now needs to have the AI so that they can monitor us,” referring to the U.S. National Security Agency, infamous for its snooping on communications at home and abroad.

Not everyone wishes for that future. The Cheyenne Nation, for instance, wants little to do with outsiders, he told me, and isn’t currently interested in using the systems he’s building. “I don’t begrudge that perspective because that’s a perfectly healthy response to decades, generations of exploitation,” he said.

Like Running Wolf, Borjgin believes that in some cases, opening a language up to online surveillance is a sacrifice necessary to keep it alive in the digital era. “I somewhat don’t exist on the internet,” he said. Because their language has such a small online culture, he said, “there’s an identity crisis for Mongols who grew up in the city,” pushing them instead towards Mandarin. 

Despite the intense political repression that some of China’s other ethnic minorities face, Borjgin said, “one thing I envy about Tibetan and Uyghur is once I ask them something they will just google it with their own input system and they can find the result in one second.” Even though he knows that it will be used to stifle dissent, Borjgin still supports improving the digitization of the Mongol script: “If you don't have the advanced technology, if it only stays to the print books, then the language will be eradicated. I think the tradeoff is okay for me.”

Why did we write this story?

The AI industry so far is dominated by technology built by and for English speakers. This story asks what the technology looks like for speakers of less common languages, and how that might change in the near term.

Silicon Savanna: The workers taking on Africa’s digital sweatshops https://www.codastory.com/authoritarian-tech/kenya-content-moderators/ Wed, 11 Oct 2023 11:11:00 +0000 https://www.codastory.com/stayonthestory/silicon-savannah-taking-on-africas-digital-sweatshops-in-the-heart-of-silicon-savannah/ Content moderators for TikTok, Meta and ChatGPT are demanding that tech companies reckon with the human toll of their enterprise.


 Silicon Savanna: The workers taking on Africa's digital sweatshops

This story was updated at 6:30 ET on October 16, 2023

Wabe didn’t expect to see his friends’ faces in the shadows. But it happened after just a few weeks on the job.

He had recently signed on with Sama, a San Francisco-based tech company with a major hub in Kenya’s capital. The middle-man company was providing the bulk of Facebook’s content moderation services for Africa. Wabe, whose name we’ve changed to protect his safety, had previously taught science courses to university students in his native Ethiopia.

Now, the 27-year-old was reviewing hundreds of Facebook photos and videos each day to decide if they violated the company’s rules on issues ranging from hate speech to child exploitation. He would get between 60 and 70 seconds to make a determination, sifting through hundreds of pieces of content over an eight-hour shift.

One day in January 2022, the system flagged a video for him to review. He opened up a Facebook livestream of a macabre scene from the civil war in his home country. What he saw next was dozens of Ethiopians being “slaughtered like sheep,” he said. 

Then Wabe took a closer look at their faces and gasped. “They were people I grew up with,” he said quietly. People he knew from home. “My friends.”

Wabe leapt from his chair and stared at the screen in disbelief. He felt the room close in around him. Panic rising, he asked his supervisor for a five-minute break. “You don’t get five minutes,” she snapped. He turned off his computer, walked off the floor, and beelined to a quiet area outside of the building, where he spent 20 minutes crying by himself.

Wabe had been building a life for himself in Kenya while, back home, a civil war raged, claiming the lives of an estimated 600,000 people from 2020 to 2022. Now he was seeing it play out live on the screen before him.

That video was only the beginning. Over the next year, the job brought him into contact with videos he still can’t shake: recordings of people being beheaded, burned alive, eaten.

“The word evil is not equal to what we saw,” he said. 

Yet he had to stay in the job. Pay was low — less than two dollars an hour, Wabe told me — but going back to Ethiopia, where he had been tortured and imprisoned, was out of the question. Wabe worked with dozens of other migrants and refugees from other parts of Africa who faced similar circumstances. Money was too tight — and life too uncertain — to speak out or turn down the work. So he and his colleagues kept their heads down and steeled themselves each day for the deluge of terrifying images.

Over time, Wabe began to see moderators as “soldiers in disguise” — a low-paid workforce toiling in the shadows to make Facebook usable for billions of people around the world. But he also noted a grim irony in the role he and his colleagues played for the platform’s users: “Everybody is safe because of us,” he said. “But we are not.”  

Wabe said dozens of his former colleagues in Sama’s Nairobi offices now suffer from post-traumatic stress disorder. Wabe has also struggled with thoughts of suicide. “Every time I go somewhere high, I think: What would happen if I jump?” he wondered aloud. “We have been ruined. We were the ones protecting the whole continent of Africa. That’s why we were treated like slaves.”

The West End Towers house the Nairobi offices of Majorel, a Luxembourg-based content moderation firm with over 22,000 employees on the African continent.

To most people using the internet — most of the world — this kind of work is literally invisible. Yet it is a foundational component of the Big Tech business model. If social media sites were flooded with videos of murder and sexual assault, most people would steer clear of them — and so would the advertisers that bring the companies billions in revenue.

Around the world, an estimated 100,000 people work for companies like Sama, third-party contractors that supply content moderation services for the likes of Facebook’s parent company Meta, Google and TikTok. But while it happens at a desk, mostly on a screen, the demands and conditions of this work are brutal. Current and former moderators I met in Nairobi in July told me this work has left them with post-traumatic stress disorder, depression, insomnia and thoughts of suicide.

These “soldiers in disguise” are reaching a breaking point. Because of people like Wabe, Kenya has become ground zero in a battle over the future of content moderation in Africa and beyond. On one side are some of the most powerful and profitable tech companies on earth. On the other are young African content moderators who are stepping out from behind their screens and demanding that Big Tech companies reckon with the human toll of their enterprise.

In May, more than 150 moderators in Kenya, who keep the worst of the worst off of platforms like Facebook, TikTok and ChatGPT, announced their drive to create a trade union for content moderators across Africa. The union would be the first of its kind on the continent and potentially in the world.

There are also major pending lawsuits before Kenya’s courts targeting Meta and Sama. More than 180 content moderators — including Wabe — are suing Meta for $1.6 billion over poor working conditions, low pay and what they allege was unfair dismissal after Sama ended its content moderation agreement with Meta and Majorel picked up the contract instead. The plaintiffs say they were blacklisted from reapplying for their jobs after Majorel stepped in. In August, a judge ordered both parties to settle the case out of court, but the mediation broke down on October 16 after the plaintiffs' attorneys accused Meta of scuttling the negotiations and ignoring moderators' requests for mental health services and compensation. The lawsuit will now proceed to Kenya's employment and labor relations court, with an upcoming hearing scheduled for October 31.

The cases against Meta are unprecedented. According to Amnesty International, it is the “first time that Meta Platforms Inc will be significantly subjected to a court of law in the global south.” Forthcoming court rulings could jeopardize Meta’s status in Kenya and the content moderation outsourcing model upon which it has built its global empire. 

Meta did not respond to requests for comment about moderators’ working conditions and pay in Kenya. In an emailed statement, a spokesperson for Sama said the company cannot comment on ongoing litigation but is “pleased to be in mediation” and believes “it is in the best interest of all parties to come to an amicable resolution.”

Odanga Madung, a Kenya-based journalist and a fellow at the Mozilla Foundation, believes the flurry of litigation and organizing marks a turning point in the country’s tech labor trajectory. 

“This is the tech industry’s sweatshop moment,” Madung said. “Every big corporate industry here — oil and gas, the fashion industry, the cosmetics industry — have at one point come under very sharp scrutiny for the reputation of extractive, very colonial type practices.”

Nairobi may soon witness a major shift in the labor economics of content moderation. But it also offers a case study of this industry’s powerful rise. The vast capital city — sometimes called “Silicon Savanna” — has become a hub for outsourced content moderation jobs, drawing workers from across the continent to review material in their native languages. An educated, predominantly English-speaking workforce makes it easy for employers from overseas to set up satellite offices in Kenya. And the country’s troubled economy has left workers desperate for jobs, even when wages are low.

Sameer Business Park, a massive office compound in Nairobi’s industrial zone, is home to Nissan, the Bank of Africa, and Sama’s local headquarters. But just a few miles away lies one of Nairobi’s largest informal settlements, a sprawl of homes made out of scraps of wood and corrugated tin. The slum’s origins date back to the colonial era, when the land it sits on was a farm owned by white settlers. In the 1960s, after independence, the surrounding area became an industrial district, attracting migrants and factory workers who set up makeshift housing on land adjacent to Sameer Business Park.

For companies like Sama, the conditions here were ripe for investment by 2015, when the firm established a business presence in Nairobi. Headquartered in San Francisco, the self-described “ethical AI” company aims to “provide individuals from marginalized communities with training and connections to dignified digital work.” In Nairobi, it has drawn its labor from residents of the city’s informal settlements, including 500 workers from Kibera, one of the largest slums in Africa. In an email, a Sama spokesperson confirmed moderators in Kenya made between $1.46 and $3.74 per hour after taxes.

Grace Mutung’u, a Nairobi-based digital rights researcher at Open Society Foundations, put this into local context for me. On the surface, working for a place like Sama seemed like a huge step up for young people from the slums, many of whom had family roots in factory work. It was less physically demanding and more lucrative. Compared to manual labor, content moderation “looked very dignified,” Mutung’u said. She recalled speaking with newly hired moderators at an informal settlement near the company’s headquarters. Unlike their parents, many of them were high school graduates, thanks to a government initiative in the mid-2000s to get more kids in school.

“These kids were just telling me how being hired by Sama was the dream come true,” Mutung’u told me. “We are getting proper jobs, our education matters.” These younger workers, Mutung’u continued, “thought: ‘We made it in life.’” They thought they had left behind the poverty and grinding jobs that wore down their parents’ bodies. Until, she added, “the mental health issues started eating them up.” 

Today, 97% of Sama’s workforce is based in Africa, according to a company spokesperson. And despite its stated commitment to providing “dignified” jobs, it has caught criticism for keeping wages low. In 2018, the company’s late founder argued against raising wages for impoverished workers from the slum, reasoning that it would “distort local labor markets” and have “a potentially negative impact on the cost of housing, the cost of food in the communities in which our workers thrive.”

Content moderation did not become an industry unto itself by accident. In the early days of social media, when “don’t be evil” was still Google’s main guiding principle and Facebook was still cheekily aspiring to connect the world, this work was performed by employees in-house for the Big Tech platforms. But as companies aspired to grander scales, seeking users in hundreds of markets across the globe, it became clear that their internal systems couldn’t stem the tide of violent, hateful and pornographic content flooding people’s newsfeeds. So they took a page from multinational corporations’ globalization playbook: They decided to outsource the labor.

More than a decade on, content moderation is now an industry that is projected to reach $40 billion by 2032. Sarah T. Roberts, a professor of information studies at the University of California at Los Angeles, wrote the definitive study on the moderation industry in her 2019 book “Behind the Screen.” Roberts estimates that hundreds of companies provide these outsourced services worldwide, employing upwards of 100,000 moderators. In its own transparency documents, Meta says that more than 15,000 people moderate its content at more than 20 sites around the world. Some (it doesn’t say how many) are full-time employees of the social media giant, while others (it doesn’t say how many) work for the company’s contracting partners.

Kauna Malgwi was once a moderator with Sama in Nairobi. She was tasked with reviewing content on Facebook in her native language, Hausa. She recalled watching coworkers scream, faint and develop panic attacks on the office floor as images flashed across their screens. Originally from Nigeria, Malgwi took a job with Sama in 2019, after coming to Nairobi to study psychology. She told me she also signed a nondisclosure agreement stipulating that she would face legal consequences if she told anyone she was reviewing content on Facebook. Malgwi was confused by the agreement, but moved forward anyway. She was in graduate school and needed the money.

A 28-year-old moderator named Johanna described a similar decline in her mental health after watching TikTok videos of rape, child sexual abuse, and even a woman ending her life in front of her own children. Johanna currently works with the outsourcing firm Majorel, reviewing content on TikTok, and asked that we identify her using a pseudonym, for fear of retaliation by her employer. She told me she’s extroverted by nature, but after a few months at Majorel, she became withdrawn and stopped hanging out with her friends. Now, she dissociates to get through the day at work. “You become a different person,” she told me. “I’m numb.”

This is not the experience that the Luxembourg-based multinational — which employs more than 22,000 people across the African continent — touts in its recruitment materials. On a page about its content moderation services, Majorel’s website features a photo of a woman wearing a pair of headphones and laughing. It highlights the company’s “Feel Good” program, which focuses on “team member wellbeing and resiliency support.”

These resources include 24/7 psychological support for employees, “together with a comprehensive suite of health and well-being initiatives that receive high praise from our people,” Karsten König, an executive vice president at Majorel, said in an emailed statement. “We know that providing a safe and supportive working environment for our content moderators is the key to delivering excellent services for our clients and their customers. And that’s what we strive to do every day.”

But Majorel’s mental health resources haven’t helped ease Johanna’s depression and anxiety. She says the company provides moderators in her Nairobi office with on-site therapists who see employees in individual and group “wellness” sessions. But Johanna told me she stopped attending the individual sessions after her manager approached her about a topic she had shared in confidence with her therapist. “They told me it was a safe space,” Johanna explained, “but I feel that they breached that part of the confidentiality so I do not do individual therapy.” TikTok did not respond to a request for comment by publication.

Instead, she looked for other ways to make herself feel better. Nature has been especially healing. Whenever she can, Johanna takes herself to Karura Forest, a lush oasis in the heart of Nairobi. One afternoon, she brought me to one of her favorite spots there, a crashing waterfall beneath a canopy of trees. This is where she tries to forget about the images that keep her up at night. 

Johanna remains haunted by a video she reviewed out of Tanzania, where she saw a lesbian couple attacked by a mob, stripped naked and beaten. She thought of them again and again for months. “I wondered: ‘How are they? Are they dead right now?’” At night, she would lie awake in her bed, replaying the scene in her mind.

“I couldn’t sleep, thinking about those women.”

Johanna’s experience lays bare another stark reality of this work. She was powerless to help victims. Yes, she could remove the video in question, but she couldn’t do anything to bring the women who were brutalized to safety. This is a common scenario for content moderators like Johanna, who are not only seeing these horrors in real-time, but are asked to simply remove them from the internet and, by extension, perhaps, from public record. Did the victims get help? Were the perpetrators brought to justice? With the endless flood of videos and images waiting for review, questions like these almost always go unanswered.

The situation that Johanna encountered highlights what David Kaye, a professor of law at the University of California at Irvine and the former United Nations special rapporteur on freedom of expression, believes is one of the platforms’ major blindspots: “They enter into spaces and countries where they have very little connection to the culture, the context and the policing,” without considering the myriad ways their products could be used to hurt people. When platforms introduce new features like livestreaming or new tools to amplify content, Kaye continued, “are they thinking through how to do that in a way that doesn’t cause harm?”

The question is a good one. For years, Meta CEO Mark Zuckerberg famously urged his employees to “move fast and break things,” an approach that doesn’t leave much room for the kind of contextual nuance that Kaye advocates. And history has shown the real-world consequences of social media companies’ failures to think through how their platforms might be used to foment violence in countries in conflict.

The most searing example came from Myanmar in 2017, when Meta famously looked the other way as military leaders used Facebook to incite hatred and violence against Rohingya Muslims as they ran “clearance operations” that left an estimated 24,000 Rohingya people dead and caused more than a million to flee the country. A U.N. fact-finding mission later wrote that Facebook had a “determining role” in the genocide. After commissioning an independent assessment of Facebook’s impact in Myanmar, Meta itself acknowledged that the company didn’t do “enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”

Yet five years later, another case now before Kenya’s high court deals with the same issue on a different continent. Last year, Meta was sued by a group of petitioners including the family of Meareg Amare Abrha, an Ethiopian chemistry professor who was assassinated in 2021 after people used Facebook to orchestrate his killing. Amare’s son tried desperately to get the company to take down the posts calling for his father’s head, to no avail. He is now part of the suit that accuses Meta of amplifying hateful and malicious content during the conflict in Tigray, including the posts that called for Amare’s killing.

The case underlines the strange distance between Big Tech behemoths and the content moderation industry that they’ve created offshore, where the stakes of moderation decisions can be life or death. Paul Barrett, the deputy director of the Center for Business and Human Rights at New York University's Stern School of Business who authored a seminal 2020 report on the issue, believes this distance helped corporate leadership preserve their image of a shiny, frictionless world of tech. Social media was meant to be about abundant free speech, connecting with friends and posting pictures from happy hour — not street riots or civil war or child abuse.

“This is a very nitty gritty thing, sifting through content and making decisions,” Barrett told me. “They don't really want to touch it or be in proximity to it. So holding this whole thing at arm’s length as a psychological or corporate culture matter is also part of this picture.”

Sarah T. Roberts likened content moderation to “a dirty little secret. It’s been something that people in positions of power within the companies wish could just go away,” Roberts said. This reluctance to deal with the messy realities of human behavior online is evident today, even in statements from leading figures in the industry. For example, with the July launch of Threads, Meta’s new Twitter-like social platform, Instagram head Adam Mosseri expressed a desire to keep “politics and hard news” off the platform.

The decision to outsource content moderation meant that this part of what happened on social media platforms would “be treated at arm’s length and without that type of oversight and scrutiny that it needs,” Barrett said. But the decision had collateral damage. In pursuit of mass scale, Meta and its counterparts created a system that produces an impossible amount of material to oversee. By some estimates, three million items of content are reported on Facebook alone on a daily basis. And despite what some of Silicon Valley’s other biggest names tell us, artificial intelligence systems are insufficient moderators. So it falls on real people to do the work.

One morning in late July, James Oyange, a former tech worker, took me on a driving tour of Nairobi’s content moderation hubs. Oyange, who goes by Mojez, is lanky and gregarious, quick to offer a high five and a custom-made quip. We pulled up outside a high-rise building in Westlands, a bustling central neighborhood near Nairobi’s business district. Mojez pointed up to the sixth floor: Majorel’s local office, where he worked for nine months, until he was let go.

He spent much of his year in this building. Pay was bad and hours were long, and it wasn’t the customer service job he’d expected when he first signed on — this is something he brought up with managers early on. But the 26-year-old grew to feel a sense of duty about the work. He saw the job as the online version of a first responder — an essential worker in the social media era, cleaning up hazardous waste on the internet. But being the first to the scene of the digital wreckage changed Mojez, too — the way he looks, the way he sleeps, and even his life’s direction.

That morning, as we sipped coffee in a trendy, high-ceilinged cafe in Westlands, I asked how he’s holding it together. “Compared to some of the other moderators I talked to, you seem like you’re doing okay,” I remarked. “Are you?”

His days often started bleary-eyed. When insomnia got the best of him, he would force himself to go running under the pitch-black sky, circling his neighborhood for 30 minutes and then stretching in his room as the darkness lifted. At dawn, he would ride the bus to work, snaking through Nairobi’s famously congested roads until he arrived at Majorel’s offices. A food market down the street offered some moments of relief from the daily grind. Mojez would steal away there for a snack or lunch. His vendor of choice doled out tortillas stuffed with sausage. He was often so exhausted by the end of the day that he nodded off on the bus ride home.

And then, in April 2023, Majorel told him that his contract wouldn’t be renewed.

It was a blow. Mojez walked into the meeting fantasizing about a promotion. He left without a job. He believes he was blacklisted by company management for speaking up about moderators’ low pay and working conditions.

A few weeks later, an old colleague put him in touch with Foxglove, a U.K.-based legal nonprofit supporting the lawsuit currently in mediation against Meta. The organization also helped organize the May meeting in which more than 150 African content moderators across platforms voted to unionize.

At the event, Mojez was stunned by the universality of the challenges facing moderators working elsewhere. He realized: “This is not a Mojez issue. These are 150 people across all social media companies. This is a major issue that is affecting a lot of people.” After that, despite being unemployed, he was all in on the union drive. Mojez, who studied international relations in college, hopes to do policy work on tech and data protection someday. But right now his goal is to see the effort through, all the way to the union’s registry with Kenya’s labor department.

Mojez’s friend in the Big Tech fight, Wabe, also went to the May meeting. Over lunch one afternoon in Nairobi in July, he described what it was like to open up about his experiences publicly for the first time. “I was happy,” he told me. “I realized I was not alone.” This awareness has made him more confident about fighting “to make sure that the content moderators in Africa are treated like humans, not trash,” he explained. He then pulled up a pant leg and pointed to a mark on his calf, a scar from when he was imprisoned and tortured in Ethiopia. The companies, he said, “think that you are weak. They don’t know who you are, what you went through.”

A popular lunch spot for workers outside Majorel's offices.

Looking at Kenya’s economic woes, you can see why these jobs were so alluring. My visit to Nairobi coincided with a string of July protests that paralyzed the city. The day I flew in, it was unclear if I would be able to make it from the airport to my hotel — roads, businesses and public transit were threatening to shut down in anticipation of the unrest. The demonstrations, which have been bubbling up every so often since last March, came in response to steep new tax hikes, but they were also about the broader state of Kenya’s faltering economy — soaring food and gas prices and a youth unemployment crisis, some of the same forces that drive throngs of young workers to work for outsourcing companies and keep them there.

Leah Kimathi, a co-founder of the Kenyan nonprofit Council for Responsible Social Media, believes Meta’s legal defense in the labor case brought by the moderators betrays Big Tech’s neo-colonial approach to business in Kenya. When the petitioners first filed suit, Meta tried to absolve itself by claiming that it could not be brought to trial in Kenya, since it has no physical offices there and did not directly employ the moderators, who worked instead for Sama. But a Kenyan labor court saw it differently, ruling in June that Meta — not Sama — was the moderators’ primary employer and that the case against the company could move forward.

“So you can come here, roll out your product in a very exploitative way, disregarding our laws, and we cannot hold you accountable,” Kimathi said of Meta’s legal argument. “Because guess what? I am above your laws. That was the exact colonial logic.”

Kimathi continued: “For us, sitting in the Global South, but also in Africa, we’re looking at this from a historical perspective. Energetic young Africans are being targeted for content moderation and they come out of it maimed for life. This is reminiscent of slavery. It’s just now we’ve moved from the farms to offices.”

As Kimathi sees it, the multinational tech firms and their outsourcing partners made one big, potentially fatal miscalculation when they set up shop in Kenya: They didn’t anticipate a workers’ revolt. If they had considered the country’s history, perhaps they would have seen the writing of the African Content Moderator’s Union on the wall.

Kenya has a rich history of worker organizing in resistance to the colonial state; the labor movement was “a critical pillar of the anti-colonial struggle,” Kimathi explained to me. She and other critics of Big Tech’s operations in Kenya see a line leading from that colonial-era labor exploitation, and the workers’ backlash against it, to the present day — a history the Big Tech platforms and their outsourcers may have overlooked when they decided to do business in the country.

“They thought that they would come in and establish this very exploitative industry and Kenyans wouldn’t push back,” she said. Instead, they sued.

What happens if the workers actually win?

Foxglove, the nonprofit supporting the moderators’ legal challenge against Meta, writes that the outcome of the case could disrupt the global content moderation outsourcing model. If the court finds that Meta is the “‘true employer’ of their content moderators in the eyes of the law,” Foxglove argues, “then they cannot hide behind middlemen like Sama or Majorel. It will be their responsibility, at last, to value and protect the workers who protect social media — and who have made tech executives their billions.”

But there is still a long road ahead, for the moderators themselves and for the kinds of changes to the global moderation industry that they are hoping to achieve.

In Kenya, the workers involved in the lawsuit and union face practical challenges. Some, like Mojez, are unemployed and running out of money. Others are migrant workers from elsewhere on the continent who may not be able to stay in Kenya for the duration of the lawsuit or union fight.

The Moderator’s Union is not yet registered with Kenya’s labor office, but if it becomes official, its members intend to push for better conditions for moderators working across platforms in Kenya, including higher salaries and more psychological support for the trauma endured on the job. And their ambitions extend far beyond Kenya. The network hopes to inspire similar actions in other countries’ content moderation hubs. According to Martha Dark, Foxglove’s co-founder and director, the industry’s working conditions have spawned a cross-border, cross-company organizing effort, drawing employees from Africa, Europe and the U.S.

“There are content moderators that are coming together from Poland, America, Kenya, and Germany talking about what the challenges are that they experience when trying to organize in the context of working for Big Tech companies like Facebook and TikTok,” she explained.

Still, there are big questions about how far litigation alone can go in transforming the moderation industry. “It would be good if outsourced content reviewers earned better pay and were better treated,” NYU’s Paul Barrett told me. “But that doesn't get at the issue that the mother companies here, whether it’s Meta or anybody else, is not hiring these people, is not directly training these people and is not directly supervising these people.” Even if the Kenyan workers are victorious in their lawsuit against Meta, and the company is stung in court, “litigation is still litigation,” Barrett explained. “It’s not the restructuring of an industry.”

So what would truly reform the moderation industry’s core problem? For Barrett, the industry will only see meaningful change if companies can bring “more, if not all of this function in-house.”

But Sarah T. Roberts, who interviewed workers from Silicon Valley to the Philippines for her book on the global moderation industry, believes collective bargaining is the only pathway forward for changing the conditions of the work. She dedicated the end of her book to the promise of organized labor.

“The only hope is for workers to push back,” she told me. “At some point, people get pushed too far. And the ownership class always underestimates it. Why does Big Tech want everything to be computational in content moderation? Because AI tools don’t go on strike. They don't talk to reporters.”

Artificial intelligence is part of the content moderation industry, but it will probably never be capable of replacing human moderators altogether. What we do know is that AI models will continue to rely on human beings to train and oversee their data sets — a reality Sama’s CEO recently acknowledged. For now and the foreseeable future, there will still be people behind the screen, fueling the engines of the world’s biggest tech platforms. But because of people like Wabe and Mojez and Kauna, their work is becoming more visible to the rest of us.

While writing this piece, I kept returning to one scene from my trip to Nairobi that powerfully drove home the raw humanity at the base of this entire industry, powering the whole system, as much as the tech scions might like to pretend otherwise. I was in the food court of a mall, sitting with Malgwi and Wabe. They were both dressed sharply, like they were on break from the office: Malgwi in a trim pink dress and a blazer, Wabe in leather boots and a peacoat. But instead, they were just talking about how work ruined them.

At one point in the conversation, Wabe told me he was willing to show me a few examples of violent videos he snuck out while working for Sama and later shared with his attorney. If I wanted to understand “exactly what we see and moderate on the platform,” Wabe explained, the opportunity was right in front of me. All I had to do was say yes.

I hesitated. I was genuinely curious. A part of me wanted to know, wanted to see first-hand what he had to deal with for more than a year. But I’m sensitive, maybe a little breakable. A lifelong insomniac. Could I handle seeing this stuff? Would I ever sleep again?

It was a decision I didn’t have to make. Malgwi intervened. “Don’t send it to her,” she told Wabe. “It will traumatize her.”

So much of this story, I realized, came down to this minute-long exchange. I didn’t want to see the videos because I was afraid of how they might affect me. Malgwi made sure I didn’t have to. She already knew what was on the other side of the screen.

Why did we write this story?

The world’s biggest tech companies today have more power and money than many governments. This story offers a deep dive on court battles in Kenya that could jeopardize the outsourcing model upon which Meta has built its global empire.

Meta cozies up to Vietnam, censorship demands and all https://www.codastory.com/authoritarian-tech/vietnam-censorship-facebook/ Thu, 28 Sep 2023 15:25:58 +0000 https://www.codastory.com/?p=46764 U.S. social media companies have become indispensable partners in Vietnam's information control regime

When Vietnamese Prime Minister Pham Minh Chinh and his delegation visited Meta's Menlo Park headquarters in California last week, they were welcomed with a board reminiscent of Facebook’s desktop interface.

"What's on your mind?" it read at the top. Beneath the standard status update prompt were a series of messages written in Vietnamese that extended a warm welcome to the prime minister, underscoring the collaboration between his government and the social media giant. Sunny statements are reported to have dominated the meeting in which the two sides rhapsodized about bolstering their partnership.

Prime Minister Chinh highlighted the instrumental role American companies, Meta in particular, might play in unlocking the potential of the Comprehensive Strategic Partnership that the U.S. and Vietnam cemented in mid-September. He encouraged Meta to deepen its ties with Vietnamese firms to boost the digital economy. Joel Kaplan, Meta’s vice president for U.S. public policy, indicated a willingness to support Vietnamese businesses of all sizes, adding that the company hopes to continue producing “metaverse equipment” in the country. 

The warm aura of the meeting obscured an uncomfortable reality for Meta on the other side of the Pacific: It has become increasingly enmeshed in the Vietnamese government's draconian online censorship regime. In a country whose leaders once frowned upon it, Facebook has seen its relationship with the Vietnamese government morph from one of animosity to an unlikely alliance of convenience. Not a small feat for the social media giant.

Facebook has long been the most popular social media platform in Vietnam. Today, over 70% of Vietnam’s total population of nearly 100 million people use it for content sharing, business operations and messaging.

For years, Facebook’s approach to content policy in Vietnam appeared to be one of caution, with the company bringing some adherence to free speech principles to its decision-making when faced with censorship demands from the government. But in 2020, it shifted to one of near-guaranteed compliance with official demands, at least in the eyes of Vietnamese authorities. It was in that year, the Vietnamese government claimed, that the company went from approving 70% to 75% of censorship requests from the authorities to a staggering 95%. Since then, Vietnamese officials have maintained that Facebook's compliance rate is upwards of 90%.

Meta’s deference to Vietnam’s official line continues today. Last June, an article in the Washington Post quoted two former employees who, speaking on the condition of anonymity, said that Facebook had adopted an internal list of Vietnam Communist Party officials whom it agreed to shield from criticism on its platform. The undisclosed list is included in the company’s internal guidelines for moderating online content, with Vietnamese authorities holding significant sway over it, the Post reported. While the Post did not cite the names of the Vietnamese officials on the list, it noted that Vietnam is the only country in East Asia for which Facebook provides this type of white-glove treatment.

Also in June, the government instructed cross-border social platforms to employ artificial intelligence models capable of automatically detecting and removing “toxic” content. A month earlier, in the name of curbing online scams, the authorities said they were gearing up to enforce a requirement that all social media users, whether on local or foreign platforms, verify their identities.

These back-to-back developments are emblematic of the Vietnamese government’s growing confidence in asserting its authority over Big Tech.

Facebook's corporate headquarters in Menlo Park, California. Josh Edelson/AFP via Getty Images.

How has Vietnam reached this critical juncture? Two key factors seem to account for why Vietnamese authorities are able to boss around Big Tech.

The first is Vietnam’s economic lure. Vietnam's internet economy is one of the most rapidly expanding markets in Southeast Asia. According to a report by Google and Singapore's Temasek Holdings, Vietnam's digital economy hit $23 billion in 2022 and is projected to reach approximately $50 billion by 2025, with growth fueled primarily by a thriving e-commerce sector. 

Dangling access to a market of nearly 100 million people, Vietnamese authorities have become increasingly adept at exploiting their economic leverage to browbeat Big Tech companies into compliance. Facebook's 70 million users aside, DataReportal estimates that YouTube has 63 million users and TikTok around 50 million in Vietnam.

Although free speech principles were foundational for major American social media platforms, it may be naive to expect them to adhere to any express ideological value proposition at this stage. Above all else, they prioritize rapid growth, outpacing competitors and solidifying their foothold in online communication and commerce. At the end of the day, it is the companies’ bottom line that has dictated how Big Tech operates across borders.

Alongside market pressures, Vietnam has also gained leverage through its own legal framework. Big Tech companies have recognized that they need to adhere to local laws in the countries where they operate, and the Vietnamese government has capitalized on this, amping up its legal arsenal to tighten its grip on cyberspace, knowing full well that Facebook, along with YouTube and TikTok, will comply. Nowhere is this tactic more manifest than in the crackdown on what the authorities label as anti-state content. 

Over the past two decades, the crackdown on anti-state content has shaped the way Vietnamese authorities deployed various online censorship strategies, while also dictating how a raft of laws and regulations on internet controls were formulated and enforced. From Hanoi’s perspective, anti-state content can undermine national prestige, besmirch the reputation of the ruling Communist Party and slander and defame Vietnamese leaders.

There is one other major benefit that the government derives from the big platforms: it uses them to promote its own image. Like China, Vietnam has since 2017 deployed a 10,000-strong military cyber unit tasked with manipulating online discourse to enforce the Communist Party’s line. The modus operandi of Vietnam’s cyber troops has been to ensure “a healthy cyberspace” and protect the regime from “wrong,” “distorting,” or “false news,” all of which are in essence “anti-state” content in the view of the authorities.

And the biggest companies now readily comply. A majority of online posts that YouTube and Facebook have restricted or removed at the behest of Vietnamese authorities were related to “government criticism” or were ones that “oppose the Communist Party and the Government of Vietnam,” according to transparency reports published by Google and Facebook.

The latest data disclosed by Vietnam’s Ministry of Information and Communications indicates that censorship compliance rates by Facebook and YouTube both exceed 90%.

In this context, Southeast Asia provides a compelling case study. Notably, four of the 10 countries with the highest number of Facebook users worldwide are also in Southeast Asia: Indonesia, the Philippines, Vietnam and Thailand. Across the region, censorship requests have pervaded the social media landscape and redefined Big Tech-government relations. 

“Several governments in the region have onerous regulation that compels digital platforms to adhere to strict rules over what content is or isn’t allowed to be on the platform,” Kian Vesteinsson, an expert on technology and democracy at Freedom House, told me. “Companies that don’t comply with these rules may risk fines, criminal or civil liability, or even outright bans or blocks,” Vesteinsson said.

But a wholesale ban on any of the biggest social platforms feels highly improbable today. These companies have become indispensable partners in Vietnam’s online censorship regime, to the point that the threat of shutting them down is more of a brinkmanship tactic than a realistic option. In other words, they are too important to Vietnam to be shut down. And the entanglement goes both ways — for Facebook and Google, the Vietnamese market is too lucrative for them to back out or resist censorship demands.

To wit: Vietnam threatened to block Facebook in 2020 over anti-government posts, but the threat never materialized. And Facebook has largely met the demands of Vietnamese authorities ever since.

Last May, TikTok faced a similar threat. The government launched a probe into TikTok's operations in Vietnam, warning that any failure to comply with Vietnamese regulations could see the platform shown the door in this lucrative market. While the outcome of the inspection is pending and could be released at any time, there are already signs that TikTok, the only foreign social media platform to have set up shop in Vietnam, will do whatever it takes to get on the good side of Vietnamese authorities. In June, TikTok admitted to wrongdoing in Vietnam and pledged to take corrective action.

The fuss that Vietnamese authorities have made about both Facebook and TikTok has likely masked their real intent: to further strong-arm these platforms into becoming more compliant and answerable to Vietnamese censors. Judging by their playbook, Vietnamese authorities are likely to continue wielding the stick of shutdown as a pretext to tighten the grip on narratives online, fortify state controls on social media and solidify the government's increasing leverage over Big Tech.

Could a different kind of platform emerge in this milieu? Vietnam’s economy of scale would scarcely allow for this kind of development: The prospect of building a more robust domestic internet ecosystem that could elbow out Facebook or YouTube doesn’t really exist. Absent bigger political and economic changes, Hanoi will remain reliant on foreign tech platforms to curb dissent, gauge public sentiment, discover corrupt behavior by local officials and get out its own messages to its internet-savvy population.

Without space to detain migrants, the UK tags them https://www.codastory.com/authoritarian-tech/uk-gps-tagging-home-office-asylum/ Thu, 21 Sep 2023 14:25:08 +0000 https://www.codastory.com/?p=46581 The Home Office says electronically tracking asylum seekers is a humane alternative to detention. But migrants say it’s damaging their mental health

The U.K. is presenting asylum seekers with an ultimatum: await deportation and asylum processing in Rwanda, face detention or wear a tracking device. Or leave voluntarily.

As thousands of people continue to arrive in the U.K., the British authorities are scrambling for new ways to monitor and control them. Under the government’s new rules, Britain has a legal duty to detain and deport anyone who arrives on its shores via truck or boat regardless of whether they wish to seek asylum. Passed in July 2023, the Illegal Migration Act has already been described by the United Nations Human Rights Office as “exposing refugees to grave risks in breach of international law.”

More than 20,000 people have come to the U.K. on small boats so far in 2023, and some 175,000 people are already waiting for an asylum decision. But officials say the U.K. does not have the physical space to detain people under the new law. And a public inquiry published this week argued that the U.K. should not detain migrants for more than 28 days. The report found evidence of abusive, degrading and racist treatment of migrants held in a detention center near London’s Gatwick Airport.

With detention centers at capacity and under scrutiny for mistreating migrants, and with the Rwanda scheme facing court challenges, those awaiting deportation or asylum proceedings are increasingly being monitored using technology instead, such as GPS-enabled ankle trackers that allow officials to follow the wearer’s every move. The ankle tracker program, which launched as a pilot in June 2022, was initially scheduled to last 12 months. But this summer, without fanfare, the government quietly uploaded a document to its website with the news that it was continuing the pilot to the end of 2023.

A Home Office spokesperson told me that “the GPS tracking pilot helps to deter absconding.” But absconding rates among migrants coming to the U.K. are low: The Home Office itself reported that they stood at 3% in 2019 and 1% in 2020, in response to a Freedom of Information request filed by the advocacy group Migrants Organize. In other official statements, the Home Office has expressed concern that the Rwanda policy may lead to “an increased risk of absconding and less incentive to comply with any conditions of immigration bail.” So authorities are fitting asylum seekers with GPS tags to ensure they don’t disappear before they can be deported.

Privacy advocates say the policy is invasive, ineffective and detrimental to the mental and physical health of the wearers. 

“Forging ahead, and massively expanding, such a harmful scheme with no evidence to back up its usefulness is simply vindictive,” said Lucie Audibert, a legal officer at the digital rights group Privacy International, which launched a legal challenge against the pilot program last year, arguing there were not adequate safeguards in place to protect people’s basic rights. 

Migrants who have been tagged under the scheme say the experience is dehumanizing. “It feels like an outside prison,” said Sam, a man in his thirties who fled a civil war with his family when he was a small child and has lived in the U.K. ever since. Sam, whose name has been changed, was told by the Home Office at the end of last year that he would need to wear a tag while the government considered whether to deport him after he had served a criminal sentence.

The Home Office has also outsourced the implementation of the GPS tracking system to Capita PLC, a private security company. Capita has been tasked with fitting the tags and monitoring the movements and other relevant data collected on each person wearing a device. For migrants like Sam, that meant dealing with anonymous Capita staff — rather than the government — whenever his tag needed to be fitted, checked or replaced.

After a month of wearing the tag, Sam felt depression beginning to set in. He was worried about leaving the house, for fear of accidentally bumping the strap. He was afraid that if too many problems arose with the tracker, the Home Office might use it as an excuse to deport him. Another constant anxiety weighed on him too: keeping the device charged. Capita staff told him its battery could last 24 hours. But he soon found out that wasn’t true — and it would lose charge without warning when he was out, vibrating loudly and flashing with a red light.

“Being around people and getting the charger out so you can charge your ankle — it’s so embarrassing,” Sam said. He never told his child that he had been tagged. “I always hid it under tracksuits or jeans,” he said, not wanting to burden his child with the constant physical reminder that he could be deported.

The mental health problems Sam experienced are not unusual for people who have to wear tracking devices. In the U.S., border authorities first deployed ankle monitors in 2014, in response to an influx of migrants from Central America. According to a 2021 study surveying 150 migrants forced to wear the devices, 12% said wearing the tags led to thoughts of suicide, while 40% said they believed they had been psychologically scarred by the experience.

Capita staff regularly showed up at Sam’s home to check on the tag, and they often came at different times than the Home Office told Sam they would come. Sometimes, they would show up without any warning at all. 

Sam remembered an occasion when Capita officers told him that “the system was saying the strap had been tampered with.” The agents examined his ankle and found nothing wrong with the device. This became a routine: The team showed up randomly to tell him there was a problem or that his location wasn’t registering. “It was all these little things that seemed to make out I was doing something wrong. In the end, I realized it wasn’t me, it was the tag that was the problem. I felt harassed,” Sam told me. 

At one point, Sam said he received a letter from the Home Office saying he had breached his bail conditions because he had not been home when the Capita people came calling. According to Home Office documents, breaching bail conditions is a good enough reason for the government to have access to a migrant’s “trail data”: a live inventory of a person’s precise location every minute of the day and night. He’s worried that this tracking data might be used against him as the government deliberates on whether or not to deport him. 

Sam is not alone in dealing with glitches with the tag. In a study of 19 migrants tagged under the British scheme, 15 participants had practical issues with the devices, such as the devices failing or chargers not working. 

When I asked Capita to comment on these findings, the company redirected me to the Home Office, which denied that there were any concerns. “Device issues are rare and service users are provided with a 24-hour helpline to report any problems,” a government spokesperson said. They then added: “Capita’s field and monitoring staff receive safeguarding training and are able to signpost tag wearers to support organizations where appropriate.”

Migration campaigners say contracts like the one Home Office has with Capita serve to line the pockets of big private security companies at the taxpayers’ expense while helping the government push out the message that they’re being tough on immigration.

“Under this government, we have seen a steep rise in the asylum backlog,” said Monish Bhatia, a lecturer in Sociology at the University of York, who studies the effects of GPS tagging. “Instead of directing resources to resolving this backlog,” he told me, “they have come up with rather expensive and wasteful gimmicks.” 

The ankle monitor scheme forms part of Britain’s so-called “hostile environment” policy, introduced more than a decade ago by then-Home Secretary Theresa May, who described it as an effort to “create, here in Britain, a really hostile environment for illegal immigrants.” It has seen the government pour billions of pounds into deterring and detaining migrants — from building a high-tech network of surveillance along the English channel in an attempt to thwart small boat crossings to the 120 million pound ($147 million) deal to deport migrants to Rwanda. 

The Home Office estimates it will have to spend between 3 and 6 billion pounds (between $3.68 and $7.36 billion) on detaining, accommodating and removing migrants over the next two years. But the option to tag people, while cheaper than keeping them locked up, also costs the government significant amounts of money. The U.K. currently has two contracts with security companies for electronically tagging both migrants and those in the criminal justice system: one with G4S, worth 22 million pounds ($27.5 million), to provide the tag hardware, and another with Capita, worth 114 million pounds ($142 million), to run the electronic tagging services, fitting and troubleshooting the tags.

The Home Office said the GPS tagging scheme would help streamline the asylum process and that it was “determined to break the business model of the criminal people smugglers and prevent people from making dangerous journeys across the Channel.” 

For his part, Sam eventually got his tag removed — he was granted an exception due to the tag’s effects on his mental health. After the tag was gone, he said, he felt as if it were still there for weeks. He still put his clothes and shoes on as though the tag were strapped to his ankle. 

“It took me a while to realize I was actually free from their eyes,” he said. But his status remains uncertain: He is still facing the threat of deportation.

Correction: An earlier version of this article incorrectly stated Monish Bhatia's affiliation. As of April 2023, he is a lecturer at the University of York, not Birkbeck, University of London.

Advertising erectile dysfunction pills? No problem. Breast health? Try again https://www.codastory.com/authoritarian-tech/meta-health-ads/ Thu, 07 Sep 2023 13:14:01 +0000 https://www.codastory.com/?p=46363 Women’s health groups say Meta is discriminating against them, while letting men’s sexual health ads flourish

It happened again last week. Lisa Lundy logged into her company’s Instagram account only to be greeted with yet another rejection. This one was an advertisement about breast cancer awareness, featuring a close-up of a woman's bare decolletage with the caption: “90% of breast cancer diagnoses are not hereditary.” 

Lundy thought the ad could educate social media users about the risk factors for breast cancer, but it never saw the light of day. Instead, Instagram rejected it for violating its policies on nudity and sexual activity.

For more than a year, Lundy’s company, Complex Creatures, has struggled to find a home for its content on Instagram. The platform has rejected scores of the company’s advertisements and posts since its account went live in June 2022. Lundy co-founded Complex Creatures with her sister, a breast cancer survivor, to raise awareness about the disease and provide health and wellness products for women undergoing breast cancer treatment. But the content rejections came rolling in as soon as she started posting. It didn’t take long for Lundy to realize that Meta, owner of Instagram, was nixing her content because of its subject matter: the breast. 

Screenshots of censored posts from the Complex Creatures Instagram account. Courtesy of Lisa Lundy.

“How do you desexualize the breast?” she asked. “It’s so much of what we’re trying to do.” But platforms like Instagram, Lundy said, “don’t want to let us.” In a call over Zoom, she shared some screenshots of her company’s censored content. One was a post about how massages can improve breast health, featuring a photo of a woman’s hands fully covering her breasts. “But they’re allowed to do this,” she sighed, pulling up an advertisement from a men’s health brand for an erectile dysfunction treatment containing an image of a hand clutching an eggplant with the caption: “Get hard.” The censorship, she added, “is an ongoing challenge. We’re talking about breast cancer and breast health.” Access to the right information about the disease and its risk factors, she explained, can be a matter of “life and death.”

The censorship that Lundy routinely confronts on Instagram is part of a deeper history at Meta, which has long faced criticism for censoring material about breasts on Facebook. But it’s not just breast-focused content that’s not getting through. Lundy belongs to a community of nonprofits and startups focused on women’s health that face routine — and often bewildering — censorship across Facebook and Instagram. 

Screenshots of censored posts from the Complex Creatures Instagram account. Courtesy of Lisa Lundy.

I spoke with representatives from six organizations focused on women’s health care globally, and they told me that while Meta regularly approves advertisements for material that promotes men's sexuality and sexual pleasure, it routinely blocks them from publishing advertisements and posts about a wide range of services aimed at women, including reproductive health, fertility treatments and breast care. Often, these posts are rejected on the grounds that they violate the company's advertising policies on promoting sexual pleasure and adult content.

This kind of censorship comes at an existential moment for the U.S.-based reproductive rights community after the Supreme Court’s overturning of Roe v. Wade — the nearly 50-year-old ruling that legalized abortion across the U.S. — in 2022. As I reported in March 2023, abortion opponents have sought to clamp down on abortion speech online in the post-Roe era, introducing policies in Texas, Iowa, and South Carolina that would prohibit websites from publishing information about abortion. That’s on top of censorship that reproductive rights groups already face when they try to post content about accessing abortion care on platforms like Instagram and Facebook — even in countries where the procedure is legal. 

According to Emma Clark Gratton, a communications officer for the Australia chapter of the international reproductive health nonprofit MSI Reproductive Choices, the organization is routinely blocked from running ads about abortion services on Facebook, often for violating the company’s advertising policy on social issues, elections, and politics. Abortion is “totally legal” in Australia, Clark Gratton explained, but on Meta’s platforms, it is “still very restricted in terms of what we can post.” The organization’s clinical team in Australia, she added, can advertise for vasectomy services on Facebook, “but they definitely couldn’t do an ad promoting abortion services, which is literally what they do. They’re an abortion provider.”

Women First Digital, a group that provides information resources about abortion globally, has dealt extensively with restrictions on social media networks. Michell Mor, a digital strategy manager with the organization, put it to me this way: “Because big tech is from the United States, everything that happens there is replicated around the world.”

The impact of these restrictions reaches well beyond social media, says Carol Wersbe, chief of staff for the Center for Intimacy Justice, a nonprofit that has been tracking Meta’s rejections of health-related ads. 

“Advertising represents so much more than just a company getting an ad on Facebook,” Wersbe told me. “It's visibility, access to information. If we can't advertise for things like pelvic pain and endometriosis, how do we ever reduce the stigma from those topics?” 

In January 2022, the Center for Intimacy Justice published a survey of 60 women’s healthcare startups about their experiences with censorship on Facebook and Instagram. The participating companies offer products and services for a range of women’s healthcare needs, from fertility and pregnancy support to postpartum recovery, menstrual health, and menopause relief. All of the companies surveyed reported having their ads rejected by Instagram and Facebook, and half said their accounts were suspended after Meta removed their ads. According to the report, ads were frequently taken down after they were flagged for promoting “adult products and services,” which are not permitted under the company’s advertising policies.  

Some ads that didn’t make the cut featured products to relieve side effects of menopause; another included background about consent in school sexual education courses. During the same time period, the report points out, Meta approved ads for men’s sexual health products, including treatments for premature ejaculation, erectile dysfunction pills promising to help consumers “get hard or your money back” and men’s lubricants to “level up your solo time.” The platform allowed these ads despite its own rules prohibiting ads from promoting products and services that “focus on sexual pleasure.”

Meta quietly updated its advertising guidelines after the report came out, stating that ads for family planning, contraception, menopause relief, and reproductive health care are allowed. Though the social media giant expanded the scope of permissible advertisements on paper, Wersbe says the status quo remains unchanged. “Across the board, we're still seeing our partners experiencing rejections,” she explained. The censorship that she and others in the field are observing cuts across languages, markets, and continents. “Facebook’s ads policy is a global policy, so when it changes something it affects their whole user base,” explained Wersbe. “We’ve seen rejections in Arabic, Spanish, French, Swedish, Swahili. It’s really pervasive.”

In March 2023, the organization filed a complaint with the U.S. Federal Trade Commission, urging the agency to investigate whether Meta is engaging in deceptive trade practices by rejecting ads from women’s health organizations that comply with its stated advertising policies, while allowing similar advertisements promoting men’s sexual health. The complaint alleges that the social media giant is unevenly applying its ads rules based on the gender of the target audience. These removals, it argues, constitute discriminatory censorship and perpetuate “inequality of access to health information and services for women and people of underrepresented genders.” 

In reporting this story, I contacted Meta with questions about the Center for Intimacy Justice’s report, the Federal Trade Commission complaint, and the rejection of Lundy’s advertisements. A spokesperson responded and shared the company’s published Community Standards, but declined to comment on the record.

Alexandra Lundqvist told me that alongside the outreach challenges that these issues create, ad rejections also make it harder for women-led health companies to get a leg up among investors. Lundqvist is a communications lead with The Case for Her, an investment firm that funds women’s sexual health organizations worldwide, including the Center for Intimacy Justice. “The general Silicon Valley big tech investor is not going to go to a women’s health company, especially when they can’t really advertise their work because they get blocked all the time. When these companies can’t advertise their work, they can’t scale, they can’t get funding,” Lundqvist explained. That exacerbates inequities that women and nonbinary entrepreneurs already face in securing investments from the male-dominated venture capital industry, creating a negative feedback loop for companies marketing products by and for women. “There is a big systems impact,” she added.

Lundy, who says her breast health company continues to experience widespread rejections despite Meta’s policy update, believes the censorship has a corrosive effect on consumers and creators alike. The content takedowns make it harder for entrepreneurs like herself to reach customers, make money, and attract investors. But they also prevent people from learning potentially life-saving information about breast cancer.

“There’s not a lot of information out there about breast health,” she said, describing her own lack of awareness about the disease prior to her sister’s diagnosis at age 37. “We had no family history,” she told me. “Her gynecologist missed it and she had never had a mammogram.” The experience, she continued, “really illuminated how much we didn’t know about our breasts.”

Lundy and her sister founded the company in part to address the information vacuum that left them both in the dark — to reach people before diagnosis and support those with the disease through treatment. But Meta makes that mission harder. “We want to normalize the breast,” she said, “but it’s almost like the algorithm and the people making the algorithms can’t think about a breast or a woman’s body in any way other than sexuality or arousal.” The censorship that Complex Creatures routinely faces for posting material on Instagram about breast health, Lundy told me, “feels like the patriarchal system at work.”

The morning after our call, Lundy emailed me an update: a photo of two squashes meant to resemble breasts hanging side by side — the visual for an Instagram ad about her company’s summer sale. The post, she wrote, “was rejected last night. They’re gourds.”


The Albanian town that TikTok emptied https://www.codastory.com/authoritarian-tech/albania-tiktok-migration-uk/ Thu, 24 Aug 2023 15:28:36 +0000 https://www.codastory.com/?p=42467 “It’s like the boys have gone extinct,” say women in Kukes. They’ve all left for London, chasing dreams of fast cars and easy money sold on social media




“I once had an idea in the back of my mind to leave this place and go abroad,” Besmir Billa told me earlier this year as we sipped tea in the town of Kukes, not far from Albania’s Accursed Mountains. “Of course, like everybody else, I’ve thought about it.”

The mountains rose up all around us like a great black wall. Across the valley, we could see a half-constructed, rusty bridge, suspended in mid-air. Above it stood an abandoned, blackened building that served during Albania’s 45-year period of communist rule as a state-run summer camp for workers on holiday. 

Since the fall of communism in 1991, Kukes has lost roughly half of its population. In recent years, thousands of young people — mostly boys and men — have rolled the dice and journeyed to England, often on small boats and without proper paperwork. 

Fifteen years ago, people would come to Kukes from all over the region for market day, where they would sell animals and produce. The streets once rang with their voices. Those who’ve lived in Kukes for decades remember it well. Nowadays, it’s much quieter.

Billa, 32, chose not to leave. He found a job in his hometown and stayed with his family. But for a person his age, he’s unusual.

You can feel the emptiness everywhere you go, he told me. “Doctors all go abroad. The restaurants are always looking for bartenders or waiters. If you want a plumber, you can’t find one.” Billa’s car broke down recently. Luckily, he loves fixing things himself — because it’s difficult to find a mechanic.

Besmir Billa playing a traditional Albanian instrument, called the cifteli, in Kukes.

All the while, there is a parallel reality playing out far from home, one that the people of Kukes see in glimpses on TikTok and Instagram. Their feeds show them a highly curated view of what their lives might look like if they left this place: good jobs, plenty of money, shopping at designer stores and riding around London in fast cars. 

In Kukes, by comparison, times are tough. Salaries are low, prices are rising every week and there are frequent power outages. Many families can barely afford to heat their homes or pay their rent. For young people growing up in the town, it’s difficult to persuade them that there’s a future here.

Three days before I met Billa, a gaggle of teenage boys chased a convoy of flashy cars down the street. A Ferrari, an Audi and a Mercedes had pulled into town, revving their engines and honking triumphantly. The videos were uploaded to TikTok, where they were viewed and reposted tens of thousands of times.

Behind the wheel were TikTok stars Dijonis Biba and Aleks Vishaj, on a victory lap around the remote region. They’re local heroes: They left Albania for the U.K. years ago, became influencers with hundreds of thousands of followers, and now they’re back, equipped with cars, money and notoriety.

Vishaj, dubbed the “King of TikTok” by the British tabloids, was reportedly convicted of robbery in the U.K. and deported in 2021. Biba, a rapper, made headlines in the British right-wing press the same year for posting instructions to YouTube on how to enter the U.K. with false documents. Police then found him working in a secret cannabis house in Coventry. He was eventually sentenced to 15 months in prison. 

The pair now travel the world, uploading TikTok videos of their high-end lifestyle: jet skiing in Dubai, hanging out in high-rise hotels, driving their Ferrari with the needle touching 300 kilometers per hour (about 186 mph) through the tunnel outside Kukes. 

Billa’s nephews, who are seven and 11, were keen to meet him and get a selfie when they came to town, like every other kid in Kukes. 

“Young people are so affected by these models, and they’re addicted to social media. Emigrants come back for a holiday, just for a few days, and it’s really hard for us,” Billa said. 

Billa is worried about his nephews, who are being exposed to luxury lifestyle videos from the U.K., which go against the values that he’s trying to teach them. They haven’t yet said they want to leave the country, but he’s afraid that they might start talking about it one day. “They show me how they want a really expensive car, or tell me they want to be social media influencers. It’s really hard for me to know what to say to them,” he said.

Billa feels like he’s fighting against an algorithm, trying to show his nephews that the lifestyle that the videos promote isn’t real. “I’m very concerned about it. There’s this emphasis for kids and teenagers to get rich quickly by emigrating. It’s ruining society. It’s a source of misinformation because it’s not real life. It’s just an illusion, to get likes and attention.”

And he knows that the TikTok videos that his nephews watch every day aren’t representative of what life is really like in the U.K. “They don’t tell the darker story,” he said.

The Gjallica mountains rise up around Kukes, one of the poorest cities in Europe.

In 2022, the number of people leaving Albania for the U.K. rose dramatically, as did the number claiming asylum, which reached around 16,000, more than triple the figure for the previous year. According to the Migration Observatory at the University of Oxford, one reason for the uptick in claims may be that Albanians who lack proper immigration status are more likely to be identified, leading them to claim asylum in order to delay being deported. But Albanians claiming asylum are also often victims of blood feuds — long-standing disputes between communities, often resulting in cycles of revenge — and viciously exploitative trafficking networks that threaten them and their families if they return to Albania.

By 2022, Albanian criminal gangs in Britain were in control of the country’s illegal marijuana-growing trade, taking over from Vietnamese gangs who had previously dominated the market. The U.K.’s lockdown — with its quiet streets and newly empty businesses and buildings — likely created the perfect conditions for setting up new cannabis farms all over the country. During lockdown, these gangs expanded production and needed an ever-growing labor force to tend the plants — growing them under high-wattage lamps, watering them and treating them with chemicals and fertilizers. So they started recruiting. 

Everyone in Kukes remembers it: The price of passage from Albania to the U.K. on a truck or small boat suddenly dropped when Covid-19 restrictions began to ease. Before the pandemic, smugglers typically charged 18,000 pounds (around $22,800) to take Albanians across the channel. But last year, posts started popping up on TikTok advertising knock-down prices to Britain starting at around 4,000 pounds (around $5,000). 

People in Kukes told me that even if they weren’t interested in being smuggled abroad, TikTok’s algorithm would feed them smuggling content — so while they were watching other unrelated videos, suddenly an anonymous post advertising cheap passage to the U.K. would appear on their “For You” feed.

TikTok became an important recruitment tool. Videos advertising “Black Friday sales” offered special discounts after Boris Johnson’s resignation, telling people to hurry before a new prime minister took office, and again when the U.K. Home Office announced its policy to relocate migrants to Rwanda. People remember one post that even encouraged Albanians to come and pay their respects to Queen Elizabeth II when she died in September last year. There was a sense of urgency to the posts, motivating people to move to the U.K. while they still could, lest the opportunity slip away. 

The videos didn’t go into detail about what lay just beneath the surface. Criminal gangs offered to pay for people’s passage to Britain, on the condition they worked for them when they arrived. They were then typically forced to work on cannabis farms to pay off the money they owed, according to anti-human trafficking advocacy groups and the families that I met in Kukes. 

Elma Tushi, 17, in Kukes, Albania.

“I imagined my first steps in England to be so different,” said David, 33, who first left Albania for Britain in 2014 after years of struggling to find a steady job. He could barely support his son, then a toddler, or his mother, who was having health problems and couldn’t afford her medicine. He successfully made the trip across the channel by stowing away in a truck from northern France. 

He still remembers the frightened face of the Polish driver who discovered him hiding in the wheel well of the truck, having already reached the outskirts of London. David made his way into the city and slept rough for several weeks. “I looked at everyone walking by, sometimes recognizing Albanians in the crowd and asking them to buy me bread. I couldn’t believe what was happening to me.” 

He found himself half-hoping the police might catch him and send him home. “I was so desperate. But another part of me said to myself, ‘You went through all of these struggles, and now you’re going to give up?’”

David, who asked us to identify him with a pseudonym to protect his safety, found work in a car wash. He was paid 35 pounds (about $44) a day. “To me, it felt like a lot,” he said. “I concentrated on saving money every moment of the day, with every bite of food I took,” he told me, describing how he would live for three or four days on a tub of yogurt and a package of bread from the grocery chain Lidl, so that he could send money home to his family.

At the car wash, his boss told him to smile at the customers to earn tips. “That’s not something we’re used to in Albania,” he said. “I would give them the keys and try to smile, but it was like this fake, frozen, hard smile.”

Like David, many Albanians begin their lives in the U.K. by working in the shadow economy, often at car washes or construction sites where they’re paid in cash. While there, they can be targeted by criminal gangs with offers of more lucrative work in the drug trade. In recent years, gangs have funneled Albanian workers from the informal labor market into cannabis grow houses. 

David said he was careful to avoid the lure of gangsters. At the French border, someone recognized him as Albanian and approached, offering him a “lucky ticket” to England with free accommodation when he arrived. He knew what price he would have to pay — and ran. “You have to make deals with them and work for them,” he told me, “and then you get sucked into a criminal life forever.”

It’s a structure that traps people in a cycle of crime and debt: Once in the U.K., they have no documents and are at the mercy of their bosses, who threaten to report them to the police or turn them in to the immigration authorities if they don’t do as they say. 

Gang leaders manipulate and intimidate their workers, said Anxhela Bruci, Albania coordinator at the anti-trafficking foundation Arise, who I met in Tirana, the Albanian capital. “They use deception, telling people, ‘You don’t have any documents, I’m going to report you to the police, I have evidence you have been working here.’ There’s that fear of going to prison and never seeing your family again.” 

Gangs, Bruci told me, will also make personal threats against the safety of their victims’ families. “They would say, ‘I'm going to kill your family. I'm going to kill your brother. I know where he lives.’ So you’re trapped, you’re not able to escape.”

She described how workers often aren’t allowed to leave the cannabis houses they’re working in, and are given no access to Wi-Fi or internet. Some are paid salaries of 600-800 pounds (about $760-$1,010) a month. Others, she added, are effectively bonded labor, working to pay back the money they owe for their passage to Britain. It’s a stark difference from the lavish lifestyles they were promised.

As for telling their friends and family back home about their situation, it’s all but impossible. “It becomes extremely dangerous to speak up,” said Bruci. Instead, once they do get online, they feel obliged to post a success story. “They want to be seen as brave. We still view the man as the savior of the family,” said Bruci, who is herself Albanian.

Bruci believes that some people posting on TikTok about their positive experience going to the U.K. could be “soldiers” for traffickers. “Some of them are also victims of modern slavery themselves and then they have to recruit people in order to get out of their own trafficking situation.”

As I was reporting this story, summer was just around the bend and open season for recruitment had begun. A quick search in Albanian on TikTok brought up a mass of new videos advertising crossings to the U.K. Typing in "Angli" — Albanian for “England” — surfaced three top videos that all involved people making their way into the U.K. One was a post advertising cheap crossings, and the other two were Albanians recording videos of their journeys across the Channel. After we flagged this to TikTok, those particular posts were removed. New posts, however, still pop up every day.

With the British government laser-focused on small boat crossings, and drones buzzing over the beaches of northern France, traveling by truck was being promoted at a reduced price of 3,000 pounds (about $3,800). And a new luxury option was also on offer — speedboat crossings from Belgium to Britain that cost around 10,000 pounds (about $12,650) per person.

Kevin Morgan, TikTok’s head of trust and safety for Africa, Europe and the Middle East, said the company has a “zero tolerance approach to human smuggling and trafficking,” and permanently bans offending accounts. TikTok told me it had Albanian-speaking moderators working for the platform, but would not specify how many. 

In March, TikTok announced a new policy as part of this zero-tolerance approach. The company said it would automatically redirect users who searched for particular keywords and phrases to anti-trafficking sites. In June, the U.K.’s Border Force told the Times that they believed TikTok’s controls had helped lower the numbers of small boat crossings into Britain. Some videos used typos on purpose to get around TikTok’s controls. As recently as mid-August, a search on TikTok brought up a video with a menu of options to enter Britain — via truck, plane or dinghy.

In Kukes, residents follow British immigration policy with the same zeal as they do TikTok videos from Britain. They trade stories and anecdotes about their friends, brothers and husbands. Though their TikTok feeds rarely show the reality of life in London, some young people in Kukes know all is not as it seems.

“The conditions are very miserable, they don’t eat very well, they don’t wash their clothes, they don’t have much time to live their lives,” said Evis Zeneli, 26, as we scrolled through TikTok videos posted by her friends in the U.K., showing a constant stream of designer shopping trips to Gucci, Chanel and Louis Vuitton.

It’s the same for a 19-year-old woman I met whose former classmate left last year. Going by his social media posts, life looks great — all fast cars and piles of British banknotes. But during private conversations, they talk about how difficult his life really is. The videos don’t show it, she told me, but he is working in a cannabis grow house. 

“He’s not feeling very happy. Because he doesn’t have papers, he’s obliged to work in this illegal way. But he says life is still better over there than it is here,” she said.

 “It’s like the boys have gone extinct,” she added. At her local park, which used to be a hangout spot for teenagers, she only sees old people now.

Albiona Thaçi, 33, at home with her daughter.

“There’s this huge silence,” agreed Albiona Thaçi, 33, whose husband traveled to the U.K. nine months ago in a small boat. When he left, she brought her two daughters to the seaside to try to take their mind off of the terrifying journey that their father had undertaken. Traveling across the English Channel in a fragile dinghy, he dropped his phone in the water, and they didn’t hear from him for days. “Everything went black,” Thaçi said. Eventually, her husband called from the U.K., having arrived safely. But she still doesn’t know when she’ll see him again. 

In her 12-apartment building, all the men have left. “Now we have this very communal feeling. Before, we used to knock on each others’ doors. Now, we just walk in and out.” But Thaçi’s friends have noticed that when they get together for coffee in the mornings, she’s often checked out of their conversation. “My heart, my mind, is in England,” she said. She plans to join her husband if he can get papers for her and their daughters. 

The absence of men hangs over everything. In the village of Shishtavec, in the mountains above Kukes, five women crowded around the television one afternoon when I visited. It was spring, but it still felt like winter. They were streaming a YouTube video of dozens of men from their village, all doing a traditional dance at a wedding — in London. 

Adelie Molla and her aunt Resmije Molla watch television in Shishtavec.

“They’re doing the dance of men,” said Adelie Molla, 22. She had just come in from the cold, having collected water from the well up by the town mosque. The women told me that the weather had been mild this year. “The winter has gone to England,” laughed Molla’s mother Yaldeze, 53, whose son left for the U.K. seven months ago. Many people in their village have Bulgarian heritage, meaning they can apply for European passports and travel to Britain by plane, without needing to resort to small boats.

The whole family plans to eventually migrate to Britain and reunite. “For better or worse I have to follow my children,” said Yaldeze, who has lived in the village her whole life. She doesn’t speak a word of English. “I’m going to be like a bird in a cage.” 

Around the town, some buildings are falling into disrepair while others are half-finished, the empty window-frames covered in plastic sheeting. A few houses look brand new, but the windows are dark. Adelie explained that once people go to the U.K., they use the money they make there to build houses in their villages. The houses lie empty, except when the emigrants come to visit. And when they come back to visit their hometown, they drive so that they can show off cars with U.K. license plates — proof they’ve made it. 

 “This village is emptying out,” Molla said, describing the profound boredom that had overtaken her life. “Maybe after five years, no one will be here at all anymore. They’ll all be in London.”

The old city of Kukes was submerged beneath a reservoir when Albania’s communist regime built a hydropower dam in the 1970s.

The oldest settlements of Kukes date back to the fourth century. In the 1960s, when Albania’s communist government decided to build a hydropower dam, the residents of Kukes all had to leave their homes and relocate further up the mountain to build a new city, while the ancient city was flooded beneath an enormous reservoir. And in the early 1970s, under Enver Hoxha’s paranoid communist regime, an urban planner was tasked with building an underground version of Kukes, where 10,000 people could live in bunkers for six months in the event of an invasion. A vast network of tunnels still lies beneath the city today. 

“Really, there are three Kukeses,” one local man told me: the Kukes where we were walking around, the subterranean Kukes beneath our feet, and the Kukes underwater. But even the Kukes of today is a shadow of its former self, a town buried in the memories of the few residents who remain.

View of a street in Kukes, Albania.

David was deported from Britain in 2019 after police stopped him at a London train station. He tried to return to the U.K. in December 2022 by hiding in a truck but couldn’t get past the high-tech, high-security border in northern France. He is now back in Kukes, struggling to find work. 

He wanted me to know he was a patriotic person who, given the chance to have a good life, would live in Albania forever. But, he added, “You don’t understand how much I miss England. I talk in English, I sing in English, I cook English food, and I don’t want my soul to depart this earth without going one more time to England.”

He still watches social media reels of Albanians living in the U.K. “Some people get lucky and get rich. But when you see it on TikTok or Instagram, it might not even be real.” 

Besmir Billa, whose nephews worry him with their TikTok aspirations, has set himself a challenge. He showed me his own TikTok account, which he started last summer.

The grid is full of videos showcasing the beauty of Kukes: clips of his friends walking through velvety green mountains, picking flowers and petting wild horses. “I’m testing myself to see if TikTok can be used for a good thing,” he told me. 

“The idea I had is to express something valuable, not something silly. I think this is something people actually need,” he said. During the spring festival, a national holiday in Albania when the whole country pours onto the streets to celebrate the end of winter, he posted a video showing young people in the town giving flowers to older residents. 

At first, his nephews were “not impressed” by their uncle’s page. But then, the older boy clocked the total number of views on the spring festival video: 40,000 and counting. 

 


Senegal is stifling its democracy in the dark https://www.codastory.com/authoritarian-tech/senegal-is-stifling-its-democracy-in-the-dark/ Fri, 11 Aug 2023 13:37:50 +0000 https://www.codastory.com/?p=45724 By shutting down the internet and jailing the opposition, the Senegalese government turns to the authoritarian playbook to suppress protests


On July 31, after jailing opposition leader Ousmane Sonko and dissolving the political party that he leads, Senegal’s government ordered a nationwide mobile internet shutdown. The communications ministry said the shutdown was meant to curb “hateful messages.”

The authorities had made a similar decision in June after a Senegalese court handed Sonko a two-year prison sentence in absentia, a decision his supporters believed was a politically motivated attempt to prevent Sonko from running for president in 2024. At least 16 people died when Sonko’s supporters and Senegalese police clashed on the streets of the capital Dakar. The subsequent July protests left at least two people dead.

Last week, Sonko was hospitalized after going on a hunger strike to protest his arrest.  

“We fear the government,” Mohammed Diouf, a Dakar school teacher told me. “The government does not want the world to know what is happening in our country.” He said the internet shutdown left him unable to communicate with other protesters. “There is brutal oppression, and many young demonstrators have been killed and injured. The security forces use live fire, that is the situation,” said Diouf, who opted to use a pseudonym out of fear of reprisal.

On August 2, the day before Diouf and I spoke, the Senegalese government announced an indefinite ban on TikTok, the app that young people have been using to document violent encounters between demonstrators and the security apparatus.

Fueling public anger is a widely held fear that Senegalese President Macky Sall, currently serving his second term in office, may try to run for president again in 2024. In 2016, a public referendum on presidential term limits reset the period a president can stay in power to a maximum of two five-year terms. Sall, who had, at the time, begun serving his second term, argued that the constitutional amendment “reset the clock to zero,” making him eligible to run again. 

In an address to the nation after the June protests, Sall vowed he would not run for a third term. But experts say he is to blame for the ambiguity that has fueled unrest.

“This problem has to be put at the feet of Macky Sall. For a long time, he made the potential of him running for a third time ambiguous,” said Ibrahim Anoba, an African affairs analyst and a fellow at the Center for African Prosperity. “You can imagine what the populace will feel,” Anoba told me. “More so, if the president becomes intolerant of opposition leaders.” 

Current political anxieties have been compounded by the economic downturn resulting from the Covid-19 pandemic and the food shortages triggered by Russia’s war in Ukraine. Senegal’s poverty rate was 36.3% in 2022, according to the World Bank, and the economy has also been hampered by rising debt. 

The future looked much brighter in 2014, when newly discovered oil reserves appeared to set the stage for Senegal to become a major oil producer. But this oil, too, is now a source of public anxiety: Senegalese citizens fear that Sall will cede these riches to European companies.

Protesters, galvanized by Sonko amid concerns that Sall might indeed pursue a third term, worried that Sall, a geological engineer before he became president, wanted to preside over the anticipated oil boom. That anxiety tipped public discontent into violent unrest, particularly among the country’s youth, who decried massive corruption, the overbearing influence of France and the slowdown of the economy. 

“We are fighting that the country retains the sovereignty of its wealth and natural resources which the government wants to sell off to oil firms. And for that, we will go until the end because it is our future that is at stake,” Diouf, the Dakar school teacher, told me. It is to Sonko that voters like Diouf look to reform Senegal’s system.

Sonko’s PASTEF party started in 2014 as a fringe party composed of political newcomers. Sonko, a young former tax inspector, shot to national recognition when he became a whistleblower in 2016, exposing the use of offshore tax havens by foreign companies to avoid paying taxes in Senegal. He became a member of the national assembly in 2017 and ran for president in 2019, finishing third behind Sall and Idrissa Seck, leader of the Rewmi party.

His criticism of Sall and his larger-than-life internet presence have endeared Sonko to young voters. He rapidly became the main threat to the ruling party. And it is that threat, say Sonko’s supporters, that is driving the criminal charges Sonko now faces, including rape (for which he was acquitted), fomenting insurrection, creating political unrest, terrorism and theft.

State measures to control protests led by Sonko supporters have been violent and draconian. The internet shutdowns also pose a threat to Senegal’s already floundering economy. In the first quarter of 2023, Senegal’s unemployment rate stood at 21.5%, and NetBlocks estimates that each day without access to mobile internet costs the country nearly $8 million.

Financial and cryptocurrency trades, as well as ride hailing and e-commerce businesses, are all seeing losses due to the network shutdowns. “With the restriction of the internet that is becoming recurrent these days, we no longer have the opportunity to sell or buy USDT,” said Mady Dia, referring to Tether, a cryptocurrency “stablecoin” pegged to the U.S. dollar. “That is an abysmal shortfall,” Dia, who works with a cryptocurrency exchange, told me.

Dia and Diouf both said they’d withdrawn money when the protests began, expecting that the banks would likely close and that financial services would be crippled were the authorities to impose an internet shutdown. 

The political situation, Dia said, and the internet shutdowns have left him contemplating options for leaving Senegal altogether. 

“Many young people are ready to abandon their country if Sall remains in power in 2024,” he told me. In the past decade, thousands of young Senegalese have sought to move to Europe in search of better fortunes, often on small boats. These perilous journeys have claimed hundreds of lives. Last month, at least 15 people drowned after a boat carrying migrants and refugees capsized off the coast of Dakar.

In a West Africa beset by political instability – the most recent example being the coup in Niger – Senegal has been cited as a model of democracy. That reputation is starting to wear thin. 

“This is really bad for the region itself,” said Anoba, the analyst at the Center for African Prosperity. “As you know, Macky Sall is one of the leading figures in West Africa, and right now [as] we are trying to quench the fires of coups that are changing the political terrain, this is the last thing we want.”

Threats against Senegalese media represent another sign of democratic backsliding in the country. In June, a television channel offering live coverage of the protests was suspended for 30 days. And Papa Ale Niang, a journalist with the prominent daily newspaper Dakarmatin, was charged on August 1, like Sonko, with “inciting insurrection.”

Internet shutdowns are also a sign of faltering democratic values. “Cutting off the internet is tantamount to denying the right to information, which is a constitutional principle, not to mention international laws,” said Emmanuel Diokh, the Senegal lead at Internet Sans Frontières, an international organization that defends access to the internet. 

Since 2017, internet shutdowns have become an increasingly common tactic of information and social control in Africa. Cameroon’s long-serving president, Paul Biya, imposed an internet ban in the English-speaking region of the country in 2017 that lasted three months. In 2019, Zimbabwean President Emmerson Mnangagwa also imposed an internet shutdown in response to protests. Governments in Ethiopia, Eritrea and Equatorial Guinea have also imposed strict internet regulations in the past five years.

All of these countries have used the same rationale: The actions were intended to curb hate speech or to avoid the breakdown of order. Sall has shown one thing to the Senegalese people — the internet is not safe from government control. Instead of curbing hate speech, shutting down the internet is a sign that he is prepared to use any means necessary to decimate the opposition before the elections in February. Still, protesters like Diouf say they will not relent.

