Captured - Coda Story https://www.codastory.com/idea/captured/

When I’m 125? https://www.codastory.com/authoritarian-tech/when-im-125/ Thu, 03 Apr 2025 14:07:36 +0000 What it means to live an optimized life and why Bryan Johnson’s Blueprint just doesn’t get it

I grew up in rural Idaho in the late 80s and early 90s. My childhood was idyllic. I’m the oldest of five children. My father was an engineer-turned-physician, and my mother was a musician — she played the violin and piano. We lived in an amazing community, with great schools, dear friends and neighbors. There was lots of skiing, biking, swimming, tennis, and time spent outdoors. 

If something was very difficult, I was taught that you just had to reframe it as a small or insignificant moment compared to the vast eternities and infinities around us. It was a Mormon community, and we were a Mormon family, part of generations of Mormons. I can trace my ancestry back to the early Mormon settlers. Our family were very observant: going to church every Sunday, and deeply faithful to the beliefs and tenets of the Mormon Church.

There's a belief in Mormonism: "As man is, God once was. As God is, man may become." And since God is perfect, the belief is that we too can one day become perfect. 

We believed in perfection. And we were striving to be perfect—realizing that while we couldn't be perfect in this life, we should always attempt to be. We worked for excellence in everything we did.

It was an inspiring idea to me, but growing up in a world where I felt perfection was always the expectation was also tough. 

In a way, I felt like there were two of me. There was this perfect person that I had to play and that everyone loved. And then there was this other part of me that was very disappointed by who I was—frustrated, knowing I wasn't living up to those same standards. I really felt like two people.

This perfectionism found its way into many of my pursuits. I loved to play the cello. Yo-Yo Ma was my idol. I played quite well and had a fabulous teacher. At 14, I became the principal cellist for our all-state orchestra, and later played in the World Youth Symphony at Interlochen Arts Camp and in a National Honors Orchestra. I was part of a group of kids who were all playing at the highest level. And I was driven. I wanted to be one of the very, very best.

I went on to study at Northwestern in Chicago and played there too. I was the youngest cellist in the studio of Hans Jensen, and was surrounded by these incredible musicians. We played eight hours a day, time filled with practice, orchestra, chamber music, studio, and lessons. I spent hours and hours working through the tiniest movements of the hand, individual shifts, weight, movement, repetition, memory, trying to find perfect intonation, rhythm, and expression. I loved that I could control things, practice, and improve. I could find moments of perfection.

I remember one night being in the practice rooms, walking down the hall, and hearing some of the most beautiful playing I'd ever heard. I peeked in and didn’t recognize the cellist. They were a former student now warming up for an audition with the Chicago Symphony. 

Later on, I heard they didn’t get it. I remember thinking, "Oh my goodness, if you can play that well and still not make it..." It kind of shattered my worldview—it really hit me that I would never be the very best. There was so much talent, and I just wasn't quite there. 

I decided to step away from the cello as a profession. I’d play for fun, but not make it my career. I’d explore other interests and passions.

There's a belief in Mormonism: "As man is, God once was. As God is, man may become."

As I moved through my twenties, my relationship with Mormonism started to become strained. When you’re suddenly 24, 25, 26 and not married, that's tough. Brigham Young [the second and longest-serving prophet of the Mormon Church] said that if you're not married by 30, you're a menace to society. It just became more and more awkward to be involved. I felt like people were wondering, “What’s wrong with him?” 

Eventually, I left the church. And I suddenly felt like a complete person — it was a really profound shift. There weren’t two of me anymore. I didn’t have to put on a front. Now that I didn’t have to worry about being that version of perfect, I could just be me. 

But the desire for perfection was impossible for me to kick entirely. I was still excited about striving, and I think a lot of this energy and focus then poured into my work and career as a designer and researcher. I worked at places like the Mayo Clinic, considered by many to be the world’s best hospital. I studied in London at the Royal College of Art, where I received my master’s on the prestigious Design Interactions course exploring emerging technology, futures, and speculative design. I found I loved working with the best, and being around others who were striving for perfection in similar ways. It was thrilling.

One of the big questions I started to explore during my master's studies in design, and I think in part because I felt this void of meaning after leaving Mormonism, was “what is important to strive for in life?” What should we be perfecting? What is the goal of everything? Or in design terms, “What’s the design intent of everything?”

I spent a huge amount of time with this question, and in the end I came to the conclusion that it’s happiness. Happiness is the goal. We should strive in life for happiness. Happiness is the design intent of everything. It is the idea that no matter what we do, no matter what activity we undertake, we do it because we believe doing it or achieving the thing will make us better off or happier. This fit really well with the beliefs I grew up with, but now I had a new, non-religious way to explore it.

The question then became: What is happiness? I came to the conclusion that happiness is chemical—an evolved sensation that indicates when our needs in terms of survival have been met. You're happy when you have a wonderful meal because your body has evolved to identify good food as improving your chances of survival. The same is true for sleep, exercise, sex, family, friendships, meaning, purpose–everything can be seen through this evolutionary happiness lens. 

 So if happiness evolved as the signal for survival, then I wanted to optimize my survival to optimize that feeling. What would it look like if I optimized the design of my life for happiness? What could I change to feel the most amount of happiness for the longest amount of time? What would life look like if I lived perfectly with this goal in mind?

I started measuring my happiness on a daily basis, and then making changes to my life to see how I might improve it. I took my evolutionary basic needs for survival and organized them in terms of how quickly their absence would kill me as a way to prioritize interventions. 

Breathing was first on the list — we can’t last long without it. So I tried to optimize my breathing. I didn’t really know how to breathe or how powerful breathing is—how it changes the way we feel, bringing calm and peace, or energy and alertness. So I practiced breathing.

The optimizations continued: diet, sleep, exercise, material possessions, friends, family, purpose, along with a shedding of any behaviour or activity that I couldn’t see meaningfully improving my happiness. For example, I looked at clothing and fashion, and couldn’t see any real happiness impact. So I got rid of almost all of my clothing, and have worn the same white t-shirts and grey or blue jeans for the past 15 years.

I got involved in the Quantified Self (QS) movement and started tracking my heart rate, blood pressure, diet, sleep, exercise, cognitive speed, happiness, creativity, and feelings of purpose. I liked the data. I’d go to QS meet-ups and conferences with others doing self experiments to optimize different aspects of their lives, from athletic performance, to sleep, to disease symptoms.

I also started to think about longevity. If I was optimizing for happiness through these evolutionary basics, how long could one live if these needs were perfectly satisfied? I started to put on my websites – “copyright 2103”. That’s when I’ll be 125. That felt like a nice goal, and something that I imagined could be completely possible — especially if every aspect of my life was optimized, along with future advancements in science and medicine.

In 2022, some 12 years later, I came across Bryan Johnson. A successful entrepreneur, also ex-Mormon, optimizing his health and longevity through data. It was familiar. He had come to this kind of life optimization in a slightly different way and for different reasons, but I was so excited by what he was doing. I thought, "This is how I’d live if I had unlimited funds."

He said he was optimizing every organ and body system: What does our heart need? What does our brain need? What does our liver need? He was optimizing the biomarkers for each one. He said he believed in data, honesty and transparency, and following where the data led. He was open to challenging societal norms. He said he had a team of doctors and had reviewed thousands of studies to develop his protocols. He said every calorie had to fight for its life to be in his body. He suggested everything should be third-party tested. He also suggested that in our lifetime advances in medicine would allow people to live radically longer lives, or even to not die. 

These ideas all made sense to me. There was also a kind of ideal of perfect and achieving perfection that resonated with me. Early on, Bryan shared his protocols and data online. And a lot of people tried his recipes and workouts, experimenting for themselves. I did too. It also started me thinking again more broadly about how to live better, now with my wife and young family. For me this was personal, but also exciting to think about what a society might look like when we strived at scale for perfection in this way. Bryan seemed to be someone with the means and platform to push this conversation.

I think all of my experience to this point was the setup for, ultimately, my deep disappointment in Bryan Johnson and my frustrating experience as a participant in his BP5000 study.

In early 2024 there was a callout for people to participate in a study to look at how Bryan’s protocols might improve their health and wellbeing. He said he wanted to make it easier to follow his approach, and he started to put together a product line of the same supplements that he used. It was called Blueprint – and the first 5000 people to test it out would be called the Blueprint 5000, or BP5000. We would measure our biomarkers and follow his supplement regime for three months and then measure again to see its effects at a population level. I thought it would be a fun experiment, participating in real citizen science, moving from n=1 to n=many. We had to apply, and there was a lot of excitement among those of us who were selected. We were a mix of people who had done a lot of self-quantification, nutritionists, athletes, and others looking to take first steps into better personal health. We each had to pay about $2,000 to participate, covering Blueprint supplements and the blood tests, and we were promised that all the data would be shared and open-sourced at the end of the study.

The study began very quickly, and there were red flags almost immediately around its administration, with product delivery problems, defective product packaging, blood test problems, and confusion among participants about the protocols. There wasn’t even a way to see if participants died during the study, which felt weird for work focused on longevity. But we all kind of rolled with it. We wanted to make it work.

We took baseline measurements, weighed ourselves, measured body composition, uploaded Whoop or Apple Watch data, did blood tests covering hundreds of biomarkers, and completed a number of self-reported surveys on things like sexual health and mental health. I loved this type of self-measurement.

Participants connected over Discord, comparing notes, and posting about our progress. 

Right off, some effects were incredible. I had a huge amount of energy. I was bounding up the stairs, doing extra pull-ups without feeling tired. My joints felt smooth. I noticed I was feeling bulkier — I had more muscle definition as my body fat percentage started to drop.

There were also some strange effects. For instance, I noticed in a cold shower, I could feel the cold, but I didn’t feel any urgency to get out. Same with the sauna. I had weird sensations of deep focus and vibrant, vivid vision. I started having questions—was this better? Had I deadened sensitivity to pain? What exactly was happening here?

Then things went really wrong. My ears started ringing — high-pitched and constant. I developed tinnitus. And my sleep got wrecked. I started waking up at two, three, four AM, completely wired, unable to turn off my mind. It was so bad I had to stop all of the Blueprint supplements after only a few weeks.

On the Discord channel where we were sharing our results, I saw Bryan talking positively about people having great experiences with the stack. But when I or anyone else mentioned adverse side effects, the response tended to be: “wait until the study is finished and see if there’s a statistical effect to worry about."

So positive anecdotes were fine, but when it came to negative ones, suddenly, we needed large-scale data. That really put me off. I thought the whole point was to test efficacy and safety in a data-driven way. And the side effects were not ignorable.

Many of us were trying to help each other figure out which interventions in the stack were driving different side effects, but we were never given the “1,000+ scientific studies” that Blueprint was supposedly built upon, which would have had side-effect reporting. We struggled even to get a complete list of the interventions that were in the stack from the Blueprint team, with numbers evolving from 67 to 74 over the course of the study. It was impossible to tell which ingredient in which product was doing what to people.

We were told to stop discussing side effects in the Discord and to email support with any issues instead. I was even kicked off the Discord at one point for “fear mongering” because I was encouraging people to share the side effects they were experiencing.

The Blueprint team was also making changes to the products mid-study, changing protein sources and allulose levels, leaving people with months’ worth of expensive, essentially defective products, and surely affecting the study results.

When Bryan then announced they were launching the BP10000, allowing more people to buy his products, even before the BP5000 study had finished, and without addressing all of the concerns about side effects, it suddenly became clear to me and many others that we had just been part of a launch and distribution plan for a new supplement line, not participants in a scientific study.

To this day, a year later, Bryan still has not released the full BP5000 data set to the participants as he promised to do. In fact, he has ghosted participants and refuses to answer questions about the BP5000. He blocked me on X recently for bringing it up. I suspect that this is because the data is really bad, and my worries line up with reporting from the New York Times, where leaked internal Blueprint data suggested that many of the BP5000 participants experienced negative side effects, with some even seeing serious drops in testosterone or becoming pre-diabetic.

I’m still angry today about how this all went down. I’m angry that I was taken in by someone I now feel was a snake oil salesman. I’m angry that the marketing needs of Bryan’s supplement business and his need to control his image overshadowed the opportunity to generate some real science. I’m angry that Blueprint may be hurting some people. I’m angry because the way Bryan Johnson has gone about this grates on my sense of perfection.

Bryan’s call to “Don’t Die” now rings in my ears as “Don’t Lie” every time I hear it. I hope the societal mechanisms for truth will be able to help him make a course correction. I hope he will release the BP5000 data set and apologize to participants. But Bryan Johnson feels to me like an unstoppable marketing force at this point — full A-list influencer status — and sort of untouchable, with no use for those of us interested in the science and data.

This experience has also had me reflecting on and asking bigger questions of the longevity movement and myself.

We’re ignoring climate breakdown. The latest indications suggest we’re headed toward three degrees of warming. These are societal collapse numbers, in the next 15 years. When there are no bees and no food, catastrophic fires and floods, your Heart Rate Variability doesn’t really matter. There’s a sort of “bunker mentality” prevalent in some of the longevity movement, and wider tech — we can just ignore it, and we’ll magically come out on the other side, sleep scores intact. 

The question then became: What is happiness? I came to the conclusion that happiness is chemical—an evolved sensation that indicates when our needs in terms of survival have been met.

I’ve also started to think that calls to live forever are perhaps misplaced, and that in fact we have evolved to die. Death is a good thing. A feature, not a bug. It allows for new life—we need children, young people, new minds who can understand this context and move us forward. I worry that older minds are locked into outdated patterns of thinking, mindsets trained in and for a world that no longer exists, thinking that destroyed everything in the first place, and which is now actually detrimental to progress. The life cycle—bringing in new generations with new thinking—is the mechanism our species has evolved to function within. Survival is and should be optimized for the species, not the individual.

I love thinking about the future. I love spending time there, understanding what it might look like. It is a huge part of my design practice. But as much as I love the future, the most exciting thing to me is the choices we make right now in each moment. All of that information from our future imaginings should come back to help inform current decision-making and optimize the choices we have now. But I don’t see this happening today. Our current actions as a society seem totally disconnected from any optimized, survivable future. We’re not learning from the future. We’re not acting for the future.

We must engage with all outcomes, positive and negative. We're seeing breakthroughs in many domains happening at an exponential rate, especially in AI. But, at the same time, I see job displacement, huge concentration of wealth, and political systems that don't seem capable of regulating or facilitating democratic conversations about these changes. Creators must own it all. If you build AI, take responsibility for the lost job, and create mechanisms to share wealth. If you build a company around longevity and make promises to people about openness and transparency, you have to engage with all the positive outcomes and negative side effects, no matter what they are.

I’m sometimes overwhelmed by our current state. My striving for perfection and optimizations throughout my life have maybe been a way to give me a sense of control in a world where at a macro scale I don’t actually have much power. We are in a moment now where a handful of individuals and companies will get to decide what’s next. A few governments might be able to influence those decisions. Influencers wield enormous power. But most of us will just be subject to and participants in all that happens. And then we’ll die.

But until then my ears are still ringing.

This article was put together based on interviews J. Paul Neeley did with Isobel Cockerell and Christopher Wylie, as part of their reporting for CAPTURED, our new audio series on how Silicon Valley’s AI prophets are choosing our future for us. You can listen now on Audible.


This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?

Captured: how Silicon Valley is building a future we never chose https://www.codastory.com/authoritarian-tech/captured-silicon-valley-future-religion-artificial-intelligence/ Thu, 03 Apr 2025 14:04:54 +0000 AI’s prophets speak of the technology with religious fervor. And they expect us all to become believers.

In April last year I was in Perugia, at the annual international journalism festival. I was sitting in a panel session about whether AI marked the end of journalism, when a voice note popped up on my Signal. 

It came from Christopher Wylie. He’s a data scientist and the whistleblower who cracked open the Cambridge Analytica scandal in 2018. I had just started working with him on a new investigation into AI. Chris was supposed to be meeting me, but he had found himself trapped in Dubai in a party full of Silicon Valley venture capitalists.

“I don’t know if you can hear me — I’m in the toilet at this event, and people here are talking about longevity, how to live forever, but also prepping for when people revolt and when society gets completely undermined,” he had whispered into his phone. “You have in another part of the world, a bunch of journalists talking about how to save democracy. And here, you've got a bunch of tech guys thinking about how to live past democracy and survive.”

A massive storm and a once-in-a-generation flood had paralyzed Dubai when Chris was on a layover on his way to Perugia. He couldn’t leave. And neither could the hundreds of tech guys who were there for a crypto summit. The freakish weather hadn’t stopped them partying, Chris told me over a frantic Zoom call. 

“You're wading through knee-deep water, people are screaming everywhere, and then…  What do all these bros do? They organize a party. It's like the world is collapsing outside and yet you go inside and it's billionaires and centimillionaires having a party,” he said. “Dubai right now is a microcosm of the world. The world is collapsing outside and the people are partying.”

Chris and I eventually managed to meet up. And for over a year we worked together on a podcast that asks what is really going on inside the tech world.  We looked at how the rest of us —  journalists, artists, nurses, businesses, even governments — are being captured by big tech’s ambitions for the future and how we can fight back. 

Mercy was a content moderator for Meta. She was paid around a dollar an hour for work that left her so traumatized that she couldn't sleep. And when she tried to unionize, she was laid off.

Our reporting took us around the world from the lofty hills of Twin Peaks in San Francisco to meet the people building AI models, to the informal settlements of Kenya to meet the workers training those models.

One of these people was Mercy Chimwani, who we visited in her makeshift house with no roof on the outskirts of Nairobi. There was mud beneath our feet, and above you could see the rainclouds through a gaping hole where the unfinished stairs met the sky. When it rained, Mercy told us, water ran right through the house. It’s hard to believe, but she worked for Meta. 

Mercy was a content moderator, hired by the middlemen Meta used to source employees. Her job was to watch the internet’s most horrific images and videos, training the company’s system so it could automatically filter out such content before the rest of us were exposed to it.

She was paid around a dollar an hour for work that left her so traumatized that she couldn’t sleep. And when she and her colleagues tried to unionize, she was laid off. Mercy was part of the invisible, ignored workforce in the Global South that enables our frictionless life online for little reward. 

Of course, we went to the big houses too — where the other type of tech worker lives. The huge palaces made of glass and steel in San Francisco, where the inhabitants believe the AI they are building will one day help them live forever, and discover everything there is to know about the universe. 

In Twin Peaks, we spoke to Jeremy Nixon, the creator of AGI House San Francisco (AGI for Artificial General Intelligence). Nixon described an apparently utopian future, a place where we never have to work, where AI does everything for us, and where we can install the sum of human knowledge into our brains. “The intention is to allow every human to know everything that’s known,” he told me. 

Later that day, we went to a barbecue in Cupertino and got talking to Alan Boehme, once a chief technology officer for some of the biggest companies in the world, and now an investor in AI startups. Boehme told us how important it was, from his point of view, that tech wasn’t stymied by government regulation. “We have to be worried that people are going to over-regulate it. Europe is the worst, to be honest with you,” he said. “Let's look at how we can benefit society and how this can help lead the world as opposed to trying to hold it back.”

I asked him whether regulation wasn’t part of the reason we have democratically elected governments: to ensure that all people are kept safe, and that no one is left behind by the pace of change. Shouldn’t the governments we elect be the ones deciding whether we regulate AI, and not the people at this Cupertino barbecue?

“You sound like you're from Sweden,” Boehme responded. “I'm sorry, that's social democracy. That is not what we are here in the U.S. This country is based on a Constitution. We're not based on everybody being equal and holding people back. No, we're not in Sweden.” 

As we reported for the podcast, we came to a gradual realization – what’s being built in Silicon Valley isn’t just artificial intelligence, it’s a way of life — even a religion. And it’s a religion we might not have any choice but to join. 

In January, the Vatican released a statement in which it argued that we’re in danger of worshiping AI as God. It's an idea we'd discussed with Judy Estrin, who worked on building some of the earliest iterations of the internet. As a young researcher at Stanford in the 1970s, Estrin was building some of the very first networked connections. She is no technophobe, fearful of the future, but she is worried about the zealotry she says is taking over Silicon Valley.

What if they truly believe humans are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us.

“If you worship innovation, if you worship anything, you can't take a step back and think about guardrails,” she said about the unquestioning embrace of AI. “So we, from a leadership perspective, are very vulnerable to techno populists who come out and assert that this is the only way to make something happen.” 

The first step toward reclaiming our lost agency, as AI aims to capture every facet of our world, is simply to pay attention. I've been struck by how rarely we actually listen to what tech leaders are explicitly saying about their vision of the future. 

There's a tendency to dismiss their most extreme statements as hyperbole or marketing, but what if they're being honest? What if they truly believe humans, or at least most humans, are replaceable, that traditional concepts of humanity are outdated, that a technological "god" should supersede us? These aren't just ideological positions – they're the foundations for the world being built around us right now. 

In our series, we explore artificial intelligence as something that affects our culture, our jobs, our media and our politics. But we should also ask what tech founders and engineers are really building with AI, or what they think they’re building. Because if their vision of society does not have a place for us in it, we should be ready to reclaim our destiny – before our collective future is captured.

Our audio documentary series, CAPTURED: The Secret Behind Silicon Valley’s AI Takeover is available now on Audible. Do please tune in, and you can dig deeper into our stories and the people we met during the reporting below.


This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us? You can listen to the Captured audio series on Audible now.

Who owns the rights to your brain? https://www.codastory.com/authoritarian-tech/who-owns-the-rights-to-your-brain/ Thu, 03 Apr 2025 14:04:17 +0000 Soon technology will enable us to read and manipulate thoughts. A neurobiologist and an international lawyer joined forces to propose ways to protect ourselves

Jared Genser and Rafael Yuste are an unlikely pair. Yuste, a professor at Columbia University, spends his days in neuroscience labs, using lasers to experiment on the brains of mice. Genser has traveled the world as an international human rights lawyer representing prisoners in 30 countries. But when they met, the two became fast friends. They found common ground in their fascination with neurorights – in “human rights,” as their foundation’s website puts it, “for the Age of Neurotechnology.” 

Together, they asked themselves — and the world – what happens when computers start to read our minds? Who owns our thoughts, anyway? This technology is being developed right now — but as of this moment, what happens to your neural data is a legal black box. So what does the fight to build protections for our brains look like? I sat down with Rafael and Jared to find out.

This conversation has been edited for length and clarity.

Q: Rafael, can you tell me how your journey into neurorights started?

Rafael: The story starts with a particular moment in my career. It happened about ten years ago while I was working in a lab at Columbia University in New York. Our research was focused on understanding how the cerebral cortex works. We were studying mice, because the mouse  brain is a good model for the human brain. And what we were trying to do was to implant images into the brains of mice so that they would behave as if they were seeing something, except they weren't seeing anything.

Q: How did that work? 

Rafael: We were trying to take control of the mouse’s visual perception. So we’d implant neurotechnology into a mouse using lasers, which would allow us to record the activity of the part of the brain responsible for vision, the visual cortex, and change the activity of those neurons. With our lasers, we could map the activity of this part of the brain and try to control it. 

These mice were looking at a screen that showed them a particular image, of black and white bars of light that have very high contrast. We used to talk, tongue-in-cheek, about playing the piano with the brain. 

We trained the mice to lick from a little spout of juice whenever they saw that image. With our new technology, we were able to decode the brain signals that corresponded to this image in the mouse and — we hoped — play them back to trick the mice into seeing the image again, even though it wasn’t there. 

Q: So you artificially activated particular neurons in the brain to make it think it had seen that image?

Rafael: These are little laboratory mice. We make a surgical incision and we implant in their skull a transparent chamber so that we can see their brains from above with our microscope, with our lasers. And we use our lasers to optically penetrate the brain. We use one laser to image, to map the activity of these neurons. And we use a second laser, a second wavelength, to activate these neurons again. All of this is done with a very sophisticated microscope and computer equipment. 

Q: So what happened when you tried to artificially activate the mouse’s neurons, to make it think it was looking at the picture of the black and white bars? 

Rafael: When we did that, the mouse licked from the spout of juice in exactly the same way as if it was looking at this image, except that it wasn't. We were putting that image into its brain. The behavior of the mouse when we took over its visual perception was identical to when it was actually seeing the real image.

Q: It must have been a huge breakthrough.

Rafael: Yes, I remember it perfectly. It was one of the most salient days of my life. We were actually altering the behavior of the mice by playing the piano with their cortex. We were ecstatic. I was super happy in the lab, making plans.

 And then when I got home, that's when it hit me. I said, “wait, wait, wait, this means humans will be able to do the same thing to other humans.”

I felt this responsibility, like it was a double-edged sword. That night I didn't sleep, I was shocked. I talked to my wife, who works in human rights. And I decided that I should start to get involved in cleaning up the mess.

Q: What do you mean by that?

Rafael: I felt the responsibility of ensuring that these powerful methods that could decode brain activity and manipulate perception had to be regulated to ensure that they were used for the benefit of humanity.

Q: Jared, can you tell me how you came into this? 

Jared: Rafael and I met about four years ago. I'm an international human rights lawyer based in Washington and very well known globally for working in that field. I had a single hour-long conversation with Rafa when we met, and it completely transformed my view of the human rights challenges we’ll face in this century. I had no idea about neurotechnologies, where they were, or where they might be heading. Learning how far along they have come and what’s coming in just the next few years — I was blown away. I was both excited and concerned as a human rights lawyer about the implications for our common humanity.

Q: What was your reaction when you heard of the mouse experiment?

Jared: Immediately, I thought of The Matrix. He told me that what can be done in a mouse today could be done in a chimpanzee tomorrow and a human after that. I was shocked by the possibilities. While implanting images into a human brain is still far off, there’s every reason to expect it will eventually be possible.

Q: Can you talk me through some of the other implications of this technology? 

Jared: Within the next few years, we’re expected to have wearable brain-computer interfaces that can decode thought to text at 75–80 words per minute with 90 percent accuracy.

That will be an extraordinary revolution in how we interact with technology. Apple is already thinking about this—they filed a patent last year for the next-generation AirPods with built-in EEG scanners. This is undoubtedly one of the applications they are considering.

In just a few years, if you have an iPhone in your pocket and are wearing earbuds, you could think about opening a text message, dictating it, and sending it—all without touching a device. These developments are exciting. 

Rafael: I imagine that we'll be hybrid. And part of our processing will happen with devices that will be connected to our brains, to our nervous system. And this could enhance our perception. Our memories — you would be able to do the equivalent of a web search mentally. And that's going to change our behavior. That's going to change the way we absorb information. 

Jared: Ultimately, there's every reason to expect we’ll be able to cure chronic pain disease. It’s already being shown in labs that an implantable brain-computer interface can manage pain for people with chronic pain diseases. By turning off misfiring neurons, you can reduce the pain they feel.

But if you can turn off the neurons, you can turn on the neurons. And that would mean you'll have a wearable cap or hat that could torture a person simply by flipping a switch. In just a few years, physical torture may no longer be necessary because of brain-computer interfaces. 

And if these devices can decode your thoughts, that raises serious concerns. What will the companies behind these technologies be able to do with your thoughts? Could they be decoded against your wishes and used for purposes beyond what the devices are advertised for? Those are critical questions we need to address.

Q: How did you start thinking about ways to build rights and guardrails around neurotechnology?

Rafael: I was inspired by the Manhattan Project, where scientists who developed nuclear technology were also involved in regulating its use. That led me to think that we should take a similar approach with neurotechnology — where the power to read and manipulate brain activity needs to be regulated. And that’s how we came up with the idea of the Neurorights Foundation.

So in 2017, I organized a meeting at Columbia University’s Morningside campus of experts from various fields to discuss the ethical and societal implications of neurotechnology. And this is where we came up with the idea of neurorights — a sort of brain rights that would protect the brain and brain data. 

Jared:  If you look at global consumer data privacy laws, they protect things like biometric, genetic, and biological information. But neural data doesn't fall under any of these categories. Neural data is electrical and not biological, so it isn't considered biometric data.

There are few, if any, safeguards to protect users from having their neural data used for purposes beyond the intended function of the devices they’ve purchased.

So because neural data doesn't fit within existing privacy protections, it isn't covered by state privacy laws. To address this, we worked with Colorado to adopt the first-ever amendment to its Privacy Act, which defines neural data and includes it under sensitive, protected data.

Rafael: We identified five areas of concern where neurotechnology could impact human rights:

The first is the right to mental privacy – ensuring that the content of our brain activity can't be decoded without consent.

The second is the right to our own mental integrity so that no one can change a person's identity or consciousness.

The third is the right to free will – so that our behavior is determined by our own volition, not by external influences, to prevent situations like what we did to those mice.

The fourth is the right to equal access to neural augmentation.  Technology and AI will lead to human augmentation of our mental processes, our memory, our perception, our capabilities. And we think there should be fair and equal access to neural augmentation in the future.

And the fifth neuroright is protection from bias and discrimination – safeguarding against interference in mental activity, as neurotechnology could both read and alter brain data, and change the content of people's mental activity.

Jared: The Neurorights Foundation is focused on promoting innovation in neurotechnologies while managing the risks of misuse or abuse. We see enormous potential in neurotechnologies that could transform what it means to be human. At the same time, we want to ensure that proper guardrails are in place to protect people's fundamental human rights.


This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?

In Kenya’s slums, they’re doing our digital dirty work https://www.codastory.com/authoritarian-tech/the-hidden-workers-who-train-ai-from-kenyas-slums/ Mon, 31 Mar 2025 19:08:31 +0000 Big Tech makes promises about our gleaming AI future, but its models are built on the backs of underpaid workers in Africa


This article is an adapted extract from CAPTURED, our new podcast series with Audible about the secret behind Silicon Valley’s AI Takeover. Click here to listen.  

We’re moving slowly through the traffic in the heart of the Kenyan capital, Nairobi. Gleaming office blocks have sprung up in the past few years, looming over the townhouses and shopping malls. We’re with a young man named James Oyange — but everyone who knows him calls him Mojez. He’s peering out the window of our 4x4, staring up at the high-rise building where he used to work. 

Mojez first walked into that building three years ago, as a twenty-five-year-old, thinking he would be working in a customer service role at a call center. As the car crawled along, I asked him what he would say to that young man now. He told me he’d tell his younger self something very simple:

“The world is an evil place, and nobody's coming to save you.”

It wasn't until Mojez started work that he realised what his job really required him to do. And the toll it would take.


It turned out, Mojez's job wasn't in customer service. It wasn't even in a call center. His job was to be a “Content Moderator,” working for social media giants via an outsourcing company. He had to read and watch the most hateful, violent, grotesque content released on the internet and get it taken down so the rest of us didn’t have to see it. And the experience changed the way he thought about the world. 

“You tend to look at people differently,” he said, talking about how he would go down the street and think of the people he had seen in the videos — and wonder if passersby could do the same things, behave in the same ways. “Can you be the person who, you know, defiled this baby? Or I might be sitting down with somebody who has just come from abusing their wife, you know.”

There was a time – and it wasn’t that long ago – when things like child pornography and neo-Nazi propaganda were relegated to the darkest corners of the internet. But with the rise of algorithms that can spread this kind of content to anyone who might click on it, social media companies have scrambled to amass an army of hidden workers to clean up the mess.

These workers are kept hidden for a reason. They say if slaughterhouses had glass walls, the world would stop eating meat. And if tech companies were to reveal what they make these digital workers do, day in and day out, perhaps the world would stop using their platforms.

This isn't just about “filtering content.” It's about the human infrastructure that makes our frictionless digital world possible – the workers who bear witness to humanity's darkest impulses so that the rest of us don't have to.

Mojez is fed up with being invisible. He's trying to organise a union of digital workers to fight for better treatment by the tech companies. “Development should not mean servitude,” he said. “And innovation should not mean exploitation, right?” 

We are now in the outskirts of Nairobi, where Mojez has brought us to meet his friend, Mercy Chimwani. She lives on the ground floor of the half-built house that she rents. There's mud beneath our feet, and above you can see the rain clouds through a gaping hole where the unfinished stairs meet the sky. There’s no electricity, and when it rains, water runs right through the house. Mercy shares a room with her two girls, her mother, and her sister. 

It’s hard to believe, but this informal settlement without a roof is the home of someone who used to work for Meta. 

Mercy is part of the hidden human supply chain that trains AI. She was hired by what’s called a BPO, or a Business Process Outsourcing company, a middleman that finds cheap labour for large Western corporations. Often people like Mercy don’t even know who they’re really working for. But for her, the prospect of a regular wage was a step up, though her salary – $180 a month, or about a dollar an hour – was low, even by Kenyan standards. 

She started out working for an AI company – she did not know the name – training software to be used in self-driving cars. She had to annotate what’s called a “driveable space” – drawing around stop signs and pedestrians, teaching the cars’ artificial intelligence to recognize hazards on its own. 

And then, she switched to working for a different client: Meta. 

“On the first day on the job it was hectic. Like, I was telling myself, like, I wish I didn't go for it, because the first image I got to see, it was a graphic image.” The video, Mercy told me, is imprinted on her memory forever. It was a person being stabbed to death. 

“You could see people committing suicide live. I also saw a video of a very young kid being raped live. And you are here, you have to watch this content. You have kids, you are thinking about them, and here you are at work. You have to like, deal with that content. You have to remove it from the platform. So you can imagine all that piling up within one person. How hard it is,” Mercy said. 

Silicon Valley likes to position itself as the pinnacle of innovation. But what they hide is this incredibly analogue, brute force process where armies of click workers relentlessly correct and train the models to learn. It’s the sausage factory that makes the AI sausage. Every major tech company does this – TikTok, Facebook, Google and OpenAI, the makers of ChatGPT. 

Mercy was saving to move to a house that had a proper roof. She wanted to put her daughters into a better school. So she felt she had to carry on earning her wage. And then she realised that nearly everyone she worked with was in the same situation as her. They all came from the very poorest neighborhoods in Nairobi. "I realised, like, yo, they're really taking advantage of people who are from the slums," she said. 

After we left Mercy’s house, Mojez took us to the Kibera informal settlement. “Kibera is the largest urban slum area in Africa, and the third largest slum in the entire world,” he told us as we drove carefully through the twisting, crooked streets. There were people everywhere – kids practicing a dance routine, whole families piled onto motorbikes. There were stall holders selling vegetables and live chickens, toys and wooden furniture. Most of the houses had corrugated iron roofs and no running water indoors.

Kibera is where the model of recruiting people from the poorest areas to do tech work was really born. A San Francisco-based organization called Sama started training and hiring young people here to become digital workers for Big Tech clients including Meta and Open AI.

Sama claimed that they offered a way for young Kenyans to be a part of Silicon Valley’s success. Technology, they argued, had the potential to be a profound equalizer, to create opportunities where none existed.

Mojez has brought us into the heart of Kibera to meet his friend Felix. A few years ago Felix heard about the Sama training school - back then it was called Samasource. He heard how they were teaching people to do digital work, and that there were jobs on offer. So, like hundreds of others, Felix signed up.

“This is Africa,” he said, as we sat down in his home. “Everyone is struggling to find a job.” He nodded his head out towards the street. “If right now you go out here, uh, out of 10, seven or eight people have worked with SamaSource.” He was referring to people his age – Gen Z and young millennials – who were recruited by Sama with the promise that they would be lifted out of poverty. 

And for a while, Felix’s life was transformed. He was the main breadwinner for his family, for his mother and two kids, and at last he was earning a regular salary.

But in the end, Felix was left traumatized by the work he did. He was laid off. And now he feels used and abandoned. “There are so many promises. You’re told that your life is going to be changed, that you’re going to be given so many opportunities. But I wouldn't say it's helping anyone, it's just taking advantage of people,” he said.

When we reached out to Sama, a PR representative disputed the notion that Sama was taking advantage and cashing in on Silicon Valley’s headlong rush towards AI. 

Mental health support, the PR insisted, had been provided, and the majority of Sama’s staff were happy with the conditions. “Sama,” she said, “has a 16-year track record of delivering meaningful work in Sub-Saharan Africa, lifting nearly 70,000 people out of poverty.” Sama eventually cancelled its contracts with Meta and OpenAI, and says it no longer recruits content moderators. When we spoke to OpenAI, which has hired people in Kenya to train its models, they said they believed data annotation work needed to be done humanely. The efforts of the Kenyan workers were, they said, “immensely valuable.”

You can read Sama’s and OpenAI’s responses to our questions in full below. Meta did not respond to our requests for comment.

Despite their defense of their record, Sama is facing legal action in Kenya. 

“I think when you give people work for a period of time and those people can't work again because their mental health is destroyed, that doesn't look like lifting people out of poverty to me,” said Mercy Mutemi, a lawyer representing more than 180 content moderators in a lawsuit against Sama and Meta. The workers say they were unfairly laid off when they tried to lobby for better conditions, and then blacklisted.

“You've used them,” Mutemi said. “They're in a very compromised mental health state, and then you've dumped them. So how did you help them?” 

As Mutemi sees it, the result of recruiting from the slum areas is that you have a workforce of disadvantaged people, who’ll be less likely to complain about conditions.

“People who've gone through hardship, people who are desperate, are less likely to make noise at the workplace because then you get to tell them, ‘I will return you to your poverty.’ What we see is again, like a new form of colonization where it's just extraction of resources, and not enough coming back in terms of value whether it's investing in people, investing in their well-being, or just paying decent salaries, investing in skill transfer and helping the economy grow. That's not happening.” 

“This is the next frontier of technology,” she added, “and you're building big tech on the backs of broken African youth.”

At the end of our week in Kenya, Mojez takes us to Karura forest, the green heart of Nairobi. It’s an oasis of calm, where birds, butterflies and monkeys live among the trees, and the rich red earth has that amazing, just-rained-on smell. He comes here to decompress, and to try to forget about all the horrific things he’s seen while working as a content moderator. 

Mojez describes the job he did as a digital worker as a loss of innocence. “It made me think about, you know, life itself, right? And that we are alone and nobody's coming to save us. So nowadays I've gone back to how my ancestors used to do their worship — how they used to give back to nature.” We're making our way towards a waterfall. “There's something about the water hitting the stones and just gliding down the river that is therapeutic.”

For Mojez, one of the most frightening things about the work he was doing was the way that it numbed him, accustomed him to horror. Watching endless videos of people being abused, beheaded, or tortured - while trying to hit performance targets every hour - made him switch off his humanity, he said.

A hundred years from now, will we remember the workers who trained humanity’s first generation of AI? Or will these 21st-century monuments to human achievement bear only the names of the people who profited from their creation?

Artificial intelligence may well go down in history as one of humanity’s greatest triumphs.  Future generations may look back at this moment as the time we truly entered the future.

And just as ancient monuments like the Colosseum endure as a lasting embodiment of the values of their age, AI will embody the values of our time too.  

So, we face a question: what legacy do we want to leave for future generations? We can't redesign systems we refuse to see. We have to acknowledge the reality of the harm we are allowing to happen. But every story – like those of Mojez, Mercy and Felix – is an invitation. Not to despair, but to imagine something better for all of us rather than the select few.

Christopher Wylie and Becky Lipscombe contributed reporting. Our new audio series on how Silicon Valley’s AI prophets are choosing our future for us is out now on Audible.


This story is part of “Captured”, our special issue in which we ask whether AI, as it becomes integrated into every part of our lives, is now a belief system. Who are the prophets? What are the commandments? Is there an ethical code? How do the AI evangelists imagine the future? And what does that future mean for the rest of us?

I’m a neurology ICU nurse. The creep of AI in our hospitals terrifies me https://www.codastory.com/surveillance-and-control/nursing-ai-hospitals-robots-capture/ Tue, 12 Nov 2024 12:56:45 +0000

The healthcare landscape is changing fast thanks to the introduction of artificial intelligence. These technologies have shifted decision-making power away from nurses and on to the robots. Michael Kennedy, who works as a neuro-intensive care nurse in San Diego and is a member of the California Nurses Association and National Nurses United, believes AI could destroy nurses’ intuition, skills, and training, with the result that patients are left watched by more machines and fewer pairs of eyes. Here is Michael’s story, as told to Coda’s Isobel Cockerell. This conversation has been edited and condensed for clarity.  

Every morning at about 6:30am I catch the trolley car from my home in downtown San Diego up to the hospital where I work — a place called La Jolla. Southern California isn't known for its public transportation, but I'm the weirdo that takes it — and I like it. It's quick, it's easy, I don't have to pay for parking, it's wonderful. A typical shift is 12 hours and it ends up being 13 by the time you do your report and get all your charting done, so you're there for a very long time. 

Most of the time, I don’t go to work expecting catastrophe — of course it happens once in a while, but usually I’m just going into a normal job, where you do routine stuff.

I work in the neuro-intensive care unit. The majority of our patients have just had neurosurgery for tumors or strokes. It’s not a happy place most of the time. I see a lot of people with long recoveries ahead of them who need to relearn basic skills — how to hold a pencil, how to walk. After a brain injury, you lose those abilities, and it's a long process to get them back. It's not like we do a procedure, fix them, and they go home the next day. We see patients at their worst, but we don't get to see the progress. If we're lucky, we might hear months later that they've made a full recovery. It's an environment where there's not much instant gratification. 

As a nurse, you end up relying on intuition a lot. It's in the way a patient says something, or just a feeling you get from how they look. It’s not something I think machines can do — and yet, in recent years, we’ve seen more and more artificial intelligence creep into our hospitals. 

I get to work at 7am. The hospital I work at looks futuristic from the outside — it’s this high-rise building, all glass and curved lines. It’s won a bunch of architectural awards. The building was financed by Irwin Jacobs, the billionaire co-founder of Qualcomm, a big San Diego tech company. I think the hospital being funded by a tech billionaire really has a huge amount to do with the way they see technology and the way they dive headfirst into it.

They always want to be on the cutting edge of everything. And so when something new comes out, they’re going to jump right on it. I think that’s part of why they went all in on this AI thing.

We didn't call it AI at first. The first thing that happened was these new innovations just crept into our electronic medical record system. They were tools that monitored whether specific steps in patient treatment were being followed. If something was missed or hadn’t been done, the AI would send an alert. It was very primitive, and it was there to stop patients falling through the cracks. 

Then in 2018, the hospital bought a new program from Epic, the electronic medical record company. It predicted something called “patient acuity” — basically the workload each patient requires from their nursing care. It’s a really important measurement we have in nursing, to determine how sick a person is and how many resources they will need. At its most basic level, we just classify patients as low, medium or high need. Before the AI came in, we basically filled in this questionnaire — which would ask things like how many meds a patient needed. Are they IV meds? Are they crushed? Do you have a central line versus a peripheral? That sort of thing. 

This determined whether a patient was low-, medium- or high-need. And we’d figure out staffing based on that. If you had lots of high-need patients, you needed more staffing. If you had mostly low-need patients, you could get away with fewer.

We used to answer the questions ourselves and we felt like we had control over it. We felt like we had agency. But one day, it was taken away from us. Instead, they bought this AI-powered program without notifying the unions, nurses, or representatives. They just started using it and sent out an email saying, 'Hey, we're using this now.'

The new program used AI to pull from a patient’s notes, from the charts, and then gave them a special score. It was suddenly just running in the background at the hospital.

The problem was, we had no idea where these numbers were coming from. It felt like magic, but not in a good way. It would spit out a score, like 240, but we didn't know what that meant. There was no clear cutoff for low, medium, or high need, making it functionally useless.

The upshot was, it took away our ability to advocate for patients. We couldn’t point to a score and say, 'This patient is too sick, I need to focus on them alone,' because the numbers didn’t help us make that case anymore. They didn’t tell us if a patient was low, medium, or high need. They just gave patients a seemingly random score that nobody understood, on a scale of one to infinity.

We felt the system was designed to take decision-making power away from nurses at the bedside, and to deny us a say in how much staffing we need.

That was the first thing.

Then, earlier this year, the hospital got a huge donation from the Jacobs family, and they hired a chief AI officer. When we heard that, alarm bells went off — “they're going all in on AI,” we said to each other. We found out about this Scribe technology that they were rolling out. It’s called Ambient Documentation. They announced they were going to pilot this program with the physicians at our hospital. 

It basically records your encounter with your patient. And then it’s like ChatGPT or a large language model — it takes everything and just auto-populates a note. Or your “documentation.”

There were obvious concerns with this, and the number one thing that people said was, "Oh my god — it's like mass surveillance. They're gonna listen to everything our patients say, everything we do. They're gonna track us.”

This isn't the first time they've tried to track nurses. My hospital hasn’t done this, but there are hospitals around the US that use tracking tags to monitor how many times you go into a room to make sure you're meeting these metrics. It’s as if they don’t trust us to actually care for our patients. 

We leafletted our colleagues to try to educate them on what “Ambient Documentation” actually means. We demanded to meet with the chief AI officer. He downplayed a lot of it, saying, 'No, no, no, we hear you. We're right there with you. We're starting; it’s just a pilot.' A lot of us rolled our eyes.

He said they were adopting the program because of physician burnout. It’s true, documentation is one of the most mundane aspects of a physician's job, and they hate doing it.

The reasoning for bringing in AI tools to monitor patients is always that it will make life easier for us, but in my experience, technology in healthcare rarely makes things better. It usually just speeds up the factory floor, squeezing more out of us, so they can ultimately hire fewer of us. 

“Efficiency” is a buzzword in Silicon Valley, but get it out of your mind when it comes to healthcare. When you're optimizing for efficiency, you're getting rid of redundancies. But when patients' lives are at stake, you actually want redundancy. You want extra slack in the system. You want multiple sets of eyes on a patient in a hospital. 

When you try to reduce everything down to a machine that one person relies on to carry out decisions, then there's only one set of eyes on that patient. That may be efficient, but by creating efficiency, you're also creating a lot of potential points of failure. So, efficiency isn't as efficient as tech bros think it is.

In their ideal world, technology would take away mundane tasks, allowing us to focus on patient encounters instead of spending our time typing behind a computer.

But who thinks recording everything a patient says and storing it on a third-party server is a good idea? That’s crazy. I’d need assurance that the system is 100 percent secure — though nothing ever is. We’d all love to be freed from documentation requirements and be more present with our patients.

There’s a proper way to do this. AI isn’t inevitable, but it’s come at us fast. One day, ChatGPT was a novelty, and now everything is AI. We’re being bombarded with it.

The other thing that’s burst into our hospitals in recent years is an AI-powered alert system. They’re these alerts that ping us to make sure we’ve done certain things — like checked for sepsis, for example. They’re usually not that helpful, or not timed very well. The goal is to stop patients falling through the cracks — that’s obviously a nightmare scenario in healthcare. But I don’t think the system is working as intended.

I don’t think the goal is really to provide a safety net for everyone — I think it’s actually to speed us up, so we can see more patients, reduce visits down from 15 minutes to 12 minutes to 10. Efficiency, again.

I believe the goal is for these alerts to eventually take over healthcare, to tell us how to do our jobs, rather than have hospitals spend money training nurses and helping them develop critical thinking skills, experience, and intuition. So we basically just become operators of the machines.

As a seasoned nurse, I’ve learned to recognize patterns and anticipate potential outcomes based on what I see. New nurses don’t have that intuition or forethought yet; developing critical thinking is part of their training. When they experience different situations, they start to understand that instinctively.

In the future, with AI, and alerts pinging them all day reminding them how to do their job, new cohorts of nurses might not develop that same intuition. Critical thinking is being shifted elsewhere — to the machine. I believe the tech leaders envision a world where they can crack the code of human illness and automate everything based on algorithms. They just see us as machines that can be figured out.

The artwork for this piece was developed during a Rhode Island School of Design course taught by Marisa Mazria Katz, in collaboration with the Center for Artistic Inquiry and Reporting.

Legendary Kenyan lawyer takes on Meta and Chat GPT https://www.codastory.com/authoritarian-tech/mercy-mutemi-meta-lawsuit/ Tue, 22 Oct 2024 13:09:27 +0000 https://www.codastory.com/?p=52322 Mercy Mutemi has made headlines all over the world for standing up for Kenya’s data annotators and content moderators, arguing the work they are subjected to is a new form of colonialism

Tech platforms run from Silicon Valley, and the handful of men behind them, often seem and act invincible. But a legal battle in Kenya is setting an important precedent for disrupting Big Tech’s strategy of obscuring and deflecting attention from the effects their platforms have on democracy and human rights around the world.

Kenya is hosting unprecedented lawsuits against Meta Inc., the parent company of Facebook, WhatsApp, and Instagram. Mercy Mutemi, who made last year’s TIME 100 list, is a Nairobi-based lawyer who is leading the cases. She spends her days thinking about what our consumption of digital products should look like in the next 10 years. Will it be extractive and extortionist, or will it be beneficial? What does it look like from an African perspective? 

The conversation with Mercy Mutemi has been edited and condensed for clarity.

Isobel Cockerell: You’ve described this situation as a new form of colonialism. Could you explain that?  

Mercy Mutemi: From the government side, Kenya’s relationship with Big Tech, when it comes to annotation work, is framed as a partnership. But in reality, it’s exploitation. We’re not negotiating as equal partners. People aren’t gaining skills to build our own internal AI development. But at the same time, you're training all the algorithms for all the big tech companies, including Tesla, including the Walmarts of this world. All that training is happening here, but it just doesn't translate into skill transfer. It’s broken up into labeling work without any training to broaden people’s understanding of how AI works. What we see is, again, like a new form of colonization where it's just extraction of resources, with not enough coming back in terms of value, whether it's investing in people, investing in their growth and well-being, just paying decent salaries and helping the economy grow, for example, or investing in skill transfer. That's not happening. And when we say we're just creating jobs in the thousands, even hundreds of thousands, if the jobs are not quality jobs, then it's not a net benefit at the end of the day. That's the problem.

IC: Behind the legal battle with Meta are workers and their conditions. What challenges do they face in these tech roles, particularly content moderation?  

MM: Content moderators in Kenya face horrendous conditions. They’re often misled about the nature of the work, not warned that the work is going to be dangerous for them. There’s no adequate care provided to look after these workers, and they’re not paid well enough. And they’ve created this ecosystem of fear — it’s almost like this special Stockholm syndrome has been created where you know what you're going through is really bad, but you're so afraid of the NDA that you just would rather not speak up.  

If workers raise issues about the exploitation, they’re let go and blacklisted. It’s a classic “use and dump” model.

IC: What are your thoughts on Kenya being dubbed the “Silicon Savannah”?  

MM: I do not support that framing, just because I feel like it’s quite problematic to model your development after Silicon Valley, considering all the problems that have come out of there. But that branding has been part of Kenya's mission to be known as a digital leader. The way Silicon Valley interprets that is by seeing Kenya as a place where they can offload work they don’t want to do in the U.S. Work that is often dangerous. I’m talking about content moderation work, annotation work, and algorithm training, which in its very nature involves a lot of exposure to harmful content. That work is dumped on Kenya. Kenya says it’s interested in digital development, but what Kenya ends up getting is work that poses serious risks, rather than meaningful investment in its people or infrastructure.

IC: How did you first become interested in these issues?  

MM: It started when I took a short course on the law and economics of social media giants. That really opened my eyes to how business models are changing. It’s no longer just about buying and selling goods directly—now it’s about data, algorithms, and the advertising model. It was mind-blowing to learn how Google and Meta operate their algorithms and advertising models. That realization pushed me to study internet governance more deeply.

IC: Can you explain how data labeling and moderation for a large language model – like an AI chatbot – works?  

MM: When the initial version of ChatGPT was released, it had lots of sexual violence in it. So to clean up an algorithm like that, you just teach it all the worst kinds of sexual violence. And who does that? It's the data labelers. So for them to do that, they have to consume it and teach it to the algorithm. So what they needed to do is consume hours of text of every imaginable sexual violence simulation, like a rape or a defilement of a minor, and then label that text. Over and over again. So then, what the algorithm knows is, okay, this is what a rape looks like. That way, if you ask ChatGPT to show you the worst rape that could ever happen, there are now metrics in place that tell it not to give out this information because it’s been taught to recognize what it’s being asked for. And that’s thanks to Kenyan youth whose mental health is now toast, and whose life has been compromised completely. All because ChatGPT had to be this fancy thing that the world celebrated. And Kenyan youth got nothing from it.  

This is the next frontier of technology, and they’re building big tech on the backs of broken African youth, to put it simply. There's no skill transfer, no real investment in their well-being, just exploitation.

IC: But workers aren’t working directly for the Big Tech companies, right? They’re working for these middlemen companies that match Big Tech companies with workers — can you explain how that works?  

MM: Big Tech is not planting any roots in the country when it comes to hiring people to moderate content or train algorithms for AI. They're not really investing in the country in the sense that there’s no actual person to hold liable should anything go south. There's no registered office in Kenya for companies like Meta, TikTok, OpenAI. And really, it’s important that companies have a presence in a country so that there can be discussions around accountability. But that part is purposely left out.  

MM: Instead, what you have are these middlemen. They’re called Business Process Outsourcing companies, or BPOs. They are run from the U.S., not locally, but they have a registered office here, and a presence here. A person who can be held accountable. And then what happens is big tech companies negotiate these contracts with the business. So for example, I have clients who worked for Meta or OpenAI through a middleman company called Sama, or who worked for Meta through another called Majorel, or who worked for Scale AI through a company called RemoTasks.

It’s almost like they're agents of big tech companies. So they will do big tech's bidding. If the big tech says jump, then they jump. So we find ourselves in this situation where these companies purely exist for the cover of escaping liability.  

And in the case of Meta, for example, when recruitments happen, the advertisements don't come from Meta, they come from the middleman. And what we've seen is purposeful, intentional efforts to hide the client, so as not to disclose that you're coming to do work for Meta… and not even being honest or upfront about the nature of the work, not even saying that this is content moderation work that you're coming to do.

Kenyan lawyer Mercy Mutemi (C) speaks to the media after filing a lawsuit against Meta at Milimani Law Courts in Nairobi on December 14, 2022. Yasuyoshi Chiba/AFP via Getty Images.

IC: What are the repercussions of this on workers?  

MM: Their mental health is destroyed – and there are often no measures in place to protect their well-being or respect them as workers. And then it's their job to figure out how to get out of that rut because they still are a breadwinner in an African context, and they still have to work, right? And in this community where mental health isn't the most spoken-about thing, how do you explain to your parents that you can't work?  

I literally had someone say that to me—that they never told their parents what work they do because how do you explain to your parents that this is what you watch, day in, day out? And that's why it's not enough for the government to say, “yes, 10,000 more jobs.” You really do have to question what the nature of these jobs is and how we are protecting the people doing them, how we are making sure that only people who willingly want to do the job are doing it.

IC: You said the government and the companies themselves have argued that this moderation work is bringing jobs to Kenya, and there’s also been this narrative that — almost like an NGO — these companies are helping lift people out of poverty. What do you say to that?

MM: I think when you give people work for a period of time and those people can’t work again because their mental health is destroyed, that doesn’t look like lifting people out of poverty to me. That looks like entrenching the problem further, because you’ve destroyed not just one person, but everybody that relies on that person and everybody that’s now going to be roped into the care of that one person. You’ve destroyed a bigger community than the one you set out to help.

IC: Do you feel alone in this fight?

MM: I wouldn’t say I’m alone, but it’s not a popular case to take at this time. Many people don’t want to believe that Kenya isn’t really benefiting from these big tech deals. It’s not a narrative that Kenyans want to believe, and it’s just not the story that the government wants at the end of the day. So not enough questions are being asked. No one’s really opening the curtain to see: what is this work? Are our local companies benefiting from it? Nobody’s really asking those questions. So then in that context, imagine standing up to challenge those jobs.

IC: Do you think it’s possible for Kenya to benefit from this kind of work without the exploitation?

MM: Let me just be very categorical. My position is not that this work shouldn’t be coming into Kenya. But it can’t be the way it is now, where companies get to say “either you take our work and take it as horrible as it is with no care, and we exploit you to our satisfaction, or we leave.” No. You can have dangerous work done in Kenya, but with an appropriate level of care, with respect, and while upholding the rights of these workers. It’s going to be a long journey to achieve justice.

IC: In September, the Kenyan Court of Appeal made a ruling — that Meta, a U.S. company, can be sued in Kenya. Can you explain why this is important?

MM: The ruling by the Court of Appeal brings relief to the moderators. Their case at the Labour Court had been stopped as we awaited the decision by the Court of Appeal on whether or not Meta can be sued in Kenya by former Facebook Content Moderators. The Court of Appeal has now cleared the path for the moderators to present their evidence to the court against Meta, Sama and Majorel for human rights violations. They finally get a chance at a fair hearing and access to justice. 

The Court of Appeal has affirmed the groundbreaking decision of the Labour Court that in today’s world, digital workspaces are adequate anchors of jurisdiction. This means that a court can assume jurisdiction based on the location of an employee working remotely. That is a timely decision, as the nature of work and workspaces has changed drastically.

What this means for Meta is that they now have a chance to fully participate in the suit against them. What we have seen up to this point is constant dismissiveness of the authority of Kenyan courts, with Meta claiming they cannot be sued in Kenya. The Court of Appeal has found that they not only can be sued but are properly sued in these cases. We look forward to participating fully in the legal process and presenting our clients’ case to the court for a fair determination.

Correction: This article has been updated to reflect that the Court of Appeal ruling was in regard to the case of 185 former Facebook content moderators, not a separate case of Mutemi's brought by two Ethiopian citizens.

Why did we write this story?

The world’s biggest tech companies today have more power and money than many governments. Court battles in Kenya could jeopardize the outsourcing model upon which Meta has built its global empire.

To dive deeper into the subject, read Silicon Savanna: The workers taking on Africa’s digital sweatshops

In September, the Kenyan Court of Appeal ruled that Meta could be sued in Kenya, and that the case of 185 former Facebook content moderators, who argue that they were unlawfully fired en masse, can proceed to trial in a Kenyan court. Meta has argued that as a U.S.-registered company, any claims against the company should be made in the U.S. The ruling was a landmark victory for Mutemi and her clients. 

Stop Drinking from the Toilet! https://www.codastory.com/authoritarian-tech/stop-drinking-from-the-toilet/ Tue, 10 Sep 2024 13:02:17 +0000 https://www.codastory.com/?p=51640 We have systems to filter our water. Now we need systems to filter our tech


Stop Drinking from the Toilet!

Judy Estrin has been thinking about digital connectivity since the early days of Silicon Valley. As a junior researcher at Stanford in the 1970s, she worked on what became the Internet. She built tech companies, became Cisco’s Chief Technology Officer, and served on the boards of Disney and FedEx. Now, she’s working to build our understanding of the digital systems that run our lives.

We can’t live without air. We can’t live without water. And now we can’t live without our phones. Yet our digital information systems are failing us. Promises of unlimited connectivity and access have led to a fractionalization of reality and levels of noise that undermine our social cohesion. Without a common understanding and language about what we are facing, we put at risk our democratic elections, the resolution of conflicts, our health and the health of the planet. In order to move beyond just reacting to the next catastrophe, we can learn something from water. We turn on the tap to drink or wash, rarely considering where the water comes from–until a crisis of scarcity or quality alerts us to a breakdown. As AI further infiltrates our digital world, a crisis in our digital information systems necessitates paying more attention to its flow.

Water is life sustaining, yet too much water, or impure water, makes us sick, destroys our environment, or even kills us. A bit of water pollution may not be harmful, but we know that if the toxins exceed a certain level the water is no longer potable. We have learned that water systems need to protect quality at the source, that lead from pipes leaches into the water, and that separation is critical–we don’t use the same pipes for sourcing drinking water and for draining waste and sewage.

Today, digital services have become the information pipes of our lives. Many of us do not understand or care how they work. Like water, digital information can have varying levels of drinkability and toxicity–yet we don’t know what we are drinking. Current system designs are corroded by the transactional business models of companies that neither have our best interests in mind nor possess the tools to adequately detect impurities and sound the alarm. Digital platforms, such as Instagram, TikTok, or YouTube, don’t differentiate between types of content coming into their systems and they lack the equivalent of effective water filters, purification systems, or valves to stop pollution and flooding. We are both the consumers and the sources of this ‘digital water’ flowing through and shaping our minds and lives. Whether we want to learn, laugh, share, or zone out, we open our phones and drink from that well. The data we generate fuels increasingly dangerous ad targeting and surveillance of our online movements. Reality, entertainment, satire, facts, opinion, and misinformation all blend together in our feeds.

Digital platforms mix “digital water” and “sewage” in the same pipes, polluting our information systems and undermining the foundations of our culture, our public health, our economy, and our democracy. We see the news avoidance, extremism, loss of civility, reactionary politics, and conflicts. Less visible are other toxins, including the erosion of trust, critical thinking, and creativity. Those propagating the problems deny responsibility and ignore the punch line of Kranzberg’s first law which states, “technology is neither good nor bad; nor is it neutral." We need fundamental changes to the design of our information distribution systems so that they can benefit society and not just increase profit to a few at our expense.

To start, let us acknowledge the monetary incentives behind the tech industry’s course of action that dragged the public down as they made their fortunes. The foundational Internet infrastructure, developed in the 1970s and 80s, combined public and private players, and different levels of service and sources. Individual data bits traveled in packets down a shared distributed network designed to avoid single points of failure. Necessary separation and differentiation were enforced by the information service applications layered on top of the network. Users proactively navigated the web by following links to new sites and information, choosing for themselves where they sourced their content, be it their favorite newspaper or individual blogs. Content providers relied heavily on links from other sites, creating interdependence that incentivized more respectful norms and behaviors, even when there was an abundance of disagreements and rants.

Then the 2000s brought unbridled consolidation as the companies that now make up BigTech focused on maximizing growth through ad-driven marketplaces. As with some privatized water systems, commercial incentives were prioritized above wellness. This was only amplified in the product design around the small screen of mobile phones, social discovery of content, and cloud computing. Today, we drink from a firehose of endless scrolling that has eroded our capacity for any differentiation or discernment. Toxicity is amplified and nuance eliminated by algorithms that curate our timelines based on an obscure blend of likes, shares, and behavioral data. As we access information through a single feed, different sources and types of content–individuals, bots, hyperbolic news headlines, professional journalism, fantasy shows, and human or AI generated–all begin to feel the same.

Social media fractured the very idea of truth by taking control of the distribution of information. Now, generative AI has upended the production of content through an opaque mixing of vast sources of public and private, licensed, and pirated data. Once again, an incentive for profit and power is driving product choices towards centralized, resource-intensive Large Language Models (LLMs). The LLMs are trained to recognize, interpret, and generate language in obscure ways and then spit out often awe-inspiring text, images, and videos on demand. The artificial sweetener of artificial intelligence entices us to drink, even as we know that something may be wrong. The social media waters are already muddied by algorithms and agents, and we are now seeing the “enshittification” (a term aptly coined by Cory Doctorow) of platforms, as well as of the overall internet, with increasing amounts of AI-generated excrement in our feeds and searches.

We require both behavioral change and a new, more distributed digital information system–one that combines public and private resources to ensure that neither our basic ‘tap’ water nor our fancy bottled water will poison our children. This will require overcoming two incredibly strong sets of incentives. The first is a business culture that demands dominance through maximizing growth by way of speed and scale. Second is our prioritization of convenience, with a boundless desire for a frictionless world. The fact that this is truly a “wicked problem” does not relieve us of the responsibility to take steps to improve our condition. We don’t need to let go entirely of either growth or convenience. We do need to recommit to a more balanced set of values.

As with other areas of public safety, mitigating today’s harms requires broad and deep education programs to spur individual and collective responsibility. We have thrown out the societal norms that guide us not to spit in the proverbial drink of the other, or piss in the proverbial pool. Instead of continuing to adapt to the lowest common decency, we need digital hygiene to establish collective norms for kids and adults. Digital literacy must encourage critical thinking and navigation of our digital environments with discernment; in other words, with a blend of trust and mistrust. In the analog world, our senses of smell and taste warn us when something is off. We need to establish the ability to detect rotten content and sources–from sophisticated phishing to deep fakes. With our feeds already awash in conspiracy theories and propaganda, conversational AI applications bring new avenues for manipulation, as well as a novel set of emotional and ethical challenges. As we have learned from food labeling or terms of service, transparency only works when backed by the education to decipher the facts.

Mitigation is not sufficient. We need entrepreneurs, innovators, and funders who are willing to rethink systems and interface design assumptions and build products that are more proactive, distributed, and reinforcing of human agency. Proactive design must incorporate safety valves or upfront filters. Distributed design approaches can use less data and special-purpose models, and the interconnection of diverse systems can provide more resilience than consolidated homogeneous ones. We need not accept the inevitability of general-purpose brute-force data beasts. Human agency designs would break with current design norms. The default to everything looking the same leads to homogeneity and flattening. Our cars would be safer if they didn’t distract us like smartphones on wheels. The awe of discovery is healthier than the numbing of infinite scrolls. Questioning design and business model assumptions requires us to break out of our current culture of innovation, which is too focused on short-term transactions and rapid scaling. The changes in innovation culture have influenced other industries and institutions, including journalism, which is too often hijacked by today’s commercial incentives. We cannot give up on a common understanding and knowledge, or on the importance of trust and common truths.

We need policy changes to balance private and public sector participation. Many of the proposals on the table today lock in the worst of the problems, with legislation that reinforces inherently bad designs, removes liability, and/or targets specific implementations (redirecting us to equally toxic alternatives). Independent funding for education, innovation, and research is required to break the narrative and value capture of the BigTech ecosystem. We throw around words like safe, reliable, or responsible without a common understanding of what it means to really be safe. How can we ensure our water is safe to drink? Regulation is best targeted at areas where leakage leads to the most immediate harm–like algorithmic amplification, and lack of transparency and accountability. Consolidation into single points of power inevitably leads to broad based failure. A small number of corporations have assumed the authority of massive utilities that act as both public squares and information highways–without any of the responsibility.

Isolation and polarization have evolved from a quest for a frictionless society with extraordinary systems handcrafted to exploit our attention. It is imperative that we create separation, valves, and safeguards in the distribution and access of digital information. I am calling not for a return to incumbent gatekeepers, but instead for the creation of new distribution, curation, and facilitation mechanisms that can be scaled for the diversity of human need. There is no single answer, but the first step is to truly acknowledge the scope and scale of the problem. The level of toxicity in our ‘digital waters’ is now too high to address reactively by trying to fix things after the fact, or lashing out in the wrong way. We must question our assumptions and embrace fundamental changes in both our technology and culture in order to bring toxicity levels back to a level that does not continue to undermine our society.

Why This Story?

We are fully immersed in the digital world, but most of us have very little idea what we’re consuming, where it’s coming from, and what harm it may be doing. In part, that’s because we love the convenience that tech brings and we don’t want to enquire further. It’s also because the companies that provide this tech, by and large, prioritize commercial incentives over wellness.

Silicon Valley’s sci-fi dreams of colonizing Mars https://www.codastory.com/oligarchy/silicon-valley-elon-musk-colonizing-mars/ Wed, 03 Jul 2024 17:15:11 +0000 https://www.codastory.com/?p=50793


Silicon Valley’s sci-fi dreams of colonizing Mars

It was a late spring evening in Devon, England, in May 2021. Even before we saw the satellites, the party had become surreal: it was one of the first gatherings in the region since the pandemic had begun. We were camping in tipis in a field overlooking the Jurassic Coast, the ocean thundering below. Inside the biggest tent, people were singing, smoking, shouting. The evening was unraveling. Someone—masked, costumed—stuck his face inside the flap and yelled, with great theater: “Starlink is visible! Starlink is visible!”

Half of the party knew what he meant, the other half just stared. Led by those who knew, we headed out into the dark field and peered up at the sky. Directly above our heads, above our field, our very tent—a moving train of what looked like stars, perfectly spaced, perhaps fifty of them, speeding across the sky, on and on and on. Some people in the crowd began screaming: the ones who knew nothing of the satellite network Starlink, who thought the world was ending. Their reaction of pure, primeval terror was echoed all over the world every time Starlink sent up a new batch of satellites, and people who had never heard of Elon Musk’s project looked up. 

From the beginning of the Space Race in 1955, fewer than 250 objects a year were sent into orbit. Then, in May 2019, came the first Starlink launch; the network has since grown to more than 6,000 satellites. Musk has ambitions to put 42,000 satellites into space, blanketing the whole planet in a kind of mesh. As the pandemic raged across the world, the night sky quietly began changing forever—and a few months after my trip to Devon, Elon Musk became the richest man on Earth.

Musk has repeatedly said that revenue from Starlink, forecast to be about $6.6 billion in 2024, is in service of his ultimate dream for Starlink’s parent company SpaceX: making humans multiplanetary. Colonizing Mars.

“There’s really two main reasons, I think, to make life multiplanetary and to establish a self-sustaining civilization on Mars,” Musk said in 2015. “One is the defensive reason, to ensure that the light of consciousness as we know it is not extinguished—will last much longer—and the second is that it would be an amazing adventure that we could all enjoy, vicariously if not personally.”

The red planet, the fire star, the bringer of war. For millennia, humans have stared up at the rust-colored planet in the sky and wondered.

“Mars has been fascinating to people for as long as there have been human beings,” the science fiction author Kim Stanley Robinson told me over a Zoom call. “It’s weird. It’s red. It has that backward glitch in its motion, it wanes and grows in its brightness. Everyone always knew it was weird, and it’s attractive to people.”

Robinson lives in Davis, California, well within what he calls the “Blast Zone” of Silicon Valley’s influence. He wrote Red Mars, a cult sci-fi classic about colonizing the planet, in 1992, when Musk was a college student. Three decades on, Mars is on our minds more than ever, and Robinson’s fiction is morphing into reality.

Kim Stanley Robinson, London, 2014. Will Ireland/SFX Magazine/Future via Getty Images.

An avid sci-fi fan, Musk says he will send the first ship to colonize the red planet by the end of this decade. His dream to colonize space is shared by many of the most powerful players in tech.

“They want to ensure the light of consciousness persists by reducing the probability of human extinction,” said Émile P. Torres, a philosopher who used to be part of what they call the emergent “cult” of Silicon Valley, which envisions a utopian future where humans conquer the universe and plunder the cosmos. They call themselves transhumanists, long-termists, effective altruists, cosmists: people who believe we should strive for immortality, bend nature’s laws to our own will, and transcend terrestrial limitations. “This grand vision of reengineering humanity, spreading to space, is about subjugating nature and maximizing economic productivity.” 

Many billionaires in Silicon Valley envision a future where we can transcend the limits of our bodies and Earth itself, becoming superhuman by enhancing our consciousness through artificial general intelligence and spreading human life out into space. These ideas are the stuff of science fiction; indeed, they are inspired by it. The richest men in Silicon Valley share a deep love of sci-fi. And, armed with billions of dollars, they’re bent on making the stories of their childhood a reality. For Amazon's Jeff Bezos, who founded his own rocket company, the influences are Star Trek and the books of sci-fi authors Isaac Asimov and Robert A. Heinlein, who wrote futuristic fantasies depicting humans as pioneers capable of colonizing other planets. Google founders Larry Page and Sergey Brin, who have invested heavily in space ventures, alongside Meta founder Mark Zuckerberg, are all aficionados of the 1992 Neal Stephenson novel Snow Crash, which depicts virtual worlds and coined the term “metaverse.”

Douglas Adams poses holding a copy of the book which has "Don't Panic" written on the front cover. 29th November 1978. Daily Mirror/Mirrorpix/Mirrorpix via Getty Images.

Musk wants to name the first colonizer ship to Mars “Heart of Gold,” after a ship in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy. And his ambition to terraform the planet could be straight out of Robinson’s Red Mars. The novel is set in 2026—Musk once said he was “highly confident” that SpaceX would land humans on Mars in that year; he now hints closer to 2029. Musk has talked about the “lessons” he has drawn from reading science fiction: “you should try to take the set of actions that are likely to prolong civilization, minimize the probability of a dark age.” The Harvard historian Jill Lepore calls this “extra-terrestrial capitalism,” a colonialist vision of expanding indefinitely, and extracting far beyond this world and into the next.

At the outset of Red Mars, the Ares, the first-ever colonial spaceship, is transporting 100 scientists to the red planet. Their mission: to terraform Mars, turning it from a dusty, inhospitable wasteland into Earth 2.0, a habitable place for humans, with a thicker, Earth-like atmosphere, as well as oceans, breathable air, and low radiation. This plotline is exactly Musk’s plan.

“We can warm it up,” Musk has said of Mars’ freezing, thin atmosphere. His plan is simple—to “nuke Mars,” detonating explosions at the poles and making mini-suns that would heat up the entire planet. The idea is straight science fiction, but he is serious. It’s a more extreme version of the plot of Robinson’s book, which has giant mirrors deployed to reflect more sunlight on the red planet.

Robinson said he is “trying to keep a nuanced portrait of Musk,” who probably read Red Mars as a college student. He sees Musk as someone “hampered by his right wing activities” who owns a “very good rocket company” but whose ambition to colonize the cosmos is pure “fantasyland”.

“This is a fantasy game — ‘let’s ignore gravity, let’s ignore our gut microbiome, let’s ignore cosmic radiation’. Well, you can ignore them if you want—but what a stupid thing to do,” Robinson said. “We are geocentric creatures. We are expressions of the earth and even Mars will screw us up.”

Robinson did not mince his words when speaking of his work inspiring the philosophies of the world’s most powerful tech billionaires. “Transhumanism, effective altruism, long-termism, etc.—these are bad science fiction stories,” Robinson said. “And as a science fiction writer, I am offended because science fiction should not be fantasy.”

For Robinson, the ambitions and philosophy of Silicon Valley are a warped version of science fiction, far removed from the novels he writes. He describes his work as realistic, but also out of reach of the present: “stuff we might really do with technology, that’s within our grasp, but far enough out that it’s quite utopian.” And yet, the world’s richest man is out there, right now, pouring billions of dollars into making the plot of Red Mars a reality.

Robinson talks about his readers as “co-creators” of the story. “They bring their own experiences. They are co-creating it. So Musk’s Mars, he’s co-creating it. He might have got some ideas from reading the Mars Trilogy.” Ultimately, though, he said: “I am not responsible for the ideas that people come to.”

Science fiction and storytelling have always had the power to inspire real events. The 19th century astronomer Percival Lowell was famous for his belief that Mars was covered in Martian-built canals—an idea that, even though it was pure fancy, changed the course of 20th century history. “We wouldn’t have gotten to the moon yet if it wasn’t for Percival Lowell writing his fantasies about Mars in the 1890s,” Robinson said, explaining how the German Rocket Society, an amateur rocket association, was founded on Lowell’s beliefs. Among its members was a young aerospace engineer who would go on to develop the V-2 rocket for Nazi Germany during World War II—and later, the Saturn V rocket that propelled NASA’s Apollo missions to the Moon. Wernher von Braun, too, believed that humans should one day colonize Mars.

Percival Lowell, 1914. Martian canals as depicted by Lowell.

Robinson’s novels can sometimes feel more like blueprints for the future than fiction, instruction manuals for how to change a planet’s climate. His storylines are full of drudgery: grinding practicalities that pull you down from fantasy into logistics. For all its grand vistas of the dusty planet, its wretched storms and soaring volcanoes, Red Mars is weighed down by inordinate stretches in which Robinson’s characters are building toilets and sewage systems, or else caught up in petty practical disagreements and relationship problems. Perhaps ironically, it’s the bureaucracy of his books that makes their ideas feel so within reach.

I first heard of Robinson at a dinner party in East London. The meal had been cleared away, and we were drinking wine. My host, a young climate activist, had just returned from Alaska, where he had been tagging along on a yacht trip with a select group of superrich investors all gathered to watch glaciers crumble into the sea and be told about dwindling blue whale numbers. Everyone on the boat was talking about the same book: Robinson’s latest novel, The Ministry for the Future. It had blown their minds.

Set in a near-future Earth where humanity is finally forced to deal with its broken climate or go extinct, it almost reads like a manual for how we might fix our burning world. Like Red Mars, the novel describes an extreme approach for fixing the climate: geoengineering. That’s the concept that we can redesign the very atmosphere of the Earth, tweak the elements to our own ends by shooting massive quantities of particles into the stratosphere, and thereby dim the sun. It is thanks to Robinson’s novel that most people have even heard of the practice. As environmentalist Bill McKibben has written, “a novel feature of the geoengineering debate is that many people first heard about it in a novel.”

“It’s so successful, I think it hardly counts as a cult novel now,” said David Keith, a professor of geophysical sciences at the University of Chicago who is one of the most prominent scientists working in the field of geoengineering. Keith said that Robinson had consulted with him ahead of writing The Ministry for the Future. “I don’t want to claim any inspiration, but we met,” he said with a smile, adding that he thought of Robinson as “an environmental guru.”

Robinson Crusoe On Mars, lobbycard, Paul Mantee, 1964. LMPC via Getty Images.

Geoengineering sci-fi like Robinson’s has ignited the imagination of Silicon Valley elites hoping to fix the planet’s problems. Luke Iseman and Andrew Song, a pair of San Francisco entrepreneurs who founded a startup called Make Sunsets, are already deploying solar geoengineering on a micro-scale, releasing balloons filled with sulfur dioxide over the deserts of Nevada. They call their project “sunscreen for the earth”—a term they got from ChatGPT, the AI chatbot. Iseman told me he founded the company after reading science fiction about geoengineering, both Robinson’s book and Termination Shock, the latest novel by Neal Stephenson. “The ideas are amazing,” said Iseman. “I think we’ll see Ministry for the Future-style actions sooner rather than later, for better and worse.” Iseman described how he read both books and immediately began envisioning how he could make them a reality.  

“The more I learned, the more excited I became,” he said, adding that he had grand ambitions for Make Sunsets to keep expanding, unfettered, and try to alter the Earth’s atmosphere. “We’ve got a couple of years of runway to work on this, and a laundry list of fun sci-fi-esque technologies that will let us do this better over time,” he said. Mexico banned solar geoengineering after Make Sunsets carried out a rogue balloon release in Baja California without government permission. By contrast, he said, Nevada is a “good launch site for experimental stuff.”

Make Sunsets and other geoengineering projects have faced criticism for a cowboy-style approach to the future of the planet. Indigenous groups have condemned them as taking a colonial attitude toward the skies. “Solar geoengineering is kind of the ultimate colonization,” said Asa Larsson-Blind, a Saami activist from northern Sweden who has been campaigning for a global moratorium on solar geoengineering. “Not only of nature and the Earth, but also the atmosphere. Treating the Earth as machinery and saying that we’re not just entitled to control the Earth itself, we will control the whole atmosphere, is to take it a step further.”

Robinson said the message of his books is being missed. “You don’t just burst in some Promethean way to the one techno-fix. The technology that matters is law, and justice, and therefore—politics. And this is what the techno crowd doesn’t want to admit.”

Musk, a private citizen, has already decided for us what the rule of law will be on Mars. “Most likely the form of government on Mars would be a direct democracy, not representative,” he said during his 2021 Time Person of the Year interview. “We shouldn’t be passing laws that are longer than The Lord of the Rings.”

Artist impression of a Mars settlement with cutaway view.
NASA Ames Research Center.

The tech elite’s desire to spread out into space isn’t a new whim. “Expansion is everything,” said the imperialist diamond mining magnate Cecil Rhodes. He would stare up at the sky and regret that humanity couldn’t yet expand outwards into space, those “vast worlds which we could never reach.” Rhodes' words were recorded in his last will and testament, published in 1902. “I would annex the planets if I could.”

In Robinson’s Red Mars, a great fight is underway—a fight of ideologies between the Reds, who believe colonizing Mars will destroy a place that has remained unchanged for billions of years, and the Greens, who want to create an Earth-like biosphere. The Reds make an argument akin to those of Indigenous groups on Earth. Why, they say, can’t we let Mars be Mars? A place that has been unravaged by human exploitation. A place where the rocks, the ice, the sky, have their own value.

“Let the planet be, leave it to be wilderness,” one character, Anne, pleads to her fellow scientists. She’s heartbroken by the thought of extracting, altering, colonizing the planet, and wrecking its ancient landforms and its planetary history. “You want to do that because you think you can. You want to try it out and see—as if this were some playground sandbox for you to build castles in.”

I asked Robinson if he thought the same way Anne did—if he was, in fact, Anne. “Oh, no,” he said with a laugh. “My characters are much more interesting than I am.”

That night in Devon, when we saw the Starlink satellites going up, already feels like a relic from a bygone era, from a time when the night sky was uncluttered by human ambition. Now, whenever I look up, wherever I am in the world, I can spot one of Musk’s satellites within a matter of seconds.

Before long, satellites in the sky will outnumber the stars we can see. The universe will be blotted out by fast-moving pieces of metal reflecting back at us. And perhaps the Mars of our solar system will one day resemble the Mars of Kim Stanley Robinson’s science fiction, the Mars of the fever dreams of the richest people in the world. A Mars that has been transformed by humans to look more like our own Earth—no longer a red light in the sky, but another version of what we already know here. At that point, we’ll have nothing in the universe to look at but ourselves.

Complicating Colonialism

This story is part of our Complicating Colonialism series, which explores how unfinished conversations about the past play out in our daily lives and shape our collective future. Read more from this series produced in partnership with Stranger's Guide Magazine.

Silicon Savanna: The workers taking on Africa’s digital sweatshops https://www.codastory.com/authoritarian-tech/kenya-content-moderators/ Wed, 11 Oct 2023 11:11:00 +0000 https://www.codastory.com/stayonthestory/silicon-savannah-taking-on-africas-digital-sweatshops-in-the-heart-of-silicon-savannah/ Content moderators for TikTok, Meta and ChatGPT are demanding that tech companies reckon with the human toll of their enterprise.


 Silicon Savanna: The workers taking on Africa's digital sweatshops

This story was updated at 6:30 ET on October 16, 2023

Wabe didn’t expect to see his friends’ faces in the shadows. But it happened after just a few weeks on the job.

He had recently signed on with Sama, a San Francisco-based tech company with a major hub in Kenya’s capital. The middle-man company was providing the bulk of Facebook’s content moderation services for Africa. Wabe, whose name we’ve changed to protect his safety, had previously taught science courses to university students in his native Ethiopia.

Now, the 27-year-old was reviewing hundreds of Facebook photos and videos each day to decide if they violated the company’s rules on issues ranging from hate speech to child exploitation. He would get between 60 and 70 seconds to make a determination, sifting through hundreds of pieces of content over an eight-hour shift.

One day in January 2022, the system flagged a video for him to review. He opened up a Facebook livestream of a macabre scene from the civil war in his home country. What he saw next was dozens of Ethiopians being “slaughtered like sheep,” he said. 

Then Wabe took a closer look at their faces and gasped. “They were people I grew up with,” he said quietly. People he knew from home. “My friends.”

Wabe leapt from his chair and stared at the screen in disbelief. He felt the room close in around him. Panic rising, he asked his supervisor for a five-minute break. “You don’t get five minutes,” she snapped. He turned off his computer, walked off the floor, and beelined to a quiet area outside of the building, where he spent 20 minutes crying by himself.

Wabe had been building a life for himself in Kenya while back home, a civil war was raging, claiming the lives of an estimated 600,000 people from 2020 to 2022. Now he was seeing it play out live on the screen before him.

That video was only the beginning. Over the next year, the job brought him into contact with videos he still can’t shake: recordings of people being beheaded, burned alive, eaten.

“The word evil is not equal to what we saw,” he said. 

Yet he had to stay in the job. Pay was low — less than two dollars an hour, Wabe told me — but going back to Ethiopia, where he had been tortured and imprisoned, was out of the question. Wabe worked with dozens of other migrants and refugees from other parts of Africa who faced similar circumstances. Money was too tight — and life too uncertain — to speak out or turn down the work. So he and his colleagues kept their heads down and steeled themselves each day for the deluge of terrifying images.

Over time, Wabe began to see moderators as “soldiers in disguise” — a low-paid workforce toiling in the shadows to make Facebook usable for billions of people around the world. But he also noted a grim irony in the role he and his colleagues played for the platform’s users: “Everybody is safe because of us,” he said. “But we are not.”  

Wabe said dozens of his former colleagues in Sama’s Nairobi offices now suffer from post-traumatic stress disorder. Wabe has also struggled with thoughts of suicide. “Every time I go somewhere high, I think: What would happen if I jump?” he wondered aloud. “We have been ruined. We were the ones protecting the whole continent of Africa. That’s why we were treated like slaves.”

The West End Towers house the Nairobi offices of Majorel, a Luxembourg-based content moderation firm with over 22,000 employees on the African continent.

To most people using the internet — most of the world — this kind of work is literally invisible. Yet it is a foundational component of the Big Tech business model. If social media sites were flooded with videos of murder and sexual assault, most people would steer clear of them — and so would the advertisers that bring the companies billions in revenue.

Around the world, an estimated 100,000 people work for companies like Sama, third-party contractors that supply content moderation services for the likes of Facebook’s parent company Meta, Google and TikTok. But while it happens at a desk, mostly on a screen, the demands and conditions of this work are brutal. Current and former moderators I met in Nairobi in July told me this work has left them with post-traumatic stress disorder, depression, insomnia and thoughts of suicide.

These “soldiers in disguise” are reaching a breaking point. Because of people like Wabe, Kenya has become ground zero in a battle over the future of content moderation in Africa and beyond. On one side are some of the most powerful and profitable tech companies on earth. On the other are young African content moderators who are stepping out from behind their screens and demanding that Big Tech companies reckon with the human toll of their enterprise.

In May, more than 150 moderators in Kenya, who keep the worst of the worst off of platforms like Facebook, TikTok and ChatGPT, announced their drive to create a trade union for content moderators across Africa. The union would be the first of its kind on the continent and potentially in the world.

There are also major pending lawsuits before Kenya’s courts targeting Meta and Sama. More than 180 content moderators — including Wabe — are suing Meta for $1.6 billion over poor working conditions, low pay and what they allege was unfair dismissal after Sama ended its content moderation agreement with Meta and Majorel picked up the contract instead. The plaintiffs say they were blacklisted from reapplying for their jobs after Majorel stepped in. In August, a judge ordered both parties to try to settle the case out of court, but the mediation broke down on October 16 after the plaintiffs’ attorneys accused Meta of scuttling the negotiations and ignoring moderators’ requests for mental health services and compensation. The lawsuit will now proceed to Kenya’s employment and labor relations court, with a hearing scheduled for October 31.

The cases against Meta are unprecedented. According to Amnesty International, it is the “first time that Meta Platforms Inc will be significantly subjected to a court of law in the global south.” Forthcoming court rulings could jeopardize Meta’s status in Kenya and the content moderation outsourcing model upon which it has built its global empire. 

Meta did not respond to requests for comment about moderators’ working conditions and pay in Kenya. In an emailed statement, a spokesperson for Sama said the company cannot comment on ongoing litigation but is “pleased to be in mediation” and believes “it is in the best interest of all parties to come to an amicable resolution.”

Odanga Madung, a Kenya-based journalist and a fellow at the Mozilla Foundation, believes the flurry of litigation and organizing marks a turning point in the country’s tech labor trajectory. 

“This is the tech industry’s sweatshop moment,” Madung said. “Every big corporate industry here — oil and gas, the fashion industry, the cosmetics industry — have at one point come under very sharp scrutiny for the reputation of extractive, very colonial type practices.”

Nairobi may soon witness a major shift in the labor economics of content moderation. But it also offers a case study of this industry’s powerful rise. The vast capital city — sometimes called “Silicon Savanna” — has become a hub for outsourced content moderation jobs, drawing workers from across the continent to review material in their native languages. An educated, predominantly English-speaking workforce makes it easy for employers from overseas to set up satellite offices in Kenya. And the country’s troubled economy has left workers desperate for jobs, even when wages are low.

Sameer Business Park, a massive office compound in Nairobi’s industrial zone, is home to Nissan, the Bank of Africa, and Sama’s local headquarters. But just a few miles away lies one of Nairobi’s largest informal settlements, a sprawl of homes made out of scraps of wood and corrugated tin. The slum’s origins date back to the colonial era, when the land it sits on was a farm owned by white settlers. In the 1960s, after independence, the surrounding area became an industrial district, attracting migrants and factory workers who set up makeshift housing on the area adjacent to Sameer Business Park.

For companies like Sama, the conditions here were ripe for investment by 2015, when the firm established a business presence in Nairobi. Headquartered in San Francisco, the self-described “ethical AI” company aims to “provide individuals from marginalized communities with training and connections to dignified digital work.” In Nairobi, it has drawn its labor from residents of the city’s informal settlements, including 500 workers from Kibera, one of the largest slums in Africa. In an email, a Sama spokesperson confirmed moderators in Kenya made between $1.46 and $3.74 per hour after taxes.

Grace Mutung’u, a Nairobi-based digital rights researcher at Open Society Foundations, put this into local context for me. On the surface, working for a place like Sama seemed like a huge step up for young people from the slums, many of whom had family roots in factory work. It was less physically demanding and more lucrative. Compared to manual labor, content moderation “looked very dignified,” Mutung’u said. She recalled speaking with newly hired moderators at an informal settlement near the company’s headquarters. Unlike their parents, many of them were high school graduates, thanks to a government initiative in the mid-2000s to get more kids in school.

“These kids were just telling me how being hired by Sama was the dream come true,” Mutung’u told me. “We are getting proper jobs, our education matters.” These younger workers, Mutung’u continued, “thought: ‘We made it in life.’” They thought they had left behind the poverty and grinding jobs that wore down their parents’ bodies. Until, she added, “the mental health issues started eating them up.” 

Today, 97% of Sama’s workforce is based in Africa, according to a company spokesperson. And despite its stated commitment to providing “dignified” jobs, it has caught criticism for keeping wages low. In 2018, the company’s late founder argued against raising wages for impoverished workers from the slum, reasoning that it would “distort local labor markets” and have “a potentially negative impact on the cost of housing, the cost of food in the communities in which our workers thrive.”

Content moderation did not become an industry unto itself by accident. In the early days of social media, when “don’t be evil” was still Google’s main guiding principle and Facebook was still cheekily aspiring to connect the world, this work was performed by employees in-house for the Big Tech platforms. But as companies aspired to grander scales, seeking users in hundreds of markets across the globe, it became clear that their internal systems couldn’t stem the tide of violent, hateful and pornographic content flooding people’s newsfeeds. So they took a page from multinational corporations’ globalization playbook: They decided to outsource the labor.

More than a decade on, content moderation is now an industry that is projected to reach $40 billion by 2032. Sarah T. Roberts, a professor of information studies at the University of California at Los Angeles, wrote the definitive study on the moderation industry in her 2019 book “Behind the Screen.” Roberts estimates that hundreds of companies are farming out these services worldwide, employing upwards of 100,000 moderators. In its own transparency documents, Meta says that more than 15,000 people moderate its content in more than 20 sites around the world. Some (it doesn’t say how many) are full-time employees of the social media giant, while others (it doesn’t say how many) work for the company’s contracting partners.

Kauna Malgwi was once a moderator with Sama in Nairobi. She was tasked with reviewing content on Facebook in her native language, Hausa. She recalled watching coworkers scream, faint and develop panic attacks on the office floor as images flashed across their screens. Originally from Nigeria, Malgwi took a job with Sama in 2019, after coming to Nairobi to study psychology. She told me she also signed a nondisclosure agreement instructing her that she would face legal consequences if she told anyone she was reviewing content on Facebook. Malgwi was confused by the agreement, but moved forward anyway. She was in graduate school and needed the money.

A 28-year-old moderator named Johanna described a similar decline in her mental health after watching TikTok videos of rape, child sexual abuse, and even a woman ending her life in front of her own children. Johanna currently works with the outsourcing firm Majorel, reviewing content on TikTok, and asked that we identify her using a pseudonym, for fear of retaliation by her employer. She told me she’s extroverted by nature, but after a few months at Majorel, she became withdrawn and stopped hanging out with her friends. Now, she dissociates to get through the day at work. “You become a different person,” she told me. “I’m numb.”

This is not the experience that the Luxembourg-based multinational — which employs more than 22,000 people across the African continent — touts in its recruitment materials. On a page about its content moderation services, Majorel’s website features a photo of a woman wearing a pair of headphones and laughing. It highlights the company’s “Feel Good” program, which focuses on “team member wellbeing and resiliency support.”

According to the company, these resources include 24/7 psychological support for employees, “together with a comprehensive suite of health and well-being initiatives that receive high praise from our people,” Karsten König, an executive vice president at Majorel, said in an emailed statement. “We know that providing a safe and supportive working environment for our content moderators is the key to delivering excellent services for our clients and their customers. And that’s what we strive to do every day.”

But Majorel’s mental health resources haven’t helped ease Johanna’s depression and anxiety. She says the company provides moderators in her Nairobi office with on-site therapists who see employees in individual and group “wellness” sessions. But Johanna told me she stopped attending the individual sessions after her manager approached her about a topic she had shared in confidence with her therapist. “They told me it was a safe space,” Johanna explained, “but I feel that they breached that part of the confidentiality so I do not do individual therapy.” TikTok did not respond to a request for comment by publication.

Instead, she looked for other ways to make herself feel better. Nature has been especially healing. Whenever she can, Johanna takes herself to Karura Forest, a lush oasis in the heart of Nairobi. One afternoon, she brought me to one of her favorite spots there, a crashing waterfall beneath a canopy of trees. This is where she tries to forget about the images that keep her up at night. 

Johanna remains haunted by a video she reviewed out of Tanzania, where she saw a lesbian couple attacked by a mob, stripped naked and beaten. She thought of them again and again for months. “I wondered: ‘How are they? Are they dead right now?’” At night, she would lie awake in her bed, replaying the scene in her mind.

“I couldn’t sleep, thinking about those women.”

Johanna’s experience lays bare another stark reality of this work. She was powerless to help victims. Yes, she could remove the video in question, but she couldn’t do anything to bring the women who were brutalized to safety. This is a common scenario for content moderators like Johanna, who are not only seeing these horrors in real-time, but are asked to simply remove them from the internet and, by extension, perhaps, from public record. Did the victims get help? Were the perpetrators brought to justice? With the endless flood of videos and images waiting for review, questions like these almost always go unanswered.

The situation that Johanna encountered highlights what David Kaye, a professor of law at the University of California at Irvine and the former United Nations special rapporteur on freedom of expression, believes is one of the platforms’ major blindspots: “They enter into spaces and countries where they have very little connection to the culture, the context and the policing,” without considering the myriad ways their products could be used to hurt people. When platforms introduce new features like livestreaming or new tools to amplify content, Kaye continued, “are they thinking through how to do that in a way that doesn’t cause harm?”

The question is a good one. For years, Meta CEO Mark Zuckerberg famously urged his employees to “move fast and break things,” an approach that doesn’t leave much room for the kind of contextual nuance that Kaye advocates. And history has shown the real-world consequences of social media companies’ failures to think through how their platforms might be used to foment violence in countries in conflict.

The most searing example came from Myanmar in 2017, when Meta famously looked the other way as military leaders used Facebook to incite hatred and violence against Rohingya Muslims as they ran “clearance operations” that left an estimated 24,000 Rohingya people dead and caused more than a million to flee the country. A U.N. fact-finding mission later wrote that Facebook had a “determining role” in the genocide. After commissioning an independent assessment of Facebook’s impact in Myanmar, Meta itself acknowledged that the company didn’t do “enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”

Yet five years later, another case now before Kenya’s high court deals with the same issue on a different continent. Last year, Meta was sued by a group of petitioners including the family of Meareg Amare Abrha, an Ethiopian chemistry professor who was assassinated in 2021 after people used Facebook to orchestrate his killing. Amare’s son tried desperately to get the company to take down the posts calling for his father’s head, to no avail. He is now part of the suit that accuses Meta of amplifying hateful and malicious content during the conflict in Tigray, including the posts that called for Amare’s killing.

The case underlines the strange distance between Big Tech behemoths and the content moderation industry that they’ve created offshore, where the stakes of moderation decisions can be life or death. Paul Barrett, the deputy director of the Center for Business and Human Rights at New York University's Stern School of Business who authored a seminal 2020 report on the issue, believes this distance helped corporate leadership preserve their image of a shiny, frictionless world of tech. Social media was meant to be about abundant free speech, connecting with friends and posting pictures from happy hour — not street riots or civil war or child abuse.

“This is a very nitty gritty thing, sifting through content and making decisions,” Barrett told me. “They don't really want to touch it or be in proximity to it. So holding this whole thing at arm’s length as a psychological or corporate culture matter is also part of this picture.”

Sarah T. Roberts likened content moderation to “a dirty little secret. It’s been something that people in positions of power within the companies wish could just go away,” she said. This reluctance to deal with the messy realities of human behavior online is evident today, even in statements from leading figures in the industry. For example, with the July launch of Threads, Meta’s new Twitter-like social platform, Instagram head Adam Mosseri expressed a desire to keep “politics and hard news” off the platform.

The decision to outsource content moderation meant that this part of what happened on social media platforms would “be treated at arm’s length and without that type of oversight and scrutiny that it needs,” Barrett said. But the decision had collateral damage. In pursuit of mass scale, Meta and its counterparts created a system that produces an impossible amount of material to oversee. By some estimates, three million items of content are reported on Facebook alone on a daily basis. And despite what some of Silicon Valley’s other biggest names tell us, artificial intelligence systems are insufficient moderators. So it falls on real people to do the work.

One morning in late July, James Oyange, a former tech worker, took me on a driving tour of Nairobi’s content moderation hubs. Oyange, who goes by Mojez, is lanky and gregarious, quick to offer a high five and a custom-made quip. We pulled up outside a high-rise building in Westlands, a bustling central neighborhood near Nairobi’s business district. Mojez pointed up to the sixth floor: Majorel’s local office, where he worked for nine months, until he was let go.

He spent much of his year in this building. Pay was bad and hours were long, and it wasn’t the customer service job he’d expected when he first signed on — this is something he brought up with managers early on. But the 26-year-old grew to feel a sense of duty about the work. He saw the job as the online version of a first responder — an essential worker in the social media era, cleaning up hazardous waste on the internet. But being the first to the scene of the digital wreckage changed Mojez, too — the way he looks, the way he sleeps, and even his life’s direction.

That morning, as we sipped coffee in a trendy, high-ceilinged cafe in Westlands, I asked how he’s holding it together. “Compared to some of the other moderators I talked to, you seem like you’re doing okay,” I remarked. “Are you?”

His days often started bleary-eyed. When insomnia got the best of him, he would force himself to go running under the pitch-black sky, circling his neighborhood for 30 minutes and then stretching in his room as the darkness lifted. At dawn, he would ride the bus to work, snaking through Nairobi’s famously congested roads until he arrived at Majorel’s offices. A food market down the street offered some moments of relief from the daily grind. Mojez would steal away there for a snack or lunch. His vendor of choice doled out tortillas stuffed with sausage. He was often so exhausted by the end of the day that he nodded off on the bus ride home.

And then, in April 2023, Majorel told him that his contract wouldn’t be renewed.

It was a blow. Mojez walked into the meeting fantasizing about a promotion. He left without a job. He believes he was blacklisted by company management for speaking up about moderators’ low pay and working conditions.

A few weeks later, an old colleague put him in touch with Foxglove, a U.K.-based legal nonprofit supporting the lawsuit currently in mediation against Meta. The organization also helped organize the May meeting in which more than 150 African content moderators across platforms voted to unionize.

At the event, Mojez was stunned by the universality of the challenges facing moderators working elsewhere. He realized: “This is not a Mojez issue. These are 150 people across all social media companies. This is a major issue that is affecting a lot of people.” After that, despite being unemployed, he was all in on the union drive. Mojez, who studied international relations in college, hopes to do policy work on tech and data protection someday. But right now his goal is to see the effort through, all the way to the union’s registry with Kenya’s labor department.

Mojez’s friend in the Big Tech fight, Wabe, also went to the May meeting. Over lunch one afternoon in Nairobi in July, he described what it was like to open up about his experiences publicly for the first time. “I was happy,” he told me. “I realized I was not alone.” This awareness has made him more confident about fighting “to make sure that the content moderators in Africa are treated like humans, not trash,” he explained. He then pulled up a pant leg and pointed to a mark on his calf, a scar from when he was imprisoned and tortured in Ethiopia. The companies, he said, “think that you are weak. They don’t know who you are, what you went through.”

[Photo caption: A popular lunch spot for workers outside Majorel's offices.]

Looking at Kenya’s economic woes, you can see why these jobs were so alluring. My visit to Nairobi coincided with a string of July protests that paralyzed the city. The day I flew in, it was unclear if I would be able to make it from the airport to my hotel — roads, businesses and public transit were threatening to shut down in anticipation of the unrest. The demonstrations, which have been bubbling up every so often since last March, came in response to steep new tax hikes, but they were also about the broader state of Kenya’s faltering economy — soaring food and gas prices and a youth unemployment crisis, some of the same forces that drive throngs of young workers to work for outsourcing companies and keep them there.

Leah Kimathi, a co-founder of the Kenyan nonprofit Council for Responsible Social Media, believes Meta’s legal defense in the labor case brought by the moderators betrays Big Tech’s neo-colonial approach to business in Kenya. When the petitioners first filed suit, Meta tried to absolve itself by claiming that it could not be brought to trial in Kenya, since it has no physical offices there and did not directly employ the moderators, who were instead employed by Sama. But a Kenyan labor court saw it differently, ruling in June that Meta — not Sama — was the moderators’ primary employer and that the case against the company could move forward.

“So you can come here, roll out your product in a very exploitative way, disregarding our laws, and we cannot hold you accountable,” Kimathi said of Meta’s legal argument. “Because guess what? I am above your laws. That was the exact colonial logic.”

Kimathi continued: “For us, sitting in the Global South, but also in Africa, we’re looking at this from a historical perspective. Energetic young Africans are being targeted for content moderation and they come out of it maimed for life. This is reminiscent of slavery. It’s just now we’ve moved from the farms to offices.”

As Kimathi sees it, the multinational tech firms and their outsourcing partners made one big, potentially fatal miscalculation when they set up shop in Kenya: They didn’t anticipate a workers’ revolt. If they had considered the country’s history, perhaps they would have seen the writing of the African Content Moderator’s Union on the wall.

Kenya has a rich history of worker organizing in resistance to the colonial state. The labor movement was “a critical pillar of the anti-colonial struggle,” Kimathi explained to me. She and other critics of Big Tech’s operations in Kenya see a line that leads from colonial-era labor exploitation, and the workers’ backlash it provoked, to the present day, a history of resistance that the Big Tech platforms and their outsourcers may have overlooked when they decided to do business in the country.

“They thought that they would come in and establish this very exploitative industry and Kenyans wouldn’t push back,” she said. Instead, they sued.

What happens if the workers actually win?

Foxglove, the nonprofit supporting the moderators’ legal challenge against Meta, writes that the outcome of the case could disrupt the global content moderation outsourcing model. If the court finds that Meta is the “‘true employer’ of their content moderators in the eyes of the law,” Foxglove argues, “then they cannot hide behind middlemen like Sama or Majorel. It will be their responsibility, at last, to value and protect the workers who protect social media — and who have made tech executives their billions.”

But there is still a long road ahead, for the moderators themselves and for the kinds of changes to the global moderation industry that they are hoping to achieve.

In Kenya, the workers involved in the lawsuit and union face practical challenges. Some, like Mojez, are unemployed and running out of money. Others are migrant workers from elsewhere on the continent who may not be able to stay in Kenya for the duration of the lawsuit or union fight.

The Moderator’s Union is not yet registered with Kenya’s labor office, but if it becomes official, its members intend to push for better conditions for moderators working across platforms in Kenya, including higher salaries and more psychological support for the trauma endured on the job. And their ambitions extend far beyond Kenya. The network hopes to inspire similar actions in other countries’ content moderation hubs. According to Martha Dark, Foxglove’s co-founder and director, the industry’s working conditions have spawned a cross-border, cross-company organizing effort, drawing employees from Africa, Europe and the U.S.

“There are content moderators that are coming together from Poland, America, Kenya, and Germany talking about what the challenges are that they experience when trying to organize in the context of working for Big Tech companies like Facebook and TikTok,” she explained.

Still, there are big questions about whether litigation can truly transform the moderation industry. “It would be good if outsourced content reviewers earned better pay and were better treated,” NYU’s Paul Barrett told me. “But that doesn't get at the issue that the mother companies here, whether it’s Meta or anybody else, is not hiring these people, is not directly training these people and is not directly supervising these people.” Even if the Kenyan workers are victorious in their lawsuit against Meta, and the company is stung in court, “litigation is still litigation,” Barrett explained. “It’s not the restructuring of an industry.”

So what would truly reform the moderation industry’s core problem? For Barrett, the industry will only see meaningful change if companies can bring “more, if not all of this function in-house.”

But Sarah T. Roberts, who interviewed workers from Silicon Valley to the Philippines for her book on the global moderation industry, believes collective bargaining is the only pathway forward for changing the conditions of the work. She dedicated the end of her book to the promise of organized labor.

“The only hope is for workers to push back,” she told me. “At some point, people get pushed too far. And the ownership class always underestimates it. Why does Big Tech want everything to be computational in content moderation? Because AI tools don’t go on strike. They don't talk to reporters.”

Artificial intelligence is part of the content moderation industry, but it will probably never be capable of replacing human moderators altogether. What we do know is that AI models will continue to rely on human beings to train and oversee their data sets — a reality Sama’s CEO recently acknowledged. For now and the foreseeable future, there will still be people behind the screen, fueling the engines of the world’s biggest tech platforms. But because of people like Wabe and Mojez and Kauna, their work is becoming more visible to the rest of us.

While writing this piece, I kept returning to one scene from my trip to Nairobi that powerfully drove home the raw humanity powering this entire industry, however much the tech scions might like to pretend otherwise. I was in the food court of a mall, sitting with Malgwi and Wabe. They were both dressed sharply, like they were on break from the office: Malgwi in a trim pink dress and a blazer, Wabe in leather boots and a peacoat. But instead, they were talking about how the work had ruined them.

At one point in the conversation, Wabe told me he was willing to show me a few examples of violent videos he snuck out while working for Sama and later shared with his attorney. If I wanted to understand “exactly what we see and moderate on the platform,” Wabe explained, the opportunity was right in front of me. All I had to do was say yes.

I hesitated. I was genuinely curious. A part of me wanted to know, wanted to see first-hand what he had to deal with for more than a year. But I’m sensitive, maybe a little breakable. A lifelong insomniac. Could I handle seeing this stuff? Would I ever sleep again?

It was a decision I didn’t have to make. Malgwi intervened. “Don’t send it to her,” she told Wabe. “It will traumatize her.”

So much of this story, I realized, came down to this minute-long exchange. I didn’t want to see the videos because I was afraid of how they might affect me. Malgwi made sure I didn’t have to. She already knew what was on the other side of the screen.

Why did we write this story?

The world’s biggest tech companies today have more power and money than many governments. This story offers a deep dive on court battles in Kenya that could jeopardize the outsourcing model upon which Meta has built its global empire.

The post Silicon Savanna: The workers taking on Africa’s digital sweatshops appeared first on Coda Story.

Life on Earth, after humans https://www.codastory.com/climate-crisis/adam-kirsch-anthropocene-antihumanist-earth/ Tue, 25 Jul 2023 13:06:45 +0000 https://www.codastory.com/?p=45438 In a future without us, would the world be better off, asks writer Adam Kirsch

The Anthropocene refers to the idea that, particularly since the mid-20th century, humans have created a new geological epoch through our transformational impact on the Earth. Earlier this month, the Anthropocene Working Group, an international team of scientists, claimed they had found clear evidence of the beginning of the Anthropocene in a lake in Ontario, Canada. In the lake’s depths, the scientists found sedimentary evidence of radioactive plutonium and hazardous fly ash from the burning of fossil fuels.

The havoc we have wreaked on our environment is why the Anthropocene epoch may be our last. Humanity has been talking about the apocalypse for thousands of years. But in 2023, as we grapple with the hottest temperatures ever recorded, the imminent threat of climate disaster and the rapid advancement of artificial intelligence, there is a greater urgency to the questions some are asking about what the world would really look like without us. Would it be better to leave the Earth to the animals, to the trees, even to the rocks? And would the world be a safer and more benevolent place if we let AI robots run everything? 

In “The Revolt Against Humanity: Imagining a Future Without Us,” the American poet and critic Adam Kirsch interrogates the prospect of a world that is no longer dominated by humans — either because we have driven ourselves to extinction or because we have been replaced by artificial intelligence. Sitting in a sweltering Rome on the hottest day ever recorded in the ancient capital, I spoke to Adam Kirsch on the phone in New York City, where the air quality index hovered near hazardous because of the wildfire smoke drifting over from Canada. It was difficult not to talk about the “end times.”

This conversation has been edited for length and clarity.

When did you first start thinking about a future without humans?

I began to want to write the book during the pandemic when, very quickly, I felt like my physical world contracted to the space of an apartment. It struck me how little of a difference that made to my life. So much of what I do and what most of us do can be done virtually rather than physically — whether it's work, leisure or consumption. I began to think about the idea that human life has already changed. It has already gone virtual and disengaged from the physical in ways that our ancestors would not have understood. And the transhumanists’ idea is just another step on that path. 

Let’s clarify for our readers what “transhumanists” think. They basically imagine a world where the human condition can be improved or even replaced by technology like AI, right? 

Transhumanism is the school of thought which says that in the future, we will be able to use technology to overcome the limitations of our physical bodies. Transhumanists look to a future where humans will give way to another species or another form of life that isn't embodied in flesh and blood. It isn't necessarily mortal, and it might be able to live indefinitely, as a record of information, or as a simulation, or in the virtual world. 

Or, alternatively, transhumanism says that we will just be able to escape the limitations of our bodies with genetic engineering. One of the most vivid strains of transhumanism right now is the idea that in a future with artificial intelligence, there might be minds that are not human minds at all. Minds that are actually born on computers and that have a very different relationship to reality and the physical world than we do. And that those minds will become the leading form of life on our planet and take over from us in a violent or benevolent way. 

Another group you look at in your book also considers what the world would look like if humans no longer dominated it. They are called “Anthropocene antihumanists” and seem to believe that humans are a kind of cancer on the Earth, multiplying like a parasite. And that the world would be better off without us.

Antihumanists say that humans have taken over from nature as the most important factor on the planet. They say we no longer live alongside nature, but we control nature and dominate it. This, they believe, is eventually going to lead to the decline or disappearance of humanity itself. And they think that would be a good thing. So antihumanism can be anything from saying we should stop having children to predicting that an environmental calamity is going to reduce us to just a few leftover populations. Philosophically, it can take the form of saying, ‘How can we think about the world in ways that don’t put humanity at the center of it?’ They give equal respect and agency to nonhuman things and even nonliving things, like objects or the ocean. 

Or a rock. It’s funny, I’ve been thinking a lot recently about what a world without humans looks like. Especially as I grapple with the realities of the climate crisis and biodiversity loss. I sometimes find myself fantasizing about what the natural world looked like before human civilization. Reading your book was an intense experience in that way, because it forces you to think about the Earth without humanity. What kind of place did it take you to psychologically, while you were writing? 

It's very difficult to imagine the disappearance of humanity as a real prospect — in the same way that it's sort of hard to imagine what it's like to be dead. We could all theoretically agree that at some point there will no longer be a human species, that we will have become extinct. And that just as the dinosaurs did, someday we will disappear. But to think about that happening tomorrow or next year plays havoc with all of our assumptions about what matters and how we go about our days. Thinking about these things is on a different track from daily life. In daily life, we're dealing with the world as it is — raising children and going to work. We’re not thinking about the future in an abstract or philosophical way.

Yes, it’s a kind of bizarre cognitive dissonance to think about a world millions of years from now when humans don’t exist and then go back to thinking about what to have for lunch. 

When the book was published in January, almost right away, all of the things that I was writing about started to become much more mainstream. First, there was ChatGPT, which led to people talking about artificial intelligence in a very immediate way and talking about how dangerous it might be. And then came this summer that we’re having with all these broken temperature records and parts of the world becoming dangerously hot and endangering human life. Even to me — someone who's been thinking about this and researching and writing about it for a long time — when it erupts into your actual life, it seems like kind of a shock. We have a tendency to think about dire things or radical changes in the abstract and not deal with the concrete until we absolutely have to.

I think we rely so much on shards of hope that seem to get slimmer and slimmer every year. You talk about hope a lot in the book. How hopeful would you say you are? 

I think that all of us rely on hope. We rely on the assumption that the future is going to be like the present because that’s the only way we know how to navigate the world. But one of the things that drew me to the people I write about in the book is that they're not afraid to think about things that seem frightening or impossible, that most people dismiss as science fiction or extremism. They’re thinking through the idea of, ‘What if the world actually was like this in the future? What if we actually did have computers that could outthink us or what if billions of people could no longer survive because of climate change? What would that do to our sense of ourselves and the way we live?’ And I think that that’s useful to think about. Both for its own sake and because it maybe also makes us more willing to take action in the present. 

There was one Franz Kafka quote in your book that really stood out to me. “There is hope — an infinite amount of hope — but not for us.” What does that mean to you?

What transhumanists and antihumanists are trying to say is, ‘Well, maybe in the future, there won't be us, but there will be something else that we can be hopeful for.’ They say that the disappearance of humanity might not mean the end of everything that we care about. They’re trying to nudge us into a new way of thinking that if we're not here, it might not matter that much — as long as something else is. Both of them think of humanity as a stage. That the normal progression of the human species is to supersede ourselves or eliminate ourselves, not by accident, but by necessity. 

The post Life on Earth, after humans appeared first on Coda Story.
