The Quiet Revolution
How artificial intelligence is rewriting the rules of everyday life — from the food you order to the job you never got, from the news you believe to the person you think you are.
Priya wakes up in Bengaluru at 6:45 a.m. She reaches for her phone — as most of us do — and within seconds, without thinking about it, she has already been sorted, scored and served by at least a dozen algorithms. Her newsfeed has been curated. Her email has been filtered. A notification nudges her toward a flash sale she didn’t ask for but might, statistically speaking, click on. An app tells her the traffic is bad; another suggests a route. By the time Priya has brushed her teeth, artificial intelligence has made more decisions about her morning than she has.
This is the quiet revolution. Not the kind that announces itself with fanfare and barricades, but the kind that seeps in through software updates and terms of service nobody reads. AI is no longer a futuristic abstraction confined to research labs and science fiction. It is here — in your food delivery app, your bank’s loan department, your child’s school admission process, your doctor’s diagnostic screen. And it is making choices that shape your life in ways you may never see.
The question is no longer whether AI will change society. It already has. The real question — the one that should keep us up at night — is: who gets to decide how?
Somewhere in a nondescript server farm — cooled to a crisp 18°C, drawing as much electricity as a small city — there exists a version of you. Not a clone. Not a photograph. Something stranger. It is a mathematical model of your preferences, your habits, your weaknesses, your likely next move. Researchers call it your “data double” — a shadow self constructed entirely from digital breadcrumbs.
Every time you linger three seconds longer on a video, that’s data. Every time you open an app at 11 p.m., skip a song, hesitate before buying, abandon a shopping cart — data. Individually, these fragments seem meaningless. But aggregated, processed and pattern-matched across billions of similar profiles, they become something powerful: a prediction machine.
This is the engine behind what writer Shoshana Zuboff calls “surveillance capitalism” — a system where your behaviour isn’t just observed, it’s harvested, packaged and sold. Not your data in the boring sense of your email address, but something more intimate: your future intentions. What you’re likely to buy. Who you might vote for. When you’re feeling vulnerable. The product of this system isn’t the ad you see; the product is the prediction of what you’ll do next.
The unsettling part isn’t that companies know what you bought last week. It’s that they can predict, with eerie accuracy, what you’ll want next Tuesday. And increasingly, they don’t just predict — they nudge. The line between personalisation and manipulation is thin, and it’s getting thinner by the day.
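To see the mechanics stripped of their scale, consider a deliberately toy version. Everything below is invented: six fictional users, four behavioural signals, an off-the-shelf classifier. Real systems work with billions of profiles and thousands of features, but the principle is the same. Fragments go in; a propensity score comes out.

```python
# A toy "prediction machine". Every signal, user and number here is
# invented for illustration; real systems learn thousands of features
# from billions of profiles.
from sklearn.linear_model import LogisticRegression

# Each row is one fictional user: [seconds lingered on an ad,
# late-night app opens this week, songs skipped, carts abandoned].
X = [
    [2, 0, 12, 0],
    [9, 3, 2, 1],
    [1, 1, 15, 0],
    [8, 4, 1, 2],
    [3, 0, 10, 0],
    [7, 5, 3, 1],
]
y = [0, 1, 0, 1, 0, 1]  # did they buy? (1 = yes)

model = LogisticRegression().fit(X, y)

# A new user's trail of individually meaningless fragments...
new_user = [[8, 2, 3, 1]]
# ...becomes a single sellable number.
propensity = model.predict_proba(new_user)[0][1]
print(f"Predicted purchase propensity: {propensity:.0%}")
```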
Chicago, Illinois — 2023
Robert, 34, applied for a home loan online. Within minutes, he received a rejection. No human ever reviewed his application. No one called to explain. The algorithm had spoken. Robert, who is Black, later discovered that the system used his zip code — a predominantly Black neighbourhood — as a proxy for risk. Decades of housing segregation, baked into a dataset, perpetuated by a machine that didn’t know it was being unfair.
Robert’s story is not unusual. It’s not even rare. Across the world, algorithms are making high-stakes decisions about people’s lives — who gets a loan, who gets hired, who gets flagged by police, who sees a doctor first — and doing so in ways that are often invisible, unexplainable and shockingly biased.
The problem isn’t that machines are evil. It’s simpler and more insidious than that: machines learn from data, and data reflects the world as it has been — not as it should be. If you train a hiring algorithm on ten years of résumés from a company that mostly hired men, the algorithm learns that being male is a signal of a “good” candidate. If you build a facial recognition system using mostly light-skinned faces, it fails catastrophically on darker skin. The bias isn’t a bug. It’s a mirror.
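The mechanism is easy to demonstrate. The sketch below is deliberately simplified, with invented zip codes and decisions: a “model” that does nothing cleverer than learn historical approval rates by neighbourhood. Race never appears anywhere in the data, yet the output faithfully reproduces the segregation the data recorded.

```python
# How historical bias becomes a "neutral" feature. The zip codes and
# decisions below are invented for illustration.

# Historical lending decisions shaped by decades of segregation:
# (zip_code, approved). Note that race appears nowhere.
history = [
    ("60629", False), ("60629", False), ("60629", True),
    ("60614", True),  ("60614", True),  ("60614", False),
    ("60614", True),  ("60629", False),
]

# "Training": learn each zip code's historical approval rate.
rates = {}
for zip_code, approved in history:
    stats = rates.setdefault(zip_code, [0, 0])
    stats[0] += approved  # approvals
    stats[1] += 1         # total applications

def score(zip_code):
    approved, total = rates[zip_code]
    return approved / total

# A new applicant from the historically redlined neighbourhood is
# penalised for where they live, not for who they are.
print(f"Applicant from 60629 scores {score('60629'):.0%}")
print(f"Applicant from 60614 scores {score('60614'):.0%}")
```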
🏛️ Criminal Justice — COMPAS
A risk-assessment algorithm widely used in US courtrooms was found to label Black defendants as “high risk” at nearly twice the rate of white defendants — even when their actual reoffending rates were similar. Judges trusted the number. The number was wrong.
💼 Hiring — The Amazon Experiment
Amazon built an AI recruiting tool to find top talent. It taught itself to penalise résumés that included the word “women’s” — as in “women’s chess club.” Trained on a decade of male-dominated hiring, the machine had learned that male was the default. Amazon scrapped it.
📷 Facial Recognition — Who Gets Seen?
Researcher Joy Buolamwini tested major facial recognition systems. For light-skinned men, errors were under 1%. For dark-skinned women: nearly 35%. The technology worked brilliantly — for some faces. It was nearly blind to others.
🏥 Healthcare — The Cost of Being Black
A major US healthcare algorithm used spending as a proxy for medical need. Since systemic inequality means Black patients historically spend less on healthcare, the system concluded they were healthier. They weren’t. Millions were under-referred.
Nairobi, Kenya — 2024
Daniel sits in a small office, clicking through images on a screen. For eight hours a day, he labels photographs — “car,” “pedestrian,” “stop sign” — so that a self-driving car in California can learn to see. He earns about two dollars an hour. His employer is a subcontractor of a subcontractor. The AI he is training will be worth billions. Daniel will never drive the car.
We like to imagine AI as something self-sufficient — a brain in the cloud, teaching itself. The reality is far more human, and far less glamorous. Behind every “smart” system is an army of invisible workers: data labellers in Nairobi, content moderators in Manila reviewing traumatic images, gig workers on platforms like Amazon Mechanical Turk doing tasks for pennies that machines can’t yet handle.
Researchers Mary Gray and Siddharth Suri call this “ghost work” — the hidden human labour that powers the illusion of machine intelligence. These workers are not employees. They have no benefits, no job security, no union. Many develop PTSD from reviewing violent and disturbing content. They are the human cost of making AI look effortless.
And then there’s the other side of AI and work: the people whose jobs AI is changing from within. The radiologist whose scans are now pre-read by a machine. The journalist whose articles compete with AI-generated text. The customer service agent replaced by a chatbot. The delivery rider whose entire shift is controlled by an algorithm that decides the route, sets the pay, monitors the speed, and can “deactivate” the worker without warning or explanation.
Here is a fact that should bother you: nearly a third of the world’s population has never used the internet. Not because they chose not to, but because they can’t afford it, can’t access it, or live in places where infrastructure hasn’t arrived. While Silicon Valley debates whether AI will achieve consciousness, 2.6 billion people are still waiting for a reliable connection.
The digital divide is not just about who has a smartphone. It runs deeper. There’s the access gap — can you get online at all? Then there’s the skills gap — once online, do you know how to evaluate information, protect your privacy, use digital tools productively? And finally, the outcomes gap — does being online actually translate into a better job, better health, a better life?
In India, a farmer in Bihar and a tech worker in Hyderabad may both own phones. But the quality of their digital lives is worlds apart. In the United States, broadband access in rural Appalachia looks nothing like fibre-connected Manhattan. In sub-Saharan Africa, women are significantly less likely than men to be online — not because of technology, but because of social norms, cost and safety concerns. The divide follows the same old fault lines of inequality: class, race, gender, geography. AI doesn’t bridge these divides. Left ungoverned, it deepens them.
Open your Instagram. Scroll through the last twenty posts you’ve liked. Now ask yourself: is this who I really am — or who the algorithm thinks I am?
Social media has turned all of us into performers. We curate, filter, caption and crop our lives into a highlight reel. The sociologist Erving Goffman described life as a stage, with a “front stage” — the version of ourselves we present to the world — and a “back stage,” where we let the mask drop. Social media, by collapsing these stages together, has created something Goffman never imagined: a world where your boss, your mother, your ex and a total stranger all see the same performance.
And now AI adds another layer. The algorithm doesn’t just show your content to others — it decides which content you see, slowly narrowing your world into what researcher Eli Pariser calls a “filter bubble.” You don’t see what’s true. You see what’s engaging. And what’s engaging, it turns out, is often what’s enraging.
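What does “optimised for engagement” actually mean? Something like the cartoon below, in which every post and weight is invented. Real rankers learn their weights from billions of clicks rather than hard-coding them, but the incentive they encode is the same: whatever keeps you scrolling rises to the top.

```python
# A cartoon version of engagement-optimised feed ranking. Posts,
# click rates and weights are all invented for illustration.

posts = [
    {"headline": "City council passes budget",      "clicks": 0.02, "outrage": 0.1},
    {"headline": "You won't BELIEVE what they did", "clicks": 0.09, "outrage": 0.8},
    {"headline": "Local shelter rehomes 40 dogs",   "clicks": 0.04, "outrage": 0.0},
]

def engagement_score(post):
    # Outrage keeps people scrolling, so it earns a bonus.
    return post["clicks"] + 0.05 * post["outrage"]

# The feed you see is not the truest content, just the stickiest.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.3f}  {post['headline']}")
```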
The internet was supposed to democratise truth. Anyone could publish. Anyone could investigate. The gatekeeper era — of editors and broadcasters deciding what you should know — was over. Information would be free.
What happened instead was something no one quite predicted. The same openness that allowed citizen journalists to report from war zones also allowed conspiracy theories to travel faster than facts. The same algorithms that surface cat videos also amplify political rage — because outrage drives engagement, and engagement drives revenue.
And then came the deepfakes. AI-generated videos so convincing that a president can appear to say things they never said. Voices cloned from thirty seconds of audio. Photographs of events that never happened. In the 2024 elections across the world, deepfake videos surfaced in India, the US, Indonesia and the UK — sometimes crude, sometimes terrifyingly polished.
We now live in an era where seeing is no longer believing. And the damage isn’t just that people believe false things — it’s that they stop believing anything at all. When everything could be fake, trust itself becomes the casualty.
In 2024, the European Union did something remarkable: it passed the world’s first comprehensive law specifically for AI. The EU AI Act classifies AI systems by risk — from minimal (spam filters) to unacceptable (social credit scoring) — and imposes strict requirements on high-risk applications in healthcare, policing and hiring.
Across the Atlantic, the United States has taken a different path — lighter on regulation, heavier on trust in industry to self-govern. China has its own approach: AI regulation tightly aligned with state control, requiring algorithms to uphold “core socialist values.” India is somewhere in between, with a new data protection law but no comprehensive AI regulation yet.
The patchwork is telling. The technology is global, but governance is local. A facial recognition system banned in Brussels can be freely deployed in a city that has no such laws. An algorithm considered too risky for European healthcare can be exported to countries with weaker protections. The result: a kind of regulatory arbitrage, where the most vulnerable populations often have the least protection.
🇪🇺 Europe — Caution First
The EU AI Act takes a risk-based approach. High-risk AI must be transparent, audited and subject to human oversight. Social scoring and mass surveillance are banned. Critics say it may slow innovation; supporters say it protects rights.
🇺🇸 United States — Innovate First
Reliance on voluntary commitments and sector-specific rules. Tech companies largely self-govern. Executive orders set direction but lack legislative teeth. The world’s biggest AI companies are American — and largely unregulated.
🇨🇳 China — Control First
Algorithmic transparency requirements and content regulations aligned with state ideology. AI must serve state interests. Massive investment in AI research alongside tight political control over its deployment.
🇮🇳 India — Building the Framework
The Digital Personal Data Protection Act (2023) is a start. Sector-specific AI guidelines are under development. India balances ambitions as an AI hub with deep concerns about data privacy, digital exclusion and corporate power.
Here’s a detail that rarely makes the headlines: training a single large AI model — the kind that powers your favourite chatbot — can emit as much carbon dioxide as five cars produce over their entire lifetimes. The data centres that house these systems consume staggering amounts of electricity and water. The chips that power them require minerals such as cobalt, lithium and rare earths, mined, often under harsh conditions, in Congo, Chile and China.
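The arithmetic behind such estimates is straightforward enough to sketch. Every figure below is an assumed round number for a hypothetical training run, not a measurement of any real model; the point is how quickly the kilowatt-hours compound.

```python
# Back-of-envelope emissions for a hypothetical training run.
# All inputs are assumed round figures, not real measurements.

gpus = 1000                 # accelerators in the cluster
power_per_gpu_kw = 0.4      # draw per accelerator, in kilowatts
pue = 1.2                   # datacentre overhead (cooling, power losses)
days = 30                   # length of the training run
grid_kg_co2_per_kwh = 0.4   # carbon intensity of the local grid

energy_kwh = gpus * power_per_gpu_kw * pue * days * 24
emissions_tonnes = energy_kwh * grid_kg_co2_per_kwh / 1000

print(f"Energy used: {energy_kwh:,.0f} kWh")
print(f"CO2 emitted: {emissions_tonnes:,.0f} tonnes")
# Against the commonly cited figure of roughly 57 tonnes per car
# lifetime (manufacture plus fuel), one run is already several cars.
```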
AI has a body. Not in the sci-fi sense, but in the material sense: mines, factories, server farms, undersea cables, cooling systems, mountains of electronic waste. Writer Kate Crawford calls AI an “extractive industry” — one that takes from the earth and from human labour in ways we rarely see and almost never discuss.
And here’s the twist: the environmental burden falls hardest on the Global South — the same communities that benefit least from AI’s products and have the smallest voice in how it’s governed. The cobalt miner in Congo, the water-stressed community near an Arizona data centre, the e-waste handler in Ghana — they are subsidising the AI revolution with their health and their landscapes.
If you’ve read this far, you might feel overwhelmed. That’s understandable. The scale of AI’s impact — on jobs, on justice, on identity, on democracy, on the planet — can feel paralysing. But here’s the thing: this story is not finished. The way AI reshapes society is not predetermined. It is being decided, right now, by choices being made in boardrooms, parliaments, design studios and, yes, in the daily decisions of ordinary people like you.
You don’t need a computer science degree to have a voice in this conversation. In fact, the conversation needs voices that aren’t from computer science — voices from social work, education, healthcare, the arts, from communities that have been historically excluded from technology decisions.
1. Stay Curious, Not Cynical
You don’t need to understand neural networks. But understanding how your newsfeed is curated, how your data is used, and why that “personalised” ad appeared — that’s power. Read widely. Follow journalists and researchers covering AI.
2. Demand Transparency
Ask the platforms you use: what data are you collecting? How are decisions being made about me? Support organisations fighting for digital rights — like EFF, Access Now, or India’s Internet Freedom Foundation.
3. Protect Your Digital Self
Use privacy settings deliberately. Review app permissions. Consider who benefits from your data — and whether you’re comfortable with that trade. Small habits compound into meaningful protection.
4. Engage as a Citizen
AI regulation is being debated in every democracy. Participate in public consultations. Vote for leaders who take technology governance seriously. The rules being written now will shape the next fifty years.
The quiet revolution is underway. It won’t ask your permission. But it will respond to your attention, your voice and your refusal to look away. The story of AI and society is ultimately a story about us — about the values we embed in our machines, the futures we choose to build, and the ones we choose to resist.
— End of feature —