The IKEA Effect and the Rise of the ‘AI Artist’

There’s a peculiar psychological phenomenon known as the IKEA Effect — named after the Swedish flat-pack furniture empire. It describes how people place disproportionately high value on things they’ve partially created themselves. In other words, if you build it (even just part of it), you’re likely to love it more.

You spend an hour assembling a wobbly bookshelf, and suddenly, it’s not just furniture — it’s a personal triumph. A reflection of you. A thing you made. That pride and ownership are powerful. But what happens when we apply the IKEA Effect to art — more specifically, AI-generated art?

The New Wave of “AI Artists”

In the past few years, we’ve seen a rise in people proudly calling themselves AI artists. Using tools like Midjourney, DALL·E, or Leonardo.Ai, users type a few descriptive words (a “prompt”) and, in seconds, a beautiful, fully formed image appears.

The result can be stunning, surreal, and emotionally evocative. And yet… the person behind it has only provided the ingredients. The AI is the real chef. Or rather, the entire factory.

This is where the IKEA Effect kicks in. Because the user typed the words, they feel they’ve created the art. Just like assembling a table with Allen keys, that sense of partial authorship gives them a burst of pride — and for some, that’s enough to claim a title like “artist.”

Prompting ≠ Craft

Let’s be clear: Prompt engineering is a skill, especially when producing complex compositions or a consistent series of images. But is it the same as studying anatomy for years to draw a human figure? Is it the same as mastering oil paint, or understanding light and texture, or dedicating your life to understanding how art moves people?

Many traditional artists feel a growing sense of frustration. They’ve trained for years — often at great personal and financial cost — to hone their craft. And now, a person with no formal experience can type “renaissance-style portrait of a cat playing the violin in a flower field” and get instant praise, clicks, or even paid commissions.

That’s not to say AI art can’t be beautiful or meaningful. But should it be valued the same way? Should the prompt engineer be celebrated like a painter, illustrator, or photographer?

Art vs Assembly: The Emotional Disconnect

Art is often about the process — the hours of sketching, revising, reworking, and the human stories behind each mark. AI shortcuts this entirely. There’s no mistake-making, no happy accident, no soulful imperfection. It’s mass generation dressed as creativity.

AI art feels good to the maker, because they’ve added the egg to the pre-made cake mix. But that’s not baking. It’s assembling. And while there’s nothing wrong with a Betty Crocker moment now and then, we should be careful about how we frame it — especially when real bakers have spent years perfecting their recipes.

The Economic and Cultural Shift

There’s also an economic layer here. Many artists now find themselves priced out of commissions — replaced by AI tools and those who can use them to undercut with speed and scale. Others see their styles mimicked and their work fed into the very data sets that power these AI tools, without permission or credit.

This isn’t just about individual recognition — it’s about the value we place on human creativity in a world increasingly defined by machines.

Final Thoughts: Don’t Dismiss the Makers

The IKEA Effect can be a wonderful thing — it reminds us that participation creates pride. But in art, we need to ask: How much participation is enough to justify the label of “artist”?

At Flaminky, we believe creativity comes in many forms. AI can absolutely be a tool in the creative toolbox. But let’s not forget — or undervalue — the people who have dedicated their lives to understanding and creating true art.

Because in a world where AI can do almost anything, what might become rare — and truly valuable — is the human hand, human struggle, and human story behind the work.


The Hidden Environmental Cost of Your AI Prompt

When we ask artificial intelligence to generate an image, video, or piece of writing — whether that’s a holiday itinerary, a blog post, or a deepfake of a celebrity eating a Greggs sausage roll — we rarely think about what it takes to make that response happen. It’s just a line of text and a click, right?

Wrong.

Behind every AI-generated answer is a massive environmental footprint, one that’s growing faster than we realise — and that footprint has a name: data centres.

The AI Industry’s Dirty Secret

Every time you interact with AI — whether it’s ChatGPT, Midjourney, DALL·E, or Google Veo — your request is processed by thousands of computers housed in vast server rooms. These servers don’t run on magic. They consume electricity, pump out heat, and require huge amounts of water to cool down. This is particularly true for large language models and video-generation AIs, which are computationally intensive.

And with the world’s obsession with AI skyrocketing, the environmental cost is scaling with it.

Water: The Invisible Cost of Intelligence

A single AI model, during its training phase, can consume millions of litres of water. When AI companies say they’re “training” a model, they’re essentially putting thousands of GPUs (graphics processing units) through months of high-intensity computation, which generates immense heat.

How is that heat managed?

Water cooling systems.

According to a 2023 study by researchers at the University of California, Riverside, training OpenAI’s GPT-3 in Microsoft’s US data centres consumed an estimated 700,000 litres of clean water. And that’s just one model. Every prompt you send after that adds to the ongoing usage. Meta, Google, and Amazon also use millions of litres of water per day to keep their AI servers stable and functioning.

Electricity and Carbon Emissions

It’s not just about water. AI consumes an astonishing amount of power, often sourced from fossil-fuel-heavy grids. In 2022, data centres accounted for roughly 1–1.5% of the world’s electricity consumption — and with AI exploding in popularity since then, that figure is only climbing.

To put things in perspective:

  • Generating a single AI image can use as much energy as fully charging a smartphone.

  • Training a single AI model can emit up to 284 tonnes of CO₂ — that’s the equivalent of 60 petrol cars driving for a year.
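
Those comparisons are easy to sanity-check. Below is a minimal Python sketch using the figures quoted above, plus two assumed reference values that are ours, not from any single study: typical petrol-car emissions (roughly 4.6 tonnes of CO₂ a year, a commonly used US EPA figure) and the volume of an Olympic pool.

```python
# Back-of-envelope check on the figures above. Inputs are rough,
# publicly cited estimates, not measurements.

TRAINING_CO2_TONNES = 284        # training-emissions estimate quoted above
CAR_CO2_TONNES_PER_YEAR = 4.6    # assumed: typical petrol car (US EPA figure)
GPT3_WATER_LITRES = 700_000      # GPT-3 training-water estimate quoted earlier
OLYMPIC_POOL_LITRES = 2_500_000  # standard 50m pool, included for scale

car_years = TRAINING_CO2_TONNES / CAR_CO2_TONNES_PER_YEAR
pool_share = GPT3_WATER_LITRES / OLYMPIC_POOL_LITRES

print(f"Training emissions ≈ {car_years:.0f} petrol cars driven for a year")
print(f"GPT-3's training water ≈ {pool_share:.0%} of an Olympic pool")
```

Run it and you get roughly 62 car-years and just over a quarter of an Olympic pool, which is why the “60 petrol cars” comparison holds up.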

What Even Is a Server Room?

Imagine a gigantic warehouse filled with rows upon rows of machines stacked in towers — constantly running, constantly humming, constantly consuming power. These are server farms, and they’re the backbone of the internet and AI.

The irony? Some of these data centres are now being built in deserts, including in Arizona, one of the driest states in the US, where water for cooling is already in scarce supply.

AI Sustainability: Greenwashing or Progress?

Tech giants like Google and Microsoft claim they are working towards carbon neutrality and “green AI.” Some are investing in air cooling, renewable energy, and liquid immersion cooling (where servers are dunked in non-conductive liquid to keep them cool without evaporating water).

But critics argue these efforts are slow and performative — especially as companies race to release bigger, faster, and more powerful AI tools without pause. If each model release requires exponentially more energy, is “sustainability” even possible at that scale?

What Can We Do?

We’re not saying “don’t use AI” — it’s an incredible tool and part of the future. But we need awareness and responsibility. Here’s what that can look like:

  • Use AI mindfully — not just for novelty or spammy content.

  • Support AI tools from companies making transparent, measurable green efforts.

  • Advocate for tech policy and regulations that hold AI companies accountable for environmental impacts.

  • If you’re a creator, brand or business, ask how your content is being made — and what it costs the planet.

Final Thoughts: There’s No Such Thing as a “Free” Prompt

Just because AI feels weightless doesn’t mean it’s without weight. Every time we ask a bot to dream, something in the real world works harder, burns hotter, and drinks more water to make that dream come true.

At Flaminky, we believe in technology that not only fuels creativity but does so responsibly. In a world increasingly shaped by code, it’s time we thought beyond the keyboard — and considered what our prompts are really asking of the planet.


Google Veo 3 and the Blurring Line Between Reality and Illusion

Google’s recent unveiling of Veo 3, its most advanced AI video generation tool yet, marks a massive leap in artificial intelligence — and not without consequence. With the ability to generate photorealistic, cinematic-quality video from text prompts, Veo 3 raises exciting possibilities but also serious concerns across industries and societies. From the world of cinema to the realms of misinformation, jobs, and creativity — nothing will remain untouched.

What Is Google Veo 3?

Think ChatGPT, but instead of words, it produces full videos. Veo 3 can create scenes from scratch based on natural language descriptions, adding realistic camera movements, lighting, natively generated sound and dialogue, and even emotional tone. It can emulate specific film styles, recreate environments, and build entire narratives from a prompt. This isn’t just animation — this is AI-generated cinema.

The Future of Cinema and Content Creation

Let’s be honest: Veo 3 is a game-changer for filmmakers, marketers, and content creators. Agencies that once required entire crews, cameras, actors, editors, and thousands in production costs can now conjure an entire campaign with just a keyboard and an imagination.

On the flip side? Real creatives, from set designers to directors of photography, may find their roles threatened.

This could give rise to a new kind of filmmaker — a “prompt director” — but what about the value of human-crafted stories, imperfections, and the magic of on-set collaboration? Will we crave authenticity in a world where everything can be perfectly faked?

Deepfakes, Fake News & Dead Internet Theory

Veo 3 brings the Dead Internet Theory — the idea that much of the internet is no longer created or interacted with by real people, but by bots and AI — uncomfortably close to reality.

Soon, you may not be able to tell if that video of a celebrity saying something inflammatory is real. Deepfakes, which once required high technical knowledge, are now democratised — and that’s dangerous. Combine this with political agendas, fake news, and conspiracy echo chambers, and we’re looking at a future where truth becomes optional.

Expect a flood of AI-generated media that’s indistinguishable from reality. And if people already distrust mainstream news, how will they cope when nothing can be verified?

The Scammer’s New Playground

Imagine receiving a video call or message from a loved one — or so you think — only to realise it was a scammer using Veo-like tools to deepfake their likeness. The tools that were once the preserve of high-end studios are becoming accessible to anyone. The scammer from Facebook Marketplace doesn’t need Photoshop anymore — they have Veo 3.

AI-generated misinformation could fuel identity theft, reputational damage, and even geopolitical tensions. We’re not just fighting misinformation — we’re fighting hyperrealism.

Marketing Agencies and the Collapse of “Real”

From brands creating entire ad campaigns without shooting a single frame to influencers that don’t exist, Veo 3 may accelerate the AI-first marketing era. It’s cheaper, faster, and often indistinguishable from real footage. But as more brands embrace it, the human touch — that raw authenticity that builds trust — may start to erode.

What happens when every influencer is AI-generated, every advert a prompt, every model digitally sculpted?

The Creativity Question

Veo 3 brings us back to the central question: What is creativity in the age of AI?

Are we entering a post-human artistic phase, where ideas matter more than execution? Or are we devaluing the skill, effort, and emotional depth behind human-made art?

There’s no doubt AI tools like Veo 3 can assist creatives — offering new ways to ideate, prototype, and tell stories. But we must also be aware of how easy it is to let the machine do all the work — and how quickly human talent can become undervalued, or even obsolete.

Final Thoughts: A Fork in the Algorithm

Google Veo 3 is both a revolution and a warning. It offers power, convenience, and breathtaking possibilities — but also a mirror to the darkest parts of our digital culture: manipulation, job displacement, surveillance, and the erosion of truth.

As we marvel at what’s possible, we also need to ask better questions: Who controls these tools? Who verifies what’s real? Who gets left behind?

At Flaminky, we celebrate the intersection of culture, tech, and society — and right now, we’re at one of those defining crossroads. The future isn’t just coming fast… it’s being generated.


Why It Took Decades to Test Female Crash Dummies – And the Deadly Risk Women Still Face

It’s 2025. We’ve got AI companions, billionaires in space, and yet… women are only just being accurately included in car safety testing. Shocking, isn’t it?

For decades, the standard crash test dummy has been based on the “average male body” — and that’s had devastating consequences for women behind the wheel or in the passenger seat. It’s a disturbing oversight, and one that’s only recently started to be addressed.

The Gender Bias in Crash Testing

Crash test dummies have existed since the 1950s. But for the majority of that time, they’ve been designed around the male anatomy — typically based on a 76kg, 1.77m tall man. The problem? That doesn’t reflect half the population.

It wasn’t until 2011 that a smaller “female” dummy was introduced into U.S. tests — and even that version was simply a scaled-down male dummy, not an accurate representation of female physiology. In Europe, the situation has been much the same.

In 2022, Swedish researchers developed the world’s first crash test dummy designed to reflect the average female body, accounting for differences in:

  • Muscle mass and strength
  • Pelvic structure
  • Neck size and strength
  • Sitting posture

And the results were eye-opening.

Women Are More Likely to Die or Be Injured

Because of these design flaws, women are at a significantly higher risk of injury or death in car accidents.

According to a 2019 study by the University of Virginia and related research:

  • Women are 73% more likely to be seriously injured in a frontal car crash.
  • They are 17% more likely to die in a comparable crash.

These aren’t small margins — they’re life-threatening gaps in safety that have gone unaddressed for far too long.

Why Has It Taken So Long?

The short answer: systemic bias.

The auto industry, historically dominated by men, has long seen the “male” body as the default. Car designs — from seat belts and airbags to headrests and dashboards — have been tailored to male proportions, while female bodies were treated as outliers or variations rather than a core part of the safety equation. Even today, there is still no standard seatbelt designed around pregnancy.

There’s also the issue of regulatory lag. Even though new female-specific crash test dummies exist, they’re still not required in many official safety tests. That means many manufacturers aren’t using them unless pressured to do so.

The Push for Change

In the UK and EU, awareness is slowly growing. The European New Car Assessment Programme (Euro NCAP) has begun revising its protocols, and researchers like Dr. Astrid Linder are pushing for sex-specific crash testing to become a global standard.

Dr. Linder’s research has been pivotal in showing that differences in how men and women move during a crash — especially in whiplash scenarios — demand better representation in crash simulations.

But change needs to be systemic, not symbolic.

What Needs to Happen Next

For true equity in car safety, we need:

  • Female crash dummies required in all crash tests — not just optional extras.
  • Updated regulations reflecting the average dimensions and biomechanics of women.
  • Inclusion of diverse body types, including pregnant women, elderly passengers, and various body sizes.
  • Transparent data on how vehicles perform for all genders — not just men.

Final Thoughts

It shouldn’t take decades to realise that safety should apply to everyone equally. Women have been literally dying from being left out of the testing process. And for all our talk of equality and progress, something as fundamental as car safety still reveals the blind spots of a male-centric world.

Having recently been in a car collision myself, I was reminded of this safety gap, one that still hasn’t been closed in cars around the world and that still affects nearly half of all road users.

At Flaminky, we believe visibility matters. Whether it’s crash dummies, representation in tech, or storytelling — including everyone isn’t a luxury. It’s a basic right.

Let’s hope the auto industry finally gets the crash course it desperately needs.


AI Job Interviews: A Technological Step Forward or a Step Back for Fair Hiring?

Imagine preparing for a job interview, only to be greeted not by a friendly face, but by a robotic interface with no human behind it. No chance to charm with your personality, explain the nuance of your CV, or clarify a misunderstood answer. Just an algorithm, scanning your expressions, analysing your tone, and crunching numbers you can’t see.

Welcome to the growing world of AI job interviews — and the very real fears that come with it.

The Rise of AI in Recruitment

More companies, especially large corporations and tech firms, are turning to AI to handle the initial stages of recruitment. From parsing CVs with automated filters to conducting video interviews analysed by machine learning, AI promises to save time and money while “removing human bias”.

But here’s the problem: AI might actually be introducing more bias — just in a subtler, harder-to-challenge way.

Flawed from the Start: Data Bias

AI doesn’t think for itself — it’s only as good as the data it’s trained on. If that data reflects societal biases (spoiler: it often does), the AI will learn and repeat those same biases.

For example, if a company’s past hiring decisions favoured a particular gender, accent, or ethnicity, the AI might learn to prioritise those traits — and penalise others. It’s not just unethical; it’s illegal in many countries. Yet it’s quietly happening in background code.
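
To make that concrete, here’s a minimal, fully synthetic sketch in Python. None of it is any vendor’s real system; the data, features, and numbers are invented purely to show the mechanism: a model trained on historical hiring decisions that favoured one group learns to penalise the other, even at identical skill levels.

```python
# Synthetic illustration of data bias in hiring models.
# No real data or real recruitment system is involved.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # the thing we'd like to hire on
group = rng.integers(0, 2, size=n)    # stand-in for a protected attribute

# Historical decisions: skill mattered, but group 1 was also favoured.
hired = skill + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.0

# The attribute leaks into the training features (via postcode, hobbies,
# word choice on a CV...), so the model can see it.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

print("learned weights [skill, group]:", model.coef_[0].round(2))
# A positive weight on 'group' means two equally skilled candidates
# are scored differently purely because of group membership.
```

The point isn’t the maths. Nobody wrote “prefer group 1” anywhere; the model inferred it from history. That’s exactly how bias hides in background code.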

Dehumanising the Hiring Process

Interviews are supposed to be a conversation. A chance for employers and candidates to connect, share, and assess suitability beyond just a checklist. AI, on the other hand, can’t gauge human nuance, empathy, or potential — it can only look at surface data.

This means:

  • Neurodivergent candidates may be misjudged based on non-standard eye contact or tone.
  • People from diverse cultural backgrounds may be filtered out due to accent or mannerisms.
  • Technical errors (like a poor internet connection) might wrongly signal lack of engagement or skill.

Worse still, candidates often have no one to speak to when things go wrong. No follow-up contact, no appeal process — just a rejection email, if anything at all.

Locking Out Opportunity

What happens when the “gatekeeper” to a job is an AI that doesn’t understand people? We risk creating a system where brilliant, capable individuals are excluded not because of their talent or values, but because they didn’t score highly on a robotic rubric they never got to understand.

In sectors like creative industries, teaching, or customer-facing roles — where emotional intelligence is crucial — AI interviews often fail to capture what really matters. Human connection.

The Future of Hiring: People First

We’re not anti-tech at Flaminky. In fact, we love when tech helps streamline systems and remove unnecessary barriers. But replacing humans entirely in such a sensitive, life-changing process as recruitment is not just flawed — it’s dangerous.

Instead of removing humans, companies should be using AI as a tool — not a replacement. That means:

  • Letting AI help shortlist, but not finalise decisions.
  • Allowing candidates to request a human-led interview instead.
  • Being transparent about how AI is used, and giving people the chance to appeal.

In Summary

Jobs are about more than just data. They’re about people — their growth, values, adaptability, and potential. AI interviews may tick boxes, but they miss the heart of what makes someone the right fit.

Until AI can truly understand humans, humans should be the ones doing the hiring.

After all, we’re not algorithms. We’re people. Let’s keep it that way.


The Doomscrolling Spiral: How Endlessly Scrolling Is Messing With Our Minds

It starts innocently enough. You open your phone to check a message, maybe scroll through TikTok or the news while waiting for your coffee to brew. Next thing you know, 45 minutes have passed and you’re deep into videos about climate disaster, global conflict, political chaos, or some stranger’s heartbreak — all while your coffee’s gone cold.

Welcome to the world of doomscrolling.

What Is Doomscrolling?

Doomscrolling is the act of endlessly consuming negative news or content online, especially via social media. Whether it’s updates on war, economic collapse, political scandals, celebrity break-ups or climate panic — the stream is infinite, and often feels inescapable.

It’s a fairly new term, but the behaviour is ancient: humans are wired to look for threats. In a modern, digital world, that primal instinct gets hijacked by infinite scroll feeds and clickbait headlines — feeding our anxiety while keeping us hooked.

Why Can’t We Look Away?

There’s a certain psychological trap at play. Negative information captures more of our attention than neutral or positive stories. It feels urgent, like something we need to know. Add algorithms to the mix — which prioritise content that provokes strong emotional reactions — and suddenly you’re trapped in a digital echo chamber of despair.

Apps like Twitter (now X), TikTok and Instagram are designed to hold your attention. Doomscrolling doesn’t happen because you’re weak-willed — it happens because it’s literally engineered that way.
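
Here’s a deliberately oversimplified sketch of that dynamic in Python, with invented numbers and no resemblance to any platform’s actual ranking system:

```python
# Toy feed ranker: sort purely by predicted engagement, and whatever
# provokes the strongest reaction floats to the top of the feed.

posts = [
    {"title": "Local bakery wins award", "engagement": 0.04},
    {"title": "Cute dog learns to skateboard", "engagement": 0.07},
    {"title": "Economy on the brink, experts warn", "engagement": 0.19},
    {"title": "Outrage as scandal deepens", "engagement": 0.23},
]

feed = sorted(posts, key=lambda p: p["engagement"], reverse=True)
for post in feed:
    print(f'{post["engagement"]:.0%}  {post["title"]}')

# Nobody wrote "show doom first". Optimising for reactions did that.
```

Swap the hard-coded numbers for a model that predicts engagement from your past behaviour, and you get the personalised despair loop described above.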

Mental Health Fallout

The impact isn’t just digital; it’s deeply emotional and psychological. Studies have linked excessive doomscrolling to:

  • Increased anxiety and depression
  • Disrupted sleep patterns
  • Feelings of helplessness and burnout
  • Decreased focus and productivity

It can also desensitise you — numbing your reaction to genuinely important news because you’re overloaded by a constant stream of disaster.

The Post-TikTok Era: Worse or Better?

With TikTok’s looming ban in places like the US, users are already jumping ship to alternatives like RedNote and Instagram Reels. But if these platforms operate on the same engagement-driven model, are we just jumping from one doomscrolling feed to another?

The real question isn’t what platform we’re using — it’s how we’re using them.

Reclaiming Control

Here’s the thing: information isn’t the enemy. We should stay informed. But not at the cost of our mental health or inner peace.

Here’s how you can break the doomscrolling cycle:

  • Set time limits: Use app timers to restrict your usage.
  • Curate your feed: Unfollow accounts that drain you, and follow ones that uplift or educate with nuance.
  • Seek long-form journalism: Get depth, not just hot takes.
  • Stay grounded: Go outside. Touch grass. Talk to people offline.
  • Do something: If the news overwhelms you, turn it into action — donate, volunteer, or vote.

Why It Matters for Creatives

At Flaminky, we believe creativity thrives in clarity. Doomscrolling clouds the mind and kills the spark. In a world that’s constantly screaming for your attention, protecting your mental space is a radical — and necessary — act.

So next time you find yourself 100 videos deep, just ask: is this making me feel anything, or just making me numb?

It’s not about quitting the internet — it’s about using it on your terms.

Your feed doesn’t have to be a trap. It can be a tool. Choose wisely.


RIP Skype: The Death of a Digital Pioneer

Remember Skype? The blue icon, the ringtone that signalled an incoming call from someone across the world, the grainy video chats that were — at the time — revolutionary. It was the way we connected, long before Zoom fatigue and Teams invites ruled our workdays. And now? Skype is quietly slipping into digital history, barely noticed, barely missed.

But it deserves a proper send-off — not just because of nostalgia, but because of what it meant, what it pioneered, and why it ultimately failed.

The Rise of a Tech Titan

Launched in 2003, Skype changed everything. It made free internet voice calls, and later free video calls, accessible to the masses. You could see and hear a friend in another country in real time, for free. That was magic.

Skype wasn’t just ahead of the curve — it was the curve. It set the standard for internet communication, particularly in the early 2000s when international phone calls were still expensive and unreliable.

By the time Microsoft acquired Skype in 2011 for $8.5 billion, it was a global giant. It had become a verb. “Let’s Skype later” meant catching up, doing interviews, running remote meetings. It was embedded into our digital culture.

Where Did It Go Wrong?

Skype’s downfall isn’t about one bad move — it’s about many missed opportunities. Microsoft’s acquisition, which should have propelled Skype into a new era, instead saw it stagnate. The interface became clunky, updates were confusing, and user trust eroded with every glitchy call and awkward redesign.

Then came the pandemic.

In a twist of fate, a global moment that should have been Skype’s grand resurgence — a world suddenly needing remote communication — was instead the moment it was eclipsed. Zoom, with its smoother interface and faster adaptability, swooped in and took Skype’s crown without even blinking.

While the world turned to Zoom, Google Meet, and later even WhatsApp and FaceTime for daily communication, Skype faded into the background. By 2025, it feels almost like a relic — still technically alive, but largely ignored.

What Skype Symbolised

Skype symbolised a kind of early optimism about the internet. It was about connecting, not controlling. It wasn’t overloaded with ads, algorithms or content feeds. It was pure communication — seeing someone’s face and hearing their voice across borders, wars, and time zones.

It also represented a time when tech companies were disruptors, not monopolies. When services were innovative, not addictive. When “connecting the world” wasn’t a slogan, but a genuine achievement.

A Lesson in Legacy

Skype’s quiet death is a warning to tech giants: no matter how popular you are, complacency will kill you. Innovation doesn’t wait. Users want reliability, simplicity and a product that evolves with them.

And for users? It’s a reminder of how fast our digital lives move. How one day, an app can be indispensable — and the next, forgotten.

So, RIP Skype.

You were the OG. You walked so Zoom could run. You let us hear our mums’ voices from across continents, helped people fall in love long-distance, gave freelancers a way to work globally, and sometimes froze at the worst moment possible.

You were chaotic, charming, and ahead of your time — until time caught up.

And for that, we’ll always remember you.


Duolingo’s AI-First Shift: Replacing People With Bots and the Human Cost of Progress

When Duolingo announced it was going “AI first,” the tech world applauded. But behind the fanfare of efficiency, scale, and innovation lies a more uncomfortable truth — one that’s becoming all too familiar. People are losing their jobs to AI. And it’s not just any people. It’s the educators, the writers, the curriculum designers — the very heart of what once made Duolingo feel human.

In early 2024, Duolingo quietly laid off a significant portion of its contract workforce, many of whom were language and learning experts. In their place? AI. Specifically, OpenAI’s GPT models, retooled and rebranded as chatbots and content generators, capable of producing lesson plans, quizzes, and dialogue scripts with lightning speed. The company celebrated the shift as a way to scale globally and improve personalisation. But what happens when “personalisation” comes at the cost of actual people?

The Ironic Human Cost of Language Learning

Duolingo was built on the promise of making language education accessible to everyone. Its quirky owl mascot, streak reminders, and gamified lessons made it feel less like a classroom and more like a conversation. But now, that conversation is increasingly one-sided.

Replacing expert linguists with AI might make business sense, but it removes the very soul of language learning. Language is cultural. It’s full of nuance, humour, awkward pauses, and real-world context. No AI can replicate the feeling of a human explaining why a phrase matters, or how it changes in different regions, or when it’s appropriate to use.

The irony? Duolingo’s users want to learn language to connect with others. And now, they’re doing it through systems that remove the people from the process.

AI Anxiety and Job Insecurity

Duolingo’s move is just one example of a growing fear across creative and educational sectors: that AI isn’t just a tool, but a replacement. The educators let go weren’t underperforming — they were simply no longer needed, because machines could do the job faster and cheaper.

This has sparked an ethical conversation: should tech companies use AI to support human workers or replace them entirely? And what message does it send when one of the most influential edtech companies in the world chooses the latter?

For many, it’s a chilling sign of what’s to come. If even education — a field deeply rooted in empathy, connection and understanding — is being automated, what’s safe?

Users Still Want People

Despite the shiny new AI features, not all users are on board. Many learners find the chatbot interactions stiff, repetitive, or emotionally hollow. Some have shared on forums that they miss the personal touches — the cultural notes, the humour, the sense that someone real was behind the lesson design.

There’s also growing concern about the way AI learns from user data. With less human oversight, who decides what’s accurate, respectful, or culturally sensitive? When humans are removed from the loop, the risk of bias or misinformation increases.

What’s Next?

Duolingo may be leading the charge, but it’s not alone. Across the tech world, we’re seeing similar stories play out: human jobs vanishing in the name of progress. The question isn’t whether AI will be part of our future — it already is. The question is: what kind of future are we building? One where humans work with AI? Or one where they’re replaced by it?

For all its clever gamification, Duolingo might have underestimated one thing: people don’t just want to learn language. They want to feel seen, heard, and understood. And that’s something no AI — no matter how advanced — can truly replicate.

Perhaps it’s time to remember: the most powerful learning tool of all is still a human being.


Dead Internet Theory: Are We Talking to Real People Anymore?

In recent years, a once-fringe idea known as the Dead Internet Theory has gained surprising traction. It speculates that much of the internet as we know it today is no longer driven by human interaction, but by bots, AI-generated content, and algorithms designed to simulate engagement. Now, with platforms like Instagram (under Meta) rolling out AI-powered chatbot profiles that users can interact with in their DMs, this eerie theory feels less like sci-fi paranoia—and more like a sign of things to come.

Instagram’s new AI profiles are designed to behave like real users. You can talk to them, joke with them, ask them questions. Some even mimic celebrity personas or influencers. To many, they seem harmless, even fun. But when AI becomes indistinguishable from real people in digital spaces that were once rooted in human connection, we have to ask: what does this mean for the future of how we communicate?

There’s already a creeping sense of unreality across social media. Between bots inflating likes, deepfake videos, algorithm-driven content and now AI personas pretending to be your virtual mate, it’s becoming harder to tell what’s real and what’s manufactured. Platforms like X (formerly Twitter) are flooded with AI-generated content. Facebook’s feed is often filled with recycled posts or engagement bait. Instagram’s polished reels are increasingly edited, filtered, or AI-assisted. In this world of synthetic interaction, how do we find authentic connection?

Meta’s AI chatbot profiles take the uncanny valley one step further. Instead of just showing us content, they now talk to us—imitating personalities, offering companionship, mimicking emotional intelligence. While this might serve as novelty or entertainment, it risks undermining our capacity to communicate with and relate to actual people.

There’s also a darker consequence: AI chatbots don’t just fill space—they shape conversations. They can be programmed to nudge political opinions, suggest products, or reinforce brand loyalty under the guise of friendly conversation. In other words, they’re marketing tools disguised as people. The more users engage with these AI profiles, the more Meta learns—about us, our preferences, our vulnerabilities.

And here lies the connection to the Dead Internet Theory. If more and more online interactions are with algorithms and artificially-generated responses, the internet loses its original identity as a democratic space for human expression. It becomes a carefully engineered simulation, a network of walled gardens run by corporations, designed to monetise attention and manipulate behaviour.

This isn’t to say AI has no place in our digital world. Used ethically, it can enhance creativity, accessibility and even mental health services. But when AI replaces genuine interaction, it begins to erode the fabric of what made the internet revolutionary in the first place—human connection.

So next time you’re chatting in your Instagram DMs, you might want to ask: Am I really talking to someone… or something?

Because in the dead internet age, the line between user and illusion is growing fainter by the day.


Katy Perry in Space: Inspiration or Marketing Gimmick?

When news broke that Katy Perry was among a group of women sent to space as part of Jeff Bezos’ Blue Origin space tourism programme, the headlines came thick and fast. A pop star in space? It sounds like something straight out of a sci-fi musical. But behind the daisy tributes and the staged reverence for “Mother Earth,” many are left wondering: was this truly a mission of exploration, or just another glossy PR stunt dressed up as history?

Let’s be clear: space travel is one of humanity’s most extraordinary achievements. It’s about pushing boundaries, discovering the unknown, and, ideally, bettering life on Earth through scientific progress. So when a high-profile celebrity boards a spaceship not to conduct research, but seemingly to promote a tour and pose with a flower for Instagram, the symbolism gets… murky.

Yes, it was billed as an “all-female crew” and a “tribute to empowerment,” and of course, it’s important to celebrate women in space. But are we celebrating the right ones? Suni Williams, a seasoned astronaut, was literally stuck in space for nine months between 2024 and 2025 due to spacecraft issues: a harrowing, heroic ordeal that received a fraction of the media coverage Katy Perry’s short, curated jaunt did.

There’s also something deeply contradictory about praising the Earth from space, while contributing to the emissions-heavy industry that is commercial space tourism. These flights are not carbon neutral, and for all the talk of love for the planet, rocketing pop stars to the edge of the atmosphere for a selfie feels like more of a spectacle than a statement.

And let’s not forget who’s behind this. Jeff Bezos’ Blue Origin is not just about the wonder of space—it’s a business. A luxury offering for the ultra-wealthy to “experience the overview effect” while the rest of us are grounded, dealing with the real effects of climate change and economic disparity. It’s a new frontier, sure—but one increasingly defined by who can afford to play astronaut for a day.

So what was Katy’s journey really about? Promoting a tour? Boosting a brand? Making headlines? Probably all three. But it certainly wasn’t about advancing science or helping humanity understand the cosmos.

At a time when real astronauts are quietly risking their lives and conducting meaningful research above our heads, the glamorisation of celebrity space trips risks cheapening the entire endeavour. If this is the future of space travel—more influencer campaign than interstellar innovation—maybe it’s time we asked whether we’re truly reaching for the stars, or just staging another photo op.