From Pantone’s Colour of the Year to the Ugliest Colour in the World

Every December, designers, trend forecasters and brand strategists hold their collective breath waiting for one announcement: the Pantone Colour of the Year.

It’s a tradition that shapes fashion collections, marketing palettes and Instagram feeds for months. The colour isn’t just a shade; it’s a statement about mood, emotion, and culture.

Pantone calls it “a snapshot of what we see taking place in our global culture.”
But while we celebrate colour as art and emotion, there’s another, far darker side to colour psychology, one that’s used not to attract, but to repel.

Colour That Inspires — and Colour That Discourages

Each year, Pantone tells us what’s in.
From Viva Magenta to Classic Blue and Peach Fuzz, these shades are chosen for how they make us feel: hopeful, energetic, calm, connected.

But what about colours chosen specifically to make us feel the opposite: disgust, discomfort, even shame?

That’s where the infamous “world’s ugliest colour” comes in.

Meet Pantone 448C: The Colour Designed to Make You Quit

Back in 2012, the Australian government commissioned a study to find a colour that would make cigarette packaging as unattractive as possible. After testing dozens of shades with focus groups, they found a clear winner (or rather, loser).

Pantone 448C, a murky brown-green-grey, was declared the world’s ugliest colour.
Psychologists described it as “dirty”, “deathly”, and “tar-like”. Focus groups associated it with decay, filth, and sickness.
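
For the curious, here’s a minimal sketch in Python of how that shade translates to a screen. Pantone colours are defined as printing inks rather than pixels, so the RGB triple below is the widely quoted sRGB approximation of 448C, not an official conversion:

```python
# Widely quoted sRGB approximation of Pantone 448C ("opaque couché");
# Pantone defines its colours as inks, so any on-screen value is approximate.
PANTONE_448C = (74, 65, 42)

def rgb_to_hex(rgb):
    """Convert an (R, G, B) triple to a CSS-style hex string."""
    return "#{:02X}{:02X}{:02X}".format(*rgb)

print(rgb_to_hex(PANTONE_448C))  # -> #4A412A
```

Render #4A412A at full screen, and the focus groups’ verdict starts to feel fair.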

It was so repulsive that it became the standard colour for plain cigarette packaging, deliberately chosen to make smoking feel less glamorous and more grim.

And it worked.

Australia saw a noticeable decline in smoking rates after the packaging change, and several countries, including the UK, France and New Zealand, followed suit.

A single colour had changed behaviour.

The Psychology Behind the Palette

We often think of colour as aesthetic — a creative choice. But colour is psychological before it’s visual. It speaks to our instincts.

  • Warm colours (reds, oranges) evoke passion, energy and appetite. 
  • Cool tones (blues, greens) feel calm, clean, trustworthy. 
  • Muted tones — especially browns and greys — can feel lifeless or institutional. 

That’s why Pantone 448C was so effective: it stripped away the allure. It took something once designed to feel sleek and desirable, and rebranded it as something repulsive.

It’s a clever inversion of what brands normally do — colour not to attract, but to discourage.

Colour as Communication

The contrast between Pantone’s joyful Colour of the Year campaigns and the government’s anti-smoking palette highlights something fascinating: colour can carry meaning without words.

Pantone uses colour to express emotion, to unify global design under shared optimism.
Governments use colour to influence public behaviour and perception.
Both rely on the same truth: colour speaks directly to the subconscious.

Designers already know this intuitively: a good colour choice can sell a product, shift a mood, or define a brand. But the “ugliest colour in the world” reminds us that design can also serve a social purpose: sometimes, beauty isn’t the goal.

Beauty, Behaviour, and Branding

There’s a poetic irony in how colour theory bridges design and psychology.
While brands chase Pantone trends to appear relevant or aspirational, governments use the very same science to save lives.

It shows that colour isn’t neutral; it carries cultural, emotional, and even ethical weight.
What’s “beautiful” in one context might be “disgusting” in another, depending entirely on what we’re trying to achieve.

Maybe that’s the real lesson behind Pantone’s yearly ritual: it’s not just about the colour we celebrate, but how we understand the power of colour itself.

From inspiring art to discouraging addiction, colour design sits quietly at the crossroads of creativity and psychology — shaping how we feel, and sometimes, how we behave.


When Artists Couldn’t Draw Horses Running

Before photography galloped into the picture, even the greatest artists were getting one thing consistently and hilariously wrong:

How horses run.

For centuries, painters and sculptors depicted galloping horses with their legs stretched out like leaping greyhounds: front legs forward, back legs extended behind, suspended in mid-air, frozen in a majestic yet physically impossible stride.

It looked powerful. It looked elegant. It looked completely wrong.

Before the Lens, There Was Guesswork

Before cameras, artists had only one tool for capturing motion: observation.
But the human eye and brain can’t process movement that fast. Horses gallop at such speed that, without photographic reference, it’s impossible to see the exact positioning of their legs.
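
A rough back-of-envelope (assuming, say, a gallop of 15 m/s and a stride of about 6 m, both plausible figures) shows why:

$$ \text{stride time} \approx \frac{6\ \text{m}}{15\ \text{m/s}} = 0.4\ \text{s} $$

Each phase of that stride, including the fleeting moment of suspension, therefore lasts roughly a tenth of a second: far too brief for the unaided eye to freeze and inspect.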

So, for centuries, artists simply guessed.

They used logic and aesthetics instead of science. If a horse runs fast, surely its legs must stretch far apart, right?
It was a natural assumption, until technology proved otherwise.

Enter Eadweard Muybridge: The Man Who Froze Motion

In the 1870s, Eadweard Muybridge, a British photographer working in the United States, was hired to settle a hotly debated question:
When a horse gallops, are all four hooves ever off the ground at once?

To find out, Muybridge set up a line of cameras triggered by tripwires along a racetrack.
The resulting series of photographs, captured in rapid succession, revealed something no one had ever seen before.

Yes, all four hooves do leave the ground, but not when the legs are stretched out.
Instead, it happens when the legs are tucked under the body, mid-stride, the complete opposite of what centuries of art had shown.

It was a revelation.
And it changed the way people understood movement forever.

When Art Meets Evidence

Muybridge’s photos didn’t just correct an artistic mistake; they shifted how humans thought about seeing, truth, and representation.

For the first time, artists and scientists had proof that the eye could deceive.
Movement, once fluid and mysterious, could now be dissected frame by frame.

Painters, sculptors and animators began to study these sequences, leading to more realistic depictions of not just horses, but all living motion.
You can see Muybridge’s influence ripple through everything from classical painting to early animation to the biomechanics used in CGI today.

The Poetry of Imperfection

And yet, there’s something charming about those pre-photography depictions.
They remind us that art isn’t just about accuracy; it’s about interpretation.

When artists painted horses before photography, they weren’t trying to lie. They were capturing the feeling of speed, power, and grace, the essence of motion, rather than its exact mechanics.

In a way, those elongated, impossible strides were more about emotion than anatomy.
They show how art has always been a collaboration between imagination and perception, a dialogue between what we think we see and what we feel to be true.

Technology and Truth

The horse paintings of the pre-photography era tell a larger story about technology’s role in art.

Every new tool, from the camera to AI, reshapes how we define creativity.
When photography arrived, artists didn’t become obsolete. They adapted. Impressionists like Degas drew on photographic reference and motion studies to create new forms of realism — emotional, fleeting, human.

In the same way, today’s artists are learning to coexist with digital tools and AI, finding new ways to express timeless questions:
What’s real? What’s beautiful? What’s worth capturing?

A Gallop Through Time

So next time you see an old painting of a horse sprinting with legs stretched out like a cartoon, smile. It’s not a mistake. It’s a snapshot of a world before cameras, before slow motion, before science could catch up with imagination.

It’s proof that art has always been about more than getting it right.
It’s about trying to see, and sometimes, the trying itself is what makes it beautiful.

Because long before photography showed us the truth, art taught us to look for it.


Xania Monet: The AI R&B Artist Causing a Real-World Backlash

The music industry has officially entered a new era — one where record deals aren’t just being signed with singers, but with software.

This month, the internet has been buzzing over Xania Monet, an AI-generated R&B artist who reportedly signed a multi-million-dollar record deal with Hallwood Media after climbing the Billboard R&B Airplay charts.
She’s got a voice, a catalogue, a label — everything you’d expect from a real artist. Except she isn’t one.

Xania Monet doesn’t exist. Not in the way we understand existence.

And not everyone’s applauding.

A New Kind of Star

Xania Monet was created by Telisha “Nikki” Jones, a poet and lyricist from Mississippi, who used the AI music platform Suno to bring her creative vision to life.
Monet’s songs are sultry, soulful, and algorithmically perfect — her tone silky, her rhythm precise, her imperfections… nonexistent.

On paper, it sounds revolutionary: an artist who can sing forever, never gets tired, and doesn’t need a recording booth, tour budget, or vocal warm-up.

But to many musicians, it’s not innovation. It’s an invasion.

The Backlash Begins

The announcement of Monet’s record deal was met with immediate backlash across social media and within the music community.

Kehlani, the acclaimed R&B singer-songwriter, was one of the first to speak out.
In a viral post, she said:

“There’s an AI R&B artist who just signed a multi-million dollar deal… and the person is doing none of the work. Nothing and no one on Earth will ever be able to justify that to me.”

It’s a sentiment shared by many: that celebrating a machine-made artist in a genre built on emotion, storytelling, and lived experience feels like a betrayal of everything R&B stands for.

Because R&B isn’t just about sound — it’s about soul.

When the Machine Sings the Blues

There’s something almost poetic about an AI entering the world of R&B — a genre rooted in human vulnerability, pain, and love.

The irony isn’t lost on anyone.
How can a non-human entity sing about heartbreak, desire, or loss when it can’t feel any of those things?

For many artists, the rise of AI performers threatens to reduce art to data — stripping away the humanity that makes music meaningful.

Even if Xania Monet’s songs sound beautiful, they’re missing something invisible yet vital: a heartbeat.

The Industry’s AI Obsession

Record labels see AI differently.
They see efficiency. Consistency. Control.

An AI artist doesn’t demand royalties, take holidays, or go off-script in interviews.
It can be replicated endlessly, customised for audiences, and marketed across multiple languages.

It’s a label’s dream — and a human artist’s nightmare.

But there’s another layer too: AI models like Monet are trained using existing music. That means fragments of real artists’ vocals, melodies, and styles may have been used to create her sound — without their consent.

For many, that’s not innovation. It’s exploitation.

A Question of Authenticity

Every generation of musicians faces disruption — from Auto-Tune to streaming, technology has always redefined what’s possible.
But AI feels different because it doesn’t just enhance human creativity — it replaces it.

When an algorithm can perform, produce, and promote itself, we’re forced to ask:
Where does human artistry fit in?

And more importantly:
Will audiences actually care who — or what — made the music, as long as it sounds good?

The Heart of the Matter

R&B is a genre that’s always been about emotion and truth — from Aretha Franklin to SZA. It’s the sound of real experience.

That’s why Kehlani’s response hit a nerve.
Her frustration isn’t just about job security — it’s about meaning.
Art isn’t just about output; it’s about connection.

AI can write, sing, and simulate emotion — but it can’t feel it.

And maybe that’s the point where audiences will draw the line.

The Future: Collaboration or Competition?

Xania Monet’s rise doesn’t have to spell the end of human artistry — but it should serve as a wake-up call.

If used responsibly, AI could be a collaborator — helping artists experiment, compose, or visualise ideas in new ways.
But when AI becomes the artist, it raises a deeper ethical question: what does it mean to create, when creation no longer requires being alive?

Whether you love or hate her, Xania Monet is here to stay — a mirror held up to an industry that’s racing ahead faster than it can define its own values.

The real challenge now isn’t whether AI can make music.
It’s whether music made by AI can still move us.


The IKEA Effect and the Rise of the ‘AI Artist’

There’s a peculiar psychological phenomenon known as the IKEA Effect — named after the Swedish flat-pack furniture empire. It describes how people place disproportionately high value on things they’ve partially created themselves. In other words, if you build it (even just part of it), you’re likely to love it more.

You spend an hour assembling a wobbly bookshelf, and suddenly, it’s not just furniture — it’s a personal triumph. A reflection of you. A thing you made. That pride and ownership are powerful. But what happens when we apply the IKEA Effect to art — more specifically, AI-generated art?

The New Wave of “AI Artists”

In the past few years, we’ve seen a rise in people proudly calling themselves AI artists. Using tools like Midjourney, DALL·E, or Leonardo.Ai, users input a few descriptive words — a “prompt” — and in seconds, a beautiful, fully-formed image appears.

The result can be stunning, surreal, and emotionally evocative. And yet… the person behind it has only provided the ingredients. The AI is the real chef. Or rather, the entire factory.

This is where the IKEA Effect kicks in. Because the user typed the words, they feel they’ve created the art. Just like assembling a table with Allen keys, that sense of partial authorship gives them a burst of pride — and for some, that’s enough to claim a title like “artist.”

Prompting ≠ Craft

Let’s be clear: Prompt engineering is a skill, especially when creating complex or consistent series of images. But is it the same as studying anatomy for years to draw a human figure? Is it the same as mastering oil paint, or understanding light and texture, or dedicating your life to understanding how art moves people?

Many traditional artists feel a growing sense of frustration. They’ve trained for years — often at great personal and financial cost — to hone their craft. And now, a person with no formal experience can type “renaissance-style portrait of a cat playing the violin in a flower field” and get instant praise, clicks, or even paid commissions.

That’s not to say AI art can’t be beautiful or meaningful. But should it be valued the same way? Should the prompt engineer be celebrated like a painter, illustrator, or photographer?

Art vs Assembly: The Emotional Disconnect

Art is often about the process — the hours of sketching, revising, reworking, and the human stories behind each mark. AI shortcuts this entirely. There’s no mistake-making, no happy accident, no soulful imperfection. It’s mass generation dressed as creativity.

AI art feels good to the maker, because they’ve added the egg to the pre-made cake mix. But that’s not baking. It’s assembling. And while there’s nothing wrong with a Betty Crocker moment now and then, we should be careful about how we frame it — especially when real bakers have spent years perfecting their recipes.

The Economic and Cultural Shift

There’s also an economic layer here. Many artists now find themselves priced out of commissions — replaced by AI tools and those who can use them to undercut with speed and scale. Others see their styles mimicked and their work fed into the very data sets that power these AI tools, without permission or credit.

This isn’t just about individual recognition — it’s about the value we place on human creativity in a world increasingly defined by machines.

Final Thoughts: Don’t Dismiss the Makers

The IKEA Effect can be a wonderful thing — it reminds us that participation creates pride. But in art, we need to ask: How much participation is enough to justify the label of “artist”?

At Flaminky, we believe creativity comes in many forms. AI can absolutely be a tool in the creative toolbox. But let’s not forget — or undervalue — the people who have dedicated their lives to understanding and creating true art.

Because in a world where AI can do almost anything, what might become rare — and truly valuable — is the human hand, human struggle, and human story behind the work.


The Hidden Environmental Cost of Your AI Prompt

When we ask artificial intelligence to generate an image, video, or piece of writing — whether that’s a holiday itinerary, a blog post, or a deepfake of a celebrity eating a Greggs sausage roll — we rarely think about what it takes to make that response happen. It’s just a line of text and a click, right?

Wrong.

Behind every AI-generated answer is a massive environmental footprint, one that’s growing faster than we realise — and that footprint has a name: data centres.

The AI Industry’s Dirty Secret

Every time you interact with AI — whether it’s ChatGPT, Midjourney, DALL·E, or Google Veo — your request is processed by thousands of computers stored in vast server rooms. These servers don’t run on magic. They consume electricity, pump out heat, and require huge amounts of water to cool down. This is particularly true for large language models and video-generation AIs, which are computationally intensive.

And with the world’s obsession with AI skyrocketing, the environmental cost is scaling with it.

Water: The Invisible Cost of Intelligence

A single AI model, during its training phase, can consume millions of litres of water. When AI companies say they’re “training” a model, they’re essentially putting thousands of GPUs (graphics processing units) through months of high-intensity computation, which generates immense heat.

How is that heat managed?

Water cooling systems.

According to a 2023 report, training OpenAI’s GPT-3 in Microsoft’s data centres consumed approximately 700,000 litres of clean water. And that’s just one model: every prompt you send after that adds to the ongoing usage. Meta, Google, and Amazon also use millions of litres of water per day to keep their AI servers stable and functioning.

Electricity and Carbon Emissions

It’s not just about water. AI consumes an astonishing amount of power, often sourced from fossil-fuel-heavy grids. In 2022, data centres accounted for roughly 1–1.5% of the world’s electricity consumption — and with AI exploding in popularity since then, that figure is only climbing.

To put things in perspective:

  • Generating one AI image can use as much power as charging your phone 30 times.

  • Training a single AI model can emit up to 284 tonnes of CO₂ — that’s the equivalent of 60 petrol cars driving for a year.
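
That last equivalence holds up to a quick sanity check: the average petrol car is commonly estimated to emit about 4.7 tonnes of CO₂ per year, so

$$ \frac{284\ \text{tonnes}}{4.7\ \text{tonnes per car per year}} \approx 60\ \text{cars driving for a year.} $$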

What Even Is a Server Room?

Imagine a gigantic warehouse filled with rows upon rows of machines stacked in towers — constantly running, constantly humming, constantly consuming power. These are server farms, and they’re the backbone of the internet and AI.

The irony? Some of these server centres are now being built in deserts, including Arizona — one of the driest places on Earth — where water for cooling is already in scarce supply.

AI Sustainability: Greenwashing or Progress?

Tech giants like Google and Microsoft claim they are working towards carbon neutrality and “green AI.” Some are investing in air cooling, renewable energy, and liquid immersion cooling (where servers are dunked in non-conductive liquid to keep them cool without evaporating water).

But critics argue these efforts are slow and performative — especially as companies race to release bigger, faster, and more powerful AI tools without pause. If each model release requires exponentially more energy, is “sustainability” even possible at that scale?

What Can We Do?

We’re not saying “don’t use AI” — it’s an incredible tool and part of the future. But we need awareness and responsibility. Here’s what that can look like:

  • Use AI mindfully — not just for novelty or spammy content.

  • Support AI tools from companies making transparent, measurable green efforts.

  • Advocate for tech policy and regulations that hold AI companies accountable for environmental impacts.

  • If you’re a creator, brand or business, ask how your content is being made — and what it costs the planet.

Final Thoughts: There’s No Such Thing as a “Free” Prompt

Just because AI feels weightless doesn’t mean it’s without weight. Every time we ask a bot to dream, something in the real world works harder, burns hotter, and drinks more water to make that dream come true.

At Flaminky, we believe in technology that not only fuels creativity but does so responsibly. In a world increasingly shaped by code, it’s time we thought beyond the keyboard — and considered what our prompts are really asking of the planet.


Google Veo 3 and the Blurring Line Between Reality and Illusion

Google’s recent unveiling of Veo 3, its most advanced AI video generation tool yet, marks a massive leap in artificial intelligence — and not without consequence. With the ability to generate photorealistic, cinematic-quality video from text prompts, Veo 3 raises exciting possibilities but also serious concerns across industries and societies. From the world of cinema to the realms of misinformation, jobs, and creativity — nothing will remain untouched.

What Is Google Veo 3?

Think ChatGPT, but instead of words, it produces full videos. Veo 3 can create scenes from scratch based on natural language descriptions, add realistic camera movements, lighting, and even emotional tones to visuals. It can emulate specific film styles, recreate environments, and build entire narratives from a prompt. This isn’t just animation — this is AI-generated cinema.

The Future of Cinema and Content Creation

Let’s be honest: Veo 3 is a game-changer for filmmakers, marketers, and content creators. Agencies that once required entire crews, cameras, actors, editors, and thousands in production costs can now conjure an entire campaign with just a keyboard and an imagination.

On the flip side? Real creatives, from set designers to DOPs (directors of photography), may find their roles threatened.

This could give rise to a new kind of filmmaker — a “prompt director” — but what about the value of human-crafted stories, imperfections, and the magic of on-set collaboration? Will we crave authenticity in a world where everything can be perfectly faked?

Deepfakes, Fake News & Dead Internet Theory

Veo 3 brings the Dead Internet Theory uncomfortably closer to reality — the idea that much of the internet is no longer created or interacted with by real people, but by bots and AI.

Soon, you may not be able to tell if that video of a celebrity saying something inflammatory is real. Deepfakes, which once required high technical knowledge, are now democratised — and that’s dangerous. Combine this with political agendas, fake news, and conspiracy echo chambers, and we’re looking at a future where truth becomes optional.

Expect a flood of AI-generated media that’s indistinguishable from reality. And if people already distrust mainstream news, how will they cope when nothing can be verified?

The Scammer’s New Playground

Imagine receiving a video call or message from a loved one — or so you think — only to realise it was a scammer using Veo-like tools to deepfake their likeness. The tools that were once the preserve of high-end studios are becoming accessible to anyone. The scammer from Facebook Marketplace doesn’t need Photoshop anymore — they have Veo 3.

AI-generated misinformation could cause identity theft, reputational damage, and even geopolitical tensions. We’re not just fighting misinformation — we’re fighting hyperrealism.

Marketing Agencies and the Collapse of “Real”

From brands creating entire ad campaigns without shooting a single frame to influencers that don’t exist, Veo 3 may accelerate the AI-first marketing era. It’s cheaper, faster, and often indistinguishable from real footage. But as more brands embrace it, the human touch — that raw authenticity that builds trust — may start to erode.

What happens when every influencer is AI-generated, every advert a prompt, every model digitally sculpted?

The Creativity Question

Veo 3 brings us back to the central question: What is creativity in the age of AI?

Are we entering a post-human artistic phase, where ideas matter more than execution? Or are we devaluing the skill, effort, and emotional depth behind human-made art?

There’s no doubt AI tools like Veo 3 can assist creatives — offering new ways to ideate, prototype, and tell stories. But we must also be aware of how easy it is to let the machine do all the work — and how quickly human talent can become undervalued, or even obsolete.

Final Thoughts: A Fork in the Algorithm

Google Veo 3 is both a revolution and a warning. It offers power, convenience, and breathtaking possibilities — but also a mirror to the darkest parts of our digital culture: manipulation, job displacement, surveillance, and the erosion of truth.

As we marvel at what’s possible, we also need to ask better questions: Who controls these tools? Who verifies what’s real? Who gets left behind?

At Flaminky, we celebrate the intersection of culture, tech, and society — and right now, we’re at one of those defining crossroads. The future isn’t just coming fast… it’s being generated.


Why It Took Decades to Test Female Crash Dummies – And the Deadly Risk Women Still Face

It’s 2025. We’ve got AI companions, billionaires in space, and yet… women are only just being accurately included in car safety testing. Shocking, isn’t it?

For decades, the standard crash test dummy has been based on the “average male body” — and that’s had devastating consequences for women behind the wheel or in the passenger seat. It’s a disturbing oversight, and one that’s only recently started to be addressed.

The Gender Bias in Crash Testing

Crash test dummies have existed since the 1950s. But for the majority of that time, they’ve been designed around the male anatomy — typically based on a 76 kg, 1.77 m tall man. The problem? That doesn’t reflect half the population.

It wasn’t until 2011 that a smaller “female” dummy began to be used in U.S. tests — but even that version was simply a scaled-down male dummy, not an accurate representation of female physiology. In Europe, the situation has been much the same.

In 2022, Swedish researchers developed the world’s first crash test dummy designed to reflect the average female body, accounting for differences in:

  • Muscle mass and strength
  • Pelvic structure
  • Neck size and strength
  • Sitting posture

And the results were eye-opening.

Women Are More Likely to Die or Be Injured

Because of these design flaws, women are at a significantly higher risk of injury or death in car accidents.

According to a 2019 study by the University of Virginia:

  • Women are 73% more likely to be seriously injured in a car crash.
  • They are 17% more likely to die in the same crash scenario as a man.

These aren’t small margins — they’re life-threatening gaps in safety that have gone unaddressed for far too long.

Why Has It Taken So Long?

The short answer: systemic bias.

The auto industry, historically dominated by men, has long seen the “male” body as the default. Car designs — from seat belts and airbags to headrests and dashboards — have been tailored to male proportions. Meanwhile, female bodies were treated as outliers or variations, not a core part of the safety equation; there is still no seat belt designed as standard for pregnant passengers.

There’s also the issue of regulatory lag. Even though new female-specific crash test dummies exist, they’re still not required in many official safety tests. That means many manufacturers aren’t using them unless pressured to do so.

The Push for Change

In the UK and EU, awareness is slowly growing. The European New Car Assessment Programme (Euro NCAP) has begun revising its protocols, and researchers like Dr. Astrid Linder are pushing for sex-specific crash testing to become a global standard.

Dr. Linder’s research has been pivotal in showing that differences in how men and women move during a crash — especially in whiplash scenarios — demand better representation in crash simulations.

But change needs to be systemic, not symbolic.

What Needs to Happen Next

For true equity in car safety, we need:

  • Female crash dummies required in all crash tests — not just optional extras.
  • Updated regulations reflecting the average dimensions and biomechanics of women.
  • Inclusion of diverse body types, including pregnant women, elderly passengers, and various body sizes.
  • Transparent data on how vehicles perform for all genders — not just men.

Final Thoughts

It shouldn’t take decades to realise that safety should apply to everyone equally. Women have been literally dying from being left out of the testing process. And for all our talk of equality and progress, something as fundamental as car safety still reveals the blind spots of a male-centric world.

Having recently been in a car collision myself, I was reminded of this gap in safety design: one that still hasn’t been closed in cars around the world, and one that affects nearly half of all road users.

At Flaminky, we believe visibility matters. Whether it’s crash dummies, representation in tech, or storytelling — including everyone isn’t a luxury. It’s a basic right.

Let’s hope the auto industry finally gets the crash course it desperately needs.


AI Job Interviews: A Technological Step Forward or a Step Back for Fair Hiring?

Imagine preparing for a job interview, only to be greeted not by a friendly face, but by a robotic interface with no human behind it. No chance to charm with your personality, explain the nuance of your CV, or clarify a misunderstood answer. Just an algorithm, scanning your expressions, analysing your tone, and crunching numbers you can’t see.

Welcome to the growing world of AI job interviews — and the very real fears that come with it.

The Rise of AI in Recruitment

More companies, especially large corporations and tech firms, are turning to AI to handle the initial stages of recruitment. From parsing CVs with automated filters to conducting video interviews analysed by machine learning, AI promises to save time and money while “removing human bias”.

But here’s the problem: AI might actually be introducing more bias — just in a subtler, harder-to-challenge way.

Flawed from the Start: Data Bias

AI doesn’t think for itself — it’s only as good as the data it’s trained on. If that data reflects societal biases (spoiler: it often does), the AI will learn and repeat those same biases.

For example, if a company’s past hiring decisions favoured a particular gender, accent, or ethnicity, the AI might learn to prioritise those traits — and penalise others. It’s not just unethical; it’s illegal in many countries. Yet it’s quietly happening in background code.
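
To make that concrete, here’s a deliberately tiny sketch in Python (using scikit-learn, with invented data): a model trained on biased historical decisions, where group membership rather than experience decided who got hired, happily reproduces that bias for otherwise identical candidates:

```python
# Hypothetical hiring data: features are [years_experience, group],
# where "group" stands in for a protected trait (0 or 1).
# The historical labels were decided by group, not by skill.
from sklearn.linear_model import LogisticRegression

X = [[5, 0], [6, 0], [2, 0], [5, 1], [6, 1], [2, 1]]
y = [1, 1, 1, 0, 0, 0]  # past hires favoured group 0 across the board

model = LogisticRegression().fit(X, y)

# Two candidates identical in every respect except group membership:
print(model.predict([[5, 0], [5, 1]]))  # -> [1 0]: the bias is learned
```

No one wrote a rule saying “reject group 1”; the model inferred it from history, which is exactly what makes this kind of bias so hard to spot and challenge.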

Dehumanising the Hiring Process

Interviews are supposed to be a conversation. A chance for employers and candidates to connect, share, and assess suitability beyond just a checklist. AI, on the other hand, can’t gauge human nuance, empathy, or potential — it can only look at surface data.

This means:

  • Neurodivergent candidates may be misjudged based on non-standard eye contact or tone.
  • People from diverse cultural backgrounds may be filtered out due to accent or mannerisms.
  • Technical errors (like a poor internet connection) might wrongly signal lack of engagement or skill.

Worse still, candidates often have no one to speak to when things go wrong. No follow-up contact, no appeal process — just a rejection email, if anything at all.

Locking Out Opportunity

What happens when the “gatekeeper” to a job is an AI that doesn’t understand people? We risk creating a system where brilliant, capable individuals are excluded not because of their talent or values, but because they didn’t score highly on a robotic rubric they never got to understand.

In sectors like creative industries, teaching, or customer-facing roles — where emotional intelligence is crucial — AI interviews often fail to capture what really matters. Human connection.

The Future of Hiring: People First

We’re not anti-tech at Flaminky. In fact, we love when tech helps streamline systems and remove unnecessary barriers. But replacing humans entirely in such a sensitive, life-changing process as recruitment is not just flawed — it’s dangerous.

Instead of removing humans, companies should be using AI as a tool — not a replacement. That means:

  • Letting AI help shortlist, but not finalise decisions.
  • Allowing candidates to request a human-led interview instead.
  • Being transparent about how AI is used, and giving people the chance to appeal.

In Summary

Jobs are about more than just data. They’re about people — their growth, values, adaptability, and potential. AI interviews may tick boxes, but they miss the heart of what makes someone the right fit.

Until AI can truly understand humans, humans should be the ones doing the hiring.

After all, we’re not algorithms. We’re people. Let’s keep it that way.


The Doomscrolling Spiral: How Endlessly Scrolling Is Messing With Our Minds

It starts innocently enough. You open your phone to check a message, maybe scroll through TikTok or the news while waiting for your coffee to brew. Next thing you know, 45 minutes have passed and you’re deep into videos about climate disaster, global conflict, political chaos, or some stranger’s heartbreak — all while your coffee’s gone cold.

Welcome to the world of doomscrolling.

What Is Doomscrolling?

Doomscrolling is the act of endlessly consuming negative news or content online, especially via social media. Whether it’s updates on war, economic collapse, political scandals, celebrity break-ups or climate panic — the stream is infinite, and often feels inescapable.

It’s a fairly new term, but the behaviour is ancient: humans are wired to look for threats. In a modern, digital world, that primal instinct gets hijacked by infinite scroll feeds and clickbait headlines — feeding our anxiety while keeping us hooked.

Why Can’t We Look Away?

There’s a certain psychological trap at play. Negative information captures more of our attention than neutral or positive stories. It feels urgent, like something we need to know. Add algorithms to the mix — which prioritise content that provokes strong emotional reactions — and suddenly you’re trapped in a digital echo chamber of despair.

Apps like Twitter (now X), TikTok and Instagram are designed to hold your attention. Doomscrolling doesn’t happen because you’re weak-willed — it happens because it’s literally engineered that way.
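
As a toy illustration (the posts and scores here are invented, not any platform’s real formula), an engagement-ranked feed surfaces doom almost by construction:

```python
# Hypothetical feed items: (headline, predicted engagement rate).
# The ranker has no concept of "negative" content; it simply sorts by
# expected engagement, and alarming content tends to score highest.
posts = [
    ("Local park gets new benches", 0.02),
    ("Ten ways the economy could collapse", 0.19),
    ("Celebrity feud turns ugly overnight", 0.14),
    ("Quiet study finds things slowly improving", 0.03),
]

feed = sorted(posts, key=lambda p: p[1], reverse=True)
for headline, engagement in feed:
    print(f"{engagement:.2f}  {headline}")
```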

Mental Health Fallout

The impact isn’t just digital; it’s deeply emotional and psychological. Studies have linked excessive doomscrolling to:

  • Increased anxiety and depression
  • Disrupted sleep patterns
  • Feelings of helplessness and burnout
  • Decreased focus and productivity

It can also desensitise you — numbing your reaction to genuinely important news because you’re overloaded by a constant stream of disaster.

The Post-TikTok Era: Worse or Better?

With TikTok’s looming ban in places like the US, users are already jumping ship to alternatives like RedNote and Instagram Reels. But if these platforms operate on the same engagement-driven model, are we just jumping from one doomscrolling feed to another?

The real question isn’t what platform we’re using — it’s how we’re using them.

Reclaiming Control

Here’s the thing: information isn’t the enemy. We should stay informed. But not at the cost of our mental health or inner peace.

Here’s how you can break the doomscrolling cycle:

  • Set time limits: Use app timers to restrict your usage.
  • Curate your feed: Unfollow accounts that drain you, and follow ones that uplift or educate with nuance.
  • Seek long-form journalism: Get depth, not just hot takes.
  • Stay grounded: Go outside. Touch grass. Talk to people offline.
  • Do something: If the news overwhelms you, turn it into action — donate, volunteer, or vote.

Why It Matters for Creatives

At Flaminky, we believe creativity thrives in clarity. Doomscrolling clouds the mind and kills the spark. In a world that’s constantly screaming for your attention, protecting your mental space is a radical — and necessary — act.

So next time you find yourself 100 videos deep, just ask: is this making me feel anything, or just making me numb?

It’s not about quitting the internet — it’s about using it on your terms.

Your feed doesn’t have to be a trap. It can be a tool. Choose wisely.


RIP Skype: The Death of a Digital Pioneer

Remember Skype? The blue icon, the ringtone that signalled an incoming call from someone across the world, the grainy video chats that were — at the time — revolutionary. It was the way we connected, long before Zoom fatigue and Teams invites ruled our workdays. And now? Skype is quietly slipping into digital history, barely noticed, barely missed.

But it deserves a proper send-off — not just because of nostalgia, but because of what it meant, what it pioneered, and why it ultimately failed.

The Rise of a Tech Titan

Launched in 2003, Skype changed everything. It was one of the first platforms to make free internet calls, and soon free video calls, accessible to the masses. You could see your friend in another country in real time, for free. That was magic.

Skype wasn’t just ahead of the curve — it was the curve. It set the standard for internet communication, particularly in the early 2000s when international phone calls were still expensive and unreliable.

By the time Microsoft acquired Skype in 2011 for $8.5 billion, it was a global giant. It had become a verb. “Let’s Skype later” meant catching up, doing interviews, running remote meetings. It was embedded into our digital culture.

Where Did It Go Wrong?

Skype’s downfall isn’t about one bad move — it’s about many missed opportunities. Microsoft’s acquisition, which should have propelled Skype into a new era, instead saw it stagnate. The interface became clunky, updates were confusing, and user trust eroded with every glitchy call and awkward redesign.

Then came the pandemic.

In a twist of fate, a global moment that should have been Skype’s grand resurgence — a world suddenly needing remote communication — was instead the moment it was eclipsed. Zoom, with its smoother interface and faster adaptability, swooped in and took Skype’s crown without even blinking.

While the world turned to Zoom, Google Meet, and later even WhatsApp and FaceTime for daily communication, Skype faded into the background. By 2025, it feels almost like a relic — still technically alive, but largely ignored.

What Skype Symbolised

Skype symbolised a kind of early optimism about the internet. It was about connecting, not controlling. It wasn’t overloaded with ads, algorithms or content feeds. It was pure communication — seeing someone’s face and hearing their voice across borders, wars, and time zones.

It also represented a time when tech companies were disruptors, not monopolies. When services were innovative, not addictive. When “connecting the world” wasn’t a slogan, but a genuine achievement.

A Lesson in Legacy

Skype’s quiet death is a warning to tech giants: no matter how popular you are, complacency will kill you. Innovation doesn’t wait. Users want reliability, simplicity and a product that evolves with them.

And for users? It’s a reminder of how fast our digital lives move. How one day, an app can be indispensable — and the next, forgotten.

So, RIP Skype.

You were the OG. You walked so Zoom could run. You let us hear our mums’ voices from across continents, helped people fall in love long-distance, gave freelancers a way to work globally, and sometimes froze at the worst moment possible.

You were chaotic, charming, and ahead of your time — until time caught up.

And for that, we’ll always remember you.