40s Baby
£25.00
Step back in time with this striking pop art portrait of a woman, blending 1940s-inspired style with a modern digital twist. This artwork features a traditional pencil drawing digitally enhanced, showcasing brown hair, bold red blush, and imperfect red lipstick for a rebellious edge.
The old newspaper ink-style background gives it a vintage, textured feel, while a vibrant yellow paint swoop across the eyes adds a dramatic pop of colour and mystery. Perfect for lovers of retro aesthetics, pop art collectors, or anyone seeking a statement piece that fuses nostalgia with contemporary artistry.
Style
Traditional pencil drawing + digital pop art
Subject
1940s-inspired woman with red lipstick & blush
Background
Old newspaper ink-style texture
Series
Pop | Nostalgia - Bold, retro, and expressive
DEFENDER
£25.00
Celebrate adventure and classic design with this hand-drawn digital illustration of a Grey Land Rover Discovery, set against a subtle light blue-grey background. The clean, modern backdrop enhances the rugged yet refined lines of the vehicle, making it a perfect piece for car enthusiasts, off-road lovers, or anyone who appreciates iconic automotive style.
Part of our Automotive / Vehicle Series, this artwork adds a touch of personality to home décor, offices, or garages — and makes an excellent gift for Land Rover fans.
Style
Hand-drawn digital illustration
Subject
Grey Land Rover Discovery
Background
Light blue-grey
Series
Automotive / Vehicle Series
DALMATIAN
£25.00
Bring a splash of colour and personality to your space with this hand-drawn digital portrait of a Dalmatian dog, captured with its tongue out in a playful and charming expression. The bold, vibrant purple background enhances the unique black-and-white coat of the Dalmatian, making it a striking piece for any dog lover or art enthusiast.
Part of our exclusive Dog / Pet Series, this artwork is perfect for home décor, office spaces, or as a thoughtful gift for pet owners who adore this iconic breed.
Style
Hand-drawn digital art
Subject
Dalmatian dog with tongue out
Background
Bold and vibrant purple
Series
Dog / Pet Series
Why It Took Decades to Test Female Crash Dummies – And the Deadly Risk Women Still Face

It’s 2025. We’ve got AI companions, billionaires in space, and yet… women are only just being accurately included in car safety testing. Shocking, isn’t it?
For decades, the standard crash test dummy has been based on the “average male body” — and that’s had devastating consequences for women behind the wheel or in the passenger seat. It’s a disturbing oversight, and one that’s only recently started to be addressed.
The Gender Bias in Crash Testing
Crash test dummies have existed since the 1950s. But for the majority of that time, they’ve been designed around the male anatomy — typically based on a 76kg, 1.77m tall man. The problem? That doesn’t reflect half the population.
It wasn’t until 2011 that a smaller “female” dummy began being used in U.S. tests — but even that version was simply a scaled-down male dummy, not accurately representing female physiology. In Europe, the situation has been much the same.
In 2022, Swedish researchers developed the world’s first crash test dummy designed to reflect the average female body, accounting for differences in:
- Muscle mass and strength
- Pelvic structure
- Neck size and strength
- Sitting posture
And the results were eye-opening.
Women Are More Likely to Die or Be Injured
Because of these design flaws, women are at a significantly higher risk of injury or death in car accidents.
According to a 2019 study by the University of Virginia:
- Women are 73% more likely to be seriously injured in a car crash.
- They are 17% more likely to die in the same crash scenario as a man.
These aren’t small margins — they’re life-threatening gaps in safety that have gone unaddressed for far too long.
Why Has It Taken So Long?
The short answer: systemic bias.
The auto industry, historically dominated by men, has long seen the “male” body as the default. Car designs — from seat belts and airbags to headrests and dashboards — have been tailored to male proportions. Meanwhile, female bodies were treated as outliers or variations, not a core part of the safety equation. We still don’t even have seatbelts designed for pregnancy.
There’s also the issue of regulatory lag. Even though new female-specific crash test dummies exist, they’re still not required in many official safety tests. That means many manufacturers aren’t using them unless pressured to do so.
The Push for Change
In the UK and EU, awareness is slowly growing. The European New Car Assessment Programme (Euro NCAP) has begun revising its protocols, and researchers like Dr. Astrid Linder, whose work has been covered by the BBC, are pushing for sex-specific crash testing to become a global standard.
Dr. Linder’s research has been pivotal in showing that differences in how men and women move during a crash — especially in whiplash scenarios — demand better representation in crash simulations.
But change needs to be systemic, not symbolic.
What Needs to Happen Next
For true equity in car safety, we need:
- Female crash dummies required in all crash tests — not just optional extras.
- Updated regulations reflecting the average dimensions and biomechanics of women.
- Inclusion of diverse body types, including pregnant women, elderly passengers, and various body sizes.
- Transparent data on how vehicles perform for all genders — not just men.
Final Thoughts
It shouldn’t take decades to realise that safety should apply to everyone equally. Women have been literally dying from being left out of the testing process. And for all our talk of equality and progress, something as fundamental as car safety still reveals the blind spots of a male-centric world.
Having recently been in a car collision myself, I was reminded of this safety gap — one that still hasn’t been closed in cars around the world, and that still affects nearly half of all road users.
At Flaminky, we believe visibility matters. Whether it’s crash dummies, representation in tech, or storytelling — including everyone isn’t a luxury. It’s a basic right.
Let’s hope the auto industry finally gets the crash course it desperately needs.
Life Can Change in a Second
On May 10th, I was in a four-car collision that I can only describe as the most terrifying and stressful experience of my life. I haven’t fully processed it yet — and maybe I won’t for a while — but I wanted to write this not just to share what happened, but to highlight what often gets overlooked: the aftermath.
I was driving on the slip road westbound by the tunnels in Cardiff Bay. The car in front of me suddenly emergency stopped, because the car in front of them had stopped on the dual carriageway to let someone in from the slip road… even though they had the right of way.
I slammed on my brakes and just lightly bumped the taxi in front. It could’ve ended there. But two cars came speeding out of the tunnel behind me — and all they saw was stationary traffic. They hit me at 70mph, twice — once into the right rear of my car, and again into the driver’s side door.
Their airbags deployed.
Mine didn’t.
And somehow, I walked away from that crash.
But not unscathed.
Since that day, I’ve been dealing with daily migraines, whiplash, nightmares, PTSD flashbacks, and a fear of slip roads and cars driving close behind me. I’ve lost my confidence on the road. I feel anxious in places I never did before.
Please, don’t speed in tunnels.
The signs are there for a reason.
Please don’t stop unnecessarily on dual carriageways. If it’s your right of way — take it. Stopping without cause nearly cost lives that day.
I hit my head on the seatbelt panel, and I distinctly remember the feeling — my brain moving inside my skull. I’ve since spoken to medical professionals who confirmed what I felt: “If it felt like it moved, then it did.” Your brain isn’t fixed in place — it floats in fluid. And it’s fragile. So, so fragile.
I’ve come away from this experience with a new perspective on how delicate life is, and how quickly everything can change. I appreciate life more. I appreciate the people who showed up and checked in — and I now know who truly cares.
But it hasn’t just been the crash that’s been hard — it’s everything after:
- Hearing that the car I saved up to buy myself has been written off.
- Police statements and crash investigations.
- A WalesOnline article about the crash.
- Insurers, car hire, and trying to find a new car in less than two weeks.
- Medical appointments, hospital waiting rooms, therapists, solicitors.
- Missing work.
- The financial pressure of replacing a car when the crash wasn’t even my fault.
I was just in the wrong place, at the wrong time.
And the emotional toll is real.
I’ve now been told I need to slowly phase back into physical activity. Head trauma isn’t something you bounce back from overnight. It’s been two weeks without exercise — which, for someone like me who finds movement essential for mental health, feels unbearable.
But this week I’m easing in with yoga, and later on I’ll reintroduce running — gently, mindfully. I do have the Paris 10k coming up in three weeks with my boyfriend, and while I’m excited, I’m cautious too. He’s been incredibly supportive, reminding me:
“Health comes before medals — always.”
I also want to add that I am incredibly thankful and grateful to the people who have supported me during this time — especially my dad, who has helped me throughout this process and the admin of the aftermath, all of which is new and alien to me. His support has meant the world to me, and I honestly don’t know where I’d be without him.
So I’m taking it one day at a time. Healing doesn’t follow a straight line. If you’ve ever been through a traumatic accident, please know that it’s not just okay, but necessary, to ask for help. To feel it all. To go slow.
And if you’re reading this — thank you. Whether it’s to be informed, to feel less alone, or to remember to slow down behind the wheel — I hope it helps. ❤️
Stay safe. Slow down. Life is precious.
– Lorr x
AI Job Interviews: A Technological Step Forward or a Step Back for Fair Hiring?

Imagine preparing for a job interview, only to be greeted not by a friendly face, but by a robotic interface with no human behind it. No chance to charm with your personality, explain the nuance of your CV, or clarify a misunderstood answer. Just an algorithm, scanning your expressions, analysing your tone, and crunching numbers you can’t see.
Welcome to the growing world of AI job interviews — and the very real fears that come with it.
The Rise of AI in Recruitment
More companies, especially large corporations and tech firms, are turning to AI to handle the initial stages of recruitment. From parsing CVs with automated filters to conducting video interviews analysed by machine learning, AI promises to save time and money while “removing human bias”.
But here’s the problem: AI might actually be introducing more bias — just in a subtler, harder-to-challenge way.
Flawed from the Start: Data Bias
AI doesn’t think for itself — it’s only as good as the data it’s trained on. If that data reflects societal biases (spoiler: it often does), the AI will learn and repeat those same biases.
For example, if a company’s past hiring decisions favoured a particular gender, accent, or ethnicity, the AI might learn to prioritise those traits — and penalise others. It’s not just unethical; it’s illegal in many countries. Yet it’s quietly happening in background code.
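To make that concrete, here is a deliberately tiny, hypothetical sketch in Python. The “historical data” and the threshold “model” are invented purely for illustration — no real screening system works exactly like this — but the mechanism is the same: a system that learns from biased decisions reproduces the bias, without a single line of code ever mentioning it.

```python
# Toy illustration (hypothetical data): an "AI" that learns only from past
# hiring decisions faithfully reproduces any bias baked into those decisions.

# Historical records: (score, group, hired?) — group "A" was favoured.
history = [
    (70, "A", True), (60, "A", True), (55, "A", True),
    (70, "B", False), (80, "B", True), (60, "B", False),
]

def learn_threshold(records, group):
    """Learn the lowest score that ever got someone in `group` hired."""
    hired_scores = [s for s, g, h in records if g == group and h]
    return min(hired_scores) if hired_scores else float("inf")

# The model silently learns a *different bar* for each group.
bar = {g: learn_threshold(history, g) for g in ("A", "B")}
print(bar)  # {'A': 55, 'B': 80} — group A needs a far lower score

def decide(score, group):
    """The 'objective' AI screener — biased purely through its training data."""
    return score >= bar[group]

# Two identical candidates, different groups, different outcomes:
print(decide(65, "A"))  # True
print(decide(65, "B"))  # False
```

Notice that `decide` itself looks perfectly neutral — the discrimination lives entirely in the learned thresholds, which is exactly why this kind of bias is so hard to spot and to challenge.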
Dehumanising the Hiring Process
Interviews are supposed to be a conversation. A chance for employers and candidates to connect, share, and assess suitability beyond just a checklist. AI, on the other hand, can’t gauge human nuance, empathy, or potential — it can only look at surface data.
This means:
- Neurodivergent candidates may be misjudged based on non-standard eye contact or tone.
- People from diverse cultural backgrounds may be filtered out due to accent or mannerisms.
- Technical errors (like a poor internet connection) might wrongly signal lack of engagement or skill.
Worse still, candidates often have no one to speak to when things go wrong. No follow-up contact, no appeal process — just a rejection email, if anything at all.
Locking Out Opportunity
What happens when the “gatekeeper” to a job is an AI that doesn’t understand people? We risk creating a system where brilliant, capable individuals are excluded not because of their talent or values, but because they didn’t score highly on a robotic rubric they never got to understand.
In sectors like creative industries, teaching, or customer-facing roles — where emotional intelligence is crucial — AI interviews often fail to capture what really matters. Human connection.
The Future of Hiring: People First
We’re not anti-tech at Flaminky. In fact, we love when tech helps streamline systems and remove unnecessary barriers. But replacing humans entirely in such a sensitive, life-changing process as recruitment is not just flawed — it’s dangerous.
Instead of removing humans, companies should be using AI as a tool — not a replacement. That means:
- Letting AI help shortlist, but not finalise decisions.
- Allowing candidates to request a human-led interview instead.
- Being transparent about how AI is used, and giving people the chance to appeal.
In Summary
Jobs are about more than just data. They’re about people — their growth, values, adaptability, and potential. AI interviews may tick boxes, but they miss the heart of what makes someone the right fit.
Until AI can truly understand humans, humans should be the ones doing the hiring.
After all, we’re not algorithms. We’re people. Let’s keep it that way.
The Doomscrolling Spiral: How Endlessly Scrolling Is Messing With Our Minds

It starts innocently enough. You open your phone to check a message, maybe scroll through TikTok or the news while waiting for your coffee to brew. Next thing you know, 45 minutes have passed and you’re deep into videos about climate disaster, global conflict, political chaos, or some stranger’s heartbreak — all while your coffee’s gone cold.
Welcome to the world of doomscrolling.
What Is Doomscrolling?
Doomscrolling is the act of endlessly consuming negative news or content online, especially via social media. Whether it’s updates on war, economic collapse, political scandals, celebrity break-ups or climate panic — the stream is infinite, and often feels inescapable.
It’s a fairly new term, but the behaviour is ancient: humans are wired to look for threats. In a modern, digital world, that primal instinct gets hijacked by infinite scroll feeds and clickbait headlines — feeding our anxiety while keeping us hooked.
Why Can’t We Look Away?
There’s a certain psychological trap at play. Negative information captures more of our attention than neutral or positive stories. It feels urgent, like something we need to know. Add algorithms to the mix — which prioritise content that provokes strong emotional reactions — and suddenly you’re trapped in a digital echo chamber of despair.
Apps like Twitter (now X), TikTok and Instagram are designed to hold your attention. Doomscrolling doesn’t happen because you’re weak-willed — it happens because it’s literally engineered that way.
Mental Health Fallout
The impact isn’t just digital; it’s deeply emotional and psychological. Studies have linked excessive doomscrolling to:
- Increased anxiety and depression
- Disrupted sleep patterns
- Feelings of helplessness and burnout
- Decreased focus and productivity
It can also desensitise you — numbing your reaction to genuinely important news because you’re overloaded by a constant stream of disaster.
The Post-TikTok Era: Worse or Better?
With TikTok’s looming ban in places like the US, users are already jumping ship to alternatives like RedNote and Reels. But if these platforms operate on the same engagement-driven model, are we just jumping from one doomscrolling feed to another?
The real question isn’t what platform we’re using — it’s how we’re using them.
Reclaiming Control
Here’s the thing: information isn’t the enemy. We should stay informed. But not at the cost of our mental health or inner peace.
Here’s how you can break the doomscrolling cycle:
- Set time limits: Use app timers to restrict your usage.
- Curate your feed: Unfollow accounts that drain you, and follow ones that uplift or educate with nuance.
- Seek long-form journalism: Get depth, not just hot takes.
- Stay grounded: Go outside. Touch grass. Talk to people offline.
- Do something: If the news overwhelms you, turn it into action — donate, volunteer, or vote.
Why It Matters for Creatives
At Flaminky, we believe creativity thrives in clarity. Doomscrolling clouds the mind and kills the spark. In a world that’s constantly screaming for your attention, protecting your mental space is a radical — and necessary — act.
So next time you find yourself 100 videos deep, just ask: is this making me feel anything, or just making me numb?
It’s not about quitting the internet — it’s about using it on your terms.
Your feed doesn’t have to be a trap. It can be a tool. Choose wisely.
RIP Skype: The Death of a Digital Pioneer

Remember Skype? The blue icon, the ringtone that signalled an incoming call from someone across the world, the grainy video chats that were — at the time — revolutionary. It was the way we connected, long before Zoom fatigue and Teams invites ruled our workdays. And now? Skype is quietly slipping into digital history, barely noticed, barely missed.
But it deserves a proper send-off — not just because of nostalgia, but because of what it meant, what it pioneered, and why it ultimately failed.
The Rise of a Tech Titan
Launched in 2003, Skype changed everything. It was the first platform that made free video calls accessible to the masses. You could see your friend in another country in real time, for free. That was magic.
Skype wasn’t just ahead of the curve — it was the curve. It set the standard for internet communication, particularly in the early 2000s when international phone calls were still expensive and unreliable.
By the time Microsoft acquired Skype in 2011 for $8.5 billion, it was a global giant. It had become a verb. “Let’s Skype later” meant catching up, doing interviews, running remote meetings. It was embedded into our digital culture.
Where Did It Go Wrong?
Skype’s downfall isn’t about one bad move — it’s about many missed opportunities. Microsoft’s acquisition, which should have propelled Skype into a new era, instead saw it stagnate. The interface became clunky, updates were confusing, and user trust eroded with every glitchy call and awkward redesign.
Then came the pandemic.
In a twist of fate, a global moment that should have been Skype’s grand resurgence — a world suddenly needing remote communication — was instead the moment it was eclipsed. Zoom, with its smoother interface and faster adaptability, swooped in and took Skype’s crown without even blinking.
While the world turned to Zoom, Google Meet, and later even WhatsApp and FaceTime for daily communication, Skype faded into the background. By 2025, it feels almost like a relic — still technically alive, but largely ignored.
What Skype Symbolised
Skype symbolised a kind of early optimism about the internet. It was about connecting, not controlling. It wasn’t overloaded with ads, algorithms or content feeds. It was pure communication — seeing someone’s face and hearing their voice across borders, wars, and time zones.
It also represented a time when tech companies were disruptors, not monopolies. When services were innovative, not addictive. When “connecting the world” wasn’t a slogan, but a genuine achievement.
A Lesson in Legacy
Skype’s quiet death is a warning to tech giants: no matter how popular you are, complacency will kill you. Innovation doesn’t wait. Users want reliability, simplicity and a product that evolves with them.
And for users? It’s a reminder of how fast our digital lives move. How one day, an app can be indispensable — and the next, forgotten.
So, RIP Skype.
You were the OG. You walked so Zoom could run. You let us hear our mums’ voices from across continents, helped people fall in love long-distance, gave freelancers a way to work globally, and sometimes froze at the worst moment possible.
You were chaotic, charming, and ahead of your time — until time caught up.
And for that, we’ll always remember you.
Duolingo’s AI-First Shift: Replacing People With Bots and the Human Cost of Progress

When Duolingo announced it was going “AI first,” the tech world applauded. But behind the fanfare of efficiency, scale, and innovation lies a more uncomfortable truth — one that’s becoming all too familiar. People are losing their jobs to AI. And it’s not just any people. It’s the educators, the writers, the curriculum designers — the very heart of what once made Duolingo feel human.
In early 2024, Duolingo quietly laid off a significant portion of its contract workforce, many of whom were language and learning experts. In their place? AI. Specifically, OpenAI’s GPT models, retooled and rebranded as chatbots and content generators, capable of producing lesson plans, quizzes, and dialogue scripts with lightning speed. The company celebrated the shift as a way to scale globally and improve personalisation. But what happens when “personalisation” comes at the cost of actual people?
The Ironic Human Cost of Language Learning
Duolingo was built on the promise of making language education accessible to everyone. Its quirky owl mascot, streak reminders, and gamified lessons made it feel less like a classroom and more like a conversation. But now, that conversation is increasingly one-sided.
Replacing expert linguists with AI might make business sense, but it removes the very soul of language learning. Language is cultural. It’s full of nuance, humour, awkward pauses, and real-world context. No AI can replicate the feeling of a human explaining why a phrase matters, or how it changes in different regions, or when it’s appropriate to use.
The irony? Duolingo’s users want to learn language to connect with others. And now, they’re doing it through systems that remove the people from the process.
AI Anxiety and Job Insecurity
Duolingo’s move is just one example of a growing fear across creative and educational sectors: that AI isn’t just a tool, but a replacement. The educators let go weren’t underperforming — they were simply no longer needed, because machines could do the job faster and cheaper.
This has sparked an ethical conversation: should tech companies use AI to support human workers or replace them entirely? And what message does it send when one of the most influential edtech companies in the world chooses the latter?
For many, it’s a chilling sign of what’s to come. If even education — a field deeply rooted in empathy, connection and understanding — is being automated, what’s safe?
Users Still Want People
Despite the shiny new AI features, not all users are on board. Many learners find the chatbot interactions stiff, repetitive, or emotionally hollow. Some have shared on forums that they miss the personal touches — the cultural notes, the humour, the sense that someone real was behind the lesson design.
There’s also growing concern about the way AI learns from user data. With less human oversight, who decides what’s accurate, respectful, or culturally sensitive? When humans are removed from the loop, the risk of bias or misinformation increases.
What’s Next?
Duolingo may be leading the charge, but it’s not alone. Across the tech world, we’re seeing similar stories play out: human jobs vanishing in the name of progress. The question isn’t whether AI will be part of our future — it already is. The question is: what kind of future are we building? One where humans work with AI? Or one where they’re replaced by it?
For all its clever gamification, Duolingo might have underestimated one thing: people don’t just want to learn language. They want to feel seen, heard, and understood. And that’s something no AI — no matter how advanced — can truly replicate.
Perhaps it’s time to remember: the most powerful learning tool of all is still a human being.
Dead Internet Theory: Are We Talking to Real People Anymore?

In recent years, a once-fringe idea known as the Dead Internet Theory has gained surprising traction. It speculates that much of the internet as we know it today is no longer driven by human interaction, but by bots, AI-generated content, and algorithms designed to simulate engagement. Now, with platforms like Instagram (under Meta) rolling out AI-powered chatbot profiles that users can interact with in their DMs, this eerie theory feels less like sci-fi paranoia—and more like a sign of things to come.
Instagram’s new AI profiles are designed to behave like real users. You can talk to them, joke with them, ask them questions. Some even mimic celebrity personas or influencers. To many, they seem harmless, even fun. But when AI becomes indistinguishable from real people in digital spaces that were once rooted in human connection, we have to ask: what does this mean for the future of how we communicate?
There’s already a creeping sense of unreality across social media. Between bots inflating likes, deepfake videos, algorithm-driven content and now AI personas pretending to be your virtual mate, it’s becoming harder to tell what’s real and what’s manufactured. Platforms like X (formerly Twitter) are flooded with AI-generated content. Facebook’s feed is often filled with recycled posts or engagement bait. Instagram’s polished reels are increasingly edited, filtered, or AI-assisted. In this world of synthetic interaction, how do we find authentic connection?
Meta’s AI chatbot profiles take the uncanny valley one step further. Instead of just showing us content, they now talk to us—imitating personalities, offering companionship, mimicking emotional intelligence. While this might serve as novelty or entertainment, it risks undermining our capacity to communicate with and relate to actual people.
There’s also a darker consequence: AI chatbots don’t just fill space—they shape conversations. They can be programmed to nudge political opinions, suggest products, or reinforce brand loyalty under the guise of friendly conversation. In other words, they’re marketing tools disguised as people. The more users engage with these AI profiles, the more Meta learns—about us, our preferences, our vulnerabilities.
And here lies the connection to the Dead Internet Theory. If more and more online interactions are with algorithms and artificially-generated responses, the internet loses its original identity as a democratic space for human expression. It becomes a carefully engineered simulation, a network of walled gardens run by corporations, designed to monetise attention and manipulate behaviour.
This isn’t to say AI has no place in our digital world. Used ethically, it can enhance creativity, accessibility and even mental health services. But when AI replaces genuine interaction, it begins to erode the fabric of what made the internet revolutionary in the first place—human connection.
So next time you’re chatting in your Instagram DMs, you might want to ask: Am I really talking to someone… or something?
Because in the dead internet age, the line between user and illusion is growing fainter by the day.

