The Tools Fighting AI Bots & Deepfakes
Deep Dive 003
Apr 28, 2025
We tested out the latest in deepfake and AI bot technology to see how easy it was to create hyper-convincing content.
We then compared the leading edge in prevention and detection to see how the future battle of bots vs. humans will unfold.
The authenticity problem
As AI-powered bots get better and deepfakes keep improving, it is becoming nearly impossible to distinguish AI-generated content from real human content.

Above: a deepfake video by @aideeptomcruise on TikTok.
Adversarial attacks are becoming more sophisticated and more convincing to an ever-larger audience.
Experts now warn that by the end of 2025, up to 90% of digital content online will be synthetically generated (truepic.com).
In response, a diverse collection of tools and techniques is being designed to help prove you’re human amongst the swathes of digital entities, some with significant usage and traction already.
Here’s the story of the tools tackling our bot problems.
The Orb to scan eyeballs

World ID has developed a physical (and slightly ominous-looking) Orb device that scans your iris to mint a unique ‘proof-of-human’ token, which you can use throughout the World ID network to verify that you are a real human online.
Founded in 2019 by OpenAI CEO Sam Altman, engineer Alex Blania, and economist Max Novendstern under the developer arm Tools for Humanity, World has experienced pretty remarkable growth: by early 2025, over 12 million people had verified via Orbs in 50+ countries using more than 1,500 active devices, the World App had been installed ~25.8 million times, and the network had recorded ~160 million on-chain transactions (World).
The team hopes to onboard 1 billion users within two years, which would require ~50,000 Orbs per year, an industrial-scale hardware rollout (WIRED).
If it becomes the widely accepted norm for proving identity online, it will be the ultimate ‘verified’ badge for all of your online interactions, helping to combat bots, deepfakes, and spam.
So far in 2025, World has announced partnerships with:
Gaming giant Razer, to help combat AI-driven bots in online games by launching Razer ID, verified by World ID
Dating giant Match Group, implementing its proof-of-human technology first into the Tinder app in the hope of combating turbo-charged catfishing
Card payment provider Visa and online payments provider Stripe, to enable seamless crypto-to-fiat transactions in the real world
How does the Orb work?
The Orb is a chrome-finished sphere (about the size of a bowling ball) equipped with a high-res camera and an Nvidia Jetson AI chip for low-light iris scanning.

When you look into the Orb, it generates an encrypted “IrisCode” and then permanently deletes the raw image, meaning your biometric data never leaves the device in raw form; zero-knowledge proofs are then used so you can prove you are verified without revealing that code.
Quick aside: Zero-knowledge proofs (ZKPs) are cryptographic programmatic methods that allow one entity to convince another entity that a statement is true without revealing any information about the underlying data beyond the fact that the statement is true.
Essentially, the prover can prove their knowledge of a secret or a fact without sharing the secret itself. For example, you can prove you’re over 18 without revealing your birthdate, or, in this case, prove you’re human without revealing any of your underlying biometric data.
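To make that concrete, here is a minimal sketch of a classic Schnorr-style proof of knowledge in Python. To be clear, this is not the proof system World ID actually uses (its construction is far more sophisticated); it simply illustrates the core mechanic of convincing a verifier that you hold a secret without ever transmitting it.

```python
# A minimal, illustrative Schnorr identification protocol in pure Python.
# NOT World ID's actual ZKP construction; it only demonstrates proving
# knowledge of a secret without revealing it.
import secrets

# Public parameters (toy-sized; real systems use vetted groups or curves)
P = 2**127 - 1          # a Mersenne prime defining arithmetic mod P
G = 3                   # a fixed public base element
Q = P - 1               # exponents are reduced modulo P - 1

secret_x = secrets.randbelow(Q)      # the prover's secret (never shared)
public_y = pow(G, secret_x, P)       # published commitment to the secret

# --- Prover: commit to a random nonce ---
r = secrets.randbelow(Q)
t = pow(G, r, P)

# --- Verifier: issue a random challenge ---
c = secrets.randbelow(Q)

# --- Prover: respond using the secret, without revealing it ---
s = (r + c * secret_x) % Q

# --- Verifier: check the proof using only public values ---
assert pow(G, s, P) == (t * pow(public_y, c, P)) % P
print("Verified: prover knows the secret, which was never transmitted.")
```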


Once you have stared into the Orb, you are prompted to download the World App to store your World ID and receive free WLD tokens.
The WLD token has primarily served as an incentive mechanism, granting newly Orb-verified users approximately 40 WLD distributed over the course of a year, and passport-credential holders an additional 20 WLD over the same period.
On the open market, WLD can be traded and exchanged for other crypto and for fiat currencies on the major centralized and decentralized exchanges, providing liquidity and price discovery, with a circulating supply of 1.31 billion WLD (13% of the 10 billion cap) as of April 2025.
Today, WLD is fully integrated into the World App wallet, where users can exchange tokens peer-to-peer, pay for Mini Apps directly embedded into the World app, and claim their monthly allocations across the World Chain networks.

Looking ahead, WLD is designed to evolve into a global internet currency, underpin decentralized “one-person-one-vote” governance powered by proof-of-personhood, fuel a burgeoning Mini App economy, and reward network operators and developers through grants and staking mechanisms.

Navigating regulatory waters
Since its mid-2023 launch, World’s Orb iris-scanning technology has encountered growing regulatory pushback worldwide.
In Spain, the data protection authority imposed a three-month ban on Orb operations and later ordered deletion of all stored iris scans; the suspension has been extended pending Bavarian GDPR review.
In Kenya, authorities suspended Worldcoin in August 2023 over alleged illegal data transfers and a parliamentary inquiry recommended a shutdown; only in June 2024 did police drop their probe, allowing operations to resume.
France’s privacy watchdog (CNIL) has repeatedly warned that Worldcoin’s biometric data collection “seems questionable,” conducted unannounced inspections at its Paris hub, and opened formal investigations into its compliance with GDPR.
Beyond these, authorities in Portugal, Hong Kong, and Germany have issued stop-processing orders or enforcement notices, while India and Brazil have seen Orb services paused amid ongoing inquiries into data-protection and child-safety risks.
Scanning the population’s eyeballs was always going to come with some regulatory pushback.
In response to regulatory actions and investigations across multiple jurisdictions, World has consistently maintained that its Orb technology and World App comply with applicable legal frameworks.
However, some valid questions and concerns remain for World to tackle head-on:
As we look forward to a world with ever more powerful AI systems, and with the possibility of quantum computing (story on this soon) on the (perhaps distant) horizon, could the zero-knowledge design ever be reverse-engineered to deanonymize users or trick verification?
Is this a solution where the critical mass of people needed to maximise the network effect benefits equals the entire global online population? What happens to the people who opt out of getting their iris scanned? What about certain eye conditions or disabilities that may prevent verification?
Vocal fingerprints

Atlanta-based Pindrop has positioned itself as a global leader in voice biometric authentication, employing acoustic fingerprinting and machine learning to differentiate legitimate human callers from synthetic deepfake audio.
The amount of audio needed to clone a voice has shrunk rapidly over the last few years; generally, a few seconds to a few minutes are enough to produce a clone that is indistinguishable over lossy phone calls.
How do audio fingerprints work?
Pindrop's system can capture a caller's voice sample with less than 30 seconds of audio. The system then analyzes the acoustic characteristics to convert the audio into a fixed-length numerical representation ('voice profile').
During any future calls, Pindrop's 'Phoneprinting' system extracts up to 147 distinct features from the call audio, including:
Packet loss patterns that hint at the sender's network quality
Background noise signatures
Device and microphone characteristics
These features are then fed into Pindrop's Deep Voice Engine, a suite of deep neural networks trained to distinguish legitimate callers from fraudsters (a simplified sketch of the underlying voice-profile idea follows below).
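Pindrop's feature set and models are proprietary, but the basic mechanics of a "voice profile" can be sketched: reduce a recording to a fixed-length vector and compare it to an enrolled profile. Below is a minimal, hypothetical Python example using MFCC statistics via librosa and cosine similarity; the file names are placeholders, and this is not Pindrop's actual algorithm.

```python
# A toy "voice profile" sketch, NOT Pindrop's proprietary Phoneprinting system.
# It reduces a call recording to a fixed-length vector of MFCC statistics and
# compares it to a stored profile with cosine similarity.
import librosa
import numpy as np

def voice_profile(path: str, n_mfcc: int = 20) -> np.ndarray:
    """Convert an audio file into a fixed-length feature vector."""
    audio, sr = librosa.load(path, sr=16000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    # Summarize over time so every call maps to the same-sized vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def similarity(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
    """Cosine similarity between two voice profiles (1.0 = identical)."""
    return float(np.dot(profile_a, profile_b) /
                 (np.linalg.norm(profile_a) * np.linalg.norm(profile_b)))

# Hypothetical file names, for illustration only.
enrolled = voice_profile("customer_enrollment.wav")
incoming = voice_profile("live_call.wav")
print(f"Similarity to enrolled profile: {similarity(enrolled, incoming):.2f}")
```

A real system would add the call-quality and device features described above, plus liveness and deepfake scoring, rather than relying on spectral statistics alone.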

Pindrop's systems help companies compare live calls against existing customers' voice profiles to determine whether callers really are who they say they are. They also have tools that flag the likelihood that any given call is a deepfake or contains manipulated audio.
Demand surges for the enterprise
Pindrop recently surpassed USD 100 million in annual recurring revenue, showcasing the urgent demand for enterprise protection against voice cloning fraud.
The company also submitted recommendations to the White House’s AI Action Plan, underscoring its role in shaping policy to counter deepfake voice scams.
Where is the Pindrop for consumers? I want to know if I'm really talking to my grandma or a fraudster halfway across the world.
Social (re)engineering
It can be said that the biggest attack vector has, for quite some time, been social engineering: tricking or manipulating humans into creating vulnerabilities in a system's defences.
Antiquated versions include phishing emails (don't click that dodgy link on your work laptop), but now vishing (voice audio phishing) and smishing (SMS phishing) are prevalent.
One of the most renowned companies helping to battle against social engineering vulnerabilities is Adaptive Security.
They simulate generative AI attacks, helping organizations identify vulnerabilities and train employees to detect sophisticated scams.

In its latest funding round co-led by OpenAI and a16z, Adaptive Security raised USD 43 million to scale its platform and integrate advanced detection algorithms into its simulation suite.
Clients range from top U.S. banks and hedge funds to leading tech firms and healthcare systems.
Deepfake detection
Sensity AI, formerly known as Deeptrace, offers a deepfake detection platform that analyzes video and audio streams in real time, boasting up to 98% accuracy in identifying manipulated media (socradar.io).
How do you detect a deepfake?
Sensity's system examines every layer of a video to determine whether it is a deepfake or not:
Frame-by-frame pixel-level patterns
File structure and metadata
Voice patterns and audio frequencies
It then applies deep learning models, trained on extensive datasets of GAN- and diffusion-generated examples, to detect the cues that distinguish an authentic visual from a deepfake (a skeletal version of this frame-level approach is sketched below).
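Sensity's models are proprietary, but the overall shape of frame-level detection can be outlined. The minimal Python skeleton below (using OpenCV) samples frames from a video, scores each with a pluggable per-frame classifier, and aggregates the scores into a video-level verdict. The classifier itself, the part trained on GAN and diffusion examples, is the hard bit and is deliberately left as a placeholder here.

```python
# A skeletal frame-by-frame scoring loop, NOT Sensity's actual pipeline.
# `score_frame` is any per-frame classifier returning the probability that
# a frame is synthetic; the trained model is assumed, not provided.
import cv2
import numpy as np

def score_video(path, score_frame, sample_every=10, threshold=0.5):
    """Sample frames from a video, score each, and aggregate a verdict."""
    capture = cv2.VideoCapture(path)
    scores = []
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:   # score a subset of frames for speed
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    mean_score = float(np.mean(scores)) if scores else 0.0
    return {
        "frames_scored": len(scores),
        "mean_fake_probability": mean_score,
        "verdict": "likely manipulated" if mean_score > threshold else "likely authentic",
    }

# Usage (hypothetical): plug in any trained per-frame classifier.
# result = score_video("suspect_clip.mp4", score_frame=my_cnn_classifier)
```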


Sensity states it has detected more than 35,000 malicious deepfakes over the past year, providing companies and platforms with continuous monitoring, moderation, and filtering.
Content watermarks
Truepic is a San Diego–based company that is leveraging hardware-rooted metadata to certify the authenticity of images and videos at the moment of capture.
How does it work?
By embedding cryptographically signed metadata directly into media files, Truepic enables downstream consumers—whether people, platforms, or AI systems—to verify content integrity and detect manipulation.

The Coalition for Content Provenance and Authenticity (C2PA) specification defines a standardized “manifest” format that embeds metadata—such as creation time, device claims, edit history, and digital signatures—within media files (c2pa.org).
Truepic integrates this by signing each asset upon capture (or edit) and embedding a tamper-evident manifest, enabling any party with a C2PA-aware viewer to validate the chain of custody.
But what if you simply remove the metadata? The latest C2PA updates introduce “Durable Content Credentials”, combining visible watermarks with database-referenced fingerprints so provenance can be retrieved even if the embedded manifest is stripped (U.S. Department of Defense).
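The full C2PA manifest format is involved, but the underlying idea (hash the media bytes, sign a small set of claims, and verify both later) can be sketched in a few lines. The example below is a simplified illustration using the Python cryptography package with Ed25519 keys; it is not the actual C2PA data structure, and the device and timestamp fields are made up for the demo.

```python
# A simplified illustration of signed provenance metadata, NOT the real
# C2PA manifest format. Hash the media, sign a small manifest, and let
# anyone with the public key detect tampering later.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()   # would live in secure hardware
verify_key = signing_key.public_key()        # published for verification

def make_manifest(media_bytes: bytes, device: str, captured_at: str) -> dict:
    """Build and sign a minimal provenance manifest for a piece of media."""
    claims = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "device": device,
        "captured_at": captured_at,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    return {"claims": claims, "signature": signing_key.sign(payload).hex()}

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature and that the media still matches its recorded hash."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    try:
        verify_key.verify(bytes.fromhex(manifest["signature"]), payload)
    except InvalidSignature:
        return False
    return manifest["claims"]["sha256"] == hashlib.sha256(media_bytes).hexdigest()

photo = b"...raw image bytes..."             # stand-in for a real capture
manifest = make_manifest(photo, device="example-camera",
                         captured_at="2025-04-28T12:00:00Z")
print(verify_manifest(photo, manifest))              # True
print(verify_manifest(photo + b"edited", manifest))  # False: content altered
```

Durable Content Credentials extend this by pairing the signed manifest with a watermark and a fingerprint lookup, so the claims can still be recovered if the embedded metadata is stripped.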
First 'transparent' deepfake
In partnership with synthetic media studio Revel.ai and AI expert Nina Schick, Truepic released the world’s first digitally transparent “deepfake” video, demonstrating that even AI-generated content can carry an unforgeable provenance stamp, viewable by clicking the “cr” (Content Credentials) icon to inspect its cryptographic manifest.

Becoming an industry standard?
Major technology vendors and standards bodies are embracing C2PA: Qualcomm has announced chipset support for capturing Content Credentials directly on mobile devices (Forbes).
It will be interesting to see whether this becomes an industry standard, and whether publishing and content-creation platforms also embrace the approach.
Conclusions
There is a perpetual cat-and-mouse game at play here, and the stakes keep rising as the fidelity of fraud gets better and better.
It can often feel that defensive measures are only able to play 'whack-a-mole', reacting to bots and deepfakes after the fact. However, as we've seen, many companies aim to provide proactive measures that help determine authenticity from the start.
Will we see a critical number of people verify themselves using the iris-scanning Orb and Voice Profiles? Will technology providers enforce a global standard to help authenticate content? As always, the future is here, it's just not evenly distributed.
The battle of the bots and deepfakes will continue, and as corporate and state-of-the-art defenses improve, I worry that the least informed and least technologically sophisticated of society will be the only viable victims remaining in the future.
— Alex BB