In May 2025 alone, DataDome detected 976 million requests from OpenAI-identified crawlers, and earlier this year saw request volume surge by 48% in just 48 hours following the launch of OpenAI’s Operator agent. Far from being an anomaly, these are clear signs of the ‘new normal’ of web traffic for businesses today.
While bots and crawlers have been part of the internet for years, AI-powered autonomous agents are a relatively new development. These agents – ranging from LLM crawlers to more sophisticated programs performing online tasks autonomously – are more persistent and far harder to classify than their simple bot ancestors. This introduces new challenges for fraud and security teams.
As AI agents make up a growing share of web traffic, organizations’ security teams must take a different approach. Because faking identity is far easier than disguising intent, they must not only identify whether a user is human or bot, but also ask why the user is interacting with their platform. This is where intent-based cybersecurity strategies come in.
Co-founder and Chief Strategy Officer at DataDome.
The tip of the bot iceberg
While our research saw a huge surge in OpenAI crawler activity, this is just one data point in a much wider trend. Across our network, 36.7% of traffic today comes from non-browser sources, like APIs, SDKs, mobile apps, and a growing population of autonomous agents.
These AI-driven agents scrape, synthesize, and simulate activity in ways that bypass traditional security defenses. For instance, many ignore the industry-standard robots.txt protocol, tripping up businesses that rely on this check to manage crawler access. Other agents mimic real user behavior to slip under the radar – not necessarily because they have malicious intent, but often simply to avoid access restrictions.
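The robots.txt check that well-behaved crawlers perform (and that many agents simply skip) is easy to express. A minimal sketch using Python's standard `urllib.robotparser`; the policy text, bot name, and URLs are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt policy directly from text. Fetching a live file
# also works, via rfp.set_url("https://example.com/robots.txt"); rfp.read().
rfp = RobotFileParser()
rfp.parse("""User-agent: ExampleLLMBot
Disallow: /pricing/
User-agent: *
Allow: /""".splitlines())

# A compliant crawler asks before fetching; an ungoverned agent never calls this.
print(rfp.can_fetch("ExampleLLMBot", "https://shop.example.com/pricing/flights"))  # False
print(rfp.can_fetch("ExampleLLMBot", "https://shop.example.com/products"))         # True
```

Because the check runs entirely on the client side, it governs only crawlers that choose to honor it – which is exactly why it fails as a defense against ungoverned agents.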
Herein lies the challenge for security teams: not all AI agents are malicious, but many are ungoverned, and old-school defense systems have no way of distinguishing between the two. This distinction is critical, not just in terms of blocking harmful activity like scraping or account abuse, but also facilitating beneficial use cases like LLM-powered search, content summarization, and API-driven integrations.
Moving past a binary approach
Traditional systems rely on binary logic: allow or block. These methods depend on predefined rules, IP reputation lists, and static thresholds for rate limiting. While these approaches might work for rudimentary spam bots or simplistic crawlers, they aren’t effective against intelligent, dynamic agents that adapt in real time.
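That binary model reduces to a few lines of code, which is precisely its weakness. A minimal sketch of the allow-or-block logic described above – the blocklist entry, threshold, and IPs are all illustrative:

```python
from collections import defaultdict

BLOCKLIST = {"203.0.113.7"}   # static IP reputation list (illustrative)
RATE_LIMIT = 100              # max requests per window (illustrative threshold)
counts = defaultdict(int)     # requests seen per IP in the current window

def binary_decision(ip: str) -> str:
    """Allow or block: no middle ground, no notion of intent."""
    if ip in BLOCKLIST:
        return "block"
    counts[ip] += 1
    if counts[ip] > RATE_LIMIT:
        return "block"
    return "allow"

# An adaptive agent that rotates IPs and paces itself under the threshold
# is never caught, while a burst of genuine traffic from one shared NAT
# gateway gets wrongly blocked.
```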
If security teams block everything, they risk shutting out valuable AI traffic… but if they let everything in, they open the door to fraud and data leakage. The smartest approach is one that’s informed by intent analysis.
Instead of focusing on what the traffic is, security teams need to start focusing on why users are visiting their platforms.
An intent-based system constantly evaluates behavior and context to determine whether to allow, challenge, or block a request. For instance, if a user is accessing a retailer’s website during the launch of a limited edition product drop and only targeting the most high-value items repeatedly – rather than browsing the website organically as a genuine user would – this is a telltale sign of a scalper bot, and the behavior can be flagged as suspicious.
Or if an AI agent floods an airline’s website with thousands of price checks to scrape fare data, this may look like normal browsing, but it can slow down the site and distort pricing for genuine customers. An intent-based system would flag the unusual scale behind this traffic and block access before any damage is done.
By drawing on behavioral signals, machine intelligence, and real-time telemetry, intent-based defenses can differentiate between a legitimate AI agent and a malicious one.
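Putting the two examples together, an allow/challenge/block decision driven by behavioral signals might be sketched as follows. The signal names, weights, and thresholds are invented for illustration; a production system weighs far richer telemetry in real time:

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    # Illustrative behavioral signals only.
    requests_per_minute: float      # e.g. thousands of fare checks
    pages_browsed: int              # organic browsing touches many pages
    high_value_target_ratio: float  # share of hits on scarce, high-value items
    declared_agent: bool            # agent identifies itself transparently

def intent_decision(ctx: RequestContext) -> str:
    """Score intent from behavior and context, then allow, challenge, or block."""
    score = 0
    if ctx.requests_per_minute > 60:       # scraping-scale volume
        score += 2
    if ctx.high_value_target_ratio > 0.8:  # scalper-like targeting
        score += 2
    if ctx.pages_browsed < 3:              # no organic browsing pattern
        score += 1
    if ctx.declared_agent:                 # transparency earns trust
        score -= 1
    if score >= 4:
        return "block"
    if score >= 2:
        return "challenge"
    return "allow"

# A scalper bot hammering only limited-edition items:
print(intent_decision(RequestContext(120, 1, 0.95, False)))  # block
# A declared, well-paced LLM crawler browsing broadly:
print(intent_decision(RequestContext(10, 25, 0.1, True)))    # allow
```

The key design point is the middle verdict: a “challenge” (such as a proof-of-work or verification step) lets a borderline request prove itself, instead of forcing the binary allow-or-block choice.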
Writing a new playbook
First, security teams need to rethink their foundations. The old processes for monitoring traffic no longer apply; teams need to re-audit their environments to understand where non-browser traffic is coming from, how it typically behaves, and what intent it serves.
Next, security teams need to move beyond static defense strategies, like rate limiting or blocklists, instead opting for an intent-based approach that can assess behavior in real time and make dynamic, intelligent decisions.
A clear access policy is also key. This means product, security, and legal teams must sit down and agree on which AI agents are welcome on their digital platforms, and under what conditions. Once these rules are defined, they should be enforced consistently across every platform.
The future of cybersecurity isn’t about stopping every bot – or trusting every human. It’s about understanding the ‘why’ behind every request.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro