21 May 2025 · Security Tips
The Patrowl Samsung Galaxy S6 Edge benchmark

Anything related to testing the security of a web application — whether it’s vulnerability scans, fuzzing, controlled brute force, or automated penetration tests — is often seen as risky. And that’s understandable: no one wants their application to crash because of an improperly managed test.
At Patrowl, we take this concern very seriously. That’s why we’ve established a strict internal benchmark to ensure our testing always remains safe. Every new test is validated according to a simple rule: it must never generate more traffic than an old smartphone, like a Samsung Galaxy S6 Edge.
Why use this model as an example?
Because it represents a very basic level of power and traffic compared to today’s standards. It featured an Exynos 7420 processor (8 cores, up to 2.1 GHz) and 3 GB of RAM. At the time, it was high-end. Today, it’s far surpassed by any cheap smartphone.
In other words, if your application crashes under a load equivalent to what an S6 Edge generates, it’s in constant danger... A child, a script kiddie, or a poorly designed weather app could just as easily bring it down.
To simulate this behavior, we simply created a h@x0r Python script that sends asynchronous web requests to a target. We run this script from a basic Termux app (no root required on the phone) on our Samsung device (script included in the appendix).
Security testing: the steps in our methodology
For our tests, we set up a lab with some of the worst examples of website hosting environments. To simulate unscrupulous hosting providers, we started by creating a very simple AWS machine template with the following specs:
The latest version of MariaDB, with the database listening on localhost
An Apache2 service listening on port 80 (no HTTPS, to keep things simple)
A WordPress instance with a few basic plugins:
BackWPup
Contact Form 7
Google for WooCommerce
WooCommerce
WPCode Lite
⚠️ These plugins were not chosen randomly. Some are known to trigger false positives in many scanners, while others can lead to serious security issues if misconfigured or improperly used.
However, it wouldn’t be fair to make a public comparison between what Patrowl can detect versus other tools on this sample. It’s far too easy to manipulate tools to produce results that suit our narrative, resulting in biased analyses that could mislead our users!
That said, we do use this template internally for training purposes to clearly demonstrate the effectiveness of our product.
Here is our ready-to-use website:

It is important to note that we did not change any configurations of the installed services. The goal was just to simulate some of the worst-case hosting scenarios and to highlight the inherent risks of exposing such services directly on the Internet.
We then duplicated this setup across the following typical AWS instance types (a provisioning sketch in Python follows the list):
🎤 Micro → t2.micro:
1 vCPU, 1 GB RAM, 20 GB SSD Storage
👕 Small → t2.small:
1 vCPU, 2 GB RAM, 20 GB SSD Storage
🥈 Medium → t2.medium:
2 vCPU, 4 GB RAM, 20 GB SSD Storage
💨 Large → t2.large:
2 vCPU, 8 GB RAM, 20 GB SSD Storage
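
As an illustration, here is a minimal provisioning sketch using boto3. The AMI ID is a hypothetical placeholder standing in for an image built from the template above; this is not necessarily how we provisioned our own lab.

import boto3

# Hypothetical placeholder for an image built from the WordPress template
AMI_ID = "ami-0123456789abcdef0"
INSTANCE_TYPES = ["t2.micro", "t2.small", "t2.medium", "t2.large"]

ec2 = boto3.resource("ec2")

for itype in INSTANCE_TYPES:
    # One instance per size, tagged so it can be found later
    ec2.create_instances(
        ImageId=AMI_ID,
        InstanceType=itype,
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": f"wp-lab-{itype}"}],
        }],
    )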
We ran four distinct sets of scans to cover and analyze common use cases:
📱 Samsung Galaxy S6 Edge Script Kiddies → The Python script async_flood.py (attached) launched from the Samsung device against the target
👾 Open-source scanners used by hackers worldwide (nuclei, feroxbuster, dirbuster) → nuclei/feroxbuster/dirbuster run with default configurations on a standard machine (a 2020 Mac); a sample invocation sketch follows this list
🦉 Patrowl Offensive Scans → The full suite of offensive scans from Patrowl launched from our iso-prod platform
🦉 Patrowl Passive Scans → The full suite of passive scans from Patrowl launched from our iso-prod platform
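
For reference, here is a minimal sketch of how the default-configuration open-source runs can be scripted in Python. The target URL is a hypothetical placeholder, and the exact flags we used may differ.

import subprocess

TARGET = "http://203.0.113.10"  # hypothetical lab instance

# Default-configuration runs; both tools are assumed to be on the PATH
subprocess.run(["nuclei", "-u", TARGET], check=False)
subprocess.run(["feroxbuster", "-u", TARGET], check=False)
# DirBuster ships as a Java GUI tool and is driven interactively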
Zooming in on the results
To analyze the behavior of each machine, we deployed a simple Python monitoring script that measures response times for each web service and tracks the causes and durations of any outages (a simplified sketch follows the list of behaviors below).
This allowed us to identify four types of behavior across the servers:
💀 Massive crash: the server crashes immediately and no service responds (including SSH). AWS admin intervention is required for a soft reboot.
🤕 Web Service Unreachable: the web service can’t handle incoming requests during the scan, but the server remains up. The site is inaccessible only during the scan and returns to normal afterwards.
🕠 Latencies: the website experiences significant response delays during the scan. The site remains accessible but is noticeably slow.
✅ Nothing happens: no disruption detected during the scan for any users.
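
For illustration, here is a simplified version of such a monitor; the target URL, latency threshold, and polling interval are hypothetical placeholders, not our production values.

import time
import requests

TARGET = "http://203.0.113.10"   # hypothetical lab instance
LATENCY_THRESHOLD = 2.0          # seconds before a response counts as "slow"
POLL_INTERVAL = 5                # seconds between probes

while True:
    start = time.monotonic()
    try:
        r = requests.get(TARGET, timeout=10)
        elapsed = time.monotonic() - start
        if elapsed > LATENCY_THRESHOLD:
            print(f"latencies: {elapsed:.2f}s (HTTP {r.status_code})")
        else:
            print(f"ok: {elapsed:.2f}s (HTTP {r.status_code})")
    except requests.exceptions.RequestException as exc:
        # Covers both "web service unreachable" and a full crash; telling
        # them apart needs an out-of-band check (e.g. SSH reachability)
        print(f"unreachable: {exc}")
    time.sleep(POLL_INTERVAL)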

To be honest, the results impressed us quite a bit.
It’s surprisingly easy to bring down a poorly configured or underpowered server: sometimes, a simple command line is enough to completely destabilize it. 💀 Massive crash cases are far more common than people realize.
As expected, scans performed with Patrowl are much gentler on applications than a basic Python script run from an old 2015 phone. The only machine that didn't hold up was the 🎤 micro instance. That said, even a few manual refreshes (F5) in a browser can destabilize such a service; a server this fragile simply has no place online.
For the others (small, medium, or large), we found that Patrowl has very limited impact on the tested services, despite the modest configurations (we're talking entry-level AWS instances here). These machines would stand far less of a chance against the classic tools commonly used in penetration tests or Bug Bounty programs, where the risk of crashes is much higher.
So, Patrowl's promise of being less aggressive than a Samsung Galaxy S6 Edge is definitely kept!

zerolte:/ # cat /sdcard/async_flood.py
import asyncio
import aiohttp
import time
import random
import sys

# Check that the target URL was passed as an argument
if len(sys.argv) != 2:
    print("Usage: python async_flood.py https://example.com")
    sys.exit(1)

BASE_URL = sys.argv[1].rstrip('/')
ENDPOINTS = ["/", "/about", "/contact", "/blog", "/products", "/api/data", "/login", "/search?q=test"]
REQUESTS_PER_SECOND = 300
DURATION_SECONDS = 20  # test duration

stats = {
    "success": 0,
    "errors": 0,
    "response_times": [],
}

async def send_request(session, i):
    url = BASE_URL + random.choice(ENDPOINTS)
    start = time.perf_counter()
    try:
        async with session.get(url) as response:
            await response.text()
            duration = time.perf_counter() - start
            stats["response_times"].append(duration)
            if response.status == 200:
                stats["success"] += 1
            else:
                stats["errors"] += 1
            print(f"[{i}] {url} → {response.status} ({duration:.3f}s)")
    except Exception as e:
        stats["errors"] += 1
        print(f"[{i}] {url} → ERROR: {e}")

async def main():
    async with aiohttp.ClientSession() as session:
        start_time = time.time()
        total_requests = REQUESTS_PER_SECOND * DURATION_SECONDS
        tasks = []
        for i in range(total_requests):
            # Pace task creation so the overall rate matches REQUESTS_PER_SECOND
            elapsed = time.time() - start_time
            expected = i / REQUESTS_PER_SECOND
            delay = expected - elapsed
            if delay > 0:
                await asyncio.sleep(delay)
            task = asyncio.create_task(send_request(session, i + 1))
            tasks.append(task)
        await asyncio.gather(*tasks)

    # Summary
    print("\n=== SUMMARY ===")
    total = stats["success"] + stats["errors"]
    avg_time = sum(stats["response_times"]) / len(stats["response_times"]) if stats["response_times"] else 0
    print(f"Total requests: {total}")
    print(f"Successes: {stats['success']}")
    print(f"Errors: {stats['errors']}")
    print(f"Average time: {avg_time:.3f} s")

if __name__ == "__main__":
    asyncio.run(main())
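
For completeness: the script's only third-party dependency is aiohttp, which on a stock Termux install can typically be satisfied with pkg install python followed by pip install aiohttp. It is then launched exactly as its usage string shows, e.g. python async_flood.py https://example.com.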