Advanced Web App Security Testing Guide
Deep testing is about coverage and focus. This guide shows where automation stops, where manual techniques shine, and how to combine both to produce verified issues instead of false positives. For background on specific areas, see the API security testing checklist, Security headers, CSP, and the TLS 1.3 upgrade.
What automation is great at
Automation shines when the rules are clear and repeatable.
- Crawl-and-check coverage: missing headers, TLS and cookie hygiene, obvious misconfigurations
- Dependency and known‑CVE alerts across many services
- Regressions and drift across releases (spot when a header disappears or TLS downgrades)
These issues can be detected quickly with an automated security scanning service such as Barrion. With continuous monitoring, you are also notified of new weaknesses as they appear, instead of relying on people to run frequent manual checks.
Where humans add the most value
Humans catch context that tools miss.
- Business logic abuse and authorization edge cases where the “what” is allowed but the “who” or “when” is wrong
- Multi‑step/complex flows, chained attacks, state/sequence issues that require understanding how web applications and APIs work together
- Assumption checks: “what if I skip step two or replay step three?” that break the happy path
Think like an attacker: change order, timing, inputs, and roles, and try to reach areas and data that should be off-limits.
A proven mixed strategy
- Run continuous automated scans for hygiene and drift - they’re your early‑warning system
- Before major launches, schedule a short, focused manual session on one risky flow - depth over breadth
- Log only verified issues as tickets, and add preventive measures so the issue does not return (e.g., a security header, a rate limit, automated tests)
This keeps signal high while costs stay reasonable.
Signals that matter (outcomes over noise)
- A small, prioritized list of verified issues with direct business impact
- Clear reproduction steps and a named owner, with a fix that is testable and monitored
- Fewer regressions over time (headers/TLS/cookies stable across releases)
A 90‑minute manual session template
Scope: one high‑value flow (such as new user to checkout)
Hypotheses (pick 5-7): IDOR (Insecure Direct Object Reference), CSRF (Cross-Site Request Forgery), rate limit bypass, replay, field tampering, forced browsing, privilege escalation
Steps:
- Use two accounts (low and high privilege) and an unauthed client
- Capture requests (browser devtools or proxy)
- Replay with small mutations: IDs, headers, methods, timing
- Write results briefly; when opening a ticket, include one short “how to reproduce” step so engineers can verify quickly
Outcome: 1-3 verified issues with clear next actions, or confirmation that this flow currently looks secure.
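The replay-with-mutations step above can be sketched as a small generator. The field names and variant lists below are hypothetical examples for illustration, not output from any specific proxy or tool:

```python
from itertools import product

# Hypothetical captured request, reduced to the fields worth mutating.
base = {"method": "GET", "path": "/v1/orders/123", "role": "low-priv"}

id_variants = ["123", "124", "999999"]           # neighboring and distant object IDs
method_variants = ["GET", "POST", "DELETE"]      # verb tampering
role_variants = ["low-priv", "unauthenticated"]  # drop or downgrade credentials

def mutations():
    """Yield every combination of ID, method, and role to replay."""
    for oid, method, role in product(id_variants, method_variants, role_variants):
        yield {**base, "method": method, "path": f"/v1/orders/{oid}", "role": role}
```

Replaying each mutation and diffing status codes against the baseline quickly surfaces authorization gaps without needing a full fuzzer.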
Hands-on test snippets
Pre‑reqs and scope for snippets
- Run in a test/staging environment
- Have two users (low and high privilege) and an unauthenticated client available
IDOR (Insecure Direct Object Reference) - verify object access
- When to run: Any API that returns objects by ID
- Command/example:
curl -i -H "Authorization: Bearer <token-user-A>" https://api.example.com/v1/orders/123
curl -i -H "Authorization: Bearer <token-user-B>" https://api.example.com/v1/orders/123
- Expected result: 403 (or equivalent denial) for the non‑owner, no data leakage
- Common false positives: Public resources or shared/team objects by design
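The expected denial depends on a server-side ownership check. A minimal sketch of that check, with hypothetical names (ORDERS, fetch_order) standing in for your data layer:

```python
class Forbidden(Exception):
    """Maps to an HTTP 403 (or a 404, to avoid confirming the object exists)."""
    pass

# Hypothetical in-memory store for illustration.
ORDERS = {123: {"owner": "user-A", "total": 42}}

def fetch_order(order_id, requesting_user):
    """Return an order only if the requesting user owns it."""
    order = ORDERS.get(order_id)
    if order is None or order["owner"] != requesting_user:
        raise Forbidden(f"user {requesting_user} may not read order {order_id}")
    return order
```

The key point the curl probe verifies: the owner comes from the authenticated session, never from a client-supplied field.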
Auth rate limiting - throttle brute force
- When to run: Login, password reset, and OTP verification flows
- Command/example:
for i in {1..20}; do curl -s -o /dev/null -w "%{http_code}\n" \
-X POST https://example.com/login -d 'u=test&p=wrong'; done
- Expected result: 429 (or equivalent throttling) after a small burst, without locking out other users
- Common false positives: WAF blocking by IP while user‑based throttles are missing
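The user-based throttle this burst probe should trigger can be sketched as a per-account sliding window. This is a simplified in-memory model for illustration, not a production limiter (which would need shared storage and IP-based signals too):

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Allow at most `limit` attempts per `window` seconds, per username."""

    def __init__(self, limit=5, window=60.0):
        self.limit = limit
        self.window = window
        self.attempts = defaultdict(deque)  # username -> timestamps

    def allow(self, username, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[username]
        # Drop attempts that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # would map to HTTP 429
        q.append(now)
        return True
```

Because the window is keyed per username, throttling one attacked account never locks out other users, which is exactly what the expected result above asks for.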
CORS (Cross-Origin Resource Sharing) - block wildcard with credentials
- When to run: Any page/API that returns sensitive data to browsers
- Command/example:
curl -i -X OPTIONS https://example.com -H "Origin: https://attacker.example" -H "Access-Control-Request-Method: GET"
- Expected result: The response must not combine Access-Control-Allow-Credentials: true with a wildcard or reflected Access-Control-Allow-Origin; sensitive endpoints should not be CORS‑open
- Common false positives: Static assets intentionally public
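The dangerous combination this probe looks for can be expressed as a small predicate over the response headers. This is a simplified sketch: it treats an echo of the attacker origin as reflection and ignores header-casing and multi-value edge cases:

```python
ATTACKER_ORIGIN = "https://attacker.example"  # the Origin sent in the probe

def cors_misconfigured(headers):
    """True when credentials are allowed alongside a wildcard or reflected origin."""
    origin = headers.get("Access-Control-Allow-Origin", "")
    creds = headers.get("Access-Control-Allow-Credentials", "").lower() == "true"
    return creds and origin in ("*", ATTACKER_ORIGIN)
```

Note that browsers already reject the literal combination of a wildcard with credentials, so the reflected-origin case is the one most often found in the wild.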
CSRF (Cross-Site Request Forgery) - token + SameSite
- When to run: State‑changing POST in browser flows
- Command/example:
<form action="https://example.com/account/email" method="POST">
<input type="hidden" name="email" value="[email protected]">
</form>
<script>document.forms[0].submit()</script>
- Expected result: Request fails without a valid CSRF token, and SameSite cookies are enforced
- Common false positives: Non‑browser clients where CSRF does not apply
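The token check that should make the forged form above fail can be sketched with Python's secrets and hmac modules. Function names are hypothetical; the pattern is the standard synchronizer token:

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session token and embed it in the form as a hidden field."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf(session, submitted_token):
    """Reject the request unless the submitted token matches the session's.

    compare_digest gives a constant-time comparison.
    """
    expected = session.get("csrf_token")
    return bool(expected) and hmac.compare_digest(expected, submitted_token or "")
```

The attacker's page cannot read the victim's token, so the forged POST arrives without it and fails verification; SameSite cookies add a second, independent layer.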
Open redirect - validate return URLs
- When to run: Login/logout flows with returnUrl/redirect params
- Command/example:
curl -I "https://example.com/logout?returnUrl=https://attacker.example"
- Expected result: Param ignored or validated against an allowlist of paths/domains
- Common false positives: Intended cross‑domain SSO flows with strict allowlists
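The allowlist validation this probe expects can be sketched as follows; ALLOWED_HOSTS and safe_return_url are hypothetical names for illustration:

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"example.com", "sso.example.com"}  # hypothetical allowlist

def safe_return_url(return_url, default="/"):
    """Return the URL only if it is a local path or an allowlisted https host."""
    parsed = urlparse(return_url)
    # Relative paths are fine; "//host" is scheme-relative and must be rejected.
    if (not parsed.scheme and not parsed.netloc
            and return_url.startswith("/") and not return_url.startswith("//")):
        return return_url
    if parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS:
        return return_url
    return default
```

The scheme-relative "//attacker.example" form is a classic bypass of naive "starts with /" checks, which is why the sketch tests for it explicitly.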
SSRF (Server-Side Request Forgery) - block metadata IPs
- When to run: Endpoints that fetch user‑supplied URLs
- Command/example:
curl -i "https://example.com/fetch?url=http://169.254.169.254/latest/meta-data/"
- Expected result: Request is denied by allowlists, and internal/metadata IPs are unreachable
- Common false positives: A denial for the metadata IP alone can mask gaps - allowlists sometimes block 169.254.169.254 yet still permit other private ranges
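The deny-by-default target check can be sketched with Python's ipaddress module. This is a simplified model; a real deployment must also pin the resolved IP between the check and the fetch to avoid DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_fetch_target(url):
    """Reject URLs that resolve to private, link-local (metadata), loopback,
    or reserved addresses."""
    host = urlparse(url).hostname
    if not host:
        return False
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_link_local or ip.is_loopback or ip.is_reserved:
            return False
    return True
```

Note that 169.254.169.254 is caught by is_link_local, while is_private covers the RFC 1918 ranges the "common false positives" note warns about.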
Reduce noise
- Let an automated scanner such as Barrion handle the flaws automation can reliably detect
- Use nonces/hashes to align CSP with modern apps, and avoid blanket unsafe‑inline
- Prefer short‑lived tokens and revoke flows to reduce latent risk
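The nonce approach mentioned above can be sketched as a per-request header builder. This is a minimal example, not a complete policy; directive choices beyond script-src are illustrative:

```python
import secrets

def csp_header_with_nonce():
    """Build a CSP value with a fresh per-request nonce, so inline scripts
    can be allowed individually instead of via blanket 'unsafe-inline'."""
    nonce = secrets.token_urlsafe(16)
    value = (
        f"script-src 'self' 'nonce-{nonce}'; "
        "object-src 'none'; base-uri 'self'"
    )
    return nonce, value
```

The same nonce must be echoed into each inline script tag (script nonce="...") for that request, and a new one generated for every response.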
Where to go next
Focus your manual time where automation stops: business logic, authz edges, and multi‑step flows. Use OWASP ASVS as a lightweight map of what to cover and pull a few items per release. For API‑heavy apps, follow the practical probes in the API security testing checklist.
To keep momentum between releases, set up automated testing and monitoring in the Barrion dashboard. Barrion runs the hygiene checks continuously and flags drift, so your team can spend human time on the deeper issues that matter.