the same red-team mistakes keep showing up

after enough time poking at real systems, the surprising part is not how many bugs exist. it is how often the same ones reappear with slightly different paint on top.

the endpoints change. the branding changes. the stack changes. the failure modes do not.

what keeps showing up is usually one of four things:

  1. a trust decision made too early
  2. a security check done in the wrong place
  3. an internal tool exposed like a public feature
  4. a "temporary" exception that quietly became permanent

that is most of the job, honestly. not discovering brand new classes of bugs every week, but recognizing the shape of old mistakes fast enough to ask the right question before the wrong assumption hardens into production.


weak reset flows

password reset is one of those flows teams think they understand until they have to support it across multiple roles, multiple tenants, and multiple email templates.

the recurring mistake is simple: the reset flow starts as a convenience path and ends up acting like an authentication bypass.

the bad pattern usually looks like this:

POST /api/auth/forgot-password HTTP/1.1
Content-Type: application/json

{"email":"victim@example.com"}

and the server responds with something that should never leave the backend:

{
  "token": "eyJhbGciOi...",
  "status": "EMAIL_SENT"
}

at that point the email step is decorative. the API already made the security decision.

the version i see most often is not even that dramatic. it is a reset token that is technically sent by email, but also logged, reflected, cached, embedded into a callback URL, or returned to the browser "for debugging". every one of those shortcuts erases the original trust boundary.

the right question is not "does the reset email work?" it is "what can see the token before the user does?"

if the answer is more than one actor, the flow is already drifting.
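a minimal sketch of what the flow looks like when the boundary holds. everything here is hypothetical (the in-memory stores, the mailer stub), but the shape is the point: the raw token goes to the mailer and nowhere else, only a hash is stored, and the API answers the same way whether or not the account exists:

```python
import hashlib
import hmac
import secrets

# hypothetical in-memory stores; a real app would use a database
RESET_TOKENS = {}  # email -> sha256 hash of the issued token
USERS = {"victim@example.com"}

def forgot_password(email: str) -> dict:
    """handle POST /api/auth/forgot-password without leaking the token."""
    if email in USERS:
        token = secrets.token_urlsafe(32)
        # store only a hash, so logs or db dumps cannot redeem the token
        RESET_TOKENS[email] = hashlib.sha256(token.encode()).hexdigest()
        send_reset_email(email, token)  # the ONLY place the raw token goes
    # same response for known and unknown accounts: no enumeration signal
    return {"status": "EMAIL_SENT"}

def send_reset_email(email: str, token: str) -> None:
    # stand-in for a real mailer; deliberately never logged or returned
    pass

def redeem(email: str, token: str) -> bool:
    """single-use redemption: constant-time compare against the stored hash."""
    expected = RESET_TOKENS.pop(email, None)
    if expected is None:
        return False
    return hmac.compare_digest(
        expected, hashlib.sha256(token.encode()).hexdigest()
    )
```

the hash-only storage matters as much as the response shape: a token that leaks through a log line or a cached response is useless if the server only ever compares hashes.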


cors mistakes

cors bugs are usually boring until they are not.

teams often think of cors as a browser restriction, which is only half the story. the browser is just the place where the policy gets enforced. the actual mistake is upstream: the application is treating origin as identity.

the pattern repeats in a few flavors:

Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true

browsers will actually refuse that pairing (a wildcard origin cannot be combined with credentials), which is usually how teams end up with the more subtle version:

Access-Control-Allow-Origin: https://whatever-example.com

with logic that reflects back almost any origin matching a loose substring.

the broken part is not the header itself. it is the assumption that "the browser will block it" is somehow equivalent to "this is safe."
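the substring version is easy to demonstrate with two hypothetical origin checks (the host names are made up). the loose check reflects any origin that merely contains the expected string; the strict check compares against a closed allowlist, scheme included:

```python
# the broken pattern: substring match, then reflect the origin back
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

def cors_origin_loose(origin: str):
    """reflects anything containing the expected string; do not do this."""
    if "example.com" in origin:
        return origin  # also matches https://evil.example.com.attacker.net
    return None

def cors_origin_strict(origin: str):
    """exact comparison against a closed allowlist, scheme and all."""
    if origin in ALLOWED_ORIGINS:
        return origin
    return None
```

an attacker who can register `evil.example.com.attacker.net` walks straight through the loose check, credentials and all. the strict check has nothing to negotiate.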

it is especially common on endpoints that feel internal:

  • profile data
  • billing history
  • reset or invite workflows
  • admin-only reporting tools

those endpoints are often built with the quiet assumption that nobody outside the company will ever see them. then one day the origin policy becomes the only thing standing between a public browser and private data.

if i can read the response in a normal session with credentials attached, i want to know exactly why that response is not readable from a malicious origin.

that answer should not be "because the frontend never calls it."


exposed admin panels

admin panels fail in a different way.

they are often built with the right intent and the wrong boundary. the panel is "private" because the route is not linked from the homepage, or because it sits behind a path people do not advertise, or because it uses a separate subdomain that looks admin-ish enough to feel safe.

that is not access control. that is naming.

the usual sequence is:

  1. find the panel
  2. notice the login form is stronger than the rest of the app
  3. notice the pre-auth endpoints are less careful than the post-auth ones
  4. notice the panel leaks enough metadata to map the rest of the system

the real issue is that admin surfaces are treated as exceptions to product engineering. they get different review, different assumptions, and sometimes different ownership. then the auth model drifts until the panel is protected by a mix of obscurity, stale middleware, and one forgotten allowlist entry.

what i look for first is not whether the panel is visible.

it is whether the application behaves differently before and after login in a way that reveals internal structure:

/admin
/dashboard
/internal
/staff
/console

those paths are not vulnerabilities by themselves. they become interesting when the app gives them more trust than the rest of the stack deserves.
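the difference between hiding a route and guarding it fits in a toy sketch (routes, roles, and status codes here are invented for illustration). obscurity returns the same answer to everyone who finds the path; an authorization check runs on every request, no matter how the path was discovered:

```python
PRIVILEGED_ROUTES = {"/admin", "/internal", "/staff", "/console"}

def handle_by_obscurity(path: str, user_role: str) -> int:
    """'protected' only because nobody links to it from the homepage."""
    # anyone who guesses or enumerates the path gets the same 200
    return 200

def handle_with_authz(path: str, user_role: str) -> int:
    """the check is enforced server-side on every request."""
    if path in PRIVILEGED_ROUTES and user_role != "admin":
        return 403
    return 200
```

the second version is boring, which is the point: the panel can be linked from the homepage and nothing changes.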


callback bugs

anything called callbackUrl, redirect, returnTo, next, or continue should make you pause a little.

those parameters tend to start as harmless usability features and end up controlling a critical branch of a security flow.

the common mistake is not "open redirect" in the obvious sense. it is using an untrusted URL as part of a trusted workflow:

  • password reset links
  • magic login links
  • invite acceptance
  • SSO handoffs
  • email verification

when that happens, the app starts asking the browser to participate in the trust model. that is a bad trade.

the bug often looks like this:

POST /api/reset-password HTTP/1.1
Content-Type: application/json

{
  "email": "user@example.com",
  "callbackUrl": "https://attacker.example/reset"
}

if the application accepts that value without checking where it points, the rest of the flow is no longer anchored to the real product. the "reset" button in the email can become just a delivery mechanism for someone else’s page.
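a sketch of the kind of check that keeps the flow anchored. the allowed hosts and the default URL are assumptions, not anything from a real product; the technique is exact host matching with a safe fallback, never substring or endswith logic:

```python
from urllib.parse import urlsplit

# hypothetical allowlist: only exact hosts the product actually owns
ALLOWED_CALLBACK_HOSTS = {"app.example.com", "www.example.com"}
DEFAULT_CALLBACK = "https://app.example.com/reset"

def safe_callback_url(raw: str) -> str:
    """return the callback only if it points back at the product;
    otherwise fall back to a fixed default."""
    try:
        parts = urlsplit(raw)
    except ValueError:
        return DEFAULT_CALLBACK
    # require https and an exact host match
    if parts.scheme == "https" and parts.hostname in ALLOWED_CALLBACK_HOSTS:
        return raw
    return DEFAULT_CALLBACK
```

note that `app.example.com.attacker.net` fails the exact-match check, which is precisely the case an endswith comparison would wave through.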

and if the application also leaks the token anywhere else, the callback bug stops being the main problem and becomes the fallback problem.

that is what makes this family of issues annoying: the visible bug is often only one piece of the chain.


the pattern underneath all of it

the bugs above are different on paper, but the root cause is usually the same.

some piece of state was assumed to be trusted because it came from:

  • the browser
  • the frontend
  • the email template
  • the internal network
  • the hidden route
  • the staff-only panel

those are all contexts, not trust decisions.

the mistake is letting context become authorization.

that is why the same classes of bugs keep showing up. people build flows around what feels operationally convenient, then the security model quietly inherits the convenience assumptions.


what i check first now

when a system looks normal on the surface, i usually start with the boring questions:

  • where does the token first appear?
  • who can observe it before it is used?
  • what changes if i move the request across origins?
  • does the backend enforce the same rule the frontend hints at?
  • is the "internal" path actually protected, or just hidden?

those questions do not sound exciting, but they are efficient.

they also keep me from pretending the bug is more novel than it is.

the same mistakes keep showing up because the incentives keep showing up too: ship faster, reuse the existing helper, trust the client a little more than you should, and patch the symptom instead of the boundary.

that is the real red-team pattern.

not just finding flaws, but seeing how often the same design pressure produces them.


closing note

if a system’s security depends on the user not noticing a path, header, token, or redirect target, the system is already negotiating with the wrong layer.

that is the part worth remembering.
