Security Guide

Basic Threat Modeling for Small Web Apps

A practical guide to mapping assets, entry points, trust boundaries, abuse cases, and mitigation ideas before launch.


Threat modeling sounds formal, but for small web apps it can be very lightweight. You do not need a large security team or a wall-sized diagram to get value from it. The goal is simple: look at your app before launch and ask what could go wrong, who could trigger it, and what small changes would reduce the risk.

Small apps are often more exposed than teams expect. A portfolio, resource hub, admin panel, or side project may still collect contact details, rely on third party APIs, accept form input, or expose publishing workflows. A threat model helps you catch obvious mistakes before the internet catches them for you.

Why small apps still need threat modeling

Most teams skip threat modeling because the app feels too small to justify the effort. That usually means security review happens only after a problem appears. A basic model creates shared awareness early. It helps you notice where secrets live, where untrusted input enters the system, and which features deserve tighter controls before the launch rush takes over.

The useful version of threat modeling is not a list of dramatic attack stories. It is a short planning exercise that turns vague worry into a few concrete improvements. This pairs especially well with the security header planner because the model tells you where browser protections and policy settings matter most.

Step 1: List what you need to protect

Start with assets. These are the things an attacker might want to read, modify, abuse, or break. For a small app, the list is usually short: user accounts, admin actions, session cookies, contact form submissions, API tokens, uploaded files, payment data, or private drafts.

Assets
- Admin login
- Session cookie
- Contact form submissions
- CMS draft content
- API key for email delivery
- GitHub deployment token

Keep the list practical. If a value would hurt you, your users, or your business if leaked or abused, write it down.

Step 2: Map entry points

Next, mark where input enters the system. Entry points include public pages with forms, login screens, admin routes, upload flows, webhooks, API endpoints, query parameters, and even analytics scripts. Anything that accepts or processes external input deserves attention.

Entry points
- /contact form
- /login
- /admin/posts/new
- POST /api/messages
- POST /api/webhook/github
- query string filters on /resources

This is where a lot of security problems begin. When you can see the entry points, you can ask better questions about validation, auth, rate limits, and logging.
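Those validation questions are easy to encode as server-side checks. Here is a minimal sketch in Python, with hypothetical field names and length limits for the /contact form; a real app would adapt these rules to its own schema:

```python
import re

# Deliberately simple email shape check; full RFC validation is not the goal here.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_contact(form: dict) -> list:
    """Return a list of validation errors; an empty list means the input is acceptable."""
    errors = []
    name = (form.get("name") or "").strip()
    email = (form.get("email") or "").strip()
    message = (form.get("message") or "").strip()

    if not 1 <= len(name) <= 100:
        errors.append("name must be 1-100 characters")
    if not EMAIL_RE.match(email):
        errors.append("email does not look valid")
    if not 1 <= len(message) <= 5000:
        errors.append("message must be 1-5000 characters")
    return errors
```

The point is that these checks run on the server, so they hold even when a client bypasses the browser form entirely.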

Step 3: Mark trust boundaries

A trust boundary is any place where data moves from one trust level to another. Browser to server is a trust boundary. Public user to admin panel is another. Your app to a third party email provider or AI API is also a boundary.

These boundaries matter because assumptions often break there. Data that looks safe in one context may be dangerous in another. A markdown field that is fine in storage may become risky when rendered into HTML. A webhook that looks like a normal POST request may need signature verification because it crosses a public boundary.
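The webhook point can be made concrete. GitHub-style webhooks send an X-Hub-Signature-256 header containing an HMAC-SHA256 of the raw request body; a sketch of the verification, assuming you have the raw body bytes and the shared secret available:

```python
import hmac
import hashlib

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub-style X-Hub-Signature-256 header against the raw request body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header or "")
```

Verify the signature before parsing or acting on the payload, because the endpoint sits on the public side of the boundary.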

Step 4: Write abuse cases

Now turn the map into realistic abuse cases. Keep them specific. Instead of writing "hacker steals data," write things like "an attacker submits script tags through the contact form and the admin dashboard renders them unsafely" or "a user guesses unpublished draft URLs because the IDs are predictable."

Abuse cases
1. Spam bot floods the contact form and email provider.
2. Admin session cookie is reused on a shared machine.
3. Unescaped rich text leads to stored XSS in the dashboard.
4. Webhook endpoint accepts unsigned requests.
5. Draft content becomes public through weak route protection.

Good abuse cases are easier to mitigate because they describe the path, not just the fear.
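For instance, the predictable-draft-URL case above disappears if identifiers come from a cryptographic source instead of a counter. A one-line sketch in Python:

```python
import secrets

def new_draft_id() -> str:
    """Generate an unguessable draft identifier instead of a sequential integer ID."""
    return secrets.token_urlsafe(16)  # roughly 128 bits of randomness
```

Unguessable IDs are not a substitute for route-level auth checks, but they remove the cheapest attack path.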

Step 5: Turn risk into action

For each important abuse case, decide what control reduces the risk. Common improvements for small apps include stronger input validation, safer output encoding, CSRF protection, rate limiting, server-side auth checks, content security policy, webhook signature verification, and more useful logs.

Risk -> Mitigation
Stored XSS -> sanitize rich text and set CSP
Form spam -> add rate limiting and bot checks
Weak admin access -> require server-side auth and role checks
Secret exposure -> move tokens to env vars and rotate old values
Unsigned webhook -> verify signature before processing payload
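The "sanitize rich text" row can start as plain output escaping. A minimal sketch in Python, assuming the admin dashboard builds preview HTML on the server; escaping everything is the simplest safe default, and apps that genuinely need rich text should reach for an allowlist-based sanitizer library instead:

```python
import html

def render_submission_preview(message: str) -> str:
    """Escape user-supplied text before embedding it in dashboard HTML."""
    return "<p>{}</p>".format(html.escape(message))
```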

Not every risk needs a perfect solution right away. The point is to leave the session with actions you can actually implement.
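Rate limiting is another mitigation that can start small. A sliding-window sketch in Python, in-memory only, so it suits a single-process app; a deployment with multiple workers would need shared state such as Redis:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` requests per `window` seconds for each client key."""

    def __init__(self, limit: int = 5, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key: str, now=None) -> bool:
        """Record a request for `key` and return whether it is within the limit."""
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Keyed by client IP on the /contact form, even a crude limiter like this blunts the spam-flood abuse case.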

A lightweight review checklist

  • Do we know which data is sensitive?
  • Have we listed public forms, APIs, webhooks, and admin routes?
  • Do we validate input at the server, not only in the browser?
  • Are important admin actions protected and logged?
  • Do secrets stay out of the client bundle and public repo?
  • Have we chosen the next two or three mitigations to implement first?

If you want structure for the browser-facing part of this work, the security header planner and deployment readiness audit are good companions to this article.
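For the browser-facing part, a baseline header set might look like this sketch; the policy values are illustrative starting points, not a drop-in configuration, and a strict CSP in particular needs tuning against your app's actual scripts and styles:

```python
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'; object-src 'none'; base-uri 'self'",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "X-Frame-Options": "DENY",
}

def apply_security_headers(response_headers: dict) -> dict:
    """Merge baseline security headers into a response's header mapping.

    Headers already set on the response win, so per-route overrides still work.
    """
    return {**SECURITY_HEADERS, **response_headers}
```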

Worked example

Imagine a small tutorial site with a contact form, a private admin area, and a webhook that rebuilds the site after content changes. The asset list includes admin credentials, message submissions, deployment tokens, and unpublished draft content. Entry points include the form, login route, admin editor, and webhook endpoint. Trust boundaries exist between browser and app, app and email provider, and CMS and deployment pipeline.

Likely abuse cases are spam submissions, XSS in admin previews, replayed webhook calls, and accidental secret exposure in client-side code. The first mitigation batch might be rate limiting on the form, stricter output encoding in previews, signature verification for webhooks, and a review of environment variable usage. That is already a meaningful threat model, and it probably took less than an hour.
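The environment variable review from the example can even become a startup check. A sketch in Python, with illustrative secret names that you would replace with your own:

```python
import os

# Hypothetical names; list whatever your app actually requires.
REQUIRED_SECRETS = ["EMAIL_API_KEY", "WEBHOOK_SECRET", "DEPLOY_TOKEN"]

def check_required_secrets(env=os.environ) -> list:
    """Return the names of required secrets missing from the environment."""
    return [name for name in REQUIRED_SECRETS if not env.get(name)]
```

Failing fast at boot when this list is non-empty is cheaper than discovering a missing or hardcoded token in production.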

Threat modeling does not replace testing or code review. It makes both of those more focused. As your app grows, revisit the model whenever you change authentication or add uploads, new APIs, payment flows, or a richer admin workflow. For a small app, a few good review habits are often enough to prevent the obvious security mistakes.