[AI Security Audit] Suggestions #14

@MauScheff

Description

Dear Procivis Team,

I asked codex-cli to scan for security vulnerabilities. Here's what it found.

I'm sharing these in a best-effort attempt to contribute. I can't tell whether these are actual problems that need fixing, or
whether they are less critical than codex suggests due to context it was missing, in which case it's safe to ignore them.

Hope this helps in making Procivis One even more secure!

Respectfully,
Maurice

Findings

  • High Risk – SSRF through JSON-LD context resolution. When insecureVcApiEndpointsEnabled is true (it
    is in the sample config), unauthenticated callers can hit /vc-api/credentials/verify and submit
    a credential whose @context references an internal URL; the verifier resolves that context via
    JsonLdResolver, issuing an arbitrary GET with your service identity (config/config-local.yml:11,
    apps/core-server/src/router.rs:517,
    lib/one-core/src/provider/credential_formatter/json_ld_classic/mod.rs:264,
    lib/one-core/src/provider/caching_loader/json_ld_context.rs:29). That enables
    straightforward SSRF against internal networks or metadata services. Restrict the resolver to
    an allowlist of hosts (and schemes), set tight timeouts and response-size limits, and keep these
    “insecure” endpoints disabled unless they sit behind an API gateway that performs the validation
    instead.
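
An allowlist gate for context URLs could look roughly like the sketch below. The function name, the
allowlist contents, and the parsing approach are all hypothetical illustrations, not Procivis One's
actual API; a real fix would live inside JsonLdResolver / the caching loader:

```rust
// Minimal sketch of a host/scheme allowlist for JSON-LD context URLs.
// ALLOWED_CONTEXT_HOSTS is an example list, not the project's real config.
const ALLOWED_CONTEXT_HOSTS: &[&str] = &["www.w3.org", "w3id.org"];

fn is_allowed_context_url(url: &str) -> bool {
    // Only https; never http, file, or other schemes.
    let Some(rest) = url.strip_prefix("https://") else {
        return false;
    };
    // Host is everything up to the first '/', ':' or '?' after the scheme.
    let host = rest
        .split(|c| c == '/' || c == ':' || c == '?')
        .next()
        .unwrap_or("");
    // Exact-match allowlist; rejects internal IPs, metadata services, etc.
    ALLOWED_CONTEXT_HOSTS.contains(&host)
}

fn main() {
    assert!(is_allowed_context_url("https://www.w3.org/2018/credentials/v1"));
    assert!(!is_allowed_context_url("http://www.w3.org/2018/credentials/v1")); // wrong scheme
    assert!(!is_allowed_context_url("https://169.254.169.254/latest/meta-data/")); // SSRF target
    assert!(!is_allowed_context_url("https://evil.example/ctx"));
}
```

Exact host matching (rather than suffix matching) matters here, since suffix checks can be bypassed
with lookalike hostnames.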

  • Medium Risk – Panic messages leak to clients. The panic handler converts the panic payload into the
    HTTP response body, so any panic! message (including ones that format secrets) is echoed to the
    caller, bypassing hide_error_response_cause (apps/core-server/src/router.rs:593,
    apps/core-server/src/dto/response.rs:35). Return a generic 500 message and log the detailed panic
    on the server side instead.
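
A generic-500 handler could be sketched as below. This is a plain-Rust sketch of the payload
extraction only; the real handler is framework middleware, so the (status, body) pair stands in for
the actual response type, and the function name is hypothetical:

```rust
use std::any::Any;

// Sketch: turn a caught panic payload into a client-safe response.
fn panic_response(payload: &(dyn Any + Send)) -> (u16, &'static str) {
    // Extract the panic message for the server-side log only.
    let detail = payload
        .downcast_ref::<&str>()
        .map(|s| s.to_string())
        .or_else(|| payload.downcast_ref::<String>().cloned())
        .unwrap_or_else(|| "non-string panic payload".to_string());
    eprintln!("panic in request handler: {detail}"); // stays on the server

    // The client only ever sees a generic message, independent of
    // hide_error_response_cause.
    (500, "Internal Server Error")
}

fn main() {
    // Simulate a panic payload that happens to contain a secret.
    let payload: Box<dyn Any + Send> = Box::new("token = hunter2".to_string());
    let (status, body) = panic_response(payload.as_ref());
    assert_eq!(status, 500);
    assert_eq!(body, "Internal Server Error");
}
```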

  • Medium Risk – Management API guarded by static bearer token with a weak default. The middleware
    compares the Authorization header directly to config.auth_token, and the bundled config ships
    with authToken: "test" (apps/core-server/src/middleware.rs:94, config/config-local.yml:3). Without
    rotation, rate limiting, or a stronger default, an attacker who gets read access to config (or
    guesses the trivial default) gains full management control. Treat the token as a secret (env/secret
    store), require a stronger random default, and consider defense-in-depth (e.g., mTLS or OAuth)
    plus throttling.
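
Reading the token from the environment and comparing it in constant time could look like this
sketch (function and variable names are hypothetical, not the actual bearer_check middleware):

```rust
// Sketch: bearer check that expects the secret from the environment
// (e.g. std::env::var("CORE_AUTH_TOKEN"), never a checked-in config file)
// and compares in constant time to avoid a timing side channel.
fn bearer_token_matches(auth_header: Option<&str>, expected: &str) -> bool {
    let Some(presented) = auth_header.and_then(|h| h.strip_prefix("Bearer ")) else {
        return false;
    };
    // The length check leaks only the length; the XOR-fold then touches every
    // byte, so timing does not depend on where the first mismatch occurs.
    presented.len() == expected.len()
        && presented
            .bytes()
            .zip(expected.bytes())
            .fold(0u8, |acc, (a, b)| acc | (a ^ b))
            == 0
}

fn main() {
    let expected = "a-long-random-token";
    assert!(bearer_token_matches(Some("Bearer a-long-random-token"), expected));
    assert!(!bearer_token_matches(Some("Bearer test"), expected));
    assert!(!bearer_token_matches(None, expected));
}
```

In production code a vetted constant-time comparison (such as the subtle crate) would be preferable
to a hand-rolled one, but the shape of the check is the same.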

Open questions

  • Are the “insecure” VC API endpoints ever exposed on an internet-facing deployment, or are they
    always fronted by another service that performs context fetching/validation?
  • Do any of your current panic sites include sensitive data in the panic message that would already
    be observable via the current response behavior?

Next steps

  1. Lock down JSON-LD fetching: disable the insecure endpoints by default, enforce the allowlist and
    timeouts, and add regression tests that ensure disallowed hosts are rejected.
  2. Sanitize panic handling so clients only ever see a generic 500.
  3. Rotate the management bearer secret from configuration to a proper secret channel and add
    throttling/monitoring around bearer_check.
