
Adding a security agent to containerized workloads is not as simple as installing a package. Containers are immutable by design. Your CI/CD pipeline builds images and pushes them through a promotion chain — development, staging, production — with image digests that guarantee what runs in production is exactly what was tested. Injecting a runtime agent into that chain requires a deliberate strategy, or you end up with either a broken pipeline or a security layer that is only partially deployed.
This post covers the three main approaches to deploying Raven.io in Kubernetes, with the tradeoffs of each.
Option 1: Bake the Agent Into the Application Image
The most straightforward approach is to add the Raven.io agent to your application's Dockerfile. For a Java application, this means adding the agent JAR as a layer and setting the JAVA_TOOL_OPTIONS environment variable to include the -javaagent flag. For Node.js, it means adding the agent package to package.json and requiring it at application startup.
Example Dockerfile addition for a Java application:
COPY ravenio-agent.jar /opt/ravenio/agent.jar
ENV JAVA_TOOL_OPTIONS="-javaagent:/opt/ravenio/agent.jar"
# Declare the build argument so ${RAVENIO_API_KEY} resolves at build time.
# Note that ENV values persist in the final image metadata; prefer injecting
# secrets at runtime (e.g. from a Kubernetes Secret) where possible.
ARG RAVENIO_API_KEY
ENV RAVENIO_API_KEY=${RAVENIO_API_KEY}
ENV RAVENIO_APP_NAME=my-service
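A minimal Node.js equivalent, assuming the agent ships as an npm package (the `@ravenio/agent` package name here is illustrative, not confirmed by the Raven.io docs):

```dockerfile
# Install the agent alongside application dependencies
# (package name is illustrative)
RUN npm install @ravenio/agent
# Preload the agent before the application's own modules load
ENV NODE_OPTIONS="--require @ravenio/agent"
ENV RAVENIO_APP_NAME=my-service
```

`NODE_OPTIONS="--require …"` loads the agent before the application entry point without any code changes, which mirrors what `-javaagent` does for the JVM.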
The advantage of baking the agent in is simplicity — the agent is part of the image, promoted through the same pipeline as the application, and guaranteed to be present in every environment. The disadvantage is that agent updates require rebuilding and redeploying the application image. If a critical security update to the agent is released, you cannot push it independently of the application.
This approach works well for teams with frequent deployment cycles (daily or more) where agent updates can ride along with application releases. It is less suitable for teams with slow deployment cycles or heavily regulated change management processes where the application team and the security team operate on different timelines.
Option 2: Init Container Injection
Kubernetes init containers run before the main application container starts and can share volume mounts. The Raven.io init container copies the agent binary into a shared volume, and the application container picks it up via an environment variable at startup. The agent binary lives in the init container image rather than the application image.
A minimal pod spec for Java using this pattern:
initContainers:
- name: ravenio-init
  image: ravenio/agent:latest   # pin to a specific tag or digest in production
  command: ["cp", "/agent/ravenio-agent.jar", "/ravenio-mount/agent.jar"]
  volumeMounts:
  - name: ravenio-agent
    mountPath: /ravenio-mount
containers:
- name: my-app
  image: my-registry/my-service:1.2.3
  env:
  - name: JAVA_TOOL_OPTIONS
    value: "-javaagent:/ravenio/agent.jar"
  volumeMounts:
  - name: ravenio-agent
    mountPath: /ravenio
volumes:
- name: ravenio-agent
  emptyDir: {}
With this approach, updating the agent is as simple as updating the init container image tag in your deployment YAML. The application image is untouched. The security team can push agent updates by updating the Helm chart or Kustomize overlay that manages the init container version, completely independently of the application release cycle.
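As a sketch of what that overlay might look like with Kustomize (the image name and tag are illustrative), the security team's repository only needs to carry an image override:

```yaml
# kustomization.yaml overlay owned by the security team.
# Kustomize rewrites matching image references in pod specs,
# including initContainers, across the targeted manifests.
images:
- name: ravenio/agent
  newTag: "2.4.1"   # illustrative version
```

Bumping `newTag` and re-applying rolls the new agent out on the next deployment without any change to application repositories.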
The limitation is that environment variables and init container configuration are fixed at pod creation, so picking up a new agent version requires a pod restart. You cannot update a running pod's init container configuration without a rollout. If you need zero-downtime agent updates, coordinate them with your rollout strategy.
Option 3: Admission Webhook Automatic Injection
The most operationally scalable approach for large Kubernetes deployments is a mutating admission webhook. When a pod creation request arrives at the Kubernetes API server, the webhook intercepts it, inspects the pod spec, and modifies it to inject the init container and environment variables before the pod is created. Application teams deploy without any Raven.io-specific configuration in their manifests.
Raven.io ships a webhook controller that you deploy into your cluster once. You enable injection for namespaces by adding a label:
kubectl label namespace production ravenio.io/inject=enabled
Every pod that starts in that namespace automatically receives the agent. Application teams commit no agent-specific code. Security teams can update the webhook controller to change the agent version cluster-wide without touching application repositories.
The tradeoff is operational complexity at the webhook layer. The webhook itself must be highly available — if it fails, pod admission fails, and your deployments stop. The webhook must handle failure modes gracefully (failurePolicy: Ignore for most production use cases, with monitoring on the webhook to catch silent injection failures). It also requires cluster-admin access to install, which creates a change management discussion in regulated environments.
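A sketch of how the webhook registration might look, assuming the controller registers under a `ravenio-injector` name (the metadata, service names, and namespace below are illustrative; the `namespaceSelector` matches the label from the kubectl command above):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: ravenio-injector          # illustrative name
webhooks:
- name: inject.ravenio.io         # illustrative
  failurePolicy: Ignore           # fail open: pods still schedule if the webhook is down
  namespaceSelector:
    matchLabels:
      ravenio.io/inject: enabled
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  clientConfig:
    service:
      name: ravenio-injector      # illustrative
      namespace: ravenio-system   # illustrative
      path: /mutate
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

With `failurePolicy: Ignore`, a webhook outage means pods are admitted without the agent rather than blocked, which is why monitoring for silent injection failures matters.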
CI/CD Pipeline Integration
Regardless of which deployment pattern you choose, the CI/CD pipeline integration follows the same steps. First, the pipeline pulls the agent version pinned in your security configuration; never use :latest tags in production, pin to a specific digest like ravenio/agent@sha256:abc123.... Second, verify the pulled agent against the checksum provided in the Raven.io release manifest. Third, include the agent in the artifact that goes through your staging environment for testing before production promotion.
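The checksum verification step can be a small shell function in the pipeline. This is a hedged sketch, not the Raven.io tooling itself; file names and the manifest format are illustrative:

```shell
# Fail the CI step if the pulled agent binary does not match the
# sha256 checksum published in the release manifest.
# Usage: verify_agent_checksum <path-to-binary> <expected-sha256>
verify_agent_checksum() {
  actual=$(sha256sum "$1" | awk '{print $1}')
  if [ "$actual" != "$2" ]; then
    echo "checksum mismatch for $1" >&2
    return 1
  fi
  echo "checksum verified for $1"
}
```

A non-zero return aborts the pipeline stage, so a tampered or corrupted agent never reaches the staging artifact.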
The most common CI/CD failure mode we see with RASP deployments is testing in staging with the agent in observe mode and deploying to production with the agent in blocking mode without an intermediate step. The alert volume difference between the two modes can be significant. Always test in blocking mode in a pre-production environment that receives representative traffic before enabling blocking mode in production.
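The mode itself is typically just pod configuration, which makes it easy to vary per environment. Assuming the agent reads its mode from an environment variable (the `RAVENIO_MODE` name here is an assumption; check the agent documentation for the actual setting):

```yaml
env:
- name: RAVENIO_MODE   # assumed variable name, not confirmed
  value: "observe"     # switch to "blocking" only after pre-production testing
```

Keeping this value in a per-environment overlay lets staging run in blocking mode against representative traffic before production follows.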
Resource Sizing for Agent-Instrumented Pods
The Raven.io Java agent adds approximately 80MB to JVM heap usage at startup for the instrumentation bytecode and baseline storage. Node.js agents add approximately 30MB of V8 heap. Python agents add approximately 15MB of process memory. These numbers are measured at idle; under load, the agent's per-request working memory is small (under 2KB per concurrent request for most workloads).
Adjust your pod memory requests and limits accordingly. For Java pods currently running with 512MB heap, add 100MB of headroom when introducing the agent. Do not set memory limits below 1.2x of what the agent-instrumented pod uses at peak observed load — the OOMKiller is an unpleasant way to discover that your memory limits were too tight.
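Applying those numbers to the Java case above, a sketch of adjusted pod resources (the exact values are illustrative and depend on your observed peak usage):

```yaml
resources:
  requests:
    memory: "768Mi"   # e.g. 512MB heap + JVM overhead + ~100MB agent headroom
  limits:
    memory: "1Gi"     # keep >= 1.2x the instrumented pod's observed peak
```

Measure peak usage with the agent enabled in pre-production before settling on the limit.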
Handling Horizontal Pod Autoscaling
HPA events — pods being added during traffic spikes — require that new pods start with the agent already instrumented and connected to the Raven.io control plane. All three deployment patterns above handle this correctly because agent injection happens before the application process starts, either at image build time or in the init container phase.
One nuance: the behavioral baseline is stored per deployment in the Raven.io control plane, not per pod. New pods that join a deployment inherit the existing baseline from the control plane rather than starting a fresh observation period. This means new pods are in blocking mode from their first request rather than spending 48 hours in observation mode; the baseline is shared across all instances of a deployment.
Rollback Procedures
When an agent update causes unexpected behavior — elevated false positive rate, performance regression, or application startup failure — rollback procedures differ by deployment pattern. For baked-image deployments, roll back the application image. For init container deployments, roll back the Helm chart to the previous agent image tag. For admission webhook deployments, update the webhook controller configuration to the previous agent version and restart affected pods.
Document your rollback procedure before your first production deployment. An incident during a security tool deployment is not the moment to work out rollback steps from first principles.
The admission webhook approach, despite its operational complexity, is the pattern we recommend for teams running more than 20 microservices. The overhead of managing agent versions across dozens of application repositories quickly becomes prohibitive, and centralized webhook management is the cleaner long-term solution.
Kubernetes Deployment Guide
The full Raven.io Kubernetes deployment guide includes Helm chart templates, admission webhook installation YAML, and a pre-production validation checklist. Available to all trial customers.
Start a Trial