

DevSecOps Platform Setup

DoW publishes a DevSecOps Reference Design that describes how a compliant DevSecOps environment is expected to be structured. It defines architectural patterns (CI/CD pipelines, registries, and clusters), security expectations (SBOM generation, container scanning and signing, centralized logging), and operational assumptions aligned with RMF.

​

By deploying into a compliant environment, systems can inherit a substantial number of security controls rather than implementing and justifying them independently. This inherited-control model is one of the primary ways teams reduce documentation burden and assessment friction.

​

[PLAT-1] Set up internal Big Bang cluster for testing

The most concrete implementations of this reference design are Big Bang and Unicorn Delivery Service (UDS) Core. Either could work, but Big Bang has been chosen for the following reasons:

  1. It has wider support for additional packages,

  2. It has formed the basis for the DoW Platform One Party Bus,

  3. It doesn't rely on the additional custom tooling that UDS Core introduces,

  4. UDS Core is based on Big Bang anyway, and

  5. Big Bang provides a lower-level baseline that is likely to be easier to adapt to a variety of DevSecOps platforms (whether they're using Big Bang, UDS, or something different).

​

We deployed a minimal internal Big Bang environment on four unused developer workstations that met the minimum hardware requirements (see Prerequisites - Big Bang Docs).

​

This section assumes a foundational understanding of Kubernetes. It does not attempt to deeply optimize lower-level Kubernetes infrastructure components, since production deployments typically place those responsibilities within the managed Kubernetes platform. Our objective was to establish a stable, functional test environment to empower higher-priority integration and security validation tasks.

​

As noted in the official, public repo (Big Bang / bigbang · GitLab):

Big Bang is an umbrella Helm chart that packages together a collection of open-source and commercial software tools into a cohesive platform. It leverages Flux CD for GitOps-based deployments and provides:

  • Zero Trust Security: Built-in security controls with defense-in-depth architecture

  • Compliance by Design: Implementation of the DoW DevSecOps Reference Architecture and industry standards

  • Observability Stack: Comprehensive monitoring, logging, and tracing capabilities

  • Service Mesh: Istio-based secure service-to-service communication

  • Developer Experience: Integrated CI/CD pipelines and development tools

​

Platform One operates the reference Big Bang cloud environment, but Big Bang itself is explicitly designed to be deployable anywhere.

​

For more information on Big Bang’s architecture and the services it includes, see Packages - Big Bang Docs and Architecture - Big Bang Docs. Big Bang’s documentation has markedly improved in recent months, so we recommend revisiting it even if you have read it before.

​

Using Big Bang means you are not building IAM (Identity and Access Management), SIEM (Security Information and Event Management), network security, endpoint protection, or CI/CD from scratch. But this does NOT mean ATO is automatic. Big Bang gives you mechanisms and evidence at the platform layer; your application still must use them correctly, not bypass them, and provide app-layer evidence.

 

When deploying on Big Bang, implementation effort concentrates in a small number of control families where Big Bang provides the how but not the what:

​

  1. Cryptography, Key management, and PKI [SC-13, SC-12, SC-17]
    This is the usual schedule-killer in anything that might touch classified. It concerns things like what crypto is allowed; how keys are generated, stored, rotated, and escrowed; what PKI/certs you must trust; and how you prove it.

  2. Encryption in Transit & Session Authenticity [SC-08, SC-08(01), SC-23]
    Even if Big Bang gives you mTLS, Ingress, and service-mesh primitives, your app still has to use them correctly. That means no plaintext sidecars, no “temporarily” skipping cert validation, no weak ciphers, no insecure callbacks to external services.

  3. Protection of Information at Rest [SC-28, SC-28(01)]
    Big Bang can often provide encrypted storage, but the ATO question is, “What data do you store, where, and with whose keys?” This guidance commonly expects encryption at rest and tight key control.

  4. Monitoring, Near-real-time Analysis, and Alerts [SI-04, SI-04(02), SI-04(05)]
    Big Bang is designed around continuous monitoring, but you still must emit the right app/security events, ensure they’re parsable, avoid logging secrets, and wire alert conditions that matter (auth anomalies, privilege actions, data access, admin changes).

  5. Flaw Remediation & Automated Status [SI-02, SI-02(02)]
    In practice, this means dependency hygiene, patch SLAs, a continuous SBOM/vulnerability workflow, and proving you actually deploy fixes within the required window. Big Bang pipelines can help enforce gates, but your team must operationalize the cadence and exceptions process.

  6. Integrity Monitoring & Code Authentication/Signing [SI-07, SI-07(15)]
    This is “prove the thing you run is the thing you built.” Signed images, verified provenance, admission control, and (often) stronger expectations in higher-impact environments. This tends to be a moderate-to-heavy lift if you don’t already do signed artifacts end-to-end.

  7. Configuration Management Plan [CM-09]
    Not hard engineering, but it’s a documentation-and-process sink: defining CIs, baselines, environments, change control, approvals, emergency changes, and how Big Bang + your repo process enforce it. Without this, many other items look hand-wavy.

  8. Enumerate Ports/Protocols/Services [CM-07, SA-04(09), SA-09(02)]
    This becomes painful if your architecture is “microservices plus a dozen third-party APIs.” You’ll need a crisp inventory: what’s exposed, what’s internal-only, what’s egressing, and why each item is mission-essential.

  9. Malicious Code Protection [SI-03]
    Often largely inherited through platform scanning and EDR (Endpoint Detection and Response), but your app and build chain must avoid becoming the gap (e.g., pulling unsigned binaries at runtime, curl | bash installers, uncontrolled plugins).

  10. Input Validation & Error Handling [SI-10, SI-11]
    Purely in your team’s control and very audit-friendly. You need to validate all external inputs, avoid injection classes, and ensure errors don’t leak internals (stack traces, secrets, environment details). Usually not the biggest calendar item, but it’s a common assessor finding.

​

Set up RKE2 (RKE Government) Kubernetes distribution

RKE2 (RKE Government) is the Kubernetes platform we chose based on ease of installation, base OS support, and specific security hardening guidance. Other good options include OpenShift and Konvoy (see Kubernetes Distribution Specific Notes - Big Bang Docs to learn more). We also found several outside resources valuable in getting started.

​

This setup ultimately wasn’t that different from a “normal” Kubernetes cluster. We needed to set up the initial control plane node, additional control plane nodes, and any worker nodes.

​

However, RKE2 has its own Kubernetes executables that aren’t available on the PATH by default. It is a huge time saver to set up kubectl on your local machine with the RKE2 config, rather than SSHing into a control plane node and running a complicated command like this:

sudo /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get secret

​

Instead, you can install kubectl on your local machine and then copy the RKE config to the default location:

scp administrator@<CLUSTER_IP>:/etc/rancher/rke2/rke2.yaml ~/.kube/config
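One caveat worth flagging: the rke2.yaml you just copied targets the local loopback address, which only works on the node itself. A quick rewrite (reusing the same <CLUSTER_IP> placeholder as above) points it at the control plane node:

```shell
# RKE2 writes its kubeconfig with server: https://127.0.0.1:6443; rewrite it
# so kubectl on your workstation talks to the control plane node instead.
sed -i 's/127\.0\.0\.1/<CLUSTER_IP>/' ~/.kube/config

# Sanity check: the cluster should now respond from your workstation.
kubectl get nodes
```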

​

Then, you can just run kubectl directly without any config filepath:

kubectl get secret

​

Kick off GitOps

GitOps is an operational model in which Git is the authoritative source of truth for the desired state of a system, and automated controllers continuously reconcile the running environment to match what is declared in Git. The following sections provide specific details about kicking off GitOps:

​

Deploy Big Bang customer template

To start off, jump into the Big Bang customer template. Don’t worry about using the quickstart script; instead, copy the contents of the customer template’s gitRepo folder to a new repo. Since this repo will be the main GitOps repository, the RKE2 cluster must be able to clone it via a URL and credentials. In practice, this usually means standing up an internal Git service (for example, Gitea or GitLab) that’s reachable from the cluster.
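The steps above can be sketched as follows. The template URL is inferred from its GitLab path, and git.internal.example is a placeholder for whatever internal Git service (Gitea, GitLab) your cluster can reach:

```shell
# Hypothetical bootstrap of the GitOps repo; hostnames and paths are
# placeholders -- substitute your internal Git service and project path.
git clone https://repo1.dso.mil/big-bang/customers/template.git bb-template
mkdir my-bigbang
cp -r bb-template/gitRepo/. my-bigbang/
cd my-bigbang
git init -b main
git add .
git commit -m "Initial Big Bang GitOps repo from customer template"
git remote add origin https://git.internal.example/platform/my-bigbang.git
git push -u origin main
```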

​

Deploy secrets with SOPS

Now, you need to provide your Registry One credentials so Big Bang can pull all the images it needs from Iron Bank. To do this, install SOPS (Secrets OPerationS) in your environment, and follow the instructions at Manage Kubernetes secrets with SOPS | Flux to generate a GPG key and update .sops.yaml with the fingerprint of this key.
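A condensed sketch of those Flux instructions is below. The key name is arbitrary, and %no-protection creates the key without a passphrase so automated decryption can work unattended:

```shell
# Generate a GPG key for SOPS in unattended batch mode.
export KEY_NAME="bigbang-sops"
gpg --batch --full-generate-key <<EOF
%no-protection
Key-Type: rsa
Key-Length: 4096
Expire-Date: 0
Name-Real: ${KEY_NAME}
EOF

# Extract the key fingerprint (field 10 of the colon-delimited fpr record).
KEY_FP=$(gpg --list-secret-keys --with-colons "${KEY_NAME}" \
  | awk -F: '/^fpr:/ {print $10; exit}')

# Tell sops which key to use; encrypted_regex limits encryption to the
# sensitive fields of Kubernetes Secret manifests.
cat > .sops.yaml <<EOF
creation_rules:
  - encrypted_regex: "^(data|stringData)$"
    pgp: "${KEY_FP}"
EOF
```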

​

Then, use SOPS to add your Iron Bank credentials to dev/secrets/dev-bb-secret.yaml and automatically encrypt it, like in the example at AtoReadinessCodeExamples/BigBangCustom/dev/secrets/dev-bb-secret.yaml at main · Edensoft-Labs/AtoReadinessCodeExamples.

​

 The beauty of SOPS is that you do NOT need to keep an unencrypted copy around to edit and then manually re-encrypt it. In the appropriate folder, you can just run:

sops dev/secrets/dev-bb-secret.yaml

​

SOPS will open an editor and then re-encrypt appropriate parts of the file immediately after closing the editor.

Next, update the bigbang.yaml template so that it points to your GitOps repository URL, like in the example at gitRepo/dev/bigbang.yaml · main · Big Bang / Customers / template · GitLab.

​

You need to provide the credentials for this GitOps repo in that private-git secret. Because GitOps cannot work without these credentials, we used a bootstrap script to deploy this secret to the cluster: AtoReadinessCodeExamples/BigBangCustom/DeploySecretsToCluster.sh at main · Edensoft-Labs/AtoReadinessCodeExamples.
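A minimal sketch of what such a bootstrap step does is below. The secret name private-git and the bigbang namespace follow the customer template’s conventions; the environment variables are placeholders for your internal Git credentials:

```shell
# Ensure the bigbang namespace exists (idempotent), then create the Git
# credentials secret the GitRepository resource references.
kubectl create namespace bigbang --dry-run=client -o yaml | kubectl apply -f -
kubectl create secret generic private-git \
  --namespace bigbang \
  --from-literal=username="${GIT_USERNAME}" \
  --from-literal=password="${GIT_TOKEN}"
```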

​

Install Flux controllers

Next, you need to install the Flux controllers. To do this, clone the main Big Bang repo (not the customer template we’ve been using) and run this script with your Platform One SSO username/CLI secret:

./bigbang/scripts/install_flux.sh -u $PLATFORM_ONE_USERNAME -p $PLATFORM_ONE_CLI_SECRET

​

Now, kubectl get pods --all-namespaces -o wide should show the Flux controllers (source-controller, kustomize-controller, helm-controller, and notification-controller) running.

​

Finally, initiate the Big Bang deployment:

kubectl apply -f dev/bigbang.yaml

​

You should now see the initial Kyverno images being pulled and those pods starting. (Check kubectl get helmreleases -n bigbang.)

​

Set up kube-dns service

At this point, we noticed errors like the following:

$ kubectl -n logging logs deploy/logging-loki-gateway

/docker-entrypoint.sh: No files found in /docker-entrypoint.d/, skipping configuration

[emerg] 1#1: host not found in resolver "kube-dns.kube-system.svc.cluster.local." in /etc/nginx/nginx.conf:33

nginx: [emerg] host not found in resolver "kube-dns.kube-system.svc.cluster.local." in /etc/nginx/nginx.conf:33

​

The Loki gateway is just an NGINX pod, and its config hard-codes the resolver to kube-dns.kube-system.svc.cluster.local. (This comes from the upstream Loki chart; Big Bang just wraps it.) On RKE2, the cluster DNS service is not named kube-dns, so that hostname doesn’t exist, and NGINX refuses to start when its resolver host can’t be resolved. To solve this, we created a kube-dns service that selects the actual CoreDNS pods.

​

First, find your existing DNS service:

kubectl -n kube-system get svc

​

On RKE2 you’ll usually see something like rke2-coredns-rke2-coredns as a ClusterIP service. Next, inspect it to see its labels and ports:

kubectl -n kube-system describe svc <your-dns-svc-name>

​

Then, create an alias service named kube-dns that selects the same pods. See the example manifest we've provided for you at AtoReadinessCodeExamples/kube-dns-alias.yml at main · Edensoft-Labs/AtoReadinessCodeExamples.
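For reference, the alias boils down to a Service whose selector matches the CoreDNS pods. The selector labels below are an assumption based on the rke2-coredns chart; copy the real ones from the describe output above:

```shell
# Alias Service: exposes the existing RKE2 CoreDNS pods under the name
# "kube-dns" that the Loki gateway's NGINX config expects.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    app.kubernetes.io/name: rke2-coredns   # assumed label; verify on your cluster
  ports:
    - name: dns
      port: 53
      protocol: UDP
      targetPort: 53
    - name: dns-tcp
      port: 53
      protocol: TCP
      targetPort: 53
EOF
```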

​​

Apply it and then let Loki reconcile again:

kubectl apply -f kube-dns-alias.yaml

flux reconcile helmrelease loki -n bigbang

​

Set up default storage class

Big Bang requires a functioning CSI-backed dynamic provisioner and exactly one default storage class. Otherwise, packages like Loki that need PVCs will not reconcile.

​

Because we had prior experience with Rook/Ceph, we used it for our test cluster. However, for standing up an on-premises Big Bang cluster, it might be better to start with a simpler CSI-backed option such as a local-path provisioner or a lightweight solution like Longhorn.
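Whichever provisioner you pick, marking its class as the default is a one-line patch. The class name below is a placeholder, and remember to remove the annotation from any other class so exactly one default remains:

```shell
# List classes; the default one is suffixed with "(default)".
kubectl get storageclass

# Mark your chosen class as the cluster default via the standard annotation.
kubectl patch storageclass <your-storage-class> \
  -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```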

​

Get Big Bang services running and exposed for access

First, you need a load balancer or some other way to expose services (see the Big Bang docs). Without one, the telltale symptom is LoadBalancer services stuck in “pending”.

​

Because our local environment did not include a managed cloud load balancer (such as those provided by AWS or Azure), we needed a bare-metal alternative to support Kubernetes LoadBalancer services. We used kube-vip for this purpose, deploying it in a DaemonSet-based configuration. In this model, kube-vip advertises virtual IPs (VIPs) via the Address Resolution Protocol, since ARP is the only practical way to make a floating IP move between nodes on bare metal. This was required not only for basic service exposure, but also to support Big Bang’s use of Istio, which assumes the presence of a functional LoadBalancer abstraction. We configured kube-vip with a fixed pool of assignable IP addresses and paired it with a wildcard DNS record on our internal network (*.dev.bigbang.mil) so services could be accessed consistently without modifying hosts files across internal systems.
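If, like us, you rely on kube-vip’s cloud-provider component to hand out LoadBalancer IPs, the fixed pool is declared in a ConfigMap. The kubevip name and range-global key follow that component’s documented convention; the address range is a placeholder for your own network:

```shell
# Declare the pool of IPs kube-vip may assign to LoadBalancer services.
kubectl create configmap kubevip \
  --namespace kube-system \
  --from-literal=range-global=192.168.1.220-192.168.1.230
```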

​

Then, run kubectl get virtualservice -A to list the URLs of all Big Bang services. Many of them should be running by now.

​

Tip: It might feel like a waste of time to set up TLS certificates for these Big Bang services early on, but it certainly is not. Because Big Bang emphasizes security by default, you will waste a lot of time trying to “get around” the lack of TLS, and things will fail in very odd ways!

​

Set up proper user accounts on Big Bang services

See Default Credentials - Big Bang Docs for the kubectl commands to get all the default credentials and/or tokens.

​

[PLAT-2] Package and deploy custom software to Big Bang cluster

First, you need to make your software container images available to the cluster by pushing them to the on-cluster Harbor container registry.

​

Push container images to Harbor

You need a user account and a Harbor “project,” which is a workspace where you can store container images.

Note that the image you want to push MUST be tagged appropriately. For example, if the image is edensoft-web:1.0.0.0, you CANNOT merely run this:

docker push edensoft-web:1.0.0.0

​

Instead, you must first do this:

docker tag edensoft-web:1.0.0.0 harbor.dev.bigbang.mil/edensoft-labs/edensoft-web:1.0.0.0

​

And then you can push that new tag.
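Putting it together, the full sequence looks like this. The hostname matches our wildcard DNS record, and the edensoft-labs project path mirrors the example above; substitute your own registry hostname and Harbor project:

```shell
# Authenticate to the on-cluster Harbor registry, retag the local image under
# the registry hostname and project, then push the new tag.
docker login harbor.dev.bigbang.mil
docker tag edensoft-web:1.0.0.0 harbor.dev.bigbang.mil/edensoft-labs/edensoft-web:1.0.0.0
docker push harbor.dev.bigbang.mil/edensoft-labs/edensoft-web:1.0.0.0
```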

​

Configure pulling from Harbor registry

To allow Big Bang to pull from your Harbor instance, you have to provide credentials in dev/secrets/dev-bb-secret.yaml, just like you did with your Registry One SSO credentials.
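As a sketch of the shape this takes inside the values carried by that secret (the key names follow Big Bang’s registryCredentials convention; the usernames and hostnames are placeholders), before SOPS encryption it might look like:

```yaml
registryCredentials:
  - registry: registry1.dso.mil
    username: <your-p1-sso-username>
    password: <your-p1-cli-secret>
  - registry: harbor.dev.bigbang.mil
    username: <your-harbor-username>
    password: <your-harbor-password>
```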

​

Cautiously leverage Harbor SBOM generation and vulnerability scans

Harbor comes with built-in Software Bill of Materials (SBOM) generation and vulnerability scanning for containers.

 

WARNING: As described in more detail below, these Harbor-produced scans are based only on the container image itself. These reports will not know about your custom build system and will likely have one of the other common SBOM failure modes described later.

​

Deploy Helm chart referencing custom software

Now, you need to tell the Big Bang cluster about your custom software by creating a Helm chart that deploys it on the cluster. Here’s an example of such a chart: AtoReadinessCodeExamples/BigBangCustom/chart/your-custom-software/values.yaml at main · Edensoft-Labs/AtoReadinessCodeExamples.

​

And finally, you need to reference this chart in the main Big Bang configmap (dev/configmap.yaml), like the example at AtoReadinessCodeExamples/BigBangCustom/dev/configmap.yaml at main · Edensoft-Labs/AtoReadinessCodeExamples.
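The entry takes roughly the shape below. The key names follow Big Bang’s packages convention, but the repo URL is a placeholder and the exact fields can vary by Big Bang version, so check the linked configmap example for the authoritative shape:

```yaml
packages:
  your-custom-software:
    enabled: true
    git:
      repo: https://git.internal.example/platform/my-bigbang.git  # placeholder URL
      path: chart/your-custom-software
      branch: main
    values: {}  # overrides passed through to your chart
```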

​

[PLAT-3] Integrate custom software with Big Bang services

As shown in the Big Bang configmap example above, integration with Istio and some other services is rather simple. However, other services, like Keycloak, are more complex to integrate and likely require application code changes.

​

Example: Keycloak integration

As a concrete step toward inheriting identity-related controls, we integrated our server-side web application with Keycloak early in the process.

​

The conversion took only a few hundred lines of code: AtoReadinessCodeExamples/KeycloakIntegration/KeycloakIntegrationSnippet.cs at main · Edensoft-Labs/AtoReadinessCodeExamples.

​

Keycloak provides a way to import/export realms, but this functionality has proven to be fragile and likely only works properly when the exact same version of Keycloak is used for export and import. Thus, expect to need to create roles, clients, and users in the Keycloak GUI when transferring from local testing to Big Bang.

​​

Another item to keep in mind: ensure application logs are emitted to standard output/error (rather than to a file) so the platform logging services can collect them.

​

Example: Kiali Network Traffic Reports

On a Big Bang cluster, all Kubernetes network traffic goes through an Istio service mesh, which can be visualized with the Kiali web dashboards. Once your software is running in the cluster, you should capture network traffic reports (such as PDF printouts of the web views) covering your software’s traffic. These would likely include, but are not limited to, the traffic graph, application-specific reports, workload-specific reports, service-specific reports, Istio config, and the overall service mesh visualization.
