DevSecOps Baseline Evidence Automation
Automated evidence generation for ATO is no longer optional. AOs increasingly expect security-relevant evidence (such as SBOMs and vulnerability data) to be generated and refreshed as part of normal system operation, not produced once and left to decay.
​
As a result, tasks related to generating this technical evidence should not be viewed as “completed” once you have produced some static artifacts. Instead, this evidence generation must be integrated directly into the DevSecOps platform itself to produce living evidence over time.
​
[EVID-1] Implement automated container build and deployment pipeline
We implemented an automated Jenkins build pipeline that produces reproducible container images, generates SBOMs, runs vulnerability and baseline scans, signs images, and publishes them to the registry consumed by the Big Bang cluster. The development pipeline generates trusted, security-analyzed artifacts. The Big Bang cluster enforces runtime policy and provides continuous monitoring. This model turns security evidence into a routine output of engineering workflow rather than a separate compliance activity.
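As a sketch of what such a pipeline can look like, here is a hedged outline in Jenkins declarative syntax. This is not our actual Jenkinsfile: the registry name, profile file, and exact tool invocations are illustrative assumptions, and cosign in particular needs key or keyless signing configuration that is omitted here.

```groovy
pipeline {
    agent any
    environment {
        IMAGE = "registry.example.mil/myapp"   // hypothetical registry address
    }
    stages {
        stage('Build image') {
            steps { sh 'docker build -t $IMAGE:$GIT_COMMIT .' }
        }
        stage('Generate SBOM') {
            steps { sh 'syft $IMAGE:$GIT_COMMIT -o cyclonedx-json > sbom.json' }
        }
        stage('Vulnerability scan') {
            // Fail the build on HIGH/CRITICAL findings
            steps { sh 'trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE:$GIT_COMMIT' }
        }
        stage('Baseline scan') {
            steps { sh 'oscap-docker image $IMAGE:$GIT_COMMIT xccdf eval profile.xml' }
        }
        stage('Sign and push') {
            steps { sh 'docker push $IMAGE:$GIT_COMMIT && cosign sign $IMAGE:$GIT_COMMIT' }
        }
    }
    post {
        always { archiveArtifacts artifacts: 'sbom.json', allowEmptyArchive: true }
    }
}
```

The key property is that the SBOM, scan results, and signature are produced on every build, so the evidence refreshes itself as the system changes.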
​
[EVID-2] Integrate automated SBOM generation
Before generating any SBOMs for your software, study the 2025 Minimum Elements for a Software Bill of Materials (SBOM) | CISA very carefully. It specifies in great detail the items that DoW expects in an SBOM.
We use Anchore Syft to generate SBOMs. Whatever generator you use, you must validate that your SBOM is comprehensive for all the software you are planning to deploy. Several failure modes are worth calling out explicitly:
​
- Custom build systems. If there are dependencies in your build system that aren’t represented in a universal standard format (such as package-lock.json), Syft and other SBOM generators will likely NOT pick them up, because they simply don’t know where to look.
- Unmappable components. SBOM entries that cannot be reliably correlated with vulnerability intelligence undermine downstream risk analysis. SBOM-based scanning depends on accurate identifiers (such as package URLs, versions, and ecosystems). If components are mislabeled, ambiguously identified, or mapped to the wrong namespace, known vulnerabilities may be silently missed. An SBOM can be “complete” yet operationally useless if its components cannot be mapped to vulnerability data.
- Static binary blind spots. If you copy a statically linked Go, Rust, or C++ binary into an image, Syft will often report only the binary itself, missing the libraries compiled into it. The fix is to merge build-time SBOM data with packaging-time SBOM data. Do not rely on post hoc binary inspection.
- Sideloaded libraries. Dependencies fetched via curl, copied from vendor directories, or otherwise bypassing a package manager will likely not appear unless you deliberately include them. These must either be modeled in your build system or manually appended.
- Missing transitive dependencies. DoW heavily emphasizes supply-chain depth. An SBOM that lists only top-level third-party packages, without their dependency trees, is unlikely to be considered sufficient.
- Build versus runtime confusion. Multi-stage Docker builds can accidentally include compilers and build tools if scanned incorrectly, or they can omit runtime dependencies entirely if the final image is minimal. Ensure scans target the final artifact while preserving build-time dependency knowledge.
- Base image drift. Ambiguous tags such as ubuntu:latest or node:18 introduce silent SBOM staleness. Your SBOM may describe one image while the tag now resolves to another. Reproducible builds require pinned digests, not floating tags.
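Several of these failure modes (static binaries, sideloaded libraries) come down to merging SBOM data from more than one stage. Here is a minimal sketch of that merge, assuming both SBOMs are CycloneDX JSON already parsed into dicts; `merge_components` is our own helper, not a Syft feature.

```python
def merge_components(image_components, build_components):
    """Union two CycloneDX component lists, deduplicating by purl.

    Components without a purl cannot be proven identical, so all of
    them are kept (and should be flagged for manual identification).
    """
    by_purl, no_purl = {}, []
    for comp in image_components + build_components:
        purl = comp.get("purl")
        if purl is None:
            no_purl.append(comp)          # no reliable identifier to dedupe on
        elif purl not in by_purl:
            by_purl[purl] = comp          # first occurrence wins
    return list(by_purl.values()) + no_purl
```

Run this over the packaging-time SBOM plus the build-time SBOM, and the vendored or statically linked components that Syft misses in the final image survive into the merged document.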
​
Because of these issues, just scanning your final container images likely won’t meet DoW requirements. DoW cares very much about having complete and accurate SBOMs to assess software supply chain risks, so we implemented an SBOM utility that hooks into our custom build system and generates CycloneDX-compliant SBOMs based on our custom dependency tree. You can read the code here as inspiration for your own integrations: AtoReadinessCodeExamples/Syft/SbomCommand.py at main · Edensoft-Labs/AtoReadinessCodeExamples.
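The linked code is the real integration; as a stripped-down illustration of the core idea, here is a sketch that turns a hypothetical in-memory dependency list (the shape of `deps` is our assumption, not a real build-system API) into a minimal CycloneDX 1.5 document.

```python
import json
import uuid
from datetime import datetime, timezone

def to_cyclonedx(deps):
    """deps: iterable of {"name", "version", "purl"} dicts harvested
    from a custom build system's dependency tree."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "metadata": {"timestamp": datetime.now(timezone.utc).isoformat()},
        "components": [
            {"type": "library", "name": d["name"],
             "version": d["version"], "purl": d["purl"]}
            for d in deps
        ],
    }

if __name__ == "__main__":
    bom = to_cyclonedx([{"name": "zlib", "version": "1.3",
                         "purl": "pkg:generic/zlib@1.3"}])
    print(json.dumps(bom, indent=2))
```

A real implementation needs more (dependency relationships, hashes, supplier data), but the shape of the problem is this: walk your build system’s own dependency graph and emit standard-format components that downstream scanners can consume.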
​​
You can also run a more lightweight shell script that runs Syft in a Docker container to directly scan another Docker image: AtoReadinessCodeExamples/Syft/RunAnchoreSyftScanAgainstContainer.sh at main · Edensoft-Labs/AtoReadinessCodeExamples.
​
When evaluating your SBOMs, ask yourself these questions:
- Can every executable or loadable file in the final image be traced to a component in the SBOM?
- For each compiled artifact, can you explain whether its dependencies are dynamic, static, or vendored? Where is that relationship declared?
- If a reviewer points at an arbitrary binary and asks what it is and where it came from, can you answer in under a minute?
- For each SBOM component, can its identifiers (package URL, ecosystem, version) be reliably mapped to vulnerability intelligence?
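The last question lends itself to automation. Here is a small sketch (our own helper; the purl pattern is a deliberate simplification of the full purl spec) that flags components whose identifiers lack the ecosystem/version structure scanners match on.

```python
import re

# Simplified purl shape: pkg:<ecosystem>/<name>@<version>
PURL_RE = re.compile(r"^pkg:[a-z0-9.+-]+/.+@[^?#]+")

def unmappable_components(components):
    """Return names of SBOM components that vulnerability scanners
    likely cannot correlate with advisory data."""
    return [
        comp.get("name", "<unnamed>")
        for comp in components
        if not PURL_RE.match(comp.get("purl", ""))
    ]
```

Anything this reports deserves a manual story: either fix the identifier, or document why the component is tracked some other way.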
​
[EVID-3] Integrate automated vulnerability scanning
The Software Fast Track Initiative (SWFT) and other DoW publications identify these scans as a key part of continuous security verification. We chose Aqua Trivy for container image vulnerability scanning, since it’s used as part of the Harbor registry in Big Bang and also recommended by Rise8. However, Anchore Grype is also a good choice.
​
It’s crucial to understand that these vulnerability scanners don’t analyze your code or binaries for new bugs. They generate an SBOM, identify the third-party packages and versions in your image, and match them against known vulnerability databases (like NVD and vendor advisories). If a package version is listed as vulnerable, they report the associated CVE (Common Vulnerabilities and Exposures) identifier. This is dependency/version pattern matching for detecting supply chain risks, not deep code inspection. Thus, such scanning is NOT a replacement for SAST (Static Application Security Testing), DAST (Dynamic Application Security Testing), fuzzing, or manual security review.
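In other words, the core of such a scanner is a lookup, not an analysis. A deliberately naive sketch of that lookup follows; the advisory table and CVE ID are invented for illustration, whereas real scanners consume NVD, GHSA, and vendor feeds and do proper version-range comparison.

```python
# Toy advisory data, keyed by (ecosystem, package). Entirely invented.
ADVISORIES = {
    ("pypi", "examplelib"): [("CVE-0000-0001", {"1.0.0", "1.0.1"})],
}

def match_vulns(components):
    """Report CVEs whose affected-version set contains a component's
    exact version. This is the dependency/version pattern matching
    described above -- no code is ever inspected."""
    findings = []
    for comp in components:
        for cve, affected in ADVISORIES.get((comp["ecosystem"], comp["name"]), []):
            if comp["version"] in affected:
                findings.append((comp["name"], comp["version"], cve))
    return findings
```

Note what this implies: if your SBOM misidentifies a package or version, the lookup silently returns nothing, which is exactly the “unmappable components” failure mode described earlier.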
​
The same weaknesses with SBOMs described above also apply here. To ensure that we get accurate vulnerability scans, we scan our container images with Trivy and then “scan” the SBOMs that we generated in the previous step. That flags any vulnerabilities in our containers that are not otherwise visible in the image scan.
See how we integrated this in our custom build system: AtoReadinessCodeExamples/Trivy/Trivy.py at main · Edensoft-Labs/AtoReadinessCodeExamples.
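Comparing the two result sets mechanically is straightforward. Here is a sketch, assuming reports in the shape of Trivy’s JSON output (`Results[].Vulnerabilities[]`); treat the exact field names as an assumption about your Trivy version.

```python
def findings(report):
    """Flatten a Trivy-style JSON report into (package, version, CVE) tuples."""
    out = set()
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            out.add((vuln["PkgName"], vuln["InstalledVersion"],
                     vuln["VulnerabilityID"]))
    return out

def compare_scans(image_report, sbom_report):
    image, sbom = findings(image_report), findings(sbom_report)
    return {
        "all": image | sbom,
        "sbom_only": sbom - image,   # invisible to the plain image scan
    }
```

The `sbom_only` bucket is the interesting one: anything landing there is a vulnerability your image scan alone would have missed, which is precisely why we scan both.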
​
[EVID-4] Integrate automated OpenSCAP container scanning
OpenSCAP (an open-source implementation of SCAP, the Security Content Automation Protocol) evaluates physical or virtual machines against STIGs and other sets of security rules. OpenSCAP provides machine-readable answers to a concrete compliance question: “Does this system’s configuration match a prescribed baseline?” It is not designed to reason about application logic, distributed architectures, or runtime behavior.

[Screenshot omitted: the start of a human-readable OpenSCAP report]

When run on containers, OpenSCAP simply evaluates whether the container image’s internal state matches the security baseline. Iron Bank provides OpenSCAP reports for its images on this basis, and we chose to generate equivalent reports for our own images to strengthen our ATO posture.
​
However, OpenSCAP does NOT evaluate against the ASD STIG or Container SRG described above. It only concerns itself with checking state that can be meaningfully verified by machine.
​
To actually do this scanning, we found it easiest to reuse Iron Bank’s own deployment pipeline containers, namely Iron Bank Containers / Iron Bank Pipeline Images / pipeline-runner-alpine · GitLab, which include all the necessary tools.
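Once those scans run in the pipeline, the resulting XCCDF files can be summarized mechanically for trend tracking. Here is a sketch that tallies rule results from a results document such as the ones `oscap xccdf eval --results` writes; the wildcard-namespace matching requires Python 3.8+.

```python
import xml.etree.ElementTree as ET
from collections import Counter

def summarize_xccdf(xml_text):
    """Tally rule results (pass, fail, notapplicable, ...) from an
    XCCDF results document, ignoring namespace differences."""
    root = ET.fromstring(xml_text)
    return Counter(
        elem.findtext("{*}result", default="unknown")
        for elem in root.iter()
        if elem.tag.split("}")[-1] == "rule-result"
    )
```

Publishing these tallies per build makes baseline drift visible immediately instead of at the next assessment.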
​
[EVID-5] Integrate Static Application Security Testing and Secret Scanning
We enabled GitHub Advanced Security on our repos, which includes CodeQL (SAST) and secret scanning. You might be able to do similar scanning fully on-premises by leveraging the GitLab instance included with Big Bang.
​
[EVID-6] Leverage OSCAL and other compliance-as-code solutions to help generate control narratives
As ATO packages grow in size and complexity, teams naturally look for ways to reduce the manual effort involved in tracking controls, evidence, and assessment artifacts. This has led to increased interest in “compliance-as-code” approaches; that is, using structured, machine-readable formats and tooling to manage authorization data alongside the system itself.
​
OSCAL (Open Security Controls Assessment Language) is a set of schemas for JSON, XML, and YAML to help machines track and merge security control implementation information. It is the poster child for “compliance-as-code” initiatives, which do indeed hold much promise for reducing the manual verification and validation burden. However, there is much hype around OSCAL and compliance-as-code. Here, you’ll learn what we actually found valuable and what just isn’t mature yet.
​
To start learning more about OSCAL, see https://pages.nist.gov/OSCAL/learn/concepts/layer/.
It is essential to understand that despite the hype surrounding OSCAL and compliance-as-code, OSCAL is just a set of data interchange formats; linking those formats to technical evidence remains your job. OSCAL does not “magically” determine whether your system meets a given NIST SP 800-53 control. There is no tool that reads an OSCAL SSP or component definition, checks it against your system, and then declares, “You are compliant with AC-3, CM-6, SI-2.” The linking of the two (“this evidence supports this OSCAL control implementation”) is performed outside the tools, typically by a Governance, Risk, and Compliance (GRC) platform, a custom pipeline, or an assessor during evaluation. There are projects trying to achieve this dream, notably oscal-compass/compliance-to-policy; however, such efforts don’t seem mature yet.
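That linkage can still live in version control as a simple artifact of your own. A sketch follows; the evidence index is our own convention, not an OSCAL feature.

```python
def unevidenced_controls(implemented_control_ids, evidence_index):
    """implemented_control_ids: control IDs claimed in an OSCAL component
    definition or SSP. evidence_index: our own mapping of control ID to
    evidence artifacts (file paths, pipeline job URLs, report hashes).
    Returns claimed controls with no supporting evidence recorded --
    exactly the gap that no current OSCAL tool closes for you."""
    return sorted(c for c in implemented_control_ids if not evidence_index.get(c))
```

Running this in CI turns “do we have evidence for every claimed control?” into a failing build instead of an assessment-day surprise.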
​
A major use case for OSCAL is pulling in the OSCAL version of your chosen NIST 800-53 baseline and comparing it against the Big Bang OSCAL components to see exactly which controls Big Bang might let you inherit, and why. Indeed, the whole NIST 800-53 catalog, along with numerous baselines, is officially available from NIST in OSCAL format: oscal-content/nist.gov/SP800-53/rev5 at main · usnistgov/oscal-content. Big Bang also provides detailed information in OSCAL format about controls you can inherit from it and its components. These files are helpful when you’re seeking to inherit controls from Big Bang, because they uniquely identify the controls and what Big Bang provides for you. (However, you still need to verify that the OSCAL files are not out-of-date or inconsistent.) Here are some examples:
​
​
(You can find more by looking at the respective public repos for Big Bang and its components. The OSCAL file should always be in the repository root at oscal-component.yaml.) These OSCAL component definitions look like this:
​
control-implementations:
  - uuid: 06717F3D-CE1E-494C-8F36-99D1316E0D13
    description: Controls implemented by authservice for inheritance by applications
    implemented-requirements:
      - uuid: 1822457D-461B-482F-8564-8929C85C04DB
        control-id: ac-3
        description: >-
          Istio RequestAuthentication and AuthorizationPolicies are applied
          after Authservice. Istio is configured to only allow access to
          applications if they have a valid JWT, denying access by default.
          Applications that do not use Authservice do not have these policies.
​
To “merge” these various OSCAL files together to see what Big Bang can offer us, we experimented with compliance-trestle (one of the more mature OSCAL toolkits). We found the learning curve too steep for our needs as a very small team pursuing our first ATO. We got much more mileage out of using Excel/Power Query to handle OSCAL data. The general principle we’ve discovered is that OSCAL is a very comprehensive format, but the tooling around it is rather immature. So don’t be afraid to create your own simple tools to work with OSCAL files.
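In that spirit, here is the kind of “simple tool” we mean: a sketch that flattens the control-implementations block of a parsed oscal-component.yaml (load the file first, e.g. with yaml.safe_load; we start from the already-parsed dict) into CSV rows that Excel or Power Query can pivot and compare against your baseline.

```python
import csv
import io

def component_rows(component_def, source):
    """Yield (source, control-id, description) rows from the
    control-implementations block of a parsed OSCAL component definition."""
    for impl in component_def.get("control-implementations", []):
        for req in impl.get("implemented-requirements", []):
            yield (source,
                   req.get("control-id", ""),
                   " ".join(req.get("description", "").split()))  # collapse whitespace

def to_csv(rows):
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["source", "control-id", "description"])
    writer.writerows(rows)
    return buf.getvalue()
```

Run it over each Big Bang component file with the component name as `source`, concatenate the rows, and you have a single spreadsheet of every inheritable control claim, ready to diff against your baseline’s control IDs.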