Preliminary ATO Documentation Generation
This documentation translates the technical reality into artifacts that an Authorizing Official can evaluate: what the system is, how it works, and how risk is managed.
​
Prerequisite: Produce the technical evidence
The most important evidence is the actual container images containing your software under review, which should be traceable back to source code, CI/CD pipelines, SBOMs, and scans. Supporting technical evidence includes the items discussed in previous sections, such as the following:
​
- Supply chain and vulnerability evidence (Syft Software Bills of Materials, Trivy vulnerability reports). These show that all dependencies are known, tracked, and assessed.
- Configuration and compliance scanning (OpenSCAP compliance reports, NeuVector security reports, Kyverno policy reports, and Kiali network traffic reports). These demonstrate enforcement of runtime and configuration expectations, particularly in Kubernetes environments.
- Secrets scanning (Trivy, TruffleHog). These primarily support Application Security & Development (ASD) STIG requirements.
- Malware and adversarial testing (ClamAV scan results, manual penetration testing reports). These provide assurance against classes of threats that static analysis cannot detect.

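In practice, evidence like this is produced in CI rather than by hand. The following is a minimal sketch of pipeline jobs that generate an SBOM with Syft and a combined vulnerability/secret report with Trivy, assuming a GitLab-style CI file; the job names, the `evidence` stage, and the image reference are illustrative assumptions.

```yaml
# Illustrative CI jobs producing supply chain evidence; job names, the
# "evidence" stage, and the image reference are assumptions.
sbom:
  stage: evidence
  script:
    # Generate an SPDX-format SBOM for the image under review
    - syft registry.example.mil/myapp:1.4.2 -o spdx-json=sbom.spdx.json
  artifacts:
    paths:
      - sbom.spdx.json

vulnerability-and-secret-scan:
  stage: evidence
  script:
    # Scan the same image for CVEs and embedded secrets
    - trivy image --scanners vuln,secret --severity CRITICAL,HIGH --format json --output trivy-report.json registry.example.mil/myapp:1.4.2
  artifacts:
    paths:
      - trivy-report.json
```

The JSON reports these jobs emit are exactly the artifacts that later land in the package's Evidence/ directory.
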
If an ATO package allows the AO to understand what risks exist, why they exist, and how you intend to manage them, you are doing it correctly.
​
Tip: Judiciously leverage AI for drafting ATO documents
Large language models (LLMs) can accelerate your path to ATO in two key ways:
​
First, AI can help you understand the technical reality. AI can directly analyze your actual codebase and infrastructure to help identify compliance gaps and control mappings. The key is giving AI your actual technical artifacts rather than abstract descriptions. Specific approaches include:

- Code analysis with Claude Code or Cursor: Point AI at your actual source code, Dockerfiles, Kubernetes manifests, and configuration files. Ask it to identify potential STIG violations, trace data flows, map authentication/authorization implementations to AC controls, or explain how your current logging setup does (or doesn't) satisfy SI-04 requirements. For example: "Analyze my Kubernetes network policies in the /k8s directory and tell me which SC (System and Communications Protection) controls they satisfy."
- Control mapping from implementation: Give AI your actual architecture (Helm charts, Terraform configs, network diagrams) and ask it to map your technical implementation to NIST 800-53 controls. For example: "Based on this Istio AuthorizationPolicy YAML, which access control requirements from AC-3 and AC-6 does this satisfy, and what's still missing?"

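To make the first prompt concrete, a default-deny NetworkPolicy like the one below is the kind of artifact you would point the AI at; it maps naturally to SC-7 (Boundary Protection) expectations. The policy and namespace names are illustrative.

```yaml
# Hypothetical default-deny policy an AI could map to SC-7 (Boundary Protection).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: myapp            # illustrative namespace
spec:
  podSelector: {}             # applies to every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress: []                 # no ingress unless another policy permits it
  egress: []                  # no egress unless another policy permits it
```
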
Once your system's technical reality is understood and you have credible evidence, AI can accelerate the creation of ATO documents. This includes drafting control narratives for your SSP based on your actual architecture, generating initial POA&M entries from real scan results, or structuring technical descriptions from your deployment manifests.
​
It is vital to understand that AI is not a substitute for robust evidence collection. You still need to generate SBOMs, run vulnerability scans, create architecture diagrams, define authorization boundaries, and document operational procedures. AI can help you understand how these pieces fit together and what they mean for compliance, but it cannot fabricate the artifacts.
​
Used correctly, AI shortens the path from “we understand our system” to “we have a readable SSP.” Used prematurely, it produces confident-sounding fiction that assessors will dismantle quickly. Without proper evidence, AI-generated content will be vague at best and factually incorrect at worst. This is why upstream work like build hygiene, dependency traceability, and evidence generation matters so acutely in ATO efforts.
​
[GEN-5] System Security Plan (SSP)
There is no tool that tells you whether you are ATO-ready. At its core, the Risk Management Framework (RMF) requires English narratives explaining why controls are satisfied, supported by concrete technical evidence. The SSP does this, and that’s why it is the single most important document in your package; authorization hinges on whether the SSP is coherent, comprehensive, and honest.
​
The following sections describe the more specific components of the SSP:
​
[GEN-1] Authorization Boundary
As described above, this is a precise list of every component you are asking the AO to trust. List every virtual machine, container, and database included in the boundary, and describe the mechanisms (e.g., firewalls, ingress controllers, service mesh) that isolate your system from external environments.
​
[GEN-2, 3, 6] Detailed Software Architecture, Data Flows and Encryption
This provides the "why" and "where" of your system.
​
Provide a technical description of how data enters, moves through, and leaves the system. Identify every API endpoint, UI portal, or database connection where data enters or leaves, and map the internal route data takes. An example data path could run from a user’s browser to an ingress gateway, through a sidecar proxy, and into an encrypted database. Highlight exactly where data is encrypted in transit (e.g., mTLS) and at rest (e.g., encrypted PVCs).
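
The in-transit and at-rest claims in such a description correspond to concrete manifests. Here is a minimal sketch: the namespace and StorageClass names are illustrative, and the `encrypted` parameter shown is specific to the AWS EBS CSI driver (other storage provisioners use different parameters).

```yaml
# Enforce mTLS for all workloads in the namespace (encryption in transit).
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
  namespace: myapp          # illustrative namespace
spec:
  mtls:
    mode: STRICT
---
# Encrypted persistent volumes (encryption at rest); the "encrypted"
# parameter is specific to the AWS EBS CSI driver and is an assumption.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"
```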
​
Documenting ports, protocols, and services is also a key part of the SSP documentation. This information can likely be incorporated in a table in the SSP and possibly in the architecture diagram as well.
​
[GEN-4] Control Implementation Narratives
This is the “meat” of the SSP, as it provides an exhaustive control-by-control account against the selected baseline. Your task is to clearly articulate where the system stands today relative to each control in the baseline.
​
Avoid vague policy statements. A "good" narrative is a technical specification of your security architecture:
- Bad (Vague): “The system implements access control to ensure only authorized users can see data. We use RBAC.”
- Good (Traceable): “Access enforcement is managed via Keycloak integrated into the application. The system denies all requests by default. Authorization is enforced through Istio AuthorizationPolicies at the Big Bang service mesh layer, which validates the JWT ‘roles’ claim against the required application feature role (e.g., AdministratorTool_v2). Technical evidence is provided in the NeuVector Security Reports. Big Bang provides the FIPS-validated ingress controller for TLS termination, and the application delegates all session authenticity to the platform and provides the required app-layer metadata for centralized logging.”
​
Many technical controls must be implemented in ways specified by Security Technical Implementation Guides (STIGs). A key part of the documentation is mapping controls to specific STIG rules using the Control Correlation Identifiers (CCIs) published with each rule. Here’s an example:

| ASD STIG Rule | Control Correlation Identifier (CCI) | Requirement | Technical Evidence |
|---|---|---|---|
| SRG-APP-000456 | CCI-002607 | Check for vulnerabilities before production. | Trivy Image Scan: Reported 0 Critical/High findings in build #402. |
| SRG-APP-000516 | CCI-000366 | No embedded credentials. | Trivy Secret Scan: 0 findings for keys or certs in image layers. |

[GEN-7] SRG/STIG Compliance Documentation
We described above the specific SRG/STIG requirements you’ll likely need to meet, as well as where OpenSCAP can automate producing some useful evidence. Compile these results into a spreadsheet referenced from the SSP.
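
The OpenSCAP evidence itself can be produced in CI alongside the other scans. A sketch of such a job follows; the profile ID and datastream path follow SCAP Security Guide naming conventions but depend on your base image, so treat them as assumptions to verify against your content.

```yaml
# Illustrative CI job producing STIG compliance evidence with OpenSCAP.
# The profile ID and datastream path are assumptions tied to your base image.
openscap-scan:
  stage: evidence
  script:
    - oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_stig --results oscap-results.xml --report oscap-report.html /usr/share/xml/scap/ssg/content/ssg-ubi9-ds.xml
  artifacts:
    paths:
      - oscap-results.xml
      - oscap-report.html
```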
​
[GEN-8] Continuous Monitoring Plan Documentation
Continuous monitoring is a key part of risk management for software approvals, and it is central to a Continuous Authority to Operate (cATO) approach.
Provide at least some brief documentation as part of the System Security Plan outlining how continuous monitoring of your software will work, largely leveraging existing DevSecOps platform infrastructure, and note any specifics of your software that may be useful.
​
[GEN-9] Plan of Actions & Milestones (POA&M)
The POA&M is a formal acknowledgement of your cybersecurity technical debt. It is a “living” backlog that shows the AO you understand your system's flaws. Each item in the POA&M should be structured like the following:

- Nature of Finding: Explicitly state the source of the weakness, such as a Trivy vulnerability scan or an OpenSCAP report.
- Risk and Severity: Categorize findings by impact (e.g., CAT I, II, or III).
- Interim Mitigations: Describe the "compensating controls" protecting the system now while the fix is being built (e.g., "While a CVE exists in the web server, we have implemented a Kyverno policy to block all egress traffic from that pod.").
- Realistic Remediation Timeline: Define clear milestones, such as "Re-base on hardened image" and "Regression test in staging".

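The Kyverno mitigation mentioned above can be expressed as a generate rule that stamps a deny-all-egress NetworkPolicy onto the affected workload. The sketch below is illustrative: the namespace, pod label, and policy names are assumptions, and in an incident you might equally apply the NetworkPolicy directly with kubectl.

```yaml
# Sketch of a Kyverno quarantine policy; namespace, labels, and names
# are assumptions for illustration only.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: quarantine-vulnerable-webserver
spec:
  rules:
    - name: deny-webserver-egress
      match:
        any:
          - resources:
              kinds:
                - Namespace
              names:
                - myapp            # illustrative namespace
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: deny-webserver-egress
        namespace: myapp
        synchronize: true          # re-create the policy if it is deleted
        data:
          spec:
            podSelector:
              matchLabels:
                app: webserver     # illustrative pod label
            policyTypes:
              - Egress
            egress: []             # block all egress from matching pods
```
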
[GEN-10] Privacy Documentation
The Privacy Impact Assessment (PIA) documents how the system collects, uses, stores, shares, and protects personally identifiable information (PII) and other privacy-sensitive data. While the SSP addresses security controls and technical risk, the PIA focuses on privacy risk, legal authority for data collection, data minimization, retention, sharing, and lifecycle handling of PII, along with safeguards against misuse or overexposure.
​
For a Big Bang deployment, the PIA should explicitly address whether PII appears in application logs or observability data, whether platform telemetry or monitoring components collect user identifiers, how third-party integrations handle personal data, and how backups, snapshots, deletion, and retention controls protect PII across the platform.
​
If the system does not process PII, the PIA should explicitly state this and document the analysis supporting that determination.
​
[GEN-11] Compile the initial ATO package
Once all required controls have been implemented, inherited, or otherwise formally acknowledged, the ATO work culminates in a single, coherent ATO package. This is not merely a collection of reports; it is the body of evidence an Authorizing Official relies upon to make a personal risk decision. In practice, this package should be assembled as a well-structured archive. A directory structure might look like the following:
​
- Documents/
  - System Security Plan
  - Architecture Diagram
  - Privacy Impact Assessment
  - Security Control Compliance/Inheritance Sheet
  - SRG and STIG Compliance Spreadsheet
  - Plan of Actions & Milestones (POA&M)
- Evidence/
  - CodeQL Scanning Results
  - Secrets Scanning Results
  - SBOMs
  - Trivy Vulnerability Reports
  - OpenSCAP Compliance Reports
  - NeuVector Security Reports
  - Kyverno Policy Reports
  - Kiali Network Traffic Reports
  - ClamAV Scan Results
  - Penetration Testing Results
  - Iron Bank Base Image Reports

Due to size, container images themselves must often be delivered separately.
​
While the DoW is heavily leaning into continuous evidence delivery from live systems, initial ATO review is currently still based on a snapshot-in-time package that tells a complete, internally consistent story.