The AWS Cloud Control Provider Documentation Challenge
The AWS Cloud Control (AWSCC) provider for Terraform offers automated weekly updates to support new AWS resources through the Cloud Control API, abstracting service-specific interactions into a consistent interface. However, this automation created a documentation gap: while the provider could automatically generate resource schemas, practical examples required manual creation. By 2023, the AWSCC provider included 700 resources, but only 250 had working examples despite two years of contributor effort. The challenge intensified as new AWS services launched faster than examples could be written, creating a growing backlog of undocumented resources that limited adoption.
Evolving the LLM Approach: From Prompts to Agentic Workflows
Initial experiments in early 2024 used large language models with extended context windows to generate examples from resource schemas and AWS documentation. While promising, this approach suffered from attention loss across long contexts and hallucinations due to limited AWSCC-specific training data. The breakthrough came with Anthropic's Claude computer use capability, which enabled an agentic workflow where the LLM could access tools like Terraform CLI, validate its own output, and iteratively refine examples. This shift from passive generation to active validation fundamentally changed the quality and reliability of generated documentation.
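The validate-and-refine loop at the heart of this agentic approach can be sketched as follows. This is a minimal illustration, not the team's actual implementation: the function names (`generate`, `validate`), the retry budget, and the error-feedback shape are all assumptions. In practice, `generate` would be a call to Claude via Amazon Bedrock and `validate` would invoke a tool such as `terraform validate`.

```python
from typing import Callable, Optional

def refine_until_valid(
    generate: Callable[[Optional[str]], str],
    validate: Callable[[str], Optional[str]],
    max_attempts: int = 5,
) -> Optional[str]:
    """Agentic loop: ask the model for an example, validate it with a tool,
    and feed any error message back into the next generation attempt."""
    feedback = None  # no error to report on the first attempt
    for _ in range(max_attempts):
        candidate = generate(feedback)  # hypothetical LLM call
        feedback = validate(candidate)  # hypothetical tool check; None == success
        if feedback is None:
            return candidate            # the example passed validation
    return None                         # give up after exhausting the budget
```

The key design point the article describes is that the model is no longer a passive text generator: each failed validation becomes input to the next attempt, so hallucinated attributes get corrected rather than shipped.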
Production Architecture and Orchestration
The production implementation uses AWS Step Functions to orchestrate containerized Lambda functions, each handling specific phases: creation, validation, review, cleanup, and summarization. The system provides Claude with a secure, isolated environment containing Terraform binaries and access to resource schemas from the CloudFormation registry. System prompts establish governance rules around security best practices, tag formatting conventions, and resource-specific considerations like EKS cluster creation times. User prompts guide the LLM through sequential steps—downloading schemas, running terraform init and validate, applying configurations, and setting completion markers that trigger state transitions in the workflow.
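The sequential steps a single phase walks through can be sketched like this. The command list, the marker filename, and the `run_validation_phase` helper are illustrative assumptions; the article only states that the prompts guide the model through `terraform init`, `validate`, and apply, and that a completion marker triggers the Step Functions state transition.

```python
import pathlib
import subprocess
from typing import Callable, Sequence

# Hypothetical Terraform step sequence run inside the isolated container;
# exact flags are assumptions for illustration.
PHASE_COMMANDS: Sequence[Sequence[str]] = (
    ("terraform", "init", "-input=false"),
    ("terraform", "validate"),
    ("terraform", "apply", "-auto-approve", "-input=false"),
)

def run_validation_phase(
    workdir: pathlib.Path,
    runner: Callable[..., subprocess.CompletedProcess] = subprocess.run,
) -> bool:
    """Run each Terraform step in order; on success, write the completion
    marker that lets the workflow advance to its next state."""
    for cmd in PHASE_COMMANDS:
        result = runner(cmd, cwd=workdir, capture_output=True, text=True)
        if result.returncode != 0:
            return False  # stop early; a later review phase handles failures
    # Hypothetical marker file polled by the orchestration layer.
    (workdir / "PHASE_COMPLETE").write_text("ok")
    return True
```

Injecting the `runner` keeps the phase logic testable without a real Terraform binary, which mirrors the article's broader theme of giving the system verifiable, tool-mediated steps rather than opaque generation.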
Results and Future Applications
Over a three-day holiday period, the automated system generated 450 working resource examples at a cost of approximately $400 in Amazon Bedrock inference charges. This output nearly doubled the 250 examples created manually over two years by multiple contributors. The approach significantly reduced hallucinations by enabling the LLM to validate its own work through tool access and iterative refinement. Beyond documentation generation, the team identified potential applications including pre-release resource validation, automated testing of schema changes, continuous validation of existing examples as schemas evolve, and, with appropriate human oversight, autonomous pull request creation.