Terraforming my AWS with Agentic Coding
For a long time, I have been on the fence about AI programming. I could see the potential for a huge disruption to the tech industry, but at the start of 2025, AI models were still a very long way from producing results that were consistently production-grade and reliable, at least in my experience. Sure, for greenfield projects where AI needs no context, or can quickly grep the context it needs, I saw value. But every business I’ve worked in that’s been around for more than a few years has its share of large, challenging and at times unwieldy codebases. I experimented throughout 2024 and 2025 with AI coding in these kinds of codebases and only saw value delivered with extreme levels of handholding.
Add to that the ethical issues around the lack of consent for training data, as well as the vast energy and resource consumption, and I didn’t see the value outweighing the cost. Fast-forward to 2026, and my mindset and workflow have shifted. I’m asking more and more of AI and finding huge benefits in both my professional and personal life.
AI’s Superpower: Bringing order to chaos
One pattern I see consistent value in is AI’s ability to bring order to chaos. Whether it’s making sense of unstructured notes from a meeting, untangling the business value buried in legacy code, or distilling a sprawling set of requirements into structured acceptance criteria, I’m increasingly using AI as a jumping-off point to bring order to chaos. The results have held up as the tasks have grown more complicated, and so I’ve come back to the idea of tackling non-trivial development work with AI.
Spec-driven development
I think my eureka moment came when an engineer on my team shared Speckit with me. It’s a simple set of tools for driving development changes within a codebase, and it can be installed into most of the agentic tools around at the moment. What struck me was the sheer number of steps you execute before committing your agent to writing code.
You have a concept of a constitution (the overarching set of principles for how the AI should engage with the codebase), a spec (what you want to build), a plan (technical implementation details), tasks (an actionable task list), and an execute step (the coding bit). Each step produces Markdown files structured to guide the agent towards the result you’re looking for: scope, design principles, implementation details and a checklist. Each document builds on top of the last and can be produced in a new context window. I tried it out, and the results were pretty good. For me, this was mind-blowing.
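To make that concrete, by the end of the process you are left with a small stack of documents, each one generated from the last and each feeding the next step’s context. The file names below are illustrative rather than necessarily what Speckit calls them:
constitution.md    # principles for how the AI should engage with the codebase
spec.md            # what you want to build
plan.md            # technical implementation details
tasks.md           # the actionable task list, ticked off during execution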
Beforehand, my expectation of agentic coding was that I would craft a single “perfect” prompt and the AI would, in one shot, magically make all of the decisions correctly and deliver perfect code within a single context window. Now I understand the value of the context you give these tools, as well as the value of generating that context with AI. In my head, this raised the question: what’s a relatively challenging change I could make to a pre-existing codebase with agentic coding?
How about making my blog SEO friendly?
For a long time now, my blog has been powered by AWS, and it does the job. There’s a CloudFront distribution sitting in front of a static S3 website, with a Lambda@Edge function controlling some small bits of behaviour, plus CodeBuild to build the static Jekyll site.
Several times in the past, I’ve found it tricky to manage this infrastructure. Whenever I return to it, I kind of forget which pieces there are and where they are in AWS, and it takes me a good while to ramp back up and figure out how to make the changes that I want.
Added to that, a lot of the infrastructure is not source-controlled; it was patched together from different tutorials and articles I read online, copied and pasted, and so on. So I’ve been looking to make some upgrades. There are lots of SEO best practices that are quite easy to implement if you know what you’re doing, and canonicalising URLs and de-duplicating pages were at the top of my list. Currently, the site has two duplication issues:
1. All pages exist on both the www and non-www versions of the site, e.g. www.mattbridgeman.co.uk/blog/ and mattbridgeman.co.uk/blog/
2. All blog posts exist both with and without a trailing slash, e.g. mattbridgeman.co.uk/blog/post-name and mattbridgeman.co.uk/blog/post-name/
This isn’t good for users or search engines; each piece of content should live at a single, unique URL.
The SEO solution to both issues involves redirects. For (1), the solution is a basic upgrade of the hostname to the “www” form when it isn’t present. Easy, right? Well… when I went into AWS to start looking at how to make this change, I realised that, having used my AWS account as both production and a sandbox for a very long time, there’s a sprawling mixture of important and obsolete services in there: Lambda functions in various regions, hello-world projects, and failed attempts at implementing features. Never deleted, of course. Past me was experimenting and learning; future me is paying the price. So, coming back to the infrastructure years later, I once again found it difficult to reason about which services I actually needed and which were obsolete.
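Setting the account sprawl aside for a moment, the redirects themselves have a well-defined end state that’s easy to sanity-check from a terminal. Here’s a rough sketch, assuming the www, trailing-slash form ends up as the canonical one; the responses shown are what I’d expect, not captured output:
# Non-canonical forms should 301 to the canonical URL
curl -sI https://mattbridgeman.co.uk/blog/post-name | grep -iE '^HTTP|^location'
# HTTP/2 301
# location: https://www.mattbridgeman.co.uk/blog/post-name/

# The canonical form should serve the page directly
curl -sI https://www.mattbridgeman.co.uk/blog/post-name/ | grep -iE '^HTTP'
# HTTP/2 200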
Terraform: Infrastructure as code
I’ve always thought it would be nice to manage the blog infrastructure as code, but the undertaking has never really felt like a worthwhile investment of my time, given how long it would take to produce versus just fumbling my way through the AWS console. Agentic coding has changed that. I set out to see if I could retrofit Terraform into my blog’s AWS infrastructure. I didn’t use Speckit, but I followed a similar philosophy of breaking the project down into chunks that could be managed within a context window.
Step 1: Discovery
I started with some basic prompts, asking the AI to explore and discover which AWS services existed and in which regions. It produced a script to perform this discovery which, when executed, dumped JSON describing the AWS services and their configuration into a “discovery-output” folder:
acm-certificates-20260119-134551.json
cloudfront-distributions-20260119-134551.json
discovery-summary-20260119-134551.txt
iam-roles-20260119-134551.json
lambda-MattBridgemanBlogLambda-20260119-134551.json
lambda-MattBridgemanBlogLambda-config-20260119-134551.json
lambda-MattBridgemanBlogLambda-policy-20260119-134551.json
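The generated script itself isn’t worth reproducing in full, but reconstructed from the output files above, the shape of it was roughly this (the function name and region are specific to my setup):
#!/usr/bin/env bash
# Dump the current state of the blog's AWS services to JSON for later analysis
set -euo pipefail
ts=$(date +%Y%m%d-%H%M%S)
out=discovery-output
mkdir -p "$out"

# Certificates used by CloudFront and Lambda@Edge functions both live in us-east-1
aws acm list-certificates --region us-east-1 > "$out/acm-certificates-$ts.json"
aws cloudfront list-distributions > "$out/cloudfront-distributions-$ts.json"
aws iam list-roles > "$out/iam-roles-$ts.json"

fn=MattBridgemanBlogLambda
aws lambda get-function --function-name "$fn" --region us-east-1 > "$out/lambda-$fn-$ts.json"
aws lambda get-function-configuration --function-name "$fn" --region us-east-1 > "$out/lambda-$fn-config-$ts.json"
aws lambda get-policy --function-name "$fn" --region us-east-1 > "$out/lambda-$fn-policy-$ts.json"
# (the real script also wrote a discovery-summary .txt alongside these)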
Step 2: Plan
I then asked it to create a plan to transform the infrastructure into Terraform, which looked a little like this:
## Phase 2: Set Up Source Control Structure
### Step 2.1: Create Function Directory
Create a dedicated directory for the Lambda function code within the Terraform module:
cd terraform/modules/lambda-edge
mkdir -p function
**Directory Structure:**
terraform/modules/lambda-edge/
├── function/
│ ├── index.js # Main Lambda handler
│ ├── package.json # (if dependencies exist)
│ └── .gitignore # Ignore node_modules if present
├── main.tf
├── outputs.tf
├── variables.tf
└── versions.tf
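One detail worth calling out from that step: the contents of the function directory have to come from somewhere, and since my Lambda code was never source-controlled, the deployed function is the source of truth. A sketch of pulling it down (the function name is mine; Lambda@Edge code is fetched from us-east-1):
# Download the currently deployed Lambda@Edge package into the module's function/ directory
url=$(aws lambda get-function --function-name MattBridgemanBlogLambda --region us-east-1 --query 'Code.Location' --output text)
curl -sL "$url" -o /tmp/function.zip
unzip -o /tmp/function.zip -d terraform/modules/lambda-edge/function/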
Step 3: Review
I reviewed the plan, added a few items and removed a few. For example, it wanted to add remote state management via S3 and DynamoDB. If this were a site managed by multiple engineers, where Terraform changes in multiple branches risked conflicting with each other, I could absolutely see the value in that step. But since it’s just me and my blog, I removed it as a requirement.
Step 4: Execute
Once I was happy with the plan, I asked my agent to execute it, creating a new code repository to house all of the Terraform. The AI managed to produce the Terraform needed with very few errors.
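I won’t claim this is exactly what the agent ran, but the standard way to adopt existing resources without recreating them is terraform import, followed by a plan to confirm nothing unexpected changes. A sketch with placeholder resource names and IDs:
# Adopt the existing resources into Terraform state instead of recreating them
terraform import aws_cloudfront_distribution.blog E123EXAMPLEID
terraform import aws_s3_bucket.site my-blog-bucket
terraform import aws_lambda_function.edge MattBridgemanBlogLambda

# The plan should now show no changes, or only the changes you intended
terraform plan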
Step 5: Validation
One thing I have noticed when using AI is that, when it needs to talk to a third-party service, its interpretation of “success” from that third party is not always correct. For example, when it was running AWS CLI commands it was regularly getting a credential error, and it didn’t stop to ask “why?”
When I noticed this, I paused the AI, told it the CLI calls were producing credential errors, and got it to validate the credentials it was using as well as check the default AWS region that was set. What was interesting was that in the initial discovery phase, it correctly identified the credentials and region it needed to use. However, on a second execution of the plan, it forgot them, or assumed they were the defaults.
I asked it to explicitly check credentials and region, and to keep doing so moving forward, and from there on out it didn’t make any further mistakes. It produced Terraform that, when planned and applied, worked the first time, and now all of the AWS services that power my site are backed by Terraform.
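The check itself is nothing sophisticated; something along these lines, run before any Terraform work, is enough to pin down who and where the CLI thinks it is (my sketch, not the agent’s exact commands):
# Confirm which identity and account the AWS CLI is actually using
aws sts get-caller-identity
# Confirm the default region it will fall back to
aws configure get region

# Only then run the Terraform workflow
terraform init
terraform plan -out=tfplan
terraform apply tfplan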
Step 6: Fast iteration
With my AWS services Terraformed, I’ve been able to iterate rapidly. I’ve added all the redirect logic needed to remove and redirect the duplicates, as well as new services to the stack. This is incredibly powerful and, for me, opens up all kinds of new possibilities.
I am by no means a DevOps engineer. I know and work with very highly skilled DevOps engineers who understand an enormous amount about the kinds of errors, workflows and problems you run into when implementing AWS services in Terraform. But, in spite of not being a DevOps engineer, AI has unlocked the ability to quickly produce Terraform-backed infrastructure for my website, and it did so with an understanding of the nuances of how my site works: it was able to investigate, analyse, plan, execute and validate its implementation. Historically, at the companies I’ve worked for, retrofitting Terraform into production infrastructure would have required investigation tickets, a fair number of delivery tickets picked up by expert engineers, and a lot of manual effort to ensure that the Terraform produced represented the complex web of AWS services behind a given service.
Of course, the fear is: does this make engineers obsolete? In the longer term, that does feel like the logical conclusion. How long it will take remains to be seen, as does whether, for example, the enormous losses AI companies are posting can be sustained.
In the short and medium term, I’m optimistic. I’m increasingly asking: what projects have I discarded as too long, too complex, or beyond my technical abilities? The same goes at work, in a team setting. This year, I’m committed to asking what new pathways, workflows and projects can be undertaken with agentic coding.