How does using Packer and Terraform in a CD pipeline compare to Docker images built from Git?

HOW TO - October 18, 2021

At work we have a GoCD pipeline for producing Docker images and scheduling Docker containers with Rancher.

This works quite well. You can commit a change to the Docker image in Git, and the pipeline will pick it up and upgrade the container. (The CD pipeline tags the Docker image with the build number of the pipeline, which provides continuity throughout the upgrade.)
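The tagging step above might look roughly like this in a GoCD job. `GO_PIPELINE_COUNTER` is the environment variable GoCD exposes for the build number; the image name and registry are hypothetical placeholders:

```shell
# Tag the image with the pipeline's build number (GO_PIPELINE_COUNTER is
# set by GoCD; default to 42 here so the sketch runs standalone).
GO_PIPELINE_COUNTER="${GO_PIPELINE_COUNTER:-42}"
IMAGE_TAG="registry.example.com/myapp:${GO_PIPELINE_COUNTER}"

# docker build -t "$IMAGE_TAG" .    # build from the Dockerfile in Git
# docker push "$IMAGE_TAG"          # publish for Rancher to roll out
echo "$IMAGE_TAG"
```

Because the tag is derived from the pipeline counter, the same identifier follows the image from build through deployment.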

We have a similar arrangement for AWS AMIs using HashiCorp Packer and Terraform. You can make a change to a Packer JSON file in Git, and the pipeline will build a new AMI. Given the user's approval in GoCD (we have some stage-gates there), this can then, in theory, stop the existing EC2 instance and start a new one based on the AMI that was built. (The ID of the AMI is passed through the pipeline.)
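One common way to pass the AMI ID through the pipeline is to parse Packer's `-machine-readable` output, whose artifact lines look like the sample below, and hand the result to Terraform as a variable. This is a sketch; the sample line and variable names are illustrative, not real build output:

```shell
# Sample artifact line from `packer build -machine-readable` (illustrative):
#   timestamp,builder,artifact,index,id,region:ami-id
LINE='1634567890,amazon-ebs,artifact,0,id,eu-west-1:ami-0abc1234def567890'

# Extract the AMI ID from the "artifact ... id" line.
AMI_ID=$(echo "$LINE" | awk -F, '$3 == "artifact" && $5 == "id" {split($6, a, ":"); print a[2]}')
echo "$AMI_ID"   # ami-0abc1234def567890

# In the pipeline proper (hedged sketch, assuming ami.json is the template):
# packer build -machine-readable ami.json | tee build.log
# AMI_ID=$(awk -F, '$3 == "artifact" && $5 == "id" {split($6, a, ":"); print a[2]}' build.log)
# terraform apply -var "ami_id=${AMI_ID}"
```

The AMI ID then plays the same role the image tag does in the Docker pipeline: a single identifier threaded from build to deploy.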

In practice, Terraform doesn't work quite as well in a pipeline as Docker does. Sometimes it is the sheer complexity of everything associated with an EC2 instance and its corresponding VPC/routing arrangements that tangles it. Sometimes it is that you need to write your own custom tests to check that the EC2 instance was actually stopped and then started.
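The hand-rolled checks mentioned above tend to look something like the polling loop below: wait until the instance reports the expected state via the AWS CLI. This is a minimal sketch; the instance ID is a placeholder (in practice it might come from `terraform output`), and the AWS CLI is assumed to be configured:

```shell
# Hypothetical instance ID, e.g. $(terraform output -raw instance_id)
INSTANCE_ID="i-0abc1234def567890"

# Poll describe-instances until the instance reaches the wanted state,
# or give up after 30 attempts (~5 minutes).
wait_for_state() {
  local want="$1" state=""
  for _ in $(seq 1 30); do
    state=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
      --query 'Reservations[0].Instances[0].State.Name' --output text)
    [ "$state" = "$want" ] && return 0
    sleep 10
  done
  echo "timed out waiting for $want (last state: $state)" >&2
  return 1
}

# In the pipeline, after terraform has replaced the instance:
# wait_for_state stopped   # old instance gone
# wait_for_state running   # new instance up
```

With Docker and Rancher this sort of verification comes largely for free from the scheduler; with Terraform and EC2 it tends to be custom glue like this.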

Is there a broader consensus on this issue? Are there some patterns I'm missing? Or is it just that when you're on the bleeding edge, things hurt a bit?

My question is: how does using Packer and Terraform in a CD pipeline compare to Docker images built from Git?


Comments:

- I'm a fan of CI/CD, but I'd be afraid that Terraform's tendency to want to recreate the world from scratch doesn't fit with the waterfall nature of pipelines.
- Great observation: Terraform does have a global, not local, approach. Are there any other tendencies of Terraform you'd like to mention?
- … shows that some people have made this work, so maybe you can too. This question might be easier to answer as "how do these solutions compare?", rather than "please support my assertion that this solution is a bad idea".