© 2026 TheSimplifiedTech. All rights reserved.

Interactive Explainer

Artifact Management

Build once, deploy everywhere. Artifact management stores immutable build outputs (container images, JARs, npm packages) in registries so every environment deploys the exact same tested artifact — eliminating the "works in staging, fails in prod" problem.

🎯 Key Takeaways

  • Build once, deploy everywhere: the same artifact (identified by SHA digest) should be deployed to every environment — never rebuild at deploy time
  • Tags are mutable; SHA digests are immutable. Pin production deployments to digests to guarantee you always pull the tested artifact
  • Container registries (ECR, ACR, ghcr.io) store images; package registries (Artifactory, Nexus) store JARs, npm, and PyPI packages — use the right tool for each artifact type
  • Retention policies are mandatory: without them, registries grow unbounded and storage costs become significant
  • Vulnerability scanning should run on every pushed image; fail the pipeline on critical CVEs before the artifact can be promoted

~9 min read


Why Artifact Management Matters: Build Once, Deploy Many

The most common cause of "it worked in staging but not in production" is rebuilding the artifact for each environment. If your CI pipeline builds a Docker image for staging, then builds another image for production — even from the same commit — you have two different artifacts. A transitive dependency updated between builds, a different npm registry cache hit, a different base image layer: all of these produce subtly different binaries.

The Build Once Principle

Build the artifact exactly once in CI. Push it to a registry. Then deploy that exact artifact — identified by its immutable digest — to dev, staging, and production. The same bits that passed your tests are what runs in production. No rebuilding at deploy time.

Artifact management is the practice of: (1) storing build outputs in dedicated registries or repositories, (2) tagging them with meaningful versions, (3) making them immutable once published, (4) enforcing retention policies so storage does not grow unbounded, and (5) scanning them for vulnerabilities before deployment.
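As a small Python sketch of practice (2), tagging with meaningful versions: `build_tag` is a hypothetical CI helper (not part of any tool mentioned here) that composes the combined semver-plus-commit tag this lesson recommends.

```python
import re

def build_tag(version: str, commit_sha: str) -> str:
    """Compose an immutable combined tag like v1.2.3-abc1234:
    semver for human readability, short commit SHA for traceability."""
    if not re.fullmatch(r"v\d+\.\d+\.\d+", version):
        raise ValueError(f"expected a semver tag like v1.2.3, got {version!r}")
    if not re.fullmatch(r"[0-9a-f]{7,40}", commit_sha):
        raise ValueError(f"expected a hex commit SHA, got {commit_sha!r}")
    return f"{version}-{commit_sha[:7]}"
```

Validating both inputs up front matters in CI: a malformed tag pushed once is a malformed tag forever if the registry enforces immutability.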

Types of artifacts and their storage

  • Container images — Docker images stored in container registries: AWS ECR, GCP Artifact Registry, Azure ACR, Docker Hub, GitHub Container Registry (ghcr.io). Referenced by tag (e.g. v1.2.3) or digest (sha256:abc123...).
  • Maven/Gradle JARs — Java artifacts stored in Maven repositories: Artifactory, Nexus, GitHub Packages, AWS CodeArtifact. Versioned by Maven coordinates (groupId:artifactId:version).
  • npm/PyPI packages — JavaScript and Python packages in npm registries and PyPI-compatible repos. Artifactory, Nexus, and cloud-native options (AWS CodeArtifact) act as proxies and private repos.
  • Helm charts — Kubernetes application packages stored in Helm chart repositories: OCI registries (ECR, ACR), ChartMuseum, or GitHub Pages. Versioned with semver.
  • Infrastructure artifacts — Terraform modules, Ansible roles, AMIs (Amazon Machine Images). Stored in Terraform Registry, Ansible Galaxy, or as AMI snapshots in AWS.

Container Registries: ECR, ACR, Artifact Registry, Docker Hub

Container registries are the most common type of artifact registry for modern DevOps teams. They store Docker/OCI images and serve them to container runtimes (Docker, Kubernetes, ECS).

  • AWS ECR (AWS) — $0.10/GB/month after the free tier; no transfer cost within AWS. Key features: IAM-integrated auth, image scanning, lifecycle policies, private by default.
  • Azure ACR (Azure) — Basic/Standard/Premium tiers, roughly $0.10/GB/month. Key features: geo-replication (Premium tier), AKS integration, image scanning via Defender.
  • GCP Artifact Registry (GCP) — $0.10/GB/month after the free tier. Key features: supports Docker, Maven, npm, and Python formats; CMEK encryption; VPC Service Controls integration.
  • Docker Hub (Docker, Inc.) — free tier (one private repo, rate limited) or paid Pro/Team plans. Key features: largest public image catalog; rate limits make it unsuitable for production CI without authentication.
  • GitHub Container Registry (GitHub) — free for public packages; private packages are billed under GitHub Packages. Key features: deep GitHub Actions integration; packages linked to repos; great for OSS.
  • JFrog Artifactory (JFrog) — SaaS and self-hosted pricing. Key features: universal registry (Docker, Maven, npm, PyPI, Helm); advanced access control; Xray vulnerability scanning.

Use Cloud-Native Registries in Production

For AWS workloads, use ECR. For GCP, use Artifact Registry. For Azure, use ACR. These integrate natively with IAM/RBAC, are co-located with your compute (no egress costs pulling images), and offer built-in vulnerability scanning. Docker Hub is excellent for public images but should not be used as your private production registry due to rate limits and dependency on a third-party service.

Immutable Tagging and Versioning Strategies

Tags in container registries are mutable by default — you can push a new image and overwrite the existing :latest or :v1.0.0 tag. This is a trap. Immutable tagging (available in ECR, ACR) makes tags permanent once pushed, which is the correct production posture.

Never Use :latest in Production

:latest is the default tag when no tag is specified. It is mutable — anyone can push a new image and overwrite it. If two services both reference :latest, they may run completely different code without knowing it. In production, always pin to a specific version tag or, better yet, a SHA digest.

Versioning strategies from worst to best

  • :latest (avoid) — Mutable, unclear what version is running, no rollback path. Never use in production.
  • Semantic version tags (:v1.2.3) — Clear, human-readable, supports rollback. But tags can be overwritten in mutable registries. Use immutable tag enforcement.
  • Git SHA tags (:abc1234) — Ties the image to an exact commit. Immutable by definition if you never push the same SHA twice. Makes tracing builds trivial.
  • Combined tags (:v1.2.3-abc1234) — Combines semver readability with git SHA immutability. Best of both: human-friendly version + traceable commit.
  • SHA digest (@sha256:abc...) — The most immutable reference. Guaranteed to always pull the exact same image bytes, even if tags are moved. Use for production Kubernetes manifests: image: myapp@sha256:abc...
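To make these reference styles concrete, here is a small Python sketch (`reference_kind` is a hypothetical helper, not a registry API) that classifies how strongly an image reference pins the artifact:

```python
def reference_kind(image_ref: str) -> str:
    """Classify how strongly an image reference pins the artifact:
    'digest' (immutable bytes), 'tag' (mutable unless the registry
    enforces tag immutability), or 'latest' (avoid in production)."""
    if "@sha256:" in image_ref:
        return "digest"
    # Only the last path component can carry a tag; this keeps registry
    # ports (registry.example.com:5000/app) from being read as tags.
    name = image_ref.rsplit("/", 1)[-1]
    if ":" not in name or name.endswith(":latest"):
        return "latest"
    return "tag"
```

A check like this makes a useful CI lint: fail the pipeline if any production manifest resolves to anything other than "digest".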

Pin Production Deployments to SHA Digests

In your Kubernetes Deployment manifests or ECS task definitions, reference images by SHA digest, not tag: image: 123456789.dkr.ecr.us-east-1.amazonaws.com/myapp@sha256:abc123def456. This guarantees that even if someone pushes a new image with the same tag, your production workload pulls the exact tested bytes.
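A deploy script might apply this pinning as sketched below. `pin_to_digest` is an illustrative helper; the digest value itself would come from your CI records or a registry query such as `aws ecr describe-images`.

```python
def pin_to_digest(image_ref: str, digest: str) -> str:
    """Rewrite a tag-based reference into an immutable digest reference,
    e.g. repo/myapp:v1.2.3 -> repo/myapp@sha256:<hex>."""
    if not digest.startswith("sha256:"):
        raise ValueError("digest must look like sha256:<hex>")
    if "/" in image_ref:
        prefix, name = image_ref.rsplit("/", 1)
        prefix += "/"
    else:
        prefix, name = "", image_ref
    name = name.split(":", 1)[0]  # drop any :tag on the final path component
    return f"{prefix}{name}@{digest}"
```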

Promoting Artifacts Across Environments

The promotion pattern is the correct way to move artifacts from CI through environments to production. Build the artifact once, store it, and promote the same artifact (by updating which version is deployed) through your environments.

  1. CI builds image: On every merge to main: docker build, docker push to the registry with tag v1.2.3-abc1234 and digest. Record the digest in CI artifacts.
  2. Deploy to dev: Automated deploy to the dev environment using the new digest. Run smoke tests. If they pass, the artifact is eligible for staging.
  3. Deploy to staging: Promote the same digest to staging. Run full integration and performance tests. No rebuilding — same artifact.
  4. Deploy to production: After staging validation, deploy the same digest to production. The artifact is now known-good: it passed all tests in both dev and staging.
  5. Rollback if needed: If production has issues, revert the deployment manifest to the previous digest. The old image is still in the registry — instant rollback.


Artifact Promotion vs Environment Branching

Resist the temptation to build separate images per environment (dev-image, staging-image, prod-image). This defeats the purpose of build once. Instead, use environment-specific configuration (ConfigMaps, environment variables, Secrets) to customize behavior. The image is the same; only the config differs.
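The promotion flow reduces to a simple gate: a digest may only enter an environment after the identical digest has passed the previous one. A Python sketch, with `promote` and `ENV_ORDER` as illustrative names rather than any specific tool's API:

```python
ENV_ORDER = ["dev", "staging", "production"]

def promote(deployed: dict, digest: str, target_env: str) -> dict:
    """Promote a digest into target_env, but only if the very same
    digest has already been validated in the previous environment.
    `deployed` maps environment name -> currently deployed digest."""
    idx = ENV_ORDER.index(target_env)
    if idx > 0 and deployed.get(ENV_ORDER[idx - 1]) != digest:
        raise RuntimeError(
            f"{digest} has not been validated in {ENV_ORDER[idx - 1]}"
        )
    return {**deployed, target_env: digest}

# The same digest moves through every environment; only the mapping changes.
state = {}
for env in ENV_ORDER:
    state = promote(state, "sha256:abc123", env)
```

Rollback in this model is just writing an older digest back into the mapping; the gate only guards forward promotion.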

Retention Policies and Storage Cost Management

Without retention policies, artifact registries grow unbounded. A busy team pushing 20 images per day to ECR will accumulate 7,300 images per year; at 500 MB per image that is roughly 3.6 TB, or about $365/month at the $0.10/GB rate.

ECR Lifecycle Policy: Keep Last 10 Tagged, Expire Untagged After 1 Day

Set ECR lifecycle policies to keep the 10 most recent versioned images and expire untagged (intermediate build) images after 1 day. Also keep any image currently deployed to production regardless of count — these are your rollback safety net.
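The policy in this tip can be sketched as pure selection logic. In practice you would configure it declaratively as an ECR lifecycle policy rather than run code; `images_to_expire` below is purely illustrative, assuming each image is a dict with "digest", "tags", and "pushed_at" fields.

```python
from datetime import datetime, timedelta, timezone

def images_to_expire(images, keep_tagged=10, untagged_max_age_days=1,
                     in_production=()):
    """Select digests to delete: keep the `keep_tagged` most recently
    pushed tagged images, expire untagged images older than
    `untagged_max_age_days`, and never touch a digest currently
    deployed to production."""
    now = datetime.now(timezone.utc)
    tagged = sorted((i for i in images if i["tags"]),
                    key=lambda i: i["pushed_at"], reverse=True)
    keep = {i["digest"] for i in tagged[:keep_tagged]} | set(in_production)
    expired = []
    for img in images:
        if img["digest"] in keep:
            continue
        if img["tags"]:
            expired.append(img["digest"])      # fell out of the keep window
        elif now - img["pushed_at"] > timedelta(days=untagged_max_age_days):
            expired.append(img["digest"])      # stale intermediate build
    return expired
```

Note how the production pin overrides the age- and count-based rules: rollback targets must survive any cleanup.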

Retention policy best practices

  • Keep N most recent production-deployed versions — Always retain the last 3-5 versions that have been deployed to production, regardless of age. These are your rollback options.
  • Expire untagged images quickly — Untagged images (created during multi-stage builds) have no rollback value. Remove them after 1-7 days.
  • Separate retention by tag prefix — Release tags (v1.2.3) get longer retention. Branch tags (feature-*) get shorter retention (7 days). Use tag patterns in lifecycle rules.
  • Scan and expire vulnerable images — Some registries (ECR with Amazon Inspector, Artifactory with Xray) can automatically expire images with critical CVEs. Remove artifacts you would never deploy.
How this might come up in interviews

Artifact management comes up in CI/CD design questions, DevOps maturity assessments, and "how do you ensure production matches staging" scenarios. Security-focused interviews may probe vulnerability scanning in registries.

Common questions:

  • What is the "build once, deploy many" principle and why does it matter?
  • How would you prevent the "works in staging, fails in production" problem at the artifact level?
  • What is the difference between a mutable tag like :latest and an image digest? When would you use each?
  • Walk me through how you would set up artifact management for a team with Docker, Maven, and npm artifacts.
  • How do you handle artifact retention to control registry storage costs?

Strong answer: Explaining the build once principle clearly. Knowing the difference between tags and digests. Mentioning immutable tag enforcement. Having a concrete retention policy strategy. Understanding the promotion pattern (same digest through dev → staging → prod).

Red flags: Rebuilding images at deploy time. Using :latest in production. Not knowing what an image digest is. No retention policy strategy. Storing credentials in images.

Quick check · Artifact Management


Your team builds a Docker image in CI, deploys it to staging, and after it passes tests, rebuilds the image from the same commit for production. What risk does this introduce?

🧠 Mental Model

💡 Analogy

Artifact management is like a pharmaceutical supply chain. A drug is manufactured once in a certified facility (CI builds the image). It is given a batch number and stored in a regulated warehouse (registry with immutable tags). The same batch is tested in clinical trials (staging tests). If it passes, the same physical batch (same SHA digest) is shipped to pharmacies (production). You never manufacture a new batch for each hospital — that would defeat the purpose of quality control.

⚡ Core Idea

Build once, store immutably, deploy everywhere. The artifact that passes CI tests is the exact artifact that runs in production — tracked by SHA digest. Rebuilding for each environment is the enemy of reproducibility.

🎯 Why It Matters

Every time you rebuild an artifact, you risk subtle differences. Package registries change. Base images update. Network flakiness causes partial downloads. Proper artifact management eliminates this entire class of problems: if it passed tests in CI, it will behave identically in production.
