Images, Dockerfile, and container runtime: packaging and running apps consistently.
Containers run processes in isolated environments that share the host kernel. Unlike VMs, they do not boot a full OS—they start quickly and use less memory. Docker is the most common tool to build, ship, and run containers. Images are read-only templates; containers are running instances of an image.
Containers get their own filesystem, network stack, and process space (via Linux namespaces and cgroups), but they run on the same kernel as the host. That is why they are lightweight and fast.
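One quick way to see the shared kernel in action (assuming Docker is installed locally) is to compare the kernel version on the host with the one reported inside a container:

```shell
# On the host
uname -r

# Inside a container — prints the *host's* kernel version,
# because a container does not boot its own kernel
docker run --rm alpine uname -r
```

Both commands report the same kernel, even though the Alpine container has a completely different userland from the host.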
A Dockerfile defines how to build an image: base image (FROM), copy files (COPY), run commands (RUN), set environment (ENV), expose ports (EXPOSE), and define the command to run (CMD or ENTRYPOINT). Each instruction adds a layer; layers are cached, so order matters for build speed.
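As a sketch, a Dockerfile for a hypothetical Node.js app (the file names, port, and start command are assumptions) might put these instructions together like this:

```dockerfile
# Base image: a specific tag, not "latest"
FROM node:18-alpine

# Work in a dedicated app directory
WORKDIR /app

# Copy dependency manifests first so this layer stays cached
# until package*.json actually changes
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source (changes often, so it comes last)
COPY . .

# Document the port the app listens on
EXPOSE 3000

# Default command when the container starts
CMD ["node", "server.js"]
```

Note the ordering: the dependency-install layer sits above the `COPY . .` layer, so editing application code does not invalidate the cached `npm ci` step.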
Multi-stage builds use one stage to compile or prepare assets and a later stage to produce the final image, keeping it small and free of build tools. Best practices: pin specific version tags (e.g. node:18-alpine) rather than latest, run as a non-root user, and minimize the number of layers.
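Under the same assumptions (a hypothetical Node.js app with a `build` script that emits a `dist/` directory), a multi-stage version could look like:

```dockerfile
# Stage 1: build with the full toolchain and dev dependencies
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: runtime image with only the artifacts it needs
FROM node:18-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
# Run as the non-root "node" user provided by the base image
USER node
CMD ["node", "dist/server.js"]
```

Compilers, dev dependencies, and source files stay in the first stage; only the built output is copied into the image that ships.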
docker run starts a container from an image; you can map ports (-p), mount volumes (-v), set env vars (-e), and run in the background (-d). docker compose defines multi-container apps in a YAML file—services, networks, volumes—so you can bring the whole stack up with one command.
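The flags above combine into a single command; the image, port, and volume names here are illustrative:

```shell
# Run detached (-d), map host port 8080 to container port 3000 (-p),
# set an env var (-e), and mount a named volume (-v)
docker run -d \
  -p 8080:3000 \
  -e NODE_ENV=production \
  -v app-data:/app/data \
  --name web myapp:1.0
```

A minimal Compose file for a hypothetical two-service stack (a web app and a Postgres database) might look like:

```yaml
services:
  web:
    build: .
    ports:
      - "8080:3000"
    environment:
      NODE_ENV: production
    depends_on:
      - db
  db:
    image: postgres:16-alpine
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

With this file in place, `docker compose up -d` starts both services on a shared network, and `docker compose down` tears the stack back down.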
In production, an orchestrator (Kubernetes, ECS, AKS) manages scheduling, scaling, health checks, and rolling updates. Understanding Docker and Dockerfile is the foundation for working with any of them.
DevOps and platform interviews: expect to explain containers, Dockerfile best practices, and how you would containerize an app.
Common questions:

Why use a multi-stage Docker build?
To keep the final image small and free of build tools: one stage compiles, another stage copies only the runtime artifacts.

Key takeaways
💡 Analogy
Containers are like standardised shipping containers — the same metal box works on any ship, truck, or crane (any Docker host). Before containers, every team had custom "we ship in a wooden crate" processes that broke whenever the crate arrived at a different port. Docker standardised the box; registries are the ports; orchestrators are the shipping network.
⚡ Core Idea
A container is a process with its own isolated filesystem, network, and process space, built from an immutable image. The image is the recipe; the container is the running meal.
🎯 Why It Matters
Containers eliminate "works on my machine" by packaging the runtime alongside the application. Every environment — laptop, CI, staging, production — runs the exact same image bytes. This is the foundation of modern DevOps: build once, promote everywhere.