Interactive Explainer

Linked Lists

The data structure that makes insertion O(1) and random access O(n). When a Redis endpoint went from 2ms to 30 seconds, the cause was treating a linked list like an array.

Relevant for: Junior · Mid-level · Senior
Why this matters at your level
Junior

Implement reversal, cycle detection, and find-middle. Know the O(1) vs O(n) operations for linked lists. Can explain why Redis LRANGE is O(n).

Mid-level

Design LRU cache using doubly linked list + hash map. Identify when a linked list is the right choice vs array vs hash map in system design.

Senior

Know how real systems (Redis, Java LinkedList, browser history) use linked lists. Recognize O(n) list operations disguised as "simple" API calls and flag them in code review.


~5 min read
Production Incident — Redis LRANGE O(n) — Startup, 2021
Breaking News
Month 0

App launches with Redis LPUSH for writes, LRANGE 0 -1 for reads. 2ms response with <1,000 events.

Month 3

List grows to 50,000 events. Feed load time climbs to 800ms. Engineers attribute it to "growing pains."

WARNING
Month 6

List hits 1,000,000 events. LRANGE 0 -1 scans entire linked list — 30 second response time.

CRITICAL
Month 6 +3 days

Engineer checks Redis SLOWLOG — discovers LRANGE taking 28 seconds per call.

WARNING
Month 6 +5 days

Migrated to Redis sorted set with pagination (ZREVRANGE, max 50 items). Response time: 2ms again.

30 seconds — feed API response time when LRANGE ran on a 1M-element linked list
Months passed before anyone noticed — there was no alerting on response-time degradation

The question this raises

If the data structure you chose for its fast writes has O(n) reads that scale with data size — how do you spot this before your users do?

Test your assumption first

A data structure holds 1 million items. You need to add a new item to the front. How many steps does this take for an array vs a linked list?

What you'll learn
  • A linked list is a chain of nodes — each node holds a value and a pointer to the next node
  • O(1) insertion at the head — just update two pointers, no shifting. O(n) insertion in the middle — must traverse to find position.
  • O(n) access by index — must walk from head, no shortcut to position 500
  • Redis lists are doubly linked lists — LPUSH/RPUSH is O(1), LINDEX and full LRANGE are O(n)
  • Floyd's tortoise and hare algorithm detects cycles in O(n) time and O(1) space — two pointers at different speeds
  • Most linked list interview bugs come from losing a pointer before saving the next one — always save next before overwriting

Lesson outline

What this looks like in real life

The treasure hunt that does not scale

You have a list of 1 million items stored as a linked list. Finding item 500,000 requires following 500,000 "next" pointers — one at a time. An array would compute the memory address in one step. This is why Redis LRANGE 0 -1 on a 1M-item list took 30 seconds: Redis lists are linked lists internally, and reading the whole thing requires walking every single node.
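To make the treasure-hunt cost concrete, here is a minimal sketch that counts pointer hops on access by index (the `node`, `buildList`, and `getAt` names are illustrative, not from any library or from Redis itself):

```javascript
// Minimal singly linked list node: a value plus a pointer to the next node.
function node(value, next = null) {
  return { value, next };
}

// Build a list 0 -> 1 -> ... -> n-1 by prepending (O(1) per insert).
function buildList(n) {
  let head = null;
  for (let i = n - 1; i >= 0; i--) {
    head = node(i, head); // new head points at old head: two pointer writes
  }
  return head;
}

// Access by index: no address arithmetic, just follow `next` pointers one at a time.
function getAt(head, index) {
  let curr = head;
  let hops = 0;
  while (index-- > 0) {
    curr = curr.next;
    hops++;
  }
  return { value: curr.value, hops };
}

const head = buildList(1000);
console.log(getAt(head, 500)); // reaching position 500 takes 500 hops
```

Scale the same walk up to 1 million nodes and you have the LRANGE incident in miniature: the cost of reading grows with the length of the chain.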

Linked List

node = { value: x, next: null }
node.next = head; head = node  // O(1) prepend

Use for: O(1) insertion at the head or tail when you never need random access by position — queues, LRU cache backing, undo history

The insight that makes it work

The key insight: a linked list is fast where arrays are slow, and slow where arrays are fast. Arrays are O(1) by index, O(n) to insert (shift everything). Linked lists are O(1) to insert at head, O(n) by index. They are complementary — the right choice depends entirely on which operation your system does more.
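A quick way to see this complementarity is to count element moves for repeated front-insertions. This sketch (illustrative counters, not a benchmark) compares a plain array's `unshift` with a linked-list prepend:

```javascript
// Count element moves for n front-insertions into a plain array
// versus a linked list.
function arrayPrependMoves(n) {
  let moves = 0;
  const arr = [];
  for (let i = 0; i < n; i++) {
    moves += arr.length; // unshift must shift every existing element right
    arr.unshift(i);
  }
  return moves; // 0 + 1 + ... + (n-1) = n(n-1)/2, quadratic growth
}

function listPrependMoves(n) {
  let moves = 0;
  let head = null;
  for (let i = 0; i < n; i++) {
    head = { value: i, next: head }; // one node created, one pointer updated
    moves += 1;
  }
  return moves; // exactly n, linear growth
}

console.log(arrayPrependMoves(1000)); // 499500
console.log(listPrependMoves(1000)); // 1000
```

At 1,000 items the array already does roughly 500x the work; at 1 million the gap is a factor of about 500,000.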

How this concept changes your thinking

Situation
Before
After

I need to add 1 million items to the front of a list, one at a time

“I would use an array and push to the front. Each push shifts all existing items right — that is O(n) per insert, O(n²) total for 1M inserts.”

“A linked list prepends in O(1) — just update the head pointer. 1M prepends = 1M x O(1) = O(n) total. Arrays are wrong here.”

I need to get the 500,000th item by position

“I would use a linked list because I just built it. I would call .get(500000) and it would return the value.”

“This traverses 500,000 nodes one by one — O(n). If I need random access, I should have used an array. Linked lists are the wrong choice here.”

I need a data structure for an LRU cache (least recently used)

“I would use an array. To remove an item from the middle (evict it), I shift everything after it left.”

“An LRU cache needs O(1) remove from anywhere and O(1) add to front. A doubly linked list + hash map achieves this: hash map gives O(1) lookup, doubly linked list gives O(1) remove (update prev and next pointers).”
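The doubly linked list + hash map combination described above can be sketched as follows. This is a minimal illustration of the design, not production code; the class and method names are my own:

```javascript
// LRU cache: the hash map finds a node in O(1); the doubly linked
// list unlinks and relinks it in O(1). Sentinel head/tail nodes
// avoid null checks at the ends.
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // key -> list node
    this.head = {}; // sentinel: most recently used side
    this.tail = {}; // sentinel: least recently used side
    this.head.next = this.tail;
    this.tail.prev = this.head;
  }

  _unlink(node) {
    node.prev.next = node.next;
    node.next.prev = node.prev;
  }

  _pushFront(node) {
    node.next = this.head.next;
    node.prev = this.head;
    this.head.next.prev = node;
    this.head.next = node;
  }

  get(key) {
    const node = this.map.get(key);
    if (!node) return undefined;
    this._unlink(node);    // O(1) remove from anywhere in the list
    this._pushFront(node); // O(1) move to the "recently used" end
    return node.value;
  }

  put(key, value) {
    if (this.map.has(key)) this._unlink(this.map.get(key));
    const node = { key, value };
    this.map.set(key, node);
    this._pushFront(node);
    if (this.map.size > this.capacity) {
      const lru = this.tail.prev; // least recently used sits at the back
      this._unlink(lru);
      this.map.delete(lru.key);
    }
  }
}

const cache = new LRUCache(2);
cache.put('a', 1);
cache.put('b', 2);
cache.get('a');    // touch 'a', so 'b' becomes least recently used
cache.put('c', 3); // capacity exceeded: evicts 'b'
console.log(cache.get('b')); // undefined
console.log(cache.get('a')); // 1
```

Note that neither structure alone is enough: the map cannot order entries by recency, and the list cannot find a key in O(1). Together, every operation is constant time.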

Walk through it in plain English

Reversing a linked list: A → B → C → D → null

1. We start with: head → A → B → C → D → null. Goal: D → C → B → A → null.

2. Set up three pointers: prev = null (nothing comes before A yet), curr = A (we start here), next = null (we will set this each loop).

3. Step 1: Save next = B (save where curr.next points BEFORE overwriting it). Reverse the arrow: A.next = null (prev). Move forward: prev = A, curr = B.

4. Step 2: Save next = C. Reverse: B.next = A. Move forward: prev = B, curr = C.

5. Step 3: Save next = D. Reverse: C.next = B. Move forward: prev = C, curr = D.

6. Step 4: Save next = null. Reverse: D.next = C. Move forward: prev = D, curr = null.

7. curr is null — we are done. The new head is prev (D). Result: D → C → B → A → null. The critical move: always save next BEFORE overwriting curr.next, or you lose the rest of the list.


Now in code

The reversal above in code — notice how the three-pointer dance matches the steps exactly:

linked-list.js

function reverseList(head) {
  let prev = null;            // prev starts null: the new tail (old head) must point to null
  let curr = head;

  while (curr !== null) {     // stop when curr is null: the entire original list is processed
    const next = curr.next;   // CRITICAL: save next BEFORE overwriting curr.next on the next line,
                              // or you lose the rest of the list forever
    curr.next = prev;         // reverse the arrow: point this node back to its previous neighbour
    prev = curr;              // advance prev
    curr = next;              // advance curr
  }

  return prev;                // prev is the new head: the last node processed, the old tail
}
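For completeness, the same reversal can also be written recursively. A hedged sketch, trading the loop for O(n) call-stack space (the function name is my own):

```javascript
// Recursive reversal: reverse everything after head, then hook
// head onto the end. O(n) time, O(n) stack space.
function reverseListRecursive(head) {
  if (head === null || head.next === null) return head; // base: empty or single node
  const newHead = reverseListRecursive(head.next); // reverse the rest of the list
  head.next.next = head; // the old "next" node now points back at us
  head.next = null;      // we become the new tail (until a caller relinks us)
  return newHead;
}

// Build A -> B -> C, reverse it, read the values back.
const c = { value: 'C', next: null };
const b = { value: 'B', next: c };
const a = { value: 'A', next: b };
let curr = reverseListRecursive(a);
const out = [];
while (curr) { out.push(curr.value); curr = curr.next; }
console.log(out.join(' -> ')); // C -> B -> A
```

The iterative version is usually preferred in production: on a very long list the recursive one can overflow the call stack.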

The trap — what most people try first

When asked to detect a cycle in a linked list, the obvious approach uses extra memory. Floyd's algorithm does it in O(1) space.

O(n) space visited set vs O(1) Floyd's algorithm

Bug
// Naive — stores visited nodes
// O(n) space — may OOM on large lists
function hasCycle_slow(head) {
  const visited = new Set();
  let curr = head;
  while (curr !== null) {
    if (visited.has(curr)) return true;
    visited.add(curr);
    curr = curr.next;
  }
  return false;
}
Fix
// Floyd's tortoise and hare
// O(n) time, O(1) space
function hasCycle(head) {
  let slow = head;
  let fast = head;
  while (fast !== null && fast.next !== null) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow === fast) return true;
  }
  return false;
}

The left version stores every visited node in a Set — if you visit a node twice, there is a cycle. This works but uses O(n) memory. Floyd's algorithm uses two pointers: slow moves one step, fast moves two steps. If there is a cycle, fast will eventually lap slow and they will meet. If there is no cycle, fast reaches null. O(1) space. The insight: in a cycle, a faster pointer must eventually catch a slower one, just like runners on a circular track.

When to reach for this

Linked list or array?

Do you need O(1) access to any element by position?
  Yes → Use an array — linked lists cannot do this without O(n) traversal
  No → Continue...
Do you need O(1) insertion or deletion at the head or tail?
  Yes → Use a linked list — arrays shift all elements for head insertion (O(n))
  No → Continue...
Do you need O(1) deletion from the middle (like an LRU cache)?
  Yes → Use a doubly linked list + hash map — the hash map gives you the node, the list gives you O(1) removal
  No → An array is usually simpler and better-suited

How fast is it?

Operation | Linked List | Array
Access element by position | Slow (O(n)) — must traverse from head | Instant (O(1)) — direct address
Insert at head | Instant (O(1)) — update two pointers | Slow (O(n)) — shift all elements right
Insert at tail (with tail pointer) | Instant (O(1)) | Instant (O(1)) — if within capacity
Delete from middle (with node reference) | Instant (O(1)) — update prev and next pointers | Slow (O(n)) — shift elements left
Search for a value | Slow (O(n)) — linear scan | Slow (O(n)) — linear scan (or O(log n) if sorted)

Exam Answer vs. Production Reality


Linked list vs array

📖 What the exam expects

Arrays provide O(1) random access but O(n) insertion at arbitrary positions due to shifting. Linked lists provide O(1) insertion at known positions but O(n) access by index due to pointer traversal.


How this might come up in interviews

Linked list problems test pointer manipulation under pressure: reverse a list, detect a cycle (Floyd's algorithm), find the middle node (slow/fast pointer), merge two sorted lists, remove the nth node from the end. The hidden evaluation: can you think with pointers without losing track of the chain?

Common questions:

  • Reverse a linked list (iterative and recursive)
  • Detect a cycle in a linked list (Floyd's algorithm)
  • Find the middle of a linked list in one pass
  • Merge two sorted linked lists
  • Remove the nth node from the end in one pass
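The find-the-middle question uses the same slow/fast pointer idea as Floyd's algorithm. A minimal sketch (the function name is illustrative):

```javascript
// Find the middle node in one pass: fast moves two steps for every
// one step slow takes, so when fast reaches the end, slow is at the middle.
function findMiddle(head) {
  let slow = head;
  let fast = head;
  while (fast !== null && fast.next !== null) {
    slow = slow.next;
    fast = fast.next.next;
  }
  return slow; // for even-length lists this is the second middle node
}

// Build 1 -> 2 -> 3 -> 4 -> 5 and find the middle.
let head = null;
for (let v = 5; v >= 1; v--) head = { value: v, next: head };
console.log(findMiddle(head).value); // 3
```

The naive alternative walks the list once to count nodes and a second time to reach position n/2; slow/fast does it in a single traversal.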

Strong answer: names all three pointers before writing the reversal / handles edge cases first: empty list, single node / knows Floyd's algorithm and can explain WHY two different speeds guarantee a meeting in a cycle

Red flags: tries to reverse a linked list by collecting into an array (misses the O(1) space requirement) / cannot do cycle detection without extra memory / overwrites the next pointer before saving it (a very common pointer bug)

🧠Mental Model

💡 Analogy

A linked list is a treasure hunt. Each clue (node) tells you only where the next clue is. To reach clue 500, you must follow clues 1 through 499 — there is no shortcut. Adding a new first clue is instant: write "next clue is the old first clue" on a new card and hand it out (O(1)). But counting all clues takes one step per clue (O(n)). Redis LRANGE 0 -1 on 1 million items is following 1 million treasure hunt clues.

⚡ Core Idea

A linked list trades random access for cheap insertion and deletion. Arrays store items at consecutive memory addresses — position 500 is address (start + 500 * item_size), instant lookup. Linked lists store a pointer at each node — to reach position 500 you follow 500 pointers. This is why LRANGE on a long Redis list is slow even though LPUSH is fast.

🎯 Why It Matters

Understanding linked lists explains why your Redis feed slowed down, why doubly linked lists power LRU caches (O(1) remove from middle), and why the interview question "reverse a linked list" is really testing pointer manipulation under pressure — the most common source of bugs in real systems code.
