
Visualization: CAP and PACELC Theorem with Real-World Examples

To understand the CAP theorem and its extension, the PACELC theorem, we'll visualize the key scenarios and map real-world distributed systems to the consistency and availability models they choose.


1. CAP Theorem Visualized

The CAP theorem states that a distributed system can provide at most two of the following three guarantees simultaneously:

  1. Consistency (C): All nodes see the same data at the same time.
  2. Availability (A): Every request receives a response (success or failure).
  3. Partition Tolerance (P): The system continues to operate even if communication between nodes is lost.

Because real networks can always partition, partition tolerance is effectively mandatory; the practical choice is between consistency and availability when a partition occurs.

CAP Triangle Diagram

```mermaid
graph TD
    C[Consistency]
    A[Availability]
    P[Partition Tolerance]
    C -- "Choose two" --> A
    C -- "Choose two" --> P
    A -- "Choose two" --> P
```

Real-World Examples:

  • CP (consistency over availability): Google Spanner and ZooKeeper reject or stall requests during a partition rather than return inconsistent data.
  • AP (availability over consistency): DynamoDB and Cassandra keep serving requests during a partition and reconcile replicas afterward.


2. PACELC Theorem Visualized

PACELC extends CAP by also considering the latency trade-off when there is no partition. It posits: if a Partition (P) occurs, the system must choose between Availability (A) and Consistency (C); Else (E), during normal operation, it must choose between Latency (L) and Consistency (C).

PACELC Decision Flow

```mermaid
graph TD
    Start[Distributed System]
    Start --> Partition["Is there a network partition?"]
    Partition -->|Yes| CAPChoice["Choose: Consistency or Availability"]
    Partition -->|No| PACELCChoice["Choose: Latency or Consistency"]
```
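
The same decision flow can be transcribed directly into code. This is a hypothetical helper, not a real library API; it just makes the two branches explicit.

```python
# A direct transcription of the PACELC decision flow above.
def pacelc_choice(partitioned: bool, prefers_consistency: bool) -> str:
    if partitioned:
        # The "PAC" branch: consistency vs availability, exactly as in CAP.
        return ("consistency: reject or stall requests" if prefers_consistency
                else "availability: serve possibly stale data")
    # The "ELC" branch: even with a healthy network, latency vs consistency.
    return ("consistency: coordinate replicas, pay extra latency" if prefers_consistency
            else "latency: answer locally, accept eventual consistency")

print(pacelc_choice(partitioned=True,  prefers_consistency=False))  # Dynamo-style
print(pacelc_choice(partitioned=False, prefers_consistency=True))   # Spanner-style
```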

Real-World Examples:

  • PA/EL: DynamoDB and Cassandra favor availability during a partition and low latency otherwise.
  • PC/EC: Google Spanner and ZooKeeper favor consistency both during a partition and in normal operation.


3. Comparison Table: CAP and PACELC

| Property | CAP Choice | PACELC Choice | Real-World Example |
| --- | --- | --- | --- |
| Consistency Priority (CP) | Consistency + Partition Tolerance | Consistency over latency (PC/EC) | Google Spanner, ZooKeeper |
| Availability Priority (AP) | Availability + Partition Tolerance | Latency over consistency (PA/EL) | DynamoDB, Cassandra |
| Latency Optimized (EL) | - | Latency prioritized | DynamoDB (default eventually consistent reads) |

4. Visualization of Real-World Systems

DynamoDB: PA/EL Scenario

  1. Partition Scenario (CAP):
    • DynamoDB chooses Availability (A) over Consistency (C).
    • Even if some nodes are unreachable, requests are served from available nodes with eventual consistency.
```mermaid
graph TD
    Request["Client Request"]
    DynamoDBNode1["Node 1 (Available)"]
    DynamoDBNode2["Node 2 (Unavailable)"]
    DynamoDBNode3["Node 3 (Available)"]
    Request --> DynamoDBNode1
    Request --> DynamoDBNode3
    DynamoDBNode1 --> Response1["Response: Stale Data Possible"]
    DynamoDBNode3 --> Response2["Response: Stale Data Possible"]
```
  2. No Partition Scenario (PACELC):
    • DynamoDB defaults to eventually consistent reads, reducing latency for most requests.
    • Strong consistency can be requested per read, at the cost of higher latency, as shown in the sketch below.
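
The per-read choice is exposed directly in DynamoDB's GetItem API via the ConsistentRead flag. The table and key names below are made up for illustration; the flag itself is part of the real API.

```python
import boto3

table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table name

# Default: eventually consistent read. Fastest and cheapest, may be stale.
eventual = table.get_item(Key={"order_id": "42"})

# Opt-in: strongly consistent read. Reflects all prior successful writes,
# consumes twice the read capacity, and typically adds latency.
strong = table.get_item(Key={"order_id": "42"}, ConsistentRead=True)
```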

Google Spanner: PC/EC Scenario

  1. Partition Scenario (CAP):
    • Spanner prioritizes Consistency (C) over Availability (A).
    • If a partition occurs, it refuses writes that could cause inconsistency (a simplified sketch follows this list).
```mermaid
graph TD
    WriteRequest["Write Request"]
    Partition["Network Partition"]
    Node1["Node 1"]
    Node2["Node 2"]
    WriteRequest --> Partition
    Partition -->|Consistency Prioritized| RefuseWrite["Write Refused"]
```
  2. No Partition Scenario (PACELC):
    • Spanner guarantees strong consistency by coordinating replicas across regions, using the TrueTime clock API to order transactions; this coordination increases commit latency.
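
This is not Spanner's actual protocol, but a simplified majority-quorum rule captures why a CP system refuses writes during a partition: committing on a minority side could diverge from the majority.

```python
# Simplified majority-quorum write rule (illustrative only).
def write(value: str, reachable: int, total: int) -> str:
    quorum = total // 2 + 1
    if reachable < quorum:
        # Committing here could diverge from the other side of the
        # partition, so a CP system rejects the write instead.
        return "REFUSED: majority of replicas unreachable"
    return f"COMMITTED: replicated to {reachable}/{total} nodes"

print(write("x=1", reachable=1, total=3))  # during a partition -> refused
print(write("x=1", reachable=3, total=3))  # healthy cluster    -> committed
```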

5. Latency vs Consistency: PACELC in Practice

Measuring Latency Trade-offs

Using DynamoDB as an example:
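
One rough way to observe the trade-off yourself is to time both read modes against the same item. This is a sketch: "Orders" and "order_id" are placeholder names, and absolute numbers depend heavily on region, item size, and network conditions.

```python
import time
import boto3

table = boto3.resource("dynamodb").Table("Orders")   # hypothetical table name
key = {"order_id": "42"}

def avg_latency_ms(consistent: bool, n: int = 50) -> float:
    # Average GetItem round-trip time over n requests, in milliseconds.
    start = time.perf_counter()
    for _ in range(n):
        table.get_item(Key=key, ConsistentRead=consistent)
    return (time.perf_counter() - start) / n * 1000

print(f"eventually consistent: {avg_latency_ms(False):6.1f} ms")
print(f"strongly consistent:   {avg_latency_ms(True):6.1f} ms")  # usually higher
```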

Latency vs Consistency Diagram

```mermaid
graph LR
    Client["Client Request"]
    StrongRead["Strongly Consistent Read"]
    EventualRead["Eventually Consistent Read"]
    Response["Response"]
    Client --> StrongRead
    StrongRead -->|High Latency| Response
    Client --> EventualRead
    EventualRead -->|Low Latency| Response
```