Dynamo: Amazon’s Highly Available Key-value Store

https://www.allthingsdistributed.com/files/amazon-dynamo-sosp2007.pdf

TL;DR;

Amazon DynamoDB, the fully managed service that descends from Dynamo, is a key-value and document database that delivers single-digit-millisecond performance at any scale. It is a multi-region, multi-master database with built-in security, backup and restore, and in-memory caching for internet-scale applications. DynamoDB can handle more than 10 trillion requests per day and can support peaks of more than 20 million requests per second.

Summary:

This paper describes the design and implementation of Dynamo, which achieves high availability and low latency. To reach that level of availability, Dynamo sacrifices consistency under certain failure scenarios. The problem the authors were trying to solve: given that partial failures are common, how do you design a storage system that provides an "always-on" experience to the user? Using a conventional relational database would have led to inefficiencies and limited scale and availability.

Dynamo is targeted mainly at applications that 1. require high availability, 2. operate in a secure network in which every node can be trusted, 3. do not require a hierarchical namespace or a complex relational schema, and 4. are latency-sensitive. Dynamo combines many techniques to solve the classic problems that arise when designing such a storage system. It exposes two simple operations for reads and writes: get(key) and put(key, context, object).
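To make the interface concrete, here is a hypothetical sketch of the read-reconcile-write cycle a client performs. The `store`, `key`, and `reconcile` names are illustrative stand-ins; the paper only fixes the shapes get(key) and put(key, context, object):

```python
def refresh_value(store, key, reconcile):
    """Read-reconcile-write cycle against a Dynamo-style store.

    `store` and `reconcile` are hypothetical stand-ins for a client
    handle and an application-level merge function (e.g., taking the
    union of shopping-cart items).
    """
    # get() returns an opaque context (encoding vector clocks) plus one
    # or more causally-conflicting versions of the object.
    context, versions = store.get(key)
    # The application, not the store, merges conflicting versions.
    merged = reconcile(versions)
    # Passing the context back tells the store which versions this write
    # supersedes, so their clocks can be collapsed into one.
    store.put(key, context, merged)
```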

Partitioning:

Dynamo implements a modified version of consistent hashing[1] to allow it to scale incrementally. Basic consistent hashing has two problems: 1. it can lead to non-uniform data and load distribution, and 2. it is oblivious to heterogeneity in the performance of nodes (such as capacity). To cope with these problems, Dynamo uses the concept of virtual nodes: each physical node is assigned multiple virtual nodes on the ring, and the number of virtual nodes a node is responsible for can be chosen based on its capacity.
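A minimal sketch of the idea, assuming MD5 positions on the ring as in the paper (the class and method names are mine, not Dynamo's):

```python
import bisect
import hashlib

def ring_position(key: str) -> int:
    # Treat the MD5 space as the ring, as Dynamo does.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self):
        self._ring = []  # sorted list of (position, physical node) pairs

    def add_node(self, node: str, vnodes: int) -> None:
        # A node with more capacity is given more virtual nodes and
        # therefore owns a proportionally larger share of the key space.
        for i in range(vnodes):
            bisect.insort(self._ring, (ring_position(f"{node}#{i}"), node))

    def coordinator(self, key: str) -> str:
        # Walk clockwise: the first virtual node at or past the key's
        # position owns the key, wrapping around at the end of the ring.
        idx = bisect.bisect_left(self._ring, (ring_position(key),))
        return self._ring[idx % len(self._ring)][1]
```

Giving node B twice as many virtual nodes as node A hands B roughly twice A's share of the key space.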

Replication:

Dynamo replicates its data on multiple nodes: the node to which a key is assigned (the coordinator) stores the key, and so do its N-1 clockwise successors on the ring (N is configurable); together they form the key's preference list. Unlike many other systems, Dynamo uses an asynchronous replication protocol[2] to achieve high availability, which yields eventual consistency[3].
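Building on the ring sketch above, here is an illustrative (not Dynamo's actual) way to derive a preference list. Consecutive positions on the ring may belong to virtual nodes of the same physical node, so duplicates are skipped until N distinct physical nodes are collected:

```python
import bisect

def preference_list(ring, key_position, n):
    """Return the coordinator and its clockwise successors for a key.

    `ring` is a sorted list of (position, physical node) pairs, as built
    by the sketch above.
    """
    nodes = []
    start = bisect.bisect_left(ring, (key_position,))
    for i in range(len(ring)):
        node = ring[(start + i) % len(ring)][1]
        if node not in nodes:  # skip extra virtual nodes of the same host
            nodes.append(node)
        if len(nodes) == n:
            break
    return nodes
```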

Data versioning:

Because Dynamo uses an optimistic replication technique[4], it needs to be able to resolve conflicts. It uses vector clocks[5] to capture causality (the vector clock is passed as a causal payload with writes). One vector clock is associated with every version of every object.

One thing to note is that, unlike traditional vector clocks, which keep one entry per server, Dynamo's vector clock consists of a list of (coordinator node, counter) pairs. For example, [(A, 1), (B, 3)] means server A coordinated one write to the object and server B coordinated three.
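A minimal sketch of the comparison rule, representing each clock as a dict mapping coordinator to counter (the function names are mine):

```python
def descends(a: dict, b: dict) -> bool:
    # The version with clock `a` is causally at or after the version with
    # clock `b` iff a's counter is >= b's for every coordinator in b.
    return all(a.get(node, 0) >= count for node, count in b.items())

def resolve(a: dict, b: dict) -> list:
    # Syntactic reconciliation: keep the newer version if one clock
    # descends the other; otherwise the writes were concurrent, and
    # Dynamo returns both versions for the application to merge.
    if descends(a, b):
        return [a]
    if descends(b, a):
        return [b]
    return [a, b]  # a real conflict
```

For example, resolve({"A": 1}, {"A": 1, "B": 3}) keeps only the second clock, since it descends the first, while resolve({"A": 2}, {"A": 1, "B": 1}) returns both, because neither clock descends the other.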

Handling Failures:

(Partial) failures are common in Dynamo's environment. It uses a "sloppy quorum" replication protocol[6] to handle temporary network partitions and node failures. For permanent failures, Dynamo implements an anti-entropy protocol to keep replicas synchronized. To minimize the amount of data transferred during synchronization, Dynamo uses Merkle trees: replicas compare hash trees over their key ranges top-down and exchange only the keys under subtrees whose hashes differ.
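A toy sketch of that comparison, assuming a power-of-two number of key ranges (the function names are mine):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_tree(leaf_hashes):
    # leaf_hashes: one hash per key range; length assumed a power of two.
    # Returns a list of levels, from the leaves up to the single root.
    levels = [leaf_hashes]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def differing_ranges(tree_a, tree_b, level=None, idx=0):
    # Descend only where hashes disagree; identical subtrees (the common
    # case) are skipped entirely, which is what minimizes data transfer.
    if level is None:
        level = len(tree_a) - 1  # start at the root
    if tree_a[level][idx] == tree_b[level][idx]:
        return []
    if level == 0:
        return [idx]  # index of a key range whose contents differ
    return (differing_ranges(tree_a, tree_b, level - 1, 2 * idx) +
            differing_ranges(tree_a, tree_b, level - 1, 2 * idx + 1))
```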

Membership and Failure detection:

Dynamo uses a gossip-based protocol to propagate membership changes, and a purely local notion of failure detection that avoids the communication needed to maintain a globally consistent view of failures.[7]
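A minimal sketch of what "purely local" failure detection can look like, under an assumed timeout threshold (the paper does not prescribe one):

```python
import time

class LocalFailureDetector:
    """Timeout-based, purely local failure detection. The threshold
    value is an assumption for illustration, not from the paper."""

    def __init__(self, timeout_s: float = 2.0):
        self.timeout_s = timeout_s
        self.last_heard = {}  # peer -> monotonic time of last response

    def heard_from(self, peer: str) -> None:
        self.last_heard[peer] = time.monotonic()

    def seems_failed(self, peer: str) -> bool:
        # A's decision about B is local: B counts as failed for A if B
        # hasn't answered A recently, even if B is healthy and still
        # responding to every other node.
        last = self.last_heard.get(peer)
        return last is None or time.monotonic() - last > self.timeout_s
```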

[1] https://www.toptal.com/big-data/consistent-hashing

[2] Quorum-like replication in which the coordinator does not wait until all N-1 replicas acknowledge a write before returning.

[3] Eventual consistency means "if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value" (from Werner Vogels). Note that this gives no upper bound on when conflicts will be resolved.

[4] Be aware that under certain failure modes, Dynamo can end up with not just two but several versions of the same data.

[5] Dynamo also stores a timestamp with each entry so that, when the clock grows too large, the oldest pairs can be truncated to bound its size.

[6] Reads and writes are performed on the first N healthy nodes. Once a failed node recovers, the nodes that accepted writes on its behalf (and that wouldn't normally hold the data) send those replicas back to the recovered node.

[7] Node A may consider node B failed if B does not respond to A's messages within some time interval, even while B is responsive to other nodes.

Comments:

I love this paper, and I think it's a must-read. It shows how many of the techniques and algorithms we have learned were used to build a real distributed storage system, and how the designers carefully made trade-offs between availability, consistency, cost-effectiveness, and performance.

I read a tech blog about vector clocks recently, called Why Vector Clocks Are Hard (you should read it). Since Dynamo makes servers act as actors, with the vector clocks from an earlier read passed along as causal payloads, is Dynamo going to silently lose data sometimes? (Like the example in that post.)

Update (8/16/2019):

Well, I think I was confused. This won't be a problem, because clients don't talk to each other directly.

Related Post:

Version Vectors are not Vector Clocks (HASlab)