
Short Summaries

AdaptSize: Orchestrating the Hot Object Memory Cache in a Content Delivery Network - Berger et al., NSDI' 17

This paper examines the caching mechanism of Content Delivery Networks (CDNs). A CDN server typically employs two levels of caching: a small but fast in-memory Hot Object Cache (HOC) and a large second-level Disk Cache (DC). The goal of AdaptSize is to maximize the object hit ratio of the HOC.

The key insights are: (1) the HOC is subject to extreme variability in request patterns and object sizes (up to a 9x difference), so not all objects should be admitted to the HOC; and (2) existing work focuses only on cache eviction and assumes all objects are the same size. Based on these observations, the authors propose AdaptSize, a near-optimal method for size-aware cache admission. AdaptSize admits objects with probability $e^{-size/c}$ and evicts objects using a concurrent variant of LRU. Because the optimal c changes over time, AdaptSize uses a Markov chain model to find the best c.
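To make the admission rule concrete, here is a minimal Python sketch of size-aware probabilistic admission combined with plain LRU eviction. The class name and interface are invented for illustration; the real system's concurrent LRU variant and its Markov-chain tuning of c are omitted.

```python
import math
import random
from collections import OrderedDict

class SizeAwareCache:
    """Toy sketch of AdaptSize-style admission: admit an object with
    probability exp(-size / c), evict with plain LRU. (AdaptSize itself
    uses a concurrent LRU variant and retunes c online; both omitted.)"""

    def __init__(self, capacity_bytes, c):
        self.capacity = capacity_bytes
        self.c = c                   # admission parameter, tuned online in the real system
        self.cache = OrderedDict()   # key -> object size
        self.used = 0

    def get(self, key):
        if key in self.cache:
            self.cache.move_to_end(key)  # refresh LRU position
            return True
        return False

    def maybe_admit(self, key, size):
        # Size-aware admission: large objects are exponentially
        # less likely to enter the hot object cache.
        if random.random() > math.exp(-size / self.c):
            return False
        self.cache[key] = size
        self.used += size
        while self.used > self.capacity:     # LRU eviction
            _, evicted = self.cache.popitem(last=False)
            self.used -= evicted
        return True
```

With a small c, only tiny objects have a realistic chance of admission, which is exactly how the HOC avoids being flushed by a few very large objects.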

Pegasus: Tolerating Skewed Workloads in Distributed Storage with In-Network Coherence Directories - Li et al., OSDI' 20

Many real-world workloads are skewed and dynamic. This paper introduces Pegasus, a system that leverages programmable switch ASICs to balance load across storage servers. The key observations are: (1) the top-of-rack (ToR) switch is on the path of every client request and server reply; and (2) it is possible to achieve provable load-balancing guarantees by replicating only the most popular $O(n \log n)$ objects, where n is the number of servers. Thus, Pegasus uses selective replication together with an in-network coherence directory.

The ToR switch maintains the coherence directory: it stores the set of replicated keys and, for each key, the list of servers holding a valid copy of the data. Pegasus decides which objects to replicate by tracking the access rate of each key, and it uses a lightweight version-based coherence protocol to ensure consistency (linearizability).
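The sketch below is a rough Python illustration of the valid-set/version bookkeeping such a directory performs; the structure and method names are my own simplifications, not Pegasus's actual protocol or switch data plane.

```python
import random

class CoherenceDirectory:
    """Toy sketch of an in-network coherence directory in the spirit of
    Pegasus (names invented). Each replicated hot key maps to a version
    number and the set of servers holding a valid copy. Reads may go to
    any valid server; a write bumps the version and shrinks the valid
    set to the single server that accepted it."""

    def __init__(self, servers):
        self.servers = servers
        self.replicated = {}  # hot key -> [version, set of valid servers]

    def home(self, key):
        # Non-replicated keys are simply served by their home server.
        return self.servers[hash(key) % len(self.servers)]

    def route_read(self, key):
        if key in self.replicated:
            _, valid = self.replicated[key]
            return random.choice(sorted(valid))  # spread read load
        return self.home(key)

    def route_write(self, key):
        server = self.home(key)
        if key in self.replicated:
            entry = self.replicated[key]
            entry[0] += 1        # new version
            entry[1] = {server}  # only the writer holds a valid copy
        return server

    def ack_copy(self, key, server, version):
        # A server that has copied the current version rejoins the
        # valid set, so future reads can land on it again.
        entry = self.replicated.get(key)
        if entry and entry[0] == version:
            entry[1].add(server)
```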

Pegasus can handle arbitrary object sizes, guarantee linearizability, and support any read-write ratio. The evaluation shows that Pegasus can increase throughput by up to 10x, or reduce the number of servers required by 90% while still satisfying a 99th-percentile latency SLO.

PACEMAKER: Avoiding HeART attacks in storage clusters with disk-adaptive redundancy - Kadekodi et al., OSDI' 20

This is a follow-up to the authors' HeART paper from FAST '19. Storage clusters consist of heterogeneous disks with highly varying failure rates, and HeART proposes treating subsets of disks with different annualized failure rate (AFR) characteristics differently: it adapts each disk's redundancy on the fly by observing its failure rate, which depends on its make/model and current age. The key idea is that the redundancy level can be reduced during a disk's useful life.

However, such a design is reactive: data remains under-protected until the redundancy-scheme transition completes. This paper introduces PACEMAKER, a low-overhead disk-adaptive redundancy orchestrator. Its key components are a proactive transition initiator, which determines when to transition disks, and a transition executor, which determines how to transition them. PACEMAKER also introduces two new approaches for changing the redundancy scheme that avoid the expensive read/re-encode/write path.
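As a rough illustration of disk-adaptive redundancy (not PACEMAKER's actual algorithm), the sketch below picks the cheapest erasure-coding scheme whose failure budget covers a disk group's currently observed AFR; the scheme table and thresholds are made up for the example.

```python
# Hypothetical sketch of disk-adaptive redundancy selection, in the
# spirit of HeART/PACEMAKER. Each entry is (k data, m parity, max AFR
# the scheme is trusted to cover); values are illustrative only.
SCHEMES = [
    (10, 1, 0.01),  # low redundancy for disks in their useful life
    (10, 2, 0.04),
    (6, 3, 0.10),   # high redundancy for infancy / wear-out phases
]

def pick_scheme(observed_afr):
    """Return the cheapest (k, m) whose AFR budget covers this disk group."""
    for k, m, afr_budget in SCHEMES:
        if observed_afr <= afr_budget:
            return k, m
    return SCHEMES[-1][:2]  # fall back to the most redundant scheme

def storage_overhead(k, m):
    return (k + m) / k  # raw bytes stored per logical byte

print(pick_scheme(0.02))                      # -> (10, 2)
print(storage_overhead(*pick_scheme(0.02)))   # -> 1.2
```

The point of the proactive design is to run this kind of decision ahead of a disk group's predicted AFR change, so the transition finishes before the data would otherwise become under-protected.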
