SYSTEM DESIGN: Design a Distributed File System (GFS/HDFS)
Asked at: Google · Meta · Apache
TRAFFIC LEVEL: —/3
CONSTRAINTS
File chunk size: 64 MB
Replication factor: 3x on different racks
Master metadata memory: ~64 bytes per chunk
Chunk server count: ~1,000-10,000
Failure rate: ~1 server/day at 1,000-node scale
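The chunk-size and metadata constraints above imply a useful back-of-envelope check: how much master RAM is needed per unit of stored data? A minimal sketch, assuming 64 MB chunks and ~64 bytes of master metadata per chunk (function names are illustrative):

```python
# Back-of-envelope: master metadata memory for a given logical data size.
# Assumes the constraints above: 64 MiB chunks, ~64 B of master RAM per chunk.

CHUNK_SIZE = 64 * 1024 * 1024   # 64 MiB per chunk
METADATA_PER_CHUNK = 64         # ~64 bytes of master metadata per chunk

def master_memory_bytes(logical_bytes: int) -> int:
    """Estimate master RAM needed to track `logical_bytes` of file data."""
    chunks = -(-logical_bytes // CHUNK_SIZE)  # ceiling division
    return chunks * METADATA_PER_CHUNK

PIB = 2**50
print(master_memory_bytes(PIB) // 2**30)  # -> 1 (GiB of master RAM per PiB stored)
```

The ratio falls out cleanly: 1 PiB / 64 MiB = 2^24 chunks, times 64 B each, is exactly 1 GiB of master memory per PiB of data, which is why a single in-memory master is viable at this scale.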
Compute & Network
Load Balancer: Distribute traffic
API Gateway: Entry point / auth
API Server: Business logic
Worker Node: Async processing
CDN Edge: Global cache
WebSocket Gateway: Persistent connections

Data Stores
PostgreSQL: Relational DB
MySQL: Relational DB
Cassandra: Wide-column DB
DynamoDB: NoSQL / managed
S3 Bucket: Object storage

Queues & Cache
Redis Cache: In-memory store
Kafka: Event stream
ZooKeeper: Coordination

Specialized
Bloom Filter: Probabilistic set
Rate Limiter: Throttling
Geohash Service: Geospatial index
Trie Server: Prefix search
APNS / FCM: Push notifications
Aggregator: Batch / roll-up

Design Google File System (GFS). Store exabytes of data across thousands of commodity machines. Files are split into 64MB chunks, each replicated 3x across different servers. A central master tracks file metadata; chunk servers store actual data. Optimize for sequential reads, large batch appends, and fault tolerance.
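The prompt's read path can be sketched directly from those rules: a client maps a byte offset to a chunk index (offset / 64 MB), asks the master for that chunk's handle and replica locations, then fetches the bytes from a chunkserver. A minimal sketch under those assumptions; the class and method names are illustrative, not the real GFS API, and the actual data fetch is elided:

```python
# Sketch of a GFS-style read path: files are split into fixed 64 MB chunks;
# the master maps (path, chunk_index) -> (chunk handle, replica locations).

CHUNK_SIZE = 64 * 1024 * 1024

class Master:
    """Toy metadata server: in-memory chunk table only."""
    def __init__(self):
        # (path, chunk_index) -> (chunk_handle, [replica servers])
        self.chunk_table = {}

    def lookup(self, path, chunk_index):
        return self.chunk_table[(path, chunk_index)]

def read(master, path, offset, length):
    """Translate a byte range into per-chunk reads (data fetch elided)."""
    plan = []
    end = offset + length
    while offset < end:
        idx = offset // CHUNK_SIZE        # which chunk holds this offset
        handle, replicas = master.lookup(path, idx)
        within = offset % CHUNK_SIZE      # offset inside that chunk
        n = min(end - offset, CHUNK_SIZE - within)
        plan.append((handle, replicas[0], within, n))  # read from one replica
        offset += n
    return plan
```

Note that the master serves only small metadata lookups while bulk data flows directly between clients and chunkservers, which is what keeps the single master from becoming a throughput bottleneck for large sequential reads.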

📥 Assigned to: You (Senior Engineer)
SCALE LEVELS
Level 1: 1,000 RPS (target <200ms)
Level 2: 10,000 RPS (target <100ms)
Level 3: 100,000 RPS (target <50ms)
GLOBAL SUCCESS RATE: 100.0%
P99 LATENCY: 45ms (target <200ms)
TOTAL RPS INGESTED: 0 / 11,000
EngPrep — Real Engineering. Real Interviews.