# Overview
Contention occurs when multiple processes compete for the same resource at the same time.
# Key Considerations
## Choosing the Right Approach #flashcard
- Can you keep all the contended data in a single database? If yes, use pessimistic or optimistic concurrency depending on your conflict frequency (see the sketch after this list)
- Single database, high contention - [["pessimistic" concurrency control strategy]] with explicit locks (`FOR UPDATE`). This provides predictable performance.
- Single database, low contention - [["optimistic" concurrency control strategy]]
- Multiple databases, must be atomic:
- [[Two-phase Commit (2PC)]] for strong consistency
- [[Saga Pattern]] for resilience
- If user experience is a priority:
- [[Distributed Lock]]
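As a rough illustration of the two single-database strategies, here is a minimal sketch assuming a Postgres connection via psycopg2; the `accounts` table, its columns, and the connection string are hypothetical.

```python
import psycopg2

conn = psycopg2.connect("dbname=app")  # hypothetical connection string


def withdraw_pessimistic(account_id: int, amount: int) -> None:
    """High contention: take an explicit row lock so concurrent writers queue up."""
    with conn, conn.cursor() as cur:
        # FOR UPDATE blocks other writers on this row until we commit,
        # giving predictable (serialized) behavior under heavy contention.
        cur.execute("SELECT balance FROM accounts WHERE id = %s FOR UPDATE", (account_id,))
        (balance,) = cur.fetchone()
        if balance < amount:
            raise ValueError("insufficient funds")
        cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (amount, account_id))


def withdraw_optimistic(account_id: int, amount: int, max_retries: int = 3) -> None:
    """Low contention: no lock; re-check a version column at write time and retry on conflict."""
    for _ in range(max_retries):
        with conn, conn.cursor() as cur:
            cur.execute("SELECT balance, version FROM accounts WHERE id = %s", (account_id,))
            balance, version = cur.fetchone()
            if balance < amount:
                raise ValueError("insufficient funds")
            # The UPDATE only applies if nobody else bumped the version since we read it.
            cur.execute(
                "UPDATE accounts SET balance = %s, version = version + 1 "
                "WHERE id = %s AND version = %s",
                (balance - amount, account_id, version))
            if cur.rowcount == 1:
                return  # our write won
    raise RuntimeError("too many concurrent conflicts; back off and retry later")
```

The pessimistic version serializes writers at the row lock, so latency is predictable but throughput is bounded; the optimistic version never blocks, but wastes work on retries when conflicts are frequent.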
# Implementation Details
## Common Issues
### How do you prevent deadlocks? #flashcard
- [[Deadlocks]] occur when transaction A locks a row needed by transaction B while transaction B locks a row needed by transaction A. The result is that both transactions wait on each other indefinitely.
- The standard solution is ordered locking, which means always acquiring locks in a consistent order regardless of your business logic flow. Sort the resources you need to lock by some deterministic key like user ID, database primary key, or even memory address. If you need to lock users 123 and 456, always lock 123 first even if your business logic processes 456 first. This prevents circular waiting because all transactions follow the same acquisition order.
- In practice, this might mean sorting your user IDs before locking them, or establishing conventions like "always lock the user initiating the action first, then lock recipients in ascending ID order." The exact ordering doesn't matter as long as it's consistent across all transactions in your system (see the sketch after this list).
- As a fallback, database timeout configurations serve as your safety net when ordered locking isn't practical or when you miss edge cases. Set transaction timeouts so deadlocked transactions get killed after a reasonable wait period and can retry with proper ordering.
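A sketch of ordered locking for a transfer between two users, again assuming psycopg2; the `accounts` table is hypothetical, and the `lock_timeout` setting is the safety net mentioned above.

```python
def transfer(conn, from_id: int, to_id: int, amount: int) -> None:
    """Move funds between two accounts, always locking rows in ascending ID order."""
    first, second = sorted((from_id, to_id))  # deterministic lock order prevents circular waits
    with conn, conn.cursor() as cur:
        # Safety net: if we are ever stuck waiting on a lock, fail fast and retry.
        cur.execute("SET LOCAL lock_timeout = '2s'")
        cur.execute("SELECT id FROM accounts WHERE id = %s FOR UPDATE", (first,))
        cur.execute("SELECT id FROM accounts WHERE id = %s FOR UPDATE", (second,))
        # Both rows are now locked; the business logic can run in its natural order.
        cur.execute("UPDATE accounts SET balance = balance - %s WHERE id = %s",
                    (amount, from_id))
        cur.execute("UPDATE accounts SET balance = balance + %s WHERE id = %s",
                    (amount, to_id))
```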
### What if your coordinator service crashes during a distributed transaction? #flashcard
- Coordinator failover - a new coordinator reads the persistent transaction log to determine which transactions were in flight and completes them (sketched below).
- Transaction recovery - design for coordinator high availability and persist transaction state across failures so the new coordinator can pick up where the old one left off.
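A highly simplified sketch of the recovery pass, assuming the coordinator durably logs each transaction's state before acting; the `TxnRecord` format and the participant `commit`/`abort` calls are hypothetical.

```python
class TxnRecord:
    """One entry in the coordinator's durable transaction log (hypothetical format)."""
    def __init__(self, txn_id: str, state: str):
        self.txn_id = txn_id
        self.state = state  # "preparing", "committed", or "aborted"


def recover(log: list[TxnRecord], participants: list) -> None:
    """On coordinator startup, finish any 2PC transactions that were in flight at crash time."""
    for txn in log:
        if txn.state == "committed":
            # The commit decision was logged before the crash, so the outcome is fixed:
            # keep re-sending COMMIT until every participant acknowledges it.
            for p in participants:
                p.commit(txn.txn_id)
        elif txn.state == "preparing":
            # No decision was durably recorded, so the safe outcome is to abort.
            for p in participants:
                p.abort(txn.txn_id)
        # "aborted" entries are already terminal; nothing to do.
```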
### How do you handle the [[ABA Problem]]? #flashcard
- The solution is using a column that you know will always change. In the Yelp case, use the review count instead of the average rating as your optimistic concurrency check. Every new review increases the count, so it's a perfect monotonically increasing version. Your update becomes "set new average and increment count to 101, but only if current count is 100" (see the sketch after this list).
- When you can't find a naturally changing column, fall back to an explicit version column that increments on every update regardless of whether your business data actually changed. Even if the average rating stays the same, the version number shows that processing occurred.
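A sketch of the counter-based check for the Yelp example, assuming psycopg2 and a hypothetical `business_stats` table that stores the running average and review count.

```python
def add_review(conn, business_id: int, new_rating: float, max_retries: int = 5) -> None:
    """Recompute the average and bump the review count, guarded by the count itself."""
    for _ in range(max_retries):
        with conn, conn.cursor() as cur:
            cur.execute(
                "SELECT avg_rating, review_count FROM business_stats WHERE business_id = %s",
                (business_id,))
            avg_rating, count = cur.fetchone()
            new_avg = (avg_rating * count + new_rating) / (count + 1)
            # The count only ever increases, so "count is still what we read" rules out
            # the ABA case where the average happens to return to its old value.
            cur.execute(
                "UPDATE business_stats "
                "SET avg_rating = %s, review_count = review_count + 1 "
                "WHERE business_id = %s AND review_count = %s",
                (new_avg, business_id, count))
            if cur.rowcount == 1:
                return
    raise RuntimeError("too many concurrent review writes; retry later")
```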
### What about performance when everyone wants the same resource? #flashcard
# Useful Links
# Related Topics
- Use Cases
- [[Design a Reservation Booking System]]
- [[Design an Auction Website]]
- [[Design a Payment System]]
- [[Design a Rideshare Service]]
## Reference
#### Working Notes
#### Sources
# Situation
There are many writes to the same piece of data (i.e., same key), thus causing the writes to conflict with one another.
![[Contending Updates 2024-10-21 11.47.49.excalidraw.svg]]
# Solutions for Contending Updates #flashcard
## Bad Solution - Use a Simple Locking Mechanism
![[Contending Updates 2024-10-21 11.49.30.excalidraw.svg]]
Use a [[Locking]] mechanism to determine when a write is in progress against a key. This is not a great solution because the lock hurts performance while other writers wait to acquire it.
## Good Solution - Multi-leader Database with Eventual Consistency
If suitable for the use case, you can replace the database with one that supports multiple leaders. This results in eventual consistency for the data, which means there may be stale or conflicting reads.
## Good Solution - Batch Writes Using a Stream and Log-based Message Broker
![[Contending Updates 2024-10-21 11.55.12.excalidraw.svg]]
Use a log-based [[Message Broker]] like [[Kafka]] (optionally with a stream processor like [[Spark Streaming]]) to accumulate writes and send them to the database in micro-batches, as sketched below. Since fewer individual write transactions reach the database, there is a lower chance of contention.
<!--ID: 1751507776435-->
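A sketch of the micro-batching consumer, assuming the kafka-python client and a hypothetical `writes` topic whose messages carry the updates to persist; `flush_to_db` stands in for a single batched database transaction.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "writes",                          # hypothetical topic of pending updates
    bootstrap_servers="localhost:9092",
    enable_auto_commit=False,          # commit offsets only after the batch is persisted
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)


def flush_to_db(batch: list[dict]) -> None:
    """Apply the accumulated updates in one database transaction (stand-in)."""
    ...


while True:
    # Pull up to 500 messages (or whatever arrived within 1s) as one micro-batch.
    records = consumer.poll(timeout_ms=1000, max_records=500)
    batch = [msg.value for msgs in records.values() for msg in msgs]
    if batch:
        flush_to_db(batch)   # one write transaction instead of len(batch) contended ones
        consumer.commit()    # only advance offsets once the batch is durable
```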
# Use Cases
- [[Design a Rate Limiter]]
- [[Design a URL Shortener]]