# Overview
A timeout is the maximum amount of time a client or server will wait for a corresponding response. It prevents the system from holding onto resources for requests that are taking too long and may never complete. After a timeout occurs, the system may be designed to retry the request based on the [[Exception Handling]] design.
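As a minimal sketch of a client-side timeout (in Go; the endpoint and the 2-second budget are illustrative assumptions, not recommendations):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Timeout bounds the whole request: connecting, redirects, and reading the body.
		// The 2s value is a placeholder; pick it from observed latency (see below).
		Timeout: 2 * time.Second,
	}

	// Hypothetical endpoint for illustration.
	resp, err := client.Get("https://example.com/health")
	if err != nil {
		// A timeout surfaces here as an error; the retry/backoff policy decides what happens next.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```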
# Key Considerations
Setting a timeout too high reduces its usefulness, while setting it too low may increase traffic and latency as requests are retried too aggressively.
To pick an appropriate timeout value (a sketch follows this list):
- Choose an acceptable rate of false timeouts (such as 0.1%)
- Observe the corresponding latency percentile on the downstream service (e.g., p99.9) and use it as the timeout baseline
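A rough sketch of that calculation in Go (the sample latencies, the naive percentile helper, and the 50 ms padding are all illustrative assumptions):

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the latency at percentile p (0-100) from a sample set,
// using a naive nearest-rank calculation for illustration only.
func percentile(samples []time.Duration, p float64) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := int(math.Ceil(float64(len(sorted))*p/100.0)) - 1
	if idx < 0 {
		idx = 0
	}
	return sorted[idx]
}

func main() {
	// Hypothetical observed latencies for the downstream service.
	samples := []time.Duration{
		80 * time.Millisecond, 95 * time.Millisecond, 110 * time.Millisecond,
		120 * time.Millisecond, 150 * time.Millisecond, 400 * time.Millisecond,
	}

	// Accepting ~0.1% false timeouts suggests using p99.9 as the baseline.
	baseline := percentile(samples, 99.9)

	// Padding (assumed here) absorbs small latency shifts so minor increases
	// don't turn into large numbers of timeouts.
	timeout := baseline + 50*time.Millisecond

	fmt.Println("baseline p99.9:", baseline, "chosen timeout:", timeout)
}
```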
This approach can have issues:
- This approach doesn't work in cases where clients have substantial network latency, such as over the internet. In these cases, we factor in reasonable worst-case network latency, keeping in mind that clients could span the globe.
- This approach also doesn't work with services that have tight latency bounds, where p99.9 is close to p50. In these cases, adding some padding helps us avoid small latency increases that cause high numbers of timeouts.
- We've encountered a common pitfall when implementing timeouts. Linux's SO_RCVTIMEO is powerful, but has some disadvantages that make it unsuitable as an end-to-end socket timeout. Some languages, such as Java, expose this control directly. Other languages, such as Go, provide more robust timeout mechanisms.
- There are also implementations where the timeout doesn't cover all remote calls, like DNS or TLS handshakes. In general, we prefer to use the timeouts built into well-tested clients. If we implement our own timeouts, we pay careful attention to the exact meaning of the timeout socket options, and what work is being done.
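To illustrate the last two points, here is a minimal Go sketch of an end-to-end deadline that covers DNS, the TLS handshake, and the response, rather than a per-read socket timeout (the endpoint and the 500 ms budget are assumptions):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// A context deadline bounds the entire request end to end: DNS lookup,
	// TCP connect, TLS handshake, sending the request, and reading the response.
	ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
	defer cancel()

	// Hypothetical endpoint for illustration.
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.com/api", nil)
	if err != nil {
		fmt.Println("building request failed:", err)
		return
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		// Exceeding the deadline cancels the call wherever it is, including
		// mid-handshake, which a per-read socket timeout alone would not guarantee.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```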
# Pros
# Cons
# Use Cases
# Related Topics
# Sources
[Timeouts, retries, and backoff with jitter (AWS Builders' Library)](https://aws.amazon.com/builders-library/timeouts-retries-and-backoff-with-jitter/)