# Overview

WebRTC is... #flashcard
- A [[peer-to-peer]] communication approach between browsers, often used for video/audio calls and for collaborative applications like shared document editors. Once a client has the connection information for another peer, it can try to establish a direct connection without going through any intermediary servers. This is done using [[User Datagram Protocol (UDP)]].
- The WebRTC standard includes two methods to allow for this type of communication:
    - **STUN**: "Session Traversal Utilities for NAT" is a protocol and a set of techniques (like "hole punching") that let peers discover publicly routable addresses and ports. It's the standard way to deal with NAT traversal, and it involves repeatedly opening ports and sharing them with peers via the signaling server.
    - **TURN**: "Traversal Using Relays around NAT" is effectively a relay service: a way to bounce requests through a central server, which then routes them to the appropriate peer.
<!--ID: 1751507777095-->

How WebRTC works... #flashcard
1. Peers discover each other through a signaling server
2. Exchange connection info (ICE candidates)
3. Establish a direct peer connection, using STUN/TURN if needed
4. Stream audio/video or send data directly
<!--ID: 1752427993548-->

# Key Considerations

# Pros of WebRTC #flashcard
- Direct peer communication
- Lower latency
- Reduced server costs
- Native audio/video support
<!--ID: 1751507777097-->

# Cons of WebRTC #flashcard
- Complex setup (more so than WebSockets)
- Requires a signaling server
- NAT/firewall issues
- Connection setup delay
<!--ID: 1751507777099-->

# Use Cases
- Video/audio calls
- Screen sharing
- Gaming

# Related Topics
- [[Canva]]

# References
[Real-time mouse pointers - Canva Engineering Blog](https://www.canva.dev/blog/engineering/realtime-mouse-pointers/)

---

Items to discuss when using WebSockets in a system design interview: #flashcard
- Deployments. When servers are redeployed, we either need to sever all old connections and have them reconnect, or have the new servers take over and keep the connections alive. Generally speaking, you should prefer the former since it's simpler, but it does have some ramifications for how "persistent" you expect the connection to be. You also need to handle situations where a client needs to reconnect and may have missed updates while it was disconnected.
- Balancing load across WebSocket servers can be more complex. If the connections are truly long-running, we have to "stick with" each allocation decision we made: if the load balancer wants to send a new request to a different server, it can't do so without breaking an existing WebSocket connection.
- Using a "least connections" strategy for the load balancer can help, as can minimizing the amount of work the WebSocket servers do as they process messages. Using the reference architecture above and offloading more intensive processing to other services (which can scale independently) can help.
<!--ID: 1752427993551-->

---
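As a rough illustration of the "How WebRTC works" steps above, here is a minimal sketch of the caller side using the browser's `RTCPeerConnection` API. The `signaling` channel, the STUN/TURN server URLs, and the credentials are placeholders I've assumed for the example, not part of any particular system.

```typescript
// Hypothetical signaling channel (e.g. a thin wrapper over a WebSocket to the signaling server).
interface SignalingChannel {
  send(message: any): void;
  onmessage: (message: any) => void;
}
declare const signaling: SignalingChannel;

// Configure STUN (public address discovery) and TURN (relay fallback). URLs/credentials are placeholders.
const peer = new RTCPeerConnection({
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    { urls: "turn:turn.example.com:3478", username: "user", credential: "pass" },
  ],
});

// Step 2: exchange connection info by forwarding our ICE candidates through the signaling server.
peer.onicecandidate = (event) => {
  if (event.candidate) signaling.send({ type: "candidate", candidate: event.candidate });
};

// Step 4: send data directly once connected (a data channel here; audio/video tracks work similarly).
const channel = peer.createDataChannel("cursor-positions");
channel.onopen = () => channel.send(JSON.stringify({ x: 10, y: 20 }));

// Steps 1 & 3: negotiate the session via the signaling server, then let ICE establish the direct connection.
async function call(): Promise<void> {
  const offer = await peer.createOffer();
  await peer.setLocalDescription(offer);
  signaling.send({ type: "offer", sdp: offer.sdp });
}

signaling.onmessage = async (msg) => {
  if (msg.type === "answer") {
    await peer.setRemoteDescription({ type: "answer", sdp: msg.sdp });
  } else if (msg.type === "candidate") {
    await peer.addIceCandidate(msg.candidate);
  }
};

call(); // the callee side mirrors this with createAnswer() instead of createOffer()
```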
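For the "Deployments" point in the WebSocket flashcard, a small sketch of a client that reconnects after a deploy and catches up on missed updates. The `lastSeq` query parameter and the message shape are assumptions about a hypothetical server protocol, purely for illustration.

```typescript
// Track the last update we applied so a reconnect can ask the (hypothetical) server
// to replay anything we missed while disconnected.
let lastSeq = 0;

function connect(): void {
  const ws = new WebSocket(`wss://ws.example.com/updates?lastSeq=${lastSeq}`);

  ws.onmessage = (event) => {
    const update = JSON.parse(event.data) as { seq: number; payload: unknown };
    lastSeq = update.seq; // remember progress so a reconnect can resume from here
    applyUpdate(update);
  };

  // When the server severs connections during a redeploy, reconnect after a short, jittered backoff.
  ws.onclose = () => setTimeout(connect, 1000 + Math.random() * 2000);
}

function applyUpdate(update: { seq: number; payload: unknown }): void {
  // Apply the update to local state (application-specific).
}

connect();
```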
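And for the "least connections" point: in practice this is usually a load balancer setting (e.g. nginx's `least_conn`) rather than application code, but the idea is simply to route each new WebSocket handshake to the server currently holding the fewest open connections. A toy sketch, with made-up hosts and counts:

```typescript
interface ServerStats {
  host: string;
  openConnections: number;
}

// Pick the server with the fewest open WebSocket connections.
function pickLeastConnections(servers: ServerStats[]): ServerStats {
  return servers.reduce((least, s) => (s.openConnections < least.openConnections ? s : least));
}

const pick = pickLeastConnections([
  { host: "ws-1.internal", openConnections: 1200 },
  { host: "ws-2.internal", openConnections: 950 },
  { host: "ws-3.internal", openConnections: 1010 },
]);
// pick.host === "ws-2.internal"
```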