Chapter 24. High Availability, Load Balancing, and Replication

Table of Contents
24.1. Comparison of Different Solutions
24.2. Log-Shipping Standby Servers
24.2.1. Planning
24.2.2. Standby Server Operation
24.2.3. Preparing the Master for Standby Servers
24.2.4. Setting Up a Standby Server
24.2.5. Streaming Replication
24.2.6. Cascading Replication
24.2.7. Synchronous Replication
24.3. Failover
24.4. Alternative Method for Log Shipping
24.4.1. Implementation
24.4.2. Record-based Log Shipping
24.5. Hot Standby
24.5.1. User's Overview
24.5.2. Handling Query Conflicts
24.5.3. Administrator's Overview
24.5.4. Hot Standby Parameter Reference
24.5.5. Caveats

Note: XCONLY: The following description applies only to Postgres-XC.

Postgres-XC inherits most of its high availability features from PostgreSQL. Unless explicitly noted otherwise in this chapter, references to PostgreSQL also apply to Postgres-XC.

Note: Unless explicitly noted otherwise, the following description applies to both Postgres-XC and PostgreSQL. You can read PostgreSQL as Postgres-XC, except for version numbers, which are specific to each product.

Database servers can work together to allow a second server to take over quickly if the primary server fails (high availability), or to allow several computers to serve the same data (load balancing). Ideally, database servers could work together seamlessly. Web servers serving static web pages can be combined quite easily by merely load-balancing web requests to multiple machines. In fact, read-only database servers can be combined relatively easily too. Unfortunately, most database servers have a read/write mix of requests, and read/write servers are much harder to combine. This is because, although read-only data needs to be placed on each server only once, a write to any server must be propagated to all servers so that future read requests to those servers return consistent results.

This synchronization problem is the fundamental difficulty for servers working together. Because there is no single solution that eliminates the impact of the synchronization problem for all use cases, there are multiple solutions. Each solution addresses the problem in a different way, and minimizes its impact for a specific workload.

Some solutions deal with synchronization by allowing only one server to modify the data. Servers that can modify data are called read/write, master or primary servers. Servers that track changes in the master are called standby or slave servers. A standby server that cannot be connected to until it is promoted to a master server is called a warm standby server, and one that can accept connections and serves read-only queries is called a hot standby server.
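Whether a standby runs warm or hot is a configuration choice made on the standby itself. The following is a minimal sketch assuming a PostgreSQL 9.x-era layout (on which Postgres-XC is based); the host name and user are purely illustrative, and Sections 24.2 and 24.5 cover the details:

    # postgresql.conf on the standby:
    hot_standby = on      # 'off' yields a warm standby; 'on' yields a hot standby

    # recovery.conf on the standby:
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
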

Some solutions are synchronous, meaning that a data-modifying transaction is not considered committed until all servers have committed the transaction. This guarantees that a failover will not lose any data and that all load-balanced servers will return consistent results no matter which server is queried. In contrast, asynchronous solutions allow some delay between the time of a commit and its propagation to the other servers, opening the possibility that some transactions might be lost in the switch to a backup server, and that load-balanced servers might return slightly stale results. Asynchronous communication is used when synchronous would be too slow.
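In PostgreSQL's built-in streaming replication, for example, the choice between synchronous and asynchronous behavior is made on the primary. As a minimal sketch (the standby name "standby1" is illustrative and must match the application_name the standby reports in its connection string; see Section 24.2.7):

    # postgresql.conf on the primary:
    synchronous_standby_names = 'standby1'   # empty string means asynchronous replication
    synchronous_commit = on                  # wait for the synchronous standby at commit
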

Solutions can also be categorized by their granularity. Some solutions can deal only with an entire database server, while others allow control at the per-table or per-database level.

Performance must be considered in any choice. There is usually a trade-off between functionality and performance. For example, a fully synchronous solution over a slow network might cut performance by more than half, while an asynchronous one might have a minimal performance impact.

The remainder of this section outlines various failover, replication, and load balancing solutions. A glossary is also available.

Note: XCONLY: The following description applies only to Postgres-XC.

At failover, take care to synchronize the restoration points of all the Coordinators and Datanodes. A barrier helps with this synchronization. See CREATE BARRIER for details.
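As a sketch, a barrier can be created from any Coordinator before a planned failover; the barrier name below is illustrative (see the CREATE BARRIER reference page for the exact syntax and options):

    CREATE BARRIER 'before_failover';

The barrier is propagated to all Coordinators and Datanodes and recorded in their WAL, providing a common restoration point that all nodes can be recovered to.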