Author: Written by Martin Utesch (<[email protected]>) for the Institute of Automatic Control at the University of Mining and Technology in Freiberg, Germany.
Among all relational operators, the most difficult one to process and optimize is the join. The number of alternative plans to answer a query grows exponentially with the number of joins it contains. Additional optimization effort comes from supporting a variety of join methods (e.g., nested loop, hash join, merge join in PostgreSQL) to process individual joins, and a diversity of indexes (e.g., R-tree, B-tree, hash in PostgreSQL) as access paths for relations.
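As a rough back-of-the-envelope illustration (the counting below is ours, not part of the optimizer documentation): even restricted to left-deep trees there are n! orderings of n relations, and each of the n-1 joins could in principle use any of the three join methods, giving on the order of n! × 3^(n-1) candidate plans before access paths are even considered. The toy program below just tabulates that product for small n.

```c
#include <stdio.h>

/* Crude lower bound on the join search space: n! left-deep orderings
 * of n relations, times 3^(n-1) join-method choices (nested loop,
 * hash join, merge join), ignoring bushy plans, access paths, and
 * join-predicate restrictions. */
int main(void)
{
    unsigned long long orders = 1, methods = 1;

    for (int n = 2; n <= 12; n++)
    {
        orders *= n;            /* n!       */
        methods *= 3;           /* 3^(n-1)  */
        printf("%2d relations: %12llu orderings x %7llu method choices = %llu plans\n",
               n, orders, methods, orders * methods);
    }
    return 0;
}
```

For a 12-way join this already exceeds 10^13 plans, which is why exhaustive enumeration quickly stops being practical.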
The current PostgreSQL optimizer implementation performs a near-exhaustive search over the space of alternative strategies. This algorithm, first introduced in the "System R" database, produces a near-optimal join order, but can take an enormous amount of time and memory space when the number of joins in the query grows large. This makes the ordinary PostgreSQL query optimizer inappropriate for queries that join a large number of tables.
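The sketch below (a toy with made-up row counts and a fake cost model, not PostgreSQL's planner code) shows the general shape of such a dynamic-programming search over join orders: it memoizes the cheapest plan found for every subset of relations, so the memo table alone has 2^n - 1 entries and the number of subset splits examined grows roughly like 3^n, which is where the time and memory go.

```c
#include <stdio.h>

#define NREL     12                  /* number of base relations (hypothetical) */
#define NSUBSETS (1u << NREL)

static double rel_rows[NREL];        /* made-up row counts, not catalog statistics */
static double best_cost[NSUBSETS];   /* cheapest cost found for each relation subset */
static double out_rows[NSUBSETS];    /* estimated output rows for each subset */

int main(void)
{
    unsigned long long splits_examined = 0;

    for (int i = 0; i < NREL; i++)
        rel_rows[i] = 1000.0 * (i + 1);

    for (unsigned s = 1; s < NSUBSETS; s++)
        best_cost[s] = -1.0;         /* -1 means "not yet planned" */

    /* Base case: a single relation needs no join. */
    for (int i = 0; i < NREL; i++)
    {
        best_cost[1u << i] = 0.0;
        out_rows[1u << i] = rel_rows[i];
    }

    /* Visit subsets in increasing bitmask order, so every proper
     * subset of s is already planned when we reach s. */
    for (unsigned s = 1; s < NSUBSETS; s++)
    {
        if (best_cost[s] >= 0.0)
            continue;                /* single relation, already done */

        /* Try every split of s into two non-empty halves. */
        for (unsigned left = (s - 1) & s; left > 0; left = (left - 1) & s)
        {
            unsigned right = s & ~left;
            double   cost;

            splits_examined++;

            /* Toy cost model: cost of both inputs plus the product of
             * their row counts (roughly a nested-loop join). */
            cost = best_cost[left] + best_cost[right]
                 + out_rows[left] * out_rows[right];

            if (best_cost[s] < 0.0 || cost < best_cost[s])
            {
                best_cost[s] = cost;
                /* Toy selectivity: keep 1% of the cross product. */
                out_rows[s] = out_rows[left] * out_rows[right] * 0.01;
            }
        }
    }

    printf("relations: %d, memo entries: %u, splits examined: %llu\n",
           NREL, NSUBSETS - 1, splits_examined);
    printf("best estimated cost for the full join: %.0f\n",
           best_cost[NSUBSETS - 1]);
    return 0;
}
```

Already at 12 relations the search examines roughly half a million subset splits; every additional relation multiplies that work by about three and doubles the memo table.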
The Institute of Automatic Control at the University of Mining and Technology in Freiberg, Germany, encountered these problems when it wanted to use the PostgreSQL DBMS as the backend for a decision support knowledge-based system for the maintenance of an electrical power grid. The DBMS needed to handle large join queries for the inference machine of the knowledge-based system.
Performance difficulties in exploring the space of possible query plans created the demand for a new optimization technique.
In the following we describe the implementation of a Genetic Algorithm to solve the join ordering problem in a manner that is efficient for queries involving large numbers of joins.
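As a simplified preview (a sketch under our own assumptions, not the actual GEQO code, which among other things uses edge recombination crossover and PostgreSQL's real cost functions), such a genetic algorithm encodes each candidate plan as a permutation of the relations, much like a tour in the traveling salesman problem, and evolves a population of permutations toward cheaper plans. The row counts, the plan_cost() function, and all parameters below are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define NREL 10        /* relations to join (hypothetical) */
#define POP  50        /* population size */
#define GENS 2000      /* steady-state iterations */

static double rel_rows[NREL];   /* made-up row counts, not catalog statistics */

/* Toy fitness: cost of the left-deep join tree implied by the
 * permutation, using a crude nested-loop cost and fixed selectivity. */
static double plan_cost(const int *perm)
{
    double rows = rel_rows[perm[0]], cost = 0.0;
    for (int i = 1; i < NREL; i++)
    {
        cost += rows * rel_rows[perm[i]];        /* join work        */
        rows  = rows * rel_rows[perm[i]] * 0.01; /* output estimate  */
    }
    return cost;
}

static void random_perm(int *perm)
{
    for (int i = 0; i < NREL; i++) perm[i] = i;
    for (int i = NREL - 1; i > 0; i--)           /* Fisher-Yates shuffle */
    {
        int j = rand() % (i + 1), t = perm[i];
        perm[i] = perm[j]; perm[j] = t;
    }
}

/* Order crossover (OX): copy a slice from parent a, then fill the
 * remaining positions with the missing relations in parent b's order. */
static void crossover(const int *a, const int *b, int *child)
{
    int lo = rand() % NREL, hi = rand() % NREL, used[NREL] = {0};
    if (lo > hi) { int t = lo; lo = hi; hi = t; }
    for (int i = lo; i <= hi; i++) { child[i] = a[i]; used[a[i]] = 1; }
    for (int i = 0, k = (hi + 1) % NREL; i < NREL; i++)
        if (!used[b[i]]) { child[k] = b[i]; k = (k + 1) % NREL; }
}

int main(void)
{
    static int pop[POP][NREL];
    static double cost[POP];

    srand((unsigned) time(NULL));
    for (int i = 0; i < NREL; i++)
        rel_rows[i] = 100.0 + rand() % 100000;

    for (int i = 0; i < POP; i++)
    {
        random_perm(pop[i]);
        cost[i] = plan_cost(pop[i]);
    }

    for (int g = 0; g < GENS; g++)
    {
        /* Pick two parents by binary tournament selection. */
        int p1 = rand() % POP, p2 = rand() % POP;
        if (cost[p2] < cost[p1]) p1 = p2;
        int q1 = rand() % POP, q2 = rand() % POP;
        if (cost[q2] < cost[q1]) q1 = q2;

        int child[NREL];
        crossover(pop[p1], pop[q1], child);

        /* Swap mutation with 10% probability. */
        if (rand() % 100 < 10)
        {
            int i = rand() % NREL, j = rand() % NREL;
            int t = child[i]; child[i] = child[j]; child[j] = t;
        }

        /* Steady-state replacement: the child replaces the worst member. */
        int worst = 0;
        for (int i = 1; i < POP; i++)
            if (cost[i] > cost[worst]) worst = i;
        memcpy(pop[worst], child, sizeof child);
        cost[worst] = plan_cost(child);
    }

    int best = 0;
    for (int i = 1; i < POP; i++)
        if (cost[i] < cost[best]) best = i;
    printf("best join order found (cost %.0f):", cost[best]);
    for (int i = 0; i < NREL; i++) printf(" %d", pop[best][i]);
    printf("\n");
    return 0;
}
```

The important property is that only a bounded number of plans (one per iteration, plus the initial population) is ever costed, so the effort grows with the number of generations rather than exponentially with the number of relations, at the price of a plan that is good but not guaranteed optimal.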