Author: Tom Lane.
A PostgreSQL operator definition can include
several optional clauses that tell the system useful things about how
the operator behaves. These clauses should be provided whenever
appropriate, because they can make for considerable speedups in execution
of queries that use the operator. But if you provide them, you must be
sure that they are right! Incorrect use of an optimization clause can
result in backend crashes, subtly wrong output, or other Bad Things.
You can always leave out an optimization clause if you are not sure
about it; the only consequence is that queries might run slower than
they need to.
Additional optimization clauses might be added in future versions of
PostgreSQL. The ones described here are all
the ones that release 7.3 understands.
The COMMUTATOR clause, if provided, names an operator that is the
commutator of the operator being defined. We say that operator A is the
commutator of operator B if (x A y) equals (y B x) for all possible input
values x, y. Notice that B is also the commutator of A. For example,
operators < and > for a particular data type are usually each other's
commutators, and operator + is usually commutative with itself.
But operator - is usually not commutative with anything.
The left operand type of a commuted operator is the same as the
right operand type of its commutator, and vice versa. So the name of
the commutator operator is all that PostgreSQL
needs to be given to look up the commutator, and that's all that needs to
be provided in the COMMUTATOR clause.
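For example, a less-than operator for a hypothetical user-defined type complex (with an assumed C support function complex_lt already in place) could name its commutator like this; this is only a sketch, not a complete definition:

    CREATE OPERATOR < (
        LEFTARG = complex,          -- hypothetical user-defined type
        RIGHTARG = complex,
        PROCEDURE = complex_lt,     -- assumed comparison support function
        COMMUTATOR = >
    );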
When you are defining a self-commutative operator, you just do it.
When you are defining a pair of commutative operators, things are
a little trickier: how can the first one to be defined refer to the
other one, which you haven't defined yet? There are two solutions
to this problem:
One way is to omit the COMMUTATOR clause in the first operator that
you define, and then provide one in the second operator's definition.
Since PostgreSQL knows that commutative
operators come in pairs, when it sees the second definition it will
automatically go back and fill in the missing COMMUTATOR clause in
the first definition.
The other, more straightforward way is just to include COMMUTATOR clauses
in both definitions. When PostgreSQL processes
the first definition and realizes that COMMUTATOR refers to a non-existent
operator, the system will make a dummy entry for that operator in the
system catalog. This dummy entry will have valid data only
for the operator name, left and right operand types, and result type,
since that's all that PostgreSQL can deduce
at this point. The first operator's catalog entry will link to this
dummy entry. Later, when you define the second operator, the system
updates the dummy entry with the additional information from the second
definition. If you try to use the dummy operator before it's been filled
in, you'll just get an error message. (Note: This procedure did not work
reliably in PostgreSQL versions before 6.5,
but it is now the recommended way to do things.)
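Continuing the sketch (the hypothetical type complex and support functions complex_lt and complex_gt are still assumed), the second approach simply names each operator in the other's definition; whichever one is created first will cause a dummy entry for its partner, which the second definition then fills in:

    CREATE OPERATOR < (
        LEFTARG = complex, RIGHTARG = complex,
        PROCEDURE = complex_lt,
        COMMUTATOR = >              -- may create a dummy entry for >
    );

    CREATE OPERATOR > (
        LEFTARG = complex, RIGHTARG = complex,
        PROCEDURE = complex_gt,
        COMMUTATOR = <              -- completes the pair
    );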
The NEGATOR clause, if provided, names an operator that is the
negator of the operator being defined. We say that operator A
is the negator of operator B if both return Boolean results and
(x A y) equals NOT (x B y) for all possible inputs x, y.
Notice that B is also the negator of A.
For example, < and >= are a negator pair for most data types.
An operator can never validly be its own negator.
Unlike commutators, a pair of unary operators could validly be marked
as each other's negators; that would mean (A x) equals NOT (B x)
for all x, or the equivalent for right unary operators.
An operator's negator must have the same left and/or right operand types
as the operator itself, so just as with COMMUTATOR, only the operator
name need be given in the NEGATOR clause.
Providing a negator is very helpful to the query optimizer since
it allows expressions like NOT (x = y) to be simplified into
x <> y. This comes up more often than you might think, because
NOT operations can be inserted as a consequence of other rearrangements.
Pairs of negator operators can be defined using the same methods
explained above for commutator pairs.
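For instance, an equality/inequality pair for the hypothetical complex type (assuming support functions complex_eq and complex_ne) might be sketched as:

    CREATE OPERATOR = (
        LEFTARG = complex, RIGHTARG = complex,
        PROCEDURE = complex_eq,     -- assumed equality support function
        COMMUTATOR = =,             -- equality is its own commutator
        NEGATOR = <>                -- lets NOT (x = y) be simplified to x <> y
    );

    CREATE OPERATOR <> (
        LEFTARG = complex, RIGHTARG = complex,
        PROCEDURE = complex_ne,     -- assumed inequality support function
        COMMUTATOR = <>,
        NEGATOR = =
    );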
The RESTRICT clause, if provided, names a restriction selectivity
estimation function for the operator (note that this is a function
name, not an operator name). RESTRICT clauses only make sense for
binary operators that return boolean. The idea behind a restriction
selectivity estimator is to guess what fraction of the rows in a
table will satisfy a WHERE-clause condition of the form
    column OP constant
for the current operator and a particular constant value.
This assists the optimizer by
giving it some idea of how many rows will be eliminated by WHERE
clauses that have this form. (What happens if the constant is on
the left, you may be wondering? Well, that's one of the things that
COMMUTATOR is for...)
Writing new restriction selectivity estimation functions is far beyond
the scope of this chapter, but fortunately you can often just use
one of the system's standard estimators for many of your own operators.
These are the standard restriction estimators:
    eqsel          for =
    neqsel         for <>
    scalarltsel    for < or <=
    scalargtsel    for > or >=
It might seem a little odd that these are the categories, but they
make sense if you think about it.
= will typically accept only
a small fraction of the rows in a table;
<> will typically reject
only a small fraction.
< will accept a fraction that depends on
where the given constant falls in the range of values for that table
column (which, it just so happens, is information collected by
ANALYZE and made available to the selectivity estimator).
<= will accept a slightly larger fraction than
< for the same
comparison constant, but they're close enough to not be worth
distinguishing, especially since we're not likely to do better than a
rough guess anyhow. Similar remarks apply to
> and
>=.
You can frequently get away with using either eqsel or neqsel for
operators that have very high or very low selectivity, even if they
aren't really equality or inequality. For example, the
approximate-equality geometric operators use eqsel on the assumption that
they'll usually only match a small fraction of the entries in a table.
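As an illustration, an approximate-equality operator for the hypothetical complex type (with an assumed support function complex_abs_eq that compares absolute values) might simply borrow eqsel; again, this is only a sketch:

    CREATE OPERATOR === (
        LEFTARG = complex, RIGHTARG = complex,
        PROCEDURE = complex_abs_eq, -- assumed: compares absolute values
        COMMUTATOR = ===,
        RESTRICT = eqsel            -- expected to match few rows
    );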
You can use scalarltsel and scalargtsel for comparisons on data types that
have some sensible means of being converted into numeric scalars for
range comparisons. If possible, add the data type to those understood
by the routine convert_to_scalar() in src/backend/utils/adt/selfuncs.c.
(Eventually, this routine should be replaced by per-data-type functions
identified through a column of the pg_type system catalog; but that hasn't happened
yet.) If you do not do this, things will still work, but the optimizer's
estimates won't be as good as they could be.
There are additional selectivity functions designed for geometric
operators in src/backend/utils/adt/geo_selfuncs.c: areasel, positionsel,
and contsel. At this writing these are just stubs, but you may want
to use them (or even better, improve them) anyway.
The JOIN clause, if provided, names a join selectivity
estimation function for the operator (note that this is a function
name, not an operator name). JOIN clauses only make sense for
binary operators that return boolean. The idea behind a join
selectivity estimator is to guess what fraction of the rows in a
pair of tables will satisfy a WHERE-clause condition of the form
    table1.column1 OP table2.column2
for the current operator. As with the RESTRICT clause, this helps
the optimizer very substantially by letting it figure out which
of several possible join sequences is likely to take the least work.
As before, this chapter will make no attempt to explain how to write
a join selectivity estimator function, but will just suggest that
you use one of the standard estimators if one is applicable:
    eqjoinsel          for =
    neqjoinsel         for <>
    scalarltjoinsel    for < or <=
    scalargtjoinsel    for > or >=
    areajoinsel        for 2D area-based comparisons
    positionjoinsel    for 2D position-based comparisons
    contjoinsel        for 2D containment-based comparisons
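For example, the sketched approximate-equality operator above would normally name its join estimator together with its restriction estimator:

    CREATE OPERATOR === (
        LEFTARG = complex, RIGHTARG = complex,
        PROCEDURE = complex_abs_eq, -- assumed support function, as before
        COMMUTATOR = ===,
        RESTRICT = eqsel,
        JOIN = eqjoinsel            -- join counterpart of eqsel
    );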
The HASHES clause, if present, tells the system that
it is permissible to use the hash join method for a join based on this
operator. HASHES only makes sense for binary operators that
return boolean, and in practice the operator had better be
equality for some data type.
The assumption underlying hash join is that the join operator can
only return true for pairs of left and right values that hash to the
same hash code. If two values get put in different hash buckets, the
join will never compare them at all, implicitly assuming that the
result of the join operator must be false. So it never makes sense
to specify HASHES for operators that do not represent equality.
In fact, logical equality is not good enough either; the operator
had better represent pure bitwise equality, because the hash function
will be computed on the memory representation of the values regardless
of what the bits mean. For example, equality of
time intervals is not bitwise equality; the interval equality operator
considers two time intervals equal if they have the same
duration, whether or not their endpoints are identical. What this means
is that a join using = between interval fields would yield different
results if implemented as a hash join than if implemented another way,
because a large fraction of the pairs that should match will hash to
different values and will never be compared by the hash join. But
if the optimizer chose to use a different kind of join, all the pairs
that the equality operator says are equal will be found.
We don't want that kind of inconsistency, so we don't mark interval
equality as hashable.
There are also machine-dependent ways in which a hash join might fail
to do the right thing. For example, if your data type
is a structure in which there may be uninteresting pad bits, it's unsafe
to mark the equality operator HASHES. (Unless, perhaps, you write
your other operators to ensure that the unused bits are always zero.)
Another example is that the floating-point data types are unsafe for hash
joins. On machines that meet the IEEE floating-point standard, minus
zero and plus zero are different values (different bit patterns) but
they are defined to compare equal. So, if the equality operator on floating-point data types were marked
HASHES, a minus zero and a plus zero would probably not be matched up
by a hash join, but they would be matched up by any other join process.
The bottom line is that you should probably only use HASHES for
equality operators that are (or could be) implemented by memcmp().
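As a sketch, for a hypothetical fixed-size type mytype whose equality function mytype_eq effectively compares the raw bytes (no pad bits, no distinct bit patterns that compare equal), marking the operator hashable would look like this:

    CREATE OPERATOR = (
        LEFTARG = mytype, RIGHTARG = mytype,
        PROCEDURE = mytype_eq,      -- assumed to behave like memcmp() equality
        COMMUTATOR = =,
        NEGATOR = <>,
        RESTRICT = eqsel,
        JOIN = eqjoinsel,
        HASHES
    );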
The MERGES clause, if present, tells the system that
it is permissible to use the merge join method for a join based on this
operator. MERGES only makes sense for binary operators that
return boolean, and in practice the operator must represent
equality for some data type or pair of data types.
Merge join is based on the idea of sorting the left- and right-hand tables
into order and then scanning them in parallel. So, both data types must
be capable of being fully ordered, and the join operator must be one
that can only succeed for pairs of values that fall at the
"same place"
in the sort order. In practice this means that the join operator must
behave like equality. But unlike hash join, where the left and right
data types had better be the same (or at least bitwise equivalent),
it is possible to merge-join two
distinct data types so long as they are logically compatible. For
example, the int2-versus-int4 equality operator
is merge-joinable.
We only need sorting operators that will bring both data types into a
logically compatible sequence.
Execution of a merge join requires that the system be able to identify
four operators related to the merge-join equality operator: less-than
comparison for the left input data type, less-than comparison for the
right input data type, less-than comparison between the two data types, and
greater-than comparison between the two data types. (These are actually
four distinct operators if the merge-joinable operator has two different
input data types; but when the input types are the same the three
less-than operators are all the same operator.)
It is possible to
specify these operators individually by name, as the SORT1,
SORT2, LTCMP, and GTCMP options
respectively. The system will fill in the default names
<, <, <, >
respectively if any of these are omitted when MERGES is
specified. Also, MERGES is implied if any
of these four operator options appear, so it is possible to specify
just some of them and let the system fill in the rest.
The input data types of the four comparison operators can be deduced
from the input types of the merge-joinable operator, so just as with
COMMUTATOR, only the operator names need be given in these
clauses. Unless you are using peculiar choices of operator names,
it's sufficient to write MERGES and let the system fill in
the details.
(As with COMMUTATOR and NEGATOR, the system is
able to make dummy
operator entries if you happen to define the equality operator before
the other ones.)
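For instance, still using the hypothetical mytype, writing just MERGES lets the system assume the four comparison operators carry the default names; spelling out SORT1, SORT2, LTCMP, and GTCMP explicitly would be equivalent here:

    CREATE OPERATOR = (
        LEFTARG = mytype, RIGHTARG = mytype,
        PROCEDURE = mytype_eq,      -- assumed equality support function
        COMMUTATOR = =,
        RESTRICT = eqsel,
        JOIN = eqjoinsel,
        MERGES                      -- implies SORT1 = <, SORT2 = <, LTCMP = <, GTCMP = >
    );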
There are additional restrictions on operators that you mark
merge-joinable. These restrictions are not currently checked by
CREATE OPERATOR, but errors may occur when
the operator is used if any of them are not true:
A merge-joinable equality operator must have a merge-joinable
commutator (itself if the two data types are the same, or a related
equality operator if they are different).
If there is a merge-joinable operator relating any two data types
A and B, and another merge-joinable operator relating B to any
third data type C, then A and C must also have a merge-joinable
operator; in other words, having a merge-joinable operator must
be transitive.
Bizarre results will ensue at runtime if the four comparison
operators you name do not sort the data values compatibly.
Note: In PostgreSQL versions before 7.3,
the MERGES shorthand was not available: to make a
merge-joinable operator one had to write both SORT1 and
SORT2 explicitly. Also, the LTCMP and
GTCMP
options did not exist; the names of those operators were hardwired as
< and > respectively.