In version 17 we added support for cross-partition EXCLUDE
constraints, as long as they included all partition key columns and
compared them with equality (see 8c852ba9a4). I updated the docs for
exclusion constraints, but I missed that the docs for CREATE TABLE
still said that they were not supported. This commit fixes that.
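For illustration, a minimal example of the now-documented case (hypothetical
schema; btree_gist supplies the gist equality operator for the integer
partition key):

    CREATE EXTENSION IF NOT EXISTS btree_gist;
    CREATE TABLE meeting (
        room   int,
        during tsrange,
        EXCLUDE USING gist (room WITH =, during WITH &&)
    ) PARTITION BY LIST (room);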
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Co-authored-by: Jeff Davis <pgsql@j-davis.com>
Discussion: https://postgr.es/m/c955d292-b92d-42d1-a2a0-1ec6715a2546@illuminatedcomputing.com
Backpatch-through: 17
Increase consistency of --help and man page synopses between pg_dump
and pg_dumpall. These should now be very similar, since pg_dumpall can
now also produce non-text dump output, but they had actually drifted
further apart.
- Use verb "export" consistently, instead of "dump" or "extract".
- Use "SQL script" instead of just "script" or "text file".
- Maintain consistent distinction between SQL script and other
formats/archives (which is relevant for pg_restore).
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://www.postgresql.org/message-id/flat/3f71d8a7-095b-4829-9b0b-fce09e9866b3%40eisentraut.org
Improve the clarity of LOG messages when a failover logical slot
synchronization fails, making the reasons more explicit for easier
debugging.
Update the documentation to outline scenarios where slot synchronization
can fail, especially during the initial sync, and emphasize that
pg_sync_replication_slot() is primarily intended for testing and
debugging purposes.
We also discussed improving the functionality of
pg_sync_replication_slot() so that it can be used reliably, but that work
will be taken up in a future version after some more discussion and review.
Reported-by: Suraj Kharage <suraj.kharage@enterprisedb.com>
Author: shveta malik <shveta.malik@gmail.com>
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Backpatch-through: 17, where it was introduced
Discussion: https://postgr.es/m/CAF1DzPWTcg+m+x+oVVB=y4q9=PYYsL_mujVp7uJr-_oUtWNGbA@mail.gmail.com
Commit 8492feb98f added support for parallel CREATE INDEX on GIN indexes.
However, two places in the documentation and two in the source
code comments still stated that only B-tree and BRIN indexes support
parallel builds.
This commit updates those references to correctly include GIN indexes.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/7d27d068-90e2-4022-9bd7-09b0fd3d4f47@oss.nttdata.com
Commit 302cf15759 added support for LIKE in CREATE FOREIGN TABLE.
Since this feature does not create indexes for foreign tables,
comments on indexes are not copied either.
However, the documentation incorrectly stated that index comments
would be copied when using INCLUDING COMMENTS. This commit
corrects that by removing the mention of index comments.
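For example (hypothetical local table and foreign server, not taken from the
commit):

    CREATE FOREIGN TABLE ft_orders (
        LIKE orders INCLUDING COMMENTS
    ) SERVER remote_srv;
    -- Comments on the source table's columns are copied; comments on its
    -- indexes are not, since no indexes are created on the foreign table.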
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/f86cd84f-a6a3-4451-bae7-5cca9e63b06d@oss.nttdata.com
I thought underscores wouldn't even work in "id"s, so I never checked to
see if anything referenced it, but it seems they do work, so adjust the
calling site to the dash syntax.
Commit 1fd1bd8710 introduced support for dumping statistics with
pg_dump and pg_dumpall, covering tables, materialized views, and indexes.
However, it overlooked foreign tables, even though functions like
pg_restore_relation_stats() support them.
This commit fixes that oversight by allowing pg_dump and pg_dumpall
to include statistics for foreign tables.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/3772e4e4-ef39-4deb-bb76-aa8165f33fb6@oss.nttdata.com
Commit 285613c60a introduced the min_protocol_version and
max_protocol_version connection options for libpq, but their descriptions
were placed in the middle of the unrelated ssl_min_protocol_version and
ssl_max_protocol_version entries.
This commit moves the min_protocol_version and max_protocol_version
descriptions to appear after the SSL-related options. This improves
the logical order and makes it easier for users to locate the relevant
settings in the libpq documentation.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/a3391f36-30f5-4d4a-825b-232476819de8@oss.nttdata.com
Commit bba2fbc623 introduced a new implementation of the \conninfo
command in psql. That new code uses the term "TLS" while the rest of
PostgreSQL, including the rest of psql, consistently uses "SSL". This
is uselessly confusing. This changes the new code to use "SSL" as
well.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/f4ff9294-b491-4053-83f5-11c10ab8c999@eisentraut.org
In the \conninfo psql command, the "Client User" column shows the user who
established the connection, while the "Superuser" column reflects whether
the current user in the current execution context is a superuser. This means
the users referred to in these columns can differ, for example, if the current
user was changed with the SET ROLE command.
This commit adds a note to the \conninfo documentation to clarify
this behavior and avoid potential confusion.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/685961b8-b6ce-40bb-b2d5-c2ff135d3388@oss.nttdata.com
Reword the documentation around the default value to make the interaction
between WATCH_INTERVAL and the \watch command clearer. While there,
also remove a stray parenthesis left over from a previous version of
the patch.
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/c34a650b-6f8b-4da7-9ebb-b6df03ce009d@eisentraut.org
Running COPY within a pipeline can break protocol synchronization in
multiple ways. psql is limited in terms of result processing when mixing
COPY commands with normal queries while controlling a pipeline with the
new meta-commands, for the following reasons:
- In COPY mode, the backend ignores additional Sync messages and will
not send a matching ReadyForQuery expected by the frontend. Doing a
\syncpipeline just after COPY will leave the frontend waiting for a
ReadyForQuery message that won't be sent, leaving psql out-of-sync.
- libpq automatically sends a Sync with the Copy message which is not
tracked in the command queue, creating an unexpected synchronization
point that psql cannot really know about. While it is possible to track
such activity for a \copy, this cannot really be done sanely with plain
COPY queries. Backend failures during a COPY would leave the pipeline
in an aborted state while the backend would be in a clean state, ready
to process commands.
Ultimately, fixing those issues would require modifications to how libpq
handles pipelines and COPY. So, rather than implementing workarounds in
psql to shortcut the libpq internals (with command queue handling for
one), and because meta-commands for pipelines in psql are a new feature
with COPY in a pipeline having a limited impact compared to other
queries, this commit forbids the use of COPY within a pipeline to avoid
possible breakage of protocol synchronization within psql. If there is a
use-case for COPY support within pipelines in libpq, this could always
be added in the future, if necessary.
Most of the changes in this commit affect the tests for psql pipelines,
removing the tests related to COPY. Some TAP tests still exist for COPY
TO/FROM and \copy to/from, to check that connections are aborted
when this operation is attempted.
Reported-by: Nikita Kalinin <n.kalinin@postgrespro.ru>
Author: Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>
Discussion: https://postgr.es/m/AC468509-06E8-4E2A-A4B1-63046A4AC6AB@postgrespro.ru
Commit bde2fb797a added the --with-schema, --with-data, and --with-statistics
options to pg_restore. These options control whether to restore schema, data,
or statistics if present in the archive. However, the help message and
documentation incorrectly described them as affecting what gets dumped.
This commit corrects those descriptions to clarify that the options control
restoration, not dumping.
Bug: #18952
Reported-by: TAKATSUKA Haruka <harukat@sraoss.co.jp>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: TAKATSUKA Haruka <harukat@sraoss.co.jp>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/18952-be40a620f8b1e755@postgresql.org
Database object statistics manipulation functions were introduced
in PostgreSQL 18 and are permitted under the MAINTAIN privilege.
However, the documentation previously did not mention these functions
in the list of allowed operations.
This commit updates the MAINTAIN privilege documentation to
explicitly include statistics manipulation functions, clarifying
what the privilege covers.
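As a rough illustration (hypothetical role and table names):

    GRANT MAINTAIN ON TABLE measurements TO stats_admin;
    -- stats_admin may now run VACUUM, ANALYZE, etc. on "measurements" and,
    -- per this doc change, the statistics manipulation functions as well.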
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/7c7e1ad5-fdf9-486f-bc63-40ac99b0461d@oss.nttdata.com
We concluded that commit e5a3c9d9b is a feature rather than a fix; since
it was added after feature freeze, revert it.
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reported-by: Michael Paquier <michael@paquier.xyz>
Reported-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/ed2296f1-1a6b-4932-b870-5bb18c2591ae%40oss.nttdata.com
The documentation for the pg_authid system catalog and the
pg_shadow system view indicates that passwords might be stored in
cleartext, but that hasn't been possible for some time.
Oversight in commit eb61136dc7.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aD2yKkZro4nbl5ol%40nathan
Backpatch-through: 13
This commit renames the GUC log_lock_failure to log_lock_failures
to align with the existing similar setting log_lock_waits, which uses
the plural form. This improves naming consistency across related GUCs.
Suggested-by: Peter Eisentraut <peter@eisentraut.org>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/7a8198b6-d5b8-4910-b41e-8d3efcbb015d@eisentraut.org
Commit 7406ab623f added a gist support function that we internally
refer to by the symbol GIST_STRATNUM_PROC. This translated from
"well-known" strategy numbers to opfamily-specific strategy numbers.
However, we later (commit 630f9a43ce) changed this to fit into
index-AM-level compare type mapping, so this function actually now
maps from compare type to opfamily-specific strategy numbers. So this
name is no longer fitting.
Moreover, the index AM level also supports the opposite, a function to
map from strategy number to compare type. This is currently not
supported in gist, but one might wonder what this function is supposed
to be called when it is added.
This patch changes the naming of the gist-level functionality to be
more in line with the index-AM-level functionality. This makes sense
because these are essentially the same thing on different levels.
This also changes the names of the externally visible functions that
are provided for use as such a support function.
Reviewed-by: Paul A Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://www.postgresql.org/message-id/37ebb1d9-9036-485f-a215-e55435689917%40eisentraut.org
Previously, postgres_fdw always 1) opened a remote transaction in READ
WRITE mode even when the local transaction was READ ONLY, so a READ ONLY
local transaction that referenced a foreign table mapped to a remote view
executing a volatile function could write on the remote side, and
2) opened the remote transaction in NOT DEFERRABLE mode even when the
local transaction was DEFERRABLE, so a SERIALIZABLE READ ONLY DEFERRABLE
local transaction could abort due to a serialization failure on the
remote side.
To avoid these problems, modify postgres_fdw to open a remote transaction
in the same access/deferrable modes as the local transaction. This commit
also modifies it to open a remote subtransaction in the same access mode
as the local subtransaction.
Although these issues have existed since the introduction of postgres_fdw,
there have been no reports from the field. So it seems fine to just fix
them in master only.
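For instance, with this change a local READ ONLY transaction now propagates
its access mode to the remote side (hypothetical foreign table name):

    BEGIN TRANSACTION READ ONLY;
    -- postgres_fdw now opens the remote transaction READ ONLY as well, so a
    -- remote view invoking a volatile function cannot write on the remote side.
    SELECT count(*) FROM remote_view_ft;
    COMMIT;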
Author: Etsuro Fujita <etsuro.fujita@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAPmGK16n_hcUUWuOdmeUS%2Bw4Q6dZvTEDHb%3DOP%3D5JBzo-M3QmpQ%40mail.gmail.com
Set the default behavior of pg_dump and pg_dumpall to be
--no-statistics.
Leave the default for pg_restore and pg_upgrade to be
--with-statistics.
Discussion: https://postgr.es/m/CA+TgmoZ9=RnWcCOZiKYYjZs_AW1P4QXCw--h4dOLLHuf1Omung@mail.gmail.com
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
To improve the usability of logical replication examples, we need to
enable bulk copy-pasting of sequences of DDL/DML commands.
Currently, output command tags and prompts disrupt this workflow. While
prompts are typically removed, converting them to comments is acceptable
here, given the multi-server context.
Additionally, ensure all examples containing operators like < and > are
wrapped in CDATA blocks to guarantee correct rendering and consistency
with other places.
Author: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Discussion: https://postgr.es/m/CAKFQuwbhbL1uaDTuo9shmo1rA-fX6XGotR7qZQ7rd-ia5ZDoQA@mail.gmail.com
The documentation for exported snapshots in logical decoding previously
stated that snapshot creation may fail on a hot standby. This is no longer
accurate, as snapshot exporting on standbys has been supported since
PostgreSQL 10. This commit removes the outdated description.
Additionally, the docs referred to the NOEXPORT_SNAPSHOT option to
suppress snapshot exporting in CREATE_REPLICATION_SLOT. However,
since PostgreSQL 15, NOEXPORT_SNAPSHOT is considered legacy syntax
and retained only for backward compatibility. This commit updates
the documentation for v15 and later to use the modern equivalent:
SNAPSHOT 'nothing'. The older syntax is preserved in documentation for
v14 and earlier.
Back-patched to all supported branches.
Reported-by: Kevin K Biju <kevinkbiju@gmail.com>
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Kevin K Biju <kevinkbiju@gmail.com>
Discussion: https://postgr.es/m/174791480466.798.17122832105389395178@wrigleys.postgresql.org
Backpatch-through: 13
9219093cab modularized log_connections output to allow more
granular control over which aspects of connection establishment are
logged. It converted the boolean log_connections GUC into a list of strings
and deprecated previously supported boolean-like values on, off, true,
false, 1, 0, yes, and no. Those values still work, but they are
supported mainly for backwards compatibility. As such, documented
examples of log_connections should not use these deprecated values.
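For example, the documented style now prefers the list form over the
deprecated boolean form (a sketch; any subset of the supported aspects may be
listed):

    -- deprecated but still accepted: log_connections = on
    ALTER SYSTEM SET log_connections = 'receipt,authentication,authorization';
    SELECT pg_reload_conf();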
Update references in the docs that use deprecated log_connections values.
Many of the tests use log_connections; this commit also updates them to
use the new values. In some of the tests, the updated
log_connections value covers a narrower set of aspects (e.g. the
'authentication' aspect in the tests in src/test/authentication and the
'receipt' aspect in src/test/postmaster). In other cases, the new value
for log_connections is a superset of the previous included aspects (e.g.
'all' in src/test/kerberos/t/001_auth.pl).
Reported-by: Peter Eisentraut <peter@eisentraut.org>
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/e1586594-3b69-4aea-87ce-73a7488cdc97%40eisentraut.org
As pointed out by Tom Lane, the patch introduced fragile and invasive
design around plan invalidation handling when locking of prunable
partitions was deferred from plancache.c to the executor. In
particular, it violated assumptions about CachedPlan immutability and
altered executor APIs in ways that are difficult to justify given the
added complexity and overhead.
This also removes the firstResultRels field added to PlannedStmt in
commit 28317de72, which was intended to support deferred locking of
certain ModifyTable result relations.
Reported-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/605328.1747710381@sss.pgh.pa.us
Commit db6a4a985b categorized md5_password_warnings as an
authentication setting, and the placement in postgresql.conf.sample
matches that, but in the documentation it ended up under logging
settings, which isn't unreasonable but inconsistent. This moves the
documentation chunk to authentication settings as well.
Invalid indexes are suffixed with "_ccnew" or "_ccold". The
documentation failed to mention the initial underscore.
ChooseRelationName() may also append an extra number if indexes with a
similar name already exist; let's add a note about that too.
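For reference, a query along these lines (a sketch, not part of the docs
change) lists such leftover invalid indexes:

    SELECT c.relname
    FROM pg_class c
    JOIN pg_index i ON i.indexrelid = c.oid
    WHERE NOT i.indisvalid
      AND c.relname ~ '_cc(new|old)';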
Author: Alec Cozens <acozens@pixelpower.com>
Discussion: https://postgr.es/m/174733277404.1455388.11471370288789479593@wrigleys.postgresql.org
Backpatch-through: 13
This aligns the copyright and legal notice wording with commit
a233a603ba and pgweb commit 2d764dbc083ab8. Backpatch down
to all supported versions.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Dave Page <dpage@pgadmin.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/744E414E-3F52-404C-97FB-ED9B3AA37DC8@yesql.se
Backpatch-through: 13
The macros INJECTION_POINT() and INJECTION_POINT_CACHED() are extended
with an optional argument that can be passed down to the callback
attached when an injection point is run, giving callbacks the
possibility to manipulate a stack state given by the caller. The
existing callbacks in modules injection_points and test_aio have their
declarations adjusted based on that.
da7226993f (core AIO infrastructure) and 93bc3d75d8 (test_aio) have
been relying on a set of workarounds where a static variable called
pgaio_inj_cur_handle is used as a runtime argument in the injection point
callbacks used by the AIO tests, in combination with a TRY/CATCH block
to reset the argument value. The infrastructure introduced in this
commit will be reused for the AIO tests, simplifying them.
Reviewed-by: Greg Burd <greg@burd.me>
Discussion: https://postgr.es/m/Z_y9TtnXubvYAApS@paquier.xyz
The title for AT TIME ZONE and AT LOCAL was accidentally wrapping the
"and" in the <literal> tag. Backpatch to v17 where it was introduced
in 97957fdbaa.
Author: Noboru Saito <noborusai@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Tatsuo Ishii <ishii@postgresql.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/CAAM3qn+7QUWW9R6_YwPKXmky0xGE4n63U3EsxZeWE_QtogeU8g@mail.gmail.com
Backpatch-through: 17
A few places that access this catalog don't set up an active
snapshot before potentially accessing its TOAST table. However,
roname (the replication origin name) is the only varlena column, so
this is only a problem if the name requires out-of-line storage.
This commit removes its TOAST table to avoid needing to set up a
snapshot. It also places a limit on replication origin names so
that attempts to set long names will fail with a more user-friendly
error. The chosen limit of 512 bytes should be sufficient to
avoid "row is too big" errors independent of BLCKSZ, but it should
also be lenient enough for all reasonable use-cases.
Bumps catversion.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Nisha Moond <nisha.moond412@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/ZvMSUPOqUU-VNADN%40nathan
I recently added this option to pg_dump, but I forgot to add it to
pg_dumpall, too. There's probably little use for it at the moment,
but we will need it if/when we teach pg_upgrade to use pg_dumpall
to dump the database schemas.
Oversight in commit 9c49f0e8cd.
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/aBE8rHFo922xQUwh%40nathan
The extension_control_path setting (commit 4f7f7b0375) did not
support extensions that set a custom "directory" setting in their
control file. Very few extensions use that and during the discussion
on the previous commit it was suggested to maybe remove that
functionality. But a fix was easier than initially thought, so this
just adds that support. The fix is to use the control->control_dir as
a share dir to return the path of the extension script files.
To make this work more sensibly overall, the directory suffix
"extension" is no longer to be included in the extension_control_path
value. To quote the patch, it would be
-extension_control_path = '/usr/local/share/postgresql/extension:/home/my_project/share/extension:$system'
+extension_control_path = '/usr/local/share/postgresql:/home/my_project/share:$system'
During the initial patch, there was some discussion on which of these
two approaches would be better, and the committed patch was a 50/50
decision. But the support for the "directory" setting pushed it the
other way, and also it seems like many people didn't like the previous
behavior much.
Author: Matheus Alcantara <mths.dev@pm.me>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: David E. Wheeler <david@justatheory.com>
Discussion: https://www.postgresql.org/message-id/flat/aAi1VACxhjMhjFnb%40msg.df7cb.de#0cdf7b7d727cc593b029650daa3c4fbc
SQL "SET search_path = 'pg_catalog, pg_temp'" is silently equivalent to
"SET search_path = pg_temp, pg_catalog, "pg_catalog, pg_temp"" instead
of the intended "SET search_path = pg_catalog, pg_temp". (The intent
was a two-element search path. With the single quotes, it instead
specifies one element with a comma and a space in the middle of the
element.) In addition to the SET statement, this affects SET clauses of
CREATE FUNCTION, ALTER ROLE, and ALTER DATABASE. It does not affect the
set_config() SQL function.
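To illustrate the difference (safe to run in a scratch session):

    -- Two schemas, as intended:
    SET search_path = pg_catalog, pg_temp;
    -- One schema whose name contains a comma and a space, silently resolved
    -- as pg_temp, pg_catalog, "pg_catalog, pg_temp":
    SET search_path = 'pg_catalog, pg_temp';
    SHOW search_path;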
Though the documentation did not show an insecure command, remove single
quotes that could entice a reader to write an insecure command.
Back-patch to v13 (all supported versions).
Reported-by: Sven Klemm <sven@timescale.com>
Author: Sven Klemm <sven@timescale.com>
Backpatch-through: 13
The variable is a bit magical in how it requires "postgresql" or
"pgsql" to be part of the path, and files end up in its "share" and
"lib" subdirectories. So mention all that and show an example of
setting "extension_control_path" and "dynamic_library_path" to use
those locations.
Author: David E. Wheeler <david@justatheory.com>
Reviewed-by: Matheus Alcantara <matheusssilv97@gmail.com>
Reviewed-by: Christoph Berg <myon@debian.org>
Discussion: https://www.postgresql.org/message-id/6B5BF07B-8A21-48E3-858C-1DC22F3A28B4@justatheory.com
The additional packaging footprint of the OAuth Curl dependency, as well
as the existence of libcurl in the address space even if OAuth isn't
ever used by a client, has raised some concerns. Split off this
dependency into a separate loadable module called libpq-oauth.
When configured using --with-libcurl, libpq.so searches for this new
module via dlopen(). End users may choose not to install the libpq-oauth
module, in which case the default flow is disabled.
For static applications using libpq.a, the libpq-oauth staticlib is a
mandatory link-time dependency for --with-libcurl builds. libpq.pc has
been updated accordingly.
The default flow relies on some libpq internals. Some of these can be
safely duplicated (such as the SIGPIPE handlers), but others need to be
shared between libpq and libpq-oauth for thread-safety. To avoid
exporting these internals to all libpq clients forever, these
dependencies are instead injected from the libpq side via an
initialization function. This also lets libpq communicate the offsets of
PGconn struct members to libpq-oauth, so that we can function without
crashing if the module on the search path came from a different build of
Postgres. (A minor-version upgrade could swap the libpq-oauth module out
from under a long-running libpq client before it does its first load of
the OAuth flow.)
This ABI is considered "private". The module has no SONAME or version
symlinks, and it's named libpq-oauth-<major>.so to avoid mixing and
matching across Postgres versions. (Future improvements may promote this
"OAuth flow plugin" to a first-class concept, at which point we would
need a public API to replace this anyway.)
Additionally, NLS support for error messages in b3f0be788a was
incomplete, because the new error macros weren't being scanned by
xgettext. Fix that now.
Per request from Tom Lane and Bruce Momjian. Based on an initial patch
by Daniel Gustafsson, who also contributed docs changes. The "bare"
dlopen() concept came from Thomas Munro. Many people reviewed the design
and implementation; thank you!
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Wolfgang Walther <walther@technowledgy.de>
Discussion: https://postgr.es/m/641687.1742360249%40sss.pgh.pa.us
Add a documentation warning to ts_headline() pointing out that, when
working with untrusted input documents, the output is not guaranteed
to be safe for direct inclusion in web pages. This is because, while
it does remove some XML tags from the input, it doesn't remove all
HTML markup, and so the result may be unsafe (e.g., it might permit
XSS attacks).
To guard against that, all HTML markup should be removed from the
input, making it plain text, or the output should be passed through an
HTML sanitizer.
In addition, document precisely what the default text search parser
recognises as valid XML tags, since that's what determines which XML
tags ts_headline() will remove.
Reported-by: Richard Neill <richard.neill@telos.digital>
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Backpatch-through: 13
Further improvement for commit 11bd831860. That commit confused
identity and generated columns; fix that. Also, virtual generated
columns have since been added; add more details about that. Also some
small rewordings and reformattings to further improve clarity.
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/00e6eb5f5c793b8ef722252c7a519c9a@oss.nttdata.com
30a6ed0ce4 has added four attributes to pg_stat_all_tables to track the
cumulative time spent in [auto]vacuum and [auto]analyze. It was not
mentioned that the vacuum cost-based delays are included in these
numbers, which could be confusing now that the delays are included in
the vacuum progress view (bb8dff9995).
This commit adds an extra note about this matter.
Reported-by: Magnus Hagander <magnus@hagander.net>
Author: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Discussion: https://postgr.es/m/CABUevEz9v1ZNToPyD98JnWDGZgG=SmPZKkSNzU9hXQ-nGTQF0g@mail.gmail.com
Since pg_upgrade does not transfer the cumulative statistics used
to trigger autovacuum and autoanalyze, the server may take much
longer than expected to process them post-upgrade. Currently, we
recommend analyzing only relations for which optimizer statistics
were not transferred by using the --analyze-in-stages and
--missing-stats-only options. This commit appends another
recommendation to analyze all relations to update the relevant
cumulative statistics by using the --analyze-only option. This is
similar to the recommendation for pg_stat_reset().
Reported-by: Christoph Berg <myon@debian.org>
Reviewed-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/aAfxfKC82B9NvJDj%40msg.df7cb.de
The current ordering strategy for these pages is to list the short
options in alphabetical order followed by the long options in
alphabetical order. If an option has both a short variant and a
long variant, the short variant takes precedence. This commit
moves a few recently added options to match this style. We should
probably adjust all pages and --help output to list the long and
short options in one combined alphabetical list (with the long
variants taking precedence), but that is a much larger change, so
it is left as a future exercise.
Oversights in commits a5cf808be5, 1fd1bd8710, and bde2fb797a.
Reviewed-by: Álvaro Herrera <alvherre@kurilemu.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/aBFBtsltgu3-IU1d%40nathan
This reverts commit 38da053463, which
attempted to preserve our ability to start with only 60 semaphores.
Subsequent changes (particularly 55b454d0e) have put that idea pretty
much permanently out of reach: people wishing to use Postgres v18 on
OpenBSD or NetBSD will have no choice but to increase those platforms'
default values of SEMMNI and SEMMNS.
Hence, revert 38da05346's changes in SEMAS_PER_SET and the minimum
tested value of max_connections. Adjust a comment from the subsequent
patch 6d0154196, and tweak the wording in runtime.sgml to make it
clear that changing SEMMNI/SEMMNS is no longer even a little bit
optional on these platforms.
Although 38da05346 was later back-patched into v17, leave that branch
alone: it's still capable of starting with 60 semaphores, and there's
no reason to break that.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Discussion: https://postgr.es/m/E1tuZNv-0037Gs-34@gemulon.postgresql.org
Discussion: https://postgr.es/m/1052019.1745947915@sss.pgh.pa.us
Python 3.2 is no longer tested by the buildfarm, and there are only a
handful of buildfarm animals running versions older than 3.6, which
itself went end-of-life in 2021. Python 3.6.8 is the default version
shipped in RHEL8, so that seems like a reasonable baseline for PG18.
Now that we use the Python Limited API as of 0793ab810, older versions
of Python should continue functioning for users of PL/Python in
particular, so soften the language from "required" to "supported".
Wording by Tom Lane. Separate from the review of the patch itself,
several people provided input on the choice of cutoff: Christoph Berg,
Devrim Gündüz, Florents Tselai, Jelte Fennema-Nio, and Renan Alves
Fonseca. Thank you!
Suggested-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/16098.1745079444%40sss.pgh.pa.us
Previously, config.sgml included secondary index terms for
max_replication_slots and max_active_replication_origins. These are
no longer necessary, as each parameter now has a single distinct index entry.
The secondary index terms were originally useful because
max_active_replication_origins was part of max_replication_slots,
and separate index entries helped users locate each setting. However,
commit 04ff636cbc split them into independent parameters,
making the secondary terms redundant.
This commit removes the unnecessary secondary index entries to
simplify the documentation.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Robert Treat <rob@xzilla.net>
Discussion: https://postgr.es/m/e825e7a7-4877-441d-93c1-25377db36c31@oss.nttdata.com
All the injection points used in the tree have relied on an implied
rule: their names should be made of lower-case characters, with dashes
between the words used.
This commit adds a light mention about that in the docs, encouraging the
practice.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/OSCPR01MB14966E14C1378DEE51FB7B7C5F5B32@OSCPR01MB14966.jpnprd01.prod.outlook.com
Backpatch-through: 17
* Use <symbol> tags for CONNECTION_* #defines
We were using an inconsistent mix of <literal> and sometimes <function>
tags.
* Use <application> tag for libpq
There was a mix of <literal> and <productname>.
Also fix a whitespace issue.
None of these seem critical enough mistakes to backpatch.
Author: Noboru Saito <noborusai@gmail.com>
Discussion: https://postgr.es/m/CAAM3qnJtv5YbjpwDfVOYN2gZ9zGSLFM1UGJgptSXmwfifOZJFQ@mail.gmail.com
Commit d9e03864b6 changed the memory context level numbers shown by
pg_log_backend_memory_contexts() to be 1-based. However, the example in
the documentation was not updated and still used 0-based numbering.
This commit updates the example to match the current 1-based output.
Author: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: David Rowley <drowleyml@gmail.com>
Discussion: https://postgr.es/m/1ad6d388-1b43-400d-bec9-36d52f755f74@oss.nttdata.com
Backpatch to 17, where the line was added.
Reported by Noboru Saito while he was working on translating the file
into Japanese.
Discussion: https://postgr.es/m/20250417.203047.1321297410457834775.ishii%40postgresql.org
Reported-by: Noboru Saito <noborusai@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Backpatch-through: 17
Previously, a space was missing between "<option>--exclude-schema</option>"
and "for" in the pg_restore documentation. This commit fixes the typo by
adding the missing whitespace.
Back-patch to v17 where the typo was added.
Author: Lele Gaifax <lele@metapensiero.it>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/87lds3ysm0.fsf@metapensiero.it
Backpatch-through: 17
Per the precedent set by 04539e73f, adjust article prefixes for "SQL" to
use "an" consistently rather than "a", i.e., "an es-que-ell" rather than
"a sequel".
Both of these are new to v18. Also see b1b13d2b5, d866f0374 and
7bdd489d3.
Add a paragraph break per suggestion from David G. Johnston.
Use a consistent voice for all the different parameter
descriptions, and fix a couple of grammatical issues.
Reported-by: Igor Korot <ikorot01@gmail.com>
Co-authored-by: "David G. Johnston" <david.g.johnston@gmail.com>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CA+FnnTz=EW1VQRpWB9J+G-NSchrPFcw4nR7d0JqzEK9jWKB35A@mail.gmail.com
It's been twenty years since we generated constraint names that
look like "$N". So this advice about double-quoting such names
is well past its sell-by date, and now it merely seems confusing.
Reported-by: Yaroslav Saburov <y.saburov@gmail.com>
Author: "David G. Johnston" <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/174393459040.678.17810152410419444783@wrigleys.postgresql.org
Backpatch-through: 13
It's weird to have the core regression tests depending on contrib
code, and coverage testing shows that those test queries add nothing
to the core-code coverage of the core tests. So pull those test bits
out and put them into ordinary test scripts inside contrib/spi/,
making that more like other contrib modules.
Aside from being structurally nicer, anything we can take out of the
core tests (which are executed multiple times per check-world run)
and put into tests executed only once should be a win. It doesn't
look like this change will buy a whole lot of milliseconds, but a
cycle saved is a cycle earned.
Also, there is some discussion around possibly removing refint and/or
autoinc altogether. I don't know if that will happen, but we'd
certainly need to decouple them from the core tests to do so.
The tests for autoinc were quite intertwined with the undocumented
"ttdummy" trigger in regress.c. That made the tests very hard to
understand and contributed nothing to autoinc's testing either.
So I just deleted ttdummy and rewrote the autoinc tests without it.
I realized while doing this that the description of autoinc in
the SGML docs is not a great description of what the function
actually does, so the patch includes some updates to those docs.
Author: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/3872677.1744077559@sss.pgh.pa.us
The page_num was defined as integer, which should be sufficient for the
near future (with 4K pages it's 8TB). But it's virtually free to return
bigint, and get a wider range. This was agreed on the thread, but I
forgot to tweak this in ba2a3c2302.
While at it, make the data types in CREATE VIEW a bit more consistent.
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
It can be set to either COPY (the default) or CLONE if the system
supports it. CLONE causes callers of copydir(), currently CREATE
DATABASE ... STRATEGY=FILE_COPY and ALTER DATABASE ... SET TABLESPACE =
..., to use copy_file_range (Linux, FreeBSD) or copyfile (macOS) to copy
files instead of a read-write loop over the contents.
CLONE gives the kernel the opportunity to share block ranges on
copy-on-write file systems and push copying down to storage on others,
depending on configuration. On some systems CLONE can be used to clone
large databases quickly with CREATE DATABASE ... TEMPLATE=source
STRATEGY=FILE_COPY.
Other operating systems could be supported; patches welcome.
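A sketch of the intended usage, assuming the setting is the file_copy_method
GUC (the parameter name is not spelled out above):

    SET file_copy_method = clone;   -- errors out if the platform lacks support
    CREATE DATABASE clone_db TEMPLATE = source_db STRATEGY = file_copy;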
Co-authored-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Ranier Vilela <ranier.vf@gmail.com>
Discussion: https://postgr.es/m/CA%2BhUKGLM%2Bt%2BSwBU-cHeMUXJCOgBxSHLGZutV5zCwY4qrCcE02w%40mail.gmail.com
This adds a function for retrieving memory context statistics
and information from backends as well as auxiliary processes.
The intended use case is cluster debugging when under memory
pressure or with unanticipated memory usage characteristics.
When calling the function it sends a signal to the specified
process to submit statistics regarding its memory contexts
into dynamic shared memory. Each memory context is returned
in detail, followed by a cumulative total in case the number
of contexts exceeds the maximum allocated amount of shared memory.
Each process is limited to using at most 1MB of memory for this.
A summary can also be explicitly requested by the user; this
will return the TopMemoryContext and a cumulative total of
all lower contexts.
In order to not block on busy processes the caller specifies
the number of seconds during which to retry before timing out.
In the case where no statistics are published within the set
timeout, the last known statistics are returned, or NULL if
no previously published statistics exist. This allows
dashboard-type queries to continually publish even if the target
process is temporarily congested. Context records contain a
timestamp to indicate when they were submitted.
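A sketch of the intended usage, assuming the SQL-callable entry point is
pg_get_process_memory_contexts(pid, summary, timeout) and that the result
includes name, level and total_bytes columns; verify against the actual
signature in your build:

    SELECT name, level, total_bytes
    FROM pg_get_process_memory_contexts(
           (SELECT pid FROM pg_stat_activity
             WHERE backend_type = 'checkpointer'),
           true,    -- summary: TopMemoryContext plus a cumulative total
           5.0);    -- retry for up to five seconds before giving up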
Author: Rahila Syed <rahilasyed90@gmail.com>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Reviewed-by: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Discussion: https://postgr.es/m/CAH2L28v8mc9HDt8QoSJ8TRmKau_8FM_HKS41NeO9-6ZAkuZKXw@mail.gmail.com
In addition to the added functions, the pg_buffercache_evict() function now
shows whether the buffer was flushed.
pg_buffercache_evict_relation(): Evicts all shared buffers in a
relation at once.
pg_buffercache_evict_all(): Evicts all shared buffers at once.
Both functions provide a mechanism to evict multiple shared buffers at
once. They are designed to address the inefficiency of repeatedly calling
pg_buffercache_evict() for each individual buffer, which can be time-consuming
when dealing with large shared buffer pools. (e.g., ~477ms vs. ~2576ms for
16GB of fully populated shared buffers).
These functions are intended for developer testing and debugging
purposes and are available to superusers only.
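Typical superuser-only usage looks roughly like this (relation name
hypothetical; result columns are as reported by the functions):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    SELECT * FROM pg_buffercache_evict_relation('my_table'::regclass);
    SELECT * FROM pg_buffercache_evict_all();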
Minimal tests for the new functions are included. Also, there was no test for
pg_buffercache_evict(), so a test for it has been added too.
No new extension version is needed, as it was already increased this release
by ba2a3c2302.
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Aidar Imamov <a.imamov@postgrespro.ru>
Reviewed-by: Joseph Koshakow <koshy44@gmail.com>
Discussion: https://postgr.es/m/CAN55FZ0h_YoSqqutxV6DES1RW8ig6wcA8CR9rJk358YRMxZFmw%40mail.gmail.com
A PG 17 optimization allowed columns with NOT NULL constraints to skip
table scans for IS NULL queries, and to skip IS NOT NULL checks for IS
NOT NULL queries. This didn't work for domain types, since domain types
don't follow the IS NULL/IS NOT NULL constraint logic. To fix, disable
this optimization for domains for PG 17+.
Reported-by: Jan Behrens
Diagnosed-by: Tom Lane
Discussion: https://postgr.es/m/Z37p0paENWWUarj-@momjian.us
Backpatch-through: 17
Introduces a new view pg_buffercache_numa, showing NUMA memory nodes
for individual buffers. For each buffer the view returns an entry for
each memory page, with the associated NUMA node.
The database blocks and OS memory pages may have different sizes - the
default block size is 8KB, while the memory page is 4KB (on x86). But
other combinations are possible, depending on configure parameters,
platform, etc. This means buffers may overlap with multiple memory
pages, each associated with a different NUMA node.
To determine the NUMA node for a buffer, we first need to touch the
memory pages using pg_numa_touch_mem_if_required, otherwise we might get
status -2 (ENOENT = The page is not present), indicating the page is
either unmapped or unallocated.
The view may be relatively expensive, especially when accessed for the
first time in a backend, as it touches all memory pages to get reliable
information about the NUMA node. This may also force allocation of the
shared memory.
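For example, the view can be aggregated per node (column names assumed from
the description above; verify against the installed view):

    CREATE EXTENSION IF NOT EXISTS pg_buffercache;
    SELECT numa_node, count(*) AS os_pages
      FROM pg_buffercache_numa
     GROUP BY numa_node
     ORDER BY numa_node;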
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
Introduce a new pg_shmem_allocations_numa view with information about how
shared memory is distributed across NUMA nodes. For each shared memory
segment, the view returns one row for each NUMA node backing it, with
the total amount of memory allocated from that node.
The view may be relatively expensive, especially when executed for the
first time in a backend, as it has to touch all memory pages to get
reliable information about the NUMA node. This may also force allocation
of the shared memory.
Unlike pg_shmem_allocations, the view does not show anonymous shared
memory allocations. It also does not show memory allocated using the
dynamic shared memory infrastructure.
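A sample query against the new view (column names assumed from the
description above):

    SELECT numa_node, pg_size_pretty(sum(size)) AS allocated
      FROM pg_shmem_allocations_numa
     GROUP BY numa_node
     ORDER BY numa_node;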
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
Add basic NUMA awareness routines, using a minimal src/port/pg_numa.c
portability wrapper and an optional build dependency, enabled by
--with-libnuma configure option. For now this is Linux-only, other
platforms may be supported later.
A built-in SQL function pg_numa_available() allows checking NUMA
support, i.e. that the server was built/linked with the NUMA library.
The main function introduced is pg_numa_query_pages(), which allows
determining the NUMA node for individual memory pages. Internally the
function uses move_pages(2) syscall, as it allows batching, and is more
efficient than get_mempolicy(2).
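For example, before querying the NUMA views one can check:

    SELECT pg_numa_available();  -- false if the server was built without libnuma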
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Co-authored-by: Bertrand Drouvot <bertranddrouvot.pg@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Tomas Vondra <tomas@vondra.me>
Discussion: https://postgr.es/m/CAKZiRmxh6KWo0aqRqvmcoaX2jUxZYb4kGp3N%3Dq1w%2BDiH-696Xw%40mail.gmail.com
check_foreign_key incorrectly used a single cache entry for its saved
plans for a 'c' (cascade) trigger, although there are two different
queries to execute depending on whether it fires for an update or a
delete. This caused the wrong things to be done if both types of
event occur in one session. (This was indeed visible in the triggers
regression test, but apparently nobody ever questioned it.) To fix,
add the operation type to the cache key.
Its debug log output failed to distinguish update from delete
events, too.
Also, change the intended trigger usage from BEFORE ROW to AFTER ROW,
and add checks insisting on that usage. BEFORE is really rather
unsafe, since if there are other BEFORE triggers they might change or
cancel the operation we are trying to check. AFTER triggers are the
standard way to propagate changes to other rows, so we should follow
that way here.
In passing, remove a useless duplicate lookup of the cache entry.
This code is mostly intended as a documentation example, so we
won't consider a back-patch.
Author: Dmitrii Bondar <d.bondar@postgrespro.ru>
Reviewed-by: Paul Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Lilian Ontowhee <ontowhee@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/79755a2b18ed4fe5e29da6a87a1e00d1@postgrespro.ru
This allows them to be added without scanning the table, and validated
afterwards without holding an access exclusive lock on the table, after
any violating rows have been deleted or fixed.
Doing ALTER TABLE ... SET NOT NULL for a column that has an invalid
not-null constraint validates that constraint. ALTER TABLE .. VALIDATE
CONSTRAINT is also supported. There are various checks on whether an
invalid constraint is allowed in a child table when the parent table has
a valid constraint; this should match what we do for enforced/not
enforced constraints.
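A sketch of the resulting workflow (table, column and constraint names
hypothetical; the ADD CONSTRAINT ... NOT NULL column syntax is assumed from
the description above):

    ALTER TABLE orders
      ADD CONSTRAINT orders_qty_not_null NOT NULL qty NOT VALID;  -- no table scan
    -- fix or delete rows where qty IS NULL, then either:
    ALTER TABLE orders VALIDATE CONSTRAINT orders_qty_not_null;
    -- or, equivalently per the above:
    ALTER TABLE orders ALTER COLUMN qty SET NOT NULL;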
pg_attribute.attnotnull is now only an indicator for whether a not-null
constraint exists for the column; whether it's valid or invalid must be
queried in pg_constraint. Applications can continue to query
pg_attribute.attnotnull as before, but now it's possible that NULL rows
are present in the column even when that's set to true.
For backend internal purposes, we cache the nullability status in
CompactAttribute->attnullability that each tuple descriptor carries
(replacing CompactAttribute.attnotnull, which was a mirror of
Form_pg_attribute.attnotnull). During the initial tuple descriptor
creation, based on the pg_attribute scan, we set this to UNRESTRICTED if
pg_attribute.attnotnull is false, or to UNKNOWN if it's true; then we
update the latter to VALID or INVALID depending on the pg_constraint
scan. This flag is also copied when tupledescs are copied.
Comparing tuple descs for equality must also compare the
CompactAttribute.attnullability flag and return false in case of a
mismatch.
pg_dump deals with these constraints by storing the OIDs of invalid
not-null constraints in a separate array, and running a query to obtain
their properties. The regular table creation SQL omits them entirely.
They are then dealt with in the same way as "separate" CHECK
constraints, and dumped after the data has been loaded. Because no
additional pg_dump infrastructure was required, we don't bump its
version number.
I decided not to bump catversion either, because the old catalog state
works perfectly in the new world. (Trying to run with new catalog state
and the old server version would likely run into issues, however.)
System catalogs do not support invalid not-null constraints (because
commit 14e87ffa5c didn't allow them to have pg_constraint rows
anyway.)
Author: Rushabh Lathia <rushabh.lathia@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Tested-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/CAGPqQf0KitkNack4F5CFkFi-9Dqvp29Ro=EpcWt=4_hs-Rt+bQ@mail.gmail.com
This escape shows the numeric server IP address that the client
has connected to. Unix-socket connections will show "[local]".
Non-client processes (e.g. background processes) will show "[none]".
We expect that this option will be of interest to only a fairly
small number of users. Therefore the implementation is optimized
for the case where it's not used (that is, we don't do the string
conversion until we have to), and we've not added the field to
csvlog or jsonlog formats.
Author: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Cary Huang <cary.huang@highgo.ca>
Reviewed-by: David Steele <david@pgmasters.net>
Reviewed-by: Jim Jones <jim.jones@uni-muenster.de>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/CAKAnmmK-U+UicE-qbNU23K--Q5XTLdM6bj+gbkZBZkjyjrd3Ow@mail.gmail.com
Clarify the project naming in the history section of the docs
to match the recent license preamble changes.
Backpatch to all supported versions.
Author: Dave Page <dpage@pgadmin.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/CA+OCxozLzK2+Jc14XZyWXSp6L9Ot+3efwXUE35FJG=fsbib2EA@mail.gmail.com
Backpatch-through: 13
pg_dumpall acquires a new -F/--format option, with the same meanings as
pg_dump. The default is p, meaning plain text. For any other value, a
directory is created containing two files, globals.dat and map.dat. The
first contains SQL for restoring the global data, and the second
contains a map from oids to database names. It will also contain a
subdirectory called databases, inside which it will create archives in
the specified format, named using the database oids.
In these cases the -f argument is required.
If pg_restore encounters a directory containing globals.dat, and no
toc.dat, it restores the global settings and then restores each
database.
pg_restore acquires two new options: -g/--globals-only which suppresses
restoration of any databases, and --exclude-database which inhibits
restoration of particular database(s) in the same way the same option
works in pg_dumpall.
Author: Mahendra Singh Thalor <mahi6run@gmail.com>
Co-authored-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Srinath Reddy <srinath2133@gmail.com>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/cb103623-8ee6-4ba5-a2c9-f32e3a4933fa@dunslane.net
This new option instructs pg_recvlogical to create the logical
replication slot with the failover option enabled. It can be used in
conjunction with the --create-slot option.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Michael Banck <mbanck@gmx.net>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/OSCPR01MB14966C54097FC83AF19F3516BF5AC2@OSCPR01MB14966.jpnprd01.prod.outlook.com
Teach nbtree multi-column index scans to opportunistically skip over
irrelevant sections of the index given a query with no "=" conditions on
one or more prefix index columns. When nbtree is passed input scan keys
derived from a predicate "WHERE b = 5", new nbtree preprocessing steps
output "WHERE a = ANY(<every possible 'a' value>) AND b = 5" scan keys.
That is, preprocessing generates a "skip array" (and an output scan key)
for the omitted prefix column "a", which makes it safe to mark the scan
key on "b" as required to continue the scan. The scan is therefore able
to repeatedly reposition itself by applying both the "a" and "b" keys.
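For example (hypothetical table and index), the query below now gets a
required key on the second index column even though the first column is
unconstrained:

    CREATE TABLE sales (region_id int, sold_on date, amount numeric);
    CREATE INDEX sales_region_date_idx ON sales (region_id, sold_on);
    -- No "=" condition on region_id: preprocessing adds a skip array, behaving
    -- as if region_id = ANY(<every region_id>) AND sold_on = '2025-01-01'.
    EXPLAIN SELECT * FROM sales WHERE sold_on = '2025-01-01';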
A skip array has "elements" that are generated procedurally and on
demand, but otherwise works just like a regular ScalarArrayOp array.
Preprocessing can freely add a skip array before or after any input
ScalarArrayOp arrays. Index scans with a skip array decide when and
where to reposition the scan using the same approach as any other scan
with array keys. This design builds on the design for array advancement
and primitive scan scheduling added to Postgres 17 by commit 5bf748b8.
Testing has shown that skip scans of an index with a low cardinality
skipped prefix column can be multiple orders of magnitude faster than an
equivalent full index scan (or sequential scan). In general, the
cardinality of the scan's skipped column(s) limits the number of leaf
pages that can be skipped over.
The core B-Tree operator classes on most discrete types generate their
array elements with the help of their own custom skip support routine.
This infrastructure gives nbtree a way to generate the next required
array element by incrementing (or decrementing) the current array value.
It can reduce the number of index descents in cases where the next
possible indexable value frequently turns out to be the next value
stored in the index. Opclasses that lack a skip support routine fall
back on having nbtree "increment" (or "decrement") a skip array's
current element by setting the NEXT (or PRIOR) scan key flag, without
directly changing the scan key's sk_argument. These sentinel values
behave just like any other value from an array -- though they can never
locate equal index tuples (they can only locate the next group of index
tuples containing the next set of non-sentinel values that the scan's
arrays need to advance to).
A skip array's range is constrained by "contradictory" inequality keys.
For example, a skip array on "x" will only generate the values 1 and 2
given a qual such as "WHERE x BETWEEN 1 AND 2 AND y = 66". Such a skip
array qual usually has near-identical performance characteristics to a
comparable SAOP qual "WHERE x = ANY('{1, 2}') AND y = 66". However,
improved performance isn't guaranteed. Much depends on physical index
characteristics.
B-Tree preprocessing is optimistic about skipping working out: it
applies static, generic rules when determining where to generate skip
arrays, which assumes that the runtime overhead of maintaining skip
arrays will pay for itself -- or lead to only a modest performance loss.
As things stand, these assumptions are much too optimistic: skip array
maintenance will lead to unacceptable regressions with unsympathetic
queries (queries whose scan can't skip over many irrelevant leaf pages).
An upcoming commit will address the problems in this area by enhancing
_bt_readpage's approach to saving cycles on scan key evaluation, making
it work in a way that directly considers the needs of = array keys
(particularly = skip array keys).
Author: Peter Geoghegan <pg@bowt.ie>
Reviewed-By: Masahiro Ikeda <masahiro.ikeda@nttdata.com>
Reviewed-By: Heikki Linnakangas <heikki.linnakangas@iki.fi>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-By: Tomas Vondra <tomas@vondra.me>
Reviewed-By: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-By: Alena Rybakina <a.rybakina@postgrespro.ru>
Discussion: https://postgr.es/m/CAH2-Wzmn1YsLzOGgjAQZdn1STSG_y8qP__vggTaPAYXJP+G4bw@mail.gmail.com
Previously, "COPY table TO" command worked only with plain tables and
did not support materialized views, even when they were populated and
had physical storage. To copy rows from materialized views,
"COPY (query) TO" command had to be used, instead.
This commit extends "COPY table TO" to support populated materialized
views directly, improving usability and performance, as "COPY table TO"
is generally faster than "COPY (query) TO". Note that copying from
unpopulated materialized views will still result in an error.
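For example (hypothetical relations):

    CREATE TABLE orders (id int, ordered_at timestamptz);
    CREATE MATERIALIZED VIEW recent_orders AS
        SELECT * FROM orders WHERE ordered_at > now() - interval '7 days';
    -- Now allowed directly; previously this required COPY (SELECT ...) TO ...
    COPY recent_orders TO STDOUT;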
Author: jian he <jian.universality@gmail.com>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/CACJufxHVxnyRYy67hiPePNCPwVBMzhTQ6FaL9_Te5On9udG=yg@mail.gmail.com
Previously, ALTER DEFAULT PRIVILEGES did not support large objects.
This meant that to grant privileges to users other than the owner,
permissions had to be manually assigned each time a large object
was created, which was inconvenient.
This commit extends ALTER DEFAULT PRIVILEGES to allow defining default
access privileges for large objects. With this change, specified privileges
will automatically apply to newly created large objects, making privilege
management more efficient.
As a side effect, this commit introduces the new keyword OBJECTS
since it's used in the syntax of ALTER DEFAULT PRIVILEGES.
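A sketch of the new capability (role names hypothetical; the LARGE OBJECTS
target follows from the new OBJECTS keyword described above):

    ALTER DEFAULT PRIVILEGES FOR ROLE app_owner
        GRANT SELECT, UPDATE ON LARGE OBJECTS TO report_reader;
    -- Large objects subsequently created by app_owner automatically grant
    -- SELECT and UPDATE to report_reader.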
Original patch by Haruka Takatsuka, with some fixes and tests by Yugo Nagata,
and rebased by Laurenz Albe.
Author: Takatsuka Haruka <harukat@sraoss.co.jp>
Co-authored-by: Yugo Nagata <nagata@sraoss.co.jp>
Co-authored-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Fujii Masao <masao.fujii@gmail.com>
Discussion: https://postgr.es/m/20240424115242.236b499b2bed5b7a27f7a418@sraoss.co.jp
The documentation for the special value "system" for sslrootcert could
be misinterpreted to mean the default operating system CA store, which
it may be, but it is defined as the default CA store of the SSL library
in use.
Backpatch down to v16 where support for the system value was added.
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: George MacKerron <george@mackerron.co.uk>
Discussion: https://postgr.es/m/B3CBBAA3-6EA3-4AB7-8619-4BBFAB93DDB4@yesql.se
Backpatch-through: 16
Previously, invalidated logical and physical replication slots could
be copied using the pg_copy_logical_replication_slot and
pg_copy_physical_replication_slot functions. Replication slots that
were invalidated for reasons other than WAL removal retained their
restart_lsn. This meant that a new slot copied from an invalidated
slot could have a restart_lsn pointing to a WAL segment that might
have already been removed.
This commit restricts the copying of invalidated replication slots.
Backpatch to v16, where slots could retain their restart_lsn when
invalidated for reasons other than WAL removal.
For v15 and earlier, this check is not required since slots can only
be invalidated due to WAL removal, and existing checks already handle
this issue.
Author: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Discussion: https://postgr.es/m/CANhcyEU65aH0VYnLiu%3DOhNNxhnhNhwcXBeT-jvRe1OiJTo_Ayg%40mail.gmail.com
Backpatch-through: 16
This adds a new connection parameter which instructs libpq to
write out key material client-side into a file in order to make
connection debugging with Wireshark and similar tools possible.
The file format used is the standardized NSS format.
Author: Abhishek Chanda <abhishek.becs@gmail.com>
Co-authored-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://postgr.es/m/CAKiP-K85C8uQbzXKWf5wHQPkuygGUGcufke713iHmYWOe9q2dA@mail.gmail.com
This enables sortsupport in the btree_gist extension for faster builds
of gist indexes.
The sorted gist index build strategy is now the default. Regression
tests are unchanged (except for one small change in the 'enum' test to
add coverage for enum values added later) and now use the sorted
build strategy.
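For illustration, a minimal sketch, assuming the extension is installed
(table and index names are made up):
CREATE EXTENSION btree_gist;
CREATE TABLE events (id int);
-- With sortsupport available, this build can use the sorted strategy by
-- default instead of inserting tuples one by one:
CREATE INDEX events_id_idx ON events USING gist (id);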
One version of this was committed a long time ago already, in commit
9f984ba6d2, but it was quickly reverted because of buildfarm
failures. The failures were presumably caused by some small bugs, but
we never got around to debugging them and committing it again. This patch was
written from scratch, implementing the same idea, with some fragments
and ideas from the original patch.
Author: Bernd Helmle <mailings@oopsware.de>
Author: Andrey Borodin <x4mmm@yandex-team.ru>
Discussion: https://www.postgresql.org/message-id/64d324ce2a6d535d3f0f3baeeea7b25beff82ce4.camel@oopsware.de
The issue is that the transactions prepared before two-phase decoding is
enabled can fail to replicate to the subscriber after being committed on a
promoted standby following a failover. This is because the two_phase_at
field of a slot, which tracks the LSN from which two-phase decoding
starts, is not synchronized to standby servers. Without two_phase_at, the
logical decoding might incorrectly identify prepared transactions as
already replicated to the subscriber after promotion of the standby server,
causing them to be skipped.
To address the issue on HEAD, the two_phase_at field of the slot is
exposed by the pg_replication_slots view and allows the slot
synchronization to copy this value to the corresponding synced slot on the
standby server.
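For example, the synchronized value can now be inspected with a query
like the following:
SELECT slot_name, two_phase, two_phase_at
  FROM pg_replication_slots
 WHERE slot_type = 'logical';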
This bug is likely to occur if the user toggles the two_phase option to
true after initial slot creation. Given that altering the two_phase option
of a replication slot is not allowed in PostgreSQL 17, this bug is less
likely to occur. We can't change the view/function definition in
back-branches, so we can't push the same fix there, but we are
brainstorming an appropriate solution for PG17.
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Masahiko Sawada <sawada.mshk@gmail.com>
Discussion: https://postgr.es/m/TYAPR01MB5724CC7C288535BBCEEE65DA94A72@TYAPR01MB5724.jpnprd01.prod.outlook.com
In the historical implementation of SQL functions (if they don't get
inlined), we built plans for all the contained queries at first call
within an outer query, and then re-used those plans for the duration
of the outer query, and then forgot everything. This was not ideal,
not least because the plans could not be customized to specific values
of the function's parameters. Our plancache infrastructure seems
mature enough to be used here. That will solve both the problem with
not being able to build custom plans and the problem with not being
able to share work across successive outer queries.
Aside from those performance concerns, this change fixes a
longstanding bugaboo with SQL functions: you could not write DDL that
would affect later statements in the same function. That's mostly
still true with new-style SQL functions, since the results of parse
analysis are baked into the stored query trees (and protected by
dependency records). But for old-style SQL functions, it will now
work much as it does with PL/pgSQL functions, because we delay parse
analysis and planning of each query until we're ready to run it.
Some edge cases that require replanning are now handled better too;
see for example the new rowsecurity test, where we now detect an RLS
context change that was previously missed.
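A hypothetical sketch of the kind of thing that now works for an
old-style SQL function (object names are made up; previously the INSERT
failed because all queries were planned up front at first call):
CREATE TABLE audit (t text);
SET check_function_bodies = off;  -- body references a column added at run time
CREATE FUNCTION add_and_use_column() RETURNS text LANGUAGE sql AS $$
  ALTER TABLE audit ADD COLUMN added int;
  INSERT INTO audit (t, added) VALUES ('x', 1);  -- analyzed only when reached
  SELECT t FROM audit LIMIT 1;
$$;
SELECT add_and_use_column();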
One other edge-case change that might be worthy of a release note
is that we now insist that a SQL function's result be generated
by the physically-last query within it. Previously, if the last
original query was deleted by a DO INSTEAD NOTHING rule, we'd be
willing to take the result from the preceding query instead.
This behavior was undocumented except in source-code comments,
and it seems hard to believe that anyone's relying on it.
Along the way to this feature, we needed a few infrastructure changes:
* The plancache can now take either a raw parse tree or an
analyzed-but-not-rewritten Query as the starting point for a
CachedPlanSource. If given a Query, it is caller's responsibility
that nothing will happen to invalidate that form of the query.
We use this for new-style SQL functions, where what's in pg_proc is
serialized Query(s) and we trust the dependency mechanism to disallow
DDL that would break those.
* The plancache now offers a way to invoke a post-rewrite callback
to examine/modify the rewritten parse tree when it is rebuilding
the parse trees after a cache invalidation. We need this because
SQL functions sometimes adjust the parse tree to make its output
exactly match the declared result type; if the plan gets rebuilt,
that has to be re-done.
* There is a new backend module utils/cache/funccache.c that
abstracts the idea of caching data about a specific function
usage (a particular function and set of input data types).
The code in it is moved almost verbatim from PL/pgSQL, which
has done that for a long time. We use that logic now for
SQL-language functions too, and maybe other PLs will have use
for it in the future.
Author: Alexander Pyhalov <a.pyhalov@postgrespro.ru>
Co-authored-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com>
Discussion: https://postgr.es/m/8216639.NyiUUSuA9g@aivenlaptop
Move the discussion on protocol versions and version negotiation to a
new "Protocol versions" section. Add a table listing all the different
protocol versions, starting from the obsolete protocol version 2, and
the PostgreSQL versions that support each.
Discussion: https://www.postgresql.org/message-id/69f53970-1d55-4165-9151-6fb524e36af9@iki.fi
Currently, the cancel request key is a 32-bit token, which isn't very
much entropy. If you want to cancel another session's query, you can
brute-force it. In most environments, an unauthorized cancellation of
a query isn't very serious, but it nevertheless would be nice to have
more protection from it. Hence make the key longer, to make it harder
to guess.
The longer cancellation keys are generated when using the new protocol
version 3.2. For connections using version 3.0, short 4-byte keys are
still used.
The new, longer key length is no longer hardcoded in the protocol;
the client is expected to deal with variable-length keys of up to 256
bytes. This flexibility allows e.g. a connection pooler to add more
information to the cancel key, which might be useful for finding the
connection.
Reviewed-by: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Discussion: https://www.postgresql.org/message-id/508d0505-8b7a-4864-a681-e7e5edfe32aa@iki.fi
All supported versions of the PostgreSQL server send the
NegotiateProtocolVersion message when an unsupported minor protocol
version is requested by a client. But many other applications that
implement the PostgreSQL protocol (connection poolers, or other
databases) do not, and the same is true for PostgreSQL server versions
older than 9.3. Connecting to such other applications thus fails if a
client requests a protocol version different than 3.0.
This patch adds a max_protocol_version connection option to libpq that
specifies the protocol version that libpq should request from the
server. Currently only 3.0 is supported, but that will change in a
future commit that bumps the protocol version. Even after that version
bump the default will likely stay 3.0 for the time being. Once more of
the ecosystem supports the NegotiateProtocolVersion message we might
want to change the default to the latest minor version.
This also adds the similar min_protocol_version connection option, to
allow the client to specify that connecting should fail if a lower
protocol version is attempted by the server. This can be used to
ensure that certain protocol features are used, which can be
particularly useful if those features impact security.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier versions)
Discussion: https://www.postgresql.org/message-id/CAGECzQTfc_O%2BHXqAo5_-xG4r3EFVsTefUeQzSvhEyyLDba-O9w@mail.gmail.com
Discussion: https://www.postgresql.org/message-id/CAGECzQRbAGqJnnJJxTdKewTsNOovUt4bsx3NFfofz3m2j-t7tA@mail.gmail.com
The reasoning for why all the message formats are parseable without
the explicit message length field is anachronistic; the real reason is
that protocol version 2 did not have a message length field. There's
nothing wrong with relying on the message length, like we do in the
CopyData messages, even though it often still makes sense to have
length fields for individual parts in messages.
Discussion: https://www.postgresql.org/message-id/02a4eed2-98f0-4796-9d4f-12128ff44fe0@iki.fi
This expands the NOT ENFORCED constraint flag, previously only
supported for CHECK constraints (commit ca87c415e2), to foreign key
constraints.
Normally, when a foreign key constraint is created on a table, action
and check triggers are added to maintain data integrity. With this
patch, if a constraint is marked as NOT ENFORCED, integrity checks are
no longer required, making these triggers unnecessary. Consequently,
when creating a NOT ENFORCED foreign key constraint, triggers will not
be created, and the constraint will be marked as NOT VALID.
Similarly, if an existing foreign key constraint is changed to NOT
ENFORCED, the associated triggers will be dropped, and the constraint
will also be marked as NOT VALID. Conversely, if a NOT ENFORCED
foreign key constraint is changed to ENFORCED, the necessary triggers
will be created, and the constraint will be changed to VALID by performing the
necessary validation.
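A minimal sketch (object names are made up; the constraint name follows
the default naming convention):
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (
    parent_id int REFERENCES parent (id) NOT ENFORCED  -- no triggers, marked NOT VALID
);
-- Switching to ENFORCED creates the triggers and validates existing rows:
ALTER TABLE child ALTER CONSTRAINT child_parent_id_fkey ENFORCED;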
Since not-enforced foreign key constraints have no triggers, the
shortcut used for example in psql and pg_dump to skip looking for
foreign keys if the relation is known not to have triggers no longer
applies. (It already didn't work for partitioned tables.)
Author: Amul Sul <sulamul@gmail.com>
Reviewed-by: Joel Jacobson <joel@compiler.org>
Reviewed-by: Andrew Dunstan <andrew@dunslane.net>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Isaac Morland <isaac.morland@gmail.com>
Reviewed-by: Alexandra Wang <alexandra.wang.oss@gmail.com>
Tested-by: Triveni N <triveni.n@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAAJ_b962c5AcYW9KUt_R_ER5qs3fUGbe4az-SP-vuwPS-w-AGA@mail.gmail.com
Even after reaching the minimum recovery point, if there are long-lived
write transactions with more than 64 subtransactions on the primary, the recovery
snapshot may not yet be ready for hot standby, delaying read-only
connections on the standby. Previously, when read-only connections were
not accepted due to this condition, the following error message was logged:
FATAL: the database system is not yet accepting connections
DETAIL: Consistent recovery state has not been yet reached.
This DETAIL message was misleading because the following message was
already logged in this case:
LOG: consistent recovery state reached
This contradiction, i.e., indicating that the recovery state was consistent
while also stating it wasn’t, caused confusion.
This commit improves the error message to better reflect the actual state:
FATAL: the database system is not yet accepting connections
DETAIL: Recovery snapshot is not yet ready for hot standby.
HINT: To enable hot standby, close write transactions with more than 64 subtransactions on the primary server.
To implement this, the commit introduces a new postmaster signal,
PMSIGNAL_RECOVERY_CONSISTENT. When the startup process reaches
a consistent recovery state, it sends this signal to the postmaster,
allowing it to correctly recognize that state.
Since this is not a clear bug, the change is applied only to the master
branch and is not back-patched.
Author: Atsushi Torikoshi <torikoshia@oss.nttdata.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Yugo Nagata <nagata@sraoss.co.jp>
Discussion: https://postgr.es/m/02db8cd8e1f527a8b999b94a4bee3165@oss.nttdata.com
Create a function that will sort the elements of an array
according to the element type's sort order. If the array
has more than one dimension, the sub-arrays of the first
dimension are sorted per normal array-comparison rules,
leaving their contents alone.
In support of this, add pg_type.typarray to the set of fields
cached by the typcache.
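A minimal sketch, assuming the function is exposed as array_sort():
SELECT array_sort(ARRAY[3, 1, 2]);         -- {1,2,3}
SELECT array_sort(ARRAY[[3, 4], [1, 2]]);  -- {{1,2},{3,4}}: sub-arrays reordered, contents untouched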
Author: Junwang Zhao <zhjwpku@gmail.com>
Co-authored-by: Jian He <jian.universality@gmail.com>
Reviewed-by: Aleksander Alekseev <aleksander@timescale.com>
Discussion: https://postgr.es/m/CAEG8a3J41a4dpw_-F94fF-JPRXYxw-GfsgoGotKcjs9LVfEEvw@mail.gmail.com
--copy-file-range and --swap were not mentioned in a few places
that discuss the available file transfer modes. This entire page
would likely benefit from an overhaul, but that's v19 material at
this point.
Oversights in commits d93627bcbe and 626d7236b6.
These are fairly basic, but better than nothing. While there are several
opportunities to link to these entries, this patch does not add any. They
will, however, be referenced by future patches.
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/20250326183102.92.nmisch@google.com
hot_standby_feedback mechanics assume that clocks are synchronized,
but this was not clear from the documentation.
Author: Jakub Wartak <jakub.wartak@enterprisedb.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Discussion: https://postgr.es/m/CAKZiRmwBcALLrDgCyEhHP1enUxtPMjyNM_d1A2Lng3_6Rf4Qfw%40mail.gmail.com
Previously effective_io_concurrency and maintenance_io_concurrency could not
be set above 0 on machines without fadvise support. AIO enables IO concurrency
without such support, via io_method=worker.
Currently only subsystems using the read stream API will take advantage of
this. Other users of maintenance_io_concurrency (like recovery prefetching)
which leverage OS advice directly will not benefit from this change. In those
cases, maintenance_io_concurrency will have no effect on I/O behavior.
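For example, on a build without fadvise support, settings like the
following are now accepted when io_method=worker supplies the concurrency
(the values are arbitrary):
SET effective_io_concurrency = 16;
SET maintenance_io_concurrency = 16;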
Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/CAAKRu_atGgZePo=_g6T3cNtfMf0QxpvoUh5OUqa_cnPdhLd=gw@mail.gmail.com
With AIO it does not make sense anymore to track the time for each individual
IO, as multiple IOs can be in-flight at the same time. Instead we now track
the time spent *waiting* for IOs.
This should be reflected in the docs. While, so far, we only do a subset of
reads, and no other operations, via AIO, describing the GUC and view columns
as measuring IO waits is accurate for synchronous and asynchronous IO.
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/5dzyoduxlvfg55oqtjyjehez5uoq6hnwgzor4kkybkfdgkj7ag@rbi4gsmzaczk
Adds a new function that validates two kinds of invariants on a GIN index:
- parent-child consistency: Paths in a GIN graph have to contain
consistent keys. Tuples on parent pages consistently include tuples
from child pages; parent tuples do not require any adjustments.
- balanced-tree / graph: Each internal page has at least one downlink,
and can reference either only leaf pages or only internal pages.
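A hypothetical usage sketch (the function name gin_index_check and the
object names are assumptions here):
CREATE EXTENSION amcheck;
CREATE TABLE docs (words text[]);
CREATE INDEX docs_words_idx ON docs USING gin (words);
SELECT gin_index_check('docs_words_idx'::regclass);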
The GIN verification is based on work by Grigory Kryachko, reworked by
Heikki Linnakangas and with various improvements by Andrey Borodin.
Investigation and fixes for multiple bugs by Kirill Reshke.
Author: Grigory Kryachko <GSKryachko@gmail.com>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Author: Andrey Borodin <amborodin@acm.org>
Reviewed-By: José Villanova <jose.arthur@gmail.com>
Reviewed-By: Aleksander Alekseev <aleksander@timescale.com>
Reviewed-By: Nikolay Samokhvalov <samokhvalov@gmail.com>
Reviewed-By: Andres Freund <andres@anarazel.de>
Reviewed-By: Tomas Vondra <tomas.vondra@enterprisedb.com>
Reviewed-By: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-By: Mark Dilger <mark.dilger@enterprisedb.com>
Reviewed-By: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/45AC9B0A-2B45-40EE-B08F-BDCF5739D1E1%40yandex-team.ru
The '--all' option indicates that the tool queries the source server
(publisher) for all databases and creates subscriptions on the target
server (subscriber) for databases with matching names. Without this
option, the user needs to explicitly specify each database by using the
-d option.
This simplifies converting a physical standby to a logical subscriber,
particularly during upgrades.
The options '--database', '--publication', '--subscription', and
'--replication-slot' cannot be used when '--all' is specified.
Author: Shubham Khanna <khannashubham1197@gmail.com>
Reviewed-by: vignesh C <vignesh21@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Euler Taveira <euler@eulerto.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Discussion: https://postgr.es/m/CAHv8RjKhA=_h5vAbozzJ1Opnv=KXYQHQ-fJyaMfqfRqPpnC2bA@mail.gmail.com
Commit 62d712ecfd introduced the capability to calculate the same
queryId for queries with different lengths of constants in a list for an
IN clause. This behavior was originally enabled with a GUC
query_id_squash_values. After a discussion about the value of such a
GUC, it was decided to back out of the use of a GUC and make the
squashing behavior the only available option.
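For example, with pg_stat_statements these two statements now collapse
into a single entry (the table name is hypothetical):
SELECT * FROM t WHERE id IN (1, 2, 3);
SELECT * FROM t WHERE id IN (1, 2, 3, 4, 5, 6, 7, 8);
-- Both normalize to the same queryId, roughly: SELECT * FROM t WHERE id IN (...)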
Author: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/Z-LZyygkkNyA8-kR@msg.df7cb.de
Discussion: https://postgr.es/m/CA+q6zcVTK-3C-8NWV1oY2NZrvtnMCDqnyYYyk1T7WMUG65MeOQ@mail.gmail.com
Commit d45597f72f introduced the ability to change a not-null
constraint from NO INHERIT to INHERIT and vice versa, but we included
the SET noise word in the syntax for it. The SET turns out not to be
necessary and goes against what the SQL standard says for other ALTER
TABLE subcommands, so remove it.
This changes the way this command is processed for constraint types
other than not-null, so there are some error message changes.
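A minimal sketch of the adjusted syntax (the constraint name follows the
default naming convention):
CREATE TABLE p (a int NOT NULL);
-- Previously spelled ... ALTER CONSTRAINT p_a_not_null SET NO INHERIT:
ALTER TABLE p ALTER CONSTRAINT p_a_not_null NO INHERIT;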
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Suraj Kharage <suraj.kharage@enterprisedb.com>
Discussion: https://postgr.es/m/202503251602.vsxaehsyaoac@alvherre.pgsql
The implementation of the FSM for indexes is simpler than the one for heap:
0 is used to track that a page is in use and (BLCKSZ - 1) that a page is free.
One comment in indexfsm.c and one description in the documentation of
pg_freespacemap were incorrect about that.
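For illustration, a minimal sketch of what the corrected description
refers to (the index name is hypothetical):
CREATE EXTENSION pg_freespacemap;
-- For an index, "avail" is either 0 (page in use) or BLCKSZ - 1 (page free):
SELECT * FROM pg_freespace('some_index'::regclass);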
Author: Alex Friedman <alexf01@gmail.com>
Discussion: https://postgr.es/m/71eef655-c192-453f-ac45-2772fec2cb04@gmail.com
Backpatch-through: 13
Performing AIO using io_uring can be considerably faster than
io_method=worker, particularly when lots of small IOs are issued, as
a) the context-switch overhead for worker based AIO becomes more significant
b) the number of IO workers can become limiting
io_uring, however, is linux specific and requires an additional compile-time
dependency (liburing).
This implementation is fairly simple and there are substantial optimization
opportunities.
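A minimal configuration sketch (requires a server built with liburing;
io_method can only be set at server start, so a restart is needed):
ALTER SYSTEM SET io_method = 'io_uring';
-- restart the server for the setting to take effect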
The description of the existing AIO_IO_COMPLETION wait event is updated to
make the difference between it and the new AIO_IO_URING_EXECUTION clearer.
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Jakub Wartak <jakub.wartak@enterprisedb.com>
Discussion: https://postgr.es/m/uvrtrknj4kdytuboidbhwclo4gxhswwcpgadptsjvjqcluzmah%40brqs62irg4dt
Discussion: https://postgr.es/m/20210223100344.llw5an2aklengrmn@alap3.anarazel.de
Discussion: https://postgr.es/m/stj36ea6yyhoxtqkhpieia2z4krnam7qyetc57rfezgk4zgapf@gcnactj4z56m
Ephemeral inconsistencies across multiple attributes of pg_stat_activity
can exist as the system is designed to be efficient with a low overhead.
This question is raised by users from time to time based on the data
read in the view, so let's add a note in the docs about this
possibility.
Author: Alex Friedman <alexf01@gmail.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/8a275154-a654-44b0-ab37-197802f04c7b@gmail.com
It added bogus whitespace at the end of a line in the documentation.
It should not have done that.
The pg_overexplain tests must SET debug_parallel_query = false,
not just RESET debug_parallel_query, or we get failures on test
machines that make debug_parallel_query = true the default.
There's a fair amount of information in the Plan and PlanState trees
that isn't printed by any existing EXPLAIN option. This means that,
when working on the planner, it's often necessary to rely on facilities
such as debug_print_plan, which produce excessively voluminous
output. Hence, use the new EXPLAIN extension facilities to implement
EXPLAIN (DEBUG) and EXPLAIN (RANGE_TABLE) as extensions to the core
EXPLAIN facility.
A great deal more could be done here, and the specific choices about
what to print and how are definitely arguable, but this is at least
a starting point for discussion and a jumping-off point for possible
future improvements.
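A minimal usage sketch, assuming the module is loaded with LOAD (it
could also be preloaded):
LOAD 'pg_overexplain';
EXPLAIN (DEBUG) SELECT 1;
EXPLAIN (RANGE_TABLE) SELECT * FROM pg_class;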
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com> (who didn't like it)
Discussion: http://postgr.es/m/CA+TgmoZfvQUBWQ2P8iO30jywhfEAKyNzMZSR+uc2xr9PZBw6eQ@mail.gmail.com
This macro allows dynamically loaded shared libraries (modules) to
provide a wired-in module name and version, and possibly other
compile-time-constant fields in future. This information can be
retrieved with the new pg_get_loaded_modules() function.
This feature is expected to be particularly useful for modules
that do not have any exposed SQL functionality and thus are
not associated with a SQL-level extension object. But even for
modules that do belong to extensions, being able to verify the
actual code version can be useful.
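The information can then be inspected from SQL; a minimal sketch (the
output column names are assumptions):
SELECT module_name, version, file_name FROM pg_get_loaded_modules();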
Author: Andrei Lepikhov <lepihov@gmail.com>
Reviewed-by: Yurii Rashkovskii <yrashk@omnigres.com>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/dd4d1b59-d0fe-49d5-b28f-1e463b68fa32@gmail.com
This enables SCRAM authentication for dblink (using dblink_fdw) when
connecting to a foreign server without having to store a plain-text
password in user mapping options.
This uses the same approach as was implemented for postgres_fdw in
commit 761c79508e. (It also contains the equivalent of the
subsequent fixes 76563f88cf and d2028e9bbc1.)
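A hypothetical sketch, assuming the option name matches postgres_fdw's
use_scram_passthrough (server and database names are made up):
CREATE SERVER remote FOREIGN DATA WRAPPER dblink_fdw
    OPTIONS (host 'remote.example.com', dbname 'postgres', use_scram_passthrough 'true');
CREATE USER MAPPING FOR CURRENT_USER SERVER remote;  -- no password option needed
SELECT * FROM dblink('remote', 'SELECT 1') AS t(x int);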
Author: Matheus Alcantara <mths.dev@pm.me>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAFY6G8ercA1KES%3DE_0__R9QCTR805TTyYr1No8qF8ZxmMg8z2Q%40mail.gmail.com
These are useful general-purpose math functions which are included in
POSIX and C99, and are commonly included in other math libraries, so
expose them as SQL-callable functions.
Author: Dean Rasheed <dean.a.rasheed@gmail.com>
Reviewed-by: Stepan Neretin <sncfmgg@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Dmitry Koval <d.koval@postgrespro.ru>
Reviewed-by: Alexandra Wang <alexandra.wang.oss@gmail.com>
Discussion: https://postgr.es/m/CAEZATCXpGyfjXCirFk9au+FvM0y2Ah+2-0WSJx7MO368ysNUPA@mail.gmail.com
custom_query_jumble (introduced in 5ac462e2b7 as a node field
attribute) is now assigned to the expanded reference name "eref" of
RangeTblEntry, adding in the query jumble computation the non-qualified
aliased relation name, without the list of column names. The relation
OID is removed from the query jumbling.
The effects of this change can be seen in the tests added by
3430215fe3, where pg_stat_statements (PGSS) entries are now grouped
using the relation name, ignoring the schema that search_path may point at.
For example, these two relations are different, but are now grouped in a
single PGSS entry as they are assigned the same query ID:
CREATE TABLE foo1.tab (a int);
CREATE TABLE foo2.tab (b int);
SET search_path = 'foo1';
SELECT count(*) FROM tab;
SET search_path = 'foo2';
SELECT count(*) FROM tab;
SELECT count(*) FROM foo1.tab;
SELECT count(*) FROM foo2.tab;
SELECT query, calls FROM pg_stat_statements WHERE query ~ 'FROM tab';
query | calls
--------------------------+-------
SELECT count(*) FROM tab | 4
(1 row)
It is still possible to use an alias in the FROM clause to split these.
This behavior is useful for relations re-created with the same name,
where queries based on such relations would be grouped in the same
PGSS entry. For permanent schemas, it should not really matter in
practice. The main benefit is for workloads that use a lot of temporary
relations, which are usually re-created with the same name continuously.
These can be a heavy source of bloat in PGSS depending on the workload.
Such entries can now be grouped together, improving the user experience.
The original idea from Christoph Berg used catalog lookups to find
temporary relations, something that the query jumble has never done, and
it could cause some performance regressions. The idea to use
RangeTblEntry.eref and the relation name, applying the same rules for
all relations, temporary and not temporary, has been proposed by Tom
Lane. The documentation additions have been suggested by Sami Imseih.
Author: Michael Paquier <michael@paquier.xyz>
Co-authored-by: Sami Imseih <samimseih@gmail.com>
Reviewed-by: Christoph Berg <myon@debian.org>
Reviewed-by: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Sami Imseih <samimseih@gmail.com>
Discussion: https://postgr.es/m/Z9iWXKGwkm8RAC93@msg.df7cb.de
By adding the positive variants of options, in addition to the
negative variants that already exist, users can be explicit about what
pg_dump should produce.
Discussion: https://postgr.es/m/bd0513e4b1ea2b2f2d06f02720c6579711cb62a6.camel@j-davis.com
Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
This new option instructs pg_upgrade to move the data directories
from the old cluster to the new cluster and then to replace the
catalog files with those generated for the new cluster. This mode
can outperform --link, --clone, --copy, and --copy-file-range,
especially on clusters with many relations.
However, this mode creates many garbage files in the old cluster,
which can prolong the file synchronization step if
--sync-method=syncfs is used. To handle that, we recommend using
--sync-method=fsync with this mode, and pg_upgrade internally uses
"initdb --sync-only --no-sync-data-files" for file synchronization.
pg_upgrade will synchronize the catalog files as they are
transferred. We assume that the database files transferred from
the old cluster were synchronized prior to upgrade.
This mode also complicates reverting to the old cluster, so we
recommend restoring from backup upon failure during or after file
transfer. We did consider teaching pg_upgrade how to generate a
revert script for such failures, but we decided against it due to
the rarity of failing during file transfer, the complexity of
generating the script, and the potential for misusing the script.
The new mode is limited to clusters located in the same file
system. With some effort, we could probably support upgrades
between different file systems, but this mode is unlikely to offer
much benefit if we have to copy the files across file system
boundaries.
It is also limited to upgrades from version 10 or newer. There are
a few known obstacles for using swap mode to upgrade from older
versions. For example, the visibility map format changed in v9.6,
and the sequence tuple format changed in v10. In fact, swap mode
omits the --sequence-data option in its uses of pg_dump and instead
reuses the old cluster's sequence data files. While teaching swap
mode to deal with these kinds of changes is surely possible (and we
may have to deal with similar problems in the future, anyway), it
doesn't seem worth the effort to support upgrades from
long-unsupported versions.
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/Zyvop-LxLXBLrZil%40nathan
This new option instructs pg_dump to dump sequence data when the
--no-data, --schema-only, or --statistics-only option is specified.
This was originally considered for commit a7e5457db8, but it was
left out at that time because there was no known use-case. A
follow-up commit will use this to optimize pg_upgrade's file
transfer step.
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Discussion: https://postgr.es/m/Zyvop-LxLXBLrZil%40nathan
This new option instructs initdb to skip synchronizing any files
in database directories, the database directories themselves, and
the tablespace directories, i.e., everything in the base/
subdirectory and any other tablespace directories. Other files,
such as those in pg_wal/ and pg_xact/, will still be synchronized
unless --no-sync is also specified. --no-sync-data-files is
primarily intended for internal use by tools that separately ensure
the skipped files are synchronized to disk. A follow-up commit
will use this to help optimize pg_upgrade's file transfer step.
The --sync-method=fsync implementation of this option makes use of
a new exclude_dir parameter for walkdir(). When not NULL,
exclude_dir specifies a directory to skip processing. The
--sync-method=syncfs implementation of this option just skips
synchronizing the non-default tablespace directories. This means
that initdb will still synchronize some or all of the database
files, but there's not much we can do about that.
Discussion: https://postgr.es/m/Zyvop-LxLXBLrZil%40nathan
For import and export, use schemaname/relname rather than
regclass.
This is more natural during export, fits with the other arguments
better, and it gives better control over error handling in case we
need to downgrade more errors to warnings.
Also, use text for the argument types for schemaname, relname, and
attname so that casts to "name" are not required.
Author: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://postgr.es/m/CADkLM=ceOSsx_=oe73QQ-BxUFR2Cwqum7-UP_fPe22DBY0NerA@mail.gmail.com
The default interval for \watch to wait between executing queries,
when executed without a specified interval, was hardcoded to two
seconds. This adds the new variable WATCH_INTERVAL which is used
to set the default interval, making it configurable for the user.
This makes \watch the first command with a user-configurable default
setting.
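A minimal sketch (run from a psql session):
\set WATCH_INTERVAL 5
SELECT now() \watch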
Author: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Kirill Reshke <reshkekirill@gmail.com>
Reviewed-by: Masahiro Ikeda <ikedamsh@oss.nttdata.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Reviewed-by: Greg Sabino Mullane <htamfids@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://postgr.es/m/B2FD26B4-8F64-4552-A603-5CC3DF1C7103@yesql.se
Previously we used pg_int64 in three function prototypes in libpq. It
was added by commit 461ef73f to expose the platform-dependent type used
for int64 in the C89 era. As of commit 962da900 it is defined as
standard int64_t, and the dust seems to have settled.
Let's just use int64_t directly in these three client-facing functions
instead of (yet) another name. We've required C99 and thus <stdint.h>
since PostgreSQL 12, C89 and C++98 compilers are long gone, and client
applications very likely use standard types for their own 64-bit needs.
This also cleans up the obscure placement of a new #include <stdint.h>
directive in postgres_ext.h, required for the new definition. The
typedef was hiding in there for historical reasons, but it doesn't fit
postgres_ext.h's own description of its purpose and there is no evidence
of client applications including postgres_ext.h directly to see it.
Keep a typedef marked deprecated for backward compatibility, but move it
into libpq-fe.h where it was used.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CA%2BhUKGKn_EkNNGMY5RzMcKP%2Ba6urT4JF%3DCPhw_zHtQwjvX6P2g%40mail.gmail.com
Until now max_files_per_process=N limited each backend to open N files in
total (minus a safety factor), even if there were already more files opened in
postmaster and inherited by backends. Change max_files_per_process to control
how many additional files each process is allowed to open.
The main motivation for this is the patch to add io_method=io_uring, which
needs to open one file for each backend. Without this patch, even if
RLIMIT_NOFILE is high enough, postmaster will fail in set_max_safe_fds() if
started with a high max_connections. The cause of the failure is that, until
now, set_max_safe_fds() subtracted the already open files from
max_files_per_process.
Reviewed-by: Noah Misch <noah@leadboat.com>
Discussion: https://postgr.es/m/w6uiicyou7hzq47mbyejubtcyb2rngkkf45fk4q7inue5kfbeo@bbfad3qyubvs
Discussion: https://postgr.es/m/CAGECzQQh6VSy3KG4pN1d=h9J=D1rStFCMR+t7yh_Kwj-g87aLQ@mail.gmail.com
Each pg_recvlogical action requires specific options. For example,
--slot, --dbname, and --file must be specified with the --start action.
Previously, the documentation did not clearly outline these requirements.
This commit updates the documentation to explicitly state
the necessary options for each action.
Author: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Co-authored-by: Fujii Masao <masao.fujii@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: David G. Johnston <david.g.johnston@gmail.com>
Discussion: https://postgr.es/m/OSCPR01MB14966930B4357BAE8C9D68A8AF5C72@OSCPR01MB14966.jpnprd01.prod.outlook.com
SCRAM pass-through should not bypass the FDW security check as it was
implemented for postgres_fdw in commit 761c79508e.
This commit improves the security check by adding new SCRAM
pass-through checks to ensure that the required SCRAM connection
options are not overwritten by the user mapping or foreign server
options. This is meant to match the security requirements for a
password-using connection.
Since libpq has no SCRAM-specific equivalent of
PQconnectionUsedPassword(), we enforce this instead by making the
use_scram_passthrough option of postgres_fdw imply
require_auth=scram-sha-256. This means that if use_scram_passthrough
is set, some situations that might otherwise have worked are
preempted, for example GSSAPI with delegated credentials. This could
be enhanced in the future if there is desire for more flexibility.
Reported-by: Jacob Champion <jacob.champion@enterprisedb.com>
Author: Matheus Alcantara <mths.dev@pm.me>
Co-authored-by: Jacob Champion <jacob.champion@enterprisedb.com>
Reviewed-by: Jacob Champion <jacob.champion@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/CAFY6G8ercA1KES%3DE_0__R9QCTR805TTyYr1No8qF8ZxmMg8z2Q%40mail.gmail.com
Introduce a new conflict type, multiple_unique_conflicts, to handle cases
where an incoming row during logical replication violates multiple UNIQUE
constraints.
Previously, the apply worker detected and reported only the first
encountered key conflict (insert_exists/update_exists), causing repeated
failures, as each constraint violation had to be handled one by one,
making the process slow and error-prone.
With this patch, the apply worker checks all unique constraints upfront
once the first key conflict is detected and reports
multiple_unique_conflicts if multiple violations exist. This allows users
to resolve all conflicts at once by deleting all conflicting tuples rather
than dealing with them individually or skipping the transaction.
In the future, this will also allow us to specify different resolution
handlers for such a conflict type.
Add the stats for this conflict type in pg_stat_subscription_stats.
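The new counter can then be monitored; a hypothetical sketch (the column
name is an assumption following the existing confl_* naming):
SELECT subname, confl_multiple_unique_conflicts
  FROM pg_stat_subscription_stats;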
Author: Nisha Moond <nisha.moond412@gmail.com>
Author: Zhijie Hou <houzj.fnst@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Peter Smith <smithpb2250@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Discussion: https://postgr.es/m/CABdArM7FW-_dnthGkg2s0fy1HhUB8C3ELA0gZX1kkbs1ZZoV3Q@mail.gmail.com