Commit 534287403 invented SHARED_DEPENDENCY_INITACL entries in
pg_shdepend, but installed them only for non-owner roles mentioned
in a pg_init_privs entry. This turns out to be the wrong thing,
because there is nothing to cue REASSIGN OWNED to go and update
pg_init_privs entries when the object's ownership is reassigned.
That leads to leaving dangling entries in pg_init_privs, as
reported by Hannu Krosing. Instead, install INITACL entries for
all roles mentioned in pg_init_privs entries (except pinned roles),
and change ALTER OWNER to not touch them, just as it doesn't
touch pg_init_privs entries.
REASSIGN OWNED will now substitute the new owner OID for the old
in pg_init_privs entries. This feels like perhaps not quite the
right thing, since pg_init_privs ought to be a historical record
of the state of affairs just after CREATE EXTENSION. However,
it's hard to see what else to do, if we don't want to disallow
dropping the object's original owner. In any case this is
better than the previous do-nothing behavior, and we're unlikely
to come up with a superior solution in time for v17.
While here, tighten up some coding rules about how ACLs in
pg_init_privs should never be null or empty. There's not any
obvious reason to allow that, and perhaps asserting that it's
not so will catch some bugs. (We were previously inconsistent
on the point, with some code paths taking care not to store
empty ACLs and others not.)
This leaves recordExtensionInitPrivWorker not doing anything
with its ownerId argument, but we'll deal with that separately.
catversion bump forced because of change of expected contents
of pg_shdepend when pg_init_privs entries exist.
Discussion: https://postgr.es/m/CAMT0RQSVgv48G5GArUvOVhottWqZLrvC5wBzBa4HrUdXe9VRXw@mail.gmail.com
It used to check whether the target server is connected to the primary
server (which sends the required WAL) in order to react quickly when the
process cannot succeed. This check is not enough to guarantee that the
recovery process will complete. There is a window between the walreceiver
shutting down and pg_is_in_recovery() returning false during which the
check can reach NUM_CONN_ATTEMPTS attempts and fail.
Instead, rely only on the --recovery-timeout option to give up the
process after the specified number of seconds.
This should help with buildfarm failures on slow machines.
Author: Euler Taveira <euler.taveira@enterprisedb.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Discussion: https://www.postgresql.org/message-id/776c5cac-5ef5-4001-b1bc-5b698bc0c62a%40app.fastmail.com
Commit 667e65aac3 changed both num_dead_tuples and max_dead_tuples
columns to dead_tuple_bytes and max_dead_tuple_bytes columns,
respectively. But as per discussion, the number of dead tuples
collected still provides meaningful insights for users.
This commit reintroduces the column for the count of dead tuples,
renamed as num_dead_item_ids. It avoids confusion with the number of
dead tuples removed by VACUUM, which includes dead heap-only tuples
but excludes any pre-existing LP_DEAD items left behind by
opportunistic pruning.
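For illustration, a monitoring query against the progress view might now
look like this (a hedged sketch; column names as introduced here and in
667e65aac3):
SELECT num_dead_item_ids, dead_tuple_bytes, max_dead_tuple_bytes
FROM pg_stat_progress_vacuum;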
Bump catalog version.
Reviewed-by: Peter Geoghegan, Álvaro Herrera, Andrey Borodin
Discussion: https://postgr.es/m/CAD21AoBL5sJE9TRWPyv%2Bw7k5Ee5QAJqDJEDJBUdAaCzGWAdvZw%40mail.gmail.com
Commit f5e4dedfa exposed libpq's internal function PQsocketPoll
without a lot of thought about whether that was an API we really
wanted to chisel in stone. The main problem with it is the use of
time_t to specify the timeout. While we do want an absolute time
so that a loop around PQsocketPoll doesn't have problems with
timeout slippage, time_t has only 1-second resolution. That's
already problematic for libpq's own internal usage --- for example,
pqConnectDBComplete has long had a kluge to treat "connect_timeout=1"
as 2 seconds so that it doesn't accidentally round to nearly zero.
And it's even less likely to be satisfactory for external callers.
Hence, let's change this while we still can.
The best idea seems to be to use an int64 count of microseconds since
the epoch --- basically the same thing as the backend's TimestampTz,
but let's use the standard Unix epoch (1970-01-01) since that's easier
for clients to calculate. Millisecond resolution
would be plenty for foreseeable uses, but maybe the day will come that
we're glad we used microseconds.
Also, since time(2) isn't especially helpful for computing timeouts
defined this way, introduce a new function PQgetCurrentTimeUSec
to get the current time in this form.
Remove the hack in pqConnectDBComplete, so that "connect_timeout=1"
now means what you'd expect.
We can also remove the "#include <time.h>" that f5e4dedfa added to
libpq-fe.h, since there's no longer a need for time_t in that header.
It seems better for v17 not to enlarge libpq-fe.h's include footprint
from what it's historically been, anyway.
I also failed to resist the temptation to do some wordsmithing
on PQsocketPoll's documentation.
Patch by me, per complaint from Dominique Devienne.
Discussion: https://postgr.es/m/913559.1718055575@sss.pgh.pa.us
It wasn't in the documentation at all (even though we document all the
other debugging-like options). Also, change the --help output to show
that the program exits after printing the information, similar to other
such options.
If the CALL is within an atomic context (e.g. there's an outer
transaction block), _SPI_execute_plan should acquire a fresh snapshot
to execute any such functions with. We failed to do that and instead
passed them the Portal snapshot, which had been acquired at the start
of the current SQL command. This'd lead to seeing stale values of
rows modified since the start of the command.
This is arguably a bug in 84f5c2908: I failed to see that "are we in
non-atomic mode" needs to be defined the same way as it is further
down in _SPI_execute_plan, i.e. check !_SPI_current->atomic not just
options->allow_nonatomic. Alternatively the blame could be laid on
plpgsql, which is unconditionally passing allow_nonatomic = true
for CALL/DO even when it knows it's in an atomic context. However,
fixing it in spi.c seems like a better idea since that will also fix
the problem for any extensions that may have copied plpgsql's coding
pattern.
While here, update an obsolete comment about _SPI_execute_plan's
snapshot management.
Per report from Victor Yegorov. Back-patch to all supported versions.
Discussion: https://postgr.es/m/CAGnEboiRe+fG2QxuBO2390F7P8e2MQ6UyBjZSL_w1Cej+E4=Vw@mail.gmail.com
The documentation for POSIX semaphores is missing a reference to
max_wal_senders. This commit fixes that in the same way that
commit 4ebe51a5fb fixed the same issue in the documentation for
System V semaphores.
Discussion: https://postgr.es/m/20240517164452.GA1914161%40nathanxps13
Backpatch-through: 12
This HBA entry was using "local" while specifying an address, which was
incorrect. While at it, this adjusts the format of the entry to be
aligned with the surroundings.
Oversight in 8fea86830e.
Reported-by: Stéphane Schildknecht
Reviewed-by: David G. Johnston
Discussion: https://postgr.es/m/44662001-54c4-4bfd-be93-35e01ca25fa1@gmail.com
Backpatch-through: 16
The formulas for SEMMNI and SEMMNS do not include the archiver
process, which was converted to an auxiliary process in v14, and
the WAL summarizer process, which was introduced in v17. This
commit corrects these formulas and adds a missing reference to
max_wal_senders nearby. Since this section of the documentation
tends to be incorrect quite often, we should likely give up on
documenting the exact formulas in favor of something less fragile,
but that is left as a future exercise.
Reported-by: Sami Imseih
Reviewed-by: Sami Imseih
Discussion: https://postgr.es/m/20240517164452.GA1914161%40nathanxps13
Backpatch-through: 12
Parameter column_name must be an existing column because ALTER
MATERIALIZED VIEW cannot add new columns. The old description was
likely copied from ALTER TABLE.
Author: Erik Wienhold
Discussion: https://postgr.es/m/6880ca53-7961-4eeb-86d5-6bd05fc2027e@ewie.name
Backpatch-through: 12
When MERGE/SPLIT created new partitions, it was cloning the extended
statistics of the parent table.
However, extended stats on partitioned tables don't behave like
indexes on partitioned tables (which exist only to create physical
indexes on child tables). Rather, extended stats on a parent 1) cause
extended stats to be collected and computed across the whole partition
hierarchy, and 2) do not cause extended stats to be computed for the
individual partitions.
"CREATE TABLE ... PARTITION OF" command doesn't copy extended
statistics either. This commit makes createPartitionTable() behave
consistently.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/ZiJW1g2nbQs9ekwK%40pryzbyj2023
Author: Alexander Korotkov, Justin Pryzby
This reverts commit e1e83a7171.
Per further discussion on pgsql-release, we wish to ship beta1 with
that feature, and patch the bug, rather than shipping beta1 with
that feature reverted.
After further review, we want to move in the direction of always
quoting GUC names in error messages, rather than the previous (PG16)
wildly mixed practice or the intermittent (mid-PG17) idea of doing
this depending on how possibly confusing the GUC name is.
This commit applies appropriate quotes to (almost?) all mentions of
GUC names in error messages. It partially supersedes a243569bf6 and
8d9978a717, which had moved things a bit in the opposite direction
but which then were abandoned in a partial state.
Author: Peter Smith <smithpb2250@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CAHut%2BPv-kSN8SkxSdoHano_wPubqcg5789ejhCDZAcLFceBR-w%40mail.gmail.com
Specifically, it terminates a background worker even if the caller
couldn't terminate that worker with pg_terminate_backend().
Commit 3a9b18b309 neglected to update
this. Back-patch to v13, which introduced DROP DATABASE FORCE.
Reviewed by Amit Kapila. Reported by Kirill Reshke.
Discussion: https://postgr.es/m/20240429212756.60.nmisch@google.com
There were three problems with the sslnegotiation options:
1. The sslmode=prefer and sslnegotiation=requiredirect combination was
somewhat dangerous, as you might unintentionally fall back to
plaintext authentication when connecting to a pre-v17 server.
2. There was an asymmetry between 'postgres' and 'direct'
options. 'postgres' meant "try only traditional negotiation", while
'direct' meant "try direct first, and fall back to traditional
negotiation if it fails". That was apparent only if you knew that the
'requiredirect' mode also exists.
3. The "require" word in 'requiredirect' suggests that it's somehow
more strict or more secure, similar to sslmode. However, I don't
consider direct SSL connections to be a security feature.
To address these problems:
- Only allow sslnegotiation='direct' if sslmode='require' or
stronger. And for the record, Jacob and Robert felt that we should do
that (or have sslnegotiation='direct' imply sslmode='require') anyway,
regardless of the first issue.
- Remove the 'direct' mode that falls back to traditional negotiation,
and rename what was called 'requiredirect' to 'direct' instead. In
other words, there is no "try both methods" option anymore, 'postgres'
now means the traditional negotiation and 'direct' means a direct SSL
connection.
Reviewed-by: Jelte Fennema-Nio, Robert Haas, Jacob Champion
Discussion: https://www.postgresql.org/message-id/d3b1608a-a1b6-4eda-9ec5-ddb3e4375808%40iki.fi
The CREATE SUBSCRIPTION documentation mentions that "false" is the default
for this option, so let's use true|false rather than on|off in the text
which talks about this option on the ALTER SUBSCRIPTION page.
The other boolean options mentioned in create_subscription.sgml use true
and false so it makes sense to be consistent with these. The "failover"
option is new to v17.
Author: Peter Smith
Discussion: https://postgr.es/m/CAHut+Ps-RqrggaJU5w85BbeQzw9CLmmLgADVJoJ=xx_4D5CWvw@mail.gmail.com
This feature set did not handle empty ranges correctly, and it's now
too late for PostgreSQL 17 to fix it.
The following commits are reverted:
6db4598fcb Add stratnum GiST support function
46a0cd4cef Add temporal PRIMARY KEY and UNIQUE constraints
86232a49a4 Fix comment on gist_stratnum_btree
030e10ff1a Rename pg_constraint.conwithoutoverlaps to conperiod
a88c800deb Use daterange and YMD in without_overlaps tests instead of tsrange.
5577a71fb0 Use half-open interval notation in without_overlaps tests
34768ee361 Add temporal FOREIGN KEY contraints
482e108cd3 Add test for REPLICA IDENTITY with a temporal key
c3db1f30cb doc: clarify PERIOD and WITHOUT OVERLAPS in CREATE TABLE
144c2ce0cc Fix ON CONFLICT DO NOTHING/UPDATE for temporal indexes
Discussion: https://www.postgresql.org/message-id/d0b64a7a-dfe4-4b84-a906-c7dedfa40a3e@eisentraut.org
postgres --name=value and -c name=value are equivalent. This commit
expands the documentation of libpq's "option" connection parameter and
the server startup sequence for shell interactions to mention both
rather than only -c.
Extracted from a larger patch by the same authors.
Reported-by: Alexey Palazhchenko
Author: David Johnston, Aleksander Alekseev
Reviewed-by: Nathan Bossart, Peter Eisentraut, Álvaro Herrera
Discussion: https://postgr.es/m/CAJ7c6TMkuLiLfrA+EFCPYfhXoMKRxxssB5c86+ibxfaz6+=Sdg@mail.gmail.com
One is about subqueries becoming joins and the other is about optimizing
LIMIT by improving startup cost.
Reported-by: Andy Fan
Discussion: https://postgr.es/m/87h6f4ncc6.fsf@163.com
Reviewed-by: David Rowley
Backpatch-through: master
Presently, when this function is called for an unlogged sequence on
a standby server, it will error out with a message like
ERROR: could not open file "base/5/16388": No such file or directory
Since the pg_sequences system view uses pg_sequence_last_value(),
it can error similarly. To fix, modify the function to return NULL
for unlogged sequences on standby servers. Since this bug is
present on all versions since v15, this approach is preferable to
making the ERROR nicer because we need to repair the pg_sequences
view without modifying its definition on released versions. For
consistency, this commit also modifies the function to return NULL
for other sessions' temporary sequences. The pg_sequences view
already appropriately filters out such sequences, so there's no bug
there, but we might as well offer some defense in case someone
invokes this function directly.
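A sketch of the resulting behavior (sequence name hypothetical):
-- on a standby, for an unlogged sequence:
SELECT pg_sequence_last_value('myseq'::regclass);  -- now returns NULL instead of erroring
SELECT last_value FROM pg_sequences WHERE sequencename = 'myseq';  -- likewise NULL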
Unlogged sequences were first introduced in v15, but temporary
sequences are much older, so while the fix for unlogged sequences
is only back-patched to v15, the temporary sequence portion is
back-patched to all supported versions.
We could also remove the privilege check in the pg_sequences view
definition in v18 if we modify this function to return NULL for
sequences for which the current user lacks privileges, but that is
left as a future exercise for when v18 development begins.
Reviewed-by: Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/20240501005730.GA594666%40nathanxps13
Backpatch-through: 12
There are some problems with the new way to handle these constraints
that were detected at the last minute, and require fixes that appear too
invasive to be doing this late in the cycle. Revert this (again) for
now, we'll try again with these problems fixed.
The following commits are reverted:
b0e96f3119 Catalog not-null constraints
9b581c5341 Disallow changing NO INHERIT status of a not-null constraint
d0ec2ddbe0 Fix not-null constraint test
ac22a9545c Move privilege check to the right place
b0f7dd915b Check stack depth in new recursive functions
3af7217942 Update information_schema definition for not-null constraints
c3709100be Fix propagating attnotnull in multiple inheritance
d9f686a72e Fix restore of not-null constraints with inheritance
d72d32f52d Don't try to assign smart names to constraints
0cd711271d Better handle indirect constraint drops
13daa33fa5 Disallow NO INHERIT not-null constraints on partitioned tables
d45597f72f Disallow direct change of NO INHERIT of not-null constraints
21ac38f498 Fix inconsistencies in error messages
Discussion: https://postgr.es/m/202405110940.joxlqcx4dogd@alvherre.pgsql
This commit extends the backend-side infrastructure of injection points
so that it becomes possible to register some input data when attaching a
point. This private data can be registered along with the function name
and the library name of the callback when attaching a point; it is then
given as an input argument to the callback. This gives modules the
possibility to pass down custom data at runtime when attaching a point
without managing that internally, in a manner consistent with the
callback entry retrieved from the hash shmem table storing the injection
point data.
InjectionPointAttach() gains two arguments, to be able to define the
private data contents and its size.
A follow-up commit will rely on this infrastructure to close a race
condition with the injection point detach in the module
injection_points.
While at it, this changes InjectionPointDetach() to return a boolean,
returning false if a point cannot be detached. This has been mentioned
by Noah as useful when it comes to implementing more complex tests with
concurrent point detach, in combination with the automatic detach done
for local points in the test module.
Documentation is adjusted in consequence.
Per discussion with Noah Misch.
Reviewed-by: Noah Misch
Discussion: https://postgr.es/m/20240509031553.47@rfd.leadboat.com
A lot of the examples in this section are quite ancient and no
longer match the system's current behavior. I ran through them
and updated the sample output to match what I get today.
Notably, it seems that the tenk1 table got a few pages smaller back
around v12, so that a lot of the cost estimates for queries on that
table are a little different now. (This also caused some examples
to surprisingly not match others, depending on when they were made.)
One mergejoin-based example no longer produces the same plan at all,
so I had to work around that, including switching to a different
example of using an enable_xxx setting.
I also changed a couple of examples that were relying on nonexistent
tables or indexes. IMO the premise of this section is that the
examples should be more-or-less reproducible against the regression
database, so that these examples were unhelpful as given. It's not
that hard to find a corresponding example with the tables we do have.
Some of the EXPLAIN ANALYZE examples were out of date in that we now
show more numbers than we did at the time.
Possibly chapter 68 could use similar review, but I'm out of
energy for today.
User-written triggers can modify or block the effects of SQL update
and delete operations. That includes operations that are executed
to implement foreign keys' referential integrity actions (such as
ON UPDATE SET NULL or ON DELETE CASCADE). Therefore it's possible
for a misdesigned trigger to result in a database state that violates
the foreign key constraint.
While this isn't great, the alternatives seem worse: in particular,
refusing to fire triggers for such updates would break many valuable
use-cases. We could also try to recheck the constraint after the
action, but that'd roughly double the already-high cost of FK
constraint enforcement, for no benefit in normal cases. So we've
always considered that it's on the trigger programmer's head to
avoid breaking RI actions. This was never documented anywhere,
though. Add a para to the Triggers chapter to explain it.
Laurenz Albe, David Johnston, Tom Lane
Discussion: https://postgr.es/m/b81fe38fcc25a81be6e2e5b3fc1ff624130762fa.camel@cybertec.at
Both the pgbench --help output and the reference page have sections
for initialization options, benchmarking options, and common options.
But the --debug option ended up in the wrong place on the reference
page. Fix that by making the documentation match the --help output.
The catalog view pg_stats_ext fails to consider privileges for
expression statistics. The catalog view pg_stats_ext_exprs fails
to consider privileges and row-level security policies. To fix,
restrict the data in these views to table owners or roles that
inherit privileges of the table owner. It may be possible to apply
less restrictive privilege checks in some cases, but that is left
as a future exercise. Furthermore, for pg_stats_ext_exprs, do not
return data for tables with row-level security enabled, as is
already done for pg_stats_ext.
On the back-branches, a fix-CVE-2024-4317.sql script is provided
that will install into the "share" directory. This file can be
used to apply the fix to existing clusters.
Bumps catversion on 'master' branch only.
Reported-by: Lukas Fittl
Reviewed-by: Noah Misch, Tomas Vondra, Tom Lane
Security: CVE-2024-4317
Backpatch-through: 14
Both the initdb --help output and the reference page have a section
for options and a section for less commonly used options. But some
recently added options were sprinkled around inconsistently. Fix that
by making the documentation match the --help output.
We support changing NO INHERIT constraint to INHERIT for constraints in
child relations when adding a constraint to some ancestor relation, and
also during pg_upgrade's schema restore; but other than those special
cases, command ALTER TABLE ADD CONSTRAINT should not be allowed to
change an existing constraint from NO INHERIT to INHERIT, as that would
require processing child relations so that they also acquire an
appropriate constraint, which we may not be in a position to do. (It'd
also be surprising behavior.)
It is conceivable that we want to allow ALTER TABLE SET NOT NULL to make
such a change; but in that case some more code is needed to implement it
correctly, so for now I've made that throw the same error message.
Also, during the prep phase of ALTER TABLE ADD CONSTRAINT, acquire locks
on all descendant tables; otherwise we might operate on child tables on
which no locks are held, particularly in the mode where a primary key
causes not-null constraints to be created on children.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
Discussion: https://postgr.es/m/7d923a66-55f0-3395-cd40-81c142b5448b@gmail.com
Commit 61461a300c accidentally misspelled the PGcancelConn struct
using the PQ prefix (which admittedly is a very easy typo to make).
Reported off-list.
Reported-by: Alexander Lakhin <exclusion@gmail.com>
The documentation said that you need to pick a suitable LC_COLLATE
setting in addition to setting the DETERMINISTIC flag. This would
have been correct if the libc provider supported nondeterministic
collations, but since it doesn't, you actually need to set the LOCALE
option.
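For example, a nondeterministic collation is created via the LOCALE
option with the ICU provider, roughly like this (name and locale string
illustrative):
CREATE COLLATION ndcoll (provider = icu, locale = 'und-u-ks-level2', deterministic = false);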
Reviewed-by: Kashif Zeeshan <kashi.zeeshan@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/a71023c2-0ae0-45ad-9688-cf3b93d0d65b%40eisentraut.org
This commit makes new partitions created by ALTER TABLE ... SPLIT PARTITION
and ALTER TABLE ... MERGE PARTITIONS commands inherit the parent table
access method.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/84ada05b-be5c-473e-6d1c-ebe5dd21b190%40gmail.com
Reviewed-by: Pavel Borisov
The createPartitionTable() function is responsible for creating new partitions
for ALTER TABLE ... MERGE PARTITIONS, and ALTER TABLE ... SPLIT PARTITION
commands. It emulates the behaviour of CREATE TABLE ... (LIKE ...), where
the new table's persistence should be specified by the user. In table
partitioning, the persistence of a partition and its parent must match.
So, this commit makes createPartitionTable() copy the persistence of the
parent partition.
Also, this commit makes createPartitionTable() recheck the persistence after
the new table creation. This is needed because persistence might be affected
by pg_temp in search_path.
This commit also changes the signature of createPartitionTable() making it
take the parent's Relation itself instead of the name of the parent relation,
and return the Relation of the new partition. That doesn't lead to
complications, because both callers have the parent table open and need to
open the new partition.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/dbc8b96c-3cf0-d1ee-860d-0e491da20485%40gmail.com
Author: Dmitry Koval
Reviewed-by: Alexander Korotkov, Robert Haas, Justin Pryzby, Pavel Borisov
If an ACL recorded in pg_init_privs mentions a non-pinned role,
that reference must also be noted in pg_shdepend so that we know
that the role can't go away without removing the ACL reference.
Otherwise, DROP ROLE could succeed and leave dangling entries
behind, which is what's causing the recent upgrade-check failures
on buildfarm member copperhead.
This has been wrong since pg_init_privs was introduced, but it's
escaped notice because typical pg_init_privs entries would only
mention the bootstrap superuser (pinned) or at worst the owner
of the extension (who can't go away before the extension does).
We lack even a representation of such a role reference for
pg_shdepend. My first thought for a solution was entries listing
pg_init_privs in classid, but that doesn't work because then there's
no place to put the granted-on object's classid. Rather than adding
a new column to pg_shdepend, let's add a new deptype code
SHARED_DEPENDENCY_INITACL. Much of the associated boilerplate
code can be cribbed from code for SHARED_DEPENDENCY_ACL.
A lot of the bulk of this patch just stems from the new need to pass
the object's owner ID to recordExtensionInitPriv, so that we can
consult it while updating pg_shdepend. While many callers have that
at hand already, a few places now need to fetch the owner ID of an
arbitrary privilege-bearing object. For that, we assume that there
is a catcache on the relevant catalog's OID column, which is an
assumption already made in ExecGrant_common so it seems okay here.
We do need an entirely new routine RemoveRoleFromInitPriv to perform
cleanup of pg_init_privs ACLs during DROP OWNED BY. It's analogous
to RemoveRoleFromObjectACL, but we can't share logic because that
function operates by building a command parsetree and invoking
existing GRANT/REVOKE infrastructure. There is of course no SQL
command that would update pg_init_privs entries when we're not in
process of creating their extension, so we need a routine that can
do the updates directly.
catversion bump because this changes the expected contents of
pg_shdepend. For the same reason, there's no hope of back-patching
this, even though it fixes a longstanding bug. Fortunately, the
case where it's a problem seems to be near nonexistent in the field.
If it weren't for the buildfarm breakage, I'd have been content to
leave this for v18.
Patch by me; thanks to Daniel Gustafsson for review and discussion.
Discussion: https://postgr.es/m/1745535.1712358659@sss.pgh.pa.us
The paragraph in the docs and the comment applied to
sslnegotiation=direct, but not sslnegotiation=requiredirect. In
'requiredirect' mode, negotiated SSL is never used. Move the paragraph
in the docs under the description of 'direct' mode, and rephrase it.
Also the comment's reference to reusing a plaintext connection was
bogus. Authentication failure in plaintext mode only happens after
sending the startup packet, so the connection cannot be reused.
Reported-by: Jacob Champion
Discussion: https://www.postgresql.org/message-id/CAOYmi+=sj+1uydS0NR4nYzw-LRWp3Q-s5speBug5UCLSPMbvGA@mail.gmail.com
The --tablespace-mapping option was specified with no_argument
rather than required_argument, which is wrong. Since the actual argument
string passed to getopt_long() included "T:", the single-character
form of the option still worked, but the long form did not. Repair.
The call to getopt_long() erroneously included "P", which doesn't
correspond to any supported option. Remove.
The help message used "do not" in one place and "don't" in another.
Standardize on "do not".
The documentation erroneously stated that the tablespace mappings
would be applied relative to the pathnames in the first backup
specified on the command line, rather than the final one. Fix.
Thanks to Tomas Vondra and Daniel Gustafsson for alerting me to
these mistakes.
Discussion: http://postgr.es/m/CA+TgmoYFznwwaZhHSF1Ze7JeyBv-1yOoSrucKMw37WpF=7RP8g@mail.gmail.com
If not all backups have the same checksum status, but the final backup
has checksums enabled, then the output directory may include pages
with invalid checksums. Document this limitation and explain how to
work around it.
In a future release, we may want to teach pg_combinebackup to
recompute page checksums when required, but as feature freeze has come
and gone, it seems a bit too late to do that for this release.
Patch by me, reviewed by Daniel Gustafsson
Discussion: http://postgr.es/m/CA+TgmoZugzOSmgkx97u3pc0M7U8LycWvugqoyWBv6j15a4hE5g@mail.gmail.com
Allow pg_sync_replication_slots() to error out during promotion of the
standby. This makes the behavior of the SQL function consistent with the
slot sync worker. We also ensured that pg_sync_replication_slots() cannot
be executed if sync_replication_slots is enabled and the slot sync worker
is already running to perform the synchronization of slots. Previously, it
would have succeeded when the worker was idle and failed when it was
performing a sync, which could confuse users.
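A sketch of the resulting behavior (assuming a standby configured for
slot synchronization):
-- on the standby:
SELECT pg_sync_replication_slots();
-- now errors out during promotion, or if sync_replication_slots is
-- enabled and the slot sync worker is already running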
This patch fixes another issue in the slot sync worker where
SignalHandlerForShutdownRequest() needs to be registered *before* setting
SlotSyncCtx->pid, otherwise, the slotsync worker could miss handling
SIGINT sent by the startup process(ShutDownSlotSync) if it is sent before
worker could register SignalHandlerForShutdownRequest(). To be consistent,
all signal handlers' registration is moved to a prior location before we
set the worker's pid.
Ensure that we clean up synced temp slots at the end of
pg_sync_replication_slots() to avoid such slots being left over after
promotion.
Ensure that ShutDownSlotSync() captures SlotSyncCtx->pid under spinlock to
avoid accessing invalid value as it can be reset by concurrent slot sync
exit due to an error.
Author: Shveta Malik
Reviewed-by: Hou Zhijie, Bertrand Drouvot, Amit Kapila, Masahiko Sawada
Discussion: https://postgr.es/m/CAJpy0uBefXUS_TSz=oxmYKHdg-fhxUT0qfjASW3nmqnzVC3p6A@mail.gmail.com
The paragraph describing the JavaScript string literals allowed in
jsonpath expressions unnecessarily mentions JSON by erroneously
listing \v as allowed by JSON and mentioning the \xNN and \u{N...}
backslash escapes as deviations from JSON when in fact both are
accepted by ECMAScript/JavaScript. Fix this by only referring to
JavaScript.
Author: Erik Wienhold <ewie@ewie.name>
Discussion: https://www.postgresql.org/message-id/flat/1EB17DF9-2636-484B-9DD0-3CAB19C4F5C4@justatheory.com
The paragraph on SSL compression is largely describing events which
took place many years ago, so reword with past tense.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/eac70d46-e61c-4d71-a1e1-78e2bfa19485@eisentraut.org
Do not allow ALTER SUBSCRIPTION ... SET (failover = on|off) in a
transaction block as the changed failover option of the slot can't be
rolled back. For the same reason, we refrain from altering the replication
slot's failover property if the subscription is created with a valid
slot_name and create_slot=false.
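A minimal illustration of the new restriction (subscription name
hypothetical):
BEGIN;
ALTER SUBSCRIPTION mysub SET (failover = true);  -- now rejected inside a transaction block
ROLLBACK;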
Reported-by: Kuroda Hayato
Author: Hou Zhijie
Reviewed-by: Shveta Malik, Bertrand Drouvot, Kuroda Hayato
Discussion: https://postgr.es/m/OS0PR01MB57165542B09DFA4943830BF294082@OS0PR01MB5716.jpnprd01.prod.outlook.com
The documentation was clear that queryid should not be assumed to be
stable between major versions, but it said nothing about minor versions,
leaving the reader to guess whether stability between minor versions was
implied.
Here we give minor versions an explicit mention to indicate queryid can
generally be assumed stable between minor versions.
Reviewed-by: Michael Paquier
Discussion: https://postgr.es/m/CAApHDvpYGE6h0cD9UO-eHySPynPj1L3J%3DHxT%2BA7Ud8_Yo6AuzA%40mail.gmail.com
Backpatch-through: 12
The optimization for inserts into BRIN indexes added by c1ec02be1d
relies on a cache that needs to be explicitly released after calling
index_insert(). The commit however failed to invoke the cleanup in
validate_index(), which calls index_insert() indirectly through
table_index_validate_scan().
After inspecting index_insert() callers, it seems unique_key_recheck()
is missing the call too.
Fixed by adding the two missing index_insert_cleanup() calls.
This commit makes two additional improvements. The aminsertcleanup()
signature is modified to have the index as the first argument, to make
it more like the other AM callbacks. And the aminsertcleanup() callback
is invoked even if the ii_AmCache is NULL, so that it can decide if the
cleanup is necessary.
Author: Alvaro Herrera, Tomas Vondra
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/202401091043.e3nrqiad6gb7@alvherre.pgsql
These operators were removed by 2f70fdb064 in the v14 cycle, but they were
accidentally left in the table of built-in operator classes. Backpatch down
to v14 where the operators were removed.
Author: Aleksander Alekseev <aleksander@timescale.com>
Reported-by: Colin Caine <cmcaine@gmail.com>
Discussion: https://postgr.es/m/CADwQTQbbr2UQ_fpbyc+8ay=RwEYgYk=TZxH3+RHDqAQfoG+EWA@mail.gmail.com
Backpatch-through: v14
This updates the link from pg_createsubscriber to initial data sync
to actually link to the subsection in question as opposed to the
main logical replication section.
Author: Pavel Luzanov <p.luzanov@postgrespro.ru>
Discussion: https://postgr.es/m/a4af555a-ac60-4416-877d-0440d29b8763@postgrespro.ru
This fixes various typos, duplicated words, and tiny bits of whitespace
mainly in code comments but also in docs.
Author: Daniel Gustafsson <daniel@yesql.se>
Author: Heikki Linnakangas <hlinnaka@iki.fi>
Author: Alexander Lakhin <exclusion@gmail.com>
Author: David Rowley <dgrowleyml@gmail.com>
Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
Discussion: https://postgr.es/m/3F577953-A29E-4722-98AD-2DA9EFF2CBB8@yesql.se
We don't want users to think that pg_combinebackup is trying to check
the validity of individual backups, because it isn't. Adjust the wording
about sanity checks to make it clear that verification of individual
backups is the job of pg_verifybackup, and that the checks performed
by pg_combinebackup are around the relationships between the backups.
Per discussion with David Steele.
Discussion: http://postgr.es/m/e6f930c3-590c-47b9-b094-217bb2a3e22e@pgmasters.net
This addresses some post-commit review comments for commits 6185c973,
de3600452, and 9425c596a0, with the following changes:
* Fix JSON_TABLE() syntax documentation to use the term
"path_expression" for JSON path expressions instead of
"json_path_specification" to be consistent with the other SQL/JSON
functions.
* Fix a typo in the example code in JSON_TABLE() documentation.
* Rewrite some newly added comments in jsonpath.h.
* In JsonPathQuery(), add missing cast to int before printing an enum
value.
Reported-by: Jian He <jian.universality@gmail.com>
Discussion: https://postgr.es/m/CACJufxG_e0QLCgaELrr2ZNz7AxPeGCNKAORe3fHtFCQLsH4J4Q@mail.gmail.com
The code associated with EXEC SQL DEFINE was unreadable and full of
bugs, notably:
* It'd attempt to free a non-malloced string if the ecpg program
tries to redefine a macro that was defined on the command line.
* Possible memory stomp if user writes "-D=foo".
* Undef'ing or redefining a macro defined on the command line would
change the state visible to the next file, when multiple files are
specified on the command line. (While possibly that could have been
an intentional choice, the code clearly intends to revert to the
original macro state; it's just failing to consider this interaction.)
* Missing "break" in defining a new macro meant that redefinition
of an existing name would cause an extra entry to be added to the
definition list. While not immediately harmful, a subsequent undef
would result in the prior entry becoming visible again.
* The interactions with input buffering are subtle and were entirely
undocumented.
It's not that surprising that we hadn't noticed these bugs,
because there was no test coverage at all of either the -D
command line switch or multiple input files. This patch adds
such coverage (in a rather hacky way I guess).
In addition to the code bugs, the user documentation was confused
about whether the -D switch defines a C macro or an ecpg one, and
it failed to mention that you can write "-Dsymbol=value".
These problems are old, so back-patch to all supported branches.
Discussion: https://postgr.es/m/998011.1713217712@sss.pgh.pa.us
Previously, we had consecutive, very short chapters called "Generic
WAL" and "Custom WAL Resource Managers," explaining different approaches
to the same problem. Merge them into a single chapter. Explain most
of the differences between the approaches in the chapter's
introductory text, rather than in the individual sections.
Discussion: http://postgr.es/m/46ac50c1-6b2a-404f-a683-b67af6ab56e9@eisentraut.org
The documentation for SET ROLE states that superusers who switch to
a non-superuser role lose their superuser privileges. While this
is true for most commands, there are exceptions such as SET ROLE
and SET SESSION AUTHORIZATION, which continue to use the current
session user and the authenticated user, respectively.
Furthermore, the description of this command already describes its
effect, so it is arguably unnecessary to include this special case.
This commit removes the note about the superuser case and adds a
sentence about the aforementioned exceptions to the description.
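A sketch of the exceptions (role names hypothetical):
SET ROLE unprivileged_role;
-- most privilege checks now apply to unprivileged_role, but:
SET ROLE other_role;              -- checked against the current session user
SET SESSION AUTHORIZATION alice;  -- checked against the authenticated user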
Co-authored-by: Yurii Rashkovskii
Reviewed-by: Shubham Khanna, Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/CA%2BRLCQysHtME0znk2KUMJN343ksboSRQSU-hCnOjesX6VK300Q%40mail.gmail.com
This addresses a few problems with commit e5da0fe3c2 ("Catalog domain
not-null constraints").
In CREATE DOMAIN, a NOT NULL constraint looks like
CREATE DOMAIN d1 AS int [ CONSTRAINT conname ] NOT NULL
(Before e5da0fe3c2, the constraint name was accepted but ignored.)
But in ALTER DOMAIN, a NOT NULL constraint looks like
ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL VALUE
where VALUE appears where the column name would be for a table constraint.
(This works as of e5da0fe3c2. Before e5da0fe3c2, this syntax
resulted in an internal error.)
But for domains, this latter syntax is confusing and needlessly
inconsistent between CREATE and ALTER. So this changes it to just
ALTER DOMAIN d1 ADD [ CONSTRAINT conname ] NOT NULL
(None of these syntaxes are per SQL standard; we are just living with
the bits of inconsistency that have built up over time.)
In passing, this also changes the psql \dD output to not show not-null
constraints in the column "Check", since it's already shown in the
column "Nullable". This has also been off since e5da0fe3c2.
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/9ec24d7b-633d-463a-84c6-7acff769c9e8%40eisentraut.org
In the libpq encryption negotiation tests, don't run the GSSAPI tests
unless PG_TEST_EXTRA='kerberos' is also set. That makes it possible to
still run most of the tests when GSSAPI support is compiled in, but
there's no MIT Kerberos installation.
The tools.ietf.org site has been decommissioned and replaced by a
number of sites serving various purposes. Links to RFCs and BCPs
are now 301 redirected to their new respective IETF sites. Since
this serves no purpose and only adds network overhead, update our
links to the new locations.
Backpatch to all supported versions.
Discussion: https://postgr.es/m/3C1CEA99-FCED-447D-9858-5A579B4C6687@yesql.se
Backpatch-through: v12
A NESTED path allows extracting data from nested levels of JSON
objects given by the parent path expression, which are projected as
columns specified using a nested COLUMNS clause, just like the parent
COLUMNS clause. Rows produced from NESTED columns are "joined"
to the row produced from the parent columns. If a particular NESTED
path evaluates to 0 rows, then the nested COLUMNS will emit NULLs,
making it an OUTER join.
NESTED columns themselves may include NESTED paths to allow
extracting data from arbitrary nesting levels, which are likewise
joined against the rows at the parent level.
Multiple NESTED paths at a given level are called "sibling" paths
and their rows are combined by UNIONing them, that is, after being
joined against the parent row as described above.
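For example, a minimal sketch (data and names illustrative):
SELECT jt.* FROM JSON_TABLE(
  '{"a": 1, "b": [{"c": 2}, {"c": 3}]}'::jsonb,
  '$' COLUMNS (
    a int PATH '$.a',
    NESTED PATH '$.b[*]' COLUMNS (c int PATH '$.c')
  )
) AS jt;
-- yields (1, 2) and (1, 3); had '$.b[*]' produced no rows,
-- a single row (1, NULL) would be emitted, as in an outer join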
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Reviewers have included (in no particular order):
Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup,
Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson,
Justin Pryzby, Álvaro Herrera, Jian He
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
When testing buffer pool logic, it is useful to be able to evict
arbitrary blocks. This function can be used in SQL queries over the
pg_buffercache view to set up a wide range of buffer pool states. Of
course, buffer mappings might change concurrently so you might evict a
block other than the one you had in mind, and another session might
bring it back in at any time. That's OK for the intended purpose of
setting up developer testing scenarios, and more complicated interlocking
schemes to give stronger guarantees about that would likely be less
flexible for actual testing work anyway. Superuser-only.
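For instance, assuming the function is exposed through the pg_buffercache
extension as pg_buffercache_evict(bufferid), usage could look like this
(table name hypothetical):
SELECT pg_buffercache_evict(bufferid)
FROM pg_buffercache
WHERE relfilenode = pg_relation_filenode('my_table'::regclass);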
Author: Palak Chaturvedi <chaturvedipalak1911@gmail.com>
Author: Thomas Munro <thomas.munro@gmail.com> (docs, small tweaks)
Reviewed-by: Nitin Jadhav <nitinjadhavpostgres@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Cary Huang <cary.huang@highgo.ca>
Reviewed-by: Cédric Villemain <cedric.villemain+pgsql@abcsql.com>
Reviewed-by: Jim Nasby <jim.nasby@gmail.com>
Reviewed-by: Maxim Orlov <orlovmg@gmail.com>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Discussion: https://postgr.es/m/CALfch19pW48ZwWzUoRSpsaV9hqt0UPyaBPC4bOZ4W+c7FF566A@mail.gmail.com
libpq now always tries to send ALPN. With the traditional negotiated
SSL connections, the server accepts the ALPN, and refuses the
connection if it's not what we expect, but connecting without ALPN is
still OK. With the new direct SSL connections, ALPN is mandatory.
NOTE: This uses "TBD-pgsql" as the protocol ID. We must register a
proper one with IANA before the release!
Author: Greg Stark, Heikki Linnakangas
Reviewed-by: Matthias van de Meent, Jacob Champion
By skipping SSLRequest, you can eliminate one round-trip when
establishing a TLS connection. It is also more friendly to generic TLS
proxies that don't understand the PostgreSQL protocol.
This is disabled by default in libpq, because the direct TLS handshake
will fail with old server versions. It can be enabled with the
sslnegotiation=direct option. It will still fall back to the negotiated
TLS handshake if the server rejects the direct attempt, either because
it is an older version or the server doesn't support TLS at all, but
the fallback can be disabled with the sslnegotiation=requiredirect
option.
Author: Greg Stark, Heikki Linnakangas
Reviewed-by: Matthias van de Meent, Jacob Champion
Replace (expr op C1) OR (expr op C2) ... with expr op ANY(ARRAY[C1, C2, ...])
at the preliminary stage of optimization, when we are still working with the
expression tree.
Here Cn is the n-th constant expression, 'expr' is a non-constant expression,
and 'op' is an operator which returns a boolean result and has a commutator
(for the case of reverse order of the constant and non-constant parts of the
expression, like 'Cn op expr').
Sometimes this can lead to a suboptimal plan. This is why there is an
or_to_any_transform_limit GUC. It specifies a threshold value for the number
of arguments in an OR expression that triggers the OR-to-ANY transformation.
Generally, more groupable OR arguments mean that the transformation is more
likely to win than to lose.
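A sketch of the transformation (table and column hypothetical):
SET or_to_any_transform_limit = 2;
EXPLAIN SELECT * FROM t WHERE x = 1 OR x = 2 OR x = 3;
-- the qual is shown as: x = ANY ('{1,2,3}'::integer[])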
Discussion: https://postgr.es/m/567ED6CA.2040504%40sigaev.ru
Author: Alena Rybakina <lena.ribackina@yandex.ru>
Author: Andrey Lepikhov <a.lepikhov@postgrespro.ru>
Reviewed-by: Peter Geoghegan <pg@bowt.ie>
Reviewed-by: Ranier Vilela <ranier.vf@gmail.com>
Reviewed-by: Alexander Korotkov <aekorotkov@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com>
Reviewed-by: Jian He <jian.universality@gmail.com>
What the documentation calls an exclude_element is an index_elem
according to gram.y, and it allows all the same options that
a CREATE INDEX column specification does. The COLLATE patch
neglected to update the CREATE/ALTER TABLE docs about that,
and later the opclass-parameters patch made the same oversight.
Add those options to the syntax synopses, and polish the
associated text a bit.
Back-patch to v13 where opclass parameters came in. We could
update v12 with just the COLLATE omission, but it doesn't quite
seem worth the trouble at this point.
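For example, the syntax now documented allows an exclusion constraint
element to carry a collation and an operator class, as in this sketch
(requires btree_gist; names illustrative):
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE reservations (
  room   text,
  during tsrange,
  EXCLUDE USING gist (room COLLATE "C" WITH =, during WITH &&)
);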
Shihao Zhong, reviewed by Daniel Vérité, Shubham Khanna and myself
Discussion: https://postgr.es/m/CAGRkXqShbVyB8E3gapfdtuwiWTiK=Q67Qb9qwxu=+-w0w46EBA@mail.gmail.com
This patch generalizes libpq's existing single-row mode to allow
individual partial-result PGresults to contain up to N rows, rather
than always one row. This reduces malloc overhead compared to plain
single-row mode, and it is very useful for psql's FETCH_COUNT feature,
since otherwise we'd have to add code (and cycles) to either merge
single-row PGresults into a bigger one or teach psql's
results-printing logic to accept arrays of PGresults.
To avoid API breakage, PQsetSingleRowMode() remains the same, and we
add a new function PQsetChunkedRowsMode() to invoke the more general
case. Also, PGresults obtained the old way continue to carry the
PGRES_SINGLE_TUPLE status code, while if PQsetChunkedRowsMode() is
used then their status code is PGRES_TUPLES_CHUNK. The underlying
logic is the same either way, though.
Daniel Vérité, reviewed by Laurenz Albe and myself (and whacked
around a bit by me, so any remaining bugs are my fault)
Discussion: https://postgr.es/m/CAKZiRmxsVTkO928CM+-ADvsMyePmU3L9DQCa9NwqjvLPcEe5QA@mail.gmail.com
This new DDL command splits a single partition into several partitions.
Just like the ALTER TABLE ... MERGE PARTITIONS ... command, new partitions
are created using the createPartitionTable() function with the parent
partition as the template.
This commit comprises a quite naive implementation which works in a single
process and holds an ACCESS EXCLUSIVE LOCK on the parent table during all
the operations, including the tuple routing. This is why this new DDL
command can't be recommended for large partitioned tables under a high
load. However, this implementation comes in handy in certain cases even
as is.
Also, it could be used as a foundation for future implementations with
less locking, and possibly parallelism.
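A sketch of the syntax (table and bounds illustrative):
ALTER TABLE measurement SPLIT PARTITION m_y2024 INTO
  (PARTITION m_y2024h1 FOR VALUES FROM ('2024-01-01') TO ('2024-07-01'),
   PARTITION m_y2024h2 FOR VALUES FROM ('2024-07-01') TO ('2025-01-01'));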
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval
Reviewed-by: Matthias van de Meent, Laurenz Albe, Zhihong Yu, Justin Pryzby
Reviewed-by: Alvaro Herrera, Robert Haas, Stephane Tachoires
This new DDL command merges several partitions into one partition of the
target table. The target partition is created using the new
createPartitionTable() function with the parent partition as the template.
This commit comprises a quite naive implementation which works in a single
process and holds an ACCESS EXCLUSIVE LOCK on the parent table during all
the operations, including the tuple routing. This is why this new DDL
command can't be recommended for large partitioned tables under a high
load. However, this implementation comes in handy in certain cases even
as is.
Also, it could be used as a foundation for future implementations with
less locking, and possibly parallelism.
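A sketch of the syntax (names illustrative):
ALTER TABLE measurement
  MERGE PARTITIONS (m_y2024h1, m_y2024h2) INTO m_y2024;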
Discussion: https://postgr.es/m/c73a1746-0cd0-6bdd-6b23-3ae0b7c0c582%40postgrespro.ru
Author: Dmitry Koval
Reviewed-by: Matthias van de Meent, Laurenz Albe, Zhihong Yu, Justin Pryzby
Reviewed-by: Alvaro Herrera, Robert Haas, Stephane Tachoires
Commit 9e8da0f7 taught nbtree to handle ScalarArrayOpExpr quals
natively. This works by pushing down the full context (the array keys)
to the nbtree index AM, enabling it to execute multiple primitive index
scans that the planner treats as one continuous index scan/index path.
This earlier enhancement enabled nbtree ScalarArrayOp index-only scans.
It also allowed scans with ScalarArrayOp quals to return ordered results
(with some notable restrictions, described further down).
Take this general approach a lot further: teach nbtree SAOP index scans
to decide how to execute ScalarArrayOp scans (when and where to start
the next primitive index scan) based on physical index characteristics.
This can be far more efficient. All SAOP scans will now reliably avoid
duplicative leaf page accesses (just like any other nbtree index scan).
SAOP scans whose array keys are naturally clustered together now require
far fewer index descents, since we'll reliably avoid starting a new
primitive scan just to get to a later offset from the same leaf page.
The scan's arrays now advance using binary searches for the array
element that best matches the next tuple's attribute value. Required
scan key arrays (i.e. arrays from scan keys that can terminate the scan)
ratchet forward in lockstep with the index scan. Non-required arrays
(i.e. arrays from scan keys that can only exclude non-matching tuples)
"advance" without the process ever rolling over to a higher-order array.
Naturally, only required SAOP scan keys trigger skipping over leaf pages
(non-required arrays cannot safely end or start primitive index scans).
Consequently, even index scans of a composite index with a high-order
inequality scan key (which we'll mark required) and a low-order SAOP
scan key (which we won't mark required) now avoid repeating leaf page
accesses -- that benefit isn't limited to simpler equality-only cases.
In general, all nbtree index scans now output tuples as if they were one
continuous index scan -- even scans that mix a high-order inequality
with lower-order SAOP equalities reliably output tuples in index order.
This allows us to remove a couple of special cases that were applied
when building index paths with SAOP clauses during planning.
Bugfix commit 807a40c5 taught the planner to avoid generating unsafe
path keys: path keys on a multicolumn index path, with a SAOP clause on
any attribute beyond the first/most significant attribute. These cases
are now all safe, so we go back to generating path keys without regard
for the presence of SAOP clauses (just like with any other clause type).
Affected queries can now exploit scan output order in all the usual ways
(e.g., certain "ORDER BY ... LIMIT n" queries can now terminate early).
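For instance, a query of this shape (schema hypothetical) can now be
executed as one continuous, ordered index scan and terminate early:
CREATE INDEX ON sales (region_id, amount);
SELECT * FROM sales
WHERE region_id IN (1, 3, 5)        -- ScalarArrayOpExpr qual
ORDER BY region_id, amount
LIMIT 10;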
Also undo changes from follow-up bugfix commit a4523c5a, which taught
the planner to produce alternative index paths, with path keys, but
without low-order SAOP index quals (filter quals were used instead).
We'll no longer generate these alternative paths, since they can no
longer offer any meaningful advantages over standard index qual paths.
Affected queries thereby avoid all of the disadvantages that come from
using filter quals within index scan nodes. They can avoid extra heap
page accesses from using filter quals to exclude non-matching tuples
(index quals will never have that problem). They can also skip over
irrelevant sections of the index in more cases (though only when nbtree
determines that starting another primitive scan actually makes sense).
There is a theoretical risk that removing restrictions on SAOP index
paths from the planner will break compatibility with amcanorder-based
index AMs maintained as extensions. Such an index AM could have the
same limitations around ordered SAOP scans as nbtree had up until now.
Adding a pro forma incompatibility item about the issue to the Postgres
17 release notes seems like a good idea.
Author: Peter Geoghegan <pg@bowt.ie>
Author: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-By: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-By: Matthias van de Meent <boekewurm+postgres@gmail.com>
Reviewed-By: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://postgr.es/m/CAH2-Wz=ksvN_sjcnD1+Bt-WtifRA5ok48aDYnq3pkKhxgMQpcw@mail.gmail.com
The BAS_VACUUM ring size has been 256kB since commit d526575f introduced
the mechanism 17 years ago. Commit 1cbbee03 recently made it
configurable but retained the traditional default. The correct default
size has been debated for years, but 256kB is certainly very small.
VACUUM soon needs to write back data it dirtied only 32 blocks ago,
which usually requires flushing the WAL. New experiments in prefetching
pages for VACUUM exacerbated the problem by crashing into dirty data
even sooner. Let's make the default 2MB. That's 1.6% of the default
toy buffer pool size, and 0.2% of 1GB, which would be considered a
small shared_buffers setting for a real system these days. Users are
still free to set the GUC to a different value.
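Assuming the GUC in question is vacuum_buffer_usage_limit from 1cbbee03,
a sketch:
SET vacuum_buffer_usage_limit = '256kB';  -- restore the old default for this session
VACUUM sometable;
-- or per command:
VACUUM (BUFFER_USAGE_LIMIT '256kB') sometable;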
Reviewed-by: Andres Freund <andres@anarazel.de>
Discussion: https://postgr.es/m/20240403221257.md4gfki3z75cdyf6%40awork3.anarazel.de
Discussion: https://postgr.es/m/CA%2BhUKGLY4Q4ZY4f1rvnFtv6%2BPkjNf8MejdPkcju3Qii9DYqqcQ%40mail.gmail.com
Adds --clone/--copy-file-range options to pg_combinebackup, to allow
copying files using file cloning or copy_file_range(). These methods may
be faster than the standard block-by-block copy, but the main advantage
is that they enable various features provided by CoW filesystems.
This commit only uses these copy methods for files that did not change
and can be copied as a whole from a single backup.
These new copy methods may not be available on all platforms, in which
case the command throws an error (immediately, even if no files would be
copied as a whole). This early failure seems better than failing later
when trying to copy the first file, after performing a lot of work on
earlier files.
If the requested copy method is available, but a checksum needs to be
recalculated (e.g. because of a different checksum type), the file is
still copied using the requested method, but it is also read for the
checksum calculation. Depending on the filesystem this may be more
expensive than just performing the simple copy, but it does enable the
CoW benefits.
Initial patch by Jakub Wartak, various reworks and improvements by me.
Author: Tomas Vondra, Jakub Wartak
Reviewed-by: Thomas Munro, Jakub Wartak, Robert Haas
Discussion: https://postgr.es/m/3024283a-7491-4240-80d0-421575f6bb23%40enterprisedb.com
The documentation index is getting very long, which makes it hard
to find things. Since these chapters are all very similar in structure
and content, merging them is a natural way of reducing the size of
the toplevel index.
Rather than actually combining all of the SGML into a single file,
keep one file per <sect1>, and add a glue file that includes all
of them.
Discussion: http://postgr.es/m/CA+Tgmob7_uoYuS2=rVwpVXaRwP-UXz+++saYTC-BCZ42QzSNKQ@mail.gmail.com
This commit does two things:
1) Maintains inactive_since for sync slots whenever the slot is released
just like any other regular slot.
2) Ensures the value is set to the current timestamp during the promotion
of standby to help correctly interpret the time after promotion. We don't
want the slots to appear inactive for a long time after promotion if they
haven't been synchronized recently. This would also avoid the invalidation
of such slots immediately after promotion if tomorrow we have a feature
that invalidates slots based on their inactivity time. Whoever acquires
the slot, i.e., makes the slot active, will reset it to NULL.
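A sketch of how this is visible (assuming the inactive_since and synced
columns of pg_replication_slots):
SELECT slot_name, synced, inactive_since FROM pg_replication_slots;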
Author: Bharath Rupireddy
Reviewed-by: Bertrand Drouvot, Amit Kapila, Shveta Malik, Masahiko Sawada
Discussion: https://postgr.es/m/CAA4eK1KrPGwfZV9LYGidjxHeW+rxJ=E2ThjXvwRGLO=iLNuo=Q@mail.gmail.com
Discussion: https://postgr.es/m/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com
Discussion: https://postgr.es/m/CA+Tgmob_Ta-t2ty8QrKHBGnNLrf4ZYcwhGHGFsuUoFrAEDw4sA@mail.gmail.com
JSON_TABLE() allows JSON data to be converted into a relational view
and thus used, for example, in a FROM clause, like other tabular
data. Data to show in the view is selected from a source JSON object
using a JSON path expression to get a sequence of JSON objects that's
called a "row pattern", which becomes the source to compute the
SQL/JSON values that populate the view's output columns. Column
values themselves are computed using JSON path expressions applied to
each of the JSON objects comprising the "row pattern", for which the
SQL/JSON query functions added in 6185c9737c are used.
To implement JSON_TABLE() as a table function, this augments the
TableFunc and TableFuncScanState nodes that are currently used to
support XMLTABLE() with some JSON_TABLE()-specific fields.
Note that the JSON_TABLE() spec includes NESTED COLUMNS and PLAN
clauses, which are required to provide more flexibility to extract
data out of nested JSON objects, but they are not implemented here
to keep this commit of manageable size.
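A minimal usage sketch (the data and column names are made up):

    SELECT jt.*
    FROM JSON_TABLE(
           jsonb '[{"a": 10, "b": [1, 2]}, {"a": 20, "b": [3, 4]}]',
           '$[*]'
           COLUMNS (id FOR ORDINALITY,
                    a int PATH '$.a',
                    b jsonb PATH '$.b')
         ) AS jt;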
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Jian He <jian.universality@gmail.com>
Reviewers have included (in no particular order):
Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup,
Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson,
Justin Pryzby, Álvaro Herrera, Jian He
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
EXPLAIN (ANALYZE, SERIALIZE) allows collection of statistics about
the volume of data emitted by a query, as well as the time taken
to convert the data to the on-the-wire format. Previously there
was no way to investigate this without actually sending the data
to the client, in which case network transmission costs might
swamp what you wanted to see. In particular this feature allows
investigating the costs of de-TOASTing compressed or out-of-line
data during formatting.
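For example (the table name is hypothetical):

    EXPLAIN (ANALYZE, SERIALIZE) SELECT * FROM big_table;
    -- or measure conversion to the binary wire format:
    EXPLAIN (ANALYZE, SERIALIZE BINARY) SELECT * FROM big_table;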
Stepan Rutz and Matthias van de Meent,
reviewed by Tomas Vondra and myself
Discussion: https://postgr.es/m/ca0adb0e-fa4e-c37e-1cd7-91170b18cae1@gmx.de
Presently, the archiver process restarts when an archive callback
ERRORs. To avoid this, archive module authors can use sigsetjmp(),
manage a memory context, etc., but that requires a lot of extra
code that will likely look roughly the same between modules. This
commit adds basic archive callback ERROR handling to pgarch.c so
that module authors won't ordinarily need to worry about this.
While this built-in handler attempts to clean up anything that an
archive module could conceivably have left behind, it is possible
that some modules are doing unexpected things that require
additional cleanup. Module authors should be sure to do any extra
required cleanup in a PG_CATCH block within the archiving callback.
The archiving callback is now called in a short-lived memory
context that the archiver process resets between invocations. If a
module requires longer-lived storage, it must maintain its own
memory context.
Thanks to these changes, the basic_archive module can be greatly
simplified.
Suggested-by: Andres Freund
Reviewed-by: Andres Freund, Yong Li
Discussion: https://postgr.es/m/20230217215624.GA3131134%40nathanxps13
pg_wal_replay_wait() is to be used on a standby and waits for the
specified WAL location to be replayed before starting the transaction.
This is useful when the user makes some data changes on the primary and
needs a guarantee of seeing those changes on the standby.
The queue of waiters is stored in a shared memory array sorted by LSN.
During WAL replay, waiters whose LSNs have already been replayed are
deleted from the shared memory array and woken up by setting their
latches.
pg_wal_replay_wait() needs to wait without any snapshot held. Otherwise,
the snapshot could prevent the replay of WAL records implying a kind of
self-deadlock. This is why it is only possible to implement
pg_wal_replay_wait() as a procedure working in a non-atomic context,
not a function.
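A usage sketch on a standby (the LSN is a placeholder; the optional
second argument, a timeout in milliseconds, is assumed here):

    CALL pg_wal_replay_wait('0/306EE20');
    CALL pg_wal_replay_wait('0/306EE20', 1000);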
Catversion is bumped.
Discussion: https://postgr.es/m/eb12f9b03851bb2583adab5df9579b4b%40postgrespro.ru
Author: Kartyshov Ivan, Alexander Korotkov
Reviewed-by: Michael Paquier, Peter Eisentraut, Dilip Kumar, Amit Kapila
Reviewed-by: Alexander Lakhin, Bharath Rupireddy, Euler Taveira
This is useful when connecting to a database asynchronously via
PQconnectStart(), since it handles deciding between poll() and
select(), and some of the required boilerplate.
Tristan Partin, reviewed by Gurjeet Singh, Heikki Linnakangas, Jelte
Fennema-Nio, and me.
Discussion: http://postgr.es/m/D08WWCPVHKHN.3QELIKZJ2D9RZ@neon.tech
Break ReadBuffer() up into two steps, StartReadBuffers() and
WaitReadBuffers(). This gives us two main advantages:
1. Multiple consecutive blocks can be read with one system call.
2. Advice (hints of future reads) can optionally be issued to the
kernel ahead of time.
The traditional ReadBuffer() function is now implemented in terms of
those functions, to avoid duplication.
A new GUC io_combine_limit is defined, and the functions for limiting
per-backend pin counts are made into public APIs. Those are provided
for use by callers of StartReadBuffers(), when deciding how many buffers
to read at once. The following commit will add a higher level mechanism
for doing that automatically with a practical interface.
With some more infrastructure in later work, StartReadBuffers() could
be extended to start real asynchronous I/O instead of just issuing
advice and leaving WaitReadBuffers() to do the work synchronously.
Author: Thomas Munro <thomas.munro@gmail.com>
Author: Andres Freund <andres@anarazel.de> (some optimization tweaks)
Reviewed-by: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Reviewed-by: Nazir Bilal Yavuz <byavuz81@gmail.com>
Reviewed-by: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Tested-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://postgr.es/m/CA+hUKGJkOiOCa+mag4BF+zHo7qo=o9CFheB8=g6uT5TUm2gkvA@mail.gmail.com
Previously, we used a simple array for storing dead tuple IDs during
lazy vacuum, which had a number of problems:
* The array used a single allocation and so was limited to 1GB.
* The allocation was pessimistically sized according to table size.
* Lookup with binary search was slow because of poor CPU cache and
branch prediction behavior.
This commit replaces that array with the TID store from commit
30e144287a.
Since the backing radix tree makes small allocations as needed, the
1GB limit is now gone. Further, the total memory used is now often
smaller by an order of magnitude or more, depending on the
distribution of blocks and offsets. These two features should make
multiple rounds of heap scanning and index cleanup an extremely rare
event. TID lookup during index cleanup is also several times faster,
even more so when index order is correlated with heap tuple order.
Since there is no longer a predictable relationship between the number
of dead tuples vacuumed and the space taken up by their TIDs, the
number of tuples no longer provides any meaningful insights for users,
nor is the maximum number predictable. For that reason this commit
also changes to byte-based progress reporting, with the relevant
columns of pg_stat_progress_vacuum renamed accordingly to
max_dead_tuple_bytes and dead_tuple_bytes.
For parallel vacuum, both the TID store and supplemental information
specific to vacuum are shared among the parallel vacuum workers. As
with the previous array, we don't take any locks on TidStore during
parallel vacuum since writes are still only done by the leader
process.
Bump catalog version.
Reviewed-by: John Naylor, (in an earlier version) Dilip Kumar
Discussion: https://postgr.es/m/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com
This patch allows pg_restore to wrap its commands into transaction
blocks, somewhat like --single-transaction, except that we commit
and start a new block after every N objects. Using this mode
with a size limit of 1000 or so objects greatly reduces the number
of transactions consumed by the restore, while preventing any
one transaction from taking enough locks to overrun the receiving
server's shared lock table.
(A value of 1000 works well with the default lock table size of
around 6400 locks. Higher --transaction-size values can be used
if one has increased the receiving server's lock table size.)
Excessive consumption of XIDs has been reported as a problem for
pg_upgrade in particular, but it could be bad for any restore; and the
change also reduces the number of fsyncs and amount of WAL generated,
so it should provide speed benefits too.
This patch does not try to make parallel workers batch the SQL
commands they issue. The trouble with doing that is that other
workers may need to see the objects a worker creates right away.
Possibly this can be improved later.
In this patch I have hard-wired pg_upgrade to use a transaction size
of 1000 divided by the number of parallel restore jobs allowed
(without that, we'd still be at risk of overrunning the shared lock
table). Perhaps there would be value in adding another pg_upgrade
option to allow user control of that, but I'm unsure that it's worth
the trouble; I think few users would use it, and any who did would see
not that much benefit compared to the default.
Patch by me, but the original idea to batch SQL commands during
restore is due to Robins Tharakan.
Discussion: https://postgr.es/m/a9f9376f1c3343a6bb319dce294e20ac@EX13D05UWC001.ant.amazon.com
This commit adds a new COPY option LOG_VERBOSITY, which controls the
amount of messages emitted during processing. Valid values are
'default' and 'verbose'.
This is currently used in COPY FROM when ON_ERROR option is set to
ignore. If 'verbose' is specified, a NOTICE message is emitted for
each discarded row, providing additional information such as line
number, column name, and the malformed value. This helps users
identify problematic rows that failed to load.
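For example (the file path and table are hypothetical):

    COPY mytable FROM '/tmp/data.csv'
      WITH (FORMAT csv, ON_ERROR ignore, LOG_VERBOSITY verbose);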
Author: Bharath Rupireddy
Reviewed-by: Michael Paquier, Atsushi Torikoshi, Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/CALj2ACUk700cYhx1ATRQyRw-fBM%2BaRo6auRAitKGff7XNmYfqQ%40mail.gmail.com
This SQL-callable function behaves much like our internal utility
function getBaseType(), except it returns NULL rather than failing for
an invalid type OID. (That behavior is modeled on our experience with
other catalog-inquiry functions such as the ACL checking functions.)
The key advantage over doing a join to pg_type is that it will loop
as needed to find the bottom base type of a nest of domains.
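A quick illustration (the domain names are made up):

    CREATE DOMAIN us_postal_code AS text CHECK (VALUE ~ '^\d{5}$');
    CREATE DOMAIN tracked_code AS us_postal_code;
    SELECT pg_basetype('tracked_code'::regtype);  -- returns text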
Steve Chavez, reviewed by jian he and others
Discussion: https://postgr.es/m/CAGRrpzZSX8j=MQcbCSEisFA=ic=K3bknVfnFjAv1diVJxFHJvg@mail.gmail.com
This allows MERGE commands to include WHEN NOT MATCHED BY SOURCE
actions, which operate on rows that exist in the target relation, but
not in the data source. These actions can execute UPDATE, DELETE, or
DO NOTHING sub-commands.
This is in contrast to already-supported WHEN NOT MATCHED actions,
which operate on rows that exist in the data source, but not in the
target relation. To make this distinction clearer, such actions may
now be written as WHEN NOT MATCHED BY TARGET.
Writing WHEN NOT MATCHED without specifying BY SOURCE or BY TARGET is
equivalent to writing WHEN NOT MATCHED BY TARGET.
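For example, to fully synchronize a target table with a source (the
table and column names are illustrative):

    MERGE INTO target t
    USING source s ON t.id = s.id
    WHEN MATCHED THEN
      UPDATE SET val = s.val
    WHEN NOT MATCHED BY TARGET THEN
      INSERT (id, val) VALUES (s.id, s.val)
    WHEN NOT MATCHED BY SOURCE THEN
      DELETE;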
Dean Rasheed, reviewed by Alvaro Herrera, Ted Yu and Vik Fearing.
Discussion: https://postgr.es/m/CAEZATCWqnKGc57Y_JanUBHQXNKcXd7r=0R4NEZUVwP+syRkWbA@mail.gmail.com
This is marked PGC_SIGHUP, so it can only be set in a configuration
file, not anywhere else; and it is also marked GUC_DISALLOW_IN_AUTO_FILE,
so it can't be set using ALTER SYSTEM. When set to false, the
ALTER SYSTEM command is disallowed.
There was considerable concern that this would be misinterpreted as
a security feature, which it is not, because a determined superuser
has various ways of bypassing it. Hence, a lot of work has gone into
wordsmithing the documentation, in the hopes of avoiding any such
confusion.
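For illustration, assuming the GUC in question is allow_alter_system:

    -- with allow_alter_system = off in postgresql.conf:
    ALTER SYSTEM SET work_mem = '64MB';
    -- fails with an error along the lines of:
    -- ERROR:  ALTER SYSTEM is not allowed in this environment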
Jelte Fennema-Nio and Gabriele Bartolini, with wording suggestions
for the documentation from many others.
Discussion: http://postgr.es/m/CA%2BVUV5rEKt2%2BCdC_KUaPoihMu%2Bi5ChT4WVNTr4CD5-xXZUfuQw%40mail.gmail.com
Allow use of BeginInternalSubTransaction() in parallel mode, so long
as the subtransaction doesn't attempt to acquire an XID or increment
the command counter. Given those restrictions, the other parallel
processes don't need to know about the subtransaction at all, so
this should be safe. The benefit is that it allows subtransactions
intended for error recovery, such as pl/pgsql exception blocks,
to be used in PARALLEL SAFE functions.
Another reason for doing this is that the API of
BeginInternalSubTransaction() doesn't allow reporting failure.
pl/python for one, and perhaps other PLs, copes very poorly with an
error longjmp out of BeginInternalSubTransaction(). The headline
feature of this patch removes the only easily-triggerable failure
case within that function. There remain some resource-exhaustion
and similar cases, which we now deal with by promoting them to FATAL
errors, so that callers need not try to clean up. (It is likely
that such errors would leave us with corrupted transaction state
inside xact.c, making recovery difficult if not impossible anyway.)
Although this work started because of a report of a pl/python crash,
we're not going to do anything about that in the back branches.
Back-patching this particular fix is obviously not very wise.
While we could contemplate some narrower band-aid, pl/python is
already an untrusted language, so it seems okay to classify this
as a "so don't do that" case.
Patch by me, per report from Hao Zhang. Thanks to Robert Haas for
review.
Discussion: https://postgr.es/m/CALY6Dr-2yLVeVPhNMhuBnRgOZo1UjoTETgtKBx1B2gUi8yy+3g@mail.gmail.com
This adds 3 new variants of the random() function:
random(min integer, max integer) returns integer
random(min bigint, max bigint) returns bigint
random(min numeric, max numeric) returns numeric
Each returns a random number x in the range min <= x <= max.
For the numeric function, the number of digits after the decimal point
is equal to the number of digits that "min" or "max" has after the
decimal point, whichever has more.
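For instance (a sketch of the decimal-places rule):

    SELECT random(1, 6);          -- integer in [1, 6]
    SELECT random(-0.5, 0.5);     -- numeric with one decimal place
    SELECT random(1.000, 6.999);  -- numeric with three decimal places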
The main entry points for these functions are in a new C source file.
The existing random(), random_normal(), and setseed() functions are
moved there too, so that they can all share the same PRNG state, which
is kept private to that file.
Dean Rasheed, reviewed by Jian He, David Zhang, Aleksander Alekseev,
and Tomas Vondra.
Discussion: https://postgr.es/m/CAEZATCV89Vxuq93xQdmc0t-0Y2zeeNQTdsjbmV7dyFBPykbV4Q@mail.gmail.com
Many other utilities use -d to specify the database to use, but
pgbench uses it to enable debug mode. This is causing some users
to accidentally enable it. This commit changes -d to accept the
database name and introduces --dbname. Debug mode can still be
enabled with --debug. This is a backward-incompatible change, but
it has been judged to be worth the trade-off, i.e., some scripts
that use pgbench will need to be updated.
Author: Greg Sabino Mullane
Reviewed-by: Tomas Vondra, Euler Taveira, Alvaro Herrera, David Christensen
Discussion: https://postgr.es/m/CAKAnmmLjAzwVtb%3DVEaeuCtnmOLpzkJ1uJ_XiQ362YdD9B72HSg%40mail.gmail.com
It's now possible to specify a table access method via
CREATE TABLE ... USING for a partitioned table, as well as change it
with ALTER TABLE ... SET ACCESS METHOD. Specifying an AM for a partitioned
table lets the value be used for all future partitions created under it,
closely mirroring the behavior of the TABLESPACE option for partitioned
tables. Existing partitions are not modified.
For a partitioned table with no AM specified, any new partitions are
created with the default_table_access_method.
Also add ALTER TABLE ... SET ACCESS METHOD DEFAULT, which reverts to the
original state of using the default for new partitions.
The relcache of partitioned tables is not changed: rd_tableam is not
set, even if a partitioned table has a relam set.
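For example (the table names are illustrative):

    CREATE TABLE measurements (ts timestamptz, v int)
      PARTITION BY RANGE (ts) USING heap;
    ALTER TABLE measurements SET ACCESS METHOD heap;
    CREATE TABLE measurements_2024 PARTITION OF measurements
      FOR VALUES FROM ('2024-01-01') TO ('2025-01-01');
    -- the new partition uses the AM recorded on the parent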
Author: Justin Pryzby <pryzby@telsasoft.com>
Author: Soumyadeep Chakraborty <soumyadeep2007@gmail.com>
Author: Michaël Paquier <michael@paquier.xyz>
Reviewed-by: The authors themselves
Discussion: https://postgr.es/m/CAE-ML+9zM4wJCGCBGv01k96qQ3gFv4WFcFy=zqPHKeaEFwwv6A@mail.gmail.com
Discussion: https://postgr.es/m/20210308010707.GA29832%40telsasoft.com
The documentation for PGTYPESnumeric_to_long only mentioned errno
being set to indicate overflow, but the code also sets errno when
underflow happens.
Reported-by: Aidar Imamov <a.imamov@postgrespro.ru>
Discussion: https://postgr.es/m/eebf0ad50ad4321d65d2d64dd6b7f17d@postgrespro.ru
It must be run on the target server and should be able to connect to
the source server (publisher) and the target server (subscriber). All
tables in the specified database(s) are included in the logical
replication setup. A pair of publication and subscription objects are
created for each database.
The main advantage of pg_createsubscriber over the common logical
replication setup is that it skips the initial data copy. It also
reduces the catchup phase.
Some prerequisites must be met to successfully run it; these are
basically the logical replication requirements. It starts by creating a
publication using FOR ALL TABLES and a replication slot for each
specified database. It then writes recovery parameters into the target
data directory and starts the target server, specifying the LSN of the
last replication slot (replication start point) up to which recovery
will proceed, and waits until the target server is promoted. Next, it
creates one subscription per specified database (using the publication
and replication slot created in the previous steps) on the target
server, sets the replication progress to the replication start point
for each subscription, and enables the subscription for each specified
database on the target server. Finally, it changes the system
identifier on the target server.
Author: Euler Taveira <euler.taveira@enterprisedb.com>
Reviewed-by: Hayato Kuroda <kuroda.hayato@fujitsu.com>
Reviewed-by: Amit Kapila <amit.kapila16@gmail.com>
Reviewed-by: Shlok Kyal <shlok.kyal.oss@gmail.com>
Reviewed-by: Vignesh C <vignesh21@gmail.com>
Reviewed-by: Shubham Khanna <khannashubham1197@gmail.com>
Reviewed-by: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/5ac50071-f2ed-4ace-a8fd-b892cffd33eb@www.fastmail.com
This commit adds a new property called last_inactive_time for slots. It is
set to 0 whenever a slot is made active/acquired and set to the current
timestamp whenever the slot is inactive/released or restored from the disk.
Note that we don't set the last_inactive_time for the slots currently being
synced from the primary to the standby because such slots are typically
inactive as decoding is not allowed on those.
The 'last_inactive_time' will be useful on production servers to debug and
analyze inactive replication slots. It will also help to know the lifetime
of a replication slot - one can know how long a streaming standby, logical
subscriber, or replication slot consumer is down.
The 'last_inactive_time' will also be useful to implement inactive
timeout-based replication slot invalidation in a future commit.
Author: Bharath Rupireddy
Reviewed-by: Bertrand Drouvot, Amit Kapila, Shveta Malik
Discussion: https://www.postgresql.org/message-id/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com
Straightforward index-level REINDEX is not supported with multiple
jobs, as we cannot control the concurrent processing of multiple
indexes that depend on the same relation. Instead, we dedicate the
whole table to a certain reindex job. Thus, if the indexes in the list
belong to different tables, that gives us a fair level of parallelism.
This commit teaches get_parallel_object_list() to fetch table names for
indexes in the case of index-level REINDEX. The same tables are grouped
together in the output order, and the list of indexes is also rebuilt to
match that order. Later, during processing of that list, we push indexes
belonging to the same table into the same job.
Discussion: https://postgr.es/m/CACG%3DezZU_VwDi-1PN8RUSE6mcYG%2BYx1NH_rJO4%2BKe-mKqLp%3DNw%40mail.gmail.com
Author: Maxim Orlov, Svetlana Derevyanko, Alexander Korotkov
Reviewed-by: Michael Paquier
Since commit 3d14e171e9, SET ROLE has required the current session
user to have membership with the SET option in the target role, but
the SET ROLE documentation only mentions the membership
requirement. This commit adds this important detail to the SET
ROLE page.
Reviewed-by: Robert Haas
Discussion: https://postgr.es/m/CA%2BRLCQysHtME0znk2KUMJN343ksboSRQSU-hCnOjesX6VK300Q%40mail.gmail.com
Backpatch-through: 16
Add PERIOD clause to foreign key constraint definitions. This is
supported for range and multirange types. Temporal foreign keys check
for range containment instead of equality.
This feature matches the behavior of the SQL standard temporal foreign
keys, but it works on PostgreSQL's native ranges instead of SQL's
"periods", which don't exist in PostgreSQL (yet).
Reference actions ON {UPDATE,DELETE} {CASCADE,SET NULL,SET DEFAULT}
are not supported yet.
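A usage sketch of the syntax introduced here (the schema is made up):

    CREATE TABLE rooms (
      id int,
      valid_at daterange,
      PRIMARY KEY (id, valid_at WITHOUT OVERLAPS)
    );
    CREATE TABLE reservations (
      room_id int,
      valid_at daterange,
      FOREIGN KEY (room_id, PERIOD valid_at)
        REFERENCES rooms (id, PERIOD valid_at)
    );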
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: jian he <jian.universality@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
This adds the X509 attributes notBefore and notAfter to sslinfo
as well as pg_stat_ssl to allow verifying and identifying the
validity period of the current client certificate. OpenSSL has
APIs for extracting notAfter and notBefore, but they are only
supported in recent versions so we have to calculate the dates
by hand in order to make this work for the older versions of
OpenSSL that we still support.
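For example, assuming the sslinfo functions are named
ssl_client_get_notbefore()/ssl_client_get_notafter() and the
pg_stat_ssl columns not_before/not_after:

    SELECT ssl_client_get_notbefore(), ssl_client_get_notafter();
    SELECT pid, not_before, not_after FROM pg_stat_ssl;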
Original patch by Cary Huang with additional hacking by Jacob
and myself.
Author: Cary Huang <cary.huang@highgo.ca>
Co-author: Jacob Champion <jacob.champion@enterprisedb.com>
Co-author: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://postgr.es/m/182b8565486.10af1a86f158715.2387262617218380588@highgo.ca
Till now, the reason for replication slot invalidation was not tracked
directly in pg_replication_slots. A recent commit 007693f2a3 added
'conflict_reason' to show the reasons for slot conflict/invalidation,
but only for logical slots.
This commit adds a new column 'invalidation_reason' to show
invalidation reasons for both physical and logical slots. It also turns
the 'conflict_reason' text column into a 'conflicting' boolean column
(effectively reverting commit 007693f2a3). The 'conflicting' column is
true for the invalidation reasons 'rows_removed' and
'wal_level_insufficient', because those make the slot conflict with
recovery. When 'conflicting' is true,
one can now look at the new 'invalidation_reason' column for the reason
for the logical slot's conflict with recovery.
The new 'invalidation_reason' column will also be useful for tracking
other invalidation reasons in future commits.
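For example (illustrative):

    SELECT slot_name, conflicting, invalidation_reason
    FROM pg_replication_slots;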
Author: Bharath Rupireddy
Reviewed-by: Bertrand Drouvot, Amit Kapila, Shveta Malik
Discussion: https://www.postgresql.org/message-id/ZfR7HuzFEswakt/a%40ip-10-97-1-34.eu-west-3.compute.internal
Discussion: https://www.postgresql.org/message-id/CALj2ACW4aUe-_uFQOjdWCEN-xXoLGhmvRFnL8SNw_TZ5nJe+aw@mail.gmail.com
The old text claims that HOT completely removes old row versions.
It was unclear whether it just meant the tuples themselves, or the
tuples together with their line pointers. If it meant the former,
it was wrong because we can remove dead row versions even when no
HOT updates have occurred, so it's not describing a benefit of HOT.
If it meant the latter, it was wrong because HOT doesn't allow
reclaiming the root tuple's line pointer.
This section does seem like it's intended to be more of an informal
introduction to HOT than a precise technical description of every
detail of how it works, but we still don't want it to say things that
are just not true, so update the text enough to avoid that.
Patch by me, reviewed by James Coleman (although he would have
preferred more extensive changes) and Shubham Khanna.
Discussion: http://postgr.es/m/CA+TgmobH6DPmR-u--Xgeg8cYUwhDhypNsv38nDrAJyf_xno=TQ@mail.gmail.com
This introduces the following SQL/JSON functions for querying JSON
data using jsonpath expressions:
JSON_EXISTS(), which can be used to apply a jsonpath expression to a
JSON value to check if it yields any values.
JSON_QUERY(), which can be used to apply a jsonpath expression to
a JSON value to get a JSON object, an array, or a string. There are
various options to control whether the multi-value result uses array
wrappers and whether singleton scalar strings are quoted or not.
JSON_VALUE(), which can be used to apply a jsonpath expression to a
JSON value to return a single scalar value, producing an error if
multiple values are matched.
Both JSON_VALUE() and JSON_QUERY() functions have options for
handling EMPTY and ERROR conditions, which can be used to specify
the behavior when no values are matched and when an error occurs
during jsonpath evaluation, respectively.
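A few usage sketches:

    SELECT JSON_EXISTS(jsonb '{"a": [1, 2, 3]}', '$.a[2]');
    SELECT JSON_QUERY(jsonb '[1, [2, 3], null]', '$[*]' WITH WRAPPER);
    SELECT JSON_VALUE(jsonb '{"a": 1}', '$.a' RETURNING int);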
Author: Nikita Glukhov <n.gluhov@postgrespro.ru>
Author: Teodor Sigaev <teodor@sigaev.ru>
Author: Oleg Bartunov <obartunov@gmail.com>
Author: Alexander Korotkov <aekorotkov@gmail.com>
Author: Andrew Dunstan <andrew@dunslane.net>
Author: Amit Langote <amitlangote09@gmail.com>
Author: Peter Eisentraut <peter@eisentraut.org>
Author: Jian He <jian.universality@gmail.com>
Reviewers have included (in no particular order):
Andres Freund, Alexander Korotkov, Pavel Stehule, Andrew Alsup,
Erik Rijkers, Zihong Yu, Himanshu Upadhyaya, Daniel Gustafsson,
Justin Pryzby, Álvaro Herrera, Jian He, Anton A. Melnikov,
Nikita Malakhov, Peter Eisentraut, Tomas Vondra
Discussion: https://postgr.es/m/cd0bb935-0158-78a7-08b5-904886deac4b@postgrespro.ru
Discussion: https://postgr.es/m/20220616233130.rparivafipt6doj3@alap3.anarazel.de
Discussion: https://postgr.es/m/abd9b83b-aa66-f230-3d6d-734817f0995d%40postgresql.org
Discussion: https://postgr.es/m/CA+HiwqHROpf9e644D8BRqYvaAPmgBZVup-xKMDPk-nd4EpgzHw@mail.gmail.com
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
Commit cca97ce6a6 allowed dbname in the pg_basebackup connection
string, and in this commit we allow it to be written to
postgresql.auto.conf when the -R option is used. The database name in
the connection string will be used by logical replication slot
synchronization on the standby.
The dbname will be recorded only if specified explicitly in the connection
string or environment variable.
Masahiko Sawada hasn't reviewed the code in detail but endorsed the idea.
Author: Vignesh C, Kuroda Hayato
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAB8KJ=hdKdg+UeXhReeHpHA6N6v3e0qFF+ZsPFHk9_ThWKf=2A@mail.gmail.com
In combination with to_regtype, this allows converting a string to
the "canonicalized" form emitted by format_type. That usage requires
parsing the string twice, which is slightly annoying but not really
too expensive. We considered alternatives such as returning a record
type, but that way was notationally uglier than this, and possibly
less flexible.
Like to_regtype(), we'd rather that this return NULL for any bad
input, but the underlying type-parsing logic isn't yet capable of
not throwing syntax errors. Adjust the documentation for both
functions to point that out.
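For example, assuming the new function is to_regtypemod():

    SELECT format_type(to_regtype('varchar(32)'),
                       to_regtypemod('varchar(32)'));
    -- character varying(32)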
In passing, fix up a couple of nearby entries in the System Catalog
Information Functions table that had not gotten the word about our
since-v13 convention for displaying function usage examples.
David Wheeler and Erik Wienhold, reviewed by Pavel Stehule, Jim Jones,
and others.
Discussion: https://postgr.es/m/DF2324CA-2673-4ABE-B382-26B5770B6AA3@justatheory.com
The builtin C.UTF-8 locale has similar semantics to the libc locale of
the same name. That is, code point sort order (fast, memcmp-based)
combined with Unicode semantics for character operations such as
pattern matching, regular expressions, and
LOWER()/INITCAP()/UPPER(). The character semantics are based on
Unicode simple case mappings.
The builtin provider's C.UTF-8 offers several important advantages
over libc:
* faster sorting -- benefits from additional optimizations such as
abbreviated keys and varstrfastcmp_c
* faster case conversion, e.g. LOWER(), at least compared with some
libc implementations
* available on all platforms with identical semantics, and the
semantics are stable, testable, and documentable within a given
Postgres major version
Being based on memcmp, the builtin C.UTF-8 locale does not offer
natural language sort order. But it is an improvement for most use
cases that might otherwise use libc's "C.UTF-8" locale, as well as
many use cases that use libc's "C" locale.
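For example (the database name is made up):

    CREATE DATABASE test
      LOCALE_PROVIDER builtin
      BUILTIN_LOCALE 'C.UTF-8'
      TEMPLATE template0;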
Discussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com
Reviewed-by: Daniel Vérité, Peter Eisentraut, Jeremy Schneider
Historically we've printed SubPlan expression nodes as "(SubPlan N)",
which is pretty uninformative. Trying to reproduce the original SQL
for the subquery is still as impractical as before, and would be
mighty verbose as well. However, we can still do better than that.
Displaying the "testexpr" when present, and adding a keyword to
indicate the SubLinkType, goes a long way toward showing what's
really going on.
In addition, this patch gets rid of EXPLAIN's use of "$n" to represent
subplan and initplan output Params. Instead we now print "(SubPlan
N).colX" or "(InitPlan N).colX" to represent the X'th output column
of that subplan. This eliminates confusion with the use of "$n" to
represent PARAM_EXTERN Params, and it's useful for the first part of
this change because it eliminates needing some other indication of
which subplan is referenced by a SubPlan that has a testexpr.
In passing, this adds simple regression test coverage of the
ROWCOMPARE_SUBLINK code paths, which were entirely unburdened
by testing before.
Tom Lane and Dean Rasheed, reviewed by Aleksander Alekseev.
Thanks to Chantal Keller for raising the question of whether
this area couldn't be improved.
Discussion: https://postgr.es/m/2838538.1705692747@sss.pgh.pa.us
Based on comments from Peter Eisentraut.
* Document CREATE DATABASE ... BUILTIN_LOCALE.
* Determine required encoding based on locale name for CREATE
COLLATION. Use -1 for "C" (requires catversion bump).
* initdb output fixups.
* Make ctype_is_c a constant true for now.
* Fixups to ICU 010_create_database.pl test.
Discussion: https://postgr.es/m/4135cf11-206d-40ed-96c0-9363c1232379@eisentraut.org
This allows a RETURNING clause to be appended to a MERGE query, to
return values based on each row inserted, updated, or deleted. As with
plain INSERT, UPDATE, and DELETE commands, the returned values are
based on the new contents of the target table for INSERT and UPDATE
actions, and on its old contents for DELETE actions. Values from the
source relation may also be returned.
As with INSERT/UPDATE/DELETE, the output of MERGE ... RETURNING may be
used as the source relation for other operations such as WITH queries
and COPY commands.
Additionally, a special function merge_action() is provided, which
returns 'INSERT', 'UPDATE', or 'DELETE', depending on the action
executed for each row. The merge_action() function can be used
anywhere in the RETURNING list, including in arbitrary expressions and
subqueries, but it is an error to use it anywhere outside of a MERGE
query's RETURNING list.
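For example (the table names are illustrative):

    MERGE INTO products p
    USING stock s ON p.id = s.id
    WHEN MATCHED THEN UPDATE SET qty = s.qty
    WHEN NOT MATCHED THEN INSERT (id, qty) VALUES (s.id, s.qty)
    RETURNING merge_action(), p.id, p.qty;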
Dean Rasheed, reviewed by Isaac Morland, Vik Fearing, Alvaro Herrera,
Gurjeet Singh, Jian He, Jeff Davis, Merlin Moncure, Peter Eisentraut,
and Wolfgang Walther.
Discussion: http://postgr.es/m/CAEZATCWePEGQR5LBn-vD6SfeLZafzEm2Qy_L_Oky2=qw2w3Pzg@mail.gmail.com
This function returns the chunk_id of an on-disk TOASTed value. If
the value is un-TOASTed or not on-disk, it returns NULL. This is
useful for identifying which values are actually TOASTed and for
investigating "unexpected chunk number" errors.
Bumps catversion.
Author: Yugo Nagata
Reviewed-by: Jian He
Discussion: https://postgr.es/m/20230329105507.d764497456eeac1ca491b5bd%40sraoss.co.jp
New provider for collations, like "libc" or "icu", but without any
external dependency.
Initially, the only locale supported by the builtin provider is "C",
which is identical to the libc provider's "C" locale. The libc
provider's "C" locale has always been treated as a special case that
uses an internal implementation, without using libc at all -- so the
new builtin provider uses the same implementation.
The builtin provider's locale is independent of the server environment
variables LC_COLLATE and LC_CTYPE. Using the builtin provider, the
database collation locale can be "C" while LC_COLLATE and LC_CTYPE are
set to "en_US", which is impossible with the libc provider.
Offering a new builtin provider clarifies that the semantics of a
collation using this provider will never depend on libc, and makes it
easier to document the behavior.
Discussion: https://postgr.es/m/ab925f69-5f9d-f85e-b87c-bd2a44798659@joeconway.com
Discussion: https://postgr.es/m/dd9261f4-7a98-4565-93ec-336c1c110d90@manitou-mail.org
Discussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com
Reviewed-by: Daniel Vérité, Peter Eisentraut, Jeremy Schneider
pg_stat_checkpointer contains statistics for checkpoints and restartpoints.
Before 12915a58ee, the documentation only talked about checkpoints,
implying that a restartpoint is a variation of a checkpoint.
12915a58ee introduced new separate statistics fields for restartpoints.
This commit explicitly documents the fields that are relevant for both
checkpoints and restartpoints.
Reported-by: Magnus Hagander
Discussion: https://postgr.es/m/CABUevExav5-SR0x%2BG9kBUMV0G8XsvSUfuyyqmYBBJi6VHns6sw%40mail.gmail.com
Reviewed-by: Anton A. Melnikov
Roles with MAINTAIN on a relation may run VACUUM, ANALYZE, REINDEX,
REFRESH MATERIALIZED VIEW, CLUSTER, and LOCK TABLE on the relation.
Roles with privileges of pg_maintain may run those same commands on
all relations.
This was previously committed for v16, but it was reverted in
commit 151c22deee due to concerns about search_path tricks that
could be used to escalate privileges to the table owner. Commits
2af07e2f74, 59825d1639, and c7ea3f4229 resolved these concerns by
restricting search_path when running maintenance commands.
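For example (the role and table names are illustrative):

    GRANT MAINTAIN ON TABLE events TO maint_role;
    -- or, for all relations:
    GRANT pg_maintain TO maint_role;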
Bumps catversion.
Reviewed-by: Jeff Davis
Discussion: https://postgr.es/m/20240305161235.GA3478007%40nathanxps13
Before this patch, if you took a full backup on server A and then
tried to use the backup manifest to take an incremental backup on
server B, it wouldn't know that the manifest was from a different
server and so the incremental backup operation could potentially
complete without error. When you later tried to run pg_combinebackup,
you'd find out that your incremental backup was and always had been
invalid. That's poor timing, because nobody likes finding out about
backup problems only at restore time.
With this patch, you'll get an error when trying to take the (invalid)
incremental backup, which seems a lot nicer.
Amul Sul, revised by me. Review by Michael Paquier.
Discussion: http://postgr.es/m/CA+TgmoYLZzbSAMM3cAjV4Y+iCRZn-bR9H2+Mdz7NdaJFU1Zb5w@mail.gmail.com
In the synopsis, make the syntax for merge_update consistent with the
syntax for a plain UPDATE command. It was missing the optional "ROW"
keyword that can be used in a multi-column assignment, and the option
to assign from a multi-column subquery, both of which have been
supported by MERGE since it was introduced.
In the parameters section for the with_query parameter, mention that
WITH RECURSIVE isn't supported, since this is different from plain
INSERT, UPDATE, and DELETE commands. While at it, move that entry to
the top of the list, for consistency with the other pages.
Back-patch to v15, where MERGE was introduced.
Discussion: https://postgr.es/m/CAEZATCWoQyWkMFfu7JXXQr8dA6%3DgxjhYzgpuBP2oz0QoJTxGWw%40mail.gmail.com
The existing PQcancel API uses blocking IO, which makes PQcancel
impossible to use in an event loop based codebase without blocking the
event loop until the call returns. It also doesn't encrypt the
connection over which the cancel request is sent, even when the original
connection required encryption.
This commit adds a PQcancelConn struct and assorted functions, which
provide a better mechanism for sending cancel requests; in particular,
all the encryption used in the original connection is also used in the
cancel connection. The main entry points are:
- PQcancelCreate creates the PQcancelConn based on the original
connection (but does not establish an actual connection).
- PQcancelStart can be used to initiate a non-blocking cancel request,
using encryption if the original connection did so; it must be pumped
using PQcancelPoll.
- PQcancelReset puts a PQcancelConn back into a state where it can be
reused to send a new cancel request to the same connection.
- PQcancelBlocking is a simpler-to-use blocking API that still uses
encryption.
Additional functions are
- PQcancelStatus, mimics PQstatus;
- PQcancelSocket, mimics PQsocket;
- PQcancelErrorMessage, mimics PQerrorMessage;
- PQcancelFinish, mimics PQfinish.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Reviewed-by: Denis Laxalde <denis.laxalde@dalibo.com>
Discussion: https://postgr.es/m/AM5PR83MB0178D3B31CA1B6EC4A8ECC42F7529@AM5PR83MB0178.EURPRD83.prod.outlook.com
Presently, reindexdb's --table, --schema, --index, and --system
options cannot be used together with --all, i.e., you cannot
specify objects to process in all databases. This commit removes
this unnecessary restriction. Furthermore, it removes the
restriction that --system cannot be used with --table, --schema,
and --index. There is no such restriction for the latter options,
and there is no technical reason to disallow these combinations.
Reviewed-by: Kyotaro Horiguchi, Dean Rasheed
Discussion: https://postgr.es/m/20230628232402.GA1954626%40nathanxps13
Presently, clusterdb's --table option cannot be used together with
--all, i.e., you cannot specify tables to process in all databases.
This commit removes this unnecessary restriction. In passing,
change the synopsis in the documentation to use "[option...]"
instead of "[--verbose | -v]". There are other general-purpose
options (e.g., --quiet and --echo), but the synopsis currently only
lists --verbose.
Reviewed-by: Kyotaro Horiguchi, Dean Rasheed
Discussion: https://postgr.es/m/20230628232402.GA1954626%40nathanxps13
The list of connection statuses that PQstatus might return during an
asynchronous connection attempt was outdated:
1. CONNECTION_SETENV is never returned anymore and is only part of the
enum for backwards compatibility, so remove it from the docs.
2. CONNECTION_CHECK_STANDBY and CONNECTION_GSS_STARTUP were not listed,
so add them.
CONNECTION_NEEDED and CONNECTION_CHECK_TARGET are not listed in the docs
on purpose, since these are internal states that can never be observed
by a caller of PQstatus.
Author: Jelte Fennema-Nio <postgres@jeltef.nl>
Discussion: https://postgr.es/m/CAGECzQRb21spiiykQ48rzz8w+Hcykz+mB2_hxR65D9Qk6nnw=w@mail.gmail.com
Presently, vacuumdb's --table, --schema, and --exclude-schema
options cannot be used together with --all, i.e., you cannot
specify tables or schemas to process in all databases. This commit
removes this unnecessary restriction, thus enabling potentially
useful commands like "vacuumdb --all --schema pg_catalog".
Reviewed-by: Kyotaro Horiguchi, Dean Rasheed
Discussion: https://postgr.es/m/20230628232402.GA1954626%40nathanxps13
We expect the 'two_phase' and 'failover' properties to match between
the slot on the publisher and the corresponding subscription options on
the subscriber. Otherwise, the slot on the publisher may behave
differently from what the subscription's failover option says.
Author: Bertrand Drouvot
Reviewed-by: Peter Smith, Tristen Raab, Amit Kapila
Discussion: https://postgr.es/m/ZbkYrLPhH+RxpZlW@ip-10-97-1-34.eu-west-3.compute.internal
Rename pg_collation.colliculocale to colllocale, and
pg_database.daticulocale to datlocale. These names reflect that the
fields will be useful for the upcoming builtin provider as well, not
just for ICU.
This is purely a rename; no changes to the meaning of the fields.
Discussion: https://postgr.es/m/ff4c2f2f9c8fc7ca27c1c24ae37ecaeaeaff6b53.camel%40j-davis.com
Reviewed-by: Peter Eisentraut
This patch provides a way to ensure that physical standbys that are
potential failover candidates have received and flushed changes before
the primary server makes them visible to subscribers. Doing so
guarantees that the promoted standby server is not lagging behind the
subscribers when a failover is necessary.
The logical walsender now guarantees that all local changes are sent and
flushed to the standby servers corresponding to the replication slots
specified in 'standby_slot_names' before sending those changes to the
subscriber.
Additionally, the SQL functions pg_logical_slot_get_changes,
pg_logical_slot_peek_changes and pg_replication_slot_advance are modified
to ensure that they process changes for failover slots only after physical
slots specified in 'standby_slot_names' have confirmed WAL receipt for those.
Author: Hou Zhijie and Shveta Malik
Reviewed-by: Masahiko Sawada, Peter Smith, Bertrand Drouvot, Ajin Cherian, Nisha Moond, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
This option can be used to switch a relation to use the access method
set by default_table_access_method when running the command.
This has come up when discussing the possibility to support setting
pg_class.relam for partitioned tables (left out here as future work),
while being useful on its own for relations with physical storage as
these must have an access method set.
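For example (the table name is illustrative):

    -- switch to whatever default_table_access_method currently is:
    ALTER TABLE measurements SET ACCESS METHOD DEFAULT;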
Per suggestion from Justin Pryzby.
Author: Michael Paquier
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/ZeCZ89xAVFeOmrQC@pryzbyj2023
The copy_file_range() system call is available on at least Linux and
FreeBSD, and asks the kernel to use efficient ways to copy ranges of a
file. Options available to the kernel include sharing block ranges
(similar to --clone mode), and pushing down block copies to the storage
layer.
For automated testing, see PG_TEST_PG_UPGRADE_MODE. (Perhaps in a later
commit we could consider setting this mode for one of the CI targets.)
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CA%2BhUKGKe7Hb0-UNih8VD5UNZy5-ojxFb3Pr3xSBBL8qj2M2%3DdQ%40mail.gmail.com
pg_constraint.conwithoutoverlaps was recently added to support primary
keys and unique constraints with the WITHOUT OVERLAPS clause. An
upcoming patch provides the foreign-key side of this functionality,
but the syntax there is different and uses the keyword PERIOD. It
would make sense to use the same pg_constraint field for both of
these, but then we should pick a more general name that conveys "this
constraint has a temporal/period-related feature". conperiod works
for that and is nicely compact. Changing this now avoids possibly
having to introduce versioning into clients. Note there are still
some "without overlaps" variables left, which deal specifically with
the parsing of the primary key/unique constraint feature.
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Discussion: https://www.postgresql.org/message-id/flat/CA+renyUApHgSZF9-nd-a0+OPGharLQLO=mDHcY4_qQ0+noCUVg@mail.gmail.com
Presently, if an archive module's check_configured_cb callback
returns false, a generic WARNING message is emitted, which
unfortunately provides no actionable details about the reason why
the module is not configured. This commit introduces a macro that
archive module authors can use to add a DETAIL line to this WARNING
message.
Co-authored-by: Tung Nguyen
Reviewed-by: Daniel Gustafsson, Álvaro Herrera
Discussion: https://postgr.es/m/4109578306242a7cd5661171647e11b2%40oss.nttdata.com
The adminpack extension was only used to support pgAdmin III, which
in turn was declared EOL many years ago. Removing the extension also
allows us to remove functions from core which were only used to
support old versions of adminpack.
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
Discussion: https://postgr.es/m/CALj2ACUmL5TraYBUBqDZBi1C+Re8_=SekqGYqYprj_W8wygQ8w@mail.gmail.com
Commit 5f2e179bd3 missed one place in rules.sgml that should have
mentioned MERGE. Also, be more specific when saying that MERGE doesn't
support rules, since it does support SELECT rules.
The datatype for analyze_sampling had accidentally been set to text
and not string. Backpatch to v16 where analyze_sampling was first
introduced.
Author: Shinya Kato <Shinya11.Kato@oss.nttdata.com>
Reviewed-by: Laurenz Albe <laurenz.albe@cybertec.at>
Discussion: https://postgr.es/m/7fd9166b9fda267411793f39986d7f24@oss.nttdata.com
Backpatch-through: v16
Now that BackendId was just another index into the proc array, it was
redundant with the 0-based proc numbers used in other places. Replace
all usage of backend IDs with proc numbers.
The only place where the term "backend id" remains is in a few pgstat
functions that expose backend IDs at the SQL level. Those IDs are now
in fact 0-based ProcNumbers too, but the documentation still calls
them "backend ids". That term still seems appropriate to describe what
the numbers are, so I let it be.
One user-visible effect is that pg_temp_0 is now a valid temp schema
name, for the backend with ProcNumber 0.
Reviewed-by: Andres Freund
Discussion: https://www.postgresql.org/message-id/8171f1aa-496f-46a6-afc3-c46fe7a9b407@iki.fi
Correct out-of-date text that said the "default" collation is always
based on LC_COLLATE and LC_CTYPE.
Also reformat into a list to make it easier to understand and compare
the available collations, and briefly document the stability
characteristics of each one.
Discussion: https://postgr.es/m/4a69d067374d2f6bfb66f5bfb2ab9a020493d49f.camel@j-davis.com
This allows the target relation of MERGE to be an auto-updatable or
trigger-updatable view, and includes support for WITH CHECK OPTION,
security barrier views, and security invoker views.
A trigger-updatable view must have INSTEAD OF triggers for every type
of action (INSERT, UPDATE, and DELETE) mentioned in the MERGE command.
An auto-updatable view must not have any INSTEAD OF triggers. Mixing
auto-update and trigger-update actions (i.e., having a partial set of
INSTEAD OF triggers) is not supported.
Rule-updatable views are also not supported, since there is no
rewriter support for non-SELECT rules with MERGE operations.
Dean Rasheed, reviewed by Jian He and Alvaro Herrera.
Discussion: https://postgr.es/m/CAEZATCVcB1g0nmxuEc-A+gGB0HnfcGQNGYH7gS=7rq0u0zOBXA@mail.gmail.com
More precisely, what we do here is make the SLRU cache sizes
configurable with new GUCs, so that sites with high concurrency and big
ranges of transactions in flight (resp. multixacts/subtransactions) can
benefit from bigger caches. In order for this to work with good
performance, two additional changes are made:
1. the cache is divided into "banks" (to borrow terminology from CPU
caches), and algorithms such as eviction buffer search only affect
one specific bank. This forestalls the problem that linear searching
for a specific buffer across the whole cache takes too long: we only
have to search the specific bank, whose size is small. This work is
authored by Andrey Borodin.
2. Change the locking regime for the SLRU banks, so that each bank uses
a separate LWLock. This allows for increased scalability. This work
is authored by Dilip Kumar. (A part of this was previously committed as
d172b717c6f4.)
Special care is taken so that the algorithms that can potentially
traverse more than one bank release one bank's lock before acquiring the
next. This should happen rarely, but particularly clog.c's group commit
feature needed code adjustment to cope with this. I (Álvaro) also added
lots of comments to make sure the design is sound.
The new GUCs match the names introduced by bcdfa5f2e2 in the
pg_stat_slru view.
The default values for these parameters are similar to the previous
sizes of each SLRU. commit_ts, clog and subtrans accept value 0, which
means to adjust by dividing shared_buffers by 512 (so 2MB for every 1GB
of shared_buffers), with a cap of 8MB. (A new slru.c function
SimpleLruAutotuneBuffers() was added to support this.) The cap was
previously 1MB for clog, so for sites with more than 512MB of shared
memory the total memory used increases, which is likely a good tradeoff.
However, other SLRUs (notably multixact ones) retain smaller sizes and
don't support a configured value of 0. These values based on
shared_buffers may need to be revisited, but that's an easy change.
There was some resistance to adding these new GUCs: it would be better
to adjust to memory pressure automatically somehow, for example by
stealing memory from shared_buffers (where the caches can grow and
shrink naturally). However, doing that seems to be a much larger
project and one which has made virtually no progress in several years,
and because this is such a pain point for so many users, here we take
the pragmatic approach.
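A configuration sketch (assuming GUC names that follow the
pg_stat_slru entries, e.g. transaction_buffers and
subtransaction_buffers; these settings take effect only after a server
restart):

    ALTER SYSTEM SET transaction_buffers = '32MB';
    ALTER SYSTEM SET subtransaction_buffers = 0;  -- autotune from shared_buffers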
Author: Andrey Borodin <x4mmm@yandex-team.ru>
Author: Dilip Kumar <dilipbalaut@gmail.com>
Reviewed-by: Amul Sul, Gilles Darold, Anastasia Lubennikova,
Ivan Lazarev, Robert Haas, Thomas Munro, Tomas Vondra,
Yura Sokolov, Васильев Дмитрий (Dmitry Vasiliev).
Discussion: https://postgr.es/m/2BEC2B3F-9B61-4C1D-9FB5-5FAB0F05EF86@yandex-team.ru
Discussion: https://postgr.es/m/CAFiTN-vzDvNz=ExGXz6gdyjtzGixKSqs0mKHMmaQ8sOSEFZ33A@mail.gmail.com
There isn't a lot of user demand for AIX support, we have a bunch of
hacks to work around AIX-specific compiler bugs and idiosyncrasies,
and no one has stepped up to the plate to properly maintain it.
Remove support for AIX to get rid of that maintenance overhead. It's
still supported for stable versions.
The acute issue that triggered this decision was that after commit
8af2565248, the AIX buildfarm members have been hitting this
assertion:
TRAP: failed Assert("(uintptr_t) buffer == TYPEALIGN(PG_IO_ALIGN_SIZE, buffer)"), File: "md.c", Line: 472, PID: 2949728
Apparently the "pg_attribute_aligned(a)" attribute doesn't work on AIX
for values larger than PG_IO_ALIGN_SIZE, for a static const variable.
That could be worked around, but we decided to just drop the AIX support
instead.
Discussion: https://www.postgresql.org/message-id/20240224172345.32@rfd.leadboat.com
Reviewed-by: Andres Freund, Noah Misch, Thomas Munro
The new names are intended to match those in an upcoming patch that adds
a few GUCs to configure the SLRU buffer sizes.
Backwards compatibility concern: this changes the accepted names for
function pg_stat_reset_slru(). Since this function recognizes "any
other string" as a request to reset the entry for "other", this means
that calling it with the old names would silently reset "other"
instead of doing nothing or throwing an error.
Reviewed-by: Andrey M. Borodin <x4mmm@yandex-team.ru>
Discussion: https://postgr.es/m/202402261616.dlriae7b6emv@alvherre.pgsql
Add a note about the additional privileges required after the fix in
4989ce7264 (wording per Tom Lane); also change marked-up mentions of
"target_table_name" to be simply "the target table" or the like. Also,
note that "join_condition" is scouted for requisite privileges.
Backpatch to 15.
Discussion: https://postgr.es/m/202402211653.zuh6objy3z72@alvherre.pgsql
By enabling slot synchronization, all the failover logical replication
slots on the primary (assuming configurations are appropriate) are
automatically created on the physical standbys and are synced
periodically. The slot sync worker on the standby server pings the primary
server at regular intervals to get the necessary failover logical slots
information and create/update the slots locally. The slots that no longer
require synchronization are automatically dropped by the worker.
The nap time of the worker is tuned according to the activity on the
primary. The slot sync worker waits for some time before the next
synchronization, with the duration varying based on whether any slots were
updated during the last cycle.
A new parameter sync_replication_slots enables or disables this new
process.
On promotion, the slot sync worker is shut down by the startup process to
drop any temporary slots acquired by the slot sync worker and to prevent
the worker from trying to fetch the failover slots.
Functionality to allow logical walsenders to wait for the physical
standbys will be added in a subsequent commit.
Author: Shveta Malik, Hou Zhijie based on design inputs by Masahiko Sawada and Amit Kapila
Reviewed-by: Masahiko Sawada, Bertrand Drouvot, Peter Smith, Dilip Kumar, Ajin Cherian, Nisha Moond, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
The explanation of interval's behavior in datatype.sgml wasn't wrong
exactly, but it was unclear, partly because it buried the lede about
there being three internal fields. Rearrange and wordsmith for more
clarity.
The discussion of extract() claimed that input of type date was
handled by casting, but actually there's been a separate SQL function
taking date for a very long time. Also, it was mostly silent about
how interval inputs are handled, but there are several field types
for which it seems useful to be specific.
Improve discussion of justify_days()/justify_hours() too.
In passing, remove vertical space in some groups of examples,
as there was little consistency about whether to have such space
or not. (I only did this within the datetime functions section;
there are some related inconsistencies elsewhere.)
Per discussion of bug #18348 from Michael Bondarenko. There
may be some code changes coming out of that discussion too,
but we likely won't back-patch them. This docs-only patch
seems useful to back-patch, though I only carried it back to
v13 because it didn't apply easily in v12.
Discussion: https://postgr.es/m/18348-b097a3587dfde8a4@postgresql.org
A child table can specify a compression or storage method different
from its parents. This was previously an error. (But this was
inconsistently enforced because for example the settings could be
changed later using ALTER TABLE.) This now also allows an explicit
override if multiple parents have different compression or storage
settings, which was previously an error that could not be overridden.
The compression and storage properties remain unchanged in a child
inheriting from parent(s) after its creation, i.e., when using ALTER
TABLE ... INHERIT. (This is not changed.)
Before this change, the error detail would mention the first pair of
conflicting parent compression or storage methods. But with this
change it waits till the child specification is considered, by which
time we may have encountered many such conflicting pairs. Hence the
error detail after this change does not include the conflicting
compression/storage methods. Those can be obtained from parent
definitions if necessary. The code to maintain a list of all
conflicting methods, or even the first conflicting pair, does not seem
worth the convenience it offers. This change is in line with what we
do with conflicting default values.
Before this commit, the specified storage method could be stored in
ColumnDef::storage (CREATE TABLE ... LIKE) or ColumnDef::storage_name
(CREATE TABLE ...). This caused the MergeChildAttribute() and
MergeInheritedAttribute() to ignore a storage method specified in the
child definition since it looked only at ColumnDef::storage. This
commit removes ColumnDef::storage and instead uses
ColumnDef::storage_name to save any storage method specification. This
is similar to how compression method specification is handled.
Author: Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/24656cec-d6ef-4d15-8b5b-e8dfc9c833a7@eisentraut.org
This commit adds a timeout that is expected to be used to prevent
long-running transactions. Any session whose transaction spans longer
than this timeout will be terminated.
However, this timeout is not applied to prepared transactions.
Only transactions with user connections are affected.
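For illustration, assuming the new GUC is transaction_timeout:

    SET transaction_timeout = '5s';
    BEGIN;
    SELECT pg_sleep(10);  -- the session is terminated after ~5 seconds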
Discussion: https://postgr.es/m/CAAhFRxiQsRs2Eq5kCo9nXE3HTugsAAJdSQSmxncivebAxdmBjQ%40mail.gmail.com
Author: Andrey Borodin <amborodin@acm.org>
Author: Japin Li <japinli@hotmail.com>
Author: Junwang Zhao <zhjwpku@gmail.com>
Reviewed-by: Nikolay Samokhvalov <samokhvalov@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Reviewed-by: bt23nguyent <bt23nguyent@oss.nttdata.com>
Reviewed-by: Yuhang Qiu <iamqyh@gmail.com>
Setting the environment variable PG_TEST_INITDB_EXTRA_OPTS passes
extra options to the initdb run by pg_regress or by
PostgreSQL::Test::Cluster's init.
This is useful for a wide variety of purposes, such as running all
tests with checksums enabled, with JIT enabled, or with different
GUC or locale settings. (Not all tests will pass with arbitrary
options, but it is useful to run this against specific test
suites.)
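For example, to run the core regression tests against a cluster
initialized with checksums (illustrative invocation):

    PG_TEST_INITDB_EXTRA_OPTS="--data-checksums" make check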
Reviewed-by: Ian Lawrence Barwick <barwick@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/d4d2ad9f-1c1d-47a1-bb4d-c10a747d4f15%40eisentraut.org
Presently, we rely on each individual signal handler to save the
initial value of errno and then restore it before returning if
needed. This is easily forgotten and, if missed, often goes
undetected for a long time.
In commit 3b00fdba9f, we introduced a wrapper signal handler
function that checks whether MyProcPid matches getpid(). This
commit moves the aforementioned errno restoration code from the
individual signal handlers to the new wrapper handler so that we no
longer need to worry about missing it.
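A simplified sketch of the pattern (names are illustrative; the real
wrapper also performs the MyProcPid check mentioned above):

    static void
    wrapper_handler(SIGNAL_ARGS)
    {
        int save_errno = errno; /* save before running the real handler */

        (*pqsignal_handlers[postgres_signal_arg]) (postgres_signal_arg);

        errno = save_errno;     /* restore, so interrupted code sees
                                 * errno unchanged */
    }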
Reviewed-by: Andres Freund, Noah Misch
Discussion: https://postgr.es/m/20231121212008.GA3742740%40nathanxps13
The pgcrypto docs contained a set of links for useful reading and
technical references. These sets of links were, however, not
actively curated and had accumulated stale content and dead links.
Rather than investing time into maintaining these, this removes them
altogether, since there are plenty of actively maintained resources
online.
Backpatch to all supported versions since these links have been in
the docs for a long time.
Reported-by: Hanefi Onaldi <hanefi.onaldi@microsoft.com>
Reviewed-by: Magnus Hagander <magnus@hagander.net>
Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/170774255387.3279713.2822272755998870925@wrigleys.postgresql.org
Backpatch-through: v12
This commit introduces a new SQL function pg_sync_replication_slots()
which is used to synchronize the logical replication slots from the
primary server to the physical standby so that logical replication can be
resumed after a failover or planned switchover.
A new 'synced' flag is introduced in the pg_replication_slots view,
indicating
whether the slot has been synchronized from the primary server. On a
standby, synced slots cannot be dropped or consumed, and any attempt to
perform logical decoding on them will result in an error.
The logical replication slots on the primary can be synchronized to
the hot standby by using the 'failover' parameter of
pg_create_logical_replication_slot(), or by using the 'failover'
option of CREATE SUBSCRIPTION during slot creation, and then calling
pg_sync_replication_slots() on the standby. For the synchronization
to work, it is mandatory to have a physical replication slot between
the primary and the standby, i.e., 'primary_slot_name' must be
configured on the standby, and 'hot_standby_feedback' must be
enabled on the standby. It is also necessary to specify a valid
'dbname' in 'primary_conninfo'.
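A minimal sketch of the workflow (illustrative; it assumes
primary_slot_name, hot_standby_feedback, and a dbname in
primary_conninfo are already configured):

    -- On the primary: create a slot with the failover property.
    SELECT pg_create_logical_replication_slot('test_slot', 'pgoutput',
                                              failover => true);

    -- On the standby: synchronize, then inspect the synced flag.
    SELECT pg_sync_replication_slots();
    SELECT slot_name, synced FROM pg_replication_slots;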
If a logical slot is invalidated on the primary, then that slot on the
standby is also invalidated.
If a logical slot on the primary is valid but is invalidated on the
standby, then that slot is dropped but will be recreated on the standby in
the next pg_sync_replication_slots() call provided the slot still exists
on the primary server. It is okay to recreate such slots as long as
they are not consumable on the standby (which is currently the
case). This situation may occur for the following reasons:
- The 'max_slot_wal_keep_size' on the standby is insufficient to retain
WAL records from the restart_lsn of the slot.
- 'primary_slot_name' is temporarily reset to null and the physical slot
is removed.
The slot synchronization status on the standby can be monitored using
the 'synced' column of the pg_replication_slots view.
Functionality to automatically synchronize slots via a background
worker, and to allow logical walsenders to wait for the physical
standby, will be added in subsequent commits.
Author: Hou Zhijie, Shveta Malik, Ajin Cherian based on an earlier version by Peter Eisentraut
Reviewed-by: Masahiko Sawada, Bertrand Drouvot, Peter Smith, Dilip Kumar, Nisha Moond, Kuroda Hayato, Amit Kapila
Discussion: https://postgr.es/m/514f6f2f-6833-4539-39f1-96cd1e011f23@enterprisedb.com
The macOS Finder application creates .DS_Store files in directories
when opened, which creates problems for server-side utilities that
expect all files present to be PostgreSQL-specific. Skip these files
when encountered in pg_checksums, pg_rewind and pg_basebackup.
This was extracted from a larger patchset for skipping hidden files
and system files, where the consensus was to just skip these. Since
this is equally likely to happen in every version, backpatch to all
supported versions.
Reported-by: Mark Guertin <markguertin@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Reviewed-by: Tobias Bussmann <t.bussmann@gmx.net>
Discussion: https://postgr.es/m/E258CE50-AB0E-455D-8AAD-BB4FE8F882FB@gmail.com
Backpatch-through: v12