The "fsync" level already flushes drive write caches on Windows (as does
"fdatasync"), so it only confuses matters to have an apparently higher
level that isn't actually different at all.
That leaves "fsync_writethrough" only for macOS, where it actually does
something different.
Reviewed-by: Magnus Hagander <magnus@hagander.net>
Discussion: https://postgr.es/m/CA%2BhUKGJ2CG2SouPv2mca2WCTOJxYumvBARRcKPraFMB6GSEMcA%40mail.gmail.com
The contents of the line whose parsing failed were not reported in the
error message produced by generate-wait_event_types.pl, making it harder
than necessary to debug incorrectly-shaped entries in the file.
Reported-by: Andres Freund
Discussion: https://postgr.es/m/ZK9S3jFEV1X797Ll@paquier.xyz
The double quotes used for the wait event names are not required, as
the values quoted are made of single words. The files generated by
generate-wait_event_types.pl (pgstat_wait_event.c, wait_event_types.h
and wait_event_types.sgml) are exactly the same before and after this
commit, hence the wait event names and the enum elements have the same
names as before.
Discussion: https://postgr.es/m/ZK9S3jFEV1X797Ll@paquier.xyz
Until now, when DROP DATABASE got interrupted at the wrong moment, the removal
of the pg_database row would also roll back, even though some irreversible
steps had already been taken. E.g. DropDatabaseBuffers() might have thrown
out dirty buffers, or files could have been unlinked. But we continued to
allow connections to such a corrupted database.
To fix this, mark databases invalid with an in-place update, just before
starting to perform irreversible steps. As we can't add a new column in the
back branches, we use pg_database.datconnlimit = -2 for this purpose.
An invalid database cannot be connected to anymore, but can still be
dropped.
Unfortunately we can't easily add output to psql's \l to indicate that some
database is invalid, as it doesn't fit in any of the existing columns.
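For instance, an invalid database left behind by an interrupted DROP
DATABASE could be spotted and removed like this (placeholder database
name):
-- find databases marked invalid by an interrupted DROP DATABASE
SELECT datname FROM pg_database WHERE datconnlimit = -2;
-- dropping such a database is still possible, connecting to it is not
DROP DATABASE mydb;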
Add tests verifying that an interrupted DROP DATABASE is handled correctly in
the backend and in various tools.
Reported-by: Evgeny Morozov <postgresql3@realityexists.net>
Author: Andres Freund <andres@anarazel.de>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/20230509004637.cgvmfwrbht7xm7p6@awork3.anarazel.de
Discussion: https://postgr.es/m/20230314174521.74jl6ffqsee5mtug@awork3.anarazel.de
Backpatch: 11-, bug present in all supported versions
When vac_truncate_clog() encounters bogus datfrozenxid / datminmxid values, it
returns early. Unfortunately, until now, it did not release
WrapLimitsVacuumLock. If the backend later tries to acquire
WrapLimitsVacuumLock, the session / autovacuum worker hangs in an
uncancellable way. Similarly, other sessions will hang waiting for the
lock. However, if the backend holding the lock exited or errored out for some
reason, the lock was released.
The bug was introduced as a side effect of 566372b3d6.
It is interesting that there are no production reports of this problem. That
is likely due to a mix of factors: bugs leading to bogus values have become
less common, process exit releases locks, and hangs like this are hard for
"normal" users to debug.
Discussion: https://postgr.es/m/20230621221208.vhsqgduwfpzwxnpg@awork3.anarazel.de
Commit 89e46da5e allowed REPLICA IDENTITY FULL tables to use an index
on the subscriber during apply of update/delete. This commit clarifies
in the documentation that the leftmost field of candidate indexes must
be a column (not an expression) that references the published relation
column.
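For illustration, on the subscriber (hypothetical table with text
columns a and b):
-- usable as a candidate: the leftmost key is a plain column
CREATE INDEX tab_idx_ok ON tab (a, lower(b));
-- not usable: the leftmost key is an expression
CREATE INDEX tab_idx_bad ON tab (lower(a));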
The source code comments are also updated accordingly.
Reviewed-by: Peter Smith, Amit Kapila
Discussion: https://postgr.es/m/CAD21AoDJjffEvUFKXT27Q5U8-UU9JHv4rrJ9Ke8Zkc5UPWHLvA@mail.gmail.com
Backpatch-through: 16
Unlike the other implementations of getopt_long() I could find, the
in-tree implementation does not reorder non-options to the end of
argv. Instead, it returns -1 as soon as the first non-option is
found, even if there are other options listed afterwards. By
moving non-options to the end of argv, getopt_long() can parse all
specified options and return -1 when only non-options remain.
This quirk is periodically missed by hackers (e.g., 869aa40a27,
ffd398021c, and d9ddc50baf). This commit introduces the
aforementioned non-option reordering behavior to the in-tree
getopt_long() implementation.
Special thanks to Noah Misch for his help verifying behavior on
AIX.
Reviewed-by: Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/20230609232257.GA121461%40nathanxps13
A CaseTestExpr is currently being put into
JsonValueExpr.formatted_expr as a placeholder for the result of
evaluating JsonValueExpr.raw_expr, which in turn is evaluated
separately. However, there's no need for this indirection if
raw_expr itself can be embedded into formatted_expr and evaluated
as part of evaluating the latter, especially as there is no
special reason to evaluate it separately. So this commit makes it
so. As a result, JsonValueExpr.raw_expr no longer needs to be
evaluated in ExecInterpExpr(), eval_const_exprs_mutator() etc. and
is now only used for displaying the original "unformatted"
expression in ruleutils.c.
While at it, this also removes the function makeCaseTestExpr(),
because the code in makeJsonConstructorExpr() looks more readable
without it IMO, and the function isn't used by anyone else either.
Finally, a note is added in the comment above CaseTestExpr's
definition that JsonConstructorExpr is also using it.
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/CA+HiwqE4XTdfb1nW=Ojoy_tQSRhYt-q_kb6i5d4xcKyrLC1Nbg@mail.gmail.com
The first check on the enum values was not necessary as the values set
in wait_event_names.txt for the classes LWLock and Lock were able to
satisfy the check. The second check when generating the C and header
files is now based on a match of the class names, making it simpler to
understand.
Author: Masahiro Ikeda, Michael Paquier
Discussion: https://postgr.es/m/eaf82a85c0ef1b55dc3b651d3f7b867a@oss.nttdata.com
The special handling of negative attribute numbers in
ATExecAddColumn() was introduced to support SET WITH OIDS (commit
6d1e361852). But that feature doesn't exist anymore, so we can revert
to the previous, simpler version. In passing, also remove an obsolete
comment about OID support.
Reviewed-by: Alvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://www.postgresql.org/message-id/flat/52a125e4-ff9a-95f5-9f61-b87cf447e4da@eisentraut.org
Previously we only allowed unique B-tree constraints on partitions
(and only if the constraint included all the partition keys). But we
could allow exclusion constraints with the same restriction. We also
require that those columns be compared for equality, not something
like &&.
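A sketch of what this permits (assumes the btree_gist extension for the
gist equality operator on integers):
CREATE EXTENSION IF NOT EXISTS btree_gist;
CREATE TABLE meetings (room int, during tsrange) PARTITION BY LIST (room);
-- allowed: the partition key column is compared with equality
ALTER TABLE meetings ADD EXCLUDE USING gist (room WITH =, during WITH &&);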
Author: Paul A. Jungwirth <pj@illuminatedcomputing.com>
Reviewed-by: Ronan Dunklau <ronan.dunklau@aiven.io>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://www.postgresql.org/message-id/flat/ec8b1d9b-502e-d1f8-e909-1bf9dffe6fa5@illuminatedcomputing.com
All supported computers have either POSIX or Windows threads, and we no
longer have any automated testing of --disable-thread-safety. We define
a vestigial ENABLE_THREAD_SAFETY macro to 1 in ecpg_config.h in case it
is useful, but we no longer test it anywhere in PostgreSQL code, and
associated dead code paths are removed.
The Meson and perl-based Windows build scripts never had an equivalent
build option.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CA%2BhUKGLtmexrpMtxBRLCVePqV_dtWG-ZsEbyPrYc%2BNBB2TkNsw%40mail.gmail.com
As coded, the row data strings generated for pgbench_accounts' COPY in
the client-side data generation were always assigning 0 for one of its
attributes. This slightly simplifies an upcoming patch to switch
client-side data generation of pgbench to use COPY for the teller and
branch tables, rather than individual INSERTs.
Author: Tristan Partin
Discussion: https://postgr.es/m/CSTU5P82ONZ1.19XFUGHMXHBRY@c3po
This commit adds two columns, indexes_total and indexes_processed, to
the pg_stat_progress_vacuum system view to show the index vacuum
progress. These numbers are reported in the "vacuuming indexes" and
"cleaning up indexes" phases.
This uses the new parallel message type for progress reporting added
by be06506e7.
Bump catversion because this changes the definition of
pg_stat_progress_vacuum.
Author: Sami Imseih
Reviewed by: Masahiko Sawada, Michael Paquier, Nathan Bossart, Andres Freund
Discussion: https://www.postgresql.org/message-id/flat/5478DFCD-2333-401A-B2F0-0D186AB09228@amazon.com
This commit adds a new type of parallel message 'P' to allow a
parallel worker to poke at a leader to update the progress.
Currently it supports only incremental progress reporting, but it
could be extended to support other backend progress APIs in the
future.
There are no users of this new message type as of this commit. That
will follow in future commits.
Idea from Andres Freund.
Author: Sami Imseih
Reviewed by: Michael Paquier, Masahiko Sawada
Discussion: https://www.postgresql.org/message-id/flat/5478DFCD-2333-401A-B2F0-0D186AB09228@amazon.com
Windows has similar functions with leading underscores. Previously, we
provided the rename via a macro in win32_port.h. In fact its functions
are not always good replacements for the Unix functions, since they
can't deal with UTF-8. They are only currently used by pg_locale.c,
which is careful to redirect to other Windows routines for UTF-8. Given
that portability hazard, it seems unlikely to be a good idea to encourage
any other code to think of these functions as being available outside
pg_locale.c. Any code that thinks it wants these functions probably
wants our wchar2char() or char2wchar() routines instead, or it won't
actually work on Windows in UTF-8 databases.
Furthermore, some major libc implementations including glibc don't have
them (they only have the standard variants without _l), so external code
is very unlikely to require them to exist.
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CA%2BhUKG%2Bt_CHPzEoPnKyARJBJgE9-GxNajJo6ZuSfRK_KWFO%2B6w%40mail.gmail.com
Since PostgresMain calls sigsetjmp, any local variables that are not
marked "volatile" have a risk of unspecified behavior. In practice
this means that when control returns via longjmp, such variables might
get reset to their values as of the time of sigsetjmp, depending on
whether the compiler chose to put them in registers or on the stack.
We were careful about this for "send_ready_for_query", but not the
other local variables.
In the case of the timeout_enabled flags, resetting them to
their initial "false" states is actually good, since we do
"disable_all_timeouts()" in the longjmp cleanup code path. If that
does not happen, we risk uselessly calling "disable_timeout()" later,
which is harmless but a little bit expensive. Let's explicitly reset
these flags so that the behavior is correct and platform-independent.
(This change means that we really don't need the new "volatile"
markings after all, but let's install them anyway since any change
in this logic could re-introduce a problem.)
There is no issue for "firstchar" and "input_message" because those
are explicitly reinitialized each time through the query processing
loop. To make that clearer, move them to be declared inside the loop.
That leaves us with all the function-lifespan locals except the
sigjmp_buf itself marked as volatile, which seems like a good policy
to have going forward.
Because of the possibility of extra disable_timeout() calls, this
seems worth back-patching.
Sergey Shinderuk and Tom Lane
Discussion: https://postgr.es/m/2eda015b-7dff-47fd-d5e2-f1a9899b90a6@postgrespro.ru
changeDependencyFor() returns the number of pg_depend entries changed,
or 0 if there is a problem. The callers of this routine expect only one
dependency to change, but they did not check the result returned.
The following code paths gain checks:
- Namespace for extensions.
- Namespace for various object types (see AlterObjectNamespace).
- Planner support function for a function.
Some existing error messages related to all that are reworded to be more
consistent with the project style, and the new error messages added
follow the same style. This change has exposed one bug fixed a bit
earlier with bd5ddbe.
Reviewed-by: Heikki Linnakangas, Akshat Jaimini
Discussion: https://postgr.es/m/ZJzD/rn+UbloKjB7@paquier.xyz
As coded, the command would use the namespace OID from the first object
scanned in pg_depend as the base of comparison when switching its
namespace dependency entry to the new one, and keep using it for any
follow-up checks. It would also be used as the old namespace OID to
switch *from* for the extension's pg_depend entry. Hence, if the first
object scanned has a namespace different from the one stored in the
extension, we would end up:
- Not checking that the extension objects map with the extension's
schema.
- Not switching the extension -> namespace dependency entry to the new
namespace provided by the user, making ALTER EXTENSION ineffective.
This issue has existed since this command was introduced in d9572c4 for
relocatable extensions, so backpatch all the way down to 11. The test
case has been provided by Heikki, and I have tweaked it a bit to show
the effects on pg_depend for the extension.
Reported-by: Heikki Linnakangas
Author: Michael Paquier, Heikki Linnakangas
Discussion: https://postgr.es/m/20eea594-a05b-4c31-491b-007b6fceef28@iki.fi
Backpatch-through: 11
Comments in src/backend/libpq/auth.c say: (after successfully finding
the final DN to check the user-supplied password against)
/* Unbind and disconnect from the LDAP server */
and later
/*
* Need to re-initialize the LDAP connection, so that we can bind to
* it with a different username.
*/
But the protocol actually permits multiple subsequent authentications
("binds") over a single connection.
So, it seems like the whole connection re-initialization thing was
just a confusion and can be safely removed, thus saving quite a few
network round-trips, especially for the case of ldaps/starttls.
Author: Anatoly Zaretsky <anatoly.zaretsky@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/CALbq6kmJ-1+58df4B51ctPfTOSyPbY8Qi2=ct8oR=i4TamkUoQ@mail.gmail.com
Historically this module dealt with thread-safety of system interfaces,
but now all that's left is wrapper code for user name and home directory
lookup. Arguably the Windows variants of this logic could be moved in
here too, to justify its presence under port. For now, just tidy up
some obsolete references to multi-threading, and give the file a
meaningful name.
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/CA%2BhUKGLtmexrpMtxBRLCVePqV_dtWG-ZsEbyPrYc%2BNBB2TkNsw%40mail.gmail.com
locale_t is defined by POSIX.1-2008 and SUSv4, and available on all
targeted systems. For Windows, win32_port.h redirects to a partial
implementation called _locale_t. We can now remove a lot of
compile-time tests for HAVE_LOCALE_T, and associated comments and dead
code branches that were needed for older computers.
Since configure + MinGW builds didn't detect locale_t but now we assume
that all systems have it, further inconsistencies among the 3 Windows build
systems were revealed. With this commit, we no longer define
HAVE_WCSTOMBS_L and HAVE_MBSTOWCS_L on any Windows build system, but
we have logic to deal with that so that replacements are available where
appropriate.
Reviewed-by: Noah Misch <noah@leadboat.com>
Reviewed-by: Tristan Partin <tristan@neon.tech>
Reviewed-by: Peter Eisentraut <peter@eisentraut.org>
Discussion: https://postgr.es/m/CA%2BhUKGLg7_T2GKwZFAkEf0V7vbnur-NfCjZPKZb%3DZfAXSV1ORw%40mail.gmail.com
Commit e213de8e78 fixed a problem with path lengths to a tempdir on
Windows, but caused problems on at least some Unix systems where the
system tempdir is on a different file system. To work around this, only
use the system tempdir for the destination of pg_replslot on Windows,
and otherwise restore the old behaviour.
Backpatch to release 14 like the previous patch.
Problem exposed by a myriad of buildfarm animals.
The symlink to a longer location tripped up some Windows limit on
buildfarm animal fairywren when running with meson, which uses slightly
longer paths.
Backpatch to release 14 to keep the script in sync. Before that the
script skipped all symlink related tests on Windows.
Commit 19d8e2308b allowed a weaker check for HOT with summarizing
indexes, but it did not update README.HOT. So do that now.
Patch by Matthias van de Meent, minor changes by me. Backpatch to 16,
where the optimization was introduced.
Author: Matthias van de Meent
Reviewed-by: Tomas Vondra
Backpatch-through: 16
Discussion: https://postgr.es/m/CAEze2WiEOm8V+c9kUeYp2BPhbEc5s473fUf51xNeqvSFGv44Ew@mail.gmail.com
The restrictedToken handle was set but never read, so remove the
variable and change to a boolean style check to match other uses
of CreateRestrictedProcess().
Reviewed-by: Nathan Bossart <nathandbossart@gmail.com>
Reviewed-by: Michael Paquier <michael@paquier.xyz>
Discussion: https://postgr.es/m/62A63C81-3893-4E3F-A34E-2081DF67074E@yesql.se
On Windows, it's sometimes difficult to create a file with a path longer
than 255 chars, and if it can be created it might not be seen by the
archiver. This can be triggered by the test for tar backups with
filenames greater than 100 bytes. So we skip that test if the path would
exceed 255.
Backpatch to all live branches.
Reviewed by Daniel Gustafsson
Discussion: https://postgr.es/m/666ac55b-3400-fb2c-2cea-0281bf36a53c@dunslane.net
We create a file, so we better WAL-log it. In practice, all the
built-in index AMs and all extensions that I'm aware of write a
metapage to the init fork, which is WAL-logged, and replay of the
metapage implicitly creates the fork too. But if ambuildempty() didn't
write any page, we would miss it.
This can be seen with dummy_index_am. Set up replication, create a
'dummy_index_am' index on an unlogged table, and look at the files
created in the replica: the init fork is not created on the
replica. Dummy_index_am doesn't do anything with the relation files,
however, so it doesn't lead to any user-visible errors.
Backpatch to all supported versions.
Reviewed-by: Robert Haas
Discussion: https://www.postgresql.org/message-id/6e5bbc08-cdfc-b2b3-9e23-1a914b9850a9%40iki.fi
This is useful to show the allocation state of huge pages when setting
up a server with "huge_pages = try", where allocating huge pages would
be attempted but the server would continue its startup sequence even if
the allocation fails. The effective status of huge pages is not easily
visible without OS-level tools (or for instance, a lookup at
/proc/N/smaps), and the environments where Postgres runs may not
authorize that. Like the other GUCs related to huge pages, this works
for Linux and Windows.
This GUC can report as values:
- "on", if huge pages were allocated.
- "off", if huge pages were not allocated.
- "unknown", a special state that could only be seen when using for
example postgres -C because it is only possible to know if the shared
memory allocation worked after we can check for the GUC values, even if
checking a runtime-computed GUC. This value should never be seen when
querying for the GUC on a running server. An assertion is added to
check that.
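For example (illustrative usage):
-- on a running server
SHOW huge_pages_status;
-- or, without a running server: postgres -C huge_pages_status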
The discussion also considered adding a new function to grab this
status, but this would have required more tricks for -DEXEC_BACKEND,
something that GUCs already handle.
Noriyoshi Shinoda initiated the thread that led to this commit.
Author: Justin Pryzby
Reviewed-by: Nathan Bossart, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/TU4PR8401MB1152EBB0D271F827E2E37A01EECC9@TU4PR8401MB1152.NAMPRD84.PROD.OUTLOOK.COM
The header file wait_event_types.h was generated without a newline at
its end, which was inconsistent with all the other things generated
automatically.
Per offline gripe from Nathan Bossart.
This commit reverts the work done by commits 3ba59ccc89 and 72e78d831a.
Those commits were incorrect in asserting that we never acquire any other
heavy-weight lock after acquiring a page lock other than a relation
extension lock. We can acquire a lock on catalogs while doing catalog
lookups after acquiring a page lock.
This won't impact any existing feature, but we need to find some other way
to achieve this before parallelizing other write operations or even
improving the parallelism in vacuum (like allowing multiple workers
for an index).
Reported-by: Jaime Casanova
Author: Amit Kapila
Backpatch-through: 13
Discussion: https://postgr.es/m/CAJKUy5jffnRKNvRHKQ0LynRb0RJC-o4P8Ku3x9vGAVLwDBWumQ@mail.gmail.com
This commit comes as a continuation of the discussion that has led to
d522b05, as \v was handled inconsistently when parsing array values or
anything going through the parsers, and changing a parser behavior in
stable branches is a scary thing to do. The parsing of array values now
uses the more central scanner_isspace() and array_isspace() is removed.
As pointed out by Peter Eisentraut, fix a confusing reference to
horizontal space in the parsers with the term "horiz_space". \f was
included in this set since 3cfdd8f from 2000, but it is not horizontal.
"horiz_space" is renamed to "non_newline_space", to refer to all
whitespace characters except newlines.
The changes impact the parsers for the backend, psql, seg, cube, ecpg
and replication commands. Note that JSON should not escape \v, as per
RFC 7159, so these are not touched.
Reviewed-by: Peter Eisentraut, Tom Lane
Discussion: https://postgr.es/m/ZJKcjNwWHHvw9ksQ@paquier.xyz
llvm_release_context() called llvm_enter_fatal_on_oom(), but was missing
the corresponding llvm_leave_fatal_on_oom() call. As a result, if JIT was
used at all, we were almost always in the "fatal-on-oom" state.
It only makes a difference if you use an extension written in C++, and
run out of memory in a C++ 'new' call. In that case, you would get a
PostgreSQL FATAL error, instead of the default behavior of throwing a
C++ exception.
Back-patch to all supported versions.
Reviewed-by: Daniel Gustafsson
Discussion: https://www.postgresql.org/message-id/54b78cca-bc84-dad8-4a7e-5b56f764fab5@iki.fi
Most, but not all, of our "/*----" style comments have an end-guard
line with dashes at the end of the comment. However, pgindent doesn't
care about the end-guards, so they mostly just waste screen
space. Going forward, let's not require end-guards.
Remove a broken end-guard in a comment in walsender.c that led me to
think about this. Remove the end guard from another comment in
walsender.c for consistency, so that we use the same style in all
comments in the file.
However, we have thousands of existing "/*----" comments in the repository,
so it's not worth the code churn to change them all.
Discussion: https://www.postgresql.org/message-id/fb083c91-d490-3b65-25f3-05e9118b6b0d%40iki.fi
BuildEventTriggerCache sets up a context "EventTriggerCache" which
houses a hash named "Event Trigger Cache", which in turn creates a
context with the table name. This generated log output for memory
context dumps like below:
LOG: level: 2; EventTriggerCache: 8192 total in 1 blocks; 7928 free (4 chunks); 264 used
LOG: level: 3; Event Trigger Cache: 8192 total in 1 blocks; 2616 free (0 chunks); 5576 used
This renames the hash to ensure that the hash context has a unique
name for easier log reading and debugging.
Discussion: https://postgr.es/m/5EDC969E-CAE3-4CBD-965E-3B8A1294CFA4@yesql.se
Commit 7b64e4b3 taught DropSubscription() to drop the stats entry of a
subscription that is not associated with a replication slot for the
apply worker at DROP SUBSCRIPTION, but missed covering the case where
the subscription is not associated with replication slots for both the
apply worker and the tablesync worker.
Also add a test to verify that the stats for a slot-less subscription
are removed at DROP SUBSCRIPTION time.
Backpatch down to 15.
Author: Masahiko Sawada
Reviewed-by: Nathan Bossart, Hayato Kuroda, Melih Mutlu, Amit Kapila
Discussion: https://postgr.es/m/CAD21AoB71zkP7uPT7JDPsZcvp0749ExEQnOJxeNKPDFisHar+w@mail.gmail.com
Backpatch-through: 15
Several commands internally assemble command lines to call other
commands. This includes initdb, pg_dumpall, and pg_regress. (Also
pg_ctl, but that is different enough that I didn't consider it here.)
This has all evolved a bit organically, with fixed-size buffers, and
various optional command-line arguments being injected with
confusing-looking code, and the spacing between options handled in
inconsistent ways. Clean all this up a bit to look clearer and be
more easily extensible with new arguments and options. We start each
command with printfPQExpBuffer(), and then append arguments as
necessary with appendPQExpBuffer(). Also standardize on using
initPQExpBuffer() over createPQExpBuffer() where possible. pg_regress
uses StringInfo instead of PQExpBuffer, but many of the same ideas
apply.
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/16d0beac-a141-e5d3-60e9-323da75f49bf@eisentraut.org
The documentation and the code is generated automatically from a new
file called wait_event_names.txt, formatted in sections dedicated to
each wait event class (Timeout, Lock, IO, etc.) with three tab-separated
fields:
- C symbol in enums
- Format in the system views
- Description in the docs
Using this approach has several advantages, as we have proved to be
rather bad at maintaining this area of the tree across the years:
- The order of each item in the documentation and the code, which should
be alphabetical, has become incorrect multiple times, and the script
generating the code and documentation has a few rules to enforce that,
making the maintenance a no-brainer.
- Some wait events were added to the code, but not documented, so this
cannot be missed now.
- The order of the tables for each wait event class is enforced in the
documentation (the input .txt file does so as well for clarity, though
this is not mandatory).
- Less code, shaving 1.2k lines from the tree, with 1/3 of the savings
coming from the code, the rest from the documentation.
The wait event types "Lock" and "LWLock" still have their own code path
for their code, hence only the documentation is created for them. These
classes are listed with a special marker called WAIT_EVENT_DOCONLY in
the input file.
Adding a new wait event now requires only an update of
wait_event_names.txt, with "Lock" and "LWLock" treated as exceptions.
This commit has been tested with configure/Makefile, the CI and VPATH
build. clean, distclean and maintainer-clean were working fine.
Author: Bertrand Drouvot, Michael Paquier
Discussion: https://postgr.es/m/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98@gmail.com
Clear any potential stale next_phase_at value from the snapshot
builder which otherwise may trip an assertion check ensuring
that there is no next_phase_at value.
This can be reproduced by running 80 concurrent sessions like the one
below, where $c is a loop counter (assuming databases 1..$c have been
created):
echo "
CREATE TABLE replication_example(id SERIAL PRIMARY KEY,
somedata int,
text varchar(120));
SELECT 'init' FROM
pg_create_logical_replication_slot('regression_slot_$c',
'test_decoding');
SELECT data FROM
pg_logical_slot_get_changes('regression_slot_$c', NULL,
NULL, 'include-xids', '0',
'skip-empty-xacts', '1');
" | psql -d regress_$c >>psql.log &
Backpatch down to v16.
Bug: #17695
Author: Masahiko Sawada <sawada.mshk@gmail.com>
Reviewed-by: Alexander Lakhin <exclusion@gmail.com>
Reported-by: bowenshi <zxwsbg@qq.com>
Discussion: https://postgr.es/m/17695-6be9277c9295985f@postgresql.org
Backpatch-through: v16
If you create a table and don't insert any data into it, the relation file
is never fsync'd. You don't lose data, because an empty table doesn't have
any data to begin with, but if you crash and lose the file, subsequent
operations on the table will fail with "could not open file" error.
To fix, register an fsync request in mdcreate(), like we do for mdwrite().
Per discussion, we probably should also fsync the containing directory
after creating a new file. But that's a separate and much wider issue.
Backpatch to all supported versions.
Reviewed-by: Andres Freund, Thomas Munro
Discussion: https://www.postgresql.org/message-id/d47d8122-415e-425c-d0a2-e0160829702d%40iki.fi
This removes md5() function calls from these test suites:
- bloom
- test_decoding
- isolation
- recovery
- subscription
This covers all remaining test suites where md5() calls were just used
to generate some random data and can be replaced by appropriately
adapted sha256() calls. This will eventually allow these tests to
pass in OpenSSL FIPS mode (which does not allow MD5 use). See also
208bf364a9. Unlike for the main regression tests, I didn't write a
fipshash() wrapper here, because that would have been too repetitive
and wouldn't really save much here. In some cases it was easier to
remove one layer of indirection by changing column types from text to
bytea.
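As an illustration of the kind of substitution involved (a sketch, not
the exact test changes):
-- before: md5(random()::text)
-- after, roughly:
SELECT encode(sha256(convert_to(random()::text, 'UTF8')), 'hex');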
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
Discussion: https://www.postgresql.org/message-id/flat/f9b480b5-e473-d2d1-223a-4b9db30a229a@eisentraut.org
Previously an "amcanorderbyop" index would only be used when the index
could provide sorted results which satisfied all query_pathkeys. Here
we relax this so that we also allow these indexes to be considered by the
planner when they only provide partially sorted results. This allows the
planner to later consider making use of an Incremental Sort to satisfy the
remaining pathkeys. This change is particularly useful for KNN-type
queries which contain a LIMIT clause and an additional ORDER BY clause for
a non-indexed column.
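A hypothetical query of the kind that benefits, assuming a table
pts(id int, p point) with a GiST index on p:
CREATE INDEX ON pts USING gist (p);
SELECT * FROM pts
  ORDER BY p <-> point '(0,0)', id
  LIMIT 10;
-- the index provides ordering for the first key only; an Incremental
-- Sort can now satisfy the remaining ORDER BY key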
Author: Miroslav Bendik
Reviewed-by: Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAPoEpV0QYDtzjwamwWUBqyWpaCVbJV2d6qOD7Uy09bWn47PJtw%40mail.gmail.com
The following routines are added to libpq:
PGresult *PQclosePrepared(PGconn *conn, const char *stmt);
PGresult *PQclosePortal(PGconn *conn, const char *portal);
int PQsendClosePrepared(PGconn *conn, const char *stmt);
int PQsendClosePortal(PGconn *conn, const char *portal);
The "send" routines are non-blocking versions of the two others.
Close messages are part of the protocol but they did not have a libpq
implementation. And having these routines is useful, for instance, with
connection poolers, as they can detect Close messages more easily than
DEALLOCATE queries.
The implementation takes advantage of what the Describe routines rely on
for portals and statements. Some regression tests are added in
libpq_pipeline, for the four new routines, by closing portals and
statements created already by the tests.
Author: Jelte Fennema
Reviewed-by: Jian He, Michael Paquier
Discussion: https://postgr.es/m/CAGECzQTb4xFAopAVokudB+L62Kt44mNAL4Z9zZ7UTrs1TRFvWA@mail.gmail.com
It's OK to be lazy about re-binning memory segments when allocating,
because that can only leave segments in a bin that's too high. We'll
search higher bins if necessary while allocating next time, and
also eventually re-bin, so no memory can become unreachable that way.
However, when freeing memory, the largest contiguous range of free pages
might go up, so we should re-bin eagerly to make sure we don't leave the
segment in a bin that is too low for get_best_segment() to find.
The re-binning code is moved into a function of its own, so it can be
called whenever free pages are returned to the segment's free page map.
Back-patch to all supported releases.
Author: Dongming Liu <ldming101@gmail.com>
Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier version)
Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
Discussion: https://postgr.es/m/CAL1p7e8LzB2LSeAXo2pXCW4%2BRya9s0sJ3G_ReKOU%3DAjSUWjHWQ%40mail.gmail.com
Prior to this, Bitmapsets could contain trailing words which had no
members set. Many places in bitmapset.c had to loop over trailing words
to check for empty words. If we ensure we always remove these trailing
zero words then we can perform various optimizations such as
fast-pathing bms_is_subset to return false when 'a' has more words than 'b'.
A similar optimization is possible in bms_equal. Both of these together
can yield quite significant performance increases in the query planner
when querying a partitioned table with around 100 or more partitions.
While we're at it, since the minimum number of words a Bitmapset can
contain is 1, we can make use of do/while loops instead of for loops
when looping over all words in a set. This means checking the loop
condition 1 less time, which for single-word sets cuts the loop
condition checks in half.
Author: David Rowley
Reviewed-by: Yuya Watari
Discussion: https://postgr.es/m/CAApHDvr5O41MuUjw0DQKqmAnv7QqvmLqXReEd5o4nXTzWp8-+w@mail.gmail.com
This commit increases the size of the bgw_library_name member of
the BackgroundWorker struct from BGW_MAXLEN (96) bytes to MAXPGPATH
(default of 1024) bytes so that it can store longer file names
(e.g., absolute paths).
Author: Yurii Rashkovskii
Reviewed-by: Daniel Gustafsson, Aleksander Alekseev
Discussion: https://postgr.es/m/CA%2BRLCQyjFV5Y8tG5QgUb6gjteL4S3p%2B1gcyqWTqigyM93WZ9Pg%40mail.gmail.com
The ginfast.c code previously checked for conflicts before locking
the relevant buffer, leaving a window where a RW conflict could be
missed. Re-order.
There was also a place where buffer ID and block number were confused
while trying to predicate-lock a page, noted by visual inspection.
Back-patch to all supported releases. Fixes one more problem discovered
with the reproducer from bug #17949, in this case when Dmitry tried
other index types.
Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reported-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
When performing a bitmap heap scan, we don't want to miss concurrent
writes that occurred after we observed the heap's rs_nblocks, but before
we took predicate locks on index pages. Therefore, we can't skip
fetching any heap tuples that are referenced by the index, because we
need to test them all with CheckForSerializableConflictOut(). The
old optimization that would ignore any references to blocks >=
rs_nblocks gets in the way of that requirement, because it means that
concurrent writes in that window are ignored.
Removing that optimization shouldn't affect correctness at any isolation
level, because any new tuples shouldn't be visible to an MVCC snapshot.
There also shouldn't be any error-causing references to heap blocks past
the end, because we should have held at least an AccessShareLock on the
table before the index scan. It can't get smaller while our transaction
is running. For now, though, we'll keep the optimization at lower
levels to avoid making unnecessary changes in a bug fix.
Back-patch to all supported releases. In release 11, the code is in a
different place but not fundamentally different. Fixes one aspect of
bug #17949.
Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
When predicate-locking btrees, we have a special case for completely
empty btrees, since there is no page to lock. This was racy, because,
without buffer lock held, a matching key could be inserted between the
_bt_search() and the PredicateLockRelation() calls.
Fix, by rechecking _bt_search() after taking the relation-level SIREAD
lock, if using SERIALIZABLE isolation and an empty btree is discovered.
Back-patch to all supported releases. Fixes one aspect of bug #17949.
Reported-by: Artem Anisimov <artem.anisimov.255@gmail.com>
Reviewed-by: Dmitry Dolgov <9erthalion6@gmail.com>
Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://postgr.es/m/17949-a0f17035294a55e2%40postgresql.org
Unlike commands such as CREATE DATABASE, ProcessStartupPacket()
does not perform multibyte-aware truncation of overlength names.
This means that connection attempts might fail even if the user
provides the same overlength names that were used in CREATE
DATABASE, CREATE ROLE, etc. Ideally, we'd do the same multibyte-
aware truncation in both code paths, but it doesn't seem worth the
added complexity of trying to discover the encoding of the names.
Instead, let's simply skip truncating the names in the startup
packet and let the user/database lookup fail later on. With this
change, users must provide the exact names stored in the catalogs,
even if the names were truncated.
This reverts commit d18c1d1f51.
Author: Bertrand Drouvot
Reviewed-by: Kyotaro Horiguchi, Tom Lane
Discussion: https://postgr.es/m/07436793-1426-29b2-f924-db7422a05fb7%40gmail.com
When table_block_relation_estimate_size() estimated the number of tuples
in a relation without statistics (e.g. right after load), it did not
consider fillfactor when calculating density. With non-default
fillfactor values, this may result in a significant overestimate of the
number of tuples - up to 10x with the minimum 10% fillfactor. This may
have unexpected consequences, e.g. when creating hash indexes.
This commit considers the current fillfactor value in the "no statistics" code
path. If the fillfactor changes after loading data into the table, the
estimate may be off. But that seems much less likely than changing the
fillfactor before the data load.
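For illustration (hypothetical table):
CREATE TABLE t (a int) WITH (fillfactor = 10);
INSERT INTO t SELECT generate_series(1, 1000000);
-- before the first ANALYZE, reltuples is estimated from the relation
-- size; that estimate now accounts for the 10% fillfactor
EXPLAIN SELECT * FROM t;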
Reviewed-by: Corey Huinker, Peter Eisentraut
Discussion: https://postgr.es/m/cf154ef9-6bac-d268-b735-67a3443debba@enterprisedb.com
Creation of a file with a very long name can create problems on Windows
due to its file path limits. Work around that by creating the file via a
symlink with a shorter name.
Error displayed by buildfarm animal fairywren.
Backpatch to all live branches.
The VacAttrStats structure contained the whole Form_pg_attribute for a
column, but it actually only needs attstattarget from there. So
remove the Form_pg_attribute field and make a separate field for
attstattarget. This simplifies some code for extended statistics that
doesn't deal with a column but with an expression, which had to fake up
pg_attribute rows to satisfy internal APIs. Also, we can remove some
comments that essentially said "don't look at pg_attribute directly".
Reviewed-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/d6069765-5971-04d3-c10d-e4f7b2e9c459%40eisentraut.org
Here are some notes about this change:
- As X509_get_signature_nid() should always exist (OpenSSL and
LibreSSL), hence HAVE_X509_GET_SIGNATURE_NID is now gone.
- OPENSSL_API_COMPAT is bumped to 0x10002000L.
- One comment related to 1.0.1e introduced by 74242c2 is removed.
Upstream OpenSSL still provides long-term support for 1.0.2 in a closed
fashion, so removing it is out of scope for a few years, at least.
Reviewed-by: Jacob Champion, Daniel Gustafsson
Discussion: https://postgr.es/m/ZG3JNursG69dz1lr@paquier.xyz
The following changes are done:
- Addition of WaitEventBufferPin and WaitEventExtension, that hold a
list of wait events related to each category.
- Addition of two functions that encapsulate the list of wait events for
each category.
- Rename BUFFER_PIN to BUFFERPIN (only this wait event class used an
underscore, requiring a specific rule in the automation script).
These changes make the automatic generation of all the code and
documentation related to wait events a bit easier, as all the wait
event categories are now controlled by consistent structures and
functions.
Author: Bertrand Drouvot
Discussion: https://postgr.es/m/c6f35117-4b20-4c78-1df5-d3056010dcf5@gmail.com
Discussion: https://postgr.es/m/77a86b3a-c4a8-5f5d-69b9-d70bbf2e9b98@gmail.com
When set, this environment variable was only effective for data
directories but not for all the other temporary files created by
PostgreSQL::Test::Utils. Keeping the temporary files after a successful
run can be useful for debugging purposes.
The documentation is updated to reflect the new behavior, with contents
available in doc/ since v16 and in src/test/perl/README since v15.
Author: Jacob Champion
Reviewed-by: Daniel Gustafsson
Discussion: https://postgr.es/m/CAAWbhmgHtDH1SGZ+Fw05CsXtE0mzTmjbuUxLB9mY9iPKgM6cUw@mail.gmail.com
Discussion: https://postgr.es/m/YyPd9unV14SX2bLF@paquier.xyz
Backpatch-through: 11
Here we adjust the query planner to have it remove items from a window
clause's PARTITION BY clause in cases where the pathkey for a column in
the PARTITION BY clause is redundant.
Doing this allows the optimization added in 9d9c02ccd to stop window
aggregation early rather than going into "pass-through" mode to find
tuples belonging to the next partition. Also, when we manage to remove
all PARTITION BY columns, we now no longer needlessly check that the
current tuple belongs to the same partition as the last tuple in
nodeWindowAgg.c. If the pathkey was redundant then all tuples must
contain the same value for the given redundant column, so there's no point
in checking that during execution.
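A hypothetical example of a window clause whose PARTITION BY column
becomes redundant:
SELECT a, b, row_number() OVER (PARTITION BY a ORDER BY b)
  FROM tbl
 WHERE a = 1;
-- "a" is constant because of the WHERE clause, so its pathkey is
-- redundant and it can be removed from the PARTITION BY clause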
Author: David Rowley
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/CAApHDvo2ji+hdxrxfXtRtsfSVw3to2o1nCO20qimw0dUGK8hcQ@mail.gmail.com
Commit dd38ff28ad added a new error message "missing contrecord" when
we fail to reassemble a record. Unfortunately that caused noisy
messages to be logged by pg_waldump at end of segment, and by walsender
when asked to shut down on a segment boundary.
Remove the new error message, so that this condition signals end-of-
WAL without a message. It's arguably a reportable condition that should
not be silenced while performing crash recovery, but fixing that without
introducing noise in the other cases will require more research.
Back-patch to 15.
Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com>
Discussion: https://postgr.es/m/6a1df56e-4656-b3ce-4b7a-a9cb41df8189%40enterprisedb.com
Commit f24523672d fixed a memory leak by moving the modifiedCols bitmap
into the per-row memory context. In the case of AFTER UPDATE triggers,
the bitmap is however referenced from an event kept until the end of the
query, resulting in a use-after-free bug.
Fixed by copying the bitmap into the AfterTriggerEvents memory context,
which is the one where we keep the trigger events. There's only one
place that needs to do the copy, but the memory context may not exist
yet. Doing that in a separate function seems more readable.
Report by Alexander Pyhalov, fix by me. Backpatch to 13, where the
bitmap was added to the event by commit 71d60e2aa0.
Reported-by: Alexander Pyhalov
Backpatch-through: 13
Discussion: https://postgr.es/m/acddb17c89b0d6cb940eaeda18c08bbe@postgrespro.ru
The Incremental Sort had a couple of issues, resulting in leaking memory
during rescans, possibly triggering OOM. The code had a couple of
related flaws:
1. During rescans, the sort states were reset but then also set to NULL
(despite the comment saying otherwise). ExecIncrementalSort then
sees NULL and initializes a new sort state, leaking the memory used
by the old one.
2. Initializing the sort state also automatically rebuilt the info about
presorted keys, leaking the already initialized info. presorted_keys
was also unnecessarily reset to NULL.
Patch by James Coleman, based on patches by Laurenz Albe and Tom Lane.
Backpatch to 13, where Incremental Sort was introduced.
Author: James Coleman, Laurenz Albe, Tom Lane
Reported-by: Laurenz Albe, Zu-Ming Jiang
Backpatch-through: 13
Discussion: https://postgr.es/m/b2bd02dff61af15e3526293e2771f874cf2a3be7.camel%40cybertec.at
Discussion: https://postgr.es/m/db03c582-086d-e7cd-d4a1-3bc722f81765%40inf.ethz.ch
Per the code coverage report, the existing regression tests did not
exercise a couple of important BRIN minmax-multi code paths.
- The tests focused on testing planning with a range of scan key
strategies, but not the execution. Fixed by adding queries that
actually test query execution for both equality and inequality.
- All tests created indexes after inserting data, but this only
exercises the CREATE INDEX strategy that sees all values at once, not
incremental summary updates. The new tests flip the order and create
the index before adding data.
- The assert check(s) validating correctness of expanded ranges were
present only in the "union" code path, which is not covered by
regression tests at all (as it requires concurrency etc.). Fixed by
adding the asserts to a couple more places.
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/57020b2e-d9c9-9bc7-4892-b36d9bb07563%40enterprisedb.com
Move the calculation of Bloom filter parameters (for BRIN indexes) into
a separate function to make reuse easier. At the moment we only call it
from one place, but that may change and it's easier to read anyway.
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/0e1f3350-c9cf-ab62-43a5-5dae314de89c%40enterprisedb.com
BRIN bloom and minmax-multi opclasses were somewhat inconsistent when
dealing with bool variables, assigning Datum values to them, etc. While
not a bug, it makes the code harder to understand, so fix that.
While at it, update an incorrect comment copied to bloom opclass from
minmax, talking about strategies not supported by bloom.
Reviewed-by: Heikki Linnakangas
Discussion: https://postgr.es/m/0e1f3350-c9cf-ab62-43a5-5dae314de89c%40enterprisedb.com
This patch is a preliminary refactoring for an upcoming patch aimed at
adding new options to this tool, and using long options for these is
more user-friendly. The existing short options gain long flavors, as
of:
* -d/--debug
* -n/--dry-run
* -x/--strip-extension
Author: Atsushi Torikoshi
Reviewed-by: Fujii Masao, Kyotaro Horiguchi, Michael Paquier
Discussion: https://postgr.es/m/d660ef741ce3d82f3b4283f1cafd576c@oss.nttdata.com
The logic that introduced partitioned indexes missed a few things when
invalidating a partitioned index at creation time, even though the code
is written to handle recursion:
1) If created from scratch because a mapping index could not be found,
the new index created could be itself invalid, if for example it was a
partitioned index with one of its leaves invalid.
2) A CCI was missing when indisvalid is set for a parent index, leading
to inconsistent trees when recursing across more than one level for a
partitioned index creation if an invalidation of the parent was
required.
This could lead to the creation of a partition index tree where some of
the partitioned indexes are marked as invalid, but some of the parents
are marked valid, which is not something that should happen (as
validatePartitionedIndex() defines, indisvalid is switched to true for a
partitioned index iff all its partitions are themselves valid).
This patch makes sure that indisvalid is set to false on a partitioned
index if at least one of its partition is invalid. The flag is set to
true if *all* its partitions are valid.
The regression test added in this commit relies on a failed concurrent
index creation, marked as invalid, that maps with an index created on
its partitioned table afterwards.
Reported-by: Alexander Lakhin
Reviewed-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
The following patterns are added for CREATE SCHEMA:
- AUTHORIZATION, without a schema name or after a schema name.
- Possible list of owner roles after AUTHORIZATION.
- CREATE and GRANT within the supported set of commands.
- Correct object types supported in an embedded CREATE SCHEMA command.
While on it, this commit adjusts the completion done after CREATE
UNLOGGED:
- Addition of SEQUENCE.
- Avoid suggesting MATERIALIZED VIEW in CREATE TABLE.
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: Suraj Khamkar, Michael Paquier
Discussion: https://postgr.es/m/8735snihmz.fsf@wibble.ilmari.org
Perl 5.14 or newer has been required since 4c15327, meaning that it is
possible to use named captures and the %+ hash instead of having to
count parenthesis groups manually.
While on it, CATALOG is made more flexible in its handling of
whitespaces for parameter lists (see the addition of \s* in this
case). The generated postgres.bki remains exactly the same before and
after this commit.
Author: Dagfinn Ilmari Mannsåker
Reviewed-by: John Naylor
Discussion: https://postgr.es/m/87y1l3s7o9.fsf@wibble.ilmari.org
ALTER TABLE .. SET ACCESS METHOD was not registering a dependency to the
new access method with the relation altered in its rewrite phase, making
it possible to drop an access method even if there are relations that
depend on it. During the rewrite, a temporary relation is created to
build the new relation files before swapping the new and old files, and,
while the temporary relation was registering a correct dependency to the
new AM, the old relation did not do that. A dependency on the access
method is added when the relation files are swapped, which is the point
where pg_class is updated.
Materialized views and tables use the same code path, hence both were
impacted.
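A sketch of the scenario now handled correctly, with a hypothetical
access method name:
ALTER TABLE t SET ACCESS METHOD myam;
DROP ACCESS METHOD myam;  -- now fails with a dependency error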
Backpatch down to 15, where this command has been introduced.
Reported-by: Alexander Lakhin
Reviewed-by: Nathan Bossart, Andres Freund
Discussion: https://postgr.es/m/18000-9145c25b1af475ca@postgresql.org
Backpatch-through: 15
An outer join cannot be formed using an input path that is parameterized
by a value that is supposed to be nulled by the outer join. This is
obviously nonsensical, and it could lead to a bad plan being selected;
although currently it seems that we'll hit various sanity-check
assertions first.
I think that such cases were formerly prevented by the delay_upper_joins
mechanism, but now that that's gone we need an explicit check.
(Perhaps we should avoid generating baserel paths that could
lead to this situation in the first place; but it seems like
having a defense at the join level would be a good idea anyway.)
Richard Guo and Tom Lane, per report from Jaime Casanova
Discussion: https://postgr.es/m/CAJKUy5g2uZRrUDZJ8p-=giwcSHVUn0c9nmdxPSY0jF0Ov8VoEA@mail.gmail.com
If the given composite datum is toasted out-of-line,
DatumGetHeapTupleHeader will perform database accesses to detoast it.
That can invalidate the result of get_cached_rowtype, as documented
(perhaps not plainly enough) in that function's API spec; which leads
to strange errors or crashes when we try to use the TupleDesc to read
the tuple. In short then, trying to update a field of a composite
column could fail intermittently if the overall column value is wide
enough to require toasting.
We can fix the bug at no cost by just changing the order of
operations, since we don't need the TupleDesc until after detoasting.
(Other callers of get_cached_rowtype appear to get this right already,
so there's only one bug.)
Note that the added regression test case reveals this bug reliably
only with debug_discard_caches/CLOBBER_CACHE_ALWAYS.
Per bug #17994 from Alexander Lakhin. Sadly, this patch does not fix
the missing-values issue revealed in the bug discussion; we'll need
some more work to cover that.
Discussion: https://postgr.es/m/17994-5c7100b51b4790e9@postgresql.org
It was walking into the ColumnDef->compression field, which is not a
node but a string. This code is currently not reachable (because the
compression field is only set in situations that don't go through
raw_expression_tree_walker()), but if it had been, this could have
behaved erratically.
Not including the timeline IDs in the file names generated by pg_waldump
for the individual blocks saved could cause some of these files to be
overwritten when scanning segments across multiple timelines. This
information is also as useful as the LSNs, to be able to know exactly
which WAL segment a block comes from.
While on it, this fixes a few comments in the tests, where the format of
the file was not described as matching reality.
Reported-by: Fujii Masao
Reviewed-by: Kyotaro Horiguchi, David Christensen
Discussion: https://postgr.es/m/ZJp921+nITFnvBVS@paquier.xyz
A portion of ALTER TABLE .. ATTACH PARTITION is to ensure that the
partition being attached to the partitioned table has a correct set of
indexes, so as there is a consistent index mapping between the
partitioned table and its new-to-be partition. However, as introduced
in 8b08f7d, the current logic could choose an invalid index as a match,
which is something that can exist when dealing with more than two levels
of partitioning, like attaching a partitioned table (that has
partitions, with an index created by CREATE INDEX ON ONLY) to another
partitioned table.
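A sketch of the kind of setup described above (hypothetical names):
CREATE TABLE parent (a int) PARTITION BY LIST (a);
CREATE TABLE child (a int) PARTITION BY LIST (a);
CREATE INDEX ON ONLY child (a);  -- partitioned index on child, invalid
CREATE INDEX ON parent (a);
ALTER TABLE parent ATTACH PARTITION child FOR VALUES IN (1);
-- the attach must not match parent's index to child's invalid index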
A partitioned index with indisvalid set to false is equivalent to an
incomplete partition tree, meaning that an invalid partitioned index
does not have indexes defined in all its partitions. Hence, choosing an
invalid partitioned index can create inconsistent partition index trees,
where the parent attaching to is valid, but its partition may be
invalid.
In the report from Alexander Lakhin, this showed up as an assertion
failure when validating an index. Without assertions enabled, the
partition index tree would be actually broken, as indisvalid should
be switched to true for a partitioned index once all its partitions are
themselves valid. With two levels of partitioning, the top partitioned
table used a valid index and was able to link to an invalid index stored
on its partition, itself a partitioned table.
I have studied a few options here (like the possibility of switching
indisvalid to false for the parent), but came to the conclusion that
we'd better rely on a simple rule: invalid indexes should never be
chosen, so that the partition being attached uses and creates indexes
that the parent expects. Some regression tests are added to provide some
coverage. Note that the existing coverage is not impacted.
This is a problem since partitioned indexes exist, so backpatch all the
way down to v11.
Reported-by: Alexander Lakhin
Discussion: https://postgr.es/m/14987634-43c0-0cb3-e075-94d423607e08@gmail.com
Backpatch-through: 11
Since 3db72eb, the query ID of utilities is generated using the Query
structure, making the use of the query string in JumbleQuery()
unnecessary. This commit removes the argument "querytext" from
JumbleQuery().
Reported-by: Joe Conway
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/ZJlQAWE4COFqHuAV@paquier.xyz
TParserGet() recurses for some token types, meaning it's possible
to drive it to stack overflow. Since this is a minority behavior,
I chose to add the check_stack_depth() call to the two places that
recurse rather than doing it during every single call.
While at it, add CHECK_FOR_INTERRUPTS(), because this can run
unpleasantly long for long inputs.
Per bug #17995 from Zuming Jiang. This is old, so back-patch
to all supported branches.
Discussion: https://postgr.es/m/17995-9f20ff3e6389db4c@postgresql.org
One of the tests for the pipeline mode with portal description expects a
non-NULL PQgetResult, but used an incorrect error message on failure,
stating that PQgetResult being NULL was the expected result.
Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQTkShHecFF+EZrm94Lbsu2ej569T=bz+PjMbw9Aiioxuw@mail.gmail.com
Backpatch-through: 14
ff9618e82a introduced has_partition_ancestor_privs(), which is used
to check whether a user has MAINTAIN on any partition ancestors.
This involves syscache lookups, and presently this function does
not take any relation locks, so it is likely subject to the same
kind of cache lookup failures that were fixed by 19de0ab23c.
To fix this problem, this commit partially reverts ff9618e82a.
Specifically, it removes the partition-related changes, including
the has_partition_ancestor_privs() function mentioned above. This
means that MAINTAIN on a partitioned table is no longer sufficient
to perform maintenance commands on its partitions. This is more
like how privileges for maintenance commands work on supported
versions. Privileges are checked for each partition, so a command
that flows down to all partitions might refuse to process them
(e.g., if the current user doesn't have MAINTAIN on the partition).
In passing, adjust a few related comments and error messages, and
add a test for the privilege checks for CLUSTER on a partitioned
table.
Reviewed-by: Michael Paquier, Jeff Davis
Discussion: https://postgr.es/m/20230613211246.GA219055%40nathanxps13
Avoid "right sibling %u of block %u is not next child" errors when
vacuuming a corrupt nbtree index. Just LOG the issue and press on.
That way VACUUM will have a decent chance of finishing off all required
processing for the index (and for the table as a whole).
This is similar to recent work from commit 5abff197, as well as work
from commit 5b861baa (later backpatched as commit 43e409ce), which
taught nbtree VACUUM to keep going when its "re-find" check fails. The
hardening added by this commit takes place directly after the "re-find"
check, right before the critical section for the first stage of page
deletion.
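As a rough sketch of the idea (variable names and message text are
simplified here, not the actual nbtpage.c change):

    /* The cross-check failed: the target's right sibling does not match
     * what the parent says the next child should be, so something is
     * corrupt.  Log it and give up on this page instead of erroring out,
     * letting the rest of the VACUUM proceed. */
    if (rightsib != nextchild)
    {
        ereport(LOG,
                (errcode(ERRCODE_INDEX_CORRUPTED),
                 errmsg("right sibling %u of block %u is not next child of %u in index \"%s\"",
                        rightsib, target, parent,
                        RelationGetRelationName(rel))));
        return false;        /* skip deleting this page, keep vacuuming */
    }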
Author: Peter Geoghegan <pg@bowt.ie>
Discussion: https://postgr.es/m/CAH2-Wz=dayg0vjs4+er84TS9ami=csdzjpuiCGbEw=idhwqhzQ@mail.gmail.com
Backpatch: 11- (all supported versions).
exec_parse_message() wants to create a cached plan in all cases,
including for empty input. The empty-input path does not have
a test for being in an aborted transaction, making it possible
that plancache.c will fail due to trying to do database lookups
even though there's no real work to do.
One solution would be to throw an aborted-transaction error in
this path too, but it's not entirely clear whether the lack of
such an error was intentional or whether some clients might be
relying on non-error behavior. Instead, let's hack plancache.c
so that it treats empty statements with the same logic it
already had for transaction control commands, ensuring that it
can soldier through even in an already-aborted transaction.
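To make the scenario concrete, here is a hypothetical libpq reproducer
(connection string and statement name are illustrative): preparing an
empty statement while the transaction is already aborted exercises
exactly this path.

    #include <stdio.h>
    #include <libpq-fe.h>

    int
    main(void)
    {
        PGconn     *conn = PQconnectdb("dbname=postgres");
        PGresult   *res;

        PQclear(PQexec(conn, "BEGIN"));
        PQclear(PQexec(conn, "SELECT 1/0"));   /* aborts the transaction */

        /* empty input still goes through exec_parse_message() */
        res = PQprepare(conn, "empty_stmt", "", 0, NULL);
        printf("PQprepare status: %s\n", PQresStatus(PQresultStatus(res)));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }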
Per bug #17983 from Alexander Lakhin. Back-patch to all
supported branches.
Discussion: https://postgr.es/m/17983-da4569fcb878672e@postgresql.org
pgperltidy as well as pgperlcritic and pgperlsyncheck now allow
passing files and directories on the command line, like pgindent does.
(Previously, they would always operate on the whole tree.)
Also, for consistency with pgindent's new behavior (as of b16259b3c1),
passing an argument is now required. To get the previous default
behavior, use "pgperltidy ." for example.
Discussion: https://www.postgresql.org/message-id/flat/45aacd8a-5265-d9da-8df2-b8e2c0cf6a07%40eisentraut.org
This partially reverts 68cb5af, as using archiving to enforce that the
last partial segment of the old timeline is renamed with a .partial
suffix at promotion impacts the tests when they perform switchovers.
As shown by the logs gathered by the CI for the tests that failed, a new
standby may fail to find the WAL segment it needs to follow a promoted
instance across its timeline jump, as the segment got renamed to
.partial. This problem would manifest as a run timeout in
009_twophase.pl, as the new standby repeatedly requests a segment from
the promoted primary that it would not find.
Reported-by: Nathan Bossart
Discussion: https://postgr.es/m/20230621043345.GA787473@nathanxps13
Backpatch-through: 13
The existing errhint message and docs failed to mention that a
subscription cannot be disassociated from its slot unless it is
disabled.
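For illustration, the order of operations the hint and docs now spell
out, shown through libpq (the subscription name and helper are
placeholders):

    #include <libpq-fe.h>

    /* A subscription must be disabled before it can be disassociated
     * from its replication slot. */
    static void
    detach_subscription_slot(PGconn *conn)
    {
        PQclear(PQexec(conn, "ALTER SUBSCRIPTION mysub DISABLE"));
        PQclear(PQexec(conn, "ALTER SUBSCRIPTION mysub SET (slot_name = NONE)"));
    }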
Author: Robert Sjöblom, Peter Smith
Reviewed-by: Peter Eisentraut, Amit Kapila
Backpatch-through: 11
Discussion: https://postgr.es/m/807bdf85-61ea-88e2-5712-6d9fcd4eabff@fortnox.se
ff9618e82a introduced the skip_privs parameter, which is used to
skip privilege checks when recursing to a relation's TOAST table.
This parameter should have been added as a flag bit in
VacuumParams->options instead.
Suggested-by: Michael Paquier
Reviewed-by: Michael Paquier, Jeff Davis
Discussion: https://postgr.es/m/ZIj4v1CwqlDVJZfB%40paquier.xyz
If the inner-side expressions contain PARAM_EXEC Params, we must
re-hash whenever the values of those Params change. The executor
mechanism for that exists already, but we failed to invoke it because
finalize_plan() neglected to search the Hash.hashkeys field for
Params. This allowed a previous scan's hash table to be re-used
when it should not be, leading to rows missing from the join's output.
(I believe incorrectly-included join rows are impossible however,
since checking the real hashclauses would reject false matches.)
This bug is very ancient, dating probably to d24d75ff1 of 7.4.
Sadly, this simple fix depends on the plan representational changes
made by 2abd7ae9b, so it will only work back to v12. I thought
about trying to make some kind of hack for v11, but I'm leery
of putting code significantly different from what is used in the
newer branches into a nearly-EOL branch. Seeing that the bug
escaped detection for a full twenty years, problematic cases
must be rare; so I don't feel too awful about leaving v11 as-is.
Per bug #17985 from Zuming Jiang. Back-patch to v12.
Discussion: https://postgr.es/m/17985-748b66607acd432e@postgresql.org
I removed the delay_upper_joins mechanism in commit b448f1c8d,
reasoning that it was only needed when we have a single-table
(SELECT ... WHERE) as the immediate RHS child of a left join,
and we could get rid of that by hoisting the WHERE condition into
the parent join's quals. However that new code missed a case:
we could have "foo LEFT JOIN ((SELECT ... WHERE) LEFT JOIN bar)",
and if the two left joins can be commuted then we now have the
problematic query shape. We can fix this too, easily enough,
by allowing the syntactically-lower left join to pass through
its parent qual location pointer recursively. That lets
prepjointree.c discard the SELECT by temporarily hoisting the
WHERE condition into the ancestor join's qual.
Per bug #17978 from Zuming Jiang.
Discussion: https://postgr.es/m/17978-12f3d93a55297266@postgresql.org
This avoids an assertion failure when outer joins are rearranged
per identity 3. Listing only the baserels from a PlaceHolderVar's
ph_lateral set should be enough to ensure that the required values
are available when we need to compute the PHV --- it's what we
did before inventing nullingrel sets, after all. It's a bit
unsatisfying; but with beta2 hard upon us, there's not time to
look for an aesthetically cleaner fix.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/CAMbWs48Jcw-NvnxT23WiHP324wG44DvzcH1j4hc0Zn+3sR9cfg@mail.gmail.com
It turns out that the fixes we applied in commits bfd332b3f
and 63e4f13d2 were not nearly enough to solve the problem.
We'd focused narrowly on subquery RTEs with lateral references,
but lateral references can occur in several other RTE kinds
such as function RTEs. Putting the same hack into half a dozen
code paths seems quite unattractive. Hence, revert the code changes
(but not the test cases) from those commits and instead solve it
centrally in identify_current_nestloop_params(), as Richard proposed
originally. This is a bit annoying because it could mask erroneous
nullingrels in nestloop params that are generated from non-LATERAL
parameterized paths; but on balance I don't see a better way.
Maybe at some future time we'll be motivated to find a more rigorous
approach to nestloop params, but that's not happening for beta2.
Richard Guo and Tom Lane
Discussion: https://postgr.es/m/CAMbWs48Jcw-NvnxT23WiHP324wG44DvzcH1j4hc0Zn+3sR9cfg@mail.gmail.com
Run pgindent and pgperltidy. It seems we're still some ways
away from all committers doing this automatically. Now that
we have a buildfarm animal that will whine about poorly-indented
code, we'll try to keep the tree more tidy.
Discussion: https://postgr.es/m/3156045.1687208823@sss.pgh.pa.us
Specify --no-locale and --encoding=UTF8 to be consistent with the
Makefile, which specifies NO_LOCALE=1. Fixes test for some locales
when meson is used and ICU is disabled. May have been an oversight in
e6927270cd.
Also switch argument order in unaccent/meson.build to make it
consistent in style.
Discussion: https://postgr.es/m/CABwTF4Wz41pNMJ9q3tpH=6mnvg6aopDU5Lzvers5=6=WJVekww@mail.gmail.com
Author: Gurjeet Singh
Author: Jeff Davis
This is a follow-up of f663b00, which has been committed to v13 and
v14, tweaking the TAP test for two-phase transactions so that it
provides coverage for the bug that has been fixed. This change is done
in its own commit for clarity, as v15 and HEAD did not show the
problematic behavior but still missed coverage for it.
While at it, this adds a comment about the dependency between the last
partial segment rename and RecoverPreparedTransactions() at the end of
recovery, as that can be easy to miss.
Author: Michael Paquier
Reviewed-by: Kyotaro Horiguchi
Discussion: https://postgr.es/m/743b9b45a2d4013bd90b6a5cba8d6faeb717ee34.camel@cybertec.at
Backpatch-through: 13
Starting with 4d330a61bb we can use posix_fallocate() to extend
files. Unfortunately, in some situations, e.g. on tmpfs filesystems, EINTR may
be returned. See also 4518c798b2.
To fix, add a retry path to FileFallocate(). In contrast to 4518c798b2 the
amount we extend by is limited and the extending may happen at a high
frequency, so disabling signals does not appear to be the correct path here.
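As a sketch of the retry shape (generic code, not the actual
FileFallocate() change; note that posix_fallocate() reports errors
through its return value rather than errno):

    #include <errno.h>
    #include <fcntl.h>

    /* Retry posix_fallocate() until it succeeds or fails with something
     * other than EINTR. */
    static int
    fallocate_with_retry(int fd, off_t offset, off_t len)
    {
        int     rc;

        do
        {
            rc = posix_fallocate(fd, offset, len);
        } while (rc == EINTR);

        return rc;       /* 0 on success, otherwise an errno-style value */
    }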
Also add retry paths to other file operations currently lacking them (around
fdatasync(), fsync(), ftruncate(), posix_fadvise(), sync_file_range(),
truncate()) - they are all documented or have been observed to return EINTR.
Even though most of these functions are used in the back branches, it does not
seem worth the risk to backpatch - outside of the new-to-16 case of
posix_fallocate() I am not aware of problem reports due to the lack of
retries.
Reported-by: Christoph Berg <myon@debian.org>
Discussion: https://postgr.es/m/ZEZDj1H61ryrmY9o@msg.df7cb.de
Backpatch: -
Instead of specifying LC_COLLATE='C' and LC_CTYPE='C', specify
LOCALE='C' which will also affect ICU. This makes pg_regress
consistent with recent changes to initdb in commit a14e75eb0b.
Fixes buildfarm failure.
Discussion: https://postgr.es/m/2458565.1686953169@sss.pgh.pa.us
Here we adjust relation_has_unique_index_for() so that it no longer makes
use of partial unique indexes as uniqueness proofs. It is incorrect to
use these because check_index_predicates(), when setting predOK, makes
use of not only baserestrictinfo quals as proofs, but also quals from
join conditions. For relation_has_unique_index_for()'s case, we need
to know that the relation is unique for a given set of columns before
any joins are evaluated, so if predOK was only set to true due to some
join qual, then it's unsafe to use such indexes in
relation_has_unique_index_for(). The final plan may not even make use
of that index, which could result in reading tuples that are not as
unique as the planner previously expected them to be.
Bug: #17975
Reported-by: Tor Erik Linnerud
Backpatch-through: 11, all supported versions
Discussion: https://postgr.es/m/17975-98a90c156f25c952%40postgresql.org
For CREATE DATABASE, make the LOCALE parameter apply regardless of the
provider used. Also affects initdb and createdb --locale arguments.
Previously, LOCALE (and --locale) only affected the database default
collation when using the libc provider.
Discussion: https://postgr.es/m/1a63084d-221e-4075-619e-6b3e590f673e@enterprisedb.com
Reviewed-by: Peter Eisentraut
Since commit b448f1c8d, we've been able to remove left joins
(that are otherwise removable) even when they are underneath
other left joins, a case that was previously prevented by a
delay_upper_joins check. This is a clear improvement, but
it has a surprising side-effect: it's now possible that there
are EquivalenceClasses whose relid sets mention the removed
baserel and/or outer join. If we fail to clean those up,
we may drop essential join quals due to not having any join
level that appears to satisfy their relid sets.
(It's not quite 100% clear that this was impossible before.
But the lack of complaints since we added join removal a dozen
years ago strongly suggests that it was impossible.)
Richard Guo and Tom Lane, per bug #17976 from Zuming Jiang
Discussion: https://postgr.es/m/17976-4b638b525e9a983b@postgresql.org
This applies the new error message API from commit 0873b2d354 to the
changes introduced by bbf9c282ce. The latter was committed shortly
after the former, so it probably didn't get the news in time.
Between v15 and now, this function's "else if" chain grew from 252 lines
to 592 lines, exceeding a compiler limit that manifests as "fatal error
C1026: parser stack overflow, program too complex (compiling source file
src/backend/nodes/readfuncs.c)". Use "if (...) return ...;" instead.
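As a schematic illustration of the restructuring (the real code is
generated and matches tokens through a MATCH macro; the names below are
placeholders):

    #include <string.h>

    /* placeholder readers standing in for the generated _read*() functions */
    static void *read_query(void)  { return NULL; }
    static void *read_append(void) { return NULL; }

    static void *
    parse_node_string_demo(const char *token)
    {
        /* Each branch is an independent "if (...) return ...;" statement,
         * so the compiler never has to parse one enormous nested
         * "else if" chain. */
        if (strcmp(token, "QUERY") == 0)
            return read_query();
        if (strcmp(token, "APPEND") == 0)
            return read_append();
        /* ... hundreds of further node types ... */
        return NULL;
    }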
Reviewed by Tom Lane, Peter Eisentraut and Michael Paquier. Not all
reviewers endorse this.
Discussion: https://postgr.es/m/20230607185458.GA1334487@rfd.leadboat.com
47bb9db75 modified ApplyRetrieveRule()'s conversion of a view's
original RTE_RELATION entry into an RTE_SUBQUERY one to retain relid,
rellockmode, and perminfoindex so that the executor can lock the view
and check its permissions. It seems better to also retain
relkind for cross-checking that the exception of an
RTE_SUBQUERY entry being allowed to carry relation details only
applies to views, so do so.
Bump catversion because this changes the output format of
RTE_SUBQUERY RTEs.
Suggested-by: David Steele <david@pgmasters.net>
Reviewed-by: David Steele <david@pgmasters.net>
Reviewed-by: Álvaro Herrera <alvherre@alvh.no-ip.org>
Discussion: https://postgr.es/m/3953179e-9540-e5d1-a743-4bef368785b0%40pgmasters.net
The issue fixed in commit bfd332b3f can also bite Memoize plans,
because of the separate copies of lateral reference Vars made
by paraminfo_get_equal_hashops. Apply the same hacky fix there.
(In passing, clean up shaky grammar in the existing comments
for this function.)
Richard Guo
Discussion: https://postgr.es/m/CAMbWs4-krwk0Wbd6WdufMAupuou_Ua73ijQ4XQCr1Mb5BaVtKQ@mail.gmail.com
rewriteRuleAction neglected to check for SubLink nodes in the
securityQuals of range table entries. This could lead to failing
to convert such a SubLink to a SubPlan, resulting in assertion
crashes or weird errors later in planning.
In passing, fix some poor coding in rewriteTargetView:
we should not pass the source parsetree's hasSubLinks
field to ReplaceVarsFromTargetList's outer_hasSubLinks.
ReplaceVarsFromTargetList knows enough to ignore that
when a Query node is passed, but it's still confusing
and bad precedent: if we did try to update that flag
we'd be updating a stale copy of the parsetree.
Per bug #17972 from Alexander Lakhin. This has been broken since
we added RangeTblEntry.securityQuals (although the presented test
case only fails back to 215b43cdc), so back-patch all the way.
Discussion: https://postgr.es/m/17972-f422c094237847d0@postgresql.org
Previously stats in the startup process would only get reported during
shutdown of the startup process. It has been that way for a long time, but
became a lot more noticeable with the new pg_stat_io view, which separates out
IO done by different backend types...
While reporting stats after replaying every XLOG_RUNNING_XACTS record
isn't the prettiest approach, it has the advantage of being quite easy.
Given that we're well past feature
freeze...
It's not a problem that we don't report stats more frequently with
wal_level=minimal; in that case stats can't be read before the startup process
has shut down.
Besides the above, this commit also changes pgstat_report_stat() to acquire
the timestamp with GetCurrentTimestamp() instead of
GetCurrentTransactionStopTimestamp().
Thanks to Melih Mutlu, Kyotaro Horiguchi for prototypes of other approaches to
solving this issue.
Reported-by: Fujii Masao <masao.fujii@oss.nttdata.com>
Discussion: https://postgr.es/m/5315aedc-fbca-1556-c5de-dc2e00b23a14@oss.nttdata.com
Commit 927d9abb6 purported to make datetime() accept any string
that could be output for a datetime value by to_jsonb(). But it
overlooked the possibility of fractional seconds being present,
so that cases as simple as to_jsonb(now()) would defeat it.
Fix by adding formats that include ".US" to the list in
executeDateTimeMethod(). (Note that while this is nominally
microseconds, it'll do the right thing for fractions with
fewer than six digits.)
In passing, re-order the list to restore the datatype ordering
specified in its comment. The violation accidentally did not
break anything; but the next edit might be less lucky, so add
more comments.
Per report from Tim Field. Back-patch to v13 where datetime()
was added, like the previous patch.
Discussion: https://postgr.es/m/014A028B-5CE6-4FDF-AC24-426CA6FC9CEE@mohiohio.com