optionally bind. I re-added the "statement:" label so people will
understand why the line is being printed (it is log_*statement
behavior).
Use single quotes around bind values instead of double quotes, and double
any literal single quotes appearing in bind values (and document that). I also
made use of the DETAIL line to produce much cleaner output.
such as debugging and performance measurement. This consists of two features:
a table of "rendezvous variables" that allows separately-loaded shared
libraries to communicate, and a new GUC setting "local_preload_libraries"
that allows libraries to be loaded into specific sessions without explicit
cooperation from the client application. To make local_preload_libraries
as flexible as possible, we do not restrict its use to superusers; instead,
it is restricted to load only libraries stored in $libdir/plugins/. The
existing LOAD command has also been modified to allow non-superusers to
LOAD libraries stored in this directory.
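As a rough sketch of how the two mechanisms fit together (the library name is
a placeholder; the file is assumed to live under $libdir/plugins/):
    # postgresql.conf: loaded into every new session, no client cooperation needed
    local_preload_libraries = 'my_debug_plugin'
    -- or explicitly from SQL; non-superusers may now LOAD plugin libraries:
    LOAD 'my_debug_plugin';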
This patch also renames the existing GUC variable preload_libraries to
shared_preload_libraries (after a suggestion by Simon Riggs) and does some
code refactoring in dfmgr.c to improve clarity.
Korry Douglas, with a little help from Tom Lane.
when what's being executed is a COMMIT or ROLLBACK. Per report from
Sergey Koposov. Backpatch to 8.1; 8.0 and before don't have the bug
due to lack of any logging at all here.
o print user name for all
o print portal name if defined for all
o print query for all
o reduce log_statement header to single keyword
o print bind parameters as DETAIL if text mode
it's handled just about like timezone; in particular, don't try
to read anything during InitializeGUCOptions. Should solve current
startup failure on Windows, and avoid wasted cycles if a nondefault
setting is specified in postgresql.conf too. Possibly we need to
think about a more general solution for handling 'expensive to set'
GUC options.
changing semantics too much. statement_timestamp is now set immediately
upon receipt of a client command message, and the various places that used
to do their own gettimeofday() calls to mark command startup are referenced
to that instead. I have also made stats_command_string use that same
value for pg_stat_activity.query_start for both the command itself and
its eventual replacement by <IDLE> or <idle in transaction>. There was
some debate about that, but no argument that seemed convincing enough to
justify an extra gettimeofday() call.
already-aborted transaction block. GetSnapshotData throws an Assert if
not in a valid transaction; hence we mustn't attempt to set a snapshot
for the function until after checking for aborted transaction. This is
harmless AFAICT if Asserts aren't enabled (GetSnapshotData will compute
a bogus snapshot, but it doesn't matter since HandleFunctionRequest will
throw an error shortly anyway). Hence, not a major bug.
Along the way, add some ability to log fastpath calls when statement
logging is turned on. This could probably stand to be improved further,
but not logging anything is clearly undesirable.
Backpatched as far as 8.0; bug doesn't exist before that.
transaction_timestamp() (just like now()).
Also update statement_timeout to mention that it is statement arrival time
that is measured.
Catalog version updated.
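A minimal sketch of the statement_timestamp()/transaction_timestamp()
distinction within one transaction (timestamps omitted; the point is which
values move):
    BEGIN;
    SELECT statement_timestamp(), transaction_timestamp();  -- equal for the first command
    -- ...client sends another command a little later...
    SELECT statement_timestamp(), transaction_timestamp();  -- first value has advanced;
                                                             -- second (same as now()) has not
    COMMIT;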
not named ones, and replace linear searches of the list with array indexing.
The named-parameter support has been dead code for many years anyway,
and recent profiling suggests that the searching was costing a noticeable
amount of performance for complex queries.
8.0), and add a suggestion to use log_min_error_statement for this
purpose. I also fixed the code so the first EXECUTE has its prepare,
rather than the last, which is what the current code did. Also remove the
"protocol" prefix for SQL EXECUTE output because it is not accurate.
Backpatch to 8.1.X.
functions are not strict, they will be called (passing a NULL first parameter)
during any attempt to input a NULL value of their datatype. Currently, all
our input functions are strict and so this commit does not change any
behavior. However, this will make it possible to build domain input functions
that centralize checking of domain constraints, thereby closing numerous holes
in our domain support, as per previous discussion.
While at it, I took the opportunity to introduce convenience functions
InputFunctionCall, OutputFunctionCall, etc to use in code that calls I/O
functions. This eliminates a lot of grotty-looking casts, but the main
motivation is to make it easier to grep for these places if we ever need
to touch them again.
during parse analysis, not only errors detected in the flex/bison stages.
This is per my earlier proposal. This commit includes all the basic
infrastructure, but locations are only tracked and reported for errors
involving column references, function calls, and operators. More could
be done later but this seems like a good set to start with. I've also
moved the ReportSyntaxErrorPosition logic out of psql and into libpq,
which should make it available to more people --- even within psql this
is an improvement because warnings weren't handled by ReportSyntaxErrorPosition.
cursors. Patch from Joachim Wieland, review and editorialization by Neil
Conway. The view lists cursors defined by DECLARE CURSOR, using SPI, or
via the Bind message of the frontend/backend protocol. This means the
view does not list the unnamed portal or the portal created to implement
EXECUTE. Because we do list SPI portals, there might be more rows in
this view than you might expect if you are using SPI implicitly (e.g.
via a procedural language).
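A rough illustration, assuming the view exposes at least name and statement
columns (exact output will differ):
    BEGIN;
    DECLARE c CURSOR FOR SELECT relname FROM pg_class;
    SELECT name, statement FROM pg_cursors;
    -- expect one row: name "c", statement showing the DECLARE CURSOR text
    COMMIT;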
Per recent discussion on -hackers, the query string included in the
view for cursors defined by DECLARE CURSOR is based on
debug_query_string. That means it is not accurate if multiple queries
separated by semicolons are submitted as one query string. However,
there doesn't seem to be a trivial fix for that: debug_query_string
is better than nothing. I also changed SPI_cursor_open() to include
the source text for the portal it creates: AFAICS there is no reason
not to do this.
Update the documentation and regression tests, bump the catversion.
access information about the prepared statements that are available
in the current session. Original patch from Joachim Wieland, various
improvements by Neil Conway.
The "statement" column of the view contains the literal query string
sent by the client, without any rewriting or pretty printing. This
means that prepared statements created via SQL will be prefixed with
"PREPARE ... AS ", whereas those prepared via the FE/BE protocol will
not. That is unfortunate, but discussion on -patches did not yield an
efficient way to improve this, and there is some merit in returning
exactly what the client sent to the backend.
Catalog version bumped, regression tests updated.
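For example (a hedged sketch; only the name and statement columns are shown):
    PREPARE getrel (oid) AS SELECT relname FROM pg_class WHERE oid = $1;
    SELECT name, statement FROM pg_prepared_statements;
    -- expect one row: name "getrel", statement beginning "PREPARE getrel (oid) AS ..."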
an LWLock instead of a spinlock. This hardly matters on Unix machines
but should improve startup performance on Windows (or any port using
EXEC_BACKEND). Per previous discussion.
messages, when client attempts to execute these outside a transaction (start
one) or in a failed transaction (reject message, except for COMMIT/ROLLBACK
statements which we can handle). Per report from Francisco Figueiredo Jr.
comment lines were output as too long, and update typedefs for the /lib
directory. Also fix case where identifiers were used as variable names
in the backend, but as typedefs in ecpg (favor the backend for
indenting).
Backpatch to 8.1.X.
anything but transaction-exiting commands (ROLLBACK etc). We already rejected
Parse and Execute in such cases, so there seems little point in allowing Bind.
This prevents at least an Assert failure, and probably worse things, since
there's a lot of infrastructure that doesn't work when not in a live
transaction. We can also simplify the Bind logic a bit by rejecting messages
with a nonzero number of parameters, instead of the former kluge to silently
substitute NULL for each parameter. Per bug #2033 from Joel Stevenson.
since it can take a fair amount of time and this can confuse boot scripts
that expect postmaster.pid to appear quickly. Move initialization of SSL
library and preloaded libraries to after that point, too, just for luck.
Per reports from Tony Caduto and others.
delay and limit, both as global GUCs and as table-specific entries in
pg_autovacuum. stats_reset_on_server_start is now OFF by default,
but a reset is forced if we did WAL replay. XID-wrap vacuums do not
ANALYZE, but do FREEZE if it's a template database. Alvaro Herrera
exit, instead of trying to take shortcuts. Introduce some additional
shutdown callback routines to eliminate kluges like having ProcKill
be responsible for shutting down the buffer manager. Ensure that the
order of operations during shutdown is predictable and what you would
expect given the module layering.
optional arguments as text input functions, ie, typioparam OID and
atttypmod. Make all the datatypes that use typmod enforce it the same
way in typreceive as they do in typinput. This fixes a problem with
failure to enforce length restrictions during COPY FROM BINARY.
chdir into PGDATA and subsequently use relative paths instead of absolute
paths to access all files under PGDATA. This seems to give a small
performance improvement, and it should make the system more robust
against naive DBAs doing things like moving a database directory that
has a live postmaster in it. Per recent discussion.
current time: provide a GetCurrentTimestamp() function that returns
current time in the form of a TimestampTz, instead of separate time_t
and microseconds fields. This is what all the callers really want
anyway, and it eliminates low-level dependencies on AbsoluteTime,
which is a deprecated datatype that will have to disappear eventually.
performance problem pointed out by phil@vodafone: to wit, we were
spending O(N^2) time to check dropped-ness in an N-deep join tree,
even in the case where the tree was freshly constructed and couldn't
possibly mention any dropped columns. Instead of recursing in
get_rte_attribute_is_dropped(), change the data structure definition:
the joinaliasvars list of a JOIN RTE must have a NULL Const instead
of a Var at any position that references a now-dropped column. This
costs nothing during normal parse-rewrite-plan path, and instead we
have a linear-time update to make when loading a stored rule that
might contain now-dropped columns. While at it, move the responsibility
for acquiring locks on relations referenced by rules into this separate
function (which I therefore chose to call AcquireRewriteLocks).
This saves effort --- namely, duplicated lock grabs in parser and rewriter
--- in the normal path at a cost of one extra non-locked heap_open()
in the stored-rule path; seems a good tradeoff. A fringe benefit is
that it is now *much* clearer that we acquire lock on relations referenced
in rules before we make any rewriter decisions based on their properties.
(I don't know of any bug of that ilk, but it wasn't exactly clear before.)
to just around the bare recv() call that gets a command from the client.
The former placement in PostgresMain was unsafe because the intermediate
processing layers (especially SSL) use facilities such as malloc that are
not necessarily re-entrant. Per report from counterstorm.com.
logic operations during planning. Seems cleaner to create two new Path
node types, instead --- this avoids duplication of cost-estimation code.
Also, create an enable_bitmapscan GUC parameter to control use of bitmap
plans.
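The new GUC behaves like the other enable_* planner switches; a quick sketch
(table and column names are hypothetical):
    SET enable_bitmapscan = off;   -- discourage bitmap scans for this session
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42 OR customer_id = 99;
    SET enable_bitmapscan = on;    -- restore the default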
in GetNewTransactionId(). Since the limit value has to be computed
before we run any real transactions, this requires adding code to database
startup to scan pg_database and determine the oldest datfrozenxid.
This can conveniently be combined with the first stage of an attack on
the problem that the 'flat file' copies of pg_shadow and pg_group are
not properly updated during WAL recovery. The code I've added to
startup resides in a new file src/backend/utils/init/flatfiles.c, and
it is responsible for rewriting the flat files as well as initializing
the XID wraparound limit value. This will eventually allow us to get
rid of GetRawDatabaseInfo too, but we'll need an initdb so we can add
a trigger to pg_database.
Also performed an initial run through of upgrading our Copyright date to
extend to 2005 ... first run here was very simple ... change everything
where: grep 1996-2004 && the word 'Copyright' ... scanned through the
generated list with 'less' first, and after, to make sure that I only
picked up the right entries ...
to be processed by GUC before InitPostgres, because any required lookup
of the encoding conversion function has to be done during InitializeClientEncoding.
So, I broke this last week by moving GUC processing to after InitPostgres :-(.
What we can do as a compromise is process non-SUSET variables during
command line scanning (the same as before), and postpone the processing
of only SUSET variables. None of the SUSET variables need to be set
before InitPostgres.
collector until the transaction commits. Per recent discussion, this
should avoid confusing autovacuum when an updating transaction runs for
a long time.
plain SUSET instead. Also delay processing of options received in
client connection request until after we know if the user is a superuser,
so that SUSET values can be set that way by legitimate superusers.
Per recent discussion.
Refactor code into something reasonably understandable, cause
use of the feature to not fail in standalone backends or in
EXEC_BACKEND case, fix sloppy guc.c table entries, make the
documentation minimally usable.
mode see a fresh snapshot for each command in the function, rather than
using the latest interactive command's snapshot. Also, suppress fresh
snapshots as well as CommandCounterIncrement inside STABLE and IMMUTABLE
functions, instead using the snapshot taken for the most closely nested
regular query. (This behavior is only sane for read-only functions, so
the patch also enforces that such functions contain only SELECT commands.)
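A rough sketch of what this means for SQL-language function authors (table
and function names are illustrative):
    -- allowed: read-only body; runs against the calling query's snapshot
    CREATE FUNCTION active_count() RETURNS bigint AS
      'SELECT count(*) FROM accounts WHERE active'
    LANGUAGE sql STABLE;

    -- now disallowed: the UPDATE inside a STABLE function;
    -- declare such a function VOLATILE so it takes fresh snapshots instead
    CREATE FUNCTION bump_hits() RETURNS integer AS
      'UPDATE accounts SET hits = hits + 1;
       SELECT 1'
    LANGUAGE sql STABLE;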
As per my proposal of 6-Sep-2004; I note that I floated essentially the
same proposal on 19-Jun-2002, but that discussion tailed off without any
action. Since 8.0 seems like the right place to be taking possibly
nontrivial backwards compatibility hits, let's get it done now.
rather than when returning to the idle loop. This makes no particular
difference for interactively-issued queries, but it makes a big difference
for queries issued within functions: trigger execution now occurs before
the calling function is allowed to proceed. This responds to numerous
complaints about nonintuitive behavior of foreign key checking, such as
http://archives.postgresql.org/pgsql-bugs/2004-09/msg00020.php, and
appears to be required by the SQL99 spec.
Also take the opportunity to simplify the data structures used for the
pending-trigger list, rename them for more clarity, and squeeze out a
bit of space.
executed. Previously, the DECLARE would succeed but subsequent FETCHes
would fail since the parameter values supplied to DECLARE were not
propagated to the portal created for the cursor.
In support of this, add type Oids to ParamListInfo entries, which seems
like a good idea anyway since code that extracts a value can double-check
that it got the type of value it was expecting.
Oliver Jowett, with minor editorialization by Tom Lane.
possible to trap an error inside a function rather than letting it
propagate out to PostgresMain. You still have to use AbortCurrentTransaction
to clean up, but at least the error handling itself will cooperate.
SAVEPOINT/RELEASE/ROLLBACK-TO syntax. (Alvaro)
Cause COMMIT of a failed transaction to report ROLLBACK instead of
COMMIT in its command tag. (Tom)
Fix a few loose ends in the nested-transactions stuff.
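A compact illustration of both points (table name is hypothetical):
    BEGIN;
    SAVEPOINT s1;
    INSERT INTO t VALUES (1);      -- suppose this fails
    ROLLBACK TO SAVEPOINT s1;      -- recover and continue inside the same transaction
    COMMIT;

    BEGIN;
    SELECT 1/0;                    -- error puts the transaction in the aborted state
    COMMIT;                        -- now reports ROLLBACK as its command tag, not COMMIT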
keep track of portal-related resources separately from transaction-related
resources. This allows cursors to work in a somewhat sane fashion with
nested transactions. For now, cursor behavior is non-subtransactional,
that is a cursor's state does not roll back if you abort a subtransaction
that fetched from the cursor. We might want to change that later.
performance front, but with feature freeze upon us I think it's time to
drive a stake in the ground and say that this will be in 7.5.
Alvaro Herrera, with some help from Tom Lane.
until Bind is received, so that actual parameter values are visible to the
planner. Make use of the parameter values for estimation purposes (but
don't fold them into the actual plan). This buys back most of the
potential loss of plan quality that ensues from using out-of-line
parameters instead of putting literal values right into the query text.
This patch creates a notion of constant-folding expressions 'for
estimation purposes only', in which case we can be more aggressive than
the normal eval_const_expressions() logic can be. Right now the only
difference in behavior is inserting bound values for Params, but it will
be interesting to look at other possibilities. One that we've seen
come up repeatedly is reducing now() and related functions to current
values, so that queries like ... WHERE timestampcol > now() - '1 day'
have some chance of being planned effectively.
Oliver Jowett, with some kibitzing from Tom Lane.
of a composite type to get that type's OID as their second parameter,
in place of typelem which is useless. The actual changes are mostly
centralized in getTypeInputInfo and siblings, but I had to fix a few
places that were fetching pg_type.typelem for themselves instead of
using the lsyscache.c routines. Also, I renamed all the related variables
from 'typelem' to 'typioparam' to discourage people from assuming that
they necessarily contain array element types.
place of time_t, as per prior discussion. The behavior does not change
on machines without a 64-bit-int type, but on machines with one, which
is most, we are rid of the bizarre boundary behavior at the edges of
the 32-bit-time_t range (1901 and 2038). The system will now treat
times over the full supported timestamp range as being in your local
time zone. It may seem a little bizarre to consider that times in
4000 BC are PST or EST, but this is surely at least as reasonable as
propagating Gregorian calendar rules back that far.
I did not modify the format of the zic timezone database files, which
means that for the moment the system will not know about daylight-savings
periods outside the range 1901-2038. Given the way the files are set up,
it's not a simple decision like 'widen to 64 bits'; we have to actually
think about the range of years that need to be supported. We should
probably inquire what the plans of the upstream zic people are before
making any decisions of our own.
than being random pieces of other files. Give bgwriter responsibility
for all checkpoint activity (other than a post-recovery checkpoint);
so this child process absorbs the functionality of the former transient
checkpoint and shutdown subprocesses. While at it, create an actual
include file for postmaster.c, which for some reason never had its own
file before.
about a third, make it work on non-Windows platforms again. (But perhaps
I broke the WIN32 code, since I have no way to test that.) Fold all the
paths that fork postmaster child processes to go through the single
routine SubPostmasterMain, which takes care of resurrecting the state that
would normally be inherited from the postmaster (including GUC variables).
Clean up some places where there's no particularly good reason for the
EXEC and non-EXEC cases to work differently. Take care of one or two
FIXMEs that remained in the code.
In the past, we used a 'Lispy' linked list implementation: a "list" was
merely a pointer to the head node of the list. The problem with that
design is that it makes lappend() and length() linear time. This patch
fixes that problem (and others) by maintaining a count of the list
length and a pointer to the tail node along with each head node pointer.
A "list" is now a pointer to a structure containing some meta-data
about the list; the head and tail pointers in that structure refer
to ListCell structures that maintain the actual linked list of nodes.
The function names of the list API have also been changed to, I hope,
be more logically consistent. By default, the old function names are
still available; they will be disabled-by-default once the rest of
the tree has been updated to use the new API names.
(SIGUSR1, which we have not been using recently) instead of piggybacking
on SIGUSR2-driven NOTIFY processing. This has several good results:
the processing needed to drain the sinval queue is a lot less than the
processing needed to answer a NOTIFY; there's less contention since we
don't have a bunch of backends all trying to acquire exclusive lock on
pg_listener; backends that are sitting inside a transaction block can
still drain the queue, whereas NOTIFY processing can't run if there's
an open transaction block. (This last is a fairly serious issue that
I don't think we ever recognized before --- with clients like JDBC that
tend to sit with open transaction blocks, the sinval queue draining
mechanism never really worked as intended, probably resulting in a lot
of useless cache-reset overhead.) This is the last of several proposed
changes in response to Philip Warner's recent report of sinval-induced
performance problems.
and should do now that we control our own destiny for timezone handling,
but this commit gets the bulk of the picayune diffs in place.
Magnus Hagander and Tom Lane.
timezone code and other places.
Remove elog() calls from find_my_exec; do fprintf(stderr) instead. We
can then remove the exec.c handling in the makefile because it doesn't
have to be built to suppress elog calls.
find_my_exec/find_other_exec(). Remove passing of progname to these
functions as they can find that out from argv[0], which they already
have.
Make get_progname return const char *, and update all progname variables
to be const char *.
all the code that looks for other binaries. I moved FindExec into
port/exec.c (and renamed it to find_my_binary()). I also added
find_other_binary, which looks for another binary in the same directory as
the calling program and checks the version string.
The only behavior change is that initdb and pg_dump used to look in the
hard-coded bindir directory if they couldn't find the requested binary in
the same directory as the caller; the new code throws an error instead.
The old behavior seemed too error-prone for version mismatches.
* removed a few redundant defines
* get_user_name safe under win32
* rationalized pipe read EOF for win32 (UPDATED PATCH USED)
* changed all backend instances of sleep() to pg_usleep
- except for the SLEEP_ON_ASSERT in assert.c, as it would exceed a
32-bit long [Note to patcher: If a SLEEP_ON_ASSERT of 2000 seconds is
acceptable, please replace with pg_usleep(2000000000L)]
I added a comment to that part of the code:
/*
* It would be nice to use pg_usleep() here, but only does 2000 sec
* or 33 minutes, which seems too short.
*/
sleep(1000000);
Claudio Natoli
> >>with allowed values of "all, mod, ddl, none" with default "none".
OK, here is a patch that implements #1. Here is sample output:
test=> set client_min_messages = 'log';
SET
test=> set log_statement = 'mod';
SET
test=> select 1;
 ?column?
----------
        1
(1 row)
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> update test set x=1;
LOG: statement: update test set x=1;
ERROR: relation "test" does not exist
test=> copy test from '/tmp/x';
LOG: statement: copy test from '/tmp/x';
ERROR: relation "test" does not exist
test=> copy test to '/tmp/x';
ERROR: relation "test" does not exist
test=> prepare xx as select 1;
PREPARE
test=> prepare xx as update x set y=1;
LOG: statement: prepare xx as update x set y=1;
ERROR: relation "x" does not exist
test=> explain analyze select 1;;
                                     QUERY PLAN
------------------------------------------------------------------------------------
 Result (cost=0.00..0.01 rows=1 width=0) (actual time=0.006..0.007 rows=1 loops=1)
 Total runtime: 0.046 ms
(2 rows)
test=> explain analyze update test set x=1;
LOG: statement: explain analyze update test set x=1;
ERROR: relation "test" does not exist
test=> explain update test set x=1;
ERROR: relation "test" does not exist
It checks PREPARE and EXPLAIN ANALYZE too. The log_statement values are
'none', 'mod', 'ddl', and 'all'. For 'all', it prints before the query
is parsed, and for ddl/mod, it does it right after parsing using the
node tag (or command tag for CREATE/ALTER/DROP), so any non-parse errors
will print after the log line.
is measured in kilobytes and checked against actual physical execution
stack depth, as per my proposal of 30-Dec. This gives us a fairly
bulletproof defense against crashing due to runaway recursive functions.
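Assuming the parameter in question is max_stack_depth (with its value in
kilobytes), a session can inspect or adjust it like this; setting it above the
kernel's actual stack rlimit would defeat the protection:
    SHOW max_stack_depth;
    SET max_stack_depth = 4096;    -- i.e. 4MB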
#log_line_prefix = '' # e.g. '<%u%%%d> '
# %u=user name %d=database name
# %r=remote host and port
# %p=PID %t=timestamp %i=command tag
# %c=session id %l=session line number
# %s=session start timestamp
# %x=stop here in non-session processes
# %%='%'
Andrew Dunstan
Make btree index creation and initial validation of foreign-key constraints
use maintenance_work_mem rather than work_mem as their memory limit.
Add some code to guc.c to allow these variables to be referenced by their
old names in SHOW and SET commands, for backwards compatibility.
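For example, raising the limit before a big index build, with an old spelling
still accepted (assuming the pre-rename names were sort_mem and vacuum_mem):
    SET maintenance_work_mem = 262144;   -- value in kB; used by CREATE INDEX and initial FK checks
    SHOW vacuum_mem;                     -- old name, expected to report the same setting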
whereToSendOutput instead because they are really inquiring about
the correct client communication protocol. Update some comments.
This is pointing towards supporting regular FE/BE client protocol
in a standalone backend, per discussion a month or so back.
PostmasterPid variable, which gets set (early) in PostmasterMain
getppid would not be the postmaster?
[fork/exec] Implements processCancelRequest by keeping an array of
pid/cancel_key structs in shared mem
[fork/exec] Moves AttachSharedMemoryAndSemaphores call for backends into
SubPostmasterMain
[win32] Implements reaper/waitpid by keeping arrays of child pids and
handles in postmaster local mem
- this item is largely untested, for reasons which should be
obvious, but appears sound
[win32/all] Added extern for pgpipe in Win32 case, and changed the second
pipe call (which seems to have been missed earlier) to pgpipe
[win32] #define'd ftruncate to chsize in the Win32 case
[win32] PG_USLEEP for Win32 has a misplaced paren. Fixed.
[win32] DLLIMPORT handling for MingW case
Claudio Natoli
BackendFork/SSDataBase/pgstat) startup, to allow fork/exec calls to
closely mimic (the soon to be provided) Win32 CreateProcess equivalent
calls.
Claudio Natoli
on 64-bit Solaris. Use a non-system-dependent datatype for UsedShmemSegID,
namely unsigned long (which we were already assuming could hold a shmem
key anyway, cf RecordSharedMemoryInLockFile).
protocol, per report from Igor Shevchenko. NOTIFY thought it could
do its thing if transaction blockState is TBLOCK_DEFAULT, but in
reality it had better check the low-level transaction state is
TRANS_DEFAULT as well. Formerly it was not possible to wait for the
client in a state where the first is true and the second is not ...
but now we can have such a state. Minor cleanup in StartTransaction()
as well.
every string, especially if some of the output should be fixed-format
machine-readable. This needs to be more carefully sorted out. Also, make
the help message generated by --help-config -h be more similar in style to
the others.
now able to cope with assigning new relfilenode values to nailed-in-cache
indexes, so they can be reindexed using the fully crash-safe method. This
leaves only shared system indexes as special cases. Remove the 'index
deactivation' code, since it provides no useful protection in the shared-
index case. Require reindexing of shared indexes to be done in standalone
mode, but remove other restrictions on REINDEX. -P (IgnoreSystemIndexes)
now prevents using indexes for lookups, but does not disable index updates.
It is therefore safe to allow it from PGOPTIONS. Upshot: reindexing system catalogs
can be done without a standalone backend for all cases except
shared catalogs.
config file if it exists. This was already discussed as being a good
idea, and now seems the cleanest way to deal with initdb-time failures
on machines with small SHMMAX. (The submitted patches instead modified
initdb.sh to pass the correct sizing parameters, but that would still
leave standalone backends prone to failure later. An admin who needs
to use a standalone backend has enough trouble already, he shouldn't
have to manually configure its shmem settings...)
max_connections at initdb time. Get rid of DEF_NBUFFERS and DEF_MAXBACKENDS
macros, which aren't doing anything useful anymore, and put more likely
defaults into postgresql.conf.sample.
just before CommitTransactionCommand(). This is a more sensible place
to put it since commit discards a lot of contexts, and we'd not find
out about stomps affecting only transaction-local contexts.
heuristic determination of day vs month in date/time input. Add the
ability to specify that input is interpreted as yy-mm-dd order (which
formerly worked, but only for yy greater than 31). DateStyle's input
component now has the preferred spellings DMY, MDY, or YMD; the older
keywords European and US are now aliases for the first two of these.
Per recent discussions on pgsql-general.
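A short sketch of the new input-order spellings:
    SET datestyle = 'ISO, DMY';    -- ambiguous input read as day-month-year (old keyword: European)
    SELECT '02/01/2004'::date;     -- 2 January 2004
    SET datestyle = 'ISO, MDY';    -- month-day-year (old keyword: US)
    SELECT '02/01/2004'::date;     -- 1 February 2004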
without needing a running backend. Reorder postgresql.conf.sample
to match new layout of runtime.sgml. This commit re-adds work lost
in Wednesday's crash.
only remnant of this failed experiment is that the server will take
SET AUTOCOMMIT TO ON. Still TODO: provide some client-side autocommit
logic in libpq.
handle multiple 'formats' for data I/O. Restructure CommandDest and
DestReceiver stuff one more time (it's finally starting to look a bit
clean though). Code now matches latest 3.0 protocol document as far
as message formats go --- but there is no support for binary I/O yet.
of Describe on a prepared statement. This was in the original 3.0
protocol proposal, but I took it out for reasons that seemed good at
the time. Put it back per yesterday's pghackers discussion.
DestReceiver pointers instead of just CommandDest values. The DestReceiver
is made at the point where the destination is selected, rather than
deep inside the executor. This cleans up the original kluge implementation
of tstoreReceiver.c, and makes it easy to support retrieving results
from utility statements inside portals. Thus, you can now do fun things
like Bind and Execute a FETCH or EXPLAIN command, and it'll all work
as expected (e.g., you can Describe the portal, or use Execute's count
parameter to suspend the output partway through). Implementation involves
stuffing the utility command's output into a Tuplestore, which would be
kind of annoying for huge output sets, but should be quite acceptable
for typical uses of utility commands.
the column by table OID and column number, if it's a simple column
reference. Along the way, get rid of reskey/reskeyop fields in Resdoms.
Turns out that representation was not convenient for either the planner
or the executor; we can make the planner deliver exactly what the
executor wants with no more effort.
initdb forced due to change in stored rule representation.
Both plannable queries and utility commands are now always executed
within Portals, which have been revamped so that they can handle the
load (they used to be good only for single SELECT queries). Restructure
code to push command-completion-tag selection logic out of postgres.c,
so that it won't have to be duplicated between simple and extended queries.
initdb forced due to addition of a field to Query nodes.
that the types of untyped string-literal constants are deduced (ie,
when coerce_type is applied to 'em, that's what the type must be).
Remove the ancient hack of storing the input Param-types array as a
global variable, and put the info into ParseState instead. This touches
a lot of files because of adjustment of routine parameter lists, but
it's really not a large patch. Note: PREPARE statement still insists on
exact specification of parameter types, but that could easily be relaxed
now, if we wanted to do so.
I had inadvertently omitted it while rearranging things to support
length-counted incoming messages. Also, change the parser's API back
to accepting a 'char *' query string instead of 'StringInfo', as the
latter wasn't buying us anything except overhead. (I think when I put
it in I had some notion of making the parser API 8-bit-clean, but
seeing that flex depends on null-terminated input, that's not really
ever gonna happen.)
rewritten and the protocol is changed, but most elog calls are still
elog calls. Also, we need to contemplate mechanisms for controlling
all this functionality --- eg, how much stuff should appear in the
postmaster log? And what API should libpq expose for it?
have length words. COPY OUT reimplemented per new protocol: it doesn't
need \. anymore, thank goodness. COPY BINARY to/from frontend works,
at least as far as the backend is concerned --- libpq's PQgetline API
is not up to snuff, and will have to be replaced with something that is
null-safe. libpq uses message length words for performance improvement
(no cycles wasted rescanning long messages), but not yet for error
recovery.
with variable-width fields. No more truncation of long user names.
Also, libpq can now send its environment-variable-driven SET commands
as part of the startup packet, saving round trips to server.
Add ALTER SEQUENCE to modify min/max/increment/cache/cycle values
Also updated create sequence docs to mention NO MINVALUE, & NO MAXVALUE.
New Files:
doc/src/sgml/ref/alter_sequence.sgml
src/test/regress/expected/sequence.out
src/test/regress/sql/sequence.sql
ALTER SEQUENCE is NOT transactional. It behaves similarly to setval().
It matches the proposed SQL200N spec, as well as Oracle in most ways --
Oracle lacks RESTART WITH for some strange reason.
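For example (sequence name is illustrative):
    ALTER SEQUENCE serial_seq
        INCREMENT BY 10
        MINVALUE 1
        MAXVALUE 100000
        CACHE 20
        CYCLE;
    -- takes effect immediately and, like setval(), is not undone by a later ROLLBACK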
--
Rod Taylor <rbt@rbt.ca>
utility statement (DeclareCursorStmt) with a SELECT query dangling from
it, rather than a SELECT query with a few unusual fields in it. Add
code to determine whether a planned query can safely be run backwards.
If DECLARE CURSOR specifies SCROLL, ensure that the plan can be run
backwards by adding a Materialize plan node if it can't. Without SCROLL,
you get an error if you try to fetch backwards from a cursor that can't
handle it. (There is still some discussion about what the exact
behavior should be, but this is necessary infrastructure in any case.)
Along the way, make EXPLAIN DECLARE CURSOR work.
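A quick sketch of the user-visible behavior:
    BEGIN;
    DECLARE c SCROLL CURSOR FOR SELECT relname FROM pg_class ORDER BY 1;
    FETCH FORWARD 5 FROM c;
    FETCH BACKWARD 2 FROM c;   -- works because SCROLL guarantees a backward-capable plan
    CLOSE c;
    COMMIT;
(Omitting SCROLL means the backward FETCH may instead raise an error if the
underlying plan cannot run backwards.)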
at database shutdown, and then load it again at database startup. This
preserves our hard-won knowledge of free space across restarts (given
an orderly shutdown, that is).
codes, per discussion from last March. parse.h should now be included
*only* by gram.y, scan.l, keywords.c, parser.c. This prevents surprising
misbehavior after seemingly-trivial grammar adjustments.
"canceled", so I changed the one remaining usage of the British
spelling ("cancelled") over to the former, and updated the translation
files appropriately.
Neil Conway
between signal handler and enable/disable code, avoid accumulation of
timing error due to trying to maintain remaining-time instead of
absolute-end-time, disable timeout before commit not after.
all utility statement types *except* a short list, per discussion a few
days ago. Add missing SetQuerySnapshot calls in VACUUM and REINDEX,
and guard against calling REINDEX DATABASE from a function (has same
problem as VACUUM).
command status at the interactive level. SPI_processed, etc are set
in the same way as the returned command status would have been set if
the same querystring were issued interactively. Per gripe from
Michael Paesold 25-Sep-02.
into postgres.c; make sure it happens for all cases that seem to need it.
Perhaps it would be better to explicitly exclude just a few utility
statement types from setting a snapshot?
ProcKill instead, where we still have a PGPROC with which to wait on
LWLocks. This fixes 'can't wait without a PROC structure' failures
occasionally seen during backend shutdown (I'm surprised they weren't
more frequent, actually). Add an Assert() to LWLockAcquire to help
catch any similar mistakes in future. Fix failure to update MyProcPid
for standalone backends and pgstat processes.
> moment, but they used to be used; I think the correct response is to
> put back the missing counter increments, not rip out the counters.
Ok, fair enough. It's worth noting that they've been broken for a
while -- for example, the HashJoin counter increments were broken when
you committed r1.20 of executor/nodeHashJoin.c in May of '99.
I've attached a revised patch that doesn't remove the counters (but
doesn't increment them either: I'm not sure of all the places where
the counter should be incremented).
Neil Conway
(notify/SI-overrun interrupt) while it is in process of doing proc_exit,
it is possible for Async_NotifyHandler() to try to start a transaction
when one is already running. This leads to Asserts() or worse. I think
it may only be possible to occur when frontend synchronization is lost
(ie, the elog(FATAL) in SocketBackend() fires), but that is a standard
occurrence after error during COPY. In any case, I have seen this
failure occur during regression tests, so it is definitely possible.
to false provides more SQL-spec-compliant behavior than we had before.
I am not sure that setting it false is actually a good idea yet; there
is a lot of client-side code that will probably be broken by turning
autocommit off. But it's a start.
Loosely based on a patch by David Van Wie.
> There's no longer a separate call to heap_storage_create in that routine
> --- the right place to make the test is now in the storage_create
> boolean parameter being passed to heap_create. A simple change, but
> it passeth patch's understanding ...
Thanks.
Attached is a patch against cvs tip as of 8:30 PM PST or so. Turned out
that even after fixing the failed hunks, there was a new spot in
bufmgr.c which needed to be fixed (related to temp relations;
RelationUpdateNumberOfBlocks). But thankfully the regression test code
caught it :-)
Joe Conway
error handling, and simplifies the code that remains. Apparently,
the code that left Berkeley had a whole "error handling subsystem",
with exceptions and whatnot. Since we don't use that anymore,
there's no reason to keep it around.
The regression tests pass with the patch applied. Unless anyone
sees a problem, please apply.
Neil Conway
executed in an implicitly aborted transaction (e.g. after an error
occurs), we return an error (and not just a warning). For example:
nconway=# begin;
BEGIN
nconway=# insert; -- syntax error
ERROR: parser: parse error at or near ";"
nconway=# select * from a;
ERROR: current transaction is aborted, queries ignored until end of
transaction block
The old behavior was:
nconway=# begin;
BEGIN
nconway=# insert;
ERROR: parser: parse error at or near ";"
nconway=# select * from a;
WARNING: current transaction is aborted, queries ignored until end
of transaction block
*ABORT STATE*
Which can be confusing: if the client isn't paying careful attention,
they will conclude that the query has executed (because no error is
returned).
Neil Conway
functionality of the command is basically identical to that of
BEGIN; it just accepts a few extra options (only one of which
PostgreSQL currently implements), and is standards-compliant.
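For instance (assuming the implemented option is the isolation level):
    START TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    SELECT count(*) FROM pg_class;
    COMMIT;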
The patch includes a simple regression test and documentation.
[ Regression tests removed, per Peter.]
Neil Conway
'tioga recipes', whatever those are -- Peter E. killed most
of it a couple days ago, but this patch removes the rest. Most
of it was #ifdef'ed out anyway.
Neil Conway
documentation (xindex.sgml should be rewritten), need to teach pg_dump
about it, need to update contrib modules that currently build pg_opclass
entries by hand. Original patch by Bill Studenmund, grammar adjustments
and general update for 7.3 by Tom Lane.
extension to create binary compatible casts. Includes dependency tracking
as well.
pg_proc.proimplicit is now defunct, but will be removed in a separate
commit.
pg_dump provides a migration path from the previous scheme to declare
casts. Dumping binary compatible casts is currently impossible, though.
This is the first cut toward the CREATE CONVERSION/DROP CONVERSION implementation.
The commands can now add/remove tuples in the new pg_conversion system
catalog, but that's all. Still needs work to make them actually functional.
Documentation and regression tests also need work.
GUC support. It's now possible to set datestyle, timezone, and
client_encoding from postgresql.conf and per-database or per-user
settings. Also, implement rollback of SET commands that occur in a
transaction that later fails. Create a SET LOCAL var = value syntax
that sets the variable only for the duration of the current transaction.
All per previous discussions in pghackers.
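A brief sketch of the new knobs (database, user, and values are placeholders):
    ALTER DATABASE mydb SET datestyle = 'ISO, European';   -- per-database default
    ALTER USER alice SET client_encoding = 'LATIN1';       -- per-user default

    BEGIN;
    SET LOCAL datestyle = 'German';   -- lasts only until this transaction ends
    COMMIT;                           -- datestyle reverts here; a SET in a failed
                                      -- transaction is likewise rolled back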
As proof of concept, provide an alternate implementation based on POSIX
semaphores. Also push the SysV shared-memory implementation into a
separate file so that it can be replaced conveniently.
Use flex flags -CF. Pass the to-be-scanned string around as StringInfo
type, to avoid querying the length repeatedly. Clean up some code and
remove lex-compatibility cruft. Escape backslash sequences inline. Use
flex-provided yy_scan_buffer() function to set up input, rather than using
myinput().
DROP RULE and COMMENT ON RULE syntax adds an 'ON tablename' clause,
similar to TRIGGER syntaxes. To allow loading of existing pg_dump
files containing COMMENT ON RULE, the COMMENT code will still accept
the old syntax --- but only if the target rulename is unique across
the whole database.
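So the new spellings look like this (rule and table names are illustrative);
the table-less COMMENT form is still accepted for old dump files when the rule
name is unique database-wide:
    COMMENT ON RULE protect_prices ON prices IS 'forbid direct price updates';
    DROP RULE protect_prices ON prices;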
addRangeTableEntry calls. Remove relname field from RTEs, since
it will no longer be a useful unique identifier of relations;
we want to encourage people to rely on the relation OID instead.
Further work on dumping qual expressions in EXPLAIN, too.
the parsetree representation. As yet we don't *do* anything with schema
names, just drop 'em on the floor; but you can enter schema-compatible
command syntax, and there's even a primitive CREATE SCHEMA command.
No doc updates yet, except to note that you can now extract a field
from a function-returning-row's result with (foo(...)).fieldname.
o Change all current CVS messages of NOTICE to WARNING. We were going
to do this just before 7.3 beta but it has to be done now, as you will
see below.
o Change current INFO messages that should be controlled by
client_min_messages to NOTICE.
o Force remaining INFO messages, like from EXPLAIN, VACUUM VERBOSE, etc.
to always go to the client.
o Remove INFO from the client_min_messages options and add NOTICE.
Seems we do need three non-ERROR elog levels to handle the various
behaviors we need for these messages.
Regression passed.
when to send what to which, prevent recursion by introducing new COMMERROR
elog level for client-communication problems, get rid of direct writes
to stderr in backend/libpq files, prevent non-error elogs from going to
client during the authentication cycle.
now just below FATAL in server_min_messages. Added more text to
highlight ordering difference between it and client_min_messages.
---------------------------------------------------------------------------
REALLYFATAL => PANIC
STOP => PANIC
New INFO level that prints to the client by default
New LOG level that prints to the server log by default
Cause VACUUM information to print only to the client
NOTICE => INFO where purely informational messages are sent
DEBUG => LOG for purely server status messages
DEBUG removed, kept as backward compatible
DEBUG5, DEBUG4, DEBUG3, DEBUG2, DEBUG1 added
DebugLvl removed in favor of new DEBUG[1-5] symbols
New server_min_messages GUC parameter with values:
DEBUG[5-1], INFO, NOTICE, ERROR, LOG, FATAL, PANIC
New client_min_messages GUC parameter with values:
DEBUG[5-1], LOG, INFO, NOTICE, ERROR, FATAL, PANIC (usage sketch after this list)
Server startup now logged with LOG instead of DEBUG
Remove debug_level GUC parameter
elog() numbers now start at 10
Add test to print error message if older elog() values are passed to elog()
Bootstrap mode now has a -d that requires an argument, like postmaster
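Putting the new level machinery to use looks roughly like this (values per the
lists above; changing server_min_messages may require appropriate privileges):
    # postgresql.conf
    server_min_messages = notice   # debug5..debug1, info, notice, error, log, fatal, panic
    client_min_messages = notice   # debug5..debug1, log, info, notice, error, fatal, panic
    -- or per-session from SQL:
    SET client_min_messages = 'debug1';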