- Added unlikely() to optimize for optimizer trace not being enabled
- Made THD::trace_started() inline
- Added 'if (trace_enabled())' around some potentially expensive code
(not many instances found)
- Added asserts to ensure we don't make expensive optimizer trace calls
if optimizer trace is not enabled
- Added length arguments to Json_writer functions to speed up buffer writes
when optimizer trace is enabled.
- Changed LEX_CSTRING argument handling to not pass the full struct to the
writer function; on_add_str() now trusts its length argument
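A minimal sketch of the guard pattern, assuming a caller that has a THD
pointer (the code around the guard is illustrative, not an actual call
site):

  /* THD::trace_started() is inline, so the disabled case costs only a
     predicted-not-taken branch */
  if (unlikely(thd->trace_started()))
  {
    /* expensive argument construction happens only when tracing is on */
    trace_table_access(thd, table);   /* hypothetical tracing helper */
  }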
Reverted original patch (c2e0a0b).
For consistency with "LOCK TABLE <table_name> READ" and "FLUSH TABLES
WITH READ LOCK", which are forbidden under "BACKUP STAGE", forbid "FLUSH
TABLE <table_name> FOR EXPORT" and "FLUSH TABLE <table_name> WITH READ
LOCK" as well.
It'd allow consistent fixes for problems like MDEV-18643.
Support for Galera GTID consistency through the cluster. All nodes in the
cluster should have the same GTID for replicated events originating from
the cluster.
Commands originating from the cluster need to contain a sequential WSREP
GTID seqno. Manual setting of gtid_seq_no=X is ignored.
In a master-slave scenario where the master is a non-Galera node, the
replicated GTID is preserved on all nodes.
To achieve this, domain_id, server_id and seqnos should be the same on
all nodes. The node that bootstraps the cluster sends its domain_id and
server_id to the other nodes, and this combination is used to write the
GTID for events that are replicated inside the cluster.
Cluster nodes executing non-replicated events will have a different GTID
than replicated ones; the difference is visible in the domain part of the
GTID. With wsrep_gtid_domain_id you can set the domain_id for the WSREP
cluster.
The functions WSREP_LAST_WRITTEN_GTID, WSREP_LAST_SEEN_GTID and
WSREP_SYNC_WAIT_UPTO_GTID now work with the "native" GTID format.
Fixed Galera tests to reflect these changes.
Add variable to manually update WSREP GTID seqno in the cluster
Add a variable to manipulate and change the WSREP GTID seqno. The next
command originating from the cluster on the same thread will use the set
seqno, and the cluster should change its internal counter to that value.
The behavior is the same as using @@gtid_seq_no for a non-WSREP
transaction.
This is joint work with Thirunarayanan Balathandayuthapani.
The MDL interface between InnoDB and the rest of the server
(in storage/innobase/dict/dict0dict.cc and in include/)
is my work, while most everything else is Thiru's.
The collection of InnoDB persistent statistics and the
defragmentation were not refactored to use MDL. They will
keep relying on lower-level interlocking with
fil_check_pending_operations().
The purge of transaction history and the background operations on
fulltext indexes will use MDL. We will revert
commit 2c4844c9e7
(MDEV-17813) because thanks to MDL, purge cannot conflict
with DDL operations anymore. For a similar reason, we will remove
the MDEV-16222 test case from gcol.innodb_virtual_debug_purge.
Purge is essentially replacing all use of the global dict_sys.latch
with MDL. Purge will skip the undo log records for tables whose names
start with #sql-ib or #sql2. Theoretically, such tables might
be renamed back to visible table names if TRUNCATE fails to
create a new table, or the final rename in ALTER TABLE...ALGORITHM=COPY
fails. In that case, purge could permanently leave some garbage
in the table. Such garbage will be tolerated; the table would not
be considered corrupted.
To avoid repeated MDL releases and acquisitions,
trx_purge_attach_undo_recs() will sort undo log records by table_id,
and purge_node_t will keep the MDL and table handle open for multiple
successive undo log records.
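For illustration, a standalone sketch of the batching idea (simplified
stand-in types, not the actual purge code):

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct purge_rec_sketch { uint64_t table_id; /* undo rec, roll_ptr */ };

  /* group records of the same table together, so purge_node_t can reuse
     one MDL ticket and one table handle for the whole group */
  static void sort_by_table(std::vector<purge_rec_sketch> &recs)
  {
    std::stable_sort(recs.begin(), recs.end(),
                     [](const purge_rec_sketch &a,
                        const purge_rec_sketch &b)
                     { return a.table_id < b.table_id; });
  }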
get_purge_table(): A new accessor, used during the purge of
history for indexed virtual columns. This interface should ideally
not exist at all.
thd_mdl_context(): Accessor of THD::mdl_context.
Wrapped in a new thd_mdl_service.
dict_get_db_name_len(): Define inline.
dict_acquire_mdl_shared(): Acquire explicit shared MDL on a table name
if needed.
dict_table_open_on_id(): Return MDL_ticket, if requested.
dict_table_close(): Release MDL ticket, if requested.
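A hypothetical usage sketch of the two accessors above (argument lists
abbreviated; the exact signatures may differ):

  MDL_ticket *mdl= nullptr;
  /* with MDL requested, the accessor returns the ticket through 'mdl' */
  if (dict_table_t *table= dict_table_open_on_id(table_id, false,
                                                 DICT_TABLE_OP_NORMAL,
                                                 thd, &mdl))
  {
    /* ... purge undo log records for this table ... */
    dict_table_close(table, false, false, thd, mdl);  /* releases MDL */
  }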
dict_fts_index_syncing(), dict_index_t::index_fts_syncing: Remove.
row_drop_table_for_mysql() no longer needs to check these, because
MDL guarantees that a fulltext index sync will not be in progress
while MDL_EXCLUSIVE is protecting a DDL operation.
dict_table_t::parse_name(): Parse the table name for acquiring MDL.
purge_node_t::undo_recs: Change the type to std::list<trx_purge_rec_t*>
(different container, and storing also roll_ptr).
purge_node_t: Add mdl_ticket, last_table_id, purge_thd, mdl_hold_recs
for acquiring MDL and for keeping the table open across multiple
undo log records.
purge_vcol_info_t, row_purge_store_vsec_cur(), row_purge_restore_vsec_cur():
Remove. We will acquire the MDL earlier.
purge_sys_t::heap: Added, for reading undo log records.
fts_sync_during_ddl(): Invoked during ALGORITHM=INPLACE operations
to ensure that fts_sync_table() will not conflict with MDL_EXCLUSIVE.
Uses fts_t::sync_message for bookkeeping.
Currently InnoDB uses an internal parser for adding foreign keys. Remove
the internal parser and use data parsed by the SQL parser (sql_yacc) for
adding foreign keys.
- create_table_info_t::create_foreign_keys() is a replacement for
dict_create_foreign_constraints_low();
- Pass constraint name via Foreign_key object.
Temporary until MDEV-20865:
- Pass alter_info as part of create_info.
Create background THDs that are not counted and do not block
close_connections(); in other words, they are just something that a
plugin can use internally.
These THDs will be used later by InnoDB purge threads, which need a THD
to evaluate virtual column expressions.
revision-id: 673e253724979fd9fe43a4a22bd7e1b2c3a5269e
Author: Kristian Nielsen
Fix missing memory barrier in wait_for_commit.
The function wait_for_commit::wait_for_prior_commit() has a fast path where it
checks without locks if wakeup_subsequent_commits() has already been called.
This check was missing a memory barrier. The waitee thread does two writes to
variables `waitee' and `wakeup_error', and if the waiting thread sees the
first write it _must_ also see the second or incorrect behavior will occur.
This requires memory barriers between both the writes (release semantics) and
the reads (acquire semantics) of those two variables.
Other accesses to these variables are done under lock or where only one thread
will be accessing them, and can be done without barriers (relaxed semantics).
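A condensed sketch of the required pairing, using C++11 atomics for
clarity (the server uses its own primitives; the layout below is
illustrative, not the actual class):

  #include <atomic>

  struct wait_for_commit_sketch
  {
    std::atomic<void*> waitee{nullptr};  /* set non-null when registered */
    int wakeup_error= 0;

    void wakeup_subsequent_commits(int err)      /* waitee thread */
    {
      wakeup_error= err;                         /* write 1 */
      /* write 2, release: publishes write 1 to any acquiring reader */
      waitee.store(nullptr, std::memory_order_release);
    }

    bool fast_path(int *err)                     /* waiting thread */
    {
      /* acquire pairs with the release store above */
      if (waitee.load(std::memory_order_acquire) != nullptr)
        return false;                 /* not woken yet: take locked path */
      *err= wakeup_error;             /* guaranteed to see write 1 */
      return true;
    }
  };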
Count the "gap" time between table accesses and display it as
r_other_time_ms in the "table" element.
* The advantage of this approach is that it doesn't add any new
my_timer_cycles() calls.
* The disadvantage is that the definition of what is done during
"other time" is not that clear: it includes checking the WHERE
(for this table), constructing the index lookup tuple (for the next
table), writing to the GROUP BY temporary table (as we don't account
for that time separately [yet]), etc.
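For illustration, the accounting amounts to a subtraction over timer
values that are already collected (the field names here are hypothetical):

  /* no new my_timer_cycles() calls: "other time" is the gap left over
     after subtracting the time spent inside the table access itself */
  double r_other_time_ms= r_loop_total_time_ms - r_table_access_time_ms;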
Relates to MDEV-17863 DROP TEMPORARY TABLE creates a transaction in
binary log on read only server
Other things:
- Fixed that insert into normal_table select from tmp_table is
replicated as row events if tmp_table doesn't exist on the slave.
- Use local variables table and share to simplify code
- Use sql_command_flags to detect what kind of command was used
- Added CF_DELETES_DATA to simplify detection of delete commands
- Removed duplicate error in create_table_from_items().
- Any temporary tables created under read-only mode will never be logged
to the binary log. Any usage of these tables to update normal tables, even
after read-only has been disabled, will use row-based logging (as the
temporary table will not be on the slave).
- Analyze, check and repair table will not be logged in read-only mode.
Other things:
- Removed unused variables in
MYSQL_BIN_LOG::flush_and_set_pending_rows_event.
- Set table_share->table_creation_was_logged for all normal tables.
- THD::binlog_query() now returns -1 if the statement was not logged. This
is used to update table_share->table_creation_was_logged.
- Don't log admin statements if opt_readonly is set.
- Tables that don't have table_creation_was_logged set will use row-based
binlog format.
- Removed not needed/wrong setting of table->s->table_creation_was_logged
in create_table_from_items()
Now both offset and limit are stored and are not changed during execution
(offset was decreased during processing in versions before 10.5).
(A big part of these changes was made by Monty)
read_statistics_for_tables_if_needed
Regression after 279a907, read_statistics_for_tables_if_needed() was
called after open_normal_and_derived_tables() failure.
Fixed by moving read_statistics_for_tables() call to a branch of
get_schema_stat_record() where result of open_normal_and_derived_tables()
is checked.
Removed THD::force_read_stats, added read_statistics_for_tables() instead.
Simplified away statistics_for_command_is_needed().
- call current_schema::mark_as_changed() directly
- call state_change::mark_as_changed() directly
- replaced SESSION_TRACKER_CHANGED with dummy tracker
- replaced Session_tracker::mark_as_changed() with
State_tracker::mark_as_changed()
- hide and devirtualize original State_tracker::mark_as_changed(),
rename it to set_changed()
- all implementations of mark_as_changed() now check is_enabled() for
consistency
- no argument casts anymore
MDEV-17614 flags INSERT…ON DUPLICATE KEY UPDATE unsafe for statement-based
replication when there are multiple unique indexes. This correctly fixes
something whose attempted fix in MySQL 5.7
in mysql/mysql-server@c93b0d9a97
caused lock conflicts. That change was reverted in MySQL 5.7.26
in mysql/mysql-server@066b6fdd43
(with a substantial amount of other changes).
In MDEV-17073 we already disabled the unfortunate MySQL change when
statement-based replication was not being used. Now, thanks to MDEV-17614,
we can actually remove the change altogether.
This reverts commit 8a346f31b9 (MDEV-17073)
and mysql/mysql-server@c93b0d9a97 while
keeping the test cases.
Problem:-
When mysql executes INSERT ... ON DUPLICATE KEY UPDATE, the storage engine
checks if the inserted row would generate a duplicate key error. If yes,
it returns the existing row to mysql, mysql updates it and sends it back
to the storage engine. When the table has more than one unique or primary
key, this statement is sensitive to the order in which the storage engine
checks the keys. Depending on this order, the storage engine may return
different rows to mysql, and hence mysql can update different rows. The
order in which the storage engine checks keys is not deterministic. For
example, InnoDB checks keys in an order that depends on the order in
which indexes were added to the table: the first added index is checked
first. So if master and slave have added indexes in different orders,
the slave may go out of sync.
Solution:-
Make INSERT ... ON DUPLICATE KEY UPDATE unsafe when using statement or
mixed format if the table has more than one unique key, with two
exceptions:
1. An auto-increment key is not counted, because InnoDB takes a gap lock
for the failed insert and a concurrent insert will get the next increment
value. But if the user supplies the auto-increment value, it can be
unsafe.
2. Only unique keys for which insertion is performed are counted.
So this patch also addresses the bug id #72921
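A standalone sketch of the resulting rule (simplified stand-in types,
not the actual server check):

  #include <cstddef>

  struct key_sketch
  {
    bool unique;
    bool auto_increment;
    bool user_supplied_value;  /* explicit value given for autoinc key */
  };

  /* true when INSERT ... ON DUPLICATE KEY UPDATE must be marked unsafe
     for statement/mixed binlog format; the caller passes only the
     unique keys for which insertion is performed (exception 2) */
  static bool odku_unsafe(const key_sketch *keys, size_t n_keys)
  {
    size_t unique_keys= 0;
    for (size_t i= 0; i < n_keys; i++)
    {
      if (!keys[i].unique)
        continue;
      if (keys[i].auto_increment && !keys[i].user_supplied_value)
        continue;                                   /* exception 1 */
      unique_keys++;
    }
    return unique_keys > 1;
  }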
Sources did not compile in some builds because of undeclared
ER_BLOB_KEY_WITHOUT_LENGTH. Moving the implementations of
Key_part_spec::check_key_length_for_blob() from sql_class.h to sql_class.cc
Plugin fixed to not lock the LOCK_operations when not active.
Server fixed to lock the LOCK_plugin less - do it once per
thread and then only if a plugin was installed/uninstalled.
Originally introduced by e972125f1 to avoid a harmless wait for
LOCK_global_system_variables in a newly created thread, whose creation
was initiated by a system variable update.
At the same time it opens a dangerous hole: the system variable update
thread may have already released LOCK_global_system_variables while the
ack_receiver thread hasn't yet completed new THD construction. In this
case the THD constructor runs completely unprotected.
Since ack_receiver.stop() waits for the thread to go down, we have to
temporarily release LOCK_global_system_variables so that it doesn't
deadlock with ack_receiver.run(). Unfortunately this breaks the atomicity
of rpl_semi_sync_master_enabled updates and makes them unserialized.
LOCK_rpl_semi_sync_master_enabled was introduced to work around the above.
TODO: move ack_receiver start/stop into repl_semisync_master
enable_master/disable_master under LOCK_binlog protection?
Part of MDEV-14984 - regression in connect performance
Rather than parsing session_track_system_variables when thread starts, do
it when first trackable event occurs.
Benchmarked on a 2-socket/20-core/40-threads Broadwell system using the
sysbench connect benchmark @ 40 threads (with "select 1" disabled):
101379.77 -> 143016.68 CPS, whereas 10.2 is currently at 137766.31 CPS.
Part of MDEV-14984 - regression in connect performance
One less new/delete per connection.
Removed m_mem_flag since most allocs are thread-specific. The only
exceptions are allocs performed during initialization.
Removed State_tracker and Session_tracker constructors as they don't make
sense anymore.
No reason to access session_sysvars_tracker via get_tracker(), so access
it directly instead.
Part of MDEV-14984 - regression in connect performance
If a derived table has SELECT DISTINCT, provide index statistics for it so that the join optimizer in the
upper select knows that ref access to the table will produce one row.
depends on uninitialised value
Initialized THD::force_read_stats introduced in the patch for MDEV-17605.
Leaving this field uninitialized in the constructor of the THD class may
trigger reading statistical data that is not needed.
Let xid_cache_insert()/xid_cache_delete() handle xa_state.
Let session tracker use is_explicit_XA() rather than xa_state != XA_NOTR.
Fixed open_tables() to refuse data access in XA_ROLLBACK_ONLY state.
Removed dead code from THD::cleanup(). It was supposed to be a reminder,
but it got messed up over time.
spider_internal_start_trx() is called either with XA_NOTR or XA_ACTIVE,
which is guarded by server callers. Thus is_explicit_XA() is an
acceptable replacement for the XA_ACTIVE check (which was likely wrong
anyway).
Setting xa_state to XA_PREPARED in spider_internal_xa_prepare() isn't
meaningful, as this value is never accessed later. It can't be accessed
by current thread and it can't be recovered either. It can only be
accessed by spider internally, which never happens.
Make spider_xa_lock()/spider_xa_unlock() static.
Part of MDEV-7974 - backport fix for mysql bug#12161 (XA and binlog)
XID_STATE::rm_error is never used by internal 2PC, it is intended to be
used by explicit XA only.
Also removed redundant xid reset from THD::init_for_queries(). Must've
been done already either by THD::transaction constructor or by
THD::cleanup().
Part of MDEV-7974 - backport fix for mysql bug#12161 (XA and binlog)
- Changed replaying to always allocate a separate THD object
for applying log events. This is to avoid tampering with the
original THD state during the replay process.
- Return success from sp_instr_stmt::exec_core() if replaying
succeeds.
- Do not push warnings/errors into diagnostics area if the
transaction must be replayed. This is to avoid reporting
transient errors to the client.
Added two tests galera_sp_bf_abort, galera_sp_insert_parallel.
Wsrep-lib position updated.
This bug is caused by pushdown from HAVING into WHERE.
It appears because the condition that is pushed wasn't fixed.
It was also discovered that condition pushdown from HAVING into
WHERE was done wrong: there is no need to build clones for some
conditions that can be pushed, they can simply be moved from HAVING
into WHERE without cloning.
The build_pushable_cond_for_having_pushdown() and
remove_pushed_top_conjuncts_for_having() methods are changed.
It was also found that no transformation was made for fields of
the pushed condition.
The field_transformer_for_having_pushdown transformer is added.
New tests are added. Some comments are changed.
ignore FK-prelocked tables when looking for write-prelocked tables
with auto-increment to complain about "Statement is unsafe because
it invokes a trigger or a stored function that inserts into an
AUTO_INCREMENT column"
The MDEV-17262 commit 26432e49d3
was skipped. In Galera 4, the implementation would seem to require
changes to the streaming replication.
In the tests archive.rnd_pos and main.profiling, disable_ps_protocol
for SHOW STATUS and SHOW PROFILE commands until MDEV-18974
has been fixed.
This patch contains a fix for the MDEV-17262/17243 issues and a
new mtr test.
These issues (MDEV-17262/17243) have two causes:
1) After an intermediate commit, a transaction loses its status
of "transaction that is registered in MySQL for the 2pc coordinator"
(in InnoDB), because since version 10.2 the write_row() function
(located in ha_innodb.cc) does not call
trx_register_for_2pc(m_prebuilt->trx) during the processing of split
transactions. It is necessary to restore this call inside write_row()
when an intermediate commit was made (for a split transaction).
Similarly, we need to set the started-transaction flag
(m_prebuilt->sql_stat_start) after an intermediate commit.
The table->file->extra(HA_EXTRA_FAKE_START_STMT) called from the
wsrep_load_data_split() function (located in sql_load.cc)
will also do this, but it will be too late. As a result, the call
to the wsrep_append_keys() function from the InnoDB engine may be
lost, or the function may be called with an invalid transaction
identifier.
2) If a transaction with a LOAD DATA statement is divided into
logical mini-transactions (of 10K rows) and the binlog is rotated,
then in rare cases, due to the wsrep handler re-registration at the
boundary of the split, the last portion of data may be lost. Since
splitting of LOAD DATA into mini-transactions is technical,
I believe that we should not allow these mini-transactions to fall
into separate binlogs. Therefore, it is necessary to prohibit the
rotation of the binlog in the middle of processing a LOAD DATA
statement.
https://jira.mariadb.org/browse/MDEV-17262 and
https://jira.mariadb.org/browse/MDEV-17243
* MDEV-16509 Improve wsrep commit performance with binlog disabled
Release commit order critical section early after trx_commit_low() if
binlog is not transaction coordinator. In order to avoid two phase commit,
binlog_hton is not registered for THD during IO_CACHE population.
Implemented a test which verifies that the transactions release
commit order early.
This optimization will change behavior during recovery as the commit
is not two phase when binlog is off. Fixed and recorded wsrep-recover-v25
and wsrep-recover to match the behavior.
* MDEV-18730 Ordering for wsrep binlog group commit
Previously out of order execution was allowed for wsrep commits.
Established proper ordering by populating wait_for_commit
for every wsrep THD and making group commit leader to wait for
prior commits before proceeding to trx_group_commit_leader().
* MDEV-18730 Added a test case to verify correct commit ordering
* MDEV-16509, MDEV-18730 Review fixes
Use WSREP_EMULATE_BINLOG() macro to decide if the binlog_hton
should be registered. Whitespace/syntax fixes and cleanups.
* MDEV-16509 Require binlog for galera_var_innodb_disallow_writes test
If the commit to InnoDB is done in one phase, the native InnoDB behavior
is that the transaction is committed in memory before it is persisted to
disk. This means that innodb_disallow_writes=ON may not prevent the
transaction from becoming visible to other readers before the commit is
completely over. On the other hand, if the commit is two phase (as it is
with binlog), the transaction will be blocked in the prepare phase.
Fixed the test to use binlog, which enforces two phase commit, which
in turn makes commit to block before the changes become visible to
other connections. This guarantees that the test produces expected
result.
The patch features an optional shutdown behavior to hold on until
all connected slaves have been sent the last binlogged event.
A connected slave is one whose START SLAVE has been acknowledged and
that has not been stopped since then, though it could technically be
reconnecting in the background.
The solution therefore disallows killing the dump thread until it has
found EOF of the latest binlog file. It is up to the shutdown
requester (DBA) to set a sufficiently large shutdown timeout value
for shutdown to wait patiently until lagging slaves have been
synchronized. On the other hand, if a specific slave needs to be
excluded from synchronization, the DBA would have to stop it manually,
which would terminate its dump thread.
`mysqladmin shutdown' is extended with a `--wait_for_all_slaves' option,
which translates to the `SHUTDOWN WAIT FOR ALL SLAVES' SQL query,
to enable the feature on the client side.
The patch also performs a small refactoring of the server shutdown
around close_connections() to introduce kill-thread phases, of which
there are currently two.
Make mysqltest use --ps-protocol more:
use prepared statements for everything that the server supports,
with the exception of CALL (for now).
Fix discovered test failures and bugs.
tests:
* PROCESSLIST shows Execute state, not Query
* SHOW STATUS increments status variables more than in text protocol
* multi-statements should be avoided (see tests with a wrong delimiter)
* performance_schema events have different names in --ps-protocol
* --enable_prepare_warnings
mysqltest.cc:
* make sure run_query_stmt() doesn't crash if there's
no active connection (in wait_until_connected_again.inc)
* prepare all statements that server supports
protocol.h
* Protocol_discard::send_result_set_metadata() should not send
anything to the client.
sql_acl.cc:
* extract the functionality of getting the user for SHOW GRANTS
from check_show_access(), so that mysql_test_show_grants() could
generate the correct column names in the prepare step
sql_class.cc:
* result->prepare() can fail, don't ignore its return value
* use correct number of decimals for EXPLAIN columns
sql_parse.cc:
* discard profiling for SHOW PROFILE. In text protocol it's done in
prepare_schema_table(), but in --ps it is called on prepare only,
so nothing was discarding profiling during execute.
* move the permission checking code for SHOW CREATE VIEW to
mysqld_show_create_get_fields(), so that it would be called during
prepare step too.
* only set sel_result when it was created here and needs to be
destroyed in the same block. Avoid destroying lex->result.
* use the correct number of tables in check_show_access(). Saying
"as many as possible" doesn't work when first_not_own_table isn't
set yet.
sql_prepare.cc:
* use correct user name for SHOW GRANTS columns
* don't ignore verbose flag for SHOW SLAVE STATUS
* support preparing REVOKE ALL and ROLLBACK TO SAVEPOINT
* don't ignore errors from thd->prepare_explain_fields()
* use select_send result for sending ANALYZE and EXPLAIN, but don't
overwrite lex->result, because it might be needed to issue execute-time
errors (select_dumpvar - too many rows)
sql_show.cc:
* check grants for SHOW CREATE VIEW here, not in mysql_execute_command
sql_view.cc:
* use the correct function to check privileges. Old code was doing
check_access() for thd->security_ctx, which is invoker's sctx,
not definer's sctx. Hide various view related errors from the invoker.
sql_yacc.yy:
* initialize lex->select_lex for LOAD, otherwise it'll contain garbage
data that happens to fail tests with views in --ps (but not otherwise).
slave_list was used to provide data for SHOW SLAVE HOSTS and
Slaves_connected status variable.
Introduced binlog_dump_thread_count which is exposed via Slaves_connected
(replaces slave_list.records).
Store Slave_info on THD and access it by iterating server_threads
(replaces slave_list).
Added:
THD::slave_info
binlog_dump_thread_count
show_slave_hosts_callback()
Removed:
slave_list
SLAVE_LIST_CHUNK
SLAVE_ERRMSG_SIZE
slave_list_key()
slave_info_free()
init_slave_list()
end_slave_list()
all_slave_list_mutexes
init_all_slave_list_mutexes()
key_LOCK_slave_list
LOCK_slave_list
Moved:
SLAVE_INFO -> Slave_info
register_slave() -> THD::register_slave()
unregister_slave() -> THD::unregister_slave()
Also removed redundant end_slave() from close_connections(): it is called
again soon afterwards by clean_up().
Pre-requisite for clean MDEV-18450 solution.
and, again, *don't use thd->clear_error()*
this fixed main.sp_notembedded failure on various amd64 platforms
(where ER_STACK_OVERRUN_NEED_MORE happens to fire in open_stat_tables()
under Dummy_error_handler)
This patch adds support for expiring user passwords.
The following statements are extended:
CREATE USER user@localhost PASSWORD EXPIRE [option]
ALTER USER user@localhost PASSWORD EXPIRE [option]
If no option is specified, the password is expired with immediate
effect. If option is DEFAULT, global policy applies according to
the default_password_lifetime system var (if 0, password never
expires, if N, password expires every N days). If option is NEVER,
the password never expires and if option is INTERVAL N DAY, the
password expires every N days.
The feature also supports the disconnect_on_expired_password system
var and the --connect-expired-password client option.
Closes #1166
The check for streaming replication logging format in
THD::decide_logging_format() was done also for DDLs running
in TOI mode. This caused DROP DATABASE to fail if streaming
replication was enabled.
Added check for THD wsrep execution mode and perform the check
only if the THD is in local processing mode (i.e. not TOI).
Added galera_sr_create_drop test to verify that CREATE/DROP
statements pass even if streaming replication is on.
Condition can be pushed from the HAVING clause into the WHERE clause
if it depends only on the fields that are used in the GROUP BY list
or depends on the fields that are equal to grouping fields.
Aggregate functions can't be pushed down.
How the pushdown is performed on the example:
SELECT t1.a,MAX(t1.b)
FROM t1
GROUP BY t1.a
HAVING (t1.a>2) AND (MAX(c)>12);
=>
SELECT t1.a,MAX(t1.b)
FROM t1
WHERE (t1.a>2)
GROUP BY t1.a
HAVING (MAX(c)>12);
The implementation scheme:
1. Extract the most restrictive condition cond from the HAVING clause of
the select that depends only on the fields that are used in the GROUP BY
list of the select (directly or indirectly through equalities)
2. Save cond as a condition that can be pushed into the WHERE clause
of the select
3. Remove cond from the HAVING clause if it is possible
The optimization is implemented in the function
st_select_lex::pushdown_from_having_into_where().
New test file having_cond_pushdown.test is created.
This task involves the implementation of the optimizer trace.
This feature produces a trace for any SELECT/UPDATE/DELETE,
which contains information about decisions taken by the optimizer during
the optimization phase (choice of table access method, various costs,
transformations, etc). This feature helps to tell why some decisions were
taken by the optimizer and why some were rejected.
The trace is session-local, controlled by the @@optimizer_trace variable.
To enable the optimizer trace, run:
SET @@optimizer_trace='enabled=on';
To display the trace one can run:
SELECT trace FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
This task also involves:
MDEV-18489: Limit the memory used by the optimizer trace
introduces a switch optimizer_trace_max_mem_size which limits
the memory used by the optimizer trace. This was implemented by
Sergei Petrunia.
ANALYZE and ANALYZE FORMAT=JSON structures are changed in the way that they
show additional information when rowid filter is used:
- r_selectivity_pct - the observed filter selectivity
- r_buffer_size - the size of the rowid filter container buffer
- r_filling_time_ms - how long it took to fill rowid filter container
A new class Rowid_filter_tracker was added. This class collects data
about how the rowid filter is executed.
introduce the syntax
... IDENTIFIED { WITH | VIA }
plugin [ { USING | AS } auth ]
[ OR plugin [ { USING | AS } auth ]
[ OR ... ]]
Server will try auth plugins in the specified order until the first
success. No protocol changes, server uses the existing "switch plugin"
packet.
The auth chain is stored in json as
"auth_or":[{"plugin":"xxx","authentication_string":"yyy"},
{},
{"plugin":"foo","authentication_string":"bar"},
...],
"plugin":"aaa", "authentication_string":"bbb"
Note:
* "auth_or" implies that there might be "auth_and" someday;
* one entry in the array is an empty object, meaning to take plugin/auth
from the main json object. This preserves compatibility with
the existing mysql.global_priv table and with the mysql.user view.
This entry is preferably a mysql_native_password plugin for a
non-empty mysql.user.password column.
SET PASSWORD is supported and changes the password for the *first*
plugin in the chain that has a notion of a "password".
This patch contains a full implementation of the optimization
that allows using in-memory rowid / primary key filters built for range
conditions over indexes. In many cases usage of such filters reduces
the number of disk seeks spent on fetching table rows.
In this implementation the choice of which filter to apply
(if any) is made purely on cost-based considerations, as sketched below.
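For illustration only (this is not the server's exact cost model), the
shape of the decision is:

  /* use the filter only when the row fetches it saves outweigh the
     cost of building and probing it; all variables are illustrative */
  double cost_without_filter= examined_rows * row_fetch_cost;
  double cost_with_filter=    filter_build_cost +
                              examined_rows * filter_lookup_cost +
                              examined_rows * selectivity * row_fetch_cost;
  bool use_filter= cost_with_filter < cost_without_filter;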
This implementation re-architected the partial implementation of
the feature pushed by Galina Shalygina in commit
8d5a11122c.
Besides this, the patch contains a better implementation of the generic
handler function handler::multi_range_read_info_const() that
takes into account gaps between ranges when calculating the cost of
range index scans. It also contains some corrections to the
implementation of the handler function records_in_range() for MyISAM.
The patch supports the feature for InnoDB and MyISAM.
In contrast to thread_count, which is decremented by the THD destructor,
this one was most probably intended to be decremented after all THD
destructors are done.
The THD_count class was added to achieve a similar effect to thread_count
(a standalone sketch follows below).
Aim is to reduce usage of LOCK_thread_count and COND_thread_count.
Part of MDEV-15135.
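A standalone sketch of the THD_count idea, assuming it is used as a base
class so that its destructor runs only after all THD members are gone:

  #include <atomic>

  struct THD_count_sketch
  {
    static std::atomic<int> value;
    THD_count_sketch()  { value.fetch_add(1, std::memory_order_relaxed); }
    ~THD_count_sketch() { value.fetch_sub(1, std::memory_order_relaxed); }
  };
  std::atomic<int> THD_count_sketch::value{0};

  /* class THD : public THD_count_sketch { ... };
     base destructors run after member destructors, so the counter drops
     only once the THD is fully destroyed */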
Implemented and integrated THD_list as a replacement for the global
thread list. It uses own mutex instead of LOCK_thread_count for THD
list protection.
Removed unused first_global_thread() and next_global_thread().
delayed_insert_threads is now protected by LOCK_delayed_insert, although
this patch doesn't fix the very wrong synchronization of this variable.
After this patch there are only 2 legitimate uses of LOCK_thread_count
left, both in mysqld.cc: thread_count and ready_to_exit.
Aim is to reduce usage of LOCK_thread_count and COND_thread_count.
Part of MDEV-15135.
always logged properly with binlog_row_image=MINIMAL
There are two issues fixed in this commit.
The first is an observation of a multi-table UPDATE binlogged
in row format in binlog_row_image=MINIMAL mode. While the UPDATE aims
at a table with an ON-UPDATE attribute, its binlog after-image fails
to record the installed default value.
The reason for that turns out to be missed marking of default-capable
fields in TABLE::write_set.
This is fixed to mark such fields similarly to 10.2's MDEV-10134 patch
(db7edfed17) that introduced it. The marking follows up 93d1e5ce0b841bed's
idea to exploit TABLE::rpl_write_set introduced there, and thus does not
mess (in 10.1) with the actual MDEV-10134 agenda.
The patch makes the formerly argument-less
TABLE::mark_default_fields_for_write() accept an argument, which would
be TABLE::rpl_write_set.
The 2nd issue is extra columns in the binlog_row_image=MINIMAL
before-image while merely a packed primary key is enough. The test
main.mysqlbinlog_row_minimal always had a wrong result recorded.
This is fixed to invoke a function intended for possible read_set
filtering, which is (supposed to be) called for all types of DML,
UPDATE included; the test results have been corrected.
At *merging* from 10.1->10.2 the 1st "main" part of the patch is
unnecessary since the bug is not observed in 10.2, so only hunks from
sql/sql_class.cc
are required.
This mutex can be freed when the server shuts down (when thread_count
goes down to 0), but it is still used inside THD::~THD() when
Statement_map is destroyed.
The fix is to call Statement_map::reset() at the point where thread_count
is still positive, and avoid locking LOCK_prepared_stmt_count in THD
destructor.
The error message was modified.
Then the TABLE_SHARE::error_table_name() implementation was taken from
10.3, to be used as the name of the table in this message.
Internal transactions may not have trx->mysql_thd.
But at the same time, trx->duplicates should only hold if
REPLACE or INSERT...ON DUPLICATE KEY UPDATE was executed from SQL.
The flag feels misplaced. A more appropriate place for it would
be row_prebuilt_t or similar.
Part of MDEV-5336 Implement LOCK FOR BACKUP
- Changed check of Global_only_lock to also include BACKUP lock.
- We store latest MDL_BACKUP_DDL lock in thd->mdl_backup_ticket to be able
to downgrade lock during copy_data_between_tables()
main.derived_cond_pushdown: Move all 10.3 tests to the end,
trim trailing white space, and add an "End of 10.3 tests" marker.
Add --sorted_result to tests where the ordering is not deterministic.
main.win_percentile: Add --sorted_result to tests where the
ordering is no longer deterministic.
thd_rpl_stmt_based(): A new predicate to check if statement-based
replication is active. (This can also hold when replication is not
in use, but binlog is.)
que_thr_stop(), row_ins_duplicate_error_in_clust(),
row_ins_sec_index_entry_low(), row_ins(): On a duplicate key error,
only lock all index records when statement-based replication is in use.
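A minimal sketch of the gating (simplified; the real checks live in the
row_ins_* functions named above, and the helpers here are hypothetical):

  /* on a duplicate key error, the extra next-key locks are only needed
     to keep statement-based replication deterministic */
  if (thd_rpl_stmt_based(trx->mysql_thd))
    lock_all_duplicate_index_records();  /* keeps SBR deterministic */
  else
    lock_conflicting_record_only();      /* cheaper, sufficient otherwise */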
Problem:
========
Truncate operation holds MDL on the table (t1) and tries to
acquire InnoDB dict_operation_lock. Purge holds dict_operation_lock
and tries to acquire MDL on the table (t1) to evaluate virtual
column expressions for indexed virtual columns.
It leads to deadlock of purge and truncate table (DDL).
Solution:
=========
If purge tries to acquire MDL on the table then it should do the following:
i) Purge should release all innodb latches (including dict_operation_lock)
before acquiring metadata lock on the table.
ii) After acquiring metadata lock on the table, it should check whether the
table was dropped or renamed. If the table is dropped then purge should
ignore the undo log record. If the table is renamed then it should
release the old MDL and acquire MDL on the new name.
iii) Once purge acquires MDL, it should use the SQL table handle for all
the remaining virtual-index work for the purge record.
purge_node_t: Introduce new virtual column information to know whether
the MDL was acquired successfully.
This is joint work with Marko Mäkelä.
specific temporary errors
The optimistic parallel slave's worker thread could face a run-time error
due to the algorithm's specifics, which allow for conflicts like the
reported "Can't find record in 'table'".
A typical stack is like
{noformat}
#0 handler::print_error (this=0x61c00008f8a0, error=149, errflag=0) at handler.cc:3650
#1 0x0000555555e95361 in write_record (thd=thd@entry=0x62a0000a2208, table=table@entry=0x61f00008ce88, info=info@entry=0x7fffdee356d0) at sql_insert.cc:1944
#2 0x0000555555ea7767 in mysql_insert (thd=thd@entry=0x62a0000a2208, table_list=0x61b00012ada0, fields=..., values_list=..., update_fields=..., update_values=..., duplic=<optimized out>, ignore=<optimized out>) at sql_insert.cc:1039
#3 0x0000555555efda90 in mysql_execute_command (thd=thd@entry=0x62a0000a2208) at sql_parse.cc:3927
#4 0x0000555555f0cc50 in mysql_parse (thd=0x62a0000a2208, rawbuf=<optimized out>, length=<optimized out>, parser_state=<optimized out>) at sql_parse.cc:7449
#5 0x00005555566d4444 in Query_log_event::do_apply_event (this=0x61200005b9c8, rgi=<optimized out>, query_arg=<optimized out>, q_len_arg=<optimized out>) at log_event.cc:4508
#6 0x00005555566d639e in Query_log_event::do_apply_event (this=<optimized out>, rgi=<optimized out>) at log_event.cc:4185
#7 0x0000555555d738cf in Log_event::apply_event (rgi=0x61d0001ea080, this=0x61200005b9c8) at log_event.h:1343
#8 apply_event_and_update_pos_apply (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080, reason=<optimized out>) at slave.cc:3479
#9 0x0000555555d8596b in apply_event_and_update_pos_for_parallel (ev=ev@entry=0x61200005b9c8, thd=thd@entry=0x62a0000a2208, rgi=rgi@entry=0x61d0001ea080) at slave.cc:3623
#10 0x00005555562aca83 in rpt_handle_event (qev=qev@entry=0x6190000fa088, rpt=rpt@entry=0x62200002bd68) at rpl_parallel.cc:50
#11 0x00005555562bd04e in handle_rpl_parallel_thread (arg=arg@entry=0x62200002bd68) at rpl_parallel.cc:1258
{noformat}
Here {{handler::print_error}} computes whether to error-log the
current error when --log-warnings > 1. The decision flag is consulted
by {{my_message_sql()}}, which can eventually be called.
In the bug case the decision is to log.
However, in the optimistic-mode slave applier case, any conflict is
attempted to be resolved with rollback and retry until success. Hence
the logging is at least extraneous.
The case is fixed by adding a new flag {{ME_LOG_AS_WARN}} which
{{handler::print_error}} may propagate further on through {{my_error}}
when the error comes from an optimistically running slave worker thread.
The new flag effectively requests the warning level for the errlog record,
while the thread's DA records the actual error (which is regarded as a
temporary one by the parallel slave error handler).
Fixed by deleting the sequence if we were not able to initialize it.
I also noticed that we didn't always set the error message on
check_killed(), which could lead to aborted queries without the error
being properly set. Fixed by setting the error message by default if
check_killed() noticed that the thread had been killed.
This allowed me to remove a lot of calls to thd->send_kill_message().
The cause of this was several different bugs:
- When using binary logging with binlog_row_image=FULL,
all bits in read_set were set, which caused a
different (wrong) pattern for marking vcol_set.
- TABLE::mark_virtual_columns_for_write() didn't in all
cases mark vcol_set with the vcol_field.
- TABLE::update_virtual_fields() has to update all
vcol fields on REPLACE if binary logging with FULL
is used.
- VCOL_UPDATE_INDEXED should update all vcol fields part
of an index that was not updated by VCOL_UPDATE_FOR_READ
- max_row_length() calculated length of NULL and not
used fields. This didn't cause any crash, but used
more memory than needed.
simplify. move common code inside, specify common flags inside,
rewrite dead code (`if (mode & O_TEMPORARY)` on Linux, where
`O_TEMPORARY` is always 0) to actually do something.
The code passing positions in the query to constructors of
Rewritable_query_parameter descendants (e.g. Item_splocal)
was not reliable. It used various Lex_input_stream methods:
- get_tok_start()
- get_tok_start_prev()
- get_tok_end()
- get_ptr()
to find positions of the recently scanned tokens.
The challenge was mostly to choose between get_tok_start()
and get_tok_start_prev(), taking into account the current
grammar (depending on whether lookahead takes place before
or after we read the positions in every particular rule).
But this approach did not work at all in combination
with token contractions, when MYSQLlex() translates
two tokens into one token ID, for example:
WITH ROLLUP -> WITH_ROLLUP_SYM
As a result, the tokenizer is already one more token ahead.
So in query fragment:
"GROUP BY d, spvar WITH ROLLUP"
get_tok_start() points to "ROLLUP".
get_tok_start_prev() points to "WITH".
As a result, it was "WITH" that was erroneously replaced
with NAME_CONST() instead of "spvar".
This patch modifies the code to do it a different way.
Changes:
1. For keywords and identifiers, the tokenizer now
returns a LEX_CSTRING pointing directly to the query
fragment, so query positions are now available simply using:
- $1.str - for the beginning of a token
- $1.str+$1.length - for the end of a token
2. Identifiers are not allocated on the THD memory root
in the tokenizer any more. Allocation is now done
on later stages, in methods like LEX::create_item_ident().
3. Two LEX_CSTRING based structures were added:
- Lex_ident_cli_st - used to store the "client side"
identifier representation, pointing to the
query fragment. Note, these identifiers
are encoded in @@character_set_client
and can have broken byte sequences.
- Lex_ident_sys_st - used to store the "server side"
identifier representation, pointing to the
THD allocated memory. This representation
guarantees that the identifier was checked
for being well-formed, and is encoded in utf8.
4. To distinguish between two identifier types
in the grammar, two Bison types were added:
<ident_cli> and <ident_sys>
5. All non-reserved keywords were marked as
being of the type <ident_cli>.
All reserved keywords are still of the type NONE.
6. All curly brackets in rules collecting
non-reserved keywords into non-terminal
symbols were removed, e.g.:
Was:
keyword_sp_data_type:
BIT_SYM {}
| BOOLEAN_SYM {}
Now:
keyword_sp_data_type:
BIT_SYM
| BOOLEAN_SYM
It is important NOT to have brackets here!
This is needed to make sure that the underlying
Lex_ident_cli_st structure correctly passes up to
the calling rule.
7. The code to scan identifiers and keywords
was moved from lex_one_token() into new
Lex_input_stream methods:
scan_ident_sysvar()
scan_ident_start()
scan_ident_middle()
scan_ident_delimited()
This was done to:
- get rid of an enormous amount of references to &yylval->lex_str
- and remove a lot of references like lip->xxx
8. The allocating functionality which puts identifiers on the
THD memory root now resides in methods of Lex_ident_sys_st,
and in THD::to_ident_sys_alloc().
get_quoted_token() was removed.
9. Cleanup: check_simple_select() was moved to be a method of LEX.
10. Cleanup: Some more functionality was moved from *.yy
into new LEX methods:
make_item_colon_ident_ident()
make_item_func_call_generic()
create_item_qualified_asterisk()
These tests can sporadically show mutex deadlock warnings between the
LOCK_wsrep_thd and LOCK_thd_data mutexes. This means that these mutexes
can be locked in opposite order by different threads, and thus result in
a deadlock situation.
To fix such an issue, the locking policy of these mutexes should be
revised and enforced to be uniform. However, a quick code review shows
that the number of lock/unlock operations for these mutexes combined is
between 100-200, and all these mutex invocations would have to be
checked/fixed.
On the other hand, it turns out that LOCK_wsrep_thd is used for
protecting access to wsrep variables of THD (wsrep_conflict_state,
wsrep_query_state), whereas LOCK_thd_data protects the query, db and
mysys_var variables in THD. Extending LOCK_thd_data to also protect
wsrep variables looks like a viable solution, as there should not be a
use case where separate threads need simultaneous access to wsrep
variables and THD data variables.
In this commit the LOCK_wsrep_thd mutex is refactored to be replaced by
LOCK_thd_data. Bluntly replacing LOCK_wsrep_thd with LOCK_thd_data would
result in double locking of LOCK_thd_data, and some adjustments have
been performed to fix such situations.
Methods:
- Item_user_var_as_out_param::print_for_load()
- sql_exchange::escaped_given(void)
Parameters:
- sql_exchange in write_execute_load_query_log_event()
- sql_exchange in mysql_load()
- sql_exchange in Load_log_event::Load_log_event()
Also, removing cast to "char*" in a few places in
Load_log_event::Load_log_event()
Backporting from bb-10.2-compatibility to bb-10.2-ext
Version: 2018-01-26
- CREATE PACKAGE [BODY] statements are now
entirely written to mysql.proc with type='PACKAGE' and type='PACKAGE BODY'.
- CREATE PACKAGE BODY now supports IF NOT EXISTS
- DROP PACKAGE BODY now supports IF EXISTS
- CREATE OR REPLACE PACKAGE [BODY] is now supported
- CREATE PACKAGE [BODY] now supports the DEFINER clause:
CREATE DEFINER user@host PACKAGE pkg ... END;
CREATE DEFINER user@host PACKAGE BODY pkg ... END;
- CREATE PACKAGE [BODY] now supports SQL SECURITY and COMMENT clauses, e.g.:
CREATE PACKAGE p1 SQL SECURITY INVOKER COMMENT "comment" AS ... END;
- Package routines are now created from the package CREATE PACKAGE BODY
statement and don't produce individual records in mysql.proc.
- CREATE PACKAGE BODY now supports package-wide variables.
Package variables can be read and set inside package routines.
Package variables are stored in a separate sp_rcontext,
which is cached in THD on the first package routine call.
- CREATE PACKAGE BODY now supports the initialization section.
- All public routines (i.e. declared in CREATE PACKAGE)
must have implementations in CREATE PACKAGE BODY
- Only public package routines are available outside of the package
- {CREATE|DROP} PACKAGE [BODY] now respects CREATE ROUTINE and ALTER ROUTINE
privileges
- "GRANT EXECUTE ON PACKAGE BODY pkg" is now supported
- SHOW CREATE PACKAGE [BODY] is now supported
- SHOW PACKAGE [BODY] STATUS is now supported
- CREATE and DROP for PACKAGE [BODY] now works for non-current databases
- mysqldump now supports packages
- "SHOW {PROCEDURE|FUNCTION) CODE pkg.routine" now works for package routines
- "SHOW PACKAGE BODY CODE pkg" now works (the package initialization section)
- A new package body level MDL was added
- Recursive calls for package procedures are now possible
- Routine forward declarations in CREATE PACKAGE BODY are now supported.
- Package body variables now work as SP OUT parameters
- Package body variables now work as SELECT INTO targets
- Package body variables now support ROW, %ROWTYPE, %TYPE
Because NOW() is set to system time, unless overridden.
And both should follow big manual system time changes,
while still coping with low-resolution system clocks.
Ignoring system time changes is both confusing and
breaks with restarts.
RBR cannot work with system versioning on the master.
row_end column is either system time (not @@timestamp) with microsecond
precision or transaction id. Either way, it'll certainly be different
on the slave. So if the master row contains row_end column, it won't
match on the slave. And if we ignore row_end when comparing,
then some other row might match instead.