- In ha_innobase::prepare_inplace_alter_table(), InnoDB should
check whether the table is empty. If the table is empty then the
server should avoid downgrading the MDL after the prepare phase.
It is more like an instant ALTER: it changes only the dictionary
and metadata.
- Changed a few debug test cases to use a non-empty table for DDL
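A minimal sketch of the scenario, assuming an InnoDB table that is empty
when the ALTER starts:

CREATE TABLE t1 (a INT) ENGINE=InnoDB;
-- t1 is empty, so the MDL taken for the prepare phase is not downgraded:
-- the operation changes only the dictionary and metadata, much like an
-- instant ALTER.
ALTER TABLE t1 ADD INDEX i1 (a), ALGORITHM=INPLACE;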
Drop and add of the same key is considered a rename (see ALTER_RENAME_INDEX
in fill_alter_inplace_info()). But in this case the order of keys may
change, because mysql_prepare_alter_table() does not yet know about the
rename and treats it as 2 operations: drop and add.
In that case we disable the inplace algorithm for engines such as Memory,
MyISAM and Aria with the ALTER_INDEX_ORDER flag. These engines have no
specialized check_if_supported_inplace_alter(), and the default
handler::check_if_supported_inplace_alter() sees an unknown flag and
returns HA_ALTER_INPLACE_NOT_SUPPORTED.
ha_innobase::check_if_supported_inplace_alter() works differently and
inplace is not disabled (with the help of the modified
INNOBASE_INPLACE_IGNORE). The add_drop_v_cols fork was also tweaked, as it
wrongly failed with MSG_UNSUPPORTED_ALTER_ONLINE_ON_VIRTUAL_COLUMN
when it saw ALTER_INDEX_ORDER.
A no-op operation must still be a no-op regardless of the presence of
ALTER_INDEX_ORDER, so we tweak its condition as well.
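A sketch of the pattern that fill_alter_inplace_info() recognizes as a
rename (table and key names are illustrative):

CREATE TABLE t1 (a INT, KEY k1 (a)) ENGINE=MyISAM;
-- Dropping a key and adding an identically defined one under a new name is
-- detected as a rename, but mysql_prepare_alter_table() still treats it as
-- drop + add, so the key order in the resulting table may change:
ALTER TABLE t1 DROP KEY k1, ADD KEY k2 (a);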
mysql_prepare_create_table() does my_qsort(sort_keys) on the key
info. This sorting is non-deterministic: a table is created with one
order and an inplace alter may overwrite the frm with another order. Since
inplace alter does nothing about the key info for the MyISAM/Aria storage
engines, this results in a discrepancy between the frm and the storage
engine key definitions.
The fix avoids the sorting of keys when no new keys are added by ALTER
(and this is OK for MyISAM/Aria since they cannot add new keys inplace).
There is a case where the implicit primary key may change when removing
NOT NULL from a part of a unique key. In that case we update
modified_primary_key, which is then used to not skip key sorting.
According to is_candidate_key() there are no other cases where the primary
key may change implicitly.
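A sketch of that case, assuming the usual rule that the first UNIQUE key
over only NOT NULL columns serves as the implicit primary key:

CREATE TABLE t1 (a INT NOT NULL, b INT NOT NULL,
                 UNIQUE KEY u1 (a), UNIQUE KEY u2 (b)) ENGINE=MyISAM;
-- u1 acts as the implicit primary key. Making its column nullable
-- promotes u2 instead, so key sorting must not be skipped here:
ALTER TABLE t1 MODIFY a INT NULL;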
Notes:
mi_keydef_write()/mi_keyseg_write() are used only in mi_create(). They
should be used in ha_inplace_alter_table() as well.
Aria corruption detection is unimplemented: maria_check_definition()
is never used!
MySQL 8.0 has this bug as well as of 8.0.26.
This breaks main.long_unique in 10.4. The new result is correct and
should be applied, as it is just a different (the original) order of keys.
mysql_write_frm(): Correctly enclose code inside
#ifdef WITH_PARTITION_STORAGE_ENGINE
so that cmake -DPLUGIN_PARTITION=NO builds can succeed.
This was broken in
commit b7bba721ee.
Syntax for CONVERT TABLE
ALTER TABLE tbl_name CONVERT TABLE tbl_name TO PARTITION partition_name partition_spec
Examples:
ALTER TABLE t1 CONVERT TABLE tp2 TO PARTITION p2 VALUES LESS THAN MAX_VALUE();
The new ALTER_PARTITION_CONVERT_IN command for
fast_alter_partition_table() is handled by the alter_partition_convert_in()
function, which basically does ha_rename_table().
The table structure and data checks are basically the same as in the
EXCHANGE PARTITION command, and are done by
compare_table_with_partition() and check_table_data().
Atomic DDL follows the scheme from MDEV-22166 (see the
corresponding commit message). The only difference is that it also has
to drop the source table frm, which is done by WFRM_DROP_CONVERTED_FROM.
The initial patch was done by Dmitry Shulga <dmitry.shulga@mariadb.com>
Syntax for CONVERT keyword
ALTER TABLE tbl_name
[alter_option [, alter_option] ...] |
[partition_options]
partition_option: {
...
| CONVERT PARTITION partition_name TO TABLE tbl_name
}
Examples:
ALTER TABLE t1 CONVERT PARTITION p2 TO TABLE tp2;
The new ALTER_PARTITION_CONVERT_OUT command for
fast_alter_partition_table() is handled by the alter_partition_convert_out()
function, which basically does ha_rename_table().
The partition to extract is marked with the same flag as a dropped
partition: PART_TO_BE_DROPPED. Note that we cannot have multiple
partitioning commands in one ALTER.
For DDL logging the principle is basically the same as for other
fast_alter_partition_table() commands. The only difference is that it
integrates the late Atomic DDL functions and introduces an additional
phase, WFRM_BACKUP_ORIGINAL. That is required for binlog consistency:
otherwise we could not revert after WFRM_INSTALL_SHADOW is done, and if we
crash or fail before the DDL log is complete, the altered table would
already be new while the binlog would miss that ALTER command. Note that
this is different from all other atomic DDL in that it rolls back until
ddl_log_complete() is done, even if everything was fully done before the
crash.
Test cases added to:
parts.alter_table \
parts.partition_debug \
versioning.partition \
atomic.alter_partition
Dead code cleanup:
part_info->num_parts usage was wrong and worked incorrectly in
mysql_drop_partitions() because num_parts is already updated in
prep_alter_part_table(). We don't have to update part_info->partitions
because part_info is destroyed at alter_partition_lock_handling().
Cleanups:
- DBUG_EVALUATE_IF() macro replaced by shorter form DBUG_IF();
- Typo in ER_KEY_COLUMN_DOES_NOT_EXITS.
Refactorings:
- Split write_log_replace_delete_frm() into write_log_delete_frm()
and write_log_replace_frm();
- partition_info via DDL_LOG_STATE;
- set_part_info_exec_log_entry() removed.
DBUG_EVALUATE removed
DBUG_EVALUATE was only added for consistency together with
DBUG_EVALUATE_IF. It is not used anywhere in the code.
DBUG_SUICIDE() fix on release build
On release builds DBUG_SUICIDE() was a statement. That was wrong, as
DBUG_SUICIDE() is used in expression context.
Assertion `!pk->has_virtual()' failed in dict_index_build_internal_clust
while creating a PRIMARY key longer than possible to store in the page.
This happened because the key was wrongly deduced to be long-UNIQUE
supported, but a PRIMARY KEY cannot be of that type. The main reason is
that only 8 bytes are used to store the hash, see HA_HASH_FIELD_LENGTH.
This is also why the HA_NOSAME flag is removed (which in turn caused the
assertion) in open_table_from_share:
if (key_info->algorithm == HA_KEY_ALG_LONG_HASH)
{
  key_part_end++;
  key_info->flags&= ~HA_NOSAME;
}
To make it unique, an additional check is done by the
check_duplicate_long_entries call from ha_write_row, and a similar one from
ha_update_row.
A PRIMARY key of this kind is already forbidden, which is checked by the
first test in main.long_unique; however, is_hash_field_needed was wrongly
deduced to true in mysql_prepare_create_table in this particular case.
FIX:
* Improve the check for the Key::PRIMARY type
* Simplify the is_hash_field_needed deduction for neater reading
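An illustrative contrast between the two cases (long-hash UNIQUE keys are
supported; an equivalent PRIMARY KEY is not):

-- A UNIQUE key over a long column is backed by an 8-byte hash
-- (HA_KEY_ALG_LONG_HASH); duplicates are rejected on write by
-- check_duplicate_long_entries():
CREATE TABLE t1 (a TEXT, UNIQUE KEY (a)) ENGINE=InnoDB;
-- A PRIMARY KEY cannot use the hash mechanism, so this is rejected
-- instead of tripping the assertion:
CREATE TABLE t2 (a TEXT, PRIMARY KEY (a)) ENGINE=InnoDB;  -- error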
- There should be no substitution if the engine exists, only when it
doesn't exist
- Handle the error when the sys_var `default_tmp_storage_engine` is
assigned to an unsupported engine.
- RocksDB doesn't support the embedded server (see ebfc4e6ad0), so it is
excluded
Closes PR #774
Reviewed by: serg@mariadb.com, vicentiu@mariadb.org
We cannot revert the ALTER, so anything happening after
the point of no return should not be treated as an error. A
very unfortunate condition that a user needs to be warned about - yes,
but we cannot say "ALTER TABLE has failed" if the table was successfully
altered.
A table rebuild that would truncate the default value of a
DATE column is expected to issue data truncation warnings.
But these warnings were not issued if the ADD COLUMN
was executed with ALGORITHM=INSTANT. InnoDB now sets the
warning on the field while assigning the default value
of the field during check_if_supported_inplace_alter().
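A hypothetical repro sketch (column name and default value are
illustrative):

CREATE TABLE t1 (a INT) ENGINE=InnoDB;
-- The DATETIME-style default has to be truncated to fit the DATE column;
-- the truncation warning must be issued even with ALGORITHM=INSTANT:
ALTER TABLE t1 ADD COLUMN d DATE DEFAULT '2021-01-01 10:11:12',
  ALGORITHM=INSTANT;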
Within this task the following changes were made:
- Added sending of metadata info in the prepare phase for the admin related
commands (check table, checksum table, repair, optimize, analyze).
- Refactored the implementation of the HELP command to support its
execution in PS mode
- Added support for execution of LOAD INTO and XA-related statements
in PS mode
- Modified mysqltest.cc to run statements in PS mode unconditionally
in case the option --ps-protocol is set. Formerly, only those statements
were executed using the PS protocol that matched a hard-coded regular
expression
- Fixed the following issues:
The statement
explain select (select 2)
executed in regular and PS mode produces different results:
MariaDB [test]> prepare stmt from "explain select (select 2)";
Query OK, 0 rows affected (0,000 sec)
Statement prepared
MariaDB [test]> execute stmt;
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
| 1 | PRIMARY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | No tables used |
| 2 | SUBQUERY | NULL | NULL | NULL | NULL | NULL | NULL | NULL | No tables used |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
2 rows in set (0,000 sec)
MariaDB [test]> explain select (select 2);
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
| 1 | SIMPLE | NULL | NULL | NULL | NULL | NULL | NULL | NULL | No tables used |
+------+-------------+-------+------+---------------+------+---------+------+------+----------------+
1 row in set, 1 warning (0,000 sec)
In case the statement
CREATE TABLE t1 SELECT * FROM (SELECT 1 AS a, (SELECT a+0)) a
is run in PS mode it fails with the error
ERROR 1054 (42S22): Unknown column 'a' in 'field list'.
- Uniform handling of read-only variables both in case the SET var=val
statement is executed as a regular or a prepared statement.
- Fixed an assertion firing on handling the LOAD DATA statement for
temporary tables
- Relaxed the assert condition in the function lex_end_stage1() by adding
the commands SQLCOM_ALTER_EVENT, SQLCOM_CREATE_PACKAGE,
SQLCOM_CREATE_PACKAGE_BODY to the list of supported commands
- Removed raising of the error ER_UNSUPPORTED_PS in the function
check_prepared_statement() for the ALTER VIEW command
- Added initialization of the data member st_select_lex_unit::last_procedure
(assign NULL value) in the constructor
Without this change the test case main.ctype_utf8 fails with the following
report in case it is run with the option --ps-protocol.
mysqltest: At line 2278: query 'VALUES (_latin1 0xDF) UNION VALUES(_utf8'a' COLLATE utf8_bin)' failed: 2013: Lost connection
- The following bug reports were fixed:
MDEV-24460: Multiple rows result set returned from stored
routine over prepared statement binary protocol is
handled incorrectly
CONC-519: mariadb client library doesn't handle server_status and
warning_count fields received in the packet
COM_STMT_EXECUTE_RESPONSE.
The reasons for these bug reports have the same nature and are caused by
a missing loop iteration over the results sent by the server in response
to a COM_STMT_EXECUTE packet.
Enclosing the statements for processing of the COM_STMT_EXECUTE response
in a construct like
do
{
  ...
} while (!mysql_stmt_next_result());
fixes the above mentioned bug reports.
In commit 1c5ae99194 (MDEV-25666)
we had changed Mariabackup so that it would no longer skip files
whose names start with #sql. This turned out to be wrong:
because operations on such files are not protected by any
locks in the server, it is not safe to copy them.
Not copying the files may make the InnoDB data dictionary
inconsistent with the file system, so we must do something
in InnoDB to adjust for that.
If InnoDB is being started up without the redo log (ib_logfile0)
or with a zero-length log file, we will assume that the server
was restored from a backup, and adjust things as follows:
dict_check_sys_tables(), fil_ibd_open(): Do not complain about
missing #sql files if they would be dropped a little later.
dict_stats_update_if_needed(): Never add #sql tables to
the recomputing queue. This avoids a potential race condition when
dropping the garbage tables.
drop_garbage_tables_after_restore(): Try to drop any garbage tables.
innodb_ddl_recovery_done(): Invoke drop_garbage_tables_after_restore()
if srv_start_after_restore (a new flag) was set and we are not in
read-only mode (innodb_read_only=ON or innodb_force_recovery>3).
The tests and dbug_mariabackup_event() instrumentation
were developed by Vladislav Vaintroub, who also reviewed this.
1. Work around MDEV-25912 to not apply the assert
at wsrep running time;
2. Handle the wsrep mode of server recovery;
3. Convert hton calls to static binlog_commit ones;
4. Satisfy an MSAN complaint about an uninitialized std::pair.
- InnoDB fails to check the DB_COMPUTE_VALUE_FAILED error in
row_merge_read_clustered_index() and wrongly asserts that
the buffer shouldn't run out of memory. ALTER TABLE
should give a warning when the column value is being
truncated.
When doing ALTER TABLE ... RENAME, MariaDB doesn't rename the
original table to #sql-backup, as it does in other cases,
but instead drops the original table directly. However,
this optimization doesn't work for an InnoDB table
with a foreign key constraint.
During the copy algorithm, InnoDB fails to rename the foreign key
constraint (MDEV-25855). With this optimization, InnoDB also
fails to drop the original table because a FOREIGN KEY constraint
for the table still exists in the INNODB_SYS_FOREIGN table.
This leads to an orphan .ibd file in the InnoDB dictionary,
so this optimization is disabled when foreign keys are involved.
Reviewer: monty@mariadb.org
This fixes MySQL bug #20338 about misuse of the double underscore
prefix __WIN__, which was old MySQL's way of identifying Windows.
Replace it with the standard symbol _WIN32 for targeting Windows OS
(both 32 and 64 bit).
Note that the CONNECT storage engine is not fixed in this patch (it must
be fixed in the "upstream" branch).
MDEV-25604 Atomic DDL: Binlog event written upon recovery does not
have default database
The purpose of this task is to ensure that ALTER TABLE is atomic even if
the MariaDB server is killed at any point during the ALTER TABLE.
This means that either the ALTER TABLE succeeds (including that triggers,
the status tables and the binary log are updated) or things should be
reverted to their original state.
If the server crashes before the new version is fully up to date and
committed, it will revert to the original table and remove all
temporary files and tables.
If the new version is committed, crash recovery will use the new version,
and update triggers, the status tables and the binary log.
The one exception is ALTER TABLE .. RENAME .. where no changes are done
to the table definition. This one will work as RENAME and roll back unless
the whole statement completed, including updating the binary log (if
enabled).
Other changes:
- Added handlerton->check_version() function to allow the ddl recovery
code to check, in case of inplace alter table, if the table in the
storage engine is of the new or old version.
- Added handler->table_version() so that an engine can report the current
version of the table. This should be changed each time the table
definition changes.
- Added ha_signal_ddl_recovery_done() and
handlerton::signal_ddl_recovery_done() to inform all handlers when
ddl recovery has been done. (Needed by InnoDB).
- Added the handlerton call inplace_alter_table_committed, to signal the
engine that the ddl_log has been closed for the alter table query.
- Added the new handlerton flag
HTON_REQUIRES_NOTIFY_TABLEDEF_CHANGED_AFTER_COMMIT to signal when we
should call hton->notify_tabledef_changed() during
mysql_inplace_alter_table. This was required as MyRocks and InnoDB
needed the call at different times.
- Added function server_uuid_value() to be able to generate a temporary
xid when ddl recovery writes the query to the binary log. This is
needed to be able to handle crashes during ddl log recovery.
- Moved freeing of the frm definition to end of mysql_alter_table() to
remove duplicate code and have a common exit strategy.
-------
InnoDB part of atomic ALTER TABLE
(Implemented by Marko Mäkelä)
innodb_check_version(): Compare the saved dict_table_t::def_trx_id
to determine whether an ALTER TABLE operation was committed.
We must correctly recover dict_table_t::def_trx_id for this to work.
Before purge removes any trace of DB_TRX_ID from system tables, it
will make an effort to load the user table into the cache, so that
the dict_table_t::def_trx_id can be recovered.
ha_innobase::table_version(): return garbage, or the trx_id that would
be used for committing an ALTER TABLE operation.
In InnoDB, table names starting with #sql-ib will remain special:
they will be dropped on startup. This may be revisited later in
MDEV-18518 when we implement proper undo logging and rollback
for creating or dropping multiple tables in a transaction.
Table names starting with #sql will retain some special meaning:
dict_table_t::parse_name() will not consider such names for
MDL acquisition, and dict_table_rename_in_cache() will treat such
names specially when handling FOREIGN KEY constraints.
Simplify InnoDB DROP INDEX.
Prevent purge wakeup
To ensure that dict_table_t::def_trx_id will be recovered correctly
in case the server is killed before ddl_log_complete(), we will block
the purge of any history in SYS_TABLES, SYS_INDEXES, SYS_COLUMNS
between ha_innobase::commit_inplace_alter_table(commit=true)
(purge_sys.stop_SYS()) and purge_sys.resume_SYS().
The completion callback purge_sys.resume_SYS() must be between
ddl_log_complete() and MDL release.
--------
MyRocks support for atomic ALTER TABLE
(Implemented by Sergei Petrunia)
Implement these SE API functions:
- ha_rocksdb::table_version()
- hton->check_version = rocksdb_check_version
The MyRocks data dictionary now stores a table version for each table.
(Absence of a table version record is interpreted as table_version=0,
that is, no upgrade changes are needed)
- For inplace alter table of a partitioned table, call the underlying
handlerton when checking if the table is ok. This assumes that the
partition engine commits all changes at once.
ALTER TABLE .. RENAME, when used with the inplace algorithm, does:
- Do an inplace or online alter to the new definition
- Rename to new name
- Update triggers.
If updating the triggers fails, we rename the table back.
The problem with this approach is that the table would have the new
definition but the rename would fail. The binary log would also not be
updated.
The solution to this is to very early check if we can rename triggers
and give an error if this would fail.
Both ALTER TABLE ... RENAME and RENAME TABLE are fixed.
This was implemented by moving the pre-check of rename table in triggers
from Table_triggers_list::change_table_name() to
Table_triggers_list::prepare_for_rename().
There are a few different cases to consider
Logging of CREATE TABLE and CREATE TABLE ... LIKE
- If REPLACE is used and there was an existing table, DDL log the drop of
the table.
- If discovery of table is to be done
- DDL LOG create table
else
- DDL log create table (with engine type)
- create the table
- If table was created
- Log entry to binary log with xid
- Mark DDL log completed
Crash recovery:
- If the query was in the binary log, do nothing and exit
- If the table was discovered
- Delete the .frm file
- else
- Drop the created table and frm file
- If the table was dropped, write a DROP TABLE statement to the binary log
CREATE TABLE ... SELECT required a little more work, as when one is using
statement logging the query is written to the binary log before the commit
is done.
This was fixed by adding a DROP TABLE to the binary log during crash
recovery if the ddl log entry was not closed. In this case the binary log
will contain:
CREATE TABLE xxx ... SELECT ....
DROP TABLE xxx;
Other things:
- Added debug_crash_here() functionality to Aria to be able to test a
crash in CREATE TABLE between the creation of the .MAI and the .MAD files.
Description of how DROP DATABASE works after this patch
- Collect list of tables
- DDL log tables as they are dropped
- DDL log drop database
- Delete db.opt
- Delete data directory
- Log either DROP TABLE or DROP DATABASE to binary log
- Deactivate the ddl log entry
This is in line with how things were before (minus ddl logging), except
that we delete the db.opt file last so as to not lose it if DROP DATABASE
fails.
On recovery we have to ensure that all dropped tables are logged in the
binary log and that they are properly dropped (as with atomic drop
table).
No new tables will be dropped as part of recovery.
Recovery of active drop database ddl log entry:
- If drop database was logged to ddl log but was not found in the binary
log:
- drop the db.opt file and database directory.
- Log DROP DATABASE to binary log
- If drop database was not logged to ddl log
- Update binary log with DROP TABLE of the dropped tables. If table list
is longer than max_allowed_packet, then the query will be split into
multiple DROP TABLE/VIEW queries.
Other things:
- Added DDL_LOG_STATE and 'current database' as arguments to
mysql_rm_table_no_locks(). This was needed to be able to combine
ddl logging of DROP DATABASE and DROP TABLE and make the generated
DROP TABLE statements shorter.
- To make the DROP TABLE statement created by ddl log shorter, I changed
the binlogged query to use current directory and omit the directory
part for all tables in the current directory.
- Merged some DROP TABLE and DROP VIEW code in the ddl logger. This was
done to be able to get separate DROP VIEW and DROP TABLE statements in the
binary log.
- Added a 'recovery_state' variable to remember the state of dropped
tables and views.
- Moved out code that drops database objects (stored procedures) from
mysql_rm_db_internal() to drop_database_objects() for better code reuse.
- Made mysql_rm_db_internal() global so that it can be used by the ddl
recovery code.
Logging logic:
- Log tables, just before they are dropped, to the ddl log
- After the last table for the statement is dropped, log an xid for the
whole ddl log event
In case of crash:
- First remove any active DROP TABLE events from the ddl log that match
xids found in the binary log (this means the drop was successful and was
properly logged).
- Loop over all active DROP TABLE events
- Ensure that the table is completely dropped
- Write a DROP TABLE entry to the binary log with the dropped tables.
Other things:
- Added code to ha_drop_table() to be able to tell the difference between
a get_new_handler() failure because of out-of-memory and because the
handler refused/was not able to create a handler. This was needed
to get sequences to work, as sequences need a share object to be passed
to get_new_handler()
- TC_LOG_BINLOG::recover() was changed to always collect Xids from the
binary log and always call ddl_log_close_binlogged_events(). This was
needed to be able to collect DROP TABLE events with embedded Xids
(used by the ddl log).
- Added a new variable "$grep_script" to the binlog filter to be able to
find only rows that match a regexp.
- Had to adjust some tests that changed because drop statements are a bit
larger in the binary log than before (as we have to store the xid)
Other things:
- MDEV-25588 Atomic DDL: Binlog query event written upon recovery is corrupt
fixed (in the original commit).
- Major rewrite of ddl_log.cc and ddl_log.h
- ddl_log.cc describes in the beginning how the recovery works.
- ddl_log.log has a unique signature and is dynamic. It's easy to
add more information to the header and other ddl blocks while still
being able to execute old ddl entries.
- IO_SIZE for ddl blocks is now dynamic. It can be changed without
affecting recovery of old logs.
- The code is more modular and is now usable outside of partition handling.
- Renamed the log file to ddl_recovery.log and added the option
--log-ddl-recovery to allow one to specify the path & filename.
- Added ddl_log_entry_phase[], the number of phases for each DDL action,
which allowed me to greatly simplify set_global_from_ddl_log_entry()
- Changed how strings are stored in log entries, which allows us to
store much more information in a log entry.
- The ddl log is now always created at start and deleted on normal
shutdown. This simplifies things notably.
- Added probes debug_crash_here() and debug_simulate_error() to simplify
crash testing and to allow a crash after a given number of times a probe
is executed. See the comments in debug_sync.cc and rename_table.test for
how this can be used.
- Reverting failed table and view renames is done through the ddl log.
This ensures that the ddl log is tested also outside of recovery.
- Added the helper function handler::needs_lower_case_filenames()
- Extended the binary log with Q_XID events. The ddl log handling uses this
to check if a ddl log entry was logged to the binary log (if yes,
it will be deleted from the log during ddl_log_close_binlogged_events())
- If a DDL entry fails 3 times, disable it. This is to ensure that if
we have a crash in the ddl recovery code the server will not get stuck
in a forever crash-restart-crash loop.
mysqltest.cc changes:
- --die will now replace $variables with their values
- $error will contain the error of the last failed statement
storage engine changes:
- maria_rename() was changed to be more robust against crashes during
rename.
The reasons for the removal are:
- Generates more code
- Storing and retrieving the THD
- Causes extra code and data to be generated to handle possible thrown
exceptions (which never happen in MariaDB code)
- Uses more stack space
Other things:
- Changed convert_const_to_int() to use item->save_in_field_no_warnings(),
which made the code shorter and simpler.
- Removed not needed code in Sp_handler::sp_create_routine()
- Added thd as an argument to store_key.copy() to make the function simpler
- Added thd as an argument to some subselect* constructors that inherit
from Item_subselect.
DROP TABLE opens all temporary tables at start, but then
uses find_temporary_table() to check if a table is temporary
instead of is_temporary_table(), which is much faster.
This patch fixes this issue.
This change removed 68 explicit strlen() calls from the code.
The following renames were done to ensure we don't use the old names
when merging code from earlier releases, as using the new variables
for print functions could result in crashes:
- charset->csname renamed to charset->cs_name
- charset->name renamed to charset->coll_name
Almost everything was a mechanical change except:
- Changed to use the new Protocol::store(LEX_CSTRING..) when possible
- Changed to use field->store(LEX_CSTRING*, CHARSET_INFO*) when possible
- Changed to use String->append(LEX_CSTRING&) when possible
Other things:
- There were compiler issues with ensuring that all character set names
point to the same string: gcc doesn't allow one to use integer constants
when defining global structures (constant char * pointers work fine).
To get around this, I declared defines for each character set name
length.
Changes:
- To detect automatic strlen() I removed the methods in String that
use 'const char *' without a length:
- String::append(const char*)
- Binary_string(const char *str)
- String(const char *str, CHARSET_INFO *cs)
- append_for_single_quote(const char *)
All usage of append(const char*) is changed to either use
String::append(char), String::append(const char*, size_t length) or
String::append(LEX_CSTRING)
- Added STRING_WITH_LEN() around constant string arguments to
String::append()
- Added overflow argument to escape_string_for_mysql() and
escape_quotes_for_mysql() instead of returning (size_t) -1 on overflow.
This was needed as most usage of the above functions never tested the
result for -1 and would have given wrong results or crashes in case
of overflows.
- Added Item_func_or_sum::func_name_cstring(), which returns LEX_CSTRING.
Changed all Item_func::func_name()'s to func_name_cstring()'s.
The old Item_func_or_sum::func_name() is now an inline function that
returns func_name_cstring().str.
- Changed Item::mode_name() and Item::func_name_ext() to return
LEX_CSTRING.
- Changed for some functions the name argument from const char * to
to const LEX_CSTRING &:
- Item::Item_func_fix_attributes()
- Item::check_type_...()
- Type_std_attributes::agg_item_collations()
- Type_std_attributes::agg_item_set_converter()
- Type_std_attributes::agg_arg_charsets...()
- Type_handler_hybrid_field_type::aggregate_for_result()
- Type_handler_geometry::check_type_geom_or_binary()
- Type_handler::Item_func_or_sum_illegal_param()
- Predicant_to_list_comparator::add_value_skip_null()
- Predicant_to_list_comparator::add_value()
- cmp_item_row::prepare_comparators()
- cmp_item_row::aggregate_row_elements_for_comparison()
- Cursor_ref::print_func()
- Removed String_space(), as it was only used in one case and that
could be simplified to not use String_space(), thanks to the fixed
my_vsnprintf().
- Added some const LEX_CSTRING's for common strings:
- NULL_clex_str, DATA_clex_str, INDEX_clex_str.
- Changed primary_key_name to a LEX_CSTRING
- Renamed String::set_quick() to String::set_buffer_if_not_allocated() to
clarify what the function really does.
- Rename of protocol function:
bool store(const char *from, CHARSET_INFO *cs) to
bool store_string_or_null(const char *from, CHARSET_INFO *cs).
This was done both to clarify the difference between this 'store' function
and to make it easier to find suboptimal usage of store() calls.
- Added Protocol::store(const LEX_CSTRING*, CHARSET_INFO*)
- Changed some 'const char*' arrays to instead be of type LEX_CSTRING.
- class Item_func_units now uses LEX_CSTRING for the name.
Other things:
- Fixed a bug in mysql.cc:construct_prompt() where a wrong escape character
in the prompt would cause some part of the prompt to be duplicated.
- Fixed a lot of instances where the length of the argument to
append() is known or easily obtained but was not used.
- Removed some not needed 'virtual' definitions for functions that were
inherited from the parent. I added override to these.
- Fixed Ordered_key::print() to preallocate the needed buffer. The old
code could cause memory overruns.
- Simplified some loops when adding char * to a String with delimiters.
This was done to simplify copying of the with_* flags
Other things:
- Changed the flags to C++ enums, which enables gdb to print
out bit values for the flags. This also enables compiler
errors if one tries to manipulate a non-existing bit in
a variable.
- Added set_maybe_null() as a shortcut, as setting the
MAYBE_NULL flag was used in a LOT of places.
- Renamed the PARAM flag to SP_VAR to ensure it's not confused with
persistent statement parameters.
The reason for the change is that neither clang nor gcc can generate
efficient code when several bit fields are changed at the same time or
when copying one or more bits between identical bit fields.
Updating bits explicitly with & and | is MUCH more efficient than what
current compilers can do.
After dfb41fddf6, tables that failed to drop are excluded from the
binlogged DROP TABLE statement. It means that the slave should not
expect any errors when executing DROP TABLE, and the binlog should
report that no error has happened, even if one did.
Do not write the error code into the binlogged DROP TABLE,
and remove all code that was needed to compute it.
InnoDB fails to fetch the index type when the InnoDB dictionary
doesn't match the frm. InnoDB should return 'corrupted' if it
can't find the index in ha_innobase::index_type().
This is a backport of
commit fd9ca2a742 (MDEV-23295) and
commit 9a156e1a23 (MDEV-23345) to 10.3.
An instant ADD/DROP/reorder column could create a dummy table
object with the wrong ROW_FORMAT when innodb_default_row_format
was changed between CREATE TABLE and ALTER TABLE.
prepare_inplace_alter_table_dict(): If we had promised that
ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
The rest of the changes are related to adding
Alter_inplace_info::inplace_supported to cache the return value of
handler::check_if_supported_inplace_alter().
Before the FRM is written, walk vcol expressions through
check_table_name_processor() and check whether field items match the
(db, table_name) qualifier.
We cannot do this in check_vcol_func_processor(), as there are no table
name qualifiers any more in the expressions of a written and loaded FRM.
When a column is added to a non-empty table, existing rows get the
column's default value, or a "zero value" if the column has no default.
But this check should be skipped when an existing column is altered.
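A minimal sketch of the two cases:

CREATE TABLE t1 (a INT);
INSERT INTO t1 VALUES (1);
ALTER TABLE t1 ADD COLUMN b INT DEFAULT 42;  -- existing row gets b=42
ALTER TABLE t1 ADD COLUMN c INT NOT NULL;    -- no default: existing row gets 0
ALTER TABLE t1 MODIFY b BIGINT;              -- altering an existing column: skip the check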
Fixed by adding an MDL_BACKUP_COMMIT lock before altering temporary tables
whose creation was logged to the binary log (in which case the ALTER TABLE
must also be logged)
Problem:
The problem happened because of a conceptual flaw in the server code:
a. The table level CHARSET/COLLATE clause affected all data types,
including numeric and temporal ones:
CREATE TABLE t1 (a INT) CHARACTER SET utf8 [COLLATE utf8_general_ci];
In the above example, the Column_definition_attributes
(and then the FRM record) for the column "a" erroneously inherited
"utf8" as its character set.
b. The "ALTER TABLE t1 CONVERT TO CHARACTER SET csname" statement
also erroneously affected Column_definition_attributes::charset
for numeric and temporal data types and wrote "csname" as their
character set into FRM files.
So now we have arbitrary non-relevant charset ID values for numeric
and temporal data types in all FRM files in the world :)
The code in the server and the other engines did not seem to be affected
by this flaw. Only InnoDB inplace ALTER was affected.
Solution:
Fixing the code in the way that only character string data types
(CHAR,VARCHAR,TEXT,ENUM,SET):
- inherit the table level CHARSET/COLLATE clause
- get the charset value according to "CONVERT TO CHARACTER SET csname".
Numeric and temporal data types now always get &my_charset_numeric
in Column_definition_attributes::charset and always write its ID into FRM files:
- no matter what the table level CHARSET/COLLATE clause is, and
- no matter what "CONVERT TO CHARACTER SET" says.
Details:
1. Adding helper classes to pass small parts of HA_CREATE_INFO
into Type_handler methods:
- Column_derived_attributes - to pass table level CHARSET/COLLATE,
so columns that do not have explicit CHARSET/COLLATE clauses
can derive them from the table level, e.g.
CREATE TABLE t1 (a VARCHAR(1), b CHAR(1)) CHARACTER SET utf8;
- Column_bulk_alter_attributes - to pass bulk attribute changes
generated by the ALTER related code. These bulk changes affect
multiple columns at the same time:
ALTER TABLE ... CONVERT TO CHARACTER SET csname;
Note, passing the whole HA_CREATE_INFO directly to Type_handler
would not be good: HA_CREATE_INFO is huge and would add undesired
dependencies in sql_type.h and sql_type.cc. The Type_handler API should
use the smallest possible data types!
2. Type_handler::Column_definition_prepare_stage1() is now responsible
to set Column_definition::charset properly, according to the data type,
for example:
- For string data types, Column_definition_attributes::charset is set from
the table level CHARSET/COLLATE clause (if not specified explicitly in
the column definition).
- For numeric and temporal fields, Column_definition_attributes::charset is
set to &my_charset_numeric, no matter what the table level
CHARSET/COLLATE says.
- For GEOMETRY, Column_definition_attributes::charset is set to
&my_charset_bin, no matter what the table level CHARSET/COLLATE says.
Previously this code (setting `charset`) was outside of
Column_definition_prepare_stage1(), namely in
mysql_prepare_create_table(), and was erroneously called for
all data types.
3. Adding Type_handler::Column_definition_bulk_alter(), to handle
"ALTER TABLE .. CONVERT TO". Previously this code was inside
get_sql_field_charset() and was erroneously called for all data types.
4. Removing the Schema_specification_st parameter from
Type_handler::Column_definition_redefine_stage1().
Column_definition_attributes::charset is now fully properly initialized by
Column_definition_prepare_stage1(). So we don't need access to the
table level CHARSET/COLLATE clause in Column_definition_redefine_stage1()
any more.
5. Other changes:
- Removing global function get_sql_field_charset()
- Moving the part of the former get_sql_field_charset(), which was
responsible to inherit the table level CHARSET/COLLATE clause to
new methods:
-- Column_definition_attributes::explicit_or_derived_charset() and
-- Column_definition::prepare_charset_for_string().
This code is only needed for string data types.
Previously it was erroneously called for all data types.
- Moving another part, which was responsible to apply the
"CONVERT TO" clause, to
Type_handler_general_purpose_string::Column_definition_bulk_alter().
- Replacing the call of get_sql_field_charset() in sql_partition.cc
with sql_field->explicit_or_derived_charset() - it is perfectly enough.
The old code was redundant: get_sql_field_charset() was called from
sql_partition.cc only when there was no "CONVERT TO CHARACTER SET"
clause involved, so its purpose was only to inherit the table
level CHARSET/COLLATE clause.
- Moving the code handling the BINCMP_FLAG flag from
mysql_prepare_create_table() to
Column_definition::prepare_charset_for_string():
This code is responsible to resolve the BINARY comparison style
into the corresponding _bin collation, to do the following transparent
rewrite:
CREATE TABLE t1 (a VARCHAR(10) BINARY) CHARSET utf8; ->
CREATE TABLE t1 (a VARCHAR(10) CHARACTER SET utf8 COLLATE utf8_bin);
This code is only needed for string data types.
Previously it was erroneously called for all data types.
6. Renaming Table_scope_and_contents_source_pod_st::table_charset
to alter_table_convert_to_charset, because the only purpose it's used for
is handling "ALTER .. CONVERT". The new name is much more self-descriptive.
The problem was that the CONNECT engine tries to open the .frm file
during drop_table(), which the code did not take into account.
Fixed by adding the HA_REUSES_FILE_NAMES table flag to CONNECT.
Other things:
- Fixed a wrong test of HA_REUSES_FILE_NAMES in mysql_alter_table()
(the comment was correct, but not the code)
- Added a test to the CONNECT engine so that if the .frm it tries to use
in delete is not made for CONNECT, it will generate an error instead of
crashing.
This feature adds the functionality of ignorability for indexes.
Indexes are not ignored by default.
To control index ignorability explicitly for a new index,
use IGNORED or NOT IGNORED as part of the index definition for
CREATE TABLE, CREATE INDEX, or ALTER TABLE.
Primary keys (explicit or implicit) cannot be made ignorable.
The table INFORMATION_SCHEMA.STATISTICS gets a new column named IGNORED
that stores whether an index is ignored or not.
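A minimal sketch of the user-facing syntax (assuming the IGNORED /
NOT IGNORED spelling that matches the INFORMATION_SCHEMA column):

CREATE TABLE t1 (a INT, b INT, KEY k1 (b) IGNORED);
ALTER TABLE t1 ALTER INDEX k1 NOT IGNORED;  -- visible to the optimizer again
SELECT INDEX_NAME, IGNORED FROM INFORMATION_SCHEMA.STATISTICS
  WHERE TABLE_NAME = 't1';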
Partially revert 76063c2a13. Item::clone() is not an all-purpose
Item copying machine; it was specifically created for pushdown
of predicates into derived tables and views, and it does not
copy everything. In particular, it does not copy Item_func_regex.
Fix the bug differently by preserving the old constraint name.
But keep setting automatic_name=true to have it regenerated
for cases like ALTER TABLE ... ADD CONSTRAINT.
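An illustrative sketch of the two naming cases (table and constraint names
are hypothetical):

CREATE TABLE t1 (a INT, CONSTRAINT chk_a CHECK (a > 0));
ALTER TABLE t1 MODIFY a BIGINT;                -- the old name chk_a is preserved
ALTER TABLE t1 ADD CONSTRAINT CHECK (a < 10);  -- automatic_name: a new name is generated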
Ever since commit 007f68c37f,
ALTER TABLE no longer invokes handler::open() after
handler::commit_inplace_alter_table().
ha_innobase::reload_statistics(): Reload or recompute statistics
after ALTER TABLE.
innodb_notify_tabledef_changed(): A new function to invoke
ha_innobase::reload_statistics().
handlerton::notify_tabledef_changed(): Add the parameter handler*
so that ha_innobase::reload_statistics() can be invoked.
ha_partition::notify_tabledef_changed(),
partition_notify_tabledef_changed(): Pass through the call
to any partitions or subpartitions.
This is based on code that was supplied by Monty.
Added a new enum variable `wsrep_mode` which can be used to turn on WSREP
features which are not part of the default behaviour.
Added the enum values `BINLOG_ROW_FORMAT_ONLY`, `REQUIRED_PRIMARY_KEY` and
`STRICT_REPLICATION`. `wsrep-mode=STRICT_REPLICATION` behaves
like the variable `wsrep_strict_ddl`.
The variable wsrep_strict_ddl is deprecated, and if it is set, the new
wsrep_mode setting is used instead.
Reviewed and improved by: Jan Lindström <jan.lindstrom@mariadb.com>
We implement an idea that was suggested by Michael 'Monty' Widenius
in October 2017: When InnoDB is inserting into an empty table or partition,
we can write a single undo log record TRX_UNDO_EMPTY, which will cause
ROLLBACK to clear the table.
For this to work, the insert into an empty table or partition must be
covered by an exclusive table lock that will be held until the transaction
has been committed or rolled back, or the INSERT operation has been
rolled back (and the table is empty again), in lock_table_x_unlock().
Clustered index records that are covered by the TRX_UNDO_EMPTY record
will carry DB_TRX_ID=0 and DB_ROLL_PTR=1<<55, and thus they cannot
be distinguished from what MDEV-12288 leaves behind after purging the
history of row-logged operations.
Concurrent non-locking reads must be adjusted: If the read view was
created before the INSERT into an empty table, then we must continue
to imagine that the table is empty, and not try to read any records.
If the read view was created after the INSERT was committed, then
all records must be visible normally. To implement this, we introduce
the field dict_table_t::bulk_trx_id.
This special handling only applies to the very first INSERT statement
of a transaction for the empty table or partition. If a subsequent
statement in the transaction is modifying the initially empty table again,
we must enable row-level undo logging, so that we will be able to
roll back to the start of the statement in case of an error (such as
duplicate key).
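A sketch of the user-visible behaviour (illustrative):

CREATE TABLE t1 (a INT PRIMARY KEY) ENGINE=InnoDB;
BEGIN;
INSERT INTO t1 VALUES (1),(2),(3);  -- empty table: one TRX_UNDO_EMPTY undo record
INSERT INTO t1 VALUES (4);          -- later statement: row-level undo logging again
ROLLBACK;                           -- rolling back TRX_UNDO_EMPTY empties the table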
INSERT IGNORE will continue to use row-level logging and locking, because
implementing it would require the ability to roll back the latest row.
Since the undo log that we write only allows us to roll back the entire
statement, we cannot support INSERT IGNORE. We will introduce a
handler::extra() parameter HA_EXTRA_IGNORE_INSERT to indicate to storage
engines that INSERT IGNORE is being executed.
In many test cases, we add an extra record to the table, so that during
the 'interesting' part of the test, row-level locking and logging will
be used.
Replicas will continue to use row-level logging and locking until
MDEV-24622 has been addressed. Likewise, this optimization will be
disabled in Galera cluster until MDEV-24623 enables it.
dict_table_t::bulk_trx_id: The latest active or committed transaction
that initiated an insert into an empty table or partition.
Protected by exclusive table lock and a clustered index leaf page latch.
ins_node_t::bulk_insert: Whether bulk insert was initiated.
trx_t::mod_tables: Use C++11 style accessors (emplace instead of insert).
Unlike earlier, this collection will cover also temporary tables.
trx_mod_table_time_t: Add start_bulk_insert(), end_bulk_insert(),
is_bulk_insert(), was_bulk_insert().
trx_undo_report_row_operation(): Before accessing any undo log pages,
invoke trx->mod_tables.emplace() in order to determine whether undo
logging was disabled, or whether this is the first INSERT and we are
supposed to write a TRX_UNDO_EMPTY record.
row_ins_clust_index_entry_low(): If we are inserting into an empty
clustered index leaf page, set the ins_node_t::bulk_insert flag for
the subsequent trx_undo_report_row_operation() call.
lock_rec_insert_check_and_lock(), lock_prdt_insert_check_and_lock():
Remove the redundant parameter 'flags' that can be checked in the caller.
btr_cur_ins_lock_and_undo(): Simplify the logic. Correctly write
DB_TRX_ID,DB_ROLL_PTR after invoking trx_undo_report_row_operation().
trx_mark_sql_stat_end(), ha_innobase::extra(HA_EXTRA_IGNORE_INSERT),
ha_innobase::external_lock(): Invoke trx_t::end_bulk_insert() so that
the next statement will not be covered by table-level undo logging.
ReadView::changes_visible(trx_id_t) const: New accessor for the case
where the trx_id_t is not read from a potentially corrupted index page
but directly from the memory. In this case, we can skip a sanity check.
row_sel(), row_sel_try_search_shortcut(), row_search_mvcc(),
row_sel_try_search_shortcut_for_mysql(),
row_merge_read_clustered_index(): Check dict_table_t::bulk_trx_id.
row_sel_clust_sees(): Replaces lock_clust_rec_cons_read_sees().
lock_sec_rec_cons_read_sees(): Replaced with lower-level code.
btr_root_page_init(): Refactored from btr_create().
dict_index_t::clear(), dict_table_t::clear(): Empty an index or table,
for the ROLLBACK of an INSERT operation.
ROW_T_EMPTY, ROW_OP_EMPTY: Note a concurrent ROLLBACK of an INSERT
into an empty table.
This is joint work with Thirunarayanan Balathandayuthapani,
who created a working prototype.
Thanks to Matthias Leich for extensive testing.
* be strict in CREATE TABLE, just like in ALTER TABLE, because
CREATE TABLE, just like ALTER TABLE, can be rolled back for any engine
* but don't auto-convert warnings into errors for engine warnings
(handler::create) - this matches ALTER TABLE behavior
* and not when creating a default record, these errors are handled
specially (and replaced with ER_INVALID_DEFAULT)
* always issue a Note when a non-unique key is truncated, because it's
not a Warning that can be converted to an Error. Before this commit
it was a Note for blobs and a Warning for all other data types.
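A sketch of the key-truncation case (column length is illustrative;
assuming the usual 3072-byte InnoDB key length limit):

-- The non-unique key is silently truncated to the maximum key length,
-- so only a Note is issued, never an Error, even in strict mode:
CREATE TABLE t1 (a VARCHAR(5000), KEY (a)) ENGINE=InnoDB;
-- Note 1071: Specified key was too long; max key length is 3072 bytes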
..causes an error on the slave.
Cause: if the master doesn't have the frm file for the table,
the DROP TABLE code will call ha_delete_table_force() to drop the table
in all available storage engines.
The issue was that this code path didn't check the
HTON_TABLE_MAY_NOT_EXIST_ON_SLAVE flag for the storage engine,
and so did not add "... IF EXISTS" to the statement that's written
to the binary log. This can cause an error on the slave when it tries to
drop a table that's already gone.
Though this is an error message task, the problem was deep in the
`mysql_prepare_create_table` implementation. The problem is described as
follows:
1. `append_system_key_parts` was called before
`mysql_prepare_create_table`, though key name generation was done close to
the latest stage of the latter.
2. We can't move `append_system_key_parts` to the end, because system keys
should be appended before some checks are done.
3. If the checks from `append_system_key_parts` are moved to the end of
`mysql_prepare_create_table`, then some other inappropriate errors are
issued, like `ER_DUP_FIELDNAME`.
To have the key name specified in the error message, name generation
should be done before the checks, which resulted in more changes.
The final design for key initialization in `mysql_prepare_create_table`
is as follows. The initialization is done in three phases:
1. Calculate the total number of keys created with respect to keys
ignored. Allocate the KEY* buffer.
2. Generate unique names; calculate the total number of key parts.
Make early checks. Allocate the KEY_PART_INFO* buffer.
3. Initialize key parts and make the rest of the checks.
This failure was caused by several bugs:
- Someone had removed s3-slave-ignore-updates=1 from slave.cnf, which
caused the slave to remove files that the master was working on.
- A bug in ha_partition::change_partitions() that didn't reset m_new_file
in case of errors. This caused crashes in ha_maria::extra() as the
maria handler was called on files that were already closed.
- In ma_pagecache there was a bug: when one got a read error on a
big block (s3 block), it left the flag PCBLOCK_BIG_READ on for the page,
which caused an assert when the page was flushed.
- Flush all cached tables in case of an ignored ALTER TABLE.
Note that when merging code from 10.3 that fixes the partition bug, use
the code from this patch instead.
Changes to ma_pagecache.cc written or reviewed by Sanja
This commit fixed the problems with S3 after the "DROP TABLE FORCE" changes.
It also fixes all failing replication S3 tests.
A slave is delayed if it is trying to execute replicated queries on a
table that is already converted to S3 by the master later in the binlog.
Fixes for replication events on S3 tables for delayed slaves:
- INSERT and INSERT ... SELECT and CREATE TABLE are ignored but written
to the binary log. UPDATE & DELETE will be fixed in a future commit.
Other things:
- On slaves with --s3-slave-ignore-updates set, allow S3 tables to be
opened in read-write mode. This was done to be able to
ignore-but-replicate queries like insert. Without this change any
open of an S3 table failed with 'Table is read only' which is too
early to be able to replicate the original query.
- Errors are now printed if handler::extra() call fails in
wait_while_tables_are_used().
- Error messages for row changes are changed from HA_ERR_WRONG_COMMAND
to HA_ERR_TABLE_READONLY.
- Disable some maria_extra() calls for S3 tables, as these could cause
S3 tables to fail in some cases.
- Added missing thr_lock_delete() to ma_open() in case of failure.
- Removed from mysql_prepare_insert() the not needed argument 'table'.
- Remove row_start/row_end from keys in fix_create_like();
- Disable manual adding of implicit row_start/row_end to indexes on
CREATE TABLE. INVISIBLE_SYSTEM fields are not operable by the user;
- Fix a memory leak on allocation of Key_part_spec.
Fixes also:
MDEV-22674 Server crash in compare_bin ... restore_table_state_after_repair
The bug was that the 'can_enable_index' variable in MyISAM and Aria was
not properly set and reset for bulk insert.
Because of this, INSERT ... SELECT was trying to recreate indexes while
another thread was using them, causing crashes in the page cache.
Related to 7c2ba9e: ha_table_exists() is replaced by
dd_frm_type(). ha_table_exists() checked the existence of a share; when
that succeeded, the execution branch of ha_delete_table()
where tdc_remove_table() was called was entered. Now it is skipped because
dd_frm_type() returns TABLE_TYPE_UNKNOWN. Fix it by calling
tdc_remove_table() in this case as well.
An instant ADD/DROP/reorder column could create a dummy table
object with the wrong ROW_FORMAT when innodb_default_row_format
was changed between CREATE TABLE and ALTER TABLE.
prepare_inplace_alter_table_dict(): If we had promised that
ALGORITHM=INPLACE is supported, we must preserve the ROW_FORMAT.
dict_table_t::prepare_instant(): Add debug assertions to catch
ROW_FORMAT mismatch.
The rest of the changes are related to adding
Alter_inplace_info::inplace_supported to cache the return value of
handler::check_if_supported_inplace_alter().
A normal DROP TABLE with many tables continues after an error,
trying to drop as many tables as possible. But DROP TEMPORARY TABLE
was aborting on the first error. Change it to behave as DROP TABLE does.
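A minimal sketch of the new behaviour (table names are illustrative):

CREATE TEMPORARY TABLE t1 (a INT);
CREATE TEMPORARY TABLE t3 (a INT);
-- Reports ERROR 1051 (Unknown table 't2'), but t1 and t3 are dropped anyway:
DROP TEMPORARY TABLE t1, t2, t3;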
Don't do table discovery on DROP. DROP falls back to the "force"
approach when a table isn't found and will try to drop it in all
engines anyway. That is, trying to discover in all engines before
the drop is redundant and may be expensive.
This is a first step in moving drop table out of the handler.
todo: other methods that don't need an open table.
For now hton->drop_table is optional, for backward compatibility
reasons.
When converting a table (in this example test.t1) from S3 to another
engine, the following will be logged to the binary log:
DROP TABLE IF EXISTS test.t1;
CREATE OR REPLACE TABLE test.t1 (...) ENGINE=new_engine
INSERT rows to test.t1 in binary-row-log-format
The bug is that the above statements are logged one by one to the binary
log. This means that a fast slave, configured to use the same S3 storage
as the master, would be able to execute the DROP and CREATE from the
binary log before the master has finished the ALTER TABLE.
In this case the slave would ignore the DROP (as it's on an S3 table) but
would stop on the CREATE of the local table, as the table still exists in
S3. The REPLACE part will be ignored by the slave as it can't touch the
S3 table.
The fix is to ensure that all the above statements are written to the
binary log AFTER the table has been deleted from S3.
copy_data_between_tables() sets to->s->default_fields to 0 as a part
of the code disabling ON UPDATE actions for all old fields
(so ON UPDATE is enabled only for new fields during copying).
After the actual copying, copy_data_between_tables() did not restore
to->s->default_fields to the original value. As a result, the
TABLE_SHARE to->s was left in a wrong state after
copy_data_between_tables(), and a further open_table_from_share() using
this TABLE_SHARE did not populate TABLE::default_field, which further made
TABLE::evaluate_update_default_function() crash on access to a NULL
pointer.
Fix:
Changed copy_data_between_tables() to restore to->s->default_fields
to its original value after the copying loop.
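A hypothetical repro sketch of the affected scenario (all names are
illustrative):

CREATE TABLE t1 (a INT,
                 ts TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                    ON UPDATE CURRENT_TIMESTAMP);
INSERT INTO t1 (a) VALUES (1);
ALTER TABLE t1 ADD COLUMN b INT, ALGORITHM=COPY;  -- copy path clears default_fields
UPDATE t1 SET a = 2;  -- evaluating the ON UPDATE default could hit the NULL pointer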