- call current_schema::mark_as_changed() directly
- call state_change::mark_as_changed() directly
- replaced SESSION_TRACKER_CHANGED with dummy tracker
- replaced Session_tracker::mark_as_changed() with
State_tracker::mark_as_changed()
- hide and devirtualize original State_tracker::mark_as_changed(),
rename it to set_changed()
- all implementations of mark_as_changed() now check is_enabled() for
consistency
- no argument casts anymore
Shift-Reduce conflicts prevented parsing some queries with subqueries that
used set operations when the subqueries occurred in expressions or in IN
predicands.
The grammar rules for query expressions were transformed in order to avoid
these conflicts. The new grammar rules employ an idea taken from MySQL 8.0.
memmove() should be used instead of memcpy() for overlapping memory regions.
Overlapping memory regions are fine here, because the code simply removes
one element from an arbitrary position in an array.
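For illustration, a minimal self-contained sketch of the pattern (not the
actual server code): removing one element from the middle of an array shifts
an overlapping range, which is exactly what memmove() is specified to handle
and memcpy() is not.

  #include <cstddef>
  #include <cstdio>
  #include <cstring>

  // Remove the element at `pos` from the first `n` elements of `arr` by
  // shifting the tail left by one. Source and destination overlap, so
  // memmove() is required; memcpy() would be undefined behaviour here.
  static void remove_element(int *arr, size_t n, size_t pos)
  {
    memmove(arr + pos, arr + pos + 1, (n - pos - 1) * sizeof(int));
  }

  int main()
  {
    int a[]= {1, 2, 3, 4, 5};
    remove_element(a, 5, 2);          // drop the value 3
    for (size_t i= 0; i < 4; i++)
      printf("%d ", a[i]);            // prints: 1 2 4 5
    return 0;
  }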
* do not allow a versioned table to be without versioned (non-system) fields
* prohibit changing field versioning when removing table versioning
* handle CREATE...SELECT as well
* remove one level of virtual functions
* remove redundant checks
* remove an if() as the value is always known at compilation time
don't pretend that "DEFAULT expr" and "ON UPDATE DEFAULT NOW"
are "basically the same thing"
The MDEV-20265 commit e746f451d5
introduces DBUG_ASSERT(right_op == r_tbl) in
st_select_lex::add_cross_joined_table(), and that assertion would
fail in several tests that exercise joins. That commit was skipped
in this merge, and a separate fix of MDEV-20265 will be necessary in 10.4.
1. Revert incorrect treatment of m_needs_reopen;
2. Close a single instance of TABLE instead of all instances, since we
reopen only those that are marked for reopen.
Problem:
=======
DROP TABLE IF EXISTS was killed. The table still exists on
the master but the DDL was still logged.
Analysis:
=========
During the execution of the DROP TABLE command, the "ha_delete_table" call is
invoked to delete the table. If the query is killed at this point, the kill
is not handled within the code. This results in two issues:
1) A table which was not dropped also gets written into the binary log.
2) The code continues further upon receiving 'KILL QUERY'.
Fix:
===
Upon receiving the KILL command the query should stop its current execution.
Tables which were successfully dropped prior to the KILL command should be
included in the binary log.
Use DEBUG_SYNC to hang the execution at the interesting point,
and then kill and restart the server externally. This will work
also with Valgrind. DBUG_SUICIDE() causes Valgrind to hang,
and it could also cause uninteresting reports about memory leaks.
While we are at it, let us clean up innodb.innodb_bulk_create_index_debug
so that it will actually test the desired functionality also in future
versions (with instant ADD COLUMN and DROP COLUMN) and avoid
some unnecessary restarts.
We are adding two DEBUG_SYNC points for ALTER TABLE, because there were
none that would be executed right before ha_commit_trans().
1. Fix DBUG_ASSERT(!table->pos_in_locked_tables) in tc_release_table();
2. Fix access of prematurely freed MDL_ticket: don't close ticket if table was not closed;
3. Fix deadlock after erroneous ALTER.
mysql_alter_table() leaves a dirty table->m_needs_reopen in case of an error
exit, which is then incorrectly treated by mysql_lock_tables().
Problem:
=======
A failed CREATE OR REPLACE TEMPORARY TABLE statement which dropped the table
but failed at a later stage of creation of the temporary table is not written
to the binary log in row based replication. This causes the slave to diverge.
Analysis:
========
CREATE OR REPLACE statements work as shown below.
CREATE OR REPLACE TABLE table_name (a int);
is basically the same as:
DROP TABLE IF EXISTS table_name;
CREATE TABLE table_name (a int);
Hence every CREATE OR REPLACE TABLE command which dropped the table should be
written to the binary log, even when the following CREATE TABLE part fails.
In order to achieve this, during the execution of the CREATE OR REPLACE
command, when a table is dropped the 'thd->log_current_statement' flag is
set. When table creation results in an error within the 'mysql_create_table'
code, the error handling part looks for this flag. If it is set, the failed
CREATE OR REPLACE statement is written into the binary log in spite of the
error. This ensures that the slave doesn't diverge from the master. In case
of row based replication the error handling code returns very early if the
table is of type temporary. This is done based on the assumption that
temporary tables are not replicated in row based replication.
It fails to handle the cases where a temporary table was created as part of
statement based replication at an earlier stage and the binary log format was
changed to row because of an unsafe statement. In this case when a CREATE OR
REPLACE statement is executed on this temporary table it will be dropped, but
the query will not be written to the binary log. Hence the slave diverges.
Fix:
===
In the error handling code, check the return status of the create table
operation. If it is successful, and replication mode is row based, and the
table is of type temporary, then return. Otherwise proceed further to the
code which checks the thd->log_current_statement flag and does appropriate
logging.
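A condensed sketch of that corrected decision (hypothetical helper; the flag
names are taken from the description above, not the actual server code):

  #include <cstdio>

  // Returns true if a failed CREATE OR REPLACE statement must still be
  // written to the binary log.
  static bool must_log_failed_create(bool create_succeeded,
                                     bool row_based_replication,
                                     bool is_temporary_table,
                                     bool log_current_statement)
  {
    // Skip logging early only when the CREATE part actually succeeded:
    // a successful row-based CREATE of a temporary table is not replicated.
    if (create_succeeded && row_based_replication && is_temporary_table)
      return false;
    // Otherwise, if the OR REPLACE part dropped a table, the statement
    // must reach the binary log even though it failed.
    return log_current_statement;
  }

  int main()
  {
    // A failed CREATE that dropped a pre-existing temporary table: must log.
    printf("%d\n", must_log_failed_create(false, true, true, true)); // 1
    return 0;
  }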
The MDEV-5589 commit set up a policy to skip DROP TEMPORARY TABLE binary
logging in case the target table has not been "CREATEed" in binlog (no CREATE
Query-log-event was logged into the binary log).
It turns out that
1. the rule did not cover a non-existing table DROPped with the IF EXISTS
clause. The logged-create knowledge for the non-existing one does not even
need the MDEV-5589 patch, and
2. connection close disobeys it and triggers automatic DROP-IF-EXISTS
binlogging.
Either 1 or 2, or both, are also responsible for the unexpected binlog
records observed in MDEV-17863, actually rendering the referred
@@global.read_only irrelevant as far as the described stored procedure
definition *and* the ROW binlog-format are concerned.
Analysis
========
Point in time recovery using mysqlbinlog containing queries
operating on temporary tables results in an error.
While writing the query log event in the binary log, the
thread id used for execution of the DROP TABLE and DELETE commands
was incorrect. The thread variable 'thread_specific_used'
is used to determine whether a specific thread id is to be used
while executing the statements, i.e. using 'SET
@@session.pseudo_thread_id'. This variable was not set
correctly for the DROP TABLE query and was never set for the DELETE
query. The thread id is important for temporary tables
since the tables are session specific. DROP TABLE and DELETE
queries executed using a wrong thread id resulted in errors
while applying the queries generated by the mysqlbinlog utility.
Fix
===
Set the 'thread_specific_used' THD variable for DROP TABLE and
DELETE queries.
ReviewBoard: 21833
Problem:
========
There is a possibility of more concurrent DMLs while the ALTER TABLE thread
is waiting to upgrade to MDL_EXCLUSIVE before the commit phase.
In the commit phase, InnoDB acquires dict_operation_lock while it already
holds MDL_EXCLUSIVE on the table. After that, InnoDB applies the concurrent
DML logs.
This could lead to blocking of the following things:
1) DML on the particular table (due to MDL_EXCLUSIVE on the table)
2) InnoDB DDLs (due to dict_operation_lock)
3) Purge thread, stats thread, the master thread (due to dict_operation_lock)
Fix:
====
Apply the concurrent DML logs in the commit phase, but before acquiring
dict_operation_lock. This makes sure that (2) and (3) can't be blocked for a
long time.
Make Field::is_equal() const and return bool, as it's a naturally fitting
type for it. Also its argument was narrowed to Column_definition.
InnoDB can change the type of some columns by itself. InnoDB-specific code
used to reside in the Field_xxx::is_equal() methods. Now the engine-specific
logic has moved to the virtual methods
handler::can_convert_{string,varstring,blob,geom}.
These methods are called by Field::can_be_converted_by_engine(), which
implements a double dispatch pattern.
Some InnoDB-specific code still resides in compare_keys_but_name(). It should
someday be moved to handler::compare_key_parts(...) or similar.
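A minimal sketch of that double dispatch shape (class and method names mirror
the description above, but the bodies and the example engine are made up):

  #include <cstdio>

  struct handler;                      // engine side, forward declaration

  struct Field                         // server side
  {
    virtual ~Field() {}
    // First dispatch: virtual on the field type.
    virtual bool can_be_converted_by_engine(const handler &h) const= 0;
  };

  struct handler
  {
    virtual ~handler() {}
    // Second dispatch: virtual on the engine, one method per field family.
    virtual bool can_convert_string(const Field *) const { return false; }
    virtual bool can_convert_blob(const Field *) const { return false; }
  };

  struct Field_string : Field
  {
    bool can_be_converted_by_engine(const handler &h) const override
    { return h.can_convert_string(this); }
  };

  struct ha_example : handler          // a made-up engine for illustration
  {
    bool can_convert_string(const Field *) const override { return true; }
  };

  int main()
  {
    Field_string f;
    ha_example h;
    printf("%d\n", f.can_be_converted_by_engine(h));   // 1
    return 0;
  }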
IS_EQUAL_WITH_REINTERPRET_COMPATIBLE_CHARSET,
IS_EQUAL_WITH_REINTERPRET_COMPATIBLE_CHARSET_BUT_COLLATE: both were removed.
IS_EQUAL_NO, IS_EQUAL_YES: not needed now and should be removed
along with the deprecated handler::check_if_incompatible_data().
HA_EXTENDED_TYPES_CONVERSION: was removed, as such logic is no longer needed
by server code.
ALTER_COLUMN_EQUAL_PACK_LENGTH: was renamed to the more generic
ALTER_COLUMN_TYPE_CHANGE_BY_ENGINE.
The patch is about two cases:
1) On some collation changes it's possible to rebuild only secondary indexes
2) For non-indexed columns the collation can be changed INSTANTly
Implemented mostly in Field_{string,varstring,blob}::is_equal().
Make this method return how exactly the collation differs.
This information is later used by fill_alter_inplace_info() to pass
correct info to the engine.
There were two separate problems:
- The Aria pagecache didn't properly handle re-reading of blocks
  that have given errors before (this triggered an assert)
- Temporary tables that were opened several times were
  not properly closed in ALTER, REPAIR or OPTIMIZE TABLE
Other things:
- Added a couple of asserts that will make it easier to
  find problems like this in the future.
When compiling with -DEXTRA_DEBUG and running main.stat_tables_missing,
one got the warning:
Warning: Table: ./mysql/column_stats is open on rename old_table
This happened because rename_table_in_stat_tables() re-opens the
table that was to be renamed.
Fixed by moving the update of stat tables to after all renames have been
made.
Removed unneeded table renames when doing ALTER TABLE when the engine
changes and both of the following are true:
- Either the new or the old engine does not store the table in files
- Neither the old nor the new engine uses files from another engine
We also skip renames when ALTER TABLE does an explicit rename.
This improves performance, especially for engines where rename is
a slow operation (like the upcoming S3 engine).
Reason for the change was that ha_notify_table_changed() was done
after table open when the .frm had been replaced, which caused failures
in engines that check on open whether the .frm matches the engine's table
definition.
Other changes:
- Removed an unneeded open/close call at the end of inplace ALTER TABLE.
Some tests that depended on the table being in the table cache after
ALTER TABLE had to be updated.
A sequel to 9180e86 and 149b754.
ALTER TABLE ... ADD FOREIGN KEY may crash if parent table is updated
concurrently.
Block FK parent table updates even earlier, before intermediate child
table is created.
Use proper charset info for my_casedn_str() and don't update the original
identifiers, so that lower_case_table_names == 2 is honoured.
make the live checksum be returned in handler::info(),
and the slow table-scan checksum be calculated in handler::checksum().
part of
MDEV-16249 CHECKSUM TABLE for a spider table is not parallel and saves all data in memory in the spider head by default
Just rename the index in the data dictionary and in the InnoDB cache when
possible. Introduce ALTER_INDEX_RENAME for that purpose so that engines can
optimize such an operation.
Unused code inside MYSQL_RENAME_INDEX macro blocks was removed.
compare_keys_but_name(): compare index definitions except for index names
Alter_inplace_info::rename_keys,
ha_innobase_inplace_ctx::rename_keys: vectors of renamed indexes
fill_alter_inplace_info(): fills Alter_inplace_info::rename_keys
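A rough sketch of how such a rename list can be collected by comparing
definitions rather than names (toy types; the real fill_alter_inplace_info()
works on KEY structures and uses compare_keys_but_name()):

  #include <cstdio>
  #include <string>
  #include <utility>
  #include <vector>

  // Toy stand-in for a key definition: everything except the name.
  struct KeyDef { std::string name; std::string columns; };

  // Collect keys whose definition is unchanged but whose name differs.
  static std::vector<std::pair<std::string, std::string>>
  collect_renamed_keys(const std::vector<KeyDef> &old_keys,
                       const std::vector<KeyDef> &new_keys)
  {
    std::vector<std::pair<std::string, std::string>> renames;
    for (const KeyDef &o : old_keys)
      for (const KeyDef &n : new_keys)
        if (o.columns == n.columns && o.name != n.name)
          renames.emplace_back(o.name, n.name);
    return renames;
  }

  int main()
  {
    auto r= collect_renamed_keys({{"idx_old", "(a,b)"}},
                                 {{"idx_new", "(a,b)"}});
    printf("%s -> %s\n", r[0].first.c_str(), r[0].second.c_str());
    return 0;
  }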
Moved rea_create_table() to the sole caller.
Also ha_create_partitioning_metadata(CHF_CREATE_FLAG) does cleanup on
error now.
Part of MDEV-17805 - Remove InnoDB cache for temporary tables.
Do not register intermediate tables created by inplace ALTER TABLE in
THD::temporary_tables.
Regular ALTER TABLE doesn't create .frm for temporary and discoverable
tables anymore. For inplace ALTER TABLE moved .frm creation to
create_table_for_inplace_alter().
Removed open_in_engine argument of create_and_open_tmp_table() and
open_temporary_table(): it became unused after this patch.
Part of MDEV-17805 - Remove InnoDB cache for temporary tables.
This check was introduced in 602a222 and then became redundant in ad1553e,
where we attempt to open a table even for non-copy algorithms.
Added missing test case from 602a222.
Part of MDEV-17805 - Remove InnoDB cache for temporary tables.
CREATE TEMPORARY TABLE locks the SE plugin 6 times. 5 of these locks are
released by the end of the statement, and only the 1 acquired by
init_from_binary_frm_image() / plugin_lock() remains.
The lock removed in this patch was clearly redundant.
Part of MDEV-17805 - Remove InnoDB cache for temporary tables.
The MDEV-17262 commit 26432e49d3
was skipped. In Galera 4, the implementation would seem to require
changes to the streaming replication.
In the tests archive.rnd_pos and main.profiling, use disable_ps_protocol
for SHOW STATUS and SHOW PROFILE commands until MDEV-18974
has been fixed.
attempt to drop BLOB with long index
Restore the long table->key_info, so that in the case of an unsuccessful
ALTER, table->key_info has a correct value. In the case of a successful
ALTER the old table is flushed, which is why we don't see this error after
a successful ALTER.
ALTER TABLE ... ADD FOREIGN KEY may trigger assertion failure when
it has LOCK=EXCLUSIVE clause or concurrent FLUSH TABLES is being
executed.
In both cases the table being altered is marked as flushed, which forces a
subsequent attempt to open the parent table to re-open it. That in turn is
not allowed while a transaction is running.
Rather than opening parent table, just take appropriate MDL lock.
Also removed table_already_fk_prelocked() check: MDL itself has much
better methods to handle duplicate locks. E.g. the former won't acquire
MDL_SHARED_NO_WRITE if it already has MDL_SHARED_READ.
when auto-adding a virtual LONG_UNIQUE_HASH_FIELD, fill in
a Virtual_column_info for it, so that fill_alter_inplace_info()
would know we're adding a virtual field (ALTER_ADD_VIRTUAL_COLUMN).
sql_field->key_length was 0 for blob fields when a field was
being added, but was Field_blob::character_octet_length() on
subsequent ALTER TABLEs (when the Field object in the old table
already existed). This means mysql_prepare_create_table() couldn't
reliably detect if the keyseg was a prefix.
This patch implements an engine independent unique hash index.
Usage: a unique HASH index can be created automatically for a
blob/varchar/text column whose key length > handler->max_key_length(),
or it can be explicitly specified.
Automatic Creation:-
Create TABLE t1 (a blob unique);
Explicit Creation:-
Create TABLE t1 (a int, unique(a) using HASH);
Internal KEY_PART Representations:-
Long unique key_info will have 2 representations.
(Let's understand this with an example: create table t1(a blob, b blob, unique(a, b));)
1. User Given Representation: the key_info->key_part array will be similar to
what the user has defined. So in the case of the example it will have 2 key_parts (a, b).
2. Storage Engine Representation: in this case there will be only one key_part and it will
point to the HASH_FIELD. This key_part will always be after the user-defined key_parts.
So:- User Given Representation [a] [b] [hash_key_part]
key_info->key_part ----^
Storage Engine Representation [a] [b] [hash_key_part]
key_info->key_part ------------^
table->s->key_info will have the User Given Representation, while table->key_info
will have the Storage Engine Representation. The representations can be converted
into each other by calling the re/setup_keyinfo_hash functions.
Working:-
1. When the user specifies HASH_INDEX or key_length > handler->max_key_length(),
one extra vfield is added in mysql_prepare_create_table (for each long unique key),
and key_info->algorithm is set to HA_KEY_ALG_LONG_HASH.
2. In init_from_binary_frm_image, values for the hash key_part are set
(like fieldnr, field and flags).
3. In parse_vcol_defs, HASH_FIELD->vcol_info is created. Item_func_hash is used with
a list of Item_fields. When an explicit length is given by the user, Item_left is
used to concatenate the Item_field values.
4. In ha_write_row/ha_update_row, check_duplicate_long_entry_key is called, which
creates the hash key from table->record[0] and then calls ha_index_read_map; if a
duplicate hash is found, we compare the result field by field.
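Step 4 is the classic hash-then-verify pattern. A self-contained sketch
(std::hash and a multimap stand in for Item_func_hash and the index read; the
real code compares table->record[0] with the row found via ha_index_read_map):

  #include <cstdio>
  #include <string>
  #include <unordered_map>

  // Toy long-unique check: look up rows by hash, then confirm a real
  // duplicate by comparing the full value, since hashes may collide.
  struct LongUniqueIndex
  {
    std::unordered_multimap<size_t, std::string> by_hash;

    bool insert_unique(const std::string &row)
    {
      size_t h= std::hash<std::string>{}(row);
      auto range= by_hash.equal_range(h);
      for (auto it= range.first; it != range.second; ++it)
        if (it->second == row)          // "field by field" comparison
          return false;                 // genuine duplicate
      by_hash.emplace(h, row);          // new key, or a mere hash collision
      return true;
    }
  };

  int main()
  {
    LongUniqueIndex idx;
    printf("%d\n", idx.insert_unique("abc"));   // 1: inserted
    printf("%d\n", idx.insert_unique("abc"));   // 0: duplicate
    return 0;
  }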
After FLUSH PRIVILEGES remember if the connection started under
--skip-grant-tables and keep it all-powerful, not a lowly anonymous.
One could use this connection to reset passwords as needed.
Also fix a crash in SHOW CREATE USER
If we instantly change the size of a fixed-length field
and treat it as kind-of variable-length, then we will need
conversions between old column values and new ones.
I tried adding such a conversion to row_build(), but then I
noticed that more conversions would be needed, because
old values still appeared in a freshly rebuilt secondary index,
causing a mismatch when trying to search with the correct
longer value that was converted in my provisional fix to row_build().
So, we will revert the essential part of
MDEV-15563: Instant ROW_FORMAT=REDUNDANT column extension
(commit 22feb179ae), but not
remove any tests.
This was developed by Aleksey Midenkov based on my design.
In the original InnoDB storage format (that was retroactively named
ROW_FORMAT=REDUNDANT in MySQL 5.0.3), the length of each index field
is stored explicitly.
Because of this, we can and now will allow instant conversion from
VARCHAR to CHAR or VARBINARY to BINARY of equal or greater size,
as well as instant conversion of TINYINT to SMALLINT to MEDIUMINT
to INT to BIGINT (while not changing between signed and unsigned).
Theoretically, we could allow changing from an unsigned integer to
a bigger unsigned integer, as well as changing CHAR to VARCHAR, but
that would require additional metadata and conversions whenever
reading old records.
Field_str::is_equal(), Field_varstring::is_equal(), Field_num::is_equal():
Return the new result IS_EQUAL_PACK_LENGTH_EXT if the table advertises
HA_EXTENDED_TYPES_CONVERSION capability and we are considering the
above-mentioned conversions.
ALTER_COLUMN_EQUAL_PACK_LENGTH_EXT: A new ALTER TABLE flag, similar
to ALTER_COLUMN_EQUAL_PACK_LENGTH but requiring conversions when
reading the data. The Field::is_equal() result IS_EQUAL_PACK_LENGTH_EXT
will map to this flag.
dtype_get_fixed_size_low(): For BINARY, CHAR and integer columns
in ROW_FORMAT=REDUNDANT, return 0 (variable length) from now on.
dtype_get_sql_null_size(): Keep returning the current size for
BINARY, CHAR and integer columns, so that in ROW_FORMAT=REDUNDANT
it will remain possible to update in place between NULL and NOT NULL
values.
btr_index_rec_validate(): Relax a CHECK TABLE length check for
ROW_FORMAT=REDUNDANT tables.
btr_cur_instant_init_low(): No longer trust fixed_len
for ROW_FORMAT=REDUNDANT tables.
We cannot rely on fixed_len anymore, because the record can have a shorter
length from before the instant extension. Note that importing such a
tablespace into earlier MariaDB versions produces ER_TABLE_SCHEMA_MISMATCH
when using a .cfg file.
renaming columns in a CHECK constraint during ALTER TABLE
taints the original TABLE and requires m_need_reopen=1.
In this case, though, renaming was redundant, so just don't do it.
remove TABLE_SHARE::error_table_name() and TABLE_SHARE::orig_table_name
(that was allocated in a wrong memroot in this bug).
instead, simply set TABLE_SHARE::table_name correctly.
Analysis:
========
Increasing the length of an indexed varchar column is not an instant
operation for InnoDB.
Fix:
===
- Introduce the new handler flag 'Alter_inplace_info::ALTER_COLUMN_INDEX_LENGTH' to
  indicate that the index length differs due to a change of the column length.
- InnoDB treats the ALTER_COLUMN_INDEX_LENGTH flag as an instant operation.
This is a port of a MySQL fix.
commit 913071c0b16cc03e703308250d795bc381627e37
Author: Nisha Gopalakrishnan <nisha.gopalakrishnan@oracle.com>
Date: Wed May 30 14:54:46 2018 +0530
BUG#26848813: INDEXED COLUMN CAN'T BE CHANGED FROM VARCHAR(15)
TO VARCHAR(40) INSTANTANEOUSLY
Bootstrap in a separate thread was introduced in 746f0b3b7 to work around
the small OS/2 stack size. OS/2 support was discontinued in 2006, and modern
operating systems have a default stack size a few times larger than the
default thread_stack, and it is tunable.
Aim is to reduce usage of LOCK_thread_count and COND_thread_count.
Part of MDEV-15135.
Problem:
========
The server fails to notify the engine, by not setting ADD_PK_INDEX and
DROP_PK_INDEX, when there is
i) a change in the candidate for the primary key, or
ii) a new candidate for the primary key.
Fix:
====
The server sets ADD_PK_INDEX and DROP_PK_INDEX while doing the ALTER for the
above problematic cases.
The error message was modified.
The TABLE_SHARE::error_table_name() implementation was taken from 10.3,
to be used as the name of the table in this message.
The ALTER statement changed the THD structure by setting the value to
FIELD_CHECK_WARN and then not resetting it back. This led ANALYZE to throw
a warning which it previously didn't.
Part of MDEV-5336 Implement LOCK FOR BACKUP
- Changed check of Global_only_lock to also include BACKUP lock.
- We store the latest MDL_BACKUP_DDL lock in thd->mdl_backup_ticket to be
  able to downgrade the lock during copy_data_between_tables()
Part of MDEV-5336 Implement LOCK FOR BACKUP
- Added new locks to MDL_BACKUP for all stages of backup locks and
a new MDL lock needed for backup stages.
- Renamed MDL_BACKUP_STMT to MDL_BACKUP_DDL
- flush_tables() takes a new parameter that decides what should be flushed.
- InnoDB, Aria (transactional tables with checksums), Blackhole, Federated
and Federatedx tables are marked to be safe for online backup. We are
using MDL_BACKUP_TRANS_DML instead of MDL_BACKUP_DML locks for these
which allows any DML's to proceed for these tables during the whole
backup process until BACKUP STAGE COMMIT which will block the final
commit.
- Call delete_statistics_tables() after lock_table_names in drop tables.
This avoids a deadlock issue with FTWRL and future backup locks.
- Added some missing clear_error()
- Ensure we don't clear error caused by the caller
- Updated function comments
- The old code used the original create_info from the lex, not the new one
  that includes more information (like OPT_OR_REPLACE).
  The bug was not discovered as the code in lock_table_names() only
  checked for OPT_OR_REPLACE in case of timeout errors.
  As lock_table_names() will be fixed as part of BACKUP STAGEs, there
  are no changes in lock_table_names() in this commit.
- Removed also the 'temporary' copy of statement flags to thd for
  lock_table_names()
We don't have many bits left, so there is no need to add another
InnoDB-specific flag. Instead, we say that HA_REQUIRE_PRIMARY_KEY does not
apply to SEQUENCE. Meaning, if the engine declares
HA_CAN_TABLES_WITHOUT_ROLLBACK (required for SEQUENCE) it *must* support
tables without a primary key.
Fixed by adding the table flag HA_WANTS_PRIMARY_KEY, which is like
HA_REQUIRE_PRIMARY_KEY but tells the SQL upper layer that the storage engine
can internally handle tables without primary keys (for example for
sequences or through user variables).
Locked_tables_list::unlock_locked_tables
Similarly to regular DROP TABLE, don't leave locked tables mode if CREATE OR
REPLACE dropped a temporary table but failed to create a new one.
The problem is that there's no tracking of which temporary table was "locked"
by LOCK TABLES.
ALTER TABLE locks the table with TL_READ_NO_INSERT, to prevent the
source table modifications while it's being copied. But there's an
indirect way of modifying a table, via cascade FK actions.
After previous commits, an attempt to modify an FK parent table
will cause FK children to be prelocked, so the table-being-altered
cannot be modified by a cascade FK action, because ALTER holds a
lock and prelocking will wait.
But if a new FK is being added by this very ALTER, then the target
table is not locked yet (it's a temporary table). So, we have to
lock FK parents explicitly.
Problem was that the number of NULL bits was recorded wrongly in the
.frm file, because more fields could be marked NOT_NULL after the
number of not-null fields had been recorded.
Fixed by copying the test for virtual fields from prepare_create_field().
The code change (though not the test) doesn't have to be merged to 10.3,
as this is fixed there.
Make all system tables in mysql directory of type
engine=Aria
Privilege tables are using transactional=1
Statistical tables are using transactional=0, to allow them
to be quickly updated with low overhead.
Help tables are also using transactional=0 as these are only
updated at init time.
Other changes:
- The Aria storage engine is now a required engine
- Updated the comment for Aria tables to reflect their new usage
- Fixed that _ma_reset_trn_for_table() removes an unlocked table
  from the transaction table list. This was needed to allow one
  to lock and unlock system tables separately from other
  tables, for example when reading a procedure from mysql.proc
- Don't give a warning when using transactional=1 for engines
  that are using transactions. This is both logical and also
  avoids warnings/errors when doing an alter of a privilege
  table to InnoDB.
- Don't abort on warnings from ALTER TABLE for changes that
would be accepted by CREATE TABLE.
- Newly created Aria transactional tables are marked as not movable
  (as they include create_rename_lsn).
- bootstrap.test was changed to kill the original server, as one
  can no longer have two servers started at the same time on the same
  data directory and data files.
- Disable maria.small_blocksize, as one can no longer change the
  Aria block size after the system tables are created.
- Speed up creation of help tables by using lock tables.
- wsrep_sst_resync now also copies Aria redo logs.
- Adding a helper class Sec6 to store (neg,seconds,microseconds)
- Adding a helper class VSec6 (Sec6 with a flag for "IS NULL");
  a minimal sketch of these two classes follows this list of changes
- Wrapping related functions as methods of Sec6:
* number_to_datetime()
* number_to_time()
* my_decimal2seconds()
* Item::get_seconds()
* A big piece of code in Item_func_sec_to_time::get_date()
- Using the new classes in places where second-to-temporal
conversion takes place:
* Field_timestamp::store(double)
* Field_timestamp::store(longlong)
* Field_timestamp_with_dec::store_decimal(my_decimal)
* Field_temporal_with_date::store(double)
* Field_temporal_with_date::store(longlong)
* Field_time::store(double)
* Field_time::store(longlong)
* Field_time::store_decimal(my_decimal)
* Field_temporal_with_date::store_decimal(my_decimal)
* get_interval_value()
* Item_func_sec_to_time::get_date()
* Item_func_from_unixtime::get_date()
* Item_func_maketime::get_date()
This change simplifies these methods and functions a lot.
- Warnings are now sent at VSec6 initialization time, when the source
data is available in its original data type representation.
If Sec6::to_time() or Sec6::to_datetime() truncate data again during
conversion to MYSQL_TIME, they send warnings, but only if no warnings
were sent during VSec6 initialization. This helps prevent double warnings.
The call for val_str() in Item_func_sec_to_time::get_date() is not
needed any more, so it's removed. This change actually fixes the problem.
As a good side effect, FROM_UNIXTIME() and MAKETIME() now also send warnings
in case the seconds argument is out of range. Previously these
functions returned NULL silently.
- Splitting the code in the global function make_truncated_value_warning()
into a number of methods THD::raise_warning_xxxx().
This was needed to reuse the logic that chooses between:
* ER_TRUNCATED_WRONG_VALUE
* ER_WRONG_VALUE
* ER_TRUNCATED_WRONG_VALUE_FOR_FIELD
for non-temporal data types (Sec6).
- Removing:
* Item::get_seconds()
* number_to_time_with_warn()
as this code now resides inside methods of Sec6.
- Cleanup (changes that are not directly related to the fix):
* Removing calls for field_name_or_null() and passing NULL instead
in Item_func_hybrid_field_type::get_date_from_{int|real}_op,
because Item_func_hybrid_field_type::field_name_or_null()
always returns NULL
* Replacing a number of calls for make_truncated_value_warning()
to calls for THD::raise_warning_xxx(). In these places
we know that the execution went through a certain
branch of make_truncated_value_warning(),
(e.g. the exact error code is known, or field name is always NULL,
or field name is always not-NULL). So calls for the entire
make_truncated_value_warning() after splitting are not necessary.
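As promised above, a minimal sketch of the Sec6/VSec6 shape (only the data
layout; the real classes also carry the truncation state and warning logic
described above):

  #include <cstdio>

  // Toy Sec6: a signed seconds+microseconds value, decoupled from its
  // original data type (int, double or decimal).
  struct Sec6
  {
    bool neg;
    unsigned long long sec;
    unsigned long usec;

    // Construct from a double, the way Field_xxx::store(double) feeds it.
    explicit Sec6(double d)
    {
      neg= d < 0;
      double a= neg ? -d : d;
      sec= (unsigned long long) a;
      usec= (unsigned long) ((a - (double) sec) * 1000000);
    }
  };

  // Toy VSec6: Sec6 plus an "IS NULL" flag.
  struct VSec6 : Sec6
  {
    bool is_null;
    explicit VSec6(double d) : Sec6(d), is_null(false) {}
  };

  int main()
  {
    VSec6 v(-1.25);
    printf("neg=%d sec=%llu usec=%lu null=%d\n",
           v.neg, v.sec, v.usec, v.is_null);
    return 0;
  }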
A MEMORY table could be renamed into a non-existent database.
rename() is documented to return ENOENT when the source file does not
exist OR when the target directory does not exist. A nonexistent source .frm
file is ok (the table can still exist in the engine), a nonexistent target
directory is not.
Make my_rename() use ENOTDIR for the latter case. Make RENAME TABLE
issue an appropriate error ("unknown database" instead of "unknown table").
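A sketch of the errno disambiguation (illustrative wrapper, not the actual
my_rename(); POSIX rename() and stat() only): when rename() fails with
ENOENT, check whether the target directory exists to tell "unknown table"
apart from "unknown database".

  #include <cerrno>
  #include <cstdio>
  #include <string>
  #include <sys/stat.h>

  // Map "target directory missing" to ENOTDIR, so callers can
  // distinguish it from "source file missing" (plain ENOENT).
  static int my_rename_sketch(const char *from, const char *to)
  {
    if (rename(from, to) == 0)
      return 0;
    if (errno == ENOENT)
    {
      std::string dir(to);                       // strip the file name
      size_t slash= dir.find_last_of('/');
      if (slash != std::string::npos)
      {
        dir.resize(slash);
        struct stat st;
        if (stat(dir.c_str(), &st) != 0 || !S_ISDIR(st.st_mode))
          errno= ENOTDIR;       // "unknown database", not "unknown table"
      }
    }
    return -1;
  }

  int main()
  {
    if (my_rename_sketch("./t1.frm", "./no_such_db/t1.frm"))
      printf("%s\n", errno == ENOTDIR ? "unknown database" : "unknown table");
    return 0;
  }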
* ignore CHECK constraint for historical rows;
* FOREIGN KEY test case.
TODO:
MDEV-16301 IB: use real table name for error messages on ALTER
Closes tempesta-tech/mariadb#491
Closes #748
NULL values when there is no DEFAULT
Copy and inplace algorithms work similarly for
NULL to NOT NULL conversion in the following cases:
(1) strict sql mode: should give an error.
(2) non-strict sql mode: should give warnings alone.
(3) ALTER IGNORE TABLE command: should give warnings alone.
Changing columns WITH/WITHOUT SYSTEM VERSIONING doesn't require reading data
at all. Thus it should be an instant operation.
The patch also fixes a bug where ALTER_COLUMN_UNVERSIONED wasn't passed to
InnoDB to change its internal structures.
change_field_versioning_try(): apply WITH/WITHOUT SYSTEM VERSIONING
change in SYS_COLUMNS for one field.
change_fields_versioning_try(): apply WITH/WITHOUT SYSTEM VERSIONING
change in SYS_COLUMNS for every changed field in a table.
change_fields_versioning_cache(): update cache for versioning property
of columns.
don't read all columns from the source table, but only
those that will be inserted into the target table
and cannot be generated by the target table.
test case is in the following commits
Fixed by deleting the sequence if we were not able to initialize it.
I also noticed that we didn't always set the error message when
check_killed(), which could lead to aborted queries without the error
being properly set. Fixed by setting the error message by default if
check_killed() noticed that kill had been called.
This allowed me to remove a lot of calls to thd->send_kill_message().
Crash happened because in discovery, thd->work_part_info was not properly
reset before execution.
Fixed by resetting it before calling execute alter table, create table or
mysql_create_frm_image.
Crash happened when deleting all columns that were part of a CHECK
constraint. The bug was that the read map for the source table was used
when checking the CHECK constraint, and it was not properly reset
in copy_data_between_tables().
Problem was that copy_data_between_tables() didn't do proper
cleanup in case of failures:
- the copy object was not properly freed
- end_bulk_insert() was not called
- mysql_trans_prepare_alter_copy_data() set THD->transaction.on to
  false, which was not properly restored
The last part caused a crash in Aria, as Aria depends on THD
being correct.
Other things:
- Reset info->switched_transactional after usage (safety)
- Reset bulk_insert_single_undo (safety)
Part one, non-temporary tables.
Renaming a column can make destructive changes
to the TABLE. This TABLE cannot be used anymore
and needs to be reopened even if ALTER TABLE
was aborted with an error.
10.3+ fix
On ALTER TABLE, if a non-changed column default
might need a charset conversion, it must
be a blob. Because blob defaults are stored
as expressions, while for any other type
a basic_const_item() will be in the record,
so it'll have the correct charset and won't need
converting. For the same reason it makes no
sense to convert blob defaults (and it's
unsafe, see MDEV-15746).
test case is already in main/ps.test
don't try to convert a default value string from a user character set
into a column character set, if this particular default value string did
not come from the user at all (that is, if it's an ALTER TABLE and the
default value string is the *old* default value of the unaltered
column).
This used to crash, because old defaults are allocated on the old
table's memroot, which is freed mid-ALTER when the old table is closed.
So thd->rollback_item_tree_changes() at the end of the ALTER was writing
into the freed memory.
Introduced new ALTER algorithm types called NOCOPY and INSTANT for
inplace ALTER operations.
NOCOPY - The algorithm refuses any ALTER operation that would
rebuild the clustered index. It is a subset of the INPLACE algorithm.
INSTANT - The algorithm allows any ALTER operation that would
modify only metadata. It is a subset of the NOCOPY algorithm.
Introduced a new variable called alter_algorithm. The values are
DEFAULT(0), COPY(1), INPLACE(2), NOCOPY(3), INSTANT(4).
Added a message to deprecate the old_alter_table variable and make it an
alias for the alter_algorithm variable.
The alter_algorithm variable for the slave is always set to default.
As thd->alloc() and new automatically call my_error(ER_OUTOFMEMORY),
there is no reason to call mem_alloc_error().
Other things:
- Fixed a bug in mysql_unpack_partition() where lex.part_info was
  changed even when it was a null pointer
ALTER TABLE ... ADD PARTITION modifies the open TABLE structure,
and sets table->need_reopen=1 to reset these modifications
in case of an error.
But under LOCK TABLES the table doesn't get reopened, despite need_reopen.
Fixed by reopening need_reopen tables under LOCK TABLES.
numerous fixes for CREATE ... SELECT with system versioning:
In CREATE ... SELECT the table is created based on the result set,
field properties do not count. That is
* field invisibility is *not* copied over
* AS ROW START/END is *not* copied over
* the history is *not* copied over
* system row_start/row_end fields can *not* be created from the SELECT part
MDEV-14831 CREATE OR REPLACE SEQUENCE under LOCK TABLE corrupts the
sequence, causes ER_KEY_NOT_FOUND
The problem was that sequence_insert didn't properly handle the case
where there was a LOCK TABLES while creating the sequence.
Fixed by opening the sequence table, for inserting the first record, in
a new environment without any other open tables.
Found also a bug in Locked_tables_list::reopen_tables() where the lock
structure for the new tables was allocated in THD::mem_root, which causes
crashes. This could cause problems with other CREATE TABLEs done under
LOCK TABLES.
This is done to get more free flag bits for alter_info->flags
Renamed all ALTER PARTITION defines to start with ALTER_PARTITION_
Renamed ALTER_PARTITION to ALTER_PARTITION_INFO
Renamed ALTER_TABLE_REORG to ALTER_PARTITION_TABLE_REORG
Other things:
- Shifted some ALTER_xxx defines to get empty bits at end
The main reason was to make it easier to print the above structures in
a debugger. An additional benefit is that I was able to use the same
defines for both structures, which simplifies some code.
Most of the code is just removing Alter_info:: and Alter_inplace_info::
from alter table flags.
The following renames were done:
HA_ALTER_FLAGS -> alter_table_operations
CHANGE_CREATE_OPTION -> ALTER_CHANGE_CREATE_OPTION
Alter_info::ADD_INDEX -> ALTER_ADD_INDEX
DROP_INDEX -> ALTER_DROP_INDEX
ADD_UNIQUE_INDEX -> ALTER_ADD_UNIQUE_INDEX
DROP_UNIQUE_INDEX -> ALTER_DROP_UNIQUE_INDEX
ADD_PK_INDEX -> ALTER_ADD_PK_INDEX
DROP_PK_INDEX -> ALTER_DROP_PK_INDEX
Alter_info::ALTER_ADD_COLUMN -> ALTER_PARSE_ADD_COLUMN
Alter_info::ALTER_DROP_COLUMN -> ALTER_PARSE_DROP_COLUMN
Alter_inplace_info::ADD_INDEX -> ALTER_ADD_NON_UNIQUE_NON_PRIM_INDEX
Alter_inplace_info::DROP_INDEX -> ALTER_DROP_NON_UNIQUE_NON_PRIM_INDEX
Other things:
- Added typedef alter_table_operations for alter table flags
- DROP CHECK CONSTRAINT can now be done online
- Added checks for Aria tables in alter_table_online.test
- alter_table_flags now takes an ulonglong as argument.
- Don't support online operations if checksum option is used.
- sql_lex.cc doesn't add ALTER_ADD_INDEX if index is not created
fill_alter_table() always thought that an index was changed because
of a wrong check of block_size. Some engines had code to correct this
that should not be needed; Aria didn't, and because of this some online
operations didn't work.
This code fixes the comparison of block_size to only compare if it's set.
PREBUILT->TABLE->N_MYSQL_HANDLES_OPENED == 1
ANALYSIS:
=========
Adding a unique index to an InnoDB table which is locked as
multiple instances may trigger an InnoDB assert.
When we add a primary key or a unique index, we need to
drop the original table and rebuild all indexes. InnoDB
expects that only the instance of the table that is being
rebuilt is open during the process. In the current
scenario we have opened multiple instances of the table.
This triggers an assert during the table rebuild.
'Locked_tables_list' encapsulates a list of all
instances of tables locked by the LOCK TABLES statement.
FIX:
===
We are now temporarily closing all the instances of the
table except the one which is being altered and later
reopen them via Locked_tables_list::reopen_tables().
Handle string length as size_t, consistently (almost always:)).
Change function prototypes to accept size_t, where in the past
ulong or uint were used. Change local/member variables to size_t
when appropriate.
This fix excludes rocksdb, spider, sphinx and connect for now.
1st. Create_field does not have a handy function like vers_sys_field(),
and Create_field and Field should not diverge much, since Field does have
this function.
2nd. Versioning columns did not have NOT_NULL_FLAG, even though they can
never be null. So I have added NOT_NULL_FLAG.
3rd. Since I added NOT_NULL_FLAG, this created one issue: versioning columns
of datatype bigint unsigned were getting NO_DEFAULT_VALUE_FLAG. This made
tests like versioning.insert fail, the reason being that if a column has
this flag and we insert a 'default' value, it generates an error (that is
why the test was failing). So now versioning columns won't get the
NO_DEFAULT_VALUE_FLAG flag.
Problem:-
create or replace table t1 (pk int auto_increment primary key invisible, i int);
alter table t1 modify pk int invisible;
This last ALTER makes an invisible column which is NOT NULL and does not
have a default value.
Analysis:-
This is caused because our error check for the NOT_NULL_FLAG and
NO_DEFAULT_VALUE_FLAG flags misses this sql_field, but this is not the fault
of the error check :). Actually this field comes via
mysql_prepare_alter_table and does not have the NO_DEFAULT_VALUE_FLAG flag
turned on. (If it were CREATE TABLE, NO_DEFAULT_VALUE_FLAG would have been
turned on in Column_definition::check and this would have generated an
error.)
Solution:-
I have moved the error check towards the end of mysql_prepare_create_table,
because by that point we have applied NO_DEFAULT_VALUE_FLAG to the required
columns.
This will make it easier to see how memory allocation is done when debugging
with either DBUG or gdb.
It will especially help when debugging stored procedures.
The main change is a name argument as the second argument to
init_alloc_root() and init_sql_alloc().
Other things:
- Added DBUG_ENTER/EXIT to some Virtual_tmp_table functions
This was done in, among other things:
- thd->db and thd->db_length
- TABLE_LIST tablename, db, alias and schema_name
- Audit plugin database name
- lex->db
- All db and table names in Alter_table_ctx
- st_select_lex db
Other things:
- Changed a lot of functions to take const LEX_CSTRING* as argument
for db, table_name and alias. See init_one_table() as an example.
- Changed some function arguments from LEX_CSTRING to const LEX_CSTRING
- Changed some lists from LEX_STRING to LEX_CSTRING
- threads_mysql.result changed because the process list db wasn't always
  correctly updated
- New append_identifier() function that takes LEX_CSTRING* as arguments
- Added new element tmp_buff to Alter_table_ctx to separate temp name
handling from temporary space
- Ensure we store the length of table/db names after my_casedn_str()
- Removed an unused version of rename_table_in_stat_tables()
- Changed Natural_join_column::table_name and db_name() to never return
NULL (used for print)
- thd->get_db() now returns db as a printable string (thd->db.str or "")
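A small sketch of the pattern behind the last item (LEX_CSTRING as the usual
{str, length} pair; the THD wrapper is simplified to a toy):

  #include <cstdio>
  #include <cstring>

  struct LEX_CSTRING { const char *str; size_t length; };

  struct THD_like
  {
    LEX_CSTRING db;                    // db.str may be nullptr

    // Printable version of the database name: never returns nullptr,
    // so callers can pass it straight to "%s".
    const char *get_db() const { return db.str ? db.str : ""; }
  };

  int main()
  {
    THD_like thd{{nullptr, 0}};
    printf("db='%s'\n", thd.get_db());   // db=''
    thd.db= {"test", strlen("test")};
    printf("db='%s'\n", thd.get_db());   // db='test'
    return 0;
  }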
MDEV-11415 Remove excessive undo logging during ALTER TABLE…ALGORITHM=COPY
Move a test from innodb.rename_table_debug to innodb.alter_copy.
ha_innobase::extra(HA_EXTRA_BEGIN_ALTER_COPY): Register id-versioned
tables so that mysql.transaction_registry will be updated, even for
empty tables that are subjected to ALTER TABLE…ALGORITHM=COPY.
If a crash occurs during ALTER TABLE…ALGORITHM=COPY, InnoDB would spend
a lot of time rolling back writes to the intermediate copy of the table.
To reduce the amount of busy work done, a work-around was introduced in
commit fd069e2bb3 in MySQL 4.1.8 and 5.0.2,
to commit the transaction after every 10,000 inserted rows.
A proper fix would have been to disable the undo logging altogether and
to simply drop the intermediate copy of the table on subsequent server
startup. This is what happens in MariaDB 10.3 with MDEV-14717,MDEV-14585.
In MariaDB 10.2, the intermediate copy of the table would be left behind
with a name starting with the string #sql.
This is a backport of a bug fix from MySQL 8.0.0 to MariaDB,
contributed by jixianliang <271365745@qq.com>.
Unlike recent MySQL, MariaDB supports ALTER IGNORE. For that operation
InnoDB must for now keep the undo logging enabled, so that the latest
row can be rolled back in case of an error.
In Galera cluster, the LOAD DATA statement will retain the existing
behaviour and commit the transaction after every 10,000 rows if
the parameter wsrep_load_data_splitting=ON is set. The logic to do
so (the wsrep_load_data_split() function and the call
handler::extra(HA_EXTRA_FAKE_START_STMT)) are joint work
by Ji Xianliang and Marko Mäkelä.
The original fix:
Author: Thirunarayanan Balathandayuthapani <thirunarayanan.balathandayuth@oracle.com>
Date: Wed Dec 2 16:09:15 2015 +0530
Bug#17479594 AVOID INTERMEDIATE COMMIT WHILE DOING ALTER TABLE ALGORITHM=COPY
Problem:
During ALTER TABLE, we commit and restart the transaction for every
10,000 rows, so that the rollback after recovery would not take so long.
Fix:
Suppress the undo logging during copy alter operation. If fts_index is
present then insert directly into fts auxiliary table rather
than doing at commit time.
ha_innobase::num_write_row: Remove the variable.
ha_innobase::write_row(): Remove the hack for committing every 10000 rows.
row_lock_table_for_mysql(): Remove the extra 2 parameters.
lock_get_src_table(), lock_is_table_exclusive(): Remove.
Reviewed-by: Marko Mäkelä <marko.makela@oracle.com>
Reviewed-by: Shaohua Wang <shaohua.wang@oracle.com>
Reviewed-by: Jon Olav Hauglid <jon.hauglid@oracle.com>
The thd->lex->part_info should be kept intact during PS
execution, otherwise the second execution gets that modified part_info.
Let's modify thd->work_part_info instead.
"tokudb_alter_table.drop_add_pk_part_104 leaves a temporary file behind"
Fixed by copying 3 lines from 10.1 to 10.0 that cleaned up the temporary
file for partitioning tables.
May also fix: MDEV-14970 "MariaDB crashed with signal 11 and Aria table"
I am not able to reproduce a crash, however there was no protection in
print_keydup_error() if the storage engine reported a wrong key number.
This patch adds such a protection and should stop any further crashes
in this case.
Other things:
- Added extra protection in Aria to not set errkey to more than the number
  of keys. (I don't think this is the cause of this crash, but better safe
  than sorry.)
- Extended test_if_equal_repl_errors() to handle different cases of
  ER_DUP_ENTRY. This is mainly a precaution for the future.
and the system_versioning_transaction_registry variable.
The user enables transaction registry by specifying BIGINT for
row_start/row_end columns.
check mysql.transaction_registry structure on the first open,
not on startup. Avoid warnings unless transaction_registry
is actually used.
Other things, mainly to get
create_mysqld_error_find_printf_error tool to work:
- Added protection to not include mysqld_error.h twice
- Include "unireg.h" instead of "mysqld_error.h" in server
- Added protection if ER_XX messages are already defined
- Removed wrong calls to my_error(ER_OUTOFMEMORY) as
my_malloc() and my_alloc() will do this automatically
- Added missing %s to ER_DUP_QUERY_NAME
- Removed old and wrong calls to my_strerror() when using
MY_ERROR_ON_RENAME (wrong merge)
- Fixed the deadlock error message from Galera. Before, the extra
  information given to ER_LOCK_DEADLOCK was missing, because
  ER_LOCK_DEADLOCK doesn't provide any extra information.
I kept #ifdef mysqld_error_find_printf_error_used in sql_acl.h
to make it easy to do this kind of check again in the future
Locked_tables_list::unlock_locked_tables
Similarly to regular DROP TABLE, don't leave locked tables mode if CREATE OR
REPLACE dropped a temporary table but failed to create a new one.
The problem is that there's no tracking of which temporary table was "locked"
by LOCK TABLES.
Other changes done to get this to work:
- Added 'internal_tables' to the TABLE object to list which sequence tables
  are needed to use the table.
- Mark any expression using DEFAULT() with LEX->default_used.
This is needed when deciding if we should open internal sequence
tables when a table is opened (we don't need to open sequence tables
if the main table is only used with SELECT).
- Create_and_open_temporary_table() can now also open all internal
sequence tables.
- Added option MYSQL_LOCK_USE_MALLOC to mysql_lock_tables()
to force memory allocation to be used with malloc instead of
memroot.
- Added flag to MYSQL_LOCK to remember if allocation was done with
malloc or memroot (makes code simpler and safer).
- init_one_table_for_prelocking() now takes an argument for which lock to
  use, instead of whether it's a routine or something else.
- Renamed prelocking placeholders to make them more understandable, as
  they are now used in more code.
- Changed the test in check_lock_and_start_stmt() of whether a found table
  has correct locks. The old test didn't work for tables that have lock
  TL_WRITE_ALLOW_WRITE, which is what sequence tables are using.
- Added VCOL_NOT_VIRTUAL option to ensure that sequence functions can't
be used with virtual columns
- More sequence tests
This is needed for MDEV-13679 Enabled sequences to be used in DEFAULT
Added a new option for count_cuted_fields: CHECK_FIELD_EXPRESSION,
which is used to check if a DEFAULT expression is correct before
ALTER TABLE starts.
Also changed all tests of the form:
if (thd->count_cuted_fields)
to
if (thd->count_cuted_fields > CHECK_FIELD_EXPRESSION)
Merge branch '10.3' into trunk
Both field_visibility and VERS_HIDDEN_FLAG exist independently.
TODO:
VERS_HIDDEN_FLAG should be replaced with SYSTEM_INVISIBLE (or COMPLETELY_INVISIBLE?).
Feature Definition:-
This feature adds invisible column functionality to the server.
There are 4 levels of "invisibility":
1. Not invisible (NOT_INVISIBLE) — Normal columns created by the user
2. A little bit invisible (USER_DEFINED_INVISIBLE) — columns that the
user has marked invisible. They aren't shown in SELECT * and they
don't require values in INSERT table VALUE (...). Otherwise
they behave as normal columns.
3. More invisible (SYSTEM_INVISIBLE) — Can be queried explicitly,
otherwise invisible from everything. Think of the ROWID system column.
Because they're invisible from ALTER TABLE and from CREATE TABLE
they cannot be created or dropped; they're created by the system.
The user cannot create a column with the same name as a
SYSTEM_INVISIBLE column.
4. Very invisible (COMPLETELY_INVISIBLE) — as above, but cannot be
queried either. They can only show up in EXPLAIN EXTENDED (might
be possible for a very invisible indexed virtual column) but
otherwise they don't exist for the user. If the user creates a column
which has the same name as a COMPLETELY_INVISIBLE column, then the
COMPLETELY_INVISIBLE column is renamed again. So it is completely
invisible from the user.
Invisible Index (HA_INVISIBLE_KEY):-
Creation of invisible columns requires a new type of index which
is only visible to the system. The user can't see/alter/create/delete
this index. If the user creates an index with the same name as an
invisible index, then it will be renamed.
Syntax Details:-
Only USER_DEFINED_INVISIBLE columns can be created by the user. Such a
column is created by adding the INVISIBLE suffix after the column definition.
Create table t1( a int invisible, b int);
Rules:-
There are some rules/restrictions related to the use of invisible columns:
1. All the columns in a table can't be invisible.
Create table t1(a int invisible); \\error
Create table t1(a int invisible, b int invisible); \\error
2. If you want an invisible column to be NOT NULL then you have to supply
a default value for the column.
Create table t1(a int, b int invisible not null); \\error
3. If you create a view/create a table with select *, then this won't copy
invisible fields. So the newly created view/table won't have any invisible
columns.
Create table t2 as select * from t1; // t2 won't have t1's invisible column
Create view v1 as select * from t1; // v1 won't have t1's invisible column
4. Invisibility won't be forwarded to the next table in any case of create
table/view as select */(a,b,c) from table.
Create table t2 as select a,b,c from t1; // t2 will have t1's invisible
// column (b), but it won't be invisible in t2
Create view v1 as select a,b,c from t1; // v1 will have t1's invisible
// column (b), but it won't be invisible in v1
Implementation Details:-
Parsing:- INVISIBLE_SYM is added into vcol_attribute (so it's like the
unique suffix). It is also added into keyword_sp_not_data_type so that a
table can have a column named invisible.
Implementation details are given for each modified/created function below.
(Some functions are left out as they are self-explanatory.)
(m= Modified, n= Newly Created)
mysql_prepare_create_table(m):- Extra checks for invisible columns are
added. Some DBUG_EXECUTE_IF are also added for test cases.
mysql_prepare_alter_table(m):- Now this will drop all the
COMPLETELY_INVISIBLE columns and HA_INVISIBLE_KEY indexes. Further
modifications are made to stop drop/change/delete of SYSTEM_INVISIBLE
columns.
build_frm_image(m):- Now this allows incorporating the field_visibility
status into the frm image. To remain compatible with old frms,
field_visibility info will only be written when any of the fields is
not NOT_INVISIBLE.
extra2_write_additional_field_properties(n):- This will write field
visibility info into the buffer. We first write EXTRA2_FIELD_FLAGS into
the buffer/frm, then each next char will have the field_visibility for
each field.
init_from_binary_frm_image(m):- Now if we get EXTRA2_FIELD_FLAGS,
then we read the next n (n = number of fields) chars and set the
field_visibility. We also increment
thd->status_var.feature_invisible_columns. One important thing to
note: if we find out that a key contains a field whose visibility is
> USER_DEFINED_INVISIBLE, then we declare this key an invisible key.
sql_show.cc is changed accordingly to make SHOW TABLE and SHOW KEYS
output correct.
mysql_insert(m):- If we detect that we are doing an insert of the form
insert into t1 values(1,1); without explicitly specifying
columns, then we check whether we have invisible fields; if yes, then
we reset the whole record. Why? Because first we want hidden columns
to get their default/null value. Second, auto_increment has the property
of no default and no null, which violates invisible column rule 2, and
because of this it was giving an error. Resetting table->record[0]
eliminates this issue. For more info, put a breakpoint on
handler::write_row and see the auto_increment value.
fill_record(m):- we continue the loop if we find an invisible column,
because it is already reset / will get its value if it has a default.
Test cases:- Since we cannot directly add a > USER_DEFINED_INVISIBLE
column, I have used debug_dbug to create it in mysql_prepare_create_table.
Patch Credit:- Serg Golubchik
Do not generate fake values when adding an auto-inc column to a versioned
table. This is not an auto-inc issue, but a more general case of adding
a non-nullable unique column to a table with history. We don't support
it yet, not even with a special auto-inc hack. As a workaround, one
can use a nullable unique column; that works.
THD::vers_update_trt, trx_t::vers_update_trt, trx_savept_t::vers_update_trt:
Remove. Instead, determine from trx_t::mod_tables whether versioned
columns were affected by the transaction.
handlerton::prepare_commit_versioned: Replaces vers_get_trt_data.
Return the transaction start ID and also the commit ID, in case
the transaction modified any system-versioned columns (0 if not).
TR_table::store_data(): Remove (merge with update() below).
TR_table::update(): Add the parameters start_id, end_id.
ha_commit_trans(): Remove a condition on SQLCOM_ALTER_TABLE.
If we need something special for ALTER TABLE...ALGORITHM=INPLACE,
that can be done inside InnoDB by modifying trx_t::mod_tables.
innodb_prepare_commit_versioned(): Renamed from innodb_get_trt_data().
Check trx_t::mod_tables to see if any changes to versioned columns
are present.
trx_mod_table_time_t: A pair of logical timestamps, replacing the
undo_no_t in trx_mod_tables_t. Keep track of not only the first
modification to a persistent table in each transaction, but also
the first modification of a versioned column in a table.
dtype_t, dict_col_t: Add the accessor is_any_versioned(), to check
if the type refers to a system-versioned user or system column.
upd_t::affects_versioned(): Check if an update affects a versioned
column.
trx_undo_report_row_operation(): If a versioned column is affected
by the update, invoke trx_mod_table_time_t::set_versioned().
trx_rollback_to_savepoint_low(): If all changes to versioned columns
were rolled back, invoke trx_mod_table_time_t::rollback_versioned(),
so that trx_mod_table_time_t::is_versioned() will no longer hold.
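A rough sketch of the trx_mod_table_time_t idea (illustrative only; the real
type is built on undo_no_t bookkeeping inside trx_mod_tables_t):

  #include <cstdio>

  typedef unsigned long long undo_no_t;
  static const undo_no_t NONE= ~0ULL;

  // Track the first modification of a table in a transaction, and the
  // first modification of a system-versioned column in that table.
  class trx_mod_table_time_t
  {
    undo_no_t first;                   // first change to the table
    undo_no_t first_versioned;         // first change to a versioned column
  public:
    explicit trx_mod_table_time_t(undo_no_t no)
      : first(no), first_versioned(NONE) {}

    undo_no_t start() const { return first; }

    void set_versioned(undo_no_t no)
    { if (first_versioned == NONE) first_versioned= no; }

    bool is_versioned() const { return first_versioned != NONE; }

    // Rolling back past the first versioned change clears the flag,
    // so is_versioned() no longer holds.
    void rollback_versioned(undo_no_t limit)
    { if (first_versioned != NONE && first_versioned >= limit)
        first_versioned= NONE; }
  };

  int main()
  {
    trx_mod_table_time_t t(10);
    t.set_versioned(15);
    printf("%d\n", t.is_versioned());  // 1
    t.rollback_versioned(12);          // undo everything from undo-no 12 on
    printf("%d\n", t.is_versioned());  // 0
    return 0;
  }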
Most "new" failures fixed in the following files:
- sql_select.cc
- item.cc
- item_func.cc
- opt_subselect.cc
Other things:
- Allocate udf_handler strings in mem_root
- Required changes in sql_string.h
- Add mem_root as argument to some new [] calls
- Mark udf_handler strings as thread specific
- Removed some comment blocks with code
This was done to get more information about where time is spent.
Now we can get proper timing for time spent in commit, rollback,
binlog write etc.
The following stages were added:
- Commit
- Commit_implicit
- Rollback
- Rollback implicit
- Binlog write
- Init for update
- This is used instead of "Init" for insert, update and delete.
- Starting cleanup
The following stages were changed:
- "Unlocking tables" stage reset stage to previous stage at end
- "binlog write" stage resets stage to previous stage at end
- "end" -> "end of update loop"
- "cleaning up" -> "Reset for next command"
- Added stage_searching_rows_for_update when searching for rows
to be deleted.
Other things:
- Renamed all stages to start with a capital letter (before there was no
  consistency)
- Increased performance_schema_max_stage_classes from 150 to 160.
- Most of the test changes in performance schema come from the renaming of
  stages.
- Removed duplicate output of variables and initial state in a lot of
  performance schema tests.
This was done to make it easier to change a default value for a
performance variable without affecting all tests.
- Added start_server_variables.test to check configuration
- Removed some duplicate "closing tables" stages
- Updated position for "stage_init_update" and "stage_updating" for
delete, insert and update to be just before update loop (for more
exact timing).
- Don't set "Checking permissions" twice in a row.
- Remove stage_end stage from creating views (not done for create table
either).
- Updated default performance history size from 10 to 20 because of new
stages
- Ensure that ps_enabled is correct (to be used in a later patch)
This happened when trying to delete a sequence hidden by a temporary
table.
Fixed by ignoring non-sequence temporary tables when trying to drop
sequences.
Signed-off-by: Monty <monty@mariadb.org>
This happened when trying to PARTITION a SEQUENCE table
Problem was that the wrong function was used to get the engine name.
Signed-off-by: Monty <monty@mariadb.org>
backport ce6c0e584e
MDEV-8960: Can't refer the same column twice in one ALTER TABLE
Problem was that if a column was created in ALTER TABLE, then when
it was referred to again it was not looked up in the list
of new columns.
mysql_prepare_alter_table:
There are two cases:
(1) If ALTER TABLE adds a new column and then a later alter
clause changes the field definition, there was no check against the
list of new columns; instead an incorrect error was given.
(2) If ALTER TABLE adds a new column and then a later alter
clause changes the default, there was no check against the list of
new columns; instead an incorrect error was given.
- Fix win64 pointer truncation warnings
(usually coming from misusing 0x%lx and long cast in DBUG)
- Also fix printf-format warnings
Make the above mentioned warnings fatal.
- fix pthread_join on Windows to set return value.
Storage engine independent support for column compression.
TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB, TINYTEXT, TEXT, MEDIUMTEXT, LONGTEXT,
VARCHAR and VARBINARY columns can be compressed.
New COMPRESSED column attribute added:
COMPRESSED[=<compression_method>]
System variables added:
column_compression_threshold
column_compression_zlib_level
column_compression_zlib_strategy
column_compression_zlib_wrap
Status variables added:
Column_compressions
Column_decompressions
Limitations:
- the only supported method currently is zlib
- CSV storage engine stores data uncompressed on-disk even if COMPRESSED
attribute is present
- it is not possible to create indexes over compressed columns.
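A usage sketch (threshold value and names are arbitrary):

    SET SESSION column_compression_threshold = 128; -- compress only longer values
    CREATE TABLE t1 (
      t TEXT COMPRESSED,                -- zlib, the only supported method for now
      v VARBINARY(1000) COMPRESSED=zlib
    );
    SHOW STATUS LIKE 'Column%'; -- Column_compressions / Column_decompressions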
For running the Galera tests, the variable my_disable_leak_check
was set to true in order to avoid assertions due to memory leaks
at shutdown.
Some adjustments due to MDEV-13625 (merge InnoDB tests from MySQL 5.6)
were performed. The most notable behaviour changes from 10.0 and 10.1
are the following:
* innodb.innodb-table-online: adjustments for the DROP COLUMN
behaviour change (MDEV-11114, MDEV-13613)
* innodb.innodb-index-online-fk: the removal of a (1,NULL) record
from the result; originally removed in MySQL 5.7 in the
Oracle Bug #16244691 fix
377774689b
* innodb.create-index-debug: disabled due to MDEV-13680
(the MySQL Bug #77497 fix was not merged from 5.6 to 5.7.10)
* innodb.innodb-alter-autoinc: MariaDB 10.2 behaves like MySQL 5.6/5.7,
while MariaDB 10.0 and 10.1 assign different values when
auto_increment_increment or auto_increment_offset are used.
Also MySQL 5.6/5.7 exhibit different behaviour between
ALGORITHM=INPLACE and ALGORITHM=COPY, so something needs to be tested
and fixed in both MariaDB 10.0 and 10.2.
* innodb.innodb-wl5980-alter: disabled because it would trigger an
InnoDB assertion failure (MDEV-13668 may need additional effort in 10.2)
This fixes MDEV-7742 and MDEV-8305 (Allow user to specify if stored
procedures should be logged in the slow and general log)
New functionality:
- Added new variables log_slow_disabled_statements and
  log_disabled_statements that can be used to disable logging of certain
  queries to the slow and general log. Currently supported options are
  'admin', 'call', 'slave' and 'sp'.
  Defaults are as before: only 'sp' (stored procedure statements) is
  disabled for the slow and general log.
- Slow log to files now includes the following new information:
- When logging stored procedure statements, the name of the stored
  procedure is logged.
- Number of created tmp_tables, tmp_disk_tables and the space used
by temporary tables.
- When logging 'call', the logged status now contains the sum of all
included statements. Before only 'time' was correct.
- Added filesort_priority_queue as an option for log_slow_filter (this
  option existed before, but was not exposed)
- Added support for BIT types in my_getopt()
Mapped some old variables to bitmaps (old variables can still be used)
- Variable 'log_queries_not_using_indexes' is mapped to
log_slow_filter='not_using_index'
- Variable 'log_slow_slave_statements' is mapped to
log_slow_disabled_statements='slave'
- Variable 'log_slow_admin_statements' is mapped to
log_slow_disabled_statements='admin'
- All the above variables are changed to session variables from global
variables
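For illustration of the mapping (a sketch; variable names as above):

    -- new style: do not log stored procedure or admin statements to the slow log
    SET SESSION log_slow_disabled_statements = 'sp,admin';
    -- old style still works and maps onto the new variables:
    SET SESSION log_slow_admin_statements = 0;
    SET SESSION log_queries_not_using_indexes = 1; -- maps to log_slow_filter='not_using_index'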
Other things:
- Simplified LOGGER::log_command. We don't need to check for super if
OPTION_LOG_OFF is set as this flag can only be set if one is a super
user.
- Removed some setting of enable_slow_log as it's guaranteed to be set by
mysql_parse()
- mysql_admin_table() now sets thd->enable_slow_log
- Added prepare_logs_for_admin_command() to reset thd->enable_slow_log if
needed.
- Added new functions to store, restore and add slow query status
- Added new functions to store and restore query start time
- Reorganized Sub_statement_state according to types
- Added code in dispatch_command() to ensure that
thd->reset_for_next_command() is always called for a query.
- Added thd->last_sql_command to simplify checking of what was the type
of the last command. Needed when logging to slow log as lex->sql_command
may have changed before slow logging is called.
- Moved QPLAN_TMP_... to where status for tmp tables are updated
- Added new THD variable, affected_rows, to be able to correctly log
number of affected rows to slow log.
- Added sql/mariadb.h file that should be included first by files in sql
directory, if sql_plugin.h is not used (sql_plugin.h adds SHOW variables
that must be done before my_global.h is included)
- Removed a lot of include my_global.h from include files
- Removed include's of some files that my_global.h automatically includes
- Removed duplicated include's of my_sys.h
- Replaced include my_config.h with my_global.h
The problem was that when a column was added earlier in the same ALTER
TABLE, a later reference to it did not search the list of new columns.
mysql_prepare_alter_table:
There are two cases:
(1) If ALTER TABLE adds a new column and a later clause changes the
    field definition, the list of new columns was not checked;
    instead an incorrect error was given.
(2) If ALTER TABLE adds a new column and a later clause changes the
    default, the list of new columns was not checked; instead an
    incorrect error was given.
collateral changes:
* remove a test from innodb_virtual_basic that is already present in
gcol_keys_innodb
* set thd->abort_on_warning for inplace alter, just like it's set
for copy_data_between_tables - to have warnings converted into
errors identically in all alter algorithms
* don't ignore errors in TABLE::update_virtual_field
SQL Standard behavior for DROP COLUMN xxx RESTRICT:
* If a constraint (UNIQUE or CHECK) uses only the dropped column, it is
  automatically dropped too. If it uses more columns, an error is raised.
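For illustration (invented schemas):

    CREATE TABLE t1 (a INT, b INT, CHECK (a > 0));
    ALTER TABLE t1 DROP COLUMN a RESTRICT; -- ok: the CHECK uses only "a" and is dropped too

    CREATE TABLE t2 (a INT, b INT, UNIQUE (a, b));
    ALTER TABLE t2 DROP COLUMN a RESTRICT; -- error: the constraint uses another column as well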
The problem was that the introduction of max-thread-mem-used can cause
an allocation error very early, even before mysql_parse() is called.
As mysql_parse() calls thd->reset_for_next_command(), which called
clear_error(), the error number was lost.
Fixed by adding an option to have unique messages for each KILL
signal and changing max-thread-mem-used to use this new feature.
This removes a lot of problems with the original approach, where
one could get errors signaled silently almost any time.
Also fixed by moving clear_error() from reset_for_next_command() to
do_command(), before any memory allocation for the thread.
Related changes:
- reset_for_next_command() now has an optional parameter that controls
  whether clear_error() is called. By default it is called, but no longer
  from dispatch_command(), which was the original problem.
- Added an optional parameter to clear_error() to force calling of
  reset_diagnostics_area(). Before, clear_error() only called
  reset_diagnostics_area() if there was no error, so we normally
  called reset_diagnostics_area() twice.
- This change removed several duplicated calls to clear_error()
when starting a query.
- Reset max_mem_used on COM_QUIT, to protect against kill during
quit.
- Use fatal_error() instead of setting is_fatal_error (cleanup)
- Set fatal_error if max_thread_mem_used is signaled.
(Same logic we use for other places where we are out of resources)
Don't write to a temporary file, use String.
Remove strange one-liner "helpers", use String methods.
Don't use current_thd, don't allocate memory for 1-byte strings, etc.
DBUG_EXECUTE_IF was wrong: it used my_error() but did not set error=1.
It's not clear what it was actually testing, what it was supposed
to be testing, and what it has to do with bug#43138, so I removed it.
CREATE/DROP TEMPORARY TABLE are not safe to optimistically replicate in
parallel with other transactions, so they need to be marked as "ddl" in the
binlog.
This was already done for stand-alone CREATE/DROP TEMPORARY. But temporary
tables can also be created and dropped inside a BEGIN...END transaction, and
such transactions were not marked as ddl. Nor was the DROP TEMPORARY TABLE
statement emitted implicitly when a client connection is closed.
So this patch adds such ddl mark for the missing cases.
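For illustration (invented names), a transaction like the following must
now carry the "ddl" mark in the binlog:

    BEGIN;
    CREATE TEMPORARY TABLE tmp1 (a INT);
    INSERT INTO tmp1 SELECT * FROM t1;
    COMMIT;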
The difference to Kristian's original patch is mainly a fix in
mysql_trans_commit_alter_copy_data() to remember the unsafe_rollback_flags
over the temporary commit.
IB: Fixed the logic for deciding when to do versioned or usual row
updates. Now unversioned updates on versioned tables can be done simply
by clearing the `TABLE_SHARE::versioned` flag.
SQL: DDL tracking for:
* RENAME TABLE, ALTER TABLE .. RENAME TO;
* DROP TABLE;
* data-modifying operations (e.g. ALTER TABLE .. ADD/DROP COLUMN).