- Tabular EXPLAIN now prints "RECURSIVE UNION".
- There is a basic implementation of EXPLAIN FORMAT=JSON.
  - It produces a "recursive_union" JSON struct (see the sketch below).
- No other details or ANALYZE support yet.
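A sketch of where the new node appears (the folders table is made up, and the JSON layout shown in the comment is only a rough summary, not verbatim server output):

  -- folders(id, parent_id) is a hypothetical example table
  EXPLAIN FORMAT=JSON
  WITH RECURSIVE ancestors AS (
    SELECT id, parent_id FROM folders WHERE id = 1
    UNION ALL
    SELECT f.id, f.parent_id
    FROM folders f JOIN ancestors a ON f.id = a.parent_id
  )
  SELECT * FROM ancestors;
  -- The JSON output contains a "recursive_union" node wrapping the anchor
  -- and recursive parts; tabular EXPLAIN prints "RECURSIVE UNION" instead.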
Temporary tables created for recursive CTEs were instantiated at the prepare phase. As a result, these temporary tables lacked indexes for look-ups and the optimizer could not use them.
Variant #4 of the fix.
Make ORDER BY optimization functions take multiple equalities into account (see the sketch after this list). This is done in several places:
- remove_const() checks whether we can sort the first table in the
  join, or whether we need to put rows into a temporary table and then sort.
- test_if_order_by_key() checks whether there are indexes that
can be used to produce the required ordering
- make_unireg_sortorder() constructs sort criteria for filesort.
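A minimal sketch of the effect (tables and index are made up): with the multiple equality t1.a = t2.b, ordering by t2.b is equivalent to ordering by t1.a, so an index on the first table can produce the required ordering and the temporary-table-plus-filesort step can be skipped:

  CREATE TABLE t1 (a INT, KEY(a));
  CREATE TABLE t2 (b INT);
  -- t1.a and t2.b belong to one multiple equality here
  EXPLAIN
  SELECT * FROM t1 STRAIGHT_JOIN t2 WHERE t1.a = t2.b ORDER BY t2.b;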
This bug revealed a serious problem: if the same partition list
was used in two window specifications then the temporary table created
to calculate window functions contained fields for two identical
partitions. This problem was fixed as well.
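A sketch of the duplicated-partition case (hypothetical table): both window functions below share one PARTITION BY / ORDER BY list, and the temporary table now carries that partition's fields only once:

  SELECT dept, salary,
         RANK()      OVER (PARTITION BY dept ORDER BY salary) AS rnk,
         SUM(salary) OVER (PARTITION BY dept ORDER BY salary) AS running
  FROM employees;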
- Rename Window_funcs_computation to Window_funcs_computation_step
- Introduce Window_func_sort which invokes filesort and then
invokes computation of all window functions that use this ordering.
- Expose Window functions' sort operations in EXPLAIN|ANALYZE FORMAT=JSON
Added class Window_funcs_computation, with setup() method to setup
execution, and exec() to run window function computation.
setup() is currently trivial. In the future, it is expected to optimize
the number of sorting operations and passes that are done over the temp.
table.
Make it possible to use filesort and init_read_record() for the same table.
This will simplify code for WINDOW FUNCTIONS (MDEV-6115)
- Filesort_info renamed to SORT_INFO and moved to filesort.h
- filesort now returns SORT_INFO
- init_read_record() now takes a SORT_INFO parameter.
- unique declaration is moved to uniques.h
- subselect caching of buffers is now more explicit than before
- filesort_buffer is now reusable even if rec_length has changed.
- filesort_free_buffers() and free_io_cache() calls are removed
- Remove one malloc() when using get_addon_fields()
Other things:
- Added --debug-assert-on-not-freed-memory option to make it easier to
debug some not-freed-memory issues.
These do not have any meaning after MDEV-8646. Their only valid values are:
- table_access_tabs= join_tab;
- top_table_access_tabs_count= top_join_tab_count;
Disable the code that attempts to group window functions together
by their PARTITION BY / ORDER BY clauses, because
1. It doesn't work: when I issue a query with just one window function,
and no indexes on the table, filesort is not invoked at all.
2. It is currently not possible to verify that it works.
Add my own code that does invoke filesort() for each window function.
- Hopefully the sort criteria are right
- Debugging shows that filesort operates on {sort_key, rowid} pairs (OK)
- We can read the filesort rowid result in order.
"Re-factor the code for post-join operations".
The patch mainly contains the code ported from mysql-5.6 and
created for two essential architectural changes:
1. WL#5558: Resolve ORDER BY execution method at the optimization stage
2. WL#6071: Inline tmp tables into the nested loops algorithm
The first task was implemented for mysql-5.6 by Ole John Aske.
It allows all decisions on the ORDER BY operation to be made at the
optimization stage.
The second task, implemented for mysql-5.6 by Evgeny Potemkin, adds JOIN_TAB
nodes for post-join operations that require temporary tables. It allows these
operations to be executed within the nested-loops algorithm, which before
this task was used only for join queries. Besides this, the task moves all
planning of the execution of these operations from the execution phase to
the optimization phase.
Some other re-factoring changes of mysql-5.6 were pulled in, mainly because
it was easier to pull them in than roll them back. In particular all
changes concerning Ref_ptr_array were incorporated.
The port required some changes in the MariaDB code that concerned the
functionality of EXPLAIN and ANALYZE. This was done mainly by Sergey
Petrunia.
This task is to allow storage engines that can execute GROUP BY or
summary queries efficiently to intercept a full query or sub query from
MariaDB and deliver the result either to the client or to a temporary
table for further processing.
- Added code in sql_select.cc to intercept GROUP BY queries.
  Creation of the group_by_handler is done after all optimizations to allow
  the storage engine to benefit from an optimized WHERE clause and suggested
  indexes to use.
- Added group by handler to sequence engine and a group_by test suite as
a way to test the new interface.
- Intercept EXPLAIN with a message "Storage engine handles GROUP BY"
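Illustration using the Sequence engine's handler added here (it handles COUNT(*) and SUM of the primary key; the Extra wording is the one named above):

  EXPLAIN SELECT COUNT(*), SUM(seq) FROM seq_1_to_1000;
  -- Extra: "Storage engine handles GROUP BY"
  SELECT COUNT(*), SUM(seq) FROM seq_1_to_1000;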
libmysqld/CMakeLists.txt:
Added new group_by_handler files
sql/CMakeLists.txt:
Added new group_by_handler files
sql/group_by_handler.cc:
Implementation of group_by_handler functions
sql/group_by_handler.h:
Definition of group_by_handler class
sql/handler.h:
Added handlerton function to create a group_by_handler, if the storage
engine can intercept the query.
sql/item_cmpfunc.cc:
Allow one to evaluate item_equal at any time.
sql/sql_select.cc:
Added code to intercept GROUP BY queries
- If all tables are from the same storage engine and the query is
using sum functions, call create_group_by() to check if the storage
engine can intercept the query.
- If yes:
- create a temporary table to hold a GROUP_BY row or result
- In do_select() intercept normal query execution by instead
calling the group_by_handler to get the result
- Intercept EXPLAIN
sql/sql_select.h:
Added handling of group_by_handler
Added caching of the original join tab (needed for cleanup after
group_by handler)
storage/sequence/mysql-test/sequence/group_by.result:
Test group_by_handler interface
storage/sequence/mysql-test/sequence/group_by.test:
Test group_by_handler interface
storage/sequence/sequence.cc:
Added simple group_by_engine for handling COUNT(*) and
SUM(primary_key). This was done as a test of the group_by_handler
interface
- Added mem_root to all calls to new Item
- Added private method operator new(size_t size) to Item to ensure that
we always use a mem_root when creating an item.
This saves us one call to current_thd per Item creation.
Added mandatory thd parameter to Item (and all derivative classes) constructor.
Added thd parameter to all routines that may create items.
Also removed "current_thd" from Item::Item. This reduced the number of
pthread_getspecific() calls from 290 to 177 per OLTP RO transaction.
- Make the semi-join optimizer not choose LooseScan when 1) the index is
  not covering and 2) a full index scan would be required.
- Make sure that the code in make_join_select() that may change a
  full index scan into a range scan is not invoked when the table
  uses a full scan.
- Changed ER(ER_...) to ER_THD(thd, ER_...) when thd was known or when there were many calls to current_thd in the same function.
- Changed ER(ER_..) to ER_THD_OR_DEFAULT(current_thd, ER...) in some places where current_thd is not necessarily defined.
- Removed calls to current_thd when we have access to thd.
Part of this is optimization (not calling current_thd when not needed),
but part is a bug fix for error conditions where current_thd is not defined
(for example, during startup and shutdown of mysqld).
Notable renames done as otherwise a lot of functions would have to be changed:
- In JOIN structure renamed:
examined_rows -> join_examined_rows
record_count -> join_record_count
- In Field, renamed new_field() to make_new_field()
Other things:
- Added DBUG_ASSERT(thd == tmp_thd) in Item_singlerow_subselect() just to be safe.
- Removed old 'tab' prefix in JOIN_TAB::save_explain_data() and use members directly
- Added 'thd' as argument to a few functions to avoid calling current_thd.
Fixed several optimizer issues related to GROUP BY:
a) Referring to a SELECT column in HAVING sometimes calculated it twice, which caused problems with non-deterministic functions.
b) Removing duplicate fields and constants from GROUP BY was done too late for the "using index for group by" optimization to work.
c) EXPLAIN SELECT ... GROUP BY wrongly showed 'Using filesort' in some cases involving "Using index for group-by".
a) was fixed by:
- Changed the last argument of Item::split_sum_func2() from bool to int to allow more flags.
- Added a flag argument to Item::split_sum_func() to allow one to specify whether the item was in the SELECT part.
- Marked all split_sum_func() calls from SELECT with SPLIT_SUM_SELECT.
- Changed split_sum_func2() to do nothing if called with an argument that is not a sum function and doesn't include sum functions, when we are not an argument to SELECT.
This ensures that in a case like
select a*sum(b) as f1 from t1 where a=1 group by c having f1 <= 10;
the 'a' in the SELECT part is stored as a reference in the temporary table together with sum(b), while the 'a' in HAVING isn't (not needed, as 'a' is already a reference to a column in the result).
b) was fixed by:
- Added an extra remove_const() pass for GROUP BY arguments before make_join_statistics() in the case of a single-table SELECT.
This allows get_best_group_min_max() to optimize things better.
c) was fixed by:
- Added a test for the group-by optimization in JOIN::exec_inner for
select->quick->get_type() == QUICK_SELECT_I::QS_TYPE_GROUP_MIN_MAX
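A sketch of case c) (made-up table): with a composite index, a loose index scan should now be reported without a spurious filesort:

  CREATE TABLE t1 (a INT, b INT, KEY(a, b));
  EXPLAIN SELECT a, MAX(b) FROM t1 GROUP BY a;
  -- Extra: "Using index for group-by" (and no "Using filesort")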
item.cc:
- Simplified Item::split_sum_func2()
- Split tests to make them faster and easier to read
- Changed the last argument of Item::split_sum_func2() from bool to int to allow more flags
- Added a flag argument to Item::split_sum_func() to allow one to specify whether the item was in the SELECT part
- Changed split_sum_func2() to do nothing if called with an argument that is not a sum function and doesn't include sum functions, when we are not an argument to SELECT.
opt_range.cc:
- Simplified get_best_group_min_max() by first calculating the number of GROUP BY elements.
- Use join->group instead of join->group_list to test for GROUP BY, as join->group_list may be NULL if everything was optimized away.
sql_select.cc:
- Added an extra remove_const() pass for GROUP BY arguments before make_join_statistics() in the case of a single-table SELECT.
- Use group instead of group_list to test for GROUP BY, as group_list may be NULL if everything was optimized away.
- Moved the printing of "Error in remove_const" into remove_const() instead of having it in the caller.
- Simplified some if-tests by re-ordering code.
- Fixed update_depend_map_for_order() and remove_const() to handle the case where make_join_statistics() has not yet been called (join->join_tab is 0 in this case).
This is MDEV-7601, including its subtasks MDEV-7594, MDEV-7555, MDEV-7590, MDEV-7581 and MDEV-7589.
The problem was that select_lex->non_agg_fields was not properly reset for re-execution, which caused an overwrite of a random memory position.
The fix was to move non_agg_fields from select_lex to JOIN, which is properly reset.
Split first_breadth_first_tab() into
JOIN::first_breadth_first_optimization_tab() and
JOIN::first_breadth_first_execution_tab().
This allows eliminating a function call and one condition. Adjusted callers
accordingly.
Overhead change:
first_breadth_first_tab() 0.07% -> out of radar
next_breadth_first_tab() 0.04% -> 0.04%
JOIN::cleanup() 0.15% -> 0.11%
JOIN::save_explain_data_intern() 0.28% -> 0.24%
Step#5: changing the function remove_eq_conds() into a virtual method in Item.
It removes 6 virtual calls for Item_func::type(), and adds only 2
virtual calls for Item***::remove_eq_conds().
sql_alloc() has additional costs compared to direct mem_root allocation:
- function call: it is defined in a separate translation unit and can't be
inlined
- it needs to call pthread_getspecific() to get THD::mem_root
It is called dozens of times implicitly, at least by:
- List<>::push_back()
- List<>::push_front()
- new (for Sql_alloc derived classes)
- sql_memdup()
Replaced lots of implicit sql_alloc() calls with direct mem_root allocation,
passing through THD pointer whenever it is needed.
Number of sql_alloc() calls reduced from 345 to 41 per OLTP RO transaction.
pthread_getspecific() overhead dropped 0.76 -> 0.59
sql_alloc() overhead dropped 0.25 -> 0.06
- Remove ANALYZE's timing code from the execution path of regular
  SELECTs.
- Improve the tracker that tracks counts/execution times of SELECTs or
DML statements:
= regular execution just increments counters
= ANALYZE will also collect timings.
JOIN::cur_dups_producing_tables was not maintained correctly in
the cases of greedy optimization (search_depth < n_tables).
Moved it to the POSITION structure where it will be maintained automatically.
Removed POSITION::prefix_dups_producing_tables since its value can now
be calculated.
Show total execution time (r_total_time_ms) for various parts of the
query:
1. time spent in SELECTs
2. time spent reading rows from storage engines
#2 currently gets the data from P_S.
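For example (hypothetical table; exact JSON keys may vary between versions):

  ANALYZE FORMAT=JSON SELECT * FROM t1 WHERE a < 10;
  -- the JSON output includes "r_total_time_ms" both for the SELECT itself
  -- and for reading rows from the storage engine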
Temporary table count fix: the number of temporary tables was incremented
even when the table was not actually created (when do_not_open was passed
as TRUE to create_tmp_table).
Join_plan_state performs out-of-API initialization of DYNAMIC_ARRAY. This is
done to postpone actual array initialization till first use, whilst retaining
the right to call delete_dynamic().
Since delete_dynamic() now checks DYNAMIC_ARRAY::malloc_flags, it should be
initialized as well.
- Changed 0x%lx -> %p
array.c:
- Static (preallocated) buffer can now be anywhere
my_sys.h
- Define MY_INIT_BUFFER_USED
sql_delete.cc & sql_lex.cc
- Use memroot when allocating classes (avoids call to current_thd)
sql_explain.h:
- Use preallocated buffers
sql_explain.cc:
- Use preallocated buffers and memroot
sql_select.cc:
- Use multi_alloc_root() instead of many alloc_root()
- Update calls to Explain
- When traversing JOIN_TABs with first_linear_tab/next_linear_tab(), don't forget
to enter the semi-join nest when it is the first table in the join order.
Failure to do so could cause e.g. I_S tables not to be filled.
- "ANALYZE $stmt" should discard select's output, but it should still
evaluate the output columns (otherwise, subqueries in select list
are not executed)
- SHOW EXPLAIN's code practice of calling JOIN::save_explain_data()
  after JOIN::exec() is disastrous for ANALYZE, because it resets
  all counters after the first execution. This practice is now stopped:
  = "Late" test_if_skip_sort_order() calls explicitly update their part
    of the query plan.
  = Also, I had to rewrite the I_S optimization to have actual optimization
    and execution stages.
- First code: "EXPLAIN FORMAT=JSON stmt" and "ANALYZE FORMAT=JSON stmt"
  work for basic queries. Complex constructs (e.g. subqueries) are not
  yet supported.
- No test infrastructure yet
MDEV-406: ANALYZE $stmt
- Ported the old patch to new explain code
- New SQL syntax (ANALYZE $stmt)
- ANALYZE UPDATE/DELETE is now supported (because EXPLAIN UPDATE/DELETE is supported)
- Basic counters are calculated for basic kinds of queries
(still need to see what happens with join buffer, ORDER BY...LIMIT queries, etc)
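Examples of the new syntax (tables are hypothetical; note that ANALYZE actually executes the statement, so the UPDATE/DELETE really modify data):

  ANALYZE SELECT * FROM t1 WHERE a < 10;
  ANALYZE UPDATE t1 SET b = b + 1 WHERE a < 10;
  ANALYZE DELETE FROM t1 WHERE a = 5;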
This is port of fix for MySQL BUG#17647863.
revno: 5572
revision-id: jon.hauglid@oracle.com-20131030232243-b0pw98oy72uka2sj
committer: Jon Olav Hauglid <jon.hauglid@oracle.com>
timestamp: Thu 2013-10-31 00:22:43 +0100
message:
Bug#17647863: MYSQL DOES NOT COMPILE ON OSX 10.9 GM
Rename test() macro to MY_TEST() to avoid conflict with libc++.
- The problem was that JOIN::prepare() tried to set TABLE::maybe_null
  for a table in the join. Non-merged semi-join tables 1) are present as
  the join's base tables on the second EXECUTE, but 2) do not yet have a
  TABLE object.
Worked around the problem by putting mixed_implicit_grouping into JOIN
object, and then passing it to JTBM tables in setup_jtbm_semi_joins().
Analysis:
st_select_lex_unit::prepare() computes can_skip_order_by as TRUE.
As a result, join->prepare() gets called with order == NULL and
doesn't do name resolution for the inner ORDER clause. Due to this,
the prepare phase doesn't detect that the query references a non-existing
function and field.
Later, join->optimize() calls update_used_tables() for a non-resolved
Item_field, which understandably has no Field object. This call results
in a crash.
Solution:
Resolve unnecessary ORDER BY clauses to detect whether they reference
non-existing objects. Then remove such clauses from the JOIN object.
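A sketch of the affected shape (hypothetical tables): the inner ORDER BY below can be skipped, but it must still be resolved so that the bad reference is reported instead of crashing later:

  SELECT a FROM t1
  UNION
  (SELECT a FROM t2 ORDER BY no_such_column);
  -- expected: an "unknown column" error at prepare time, not a crash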
BNL and BNLH joins pre-filter the records of a joined table via JOIN_TAB::cache_select->cond.
There is no need to re-evaluate the same conditions via JOIN_TAB::select_cond. This patch removes
the duplicated conditions from the top-level conjuncts of each pushed condition.
The added "Using where" in a few EXPLAINs is due to taking tab->cache_select->cond
into account in addition to tab->select_cond in JOIN::save_explain_data_intern.
Apparently, in the general case, a short-cut for the distinct optimization
is invalid if join buffers are used to join tables after the tables whose
values are to be selected.
ORDER BY does not work
Use "dynamic" row format (instead of "block") for MARIA internal
temporary tables created for cursors.
With "block" row format MARIA may shuffle rows, with "dynamic" row
format records are inserted sequentially (there are no gaps in data
file while we fill temporary tables).
This is needed to preserve row order when scanning materialized cursors.
The patch to fix mdev-4418 turned out to be incorrect.
When single-row tables are substituted in make_join_statistics(),
the used multiple equalities may change, and references to the new multiple
equalities must be updated. The function remove_eq_conds() takes care of this,
and it should be called right after the substitution of single-row tables.
Calling it after the call to make_join_statistics() was a mistake.
Apply the patch from Patryk Pomykalski:
- create_internal_tmp_table_from_heap() will now return information on whether
  the last row that we tried to write was a duplicate row.
(mysql-5.6 also has this change)
- Make the query plan be re-saved after the first join execution
  (saving it after JOIN::cleanup is too late, because EXPLAIN output
  is currently produced before that)
- Handle QPF allocation/deallocation for edge cases, like unsuccessful
BINLOG command.
- Work around the problem with UNION's direct subselects not being visible.
- Update test results ("Using temporary; Using filesort" are now always printed
last in the Extra column)
- This cset gets rid of memory leaks/crashes. Some result mismatches still remain.
This requires that a subselect's footprint is saved before it is deleted.
Attempting to save a select's QPF exposes one to a variety of edge cases:
- the select may be a UNION's "fake select" which has no valid id
- optimization may fail in the middle (but subsequent JOIN::optimize() calls
  will succeed, despite the fact that there never was a query plan)
- Introduce "Query Plan Footprints" (abbrev. QPFs)
QPF is a part of query plan that is
1. sufficient to produce EXPLAIN output,
2. can be used to produce EXPLAIN output even after its subquery/union
was executed and deleted
3. is cheap to save so that we can always save query plans
- This patch doesn't fully address #2, we make/save strings for
a number of EXPLAIN's columns. This will be fixed.
- Added a test and extra code to ensure we don't leave keyread on for a handler table.
- Create on-disk temporary files always with long data pointers if SQL_SMALL_RESULT is not used. This ensures that we can handle temporary files bigger than 4G.
mysql-test/include/default_mysqld.cnf:
Run test suite with smaller aria keybuffer size
mysql-test/suite/maria/maria3.result:
Run test suite with smaller aria keybuffer size
mysql-test/suite/sys_vars/r/aria_pagecache_buffer_size_basic.result:
Run test suite with smaller aria keybuffer size
sql/handler.cc:
Disable key read (extra safety if something went wrong)
sql/multi_range_read.cc:
Ensure we don't leave keyread on for secondary_file
sql/opt_range.cc:
Simplify code with mark_columns_used_by_index_no_reset()
Ensure that read_keys_and_merge() disables keyread if it enabled it
sql/opt_subselect.cc:
Remove a no longer used argument of create_internal_tmp_table()
sql/sql_derived.cc:
Remove a no longer used argument of create_internal_tmp_table()
sql/sql_select.cc:
Use 'enable_keyread()' instead of calling HA_EXTRA_RESET (makes debugging easier)
Create on-disk temporary files always with long data pointers if SQL_SMALL_RESULT is not used. This ensures that we can handle temporary files bigger than 4G.
Remove a no longer used argument of create_internal_tmp_table()
More DBUG
sql/sql_select.h:
Remove a no longer used argument of create_internal_tmp_table()
This bug happened because the executor tried to use a wrong
TABLE REF object when building access keys. It constructed
keys from the fields of a materialized table using a ref object
that had been created to construct keys from the fields of the
underlying base table. This could happen only when the materialized
table was created for a non-correlated IN subquery, and only
when the materialized table was used for lookups.
In this case we are guaranteed to be able to construct the
keys from the fields of tables that would be outer tables
for the tables of the IN subquery.
The patch makes sure that no ref objects constructed from
fields of materialized lookup tables are used.
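The affected query shape, roughly (hypothetical tables): a non-correlated IN subquery executed via materialization, where lookup keys into the materialized table must be built from the outer tables' fields:

  SELECT * FROM t1
  WHERE (t1.a, t1.b) IN (SELECT t2.a, t2.b FROM t2);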
WITH A VARIABLE AND ORDER BY
Bug#16035412 MYSQL SERVER 5.5.29 WRONG SORTING USING COMPLEX INDEX
This is a fix for a regression introduced by Bug#12667154:
Bug#12667154 attempted to fix a performance problem with subqueries
that did filesort. For doing filesort, the optimizer creates a quick
select object to use when building the sort index. This quick select
object was deleted after the first call to create_sort_index(). Thus,
for queries where the subquery was executed multiple times, the quick
object was only used for the first execution. For all later executions
of the subquery, filesort used a complete table scan for building the
sort index. The fix for Bug#12667154 tried to fix this by not deleting
the quick object after the first execution of create_sort_index() so
that it would be re-used for building the sort index by the following
executions of the subquery.
The regression introduced by Bug#12667154 is that, due to not deleting
the quick select object after building the sort index, the quick
object could in some cases be used also during the second phase of the
execution of the subquery instead of the created sort index. This
caused wrong results to be returned.
The fix for this issue is to delete the reference to the select object
after it has been used in create_sort_index(). In this way the select
and quick objects will not be available when doing the second phase
of the execution of the select operation. To ensure that the select
object can be re-used for the following executions of the subquery
we make a copy of the select pointer. This is used for restoring the
select object after the select operation is completed.
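A sketch of the query shape that hit the regression (hypothetical tables): a correlated subquery that does filesort and is executed once per outer row:

  SELECT * FROM t1
  WHERE t1.a = (SELECT t2.b FROM t2
                WHERE t2.c = t1.c ORDER BY t2.d LIMIT 1);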
mysql-test/suite/innodb/r/innodb_mysql.result:
Changed explain output: The explain now contains "Using where" since we
have restored the select pointer after doing the filesort operation.
sql/sql_select.cc:
Change create_sort_index() so that it always sets the pointer to
the select object to NULL. This is done in order to avoid the
select->quick object being used when executing the main part of
the select operation.
sql/sql_select.h:
New member in JOIN_TAB: saved_select. Used by create_sort_index to
make a backup copy of the select pointer.
The problem was that debug binaries try to print the item in order to assign a
human-readable name to it. But the subquery item had already been freed
(join_free/cleanup with full cleanup), so the Item_field referred to a
temporary table whose memory had already been freed.
MDEV-567: Wrong result from a query with correlated subquery if ICP is allowed:
backport the fix developed for SHOW EXPLAIN:
revision-id: psergey@askmonty.org-20120719115219-212cxmm6qvf0wlrb
branch nick: 5.5-show-explain-r21
timestamp: Thu 2012-07-19 15:52:19 +0400
BUG#992942 & MDEV-325: Preliminary commit for testing
and adjust it so that it handles DS-MRR scans correctly.
This task fixes an inefficiency that is a leftover from MySQL 5.0/5.1. There, subqueries
were optimized in a lazy manner, when executed for the first time. During this lazy optimization
it could happen that the server found a more efficient subquery engine and substituted the current
engine of the query being executed with the new engine. This required re-execution of the engine.
MariaDB 5.3 pre-optimizes subqueries in almost all cases, and the engine is chosen in most cases,
except when subquery materialization finds that it must use partial matching. In this case, the
current code was performing one extra re-execution although it was not needed at all. The patch
performs the re-execution only if the engine was changed while executing.
In addition, the patch performs a small cleanup by removing "enum store_key_result", because it is
essentially a boolean and the code that uses it already maps it to a boolean.
and small collateral changes
mysql-test/lib/My/Test.pm:
somehow with "print" we get truncated writes sometimes
mysql-test/suite/perfschema/r/digest_table_full.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/dml_handler.result:
host table is not ported over yet
mysql-test/suite/perfschema/r/information_schema.result:
host table is not ported over yet
mysql-test/suite/perfschema/r/nesting.result:
this differs because we don't rewrite general log queries, and multi-statement
packets are logged as one entry. This result file is identical to what mysql-5.6.5
produces with the --log-raw option.
mysql-test/suite/perfschema/r/relaylog.result:
MariaDB modifies the binlog index file directly, while MySQL 5.6 has a feature "crash-safe binlog index" and modifies a special "crash-safe" shadow copy of the index file and then moves it over. That's why this test shows "NONE" index file writes in MySQL and "MANY" in MariaDB.
mysql-test/suite/perfschema/r/server_init.result:
MariaDB initializes the "manager" resources from the "manager" thread, and starts this thread only when --flush-time is not 0. MySQL 5.6 initializes "manager" resources unconditionally on server startup.
mysql-test/suite/perfschema/r/stage_mdl_global.result:
this differs, because MariaDB disables query cache when query_cache_size=0. MySQL does not
do that, and this causes useless mutex locks and waits.
mysql-test/suite/perfschema/r/statement_digest.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/statement_digest_consumers.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/perfschema/r/statement_digest_long_query.result:
md5 hashes of statement digests differ, because yacc token codes are different in mariadb
mysql-test/suite/rpl/r/rpl_mixed_drop_create_temp_table.result:
will be updated to match 5.6 when alfranio.correia@oracle.com-20110512172919-c1b5kmum4h52g0ni and anders.song@greatopensource.com-20110105052107-zoab0bsf5a6xxk2y are merged
mysql-test/suite/rpl/r/rpl_non_direct_mixed_mixing_engines.result:
will be updated to match 5.6 when anders.song@greatopensource.com-20110105052107-zoab0bsf5a6xxk2y is merged
The fix backports from MWL#182 (Explain running statements) the logic that
saves the original JOIN_TAB array of a query plan after optimization. This
array is later used during EXPLAIN to iterate over the original JOIN plan
nodes in cases where this plan could be changed by early subquery
execution during the optimization phase of the outer query.
- The problem was that create_ref_for_key() would act differently, depending on
whether we're running EXPLAIN or the actual query.
- As the first step, fixed the EXPLAIN printout not to depend on actions in create_ref_for_key().
The patch enables back constant subquery execution during
query optimization after it was disabled during the development
of MWL#89 (cost-based choice of IN-TO-EXISTS vs MATERIALIZATION).
The main idea is that constant subqueries are allowed to be executed
during optimization if their execution is not expensive.
The approach is as follows:
- Constant subqueries are recursively optimized in the beginning of
JOIN::optimize of the outer query. This is done by the new method
JOIN::optimize_constant_subqueries(). This is done so that the cost
of executing these queries can be estimated.
- Optimization of the outer query proceeds normally. During this phase
the optimizer may request execution of non-expensive constant subqueries.
Each place where the optimizer may potentially execute an expensive
expression is guarded with the predicate Item::is_expensive().
- The implementation of Item_subselect::is_expensive has been extended
to use the number of examined rows (estimated by the optimizer) as a
way to determine whether the subquery is expensive or not.
- The new system variable "expensive_subquery_limit" controls how many
examined rows are considered to be not expensive. The default is 100.
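For example (the default shown is the one stated above):

  SELECT @@expensive_subquery_limit;              -- 100 by default
  SET SESSION expensive_subquery_limit = 1000;    -- allow larger constant
                                                  -- subqueries to run during
                                                  -- optimization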
In addition, multiple changes were needed to make this solution work
in the light of the changes made by MWL#89. These changes were needed
to fix various crashes and wrong results, and legacy bugs discovered
during development.
Analysis:
The reason for the wrong result is the interaction between constant
optimization (in this case, a 1-row table) and subquery optimization.
- First the outer query is optimized, and 'make_join_statistics' finds that
table t2 has one row, reads that row, and marks the whole table as constant.
This also means that all fields of t2 are constant.
- Next, we optimize the subquery at the end of the outer 'make_join_statistics'.
The field 'f2' is considered constant, with value '3'. The subquery predicate
is rewritten as the constant TRUE.
- The outer query execution detects early that the whole query result is empty
  and calls 'return_zero_rows'. Since the query uses implicit grouping, we
  have to produce one row with special values for the aggregates (depending on
  each aggregate function) and NULL values for all non-aggregated fields. This
  function calls 'no_rows_in_result' to set each aggregate function to its
  default value when it aggregates over an empty result, and then calls
  'send_data', which in turn evaluates each Item in the SELECT list.
- When evaluation reaches the subquery predicate, it executes the subquery
with field 'f2' having a constant value '3', and the subquery produces the
incorrect result '7'.
Solution:
Implement Item::no_rows_in_result for all subquery predicates. In order to
make this work, it is also necessary to make all val_* methods of all subquery
predicates respect the Item_subselect::forced_const flag. Otherwise subqueries
are executed anyway and override the default value set by no_rows_in_result
with whatever result is produced by the subquery evaluation.
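A sketch of the affected shape (hypothetical tables): implicit grouping over an empty result, with a subquery predicate in the select list that must now yield its "no rows" default instead of a value computed from stale constant-table state:

  SELECT MAX(a), a IN (SELECT f2 FROM t2) FROM t1 WHERE 1 = 0;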
- Remove all references to MAX_TABLES from the JOIN struct and make these dynamic
- Updated Join_plan_state to allocate just as many elements as are needed
sql/opt_subselect.cc:
Optimized version of Join_plan_state
sql/sql_select.cc:
Set join->positions and join->best_positions dynamically
Don't call update_virtual_fields() if table->vfield is not set.
sql/sql_select.h:
Remove all references to MAX_TABLES from the JOIN struct and Join_plan_state and make these dynamic
- Disable use of join cache when we're using FirstMatch strategy, and the join
order is such that subquery's inner tables are interleaved with outer. Join
buffering code is incapable of handling such join orders.
- The testcase requires use of @@debug_optimizer_prefer_join_prefix to hit the bug,
but I'm pushing it anyway (including the mention of the variable in .test file),
so that it can be found and enabled when/if we get something comparable in the
main tree.
The problem was that the LooseScan execution code assumed that tab->key holds
the index used for LooseScan. This is only true when range or full index
scan is used. In the case of ref access, the index is in tab->ref.key (and
tab->index==0, which explains how LooseScan passed tests with ref access: they
used one index).
Fixed by setting/using loosescan_key, which always holds the correct index number.
The function subselect_uniquesubquery_engine::copy_ref_key has to take into
account that when EXPLAIN is processed, the array of store_key objects created
for any TABLE_REF may contain elements for constant items. These items should
be ignored by the function.
Problem: when building the condition for JOIN::outer_ref_cond, the optimizer forgot
to take into account that this condition could depend on constant tables as well.
- Correctly handle plan refinement stage for LooseScan plans: run create_ref_for_key() if LooseScan
plan includes a ref access, and if we don't have any fixed key components, switch to a full index scan.
If the duplicate elimination strategy is used for a semi-join, and potentially
one of the block-based join algorithms can be employed to join the inner
tables of the semi-join, then sorting of the head (first non-constant) table
for a query with ORDER BY / GROUP BY cannot be used.
The execution plan cannot use sorting on the first table from the
sequence of the joined tables if it plans to employ the block-based
hash join algorithm.
KEYUSE elements for a possible hash join key are not sorted by the field
numbers of the second table T of the hash join operation. Besides,
some of these KEYUSE elements cannot be used to build any key, as their
key expressions depend on tables that are planned to be accessed
after table T.
The code before the patch did not take this into account and, as a result,
execution of a query employing the block-based hash join algorithm could
cause a crash or return a wrong result set.
- Make EXPLAIN display "Start temporary" at the start of the fanout (it used to be
  displayed at the first table whose rowid gets into the temporary table, which is
  not that useful for the user; see the sketch below)
- Updated test results (all checked)
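Sketch (hypothetical tables, DuplicateWeedout strategy): "Start temporary" now marks where the duplicate-producing fanout begins, and "End temporary" marks where weeding ends:

  EXPLAIN SELECT * FROM t1 WHERE t1.a IN (SELECT t2.b FROM t2, t3);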
- Break down POSITION/advance_sj_state() into four classes
representing potential semi-join strategies.
- Treat all strategies uniformly (before, DuplicateWeedout was special,
  as it was the catch-all strategy; now we still rely on it to be the
  catch-all, but are able to function e.g. with
  firstmatch=on,duplicate_weedout=off).
- Update test results (checked)
If the optimizer switch 'semijoin_with_cache' is set to 'off', then a
join cache cannot be used to join the inner tables of a semi-join.
Also fixed a bug in the function check_join_cache_usage() that led
to wrong output of the EXPLAIN commands for some test cases.
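For example:

  SET optimizer_switch = 'semijoin_with_cache=off';
  -- EXPLAIN of a semi-join query will no longer show join buffering
  -- (BNL/BNLH) on the semi-join's inner tables
  EXPLAIN SELECT * FROM t1 WHERE t1.a IN (SELECT t2.b FROM t2);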
sql/sql_insert.cc:
CREATE ... IF NOT EXISTS may do nothing, but
it is still not a failure. Don't forget to my_ok it.
sql/sql_table.cc:
small cleanup
- The problem was that JOIN::save/restore_query_plan() did not save/restore the parts of
  the query plan that are located inside SJ_MATERIALIZATION_INFO structures. This could
  cause parts of one plan to be used with another, which led get_best_combination() to
  construct nonsensical join plans (and crash).
  Fixed by saving/restoring the SJM parts of the query plans.
- check_and_do_in_subquery_rewrites() will not set the SUBS_MATERIALIZATION flag when it
  records that the subquery predicate is to be converted into a semi-join.
  If convert_join_subqueries_to_semijoins() later decides not to convert to a semi-join,
  it will set the SUBS_MATERIALIZATION flag itself, if appropriate.
- Let join buffering code correctly take into account rowids needed
by DuplicateElimination when it is calculating minimum record sizes.
- In JOIN_CACHE::write_record_data, added asserts that prevent us from
writing beyond the end of the buffer.
Analysis:
In the test query semi-join merges the inner-most subquery
into the outer subquery, and the optimization of the merged
subquery finds some new index access methods. Later the
IN-EXISTS transformation is applied to the unmerged subquery.
Since the optimizer is instructed to not consider
materialization, it reoptimizes the plan in-place to take into
account the new IN-EXISTS conditions. Just before reoptimization
JOIN::choose_subquery_plan resets the query plan, which also
resets the access methods found during the semi-join merge.
Then reoptimization discovers there are no new access methods,
but it leaves the query plan in its reset state. Later semi-join
crashes because it assumes these access methods are present.
Solution:
When reoptimizing in-place, reset the query plan only after new
access methods were discovered. If no new access methods were
discovered, leave the current plan as it was.
First code
- "Asynchronous procedure call" system
- new THD::check_killed() that serves APC request is called from within most important loops
- EXPLAIN code is now able to generate EXPLAIN output on-the-fly [incomplete]
Parts that are still missing:
- put THD::check_killed() call into every loop where we could spend significant amount of time
- Make sure EXPLAIN code works for group-by queries that replace JOIN::join_tab with make_simple_join()
and other such cases.
- User interface: what error code to use, where to get timeout settings from, etc.
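The user-facing interface this work leads to (the connection id 42 is a placeholder taken from SHOW PROCESSLIST):

  SHOW EXPLAIN FOR 42;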
The problem was that the optimizer removes some outer references (if they are
constant, for example), so the list of outer items built during the prepare phase
is no longer accurate during the execution phase, when we need it for the cache
parameters.
The first solution was to use a pointer to a pointer to the outer reference Item
and initialize the temporary table on demand. This solved most problems except
the case when the optimizer also reduces an Item which contains outer references
('OR' in this bug's test suite).
The solution is to build the list of outer reference items in the execution
phase (after optimization), on demand (just before temporary table creation),
by walking the Item tree and finding outer references among Item_ident
(Item_field/Item_ref) and Item_sum items.
Removed the depends_on list (it is not needed any more for the cache; in the place where it was used, it was replaced with upper_refs).
Added a processor (collect_outer_ref_processor) and get_cache_parameters() methods to collect outer references (or other expression parameters in the future).
mysql-test/r/subselect_cache.result:
A new test added.
mysql-test/r/subselect_scache.result:
Changes in creating the cache and the order of its parameters, or in adding arguments of aggregate functions (which are also parameters, but this has no influence on the result).
mysql-test/t/subselect_cache.test:
Added a new test.
sql/item.cc:
depends_on removed.
Added processor (collect_outer_ref_processor) and get_cache_parameters() methods to collect outer references.
Item_cache_wrapper collects parameters before initialization of its cache.
sql/item.h:
depends_on removed.
Added processor (collect_outer_ref_processor) and get_cache_parameters() methods to collect outer references.
sql/item_cmpfunc.cc:
depends_on removed.
Added processor (collect_outer_ref_processor) to collect outer references.
sql/item_cmpfunc.h:
Added processor (collect_outer_ref_processor) to collect outer references.
sql/item_subselect.cc:
depends_on removed.
Added processor get_cache_parameters() method to collect outer references.
sql/item_subselect.h:
depends_on removed.
Added processor get_cache_parameters() method to collect outer references.
sql/item_sum.cc:
Added processor (collect_outer_ref_processor) method to collect outer references.
sql/item_sum.h:
Added processor (collect_outer_ref_processor) and get_cache_parameters() methods to collect outer references.
sql/opt_range.cc:
depends_on removed.
sql/sql_base.cc:
depends_on removed.
sql/sql_class.h:
New iterator added.
sql/sql_expression_cache.cc:
The list of items resolved in the outer query is now built just before creating the expression cache, on the first execution of the subquery. This removes the influence of the optimizer removing items (all optimization is already done).
sql/sql_expression_cache.h:
The list of items resolved in the outer query is now built just before creating the expression cache, on the first execution of the subquery. This removes the influence of the optimizer removing items (all optimization is already done).
sql/sql_lex.cc:
depends_on removed.
sql/sql_lex.h:
depends_on removed.
sql/sql_list.h:
Added an add_unique method to add only unique elements to a list.
sql/sql_select.cc:
Support of new Item list added.
sql/sql_select.h:
Support of new Item list added.
Also:
1. simplified the code of the function mysql_derived_merge_for_insert;
2. moved the merge of views/derived tables for multi-update/delete to the prepare stage;
3. the list of references to the candidates for semi-join is now
   allocated in statement memory.
The bitmap of used tables must be evaluated for the select list of every
materialized derived table / view and saved in a dedicated field.
This also applies to materialized subqueries.
- Don't attempt to construct the FirstMatch access method if we've
  just figured out, three lines above, that it can't be used (because the join
  prefix doesn't have the needed tables) and so have set
  pos->first_firstmatch_table= MAX_TABLES.
  Attempts to analyze join->positions[MAX_TABLES] caused valgrind warnings.
mysql-test/r/subselect4.result:
Moved test case for LP BUG#718593 into the correct test file subselect_mat_cost_bugs.test.
mysql-test/t/subselect4.test:
Moved test case for LP BUG#718593 into the correct test file subselect_mat_cost_bugs.test.
Resolved all conflicts, bad merges and fixed a few minor bugs in the code.
Commented out the queries from multi_update, view, subselect_sj, func_str,
derived_view, view_grant that failed either with crashes in ps-protocol or
with wrong results.
The failures are clear indications of some bugs in the code and these bugs
are to be fixed.
sql/item_subselect.cc:
Cleanup. Comments added.
sql/item_subselect.h:
Cleanup.
sql/mysql_priv.h:
Comments added.
sql/opt_subselect.cc:
The function was renamed and turned into a method.
Comments added.
sql/opt_subselect.h:
The function was turned into a method of JOIN.
sql/sql_select.cc:
Comment added. The function was turned into a method.
sql/sql_select.h:
The function was turned into a method.
mysql-test/r/explain.result:
fixed results (new item)
mysql-test/r/subselect.result:
fixed results (new item)
mysql-test/r/subselect_no_mat.result:
fixed results (new item)
mysql-test/r/subselect_no_opts.result:
fixed results (new item)
mysql-test/r/subselect_no_semijoin.result:
Fixed results (new item)
mysql-test/suite/pbxt/r/subselect.result:
Fixed results (new item)
mysql-test/t/explain.test:
Fixed results (correct behaviour)
sql/item_cmpfunc.cc:
Pass through for max/min
sql/item_subselect.cc:
moving max/min
sql/item_subselect.h:
moving max/min
sql/mysql_priv.h:
new uncacheable flags added
sql/opt_subselect.cc:
maxmin moved.
sql/opt_subselect.h:
New function for maxmin.
sql/sql_class.h:
debug code
sql/sql_lex.cc:
Fixed flags.
Limit setting fixed.
sql/sql_lex.h:
2 new flags.
sql/sql_select.cc:
Prepare was split into two functions to make it possible to recollect some info after transformations.
sql/sql_select.h:
Prepare split into two functions.
- Let advance_sj_state() save the value of JOIN::cur_dups_producing_tables
in POSITION::prefix_dups_producing_tables, and restore_sj_state() restore
it.
Valgrind warnings were caused by comparing index values to an uninitialized field.
mysql-test/r/subselect.result:
New test cases.
mysql-test/t/subselect.test:
New test cases.
sql/opt_sum.cc:
Add thd to opt_sum_query enabling it to test for errors.
If we have a non-nullable index, we cannot use it to match null values,
since set_null() will be ignored, and we might compare uninitialized data.
sql/sql_select.cc:
Add thd to opt_sum_query, enabling it to test for errors.
sql/sql_select.h:
Add thd to opt_sum_query, enabling it to test for errors.
Analysis:
There are two code paths through which JOIN::exec may produce
an all-NULL row for an empty result set. One goes via the
function return_zero_rows(), when query processing detects
early that the WHERE clause is false; the other is via
do_select() in the case of join execution.
In the case of do_select(), the problem was that the executor
didn't set TABLE::null_row to 1. As a result, when sending the only
result row, the evaluation of each field didn't detect that all
non-aggregated fields are NULL, because Field::is_null returned
false after checking that field->table->null_row was false.
Given that each non-aggregated field was not considered NULL,
select_result::send_data sent whatever was in the buffer of each
field. However, since there was no actual data in the field buffer,
send_data() accessed and sent whatever junk was in the field's
data buffer.
Solution:
Similar to the analogous case in return_zero_rows(), mark all
tables as having a NULL current row before sending the
artificially created NULL row.