Wednesday, September 29, 2010

A Quick Review of Stack Traces

I'll try to pass on some basic knowledge about those confusing stack traces we sometimes see in the MySQL error logs.  What can you tell from them, what are they useful for, and how do you validate them?

Debugging Crashes

We tried to improve postmortem debugging of crashes and the stack traces in the error log:
o) old versions of mysqld printed only numeric addresses instead of function names (if you were lucky!)
o) some platforms/architectures printed no stack trace whatsoever!
o) the implementations of crash reporting and of printing the running query were sometimes faulty.

So, the result is that on modern supported platforms and recent versions of mysqld, you should get a useful stack trace.  A simple example from 5.5.5 on Windows:

mysqld.exe!mysql_admin_table()[sql_table.cc:5150]
mysqld.exe!mysql_optimize_table()[sql_table.cc:5226]
mysqld.exe!mysql_execute_command()[sql_parse.cc:3107]
mysqld.exe!mysql_parse()[sql_parse.cc:5911]
mysqld.exe!dispatch_command()[sql_parse.cc:1138]
mysqld.exe!do_command()[sql_parse.cc:807]
mysqld.exe!do_handle_one_connection()[sql_connect.cc:1196]
mysqld.exe!handle_one_connection()[sql_connect.cc:1136]
mysqld.exe!pthread_start()[my_winthread.c:62]
mysqld.exe!_callthreadstartex()[threadex.c:348]
mysqld.exe!_threadstartex()[threadex.c:331]
kernel32.dll!BaseThreadStart()

It should be obvious that OPTIMIZE TABLE crashed here, since mysql_optimize_table() appears right in the trace.
The reason I always use Windows stack traces in bug reports is that they are readable.  Most GDB and Linux error log stack traces are not readable by the human brain at a glance, and therefore not memorable.  One reason is excessive wrapping; another is that the offsets and arguments to the functions are irrelevant noise, useless to search engine indexes or to the average Joe trying to find a bug report matching a stack trace.

In the old days, a crash printed only this (32-bit builds only), and you had to resolve the addresses yourself:

0x81a0705
0x8407eef
0x8408a8f
0x8408b75
0x8408dd2
0x8409af7
0x83e7fc9
0x83eefc3
0x826014c
0x826035c
0x825ec43
0x82106fd
0x81b86da
0x81bd78b
0x81bdd16
0x81bfae2
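
The stock way to resolve such a dump is the resolve_stack_dump utility shipped with the server.  A minimal sketch, assuming an unstripped mysqld matching the crashing binary, and that the numeric trace was saved to /tmp/mysqld.stack (paths here are illustrative):

nm -n /usr/sbin/mysqld > /tmp/mysqld.sym
resolve_stack_dump -s /tmp/mysqld.sym -n /tmp/mysqld.stack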

On a side note, this is not always possible to do properly with optimized binaries, so you get a partially bogus-looking stack and you might be tempted to wrongly suspect faulty hardware, foul play, or bad binaries:

0x81a0705 handle_segfault + 805
0x8407eef my_write + 671
0x8408a8f init_key_cache + 1279
0x8408b75 init_key_cache + 1509
0x8408dd2 init_key_cache + 2114
0x8409af7 flush_key_blocks + 55
0x83e7fc9 flush_blocks + 41
0x83eefc3 mi_repair_by_sort + 451
0x826014c ha_myisam::repair(THD*, st_mi_check_param&, bool) + 1900
0x826035c ha_myisam::enable_indexes(unsigned int) + 364
0x825ec43 ha_myisam::end_bulk_insert() + 99
0x82106fd mysql_insert(THD*, TABLE_LIST*, List<Item>&, List<List<Item> >&, List<Item>&, List<Item>&, enum_duplicates, bool) + 5037
0x81b86da mysql_execute_command(THD*) + 9610
0x81bd78b mysql_parse(THD*, char const*, unsigned int, char const**) + 379
0x81bdd16 dispatch_command(enum_server_command, THD*, char*, unsigned int) + 1238
0x81bfae2 handle_one_connection + 2578

Using a technique that involves disassembling the mysqld binary into ASM and piecing together the C/C++ source code and comments, it's quite possible to identify those inlined functions, expanded macros, or functions that have no name in the symbols file.  A nice topic for another posting.  So, after manually inspecting the binary and the numeric offsets, I could get a proper stack trace:

0x81a0705 handle_segfault
0x8407eef unlink_block
0x8408a8f free_block
0x8408b75 flush_cached_blocks
0x8408dd2 flush_key_blocks_int
0x8409af7 flush_key_blocks
0x83e7fc9 flush_blocks
0x83eefc3 mi_repair_by_sort
0x826014c ha_myisam::repair
0x826035c ha_myisam::enable_indexes
0x825ec43 ha_myisam::end_bulk_insert
0x82106fd mysql_insert
0x81b86da mysql_execute_command
0x81bd78b mysql_parse
0x81bdd16 dispatch_command
0x81bfae2 handle_one_connection

This is simply a bulk insert performing a 'repair by sort'.  It crashed in the key cache when flushing blocks, perhaps due to memory corruption or a buffer overrun somewhere.  I remember fulltext indexes or large tables having this problem...

Some Identifying Elements of a Stack Trace
If your 5.1 or 5.5 server ever crashes, please keep the stack trace, as it can help identify exactly what the problem is, and you can search Google for clues.
  • prepared statements
Easily distinguishable by looking for functions similar to these:
.....
mysqld-debug.exe!Prepared_statement::execute()[sql_prepare.cc:3050]
mysqld-debug.exe!mysql_sql_stmt_execute()[sql_prepare.cc:2393]
mysqld-debug.exe!mysql_execute_command()[sql_parse.cc:2935]
.....
  • stored routines and their call depth
Seeing sp_* functions is a sign of stored routine activity.  You can even see how many SP calls are nested and whether they called triggers.  It takes some intuition to follow.

.......
06 mysqld_debug!sp_instr_stmt::exec_core
07 mysqld_debug!sp_lex_keeper::reset_lex_and_exec_core
08 mysqld_debug!sp_instr_stmt::execute
09 mysqld_debug!sp_head::execute
0a mysqld_debug!sp_head::execute_procedure
0b mysqld_debug!mysql_execute_command
.......
  • storage engine code (archive, innodb, myisam, merge)
InnoDB mostly asserts, and this is clearly identified in the error log before a stack trace.
"070223 21:47:40  InnoDB: Assertion failure in thread 1655241648 in file row0mysql.c line 3228"

Archive crashes can easily be spotted by the ha_archive functions:
.....
mysqld.exe!ha_archive::free_share()[ha_archive.cc:411]
mysqld.exe!ha_archive::open()[ha_archive.cc:498]
mysqld.exe!handler::ha_open()[handler.cc:2059]
.....
  • query cache
Anything involving the Query_cache class functions:
.....
#5  0x000000000065f390 in Query_cache::insert_table ()
#6  0x000000000065f63a in Query_cache::register_tables_from_list ()
#7  0x000000000065f6a5 in Query_cache::register_all_tables ()
#8  0x000000000065fc0a in Query_cache::store_query ()
#9  0x000000000058cd38 in mysql_execute_command ()
.....
  • mysql functions (string, math, datetime, comparative)
Look out for specific Item_func* methods...
.....
mysqld.exe!Arg_comparator::compare_binary_string()[item_cmpfunc.cc:1158]
mysqld.exe!Item_func_eq::val_int()[item_cmpfunc.cc:1692]
mysqld.exe!Item::val_bool()[item.cc:184]
mysqld.exe!Item_cond_and::val_int()[item_cmpfunc.cc:4222]
.....
  • first calling function in the application
You can nearly always expect a valid stack trace to have these functions at the bottom:
.....
mysqld-debug.exe!mysql_execute_command()[sql_parse.cc:2256]
mysqld-debug.exe!mysql_parse()[sql_parse.cc:5974]
mysqld-debug.exe!dispatch_command()[sql_parse.cc:1233]
mysqld-debug.exe!do_command()[sql_parse.cc:872]
mysqld-debug.exe!handle_one_connection()[sql_connect.cc:1127]
  • last calling functions before the crash
Crashing is usually handled by mysqld's segfault handler, but the exact frames depend on the OS and environment.  Most of the time you'll have:

mysqld-debug.exe!my_sigabrt_handler()[mysqld.cc:2048]
mysqld-debug.exe!raise()[winsig.c:597]
mysqld-debug.exe!abort()[abort.c:78]
......

or

0   mysqld   0x00579d3e my_print_stacktrace + 44
1   mysqld   0x00100f78 handle_segfault + 836
......


Debugging Hangs

When mysqld hangs or flatlines the CPU, and logging in or killing queries doesn't help, you'd better either create a corefile, break into the process with a debugger, or just use PMP (the poor man's profiler; see the sketch after the list below).

You'll probably need stack traces of all the threads to determine what is going on:

  1. A single thread is stuck spinning in some loop
  2. Multiple threads are hitting a hot mutex, or are totally deadlocked, waiting for each other.
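
PMP itself is little more than gdb plus some text mangling.  A minimal sketch, assuming gdb is available on the box and mysqld's PID can be found with pidof:

gdb -ex "set pagination 0" -ex "thread apply all bt" -batch -p $(pidof mysqld) > /tmp/stacks.txt

Collapsing each thread's frames onto one line and piping the result through sort | uniq -c then shows the hottest stacks first, which is usually enough to spot the hot mutex.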


If the deadlock is in InnoDB, you often get useful InnoDB output in the error log
for each waiting thread.  But it can be extremely helpful to get full stack traces too.

--Thread 3003468656 has waited at fsp/fsp0fsp.c line 2204 for 556.00 seconds the semaphore:
X-lock on RW-latch at 0xb759ceb0 created in file fil/fil0fil.c line 1061

That's all for now!

Wednesday, June 16, 2010

quick update from down under...

I'm pleased to report that so far the Soccer World Cup has been pulled off rather successfully, with only minor incidents reported. I have, however, noticed local news showing some 'feel-good' stories that are obviously written to give a false impression of the real situation here to the international media. Let's hope the unions and Eskom workers don't mess things up by holding a gun to the country's head with too much protesting/striking while the international media is watching... Hold thumbs...

BTW, I hate the vuvuzela :)


I thought I should share two important MySQL bugs with you today. If you ever used YaSSL to establish SSL connections, you were at risk of hitting random crashes due to bug #34236 (Various possibly related SSL crashes) whenever more than one concurrent connection was made. The reason is that the YaSSL code was built without mutexes, as if for single-threaded apps...


The next bug is, I think, widespread enough to mention; it is optimizer/query plan related. Examine the testcase on bug #48537 (difference of index selection between rpm binary and .tar.gz, windows vs linux..) and read the changeset notes:


On Intel x86 machines index selection by the MySQL query
optimizer could sometimes depend on the compiler version and
optimization flags used to build the server binary.

The problem was a result of a known issue with floating point
calculations on x86: since internal FPU precision (80 bit)
differs from precision used by programs (32-bit float or 64-bit
double), the result of calculating a complex expression may
depend on how FPU registers are allocated by the compiler and
whether intermediate values are spilled from FPU to memory. In
this particular case compiler versions and optimization flags
had an effect on cost calculation when choosing the best index
in best_access_path().

A possible solution to this problem which has already been
implemented in mysql-trunk is to limit FPU internal precision
to 64 bits. So the fix is a backport of the relevant code to
5.1 from mysql-trunk.
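
To make the quoted fix concrete: on Linux/x86 with glibc, clamping the x87 control word looks roughly like the sketch below. This is my illustration of the idea, not the actual changeset:

#include <fpu_control.h>  /* glibc, x86 only */

/* Limit x87 internal precision from 80-bit extended to 64-bit double,
   so intermediate results round the same way as values stored to memory. */
static void set_fpu_double_precision(void)
{
  fpu_control_t cw;
  _FPU_GETCW(cw);
  cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
  _FPU_SETCW(cw);
}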


Now I'll get back to enjoying the public holiday and bug reporting ;-)

Friday, May 14, 2010

vmstat/iostat replacement for windows ?

I dislike the old perfmon interface and its unreadable graphs and logs. For a long time I've been searching for a basic vmstat and/or iostat port for Windows, one that doesn't rely on that nonsensical cygwin. If anybody knows of one, please leave a comment.


Here's a proof of concept I cooked up in 20 minutes using the PDH (performance data helper) functions. In a nutshell, it queries the PDH counters directly, and I'm free to display
the data however I like.

Here's what I got so far:


proc_q_len pagefile interrupt/s cswitch/s %cpu_user %cpu_sys %cpu_idle %disk_busy %disk_read %disk_write
10 2308780032 2016 2555 64 26 9 100 0 100
3 2308911104 2863 3669 62 13 23 100 1 100
0 2309206016 1857 3057 79 10 9 2 0 2
0 2310217728 2579 3664 64 9 26 100 0 100
1 2309140480 2195 2985 71 3 25 100 0 100
2 2309140480 2241 3042 76 4 18 100 0 100


Not perfect, and I'm still trying to devise a proper layout for the stuff I want to display.
The columns shown correspond to the counters:
  • \System\Processor Queue Length
  • \Process(_Total)\Page File Bytes
  • \Processor(_Total)\Interrupts/sec
  • \System\Context Switches/sec
  • \Processor(_Total)\% User Time
  • \Processor(_Total)\% Privileged Time
  • \Processor(_Total)\% Idle Time
  • \PhysicalDisk(_Total)\% Disk Time
  • \PhysicalDisk(_Total)\% Disk Read Time
  • \PhysicalDisk(_Total)\% Disk Write Time

The general flow of the code is like this (there are samples online):

1. PdhOpenQuery to create the query
2. PdhAddCounter for each counter
3. every X seconds:
3.1 PdhCollectQueryData to take a sample
3.2 PdhGetFormattedCounterValue for each counter
3.3 print the result for each counter
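
For the curious, here's a bare-bones sketch of that flow in C. It polls a single counter standing in for the full list above; the counter path is real, but error handling is mostly omitted:

#include <windows.h>
#include <pdh.h>
#include <stdio.h>
#pragma comment(lib, "pdh.lib")

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER cpu_user;
    PDH_FMT_COUNTERVALUE val;

    /* 1. create the query */
    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS)
        return 1;
    /* 2. add a counter (repeat for each counter you want) */
    PdhAddCounter(query, TEXT("\\Processor(_Total)\\% User Time"), 0, &cpu_user);
    /* rate counters need two samples, so prime the query once */
    PdhCollectQueryData(query);
    for (;;) {
        Sleep(1000);
        PdhCollectQueryData(query);                        /* 3.1 sample */
        if (PdhGetFormattedCounterValue(cpu_user,          /* 3.2 format */
                PDH_FMT_DOUBLE, NULL, &val) == ERROR_SUCCESS)
            printf("%%cpu_user %.0f\n", val.doubleValue);  /* 3.3 print */
    }
    return 0;
}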



Monday, May 3, 2010

Beware of RBR and tables without indexes

I always knew RBR and unindexed tables didn't play along very well, but I never realized just how much you can distress a slave in some cases.
Consider this statement (yeah yeah, I know :)


mysql> delete from t1 order by rand();
Query OK, 78130 rows affected (2.61 sec)

t1 has no indexes and consists of a single int column with numbers from 1 to 78130. However, this will cause the slave to re-read the entire table for each row deleted! Here it's still running, causing 100% cpu usage:

---TRANSACTION 0 1799, ACTIVE 2390 sec, OS thread id 3672 fetching rows
mysql tables in use 1, locked 1
153 lock struct(s), heap size 30704, 78281 row lock(s), undo log entries 35423

Number of rows inserted 78130, updated 0, deleted 35423, read 1076560253
0.00 inserts/s, 0.00 updates/s, 17.58 deletes/s, 367099.91 reads/s

Over a billion row reads 40 minutes later, and it's not even half done yet. For a large table this could take weeks or years to complete. It would be nice if there were a way to prevent this situation from happening.
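
The obvious workaround, assuming you can alter the table, is to give the slave an index to locate rows with, ideally a primary key (the column name id below is made up for illustration):

mysql> alter table t1 add primary key (id);

With that in place, each row event on the slave becomes an index lookup instead of a full table scan.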

Friday, February 12, 2010

debugging mysqld corefile on AIX

I recently had the pleasure of logging into an AIX 5.3 machine for the first time ever, to debug a corefile.

Firstly, having the mysqld binary and core is not enough unless you have an identical machine on which to study the corefile; library mismatches can be a problem. IBM was kind enough to provide the snapcore utility to solve this easily.

Snapcore will gather all the libraries and create a single archive containing the libs, the binary, and the core.

So we now have a file called something like: snapcore_555060.pax.Z
On our dev box, extract the pax archive:

gunzip snapcore_555060.pax.Z
pax -r -f snapcore_555060.pax

On your dev AIX box, make sure you have DBX installed!

bash-3.00# lslpp -l | grep bos.adt.debug
bos.adt.debug 5.3.8.0 COMMITTED Base Application Development

Now, we are ready to debug a core. But we have to instruct DBX to read the libraries
that we got from the pax archive, instead of the default libraries on this system.

dbx -F -p /usr=/home/sbester/core/usr ./mysqld ./core


And since this isn't a DBX tutorial, I'll stop here; you'll use the normal DBX commands to print a stack trace, move up/down frames, print variables, and list the source code.
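
For reference, the handful of dbx commands I mean, as a short session sketch (the variable name is illustrative):

(dbx) where                # print the current thread's stack trace
(dbx) up 2                 # move two frames up the stack
(dbx) print thd->query
(dbx) list                 # list source around the current line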