maatkit - issue #1352

Negative "at byte:" offsets


Posted on Sep 7, 2011 by Happy Bird

I'm seeing a fair number of very large, or even wrapped-around negative, values for the "at byte:" offset used to find a representative query.

Example:

Query 1: 0.08 QPS, 0.03x concurrency, ID 0xC94C6BB4AC84C91A at byte -2135329898

This is probably something experts can calculate, but, really, I'd rather just have the first representative sample if that's practical.
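(A rough guess on my part: if that value is a larger offset that wrapped through a signed 32-bit integer, the real position would be -2135329898 + 2^32 = 2,159,637,398 bytes, i.e. roughly 2 GiB into the input. I have no idea whether that's actually what's happening.)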

More context: http://6112northwolcott.com/drupal/slow_query_log_report_head_1200

Full Log (raw or gzip) is accessible from links at: http://6112northwolcott.com/drupal/

Comment #1

Posted on Sep 7, 2011 by Happy Bird

The tool is mk-query-digest. Not sure how to "set" that in the report. Sorry!

Comment #2

Posted on Sep 14, 2011 by Swift Cat

I've never seen this and I have no idea how it could happen. Perl shouldn't wrap around; it should just use floats, and your file would have to be incredibly large for that to happen. It is possible that tell() is lying to Perl, although I can't imagine why. Running the tool under strace would show this in action, but that seems rather inefficient. Is the log file on NFS, or is there anything else odd or unusual about the storage system that could cause this? How big is the log file?
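For what it's worth, the specific number reported is consistent with a file position a little past 2 GiB being squeezed through a signed 32-bit integer somewhere along the way. That is only a guess, not something I've confirmed in the code, but here is a minimal sketch of that failure mode (hypothetical illustration, not maatkit code):

    #!/usr/bin/env perl
    use strict;
    use warnings;

    # An offset just past 2 GiB, forced into 32-bit storage and read back
    # as a signed value, comes out as the number seen in the report.
    my $offset   = 2_159_637_398;                   # > 2**31 bytes, ~2 GiB
    my $as_32bit = unpack 'l', pack 'L', $offset;   # store as 32 bits, read back signed
    print "$offset stored in 32 bits reads back as $as_32bit\n";
    # prints: 2159637398 stored in 32 bits reads back as -2135329898

If tell() really is returning something in that range, the input would have to be well past 2 GiB, which is why the size of the log file matters.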

Comment #3

Posted on Sep 14, 2011 by Happy Bird

The full log file is about 3Meg.

The file system is ext3, as installed by stock RHEL 5.5.

It is running on top of VMware on a SanDisk drive, but that's at such a low level that I don't see how it could affect the log file or a Perl script reading it... But I'm just a lowly developer, not a sysadmin.

The whole log file is available from the link in the original report if you want to download it. I have a gzipped version there that's not too terribly big: 420K.

Comment #4

Posted on Sep 16, 2011 by Swift Cat

That's the report, not the original log that you analyzed to generate the report. What is the input that you gave to the tool?

Comment #5

Posted on Oct 11, 2011 by Swift Cat

If you can reproduce this problem reliably, please open an issue in Launchpad against the percona-toolkit project.

Comment #6

Posted on Nov 8, 2011 by Happy Bird

Sorry for my inattention. This ticket was linked by Google to my Gmail account, which I rarely use. You can find the file (90M after gzip) here: http://6112northwolcott.com/drupal/mysqld.slow.log.gz

OFF-TOPIC PS: The ORIGINAL problem was not MySQL at all, but a brain-dead "cache" algorithm in Drupal: cache the full HTML, but only until the next time cron runs. Every cron job nukes the page_cache. Period.

Trying to extend a page cache lifetime leads to HTML caches referencing CSS that was purged. Not pretty. Figuratively, and literally.

Oh well. If it helps make maatkit better somehow, it was worth it.

Status: NotReproducible

Labels:
Type-Defect Tool-mk_query_digest