
AutoMySQLBackup

Submitted by tomo on October 9, 2011 - 1:09am

I'm using AutoMySQLBackup v2.5 and haven't upgraded to v3.0. The new version splits config out from the code (nice) but also expects you to put everything in /etc (not nice). It's a fairly simple script made complicated by the new version, which even ships with an installer script now; all the more reason to stick with the old one.

Some changes I made:

OPT=" --quote-names --opt --skip-lock-tables"

I'm running this script out of cron and I don't want to lock tables which the web server is serving, even if it means some rows will be out of sync.
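For reference, this is just a standard crontab entry; mine looks roughly like the line below (the path and schedule are from my own setup, not anything AutoMySQLBackup dictates):

```
# Run AutoMySQLBackup nightly at 3:15 am; adjust the path to wherever
# you installed the 2.5 script.
15 3 * * * /usr/local/bin/automysqlbackup.sh >/dev/null 2>&1
```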

Then I changed every "rm -fv" to "rm -f", because OpenBSD's rm doesn't have a "-v" (verbose) option, and the verbose output isn't useful in this script anyway.

Since my databases aren't so small anymore, I've had to set MAXATTSIZE="20000". I also want to filter certain tables (big ones that never change, cache and log tables) and AutoMySQLBackup 3.0 has a new setting CONFIG_table_exclude for this.

The section of new code that does this is:

    if ((${#CONFIG_table_exclude[@]})); then
      for i in "${CONFIG_table_exclude[@]}"; do
        opt=( "${opt[@]}" "--ignore-table=$i" )
      done
    fi
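That exclusion loop can be tried standalone to see what it produces; here is a minimal sketch, with made-up table names standing in for real ones:

```shell
#!/bin/bash
# Standalone demo of the v3.0 exclusion loop; the table names are
# placeholders for illustration only.
CONFIG_table_exclude=( "foo.cache" "foo.watchdog" )

opt=()
if ((${#CONFIG_table_exclude[@]})); then
  for i in "${CONFIG_table_exclude[@]}"; do
    # Append one --ignore-table flag per excluded table.
    opt=( "${opt[@]}" "--ignore-table=$i" )
  done
fi

# Print what would be appended to the mysqldump command line.
printf '%s\n' "${opt[@]}"
```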

In 2.5 we can add these tables manually to the OPT variable after the OPT line, e.g.

OPT="$OPT --ignore-table=foo.bar --ignore-table=foo.baz"
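Put together, the mysqldump invocation the 2.5 script ends up building looks roughly like the sketch below. This is simplified: the database name, credentials, and table names here are placeholders, and the real script adds more of its own arguments.

```shell
#!/bin/bash
# Sketch of the options the 2.5 script accumulates, with my changes
# and manual exclusions; "foo" and its tables are placeholders.
OPT=" --quote-names --opt --skip-lock-tables"
OPT="$OPT --ignore-table=foo.bar --ignore-table=foo.baz"

# Echo instead of executing, so the command can be inspected safely.
echo mysqldump --user=backup --password=secret $OPT foo
```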

This saves a lot of disk space and space in my Gmail account used for storing backups.

Sorting broken after upgrading Pligg

Submitted by tomo on January 6, 2011 - 7:36am

If, after upgrading from an old version of Pligg to 1.1.2, you notice that the front page is sorted completely backwards and you can't change the sorting even by fiddling with the sort-query code in index.php, it may be because of the new Groups feature. A new groups table has been added and all queries now join against it, but the Pligg upgrader only creates the table and sets the new link_group_id column to 0 for all rows in links.

The solution is to populate the groups table. An easy way is to head to groups/submit/ and create a generic catchall group. Then, in your database, change the group_id of the newly created row in groups to 0 (if it's 1).

If the Pligg group submitter is broken (as Pligg is prone to be buggy) then just:
INSERT INTO groups (group_id) VALUES (1);

Now those queries like:

SELECT link_id FROM links LEFT JOIN groups ON links.link_group_id = groups.group_id WHERE link_status='published' AND (groups.group_privacy!='private' OR ISNULL(groups.group_privacy) OR groups.group_id IN(1)) GROUP BY link_id ORDER BY link_published_date DESC, link_date DESC LIMIT 0,10

will work again!

Pligg long running queries

Submitted by tomo on September 7, 2010 - 2:20am

For a while I've been plagued by these long running queries on my Pligg site that were running up CPU:

SELECT DISTINCT pv_user_id, pv_type, pv_page_id FROM mypligg_pageviews GROUP BY pv_type, pv_page_id, pv

And so I was running the mkill Perl script to kill any queries taking longer than a minute to complete. But recently even that stopped working, leaving queries in the Killed state but still running up CPU. They were all stuck in "converting HEAP to MyISAM". Did some diagnostics:
 
mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp_tables';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| Created_tmp_tables | 196   |
+--------------------+-------+
1 row in set (0.01 sec)
 
mysql> SHOW GLOBAL STATUS LIKE 'Created_tmp_disk_tables';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 159   |
+-------------------------+-------+
1 row in set (0.01 sec)
 
mysql> show variables like 'tmp_table_size';
+----------------+----------+
| Variable_name  | Value    |
+----------------+----------+
| tmp_table_size | 33554432 |
+----------------+----------+
1 row in set (0.01 sec)

So it seemed there wasn't enough memory to perform the conversion without going to disk: 159 of the 196 temporary tables created (over 80%) had gone to disk, which is not what we want. So I set tmp_table_size and max_heap_table_size to larger values; max_heap_table_size too, because MySQL uses the lesser of the two as the in-memory limit.
 
mysql> set tmp_table_size=100000000;
Query OK, 0 rows affected (0.00 sec)
 
mysql> set max_heap_table_size=100000000;
Query OK, 0 rows affected (0.01 sec)
 

And now we're golden: CPU is back down to normal levels and queries are finishing.
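One caveat worth noting: a plain SET from the mysql client only changes the current session (SET GLOBAL is what affects new connections), and either way the value is lost when the server restarts. To make the change permanent, a my.cnf fragment along these lines should do it (the exact file location varies by system):

```
# my.cnf: persist the larger in-memory temp table limits
[mysqld]
tmp_table_size      = 100M
max_heap_table_size = 100M
```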

© 2010-2014 Saigonist.