mysql-master-ha - issue #90

masterha_check_repl error (node-0.54, manager-0.55) when relay_log_info_repository=TABLE


Posted on Jun 6, 2014 by Helpful Cat

MHA version: node-0.54, manager-0.55
MySQL version: 5.6.15-rel63.0-log
OS version: CentOS 6.4 x86_64

1. Some parameters in my.cnf on both master and slave:

relay_log_info_repository=TABLE
master_info_repository=TABLE
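These settings can be confirmed at runtime on each instance; for example (connection options here are illustrative for this setup):

```shell
# Check that both repositories are actually set to TABLE (MySQL 5.6+):
mysql -h 192.168.100.55 -P 3306 -u root -p \
  -e "SHOW GLOBAL VARIABLES LIKE '%_info_repository'"
```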

2. MHA app1.cnf:

[server default]
manager_log=/var/log/managemha.log
manager_workdir=/var/log/managemha
master_ip_failover_script="/usr/local/bin/master_ip_failover"
master_ip_online_change_script="/usr/local/bin/master_ip_online_change"
password=xxxooo
ping_interval=4
ssh_connection_timeout=15
ping_type=CONNECT
remote_workdir=/data/nodemha
repl_password=xxxx
repl_user=repl
ssh_port=22
ssh_user=root
user=root

[server1]
candidate_master=1
ignore_fail=1
hostname=192.168.100.55
master_binlog_dir=/data/percona-data-3307
port=3307

[server2]
candidate_master=1
ignore_fail=1
hostname=192.168.100.55
master_binlog_dir=/data/percona-data-3306
port=3306

When I executed "masterha_check_repl --conf=/etc/masterha/app1/app1.cnf", I got errors like the following:

Fri Jun 6 15:58:12 2014 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Jun 6 15:58:12 2014 - [info] Reading application default configurations from /etc/masterha/app1/app1.cnf..
Fri Jun 6 15:58:12 2014 - [info] Reading server configurations from /etc/masterha/app1/app1.cnf..
Fri Jun 6 15:58:12 2014 - [info] MHA::MasterMonitor version 0.55.
Fri Jun 6 15:58:12 2014 - [error][/usr/share/perl5/vendor_perl/MHA/Server.pm, ln241] Getting relay log directory or current relay logfile from replication table failed on 192.168.100.55(192.168.100.55:3306)!
Fri Jun 6 15:58:12 2014 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln386] Error happend on checking configurations. at /usr/share/perl5/vendor_perl/MHA/ServerManager.pm line 269
Fri Jun 6 15:58:12 2014 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln482] Error happened on monitoring servers.
Fri Jun 6 15:58:12 2014 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!

I need your help. Thanks!

Comment #1

Posted on Jun 6, 2014 by Grumpy Bear

Could you please paste the result set of the following query, run on the slave 192.168.100.55:3306?

select * from mysql.slave_relay_log_info;

Comment #2

Posted on Jun 8, 2014 by Helpful Cat

Oh, thanks for your answer.

192.168.100.55:3306 is the master, while 192.168.100.55:3307 is the slave, so there should be nothing in the slave_relay_log_info table on 192.168.100.55:3306.
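In other words, the same query against the master returns nothing, because that instance has never run as a slave (connection options illustrative):

```shell
# On the master, 192.168.100.55:3306, the relay-log table has no rows:
mysql -h 192.168.100.55 -P 3306 -u root -p \
  -e "SELECT * FROM mysql.slave_relay_log_info"
# (empty set)
```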

Look at this, from the slave (192.168.100.55:3307):

(user:root time: 11:44 port: 3307)[db: (none)] select * from mysql.slave_relay_log_info;
+-----------------+--------------------------+---------------+------------------+----------------+-----------+-------------------+----+
| Number_of_lines | Relay_log_name           | Relay_log_pos | Master_log_name  | Master_log_pos | Sql_delay | Number_of_workers | Id |
+-----------------+--------------------------+---------------+------------------+----------------+-----------+-------------------+----+
|               7 | ./mysql-relay-bin.000002 |      24298857 | mysql-bin.000008 |       52881234 |         0 |                 0 |  1 |
+-----------------+--------------------------+---------------+------------------+----------------+-----------+-------------------+----+

Comment #3

Posted on Jun 18, 2014 by Massive Rabbit

I have the same problem. I think it may be a bug in Server.pm at line 237: the master's mysql.slave_relay_log_info table is empty, so get_relay_dir_file_from_table returns nothing, and the check below then fails on the master.

if ( $self->{relay_log_info_type} eq "TABLE" ) {
  my ( $relay_dir, $current_relay_log ) =
    MHA::SlaveUtil::get_relay_dir_file_from_table($dbh);
  $self->{relay_dir}         = $relay_dir;
  $self->{current_relay_log} = $current_relay_log;
  if ( !$relay_dir || !$current_relay_log ) {
    $log->error(
      sprintf(
        " Getting relay log directory or current relay logfile from replication table failed on %s!",
        $self->get_hostinfo()
      )
    );
    croak;
  }
}

Comment #4

Posted on Jun 20, 2014 by Swift Rhino

Upgrading to 0.56 should resolve this issue.
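A possible upgrade sequence for an RPM-based install like the one above (package file names are hypothetical; use the actual 0.56 release artifacts for your platform):

```shell
# On every DB node (file name hypothetical, adjust to the real 0.56 package):
rpm -Uvh mha4mysql-node-0.56-0.noarch.rpm
# On the manager host:
rpm -Uvh mha4mysql-manager-0.56-0.noarch.rpm
# Then re-run the replication check:
masterha_check_repl --conf=/etc/masterha/app1/app1.cnf
```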

Status: New

Labels:
Type-Defect Priority-Medium