InnoDB: Operating system error number 13

I was working tonight on moving the MySQL datadir from /var to /home (as the /var partition was almost full). Once this was done, I noticed that MySQL was no longer starting and its log was returning the following error:

Jan 28 22:18:35 mysqld[24760]: 101011 22:18:35  InnoDB: Operating system error number 13 in a file operation.
Jan 28 22:18:35 mysqld[24760]: InnoDB: The error means mysqld does not have the access rights to
Jan 28 22:18:35 mysqld[24760]: InnoDB: the directory.
Jan 28 22:18:35 mysqld[24760]: InnoDB: File name ./ibdata1
Jan 28 22:18:35 mysqld[24760]: InnoDB: File operation call: 'create'.
Jan 28 22:18:35 mysqld[24760]: InnoDB: Cannot continue operation.

After a long investigation (operating system error number 13 is EACCES, i.e. Permission denied) I noticed that this was a direct result of having SELinux enabled on the server:

root@dragos [~]# sestatus
SELinux status:                 enabled

As my customer was running CentOS on the machine, I went ahead and disabled SELinux by running:

root@root [~]# setenforce 0
setenforce: SELinux is disabled

The above method lets you turn off SELinux enforcement without having to reboot the machine (setenforce 0 actually switches SELinux to permissive mode rather than fully disabling it). Disabling SELinux permanently by editing /etc/selinux/config requires a server reboot.
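If you want the change to survive a reboot, the relevant line in /etc/selinux/config is shown below. A less drastic alternative is to keep SELinux enabled and simply relabel the relocated datadir; this is only a sketch, and the /home/mysql path is an assumption based on the move described above:

# /etc/selinux/config -- takes effect at the next reboot
SELINUX=disabled        # or "permissive" to keep logging denials without blocking

# Alternative: keep SELinux enforcing and fix the file contexts of the new datadir
semanage fcontext -a -t mysqld_db_t "/home/mysql(/.*)?"
restorecon -Rv /home/mysql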

Notification on server restart

Many times you find out that your server has been rebooted, but the information reaches you too late. For those cases (and not only), you can set up a cron job that sends a notification to your email address each time the server is rebooted. First run:

crontab -e

(this opens up vi as your crontab editor) and add this line at the end of the file:

@reboot echo "`date +%r` `date +%b-%d-%Y` the server has been restarted." | mail -s "`hostname` restart" dragos@fedorovici.com

This way, each time the server is restarted, a notification is sent to your email address. The notification includes the hostname of the machine (in the subject line) and the date and time when the server was restarted (in the body).
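If the machine cannot send outbound mail, the same @reboot hook can simply append to a local file instead; a minimal variation (the log file path here is just an example):

@reboot echo "`date +%r` `date +%b-%d-%Y` the server has been restarted." >> /var/log/reboot-notify.log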

WARNING: mismatch_cnt is not 0 on /dev/md0

Recently I received (through the Plesk backup cron) several reports about a mismatch in synchronized blocks on the md0 (/boot) device:

Cron run-parts /etc/cron.weekly
/etc/cron.weekly/99-raid-check:
WARNING: mismatch_cnt is not 0 on /dev/md0

Indeed the value for mismatch_cnt was nonzero:

[root@server ~]# cat /sys/block/md0/md/mismatch_cnt
128

First I checked the status of the RAID arrays, but cat /proc/mdstat was not reporting any issues:

[root@server ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
264960 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
8193024 blocks [2/2] [UU]

md2 : active raid1 sdb3[1] sda3[0]
235737728 blocks [2/2] [UU]

To repair this I triggered a repair of the array and then re-ran the check:

echo repair >/sys/block/md0/md/sync_action
echo check >/sys/block/md0/md/sync_action
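Both the repair and the check run in the background, so give them time to finish before reading the counter again; you can follow their progress with something like this (a quick sketch):

cat /sys/block/md0/md/sync_action     # prints repair, check or idle
cat /proc/mdstat                      # shows the resync/check progress while it runs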

Once the check completed, the counter was back to the expected value:

[root@server ~]# cat /sys/block/md0/md/mismatch_cnt
0

child pid exit signal File size limit exceeded (25)

I was investigating an issue today with a Joomla installation, where everyone (including the customer) thought that it was related to the well-known “white screen of death”. After a small investigation I noticed that each time that page was accessed, Apache’s global error log file (error_log) was reporting this:

[Sun Oct 31 08:43:40 2010] [notice] child pid 5984 exit signal File size limit exceeded (25)
[Sun Oct 31 08:43:40 2010] [notice] child pid 5985 exit signal File size limit exceeded (25)
[Sun Oct 31 08:43:44 2010] [notice] child pid 5998 exit signal File size limit exceeded (25)

Exit signal 25 is SIGXFSZ, which the kernel sends when a process exceeds a file size limit. Apache versions older than 2.1 cannot handle files larger than 2 GB; this was fixed in Apache 2.1 and 2.2, which both handle files using 64-bit file offsets. The same error can also show up if the file system itself does not support files over 2 GB in size (as on some older ext2 setups).

Solution: clear any log files that have reached the 2 GB limit and restart the web server (in this case it was the error_log file):

# cp error_log error_log.old
# cat /dev/null > error_log
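To keep the log from creeping back up to the 2 GB limit, it is worth rotating it regularly. A minimal logrotate sketch (the paths and the reload command assume a CentOS-style Apache layout and may need adjusting):

/var/log/httpd/*log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    postrotate
        /sbin/service httpd reload > /dev/null 2>&1 || true
    endscript
}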