
MySQL Recovery after DROP TABLE when innodb_file_per_table is OFF

Introduction
Human mistakes are inevitable. A wrong “DROP DATABASE” or “DROP TABLE” may destroy critical data on a MySQL server. Backups would help, but they are not always available. This situation is frightening but not hopeless. In many cases it is possible to recover almost all the data that was in the database or table.
Let’s look at how we can do it. The recovery plan depends on whether InnoDB kept all data in a single ibdata1 file or each table had its own tablespace. In this post we consider the case innodb_file_per_table=OFF. With this setting all tables are stored in a common system tablespace, usually located at /var/lib/mysql/ibdata1.
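Before starting, it is worth confirming which mode the server actually runs in. A quick check with standard MySQL (on a server configured as described in this post it should report OFF):

mysql> SHOW VARIABLES LIKE 'innodb_file_per_table';
+-----------------------+-------+
| Variable_name         | Value |
+-----------------------+-------+
| innodb_file_per_table | OFF   |
+-----------------------+-------+
1 row in set (0.00 sec)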
 
Wrong action – table deletion
For our scenario we will use the test database sakila that is shipped together with the tool.
Suppose we drop the table actor by mistake:
 
 
mysql> SELECT * FROM actor LIMIT 10;
+----------+------------+--------------+---------------------+
| actor_id | first_name | last_name    | last_update         |
+----------+------------+--------------+---------------------+
|        1 | PENELOPE   | GUINESS      | 2006-02-15 04:34:33 |
|        2 | NICK       | WAHLBERG     | 2006-02-15 04:34:33 |
|        3 | ED         | CHASE        | 2006-02-15 04:34:33 |
|        4 | JENNIFER   | DAVIS        | 2006-02-15 04:34:33 |
|        5 | JOHNNY     | LOLLOBRIGIDA | 2006-02-15 04:34:33 |
|        6 | BETTE      | NICHOLSON    | 2006-02-15 04:34:33 |
|        7 | GRACE      | MOSTEL       | 2006-02-15 04:34:33 |
|        8 | MATTHEW    | JOHANSSON    | 2006-02-15 04:34:33 |
|        9 | JOE        | SWANK        | 2006-02-15 04:34:33 |
|       10 | CHRISTIAN  | GABLE        | 2006-02-15 04:34:33 |
+----------+------------+--------------+---------------------+
10 rows in set (0.00 sec)
mysql> CHECKSUM TABLE actor;
+--------------+------------+
| Table        | Checksum   |
+--------------+------------+
| sakila.actor | 3596356558 |
+--------------+------------+
1 row in set (0.00 sec)
 
mysql> SET foreign_key_checks=OFF;
mysql> DROP TABLE actor;
Query OK, 0 rows affected (0.00 sec)
 
mysql>
 
 
 
 
Recover after DROP TABLE from ibdata1
Now the table is gone, but the information it contained can still be in the database file. The data remains untouched until InnoDB reuses the free pages, so hurry up and stop MySQL as soon as possible!
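How exactly MySQL is stopped depends on the distribution and init system; on a typical Linux server one of the following, run as root, will do (the service may also be called mysqld):

# systemd-based systems
systemctl stop mysql
# SysV init scripts
service mysql stop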
For the recovery we’ll use the TwinDB recovery toolkit. Check out our recent post “Recover InnoDB dictionary” for details on how to download and compile it.
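In short, the toolkit (undrop-for-innodb) is built from source. Assuming the usual GitHub location and that gcc, make, flex and bison are installed, the steps look roughly like this:

git clone https://github.com/twindb/undrop-for-innodb.git
cd undrop-for-innodb
make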
 
Parse InnoDB tablespace
 
InnoDB stores all data in B+tree indexes. A table has one clustered index, PRIMARY, where all fields are stored. If the table has secondary keys, each of them has its own index. Every index is identified by an index_id.
 
If we want to recover a table, we have to find all pages that belong to its particular index_id.
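As a side note, on a live server (MySQL 5.6 and 5.7) you can see these index_id values without any external tools through the InnoDB dictionary views. This obviously does not help for a table that is already dropped, but it illustrates what we are looking for:

mysql> SELECT t.NAME AS table_name, i.NAME AS index_name, i.INDEX_ID
    -> FROM INFORMATION_SCHEMA.INNODB_SYS_INDEXES i
    -> JOIN INFORMATION_SCHEMA.INNODB_SYS_TABLES t USING (TABLE_ID)
    -> WHERE t.NAME = 'sakila/actor';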
 
stream_parser reads the InnoDB tablespace and sorts InnoDB pages by type and by index_id.
 
 
 
 
root@test:~/undrop-for-innodb# ./stream_parser -f /var/lib/mysql/ibdata1
Opening file: /var/lib/mysql/ibdata1
File information:
 
ID of device containing file:        64768
inode number:                      1190268
protection:                         100660 (regular file)
number of hard links:                    1
user ID of owner:                      106
group ID of owner:                     114
device ID (if special file):             0
blocksize for filesystem I/O:         4096
number of blocks allocated:          69632
time of last access:            1404842312 Tue Jul  8 13:58:32 2014
time of last modification:      1404842478 Tue Jul  8 14:01:18 2014
time of last status change:     1404842478 Tue Jul  8 14:01:18 2014
total size, in bytes:             35651584 (34.000 MiB)
 
Size to process:                  35651584 (34.000 MiB)
All workers finished in 0 sec
root@test: ~/undrop-for-innodb#
 
 
 
 
stream_parser saves data from the database pages to the folder pages-ibdata1:
 
 
 
root@test:~/undrop-for-innodb/pages-ibdata1/FIL_PAGE_INDEX# ls
0000000000000001.page  0000000000000121.page  0000000000000382.page
0000000000000395.page  0000000000000408.page  0000000000000421.page
0000000000000434.page  0000000000000447.page  0000000000000002.page
...
0000000000000406.page  0000000000000419.page  0000000000000432.page
0000000000000445.page  0000000000000120.page  0000000000000381.page
0000000000000394.page  0000000000000407.page  0000000000000420.page
0000000000000433.page  0000000000000446.page
root@test: ~/undrop-for-innodb/pages-ibdata1/FIL_PAGE_INDEX
 
 
 
Now each index_id from the InnoDB tablespace is saved in a separate file. We can use c_parser to fetch records from the pages, but first we need to know which index_id corresponds to the table sakila/actor. That information can be acquired from the dictionary tables SYS_TABLES and SYS_INDEXES.
 
SYS_TABLES is always stored in index_id 1, i.e. in the file pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page.
Let’s find the table_id of sakila/actor. If MySQL had enough time to flush the changes to disk, add the -D option, which means “find deleted records” (the dictionary records of a dropped table are deleted). The dictionary is always in REDUNDANT format, so we also specify option -4:
 
 
 
 
root@test:~/undrop-for-innodb# ./c_parser -4Df pages-ibdata1/FIL_PAGE_INDEX/0000000000000001.page -t dictionary/SYS_TABLES.sql | grep sakila/actor
000000000B28  2A000001430D4D  SYS_TABLES  "sakila/actor"  158  4  1 0   0   ""  0
000000000B28  2A000001430D4D  SYS_TABLES  "sakila/actor"  158  4  1 0   0   ""  0
 
 
 
Note the number 158 right after the table name: this is the table_id.
 
The next thing to do is to find the index_id of the PRIMARY index of table actor. For this purpose we fetch records of SYS_INDEXES from file 0000000000000003.page (this table contains the mapping between table_id and index_id). The structure of SYS_INDEXES is passed with the -t option.
 
 
 
root@test:~/undrop-for-innodb$ ./c_parser -4Df pages-ibdata1/FIL_PAGE_INDEX/0000000000000003.page -t dictionary/SYS_INDEXES.sql | grep 158
000000000B28    2A000001430BCA  SYS_INDEXES     158     376     "PRIMARY"       1       3       0       4294967295
000000000B28    2A000001430C3C  SYS_INDEXES     158     377     "idx\_actor\_last\_name"        1       0       0       4294967295
000000000B28    2A000001430BCA  SYS_INDEXES     158     376     "PRIMARY"       1       3       0       4294967295
000000000B28    2A000001430C3C  SYS_INDEXES     158     377     "idx\_actor\_last\_name"        1       0       0       4294967295
 
 
 
As you can see from the output, the index_id of the PRIMARY index is 376. Therefore we will look for the actor data in the file 0000000000000376.page:
 
 
root@test:~/undrop-for-innodb# ./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000000376.page -t sakila/actor.sql |  head -5
-- Page id: 895, Format: COMPACT, Records list: Valid, Expected records: (200 200)
000000000AA0    B60000035D0110  actor   1       "PENELOPE"      "GUINESS"       "2006-02-15 04:34:33"
000000000AA0    B60000035D011B  actor   2       "NICK"  "WAHLBERG"      "2006-02-15 04:34:33"
000000000AA0    B60000035D0126  actor   3       "ED"    "CHASE" "2006-02-15 04:34:33"
000000000AA0    B60000035D0131  actor   4       "JENNIFER"      "DAVIS" "2006-02-15 04:34:33"
root@test:~/undrop-for-innodb#
 
 
 
The resulting output looks correct, so let’s save the dump to a file. To make loading simpler, c_parser outputs the LOAD DATA INFILE command to stderr.
 
We will use the default location for these files: dumps/default.
 
 
 
 
root@test:~/undrop-for-innodb# mkdir -p dumps/default
root@test:~/undrop-for-innodb# ./c_parser -6f pages-ibdata1/FIL_PAGE_INDEX/0000000000000376.page -t sakila/actor.sql > dumps/default/actor 2> dumps/default/actor_load.sql
 
 
 
And here’s a command to load the table.
 
 
root@test:~/undrop-for-innodb# cat dumps/default/actor_load.sql
SET FOREIGN_KEY_CHECKS=0;
LOAD DATA LOCAL INFILE '/home/asterix/undrop-for-innodb/dumps/default/actor' REPLACE INTO TABLE `actor` FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"' LINES STARTING BY 'actor\t' (`actor_id`, `first_name`, `last_name`, `last_update`);
root@test:~/undrop-for-innodb#
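Note that LOAD DATA LOCAL INFILE needs the local_infile capability on both sides: the client (hence the --local-infile option used below) and the server. If the server has it disabled, enable it for the duration of the load (and switch it back off afterwards):

mysql> SET GLOBAL local_infile = 1;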
 
 
Load data back to the database
Now it’s time to recover the data into the database. But before loading the dump we need to create an empty actor table with the original structure:
 
 
 
mysql> source sakila/actor.sql
mysql> show create table actor\G
*************************** 1. row ***************************
       Table: actor
Create Table: CREATE TABLE `actor` (
  `actor_id` smallint(5) unsigned NOT NULL AUTO_INCREMENT,
  `first_name` varchar(45) NOT NULL,
  `last_name` varchar(45) NOT NULL,
  `last_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (`actor_id`),
  KEY `idx_actor_last_name` (`last_name`)
) ENGINE=InnoDB AUTO_INCREMENT=201 DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
mysql>
 
 
 
 
 
 
Now the table actor is created and we can load the recovered data:
 
 
 
root@test:~/undrop-for-innodb# mysql --local-infile -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> USE sakila;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
 
Database changed
mysql> source dumps/default/actor_load.sql
Query OK, 0 rows affected (0.00 sec)
 
Query OK, 600 rows affected (0.01 sec)
Records: 400  Deleted: 200  Skipped: 0  Warnings: 0
 
mysql>
 
 
 
 
Checking recovered data
And the final step is to check data quality. We will check the total number of records, preview several rows, and calculate the checksum.
 
 
 
mysql> SELECT COUNT(*) FROM actor;
+----------+
| COUNT(*) |
+----------+
|      200 |
+----------+
1 row in set (0.00 sec)
 
mysql> SELECT * FROM actor LIMIT 5;
+----------+------------+--------------+---------------------+
| actor_id | first_name | last_name    | last_update         |
+----------+------------+--------------+---------------------+
|        1 | PENELOPE   | GUINESS      | 2006-02-15 04:34:33 |
|        2 | NICK       | WAHLBERG     | 2006-02-15 04:34:33 |
|        3 | ED         | CHASE        | 2006-02-15 04:34:33 |
|        4 | JENNIFER   | DAVIS        | 2006-02-15 04:34:33 |
|        5 | JOHNNY     | LOLLOBRIGIDA | 2006-02-15 04:34:33 |
+----------+------------+--------------+---------------------+
5 rows in set (0.00 sec)
 
mysql> CHECKSUM TABLE actor;
+--------------+------------+
| Table        | Checksum   |
+--------------+------------+
| sakila.actor | 3596356558 |
+--------------+------------+
1 row in set (0.00 sec)
 
mysql>
 
 
 
 
As you can see, the checksum after recovery is 3596356558, which is equal to the checksum taken before the accidental drop of the table. Therefore we can be sure that all the data was recovered correctly.
In the next posts we will look at other recovery cases.