Recovering Data from Synology Mirror on New NAS
So, my DS218+ recently died and I picked up a new DS918+. After setting it up with two new 4TB disks, I thought I'd be able to drop in my two old 2TB disks, which had been set up as a mirror in the old system, and just copy the data over. Unfortunately, that's not an option. The common solution in the community seemed to be to buy a hard drive caddy, hook the old disk up to a laptop, and drag the files over that way. As I don't have a caddy available right now, I thought I'd try to figure out on my own how to get at the data.
I'm admittedly a complete novice when it comes to disk management in Linux, so this might be a terrible path to take. Remember that you're playing with your data here, so proceed slowly and at your own risk. If you don't have a backup, you may just want to go the supported route instead. In fact, I walked blindly through this and I'm really only documenting it in case I ever need to do it again :)
Pre-reqs
Before I got started, I already had a few things configured that weren't part of the recovery itself, but did end up being required:
- SSH was enabled (Control Panel -> Terminal & SNMP -> "Enable SSH Service").
- I had PuTTY downloaded and ready to go on my Windows box.
- I had nano installed (from SynoCommunity).
- I had a folder called 'scratch', created using Synology File Station, to hold files before they got organized.
Making it Happen
I had my new NAS set up with two new disks, leaving two available bays. If I tried to boot with an old Synology disk in the third bay, it would just beep at me. Instead, I waited until the NAS was completely booted up and then inserted one of the old disks into Bay 3. I'm not sure whether this is a good idea or not, but it worked.
Once booted, I SSH'd into my new NAS using my administrator account and followed these steps in order (device names like /dev/sdc and the 'old_disk' label are specific to my environment, so substitute your own):
- Executed sudo fdisk -l to get a list of disks. I found /dev/sdc was my disk (the size matched) and saw /dev/sdc1, /dev/sdc2, and /dev/sdc5 listed under it. I took note that /dev/sdc5 was the one I wanted, as it was the largest of them, so it's where my data would have lived. (A few optional sanity checks follow this list.)
- Used mdadm to assemble the RAID (even though I only had half the mirror in there) by executing sudo mdadm --assemble --run /dev/md3 /dev/sdc5. I used /dev/md3 because md0, md1, and md2 were already in use.
- Used nano to modify /etc/lvm/lvm.conf (if you installed nano, you can do this with sudo nano /etc/lvm/lvm.conf). I commented out the line filter = [ "r|^/dev/md0$|", "r|^/dev/md1$|", "a|/dev/md|", "r/.*/" ] and uncommented the line filter = [ "a|.*/|" ]. Without doing this, the disk wasn't showing up when I ran LVM commands.
- As both my old and new disks had identical volume group (VG) names, which caused problems for me, I renamed the old one by first executing sudo vgdisplay and then sudo lvm vgrename <uuid> old_disk, where the UUID comes from the output of vgdisplay and 'old_disk' is what I actually called it so it would stand out. I then ran sudo lvm vgchange -a y because an article I read mentioned you have to activate the VG after a name change. Honestly not sure if that was required, but there was no apparent harm.
- Finally, I got the name of the logical volume (LV) by running sudo lvdisplay and noting that the LV Path (for me) was /dev/old_disk/lv. I was then able to mount it by creating a folder for it with sudo mkdir /mnt/old and then running sudo mount /dev/old_disk/lv /mnt/old. (A read-only variation on this mount appears in the sanity checks below.)
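A few optional sanity checks for the steps above. None of these came from my original notes; they're standard commands I'm reasonably confident in, but treat them as a sketch. For the disk identification step, fdisk can be pointed at a single device to cut down on noise (swap in your own drive letter for /dev/sdc):
sudo fdisk -l /dev/sdc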
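To confirm the degraded array actually assembled before touching LVM, the kernel's RAID status file and mdadm's detail view both report on it:
cat /proc/mdstat
sudo mdadm --detail /dev/md3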
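Once the lvm.conf filter is relaxed, the assembled array should show up as a physical volume; if /dev/md3 is missing from this output, the filter edit didn't take:
sudo pvs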
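After the rename, the short-form LVM summaries are a quick way to confirm the old VG now appears under its new name alongside the system's own volume group (if the plain commands aren't on the box, prefixing with lvm, as in sudo lvm vgs, should work the same way):
sudo vgs
sudo lvs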
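And since the old disk only ever needs to be read from, mounting it read-only is a slightly safer variation on the mount above (my own hedge; I didn't test this on the DS918+):
sudo mount -o ro /dev/old_disk/lv /mnt/old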
With all of this complete, I could now browse my old file system in /mnt/old and copy data back to my volume by going into each folder I wanted to copy data from and executing cp -R . /volume1/scratch, where 'scratch' is the folder I created way up above in the pre-reqs.
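As a concrete version of that copy, with 'photos' standing in as a hypothetical folder name on the old volume:
cd /mnt/old/photos
cp -R . /volume1/scratch
That pulls the folder's contents into scratch, which I could then sort out in File Station afterwards.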
Reverting my Changes
When the copy completed, I wanted to back out my changes and get back to a clean environment. To do so, I took the steps below. Be super careful here; you don't want to remove the wrong data. Also, I took these steps knowing I'd be wiping the disk entirely when all was done.
- Unmounted the volume using sudo umount /mnt/old (note there is no 'n' in umount).
- Removed the 'old' folder that I had created with sudo rm -r /mnt/old.
- Removed the volume group using sudo vgremove old_disk (I couldn't remove the RAID until I did).
- Stopped the RAID using sudo mdadm --stop /dev/md3 (a quick confirmation check follows this list).
- Undid the change I made to lvm.conf (See original steps above).
- Shut down the NAS and removed the disk.
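Before that final shutdown, the same status commands from earlier can confirm the teardown took: /dev/md3 should no longer appear in mdstat, and 'old_disk' should be gone from the VG list.
cat /proc/mdstat
sudo vgs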
I'll stress again that none of this is supported, but it worked for me and got my old data back. In my new setup, I'm encrypting my important folders and archiving them off to Glacier instead of depending on just RAID (which was never a good idea to begin with).
Credit
I couldn't have done any of this without the following sites, all of which helped every time I hit the next roadblock:
- A beginner’s guide to disks and disk partitions in Linux
- Mdadm Cheat Sheet
- How to mount space in /dev/sda2
- mount: unknown filesystem type 'linux_raid_member'
- Synology Community: Trying to mount a non-synology disk (ext4, lvm, non-raid)
- Synology Community: Diskstation rename volume - rename volume group
- Synology Community: Data corruption after recovery (SOLVED)
- Solving the error "mount: unknown filesystem type LVM2_member"