55 Comments
omgyeti - Friday, August 22, 2014 - link
Awesome article. I've had a DiskStation for over 3 years without any hiccups, but it's always nice to see an article with multiple recovery solutions presented in the event my luck takes a turn for the worse someday. I also back the NAS up to a USB drive, but don't want that to be my only hope in the event something ever happens.
garrytaylo987 - Wednesday, June 6, 2018 - link
Issues related to hard disks are common, so don't worry if you need instant recovery of data and partitions; first try a free tool such as MiniTool, which is secure, user-friendly, and free. If the issue is not solved...
t-rexky - Friday, August 22, 2014 - link
A very interesting read - thank you.
I have a DS1512+ with four 1 TB drives set up using Synology Hybrid RAID, and a fifth 3 TB drive holds a backup of the data. Coincidentally, I have been concerned about the backup integrity, and I just had some discussions with Synology on the subject. The Synology Backup application cannot create a complete backup of my data because it skips all the symbolic links present in my data on the RAID volume. The only workaround at this point is to set up a cron job that uses rsync for backups instead of the Synology GUI backup application. Synology have promised to look into correcting this in the future, but right now the backups created by the DSM GUI are not complete.
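(For anyone wanting the same workaround, a minimal sketch of such a cron job; the paths and schedule here are hypothetical and would need adjusting to the actual volumes:)

    # crontab entry: nightly rsync of the data volume to the backup drive;
    # -a preserves symbolic links, permissions and timestamps
    0 2 * * * rsync -a --delete /volume1/data/ /volume2/backup/data/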
Dahak - Friday, August 22, 2014 - link
Interesting article and a nice look at some possible ways to try to recover data.
But I think that you got lucky as it was a hardware failure and not a drive failure, especially when you mentioned that UFS Explorer automatically found the array.
I wonder if other people are as curious as I am to see how this would play out with a simulated drive failure, i.e. leaving one of the drives out to simulate a failed / clicking drive.
DanNeely - Friday, August 22, 2014 - link
Ganesh already does RAID rebuild tests that simulate this.
Dahak - Friday, August 22, 2014 - link
I do know that he does the RAID rebuilds, but I mean more of a worst-case scenario where the RAID rebuild does not work due to some hardware issue and you have to pull the drives and put them into another system.
Although I know that, ideally, that's where your backups come in.
Flunk - Friday, August 22, 2014 - link
Compared to my experiences with low-end NAS units from other vendors, this actually seems quite reasonable. It's the sort of thing that most enthusiasts or IT people could do without having to send it out for data recovery.
icrf - Friday, August 22, 2014 - link
I set up a DS411j a few years ago with 2 TB Seagate drives in RAID 5, and it's still working perfectly fine. I was explicit about choosing RAID 5 because I had manually run arrays in mdadm for a while and, when I logged into a shell on the NAS, noticed that's all it was. I never had any doubt that if the NAS itself failed, but not the drives, I could just plug them into another machine and have access to everything. It would never have crossed my mind to look for a Windows tool to access them. Having to stop an array that wasn't quite there before forcing it to show up would probably have taken me a while to figure out, too.
The worst that happened to me is when I had drives split between multiple SATA controller cards, and one of the controller cards flaked out and dropped half the drives in my RAID 5 array all at once. Since the array wasn't just degraded, it was down, there were no changes made. I just had to convince mdadm they weren't half spares. Calling mdadm with --assume-clean was the ticket. You just have to get the order of the drives right when you pass in the devices, or else the file system looks corrupt. You can stop and restart the array with another --assume-clean and another guess at the order until the file system is valid, without problems.
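(For anyone facing the same thing, a rough sketch of that trick; the device names and RAID parameters here are made up, and re-creating over live members is risky, so treat this as the shape of the procedure rather than a recipe:)

    # stop whatever half-assembled array is present
    mdadm --stop /dev/md0
    # re-create in place without resyncing; level, chunk size and metadata
    # version must match the original, and the member order here is a guess
    mdadm --create /dev/md0 --assume-clean --level=5 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
    # mount read-only to test the guess; if the filesystem looks corrupt,
    # stop the array and try another order
    mount -o ro /dev/md0 /mnt/test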
I love mdadm. Unfortunately, I also had a drive dying slowly, with lots of bad sectors silently corrupting data for months. That led me to ZFS and data checksums, which are completely awesome. I'm not nearly as familiar with ZFS as I am with mdadm, so it makes me a little nervous. It also doesn't allow online capacity expansion the way mdadm does. I think my next array is going to be a little more bleeding edge and use btrfs. That should be close to the best of both worlds.
Gigaplex - Saturday, August 23, 2014 - link
I agree with your sentiments about considering btrfs; however, I'd advise against it for RAID 5 equivalence for quite some time. Not only is it still considered experimental, it flat out isn't finished. Last I checked, it wasn't capable of automatically dropping a drive as it goes bad in parity mode.
isa - Friday, August 22, 2014 - link
Great article. I think the two biggest issues were that the QSync app didn't do what you told it or expected it to do, and that when it failed, the QSync app didn't tell you it had failed. Hardware has come a long way in the past 15 years or so, but the robustness of backup/sync apps has not - we had apps that didn't do what we wanted many years ago.
Given the vital importance of backup and sync apps doing what we expect them to do, app developers should spend much more effort on scripts or the like to set up backups more robustly, on self-tests of configs and settings to ensure they will do what you expect, and on better alert reporting when things you don't expect occur. Put another way, you found out that your backup failed only when you needed it, which was exactly the state of the art 20 years ago. Disappointing (but not surprising) that it still may be for many users.
deeceefar2 - Friday, August 22, 2014 - link
If, instead of using one QNAP and one Synology, both units had been the same brand, you wouldn't have had an issue. You could have just popped the drives immediately into the other NAS and sent the Synology back for refurbishing. The way we did it was two QNAPs, one at the office and one at my house. When we had a failure of the main QNAP, we sent it in for repairs and brought the one from home in. We had them doing remote replication, and then, using Dropbox sync, we had one version in the cloud that was synced to individual workstations. So workstations doing video editing could work much faster locally, and then that work would get synced to the main drive and to the remote version at the same time.
ruidc - Friday, August 22, 2014 - link
We had a Thecus that died and were told that we could simply plug the drives into the shipped replacement unit. When we did so, it initialized the array. Now, I'd go UFS every time instead (having used it successfully on a single drive to get the contents of an XFS drive that would not mount on another occasion). But I did not have a spare machine capable of connecting all the drives. Luckily nothing of importance was lost.
imaheadcase - Friday, August 22, 2014 - link
Ganesh T S, any plans to do a custom NAS buying guide like the one done in 2011? Lots of custom options are out now for that.
matt_v - Friday, August 22, 2014 - link
This article really takes me back to my own experience with NAS data recovery. After a firmware upgrade in 2012, my QNAP completely lost its encrypted RAID 6, and claimed it had 6 unassigned drives. After much Googling and careful experimenting with nothing but a CentOS VM on my notebook, I was able to extract all the files with all Unicode filenames intact (VERY important in a tri-lingual family).
- SSH into the NAS as root and create a new mountpoint that's not referenced in any possibly corrupted config (I used /share/md1_data, where the default is md0_data)
- Assemble and mount the mdadm volume using the same console commands Ganesh used
- Go into the web GUI and unlock the volume that "magically" appears in the Encrypted File System section
- Open WinSCP and log into the QNAP as root
- Copy out the contents of /share/md1_data to a backup volume of your choice (I used a NexStar HX4R with a 4x4TB RAID 5+1)
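(For reference, the console side of the steps above would look roughly like the sketch below; the device names are assumptions, and on QNAP firmware the data partitions are usually the third on each disk, which mdadm --examine can confirm:)

    mkdir /share/md1_data
    # assemble the data array from the six member partitions
    mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3 /dev/sdf3
    # an unencrypted volume would be mounted directly; here the web GUI
    # unlock step handles the encrypted mapping and mount
    # mount /dev/md1 /share/md1_data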
After successfully extracting all the files from the array, I completely nuked the QNAP configuration, built a new array from scratch, and copied the files back. These days, the NexStar acts as a backup repository using TrueCrypt and a Windows batch script. Ugly, but functional, and the QNAP hasn't had a single config panic since.
Oyster - Friday, August 22, 2014 - link
"If QNAP's QSync had worked properly, I could have simply tried to reinitialize the NAS instead of going through the data recovery process."It seems you're blaming QSync for the failure as well... didn't you say the Synology circuit board died? How do you expect any external applications to "talk" with the Synology unit? Can you share your thoughts on why/how you expected QSync to function in this scenario?
This is no different than having OneDrive on two machines, and then blaming OneDrive for not syncing when one of the machines dies on you!?!?
ganeshts - Friday, August 22, 2014 - link
The fact that the circuit board died is orthogonal to whether QSync worked.
The data was on the hard drives of the DS414j for more than 12 hours before the 414j died. The CIFS share on the unit was used to upload the data, so there was actually no problem accessing the CIFS share at that time, and for some time thereafter too.
The CIFS share was mapped as the 'QSync' folder on a separate Windows 8 PC (actually a VM on the TS451). QSync was installed on that PC. QSync's design intent, or at least the way it presents itself to users, is that it does real-time bidirectional sync. It should have backed up the new data in the QSync PC folder (i.e., the DS414j share) to the TS451, but it didn't do it.
I had personally seen the backup taking place for the other data that I had uploaded earlier - to either the TS451 QSync folder or the DS414j share - so the concept does work. I actually noted this in my coverage of the VM applications earlier this week - the backup / sync apps don't quite deliver on the promise.
Oyster - Friday, August 22, 2014 - link
Thanks for the detailed clarification. Much appreciated.
Two things I want to point out:
1) I was under the impression that QSync is simply for syncing folders. I'm surprised you're using it for full-blown backups. Was this something QNAP suggested? I'm asking because I own a QNAP, and it would be good to know where QNAP is taking QSync.
2) I have backups set up on my QNAP NAS using the Backup Station app. I was always under the impression that Backup Station is the go-to app for maintaining proper backups on QNAP (it even provides rsync and remote NAS-to-NAS replication). This app has a notification feature which ties in with the notification settings in the Control Panel. I haven't had anything fail on me, but I tested the notification functionality using a test email, and it worked fine. I'd think that had you utilized Backup Station, you would have been notified the moment things stopped working.
Just to point out, I'm in no way being defensive about the QNAP. I'm in full agreement with you that some of these utilities could use more work, especially something that allows us to read raw drives in a PC environment in the face of a failure.
ganeshts - Friday, August 22, 2014 - link
I am not sure how QSync is understood by users, but my impression after reading the feature list was that it could be used as an alternative to Dropbox, except that it uses a 'private cloud'.
Do I use Dropbox for folder syncing or backup? I would say both. On my primary notebook, I work on some files and upload them to Dropbox. On my work PC, I could work on other files and upload them to the same Dropbox. In the end, I expect to be able to work with both sets of files on both the notebook and the work PC. Extending this to QSync - I could put files to 'sync/backup' through the QNAP QSync folder, or upload them to some other path mapped as a QSync target along with the QSync program / application on a PC.
I believe backup and RTRR (real-time remote replication) are both uni-directional only. My intent was to achieve bidirectional sync / backup, which is possible only through 'Dropbox-style' implementations. If there are alternatives, I would love to hear about them.
Gigaplex - Saturday, August 23, 2014 - link
"I was under the impression that QSync is simply for syncing folders. I'm surprised you're using it for full blown backups."What is a backup? It is a copy of the data. What does syncing do? It copies data.
hrrmph - Friday, August 22, 2014 - link
I noticed AT's steady increase in NAS coverage and wondered how long it would take to get to this point. Well, not too long, it would seem.
It just proves that, once again, complexity kills. RAID, NAS, etc. aren't backup solutions, but rather high capacity, high availability, and high performance solutions.
It's good to see that most people writing articles, and commenting today, already understand that a NAS isn't a 'safety' device. It is a fool's errand to think of a NAS or RAID as providing any safety.
The value of a NAS as a high performance solution is questionable, because a single SSD popped into a spare bay on a desktop system will outperform the NAS, except when the NAS is populated with SSDs in a performance RAID configuration. Then you have the problem of getting a high enough bandwidth connection between the NAS and the client. For performance, you are best sticking with a high performance desktop. If you insist on a laptop as your main machine, then connect it to the high performance desktop using Thunderbolt.
As for high availability, you are either a business and know how to do this yourself (and have a competent IT department to implement it), or you are a consumer. A consumer can just buy high availability as a service (such as from Amazon). Or the consumer is a tinkerer and doesn't care about efficiency or cost effectiveness. Which brings us back to AT's series of articles on NAS devices.
If you are like me, and aren't ready to relinquish everything to the cloud, a dodgy proprietary NAS scheme, or an even dodgier RAID setup, an alternative is to just build a low-power PC fitted with a 16-port HBA card and an appropriate chassis with racks. The hardest part these days is finding a case that is appropriate for a bunch of front-loading racks to hold all of the quick-swap drives. But it is nonetheless one of the most viable ways to improve capacity and safety without going to the cloud.
As SSD prices slowly descend, this even becomes a viable performance option, with non-RAID drive setups capable of supplanting a bunch of spinning disks in a performance RAID setup.
Impulses - Friday, August 22, 2014 - link
While I generally agree with your logic (having never given my desktop up as my primary system, and being single), saying "just plug the laptop in via TB" or whatever isn't exactly a viable alternative for many users.
I don't own a NAS, but it seems to me the biggest market for these units is laptop-dependent and/or multi-user households... When you have a couple, and possibly kids, each with their own laptop, it's much easier to have a centralized media store in a NAS than anything directly attached.
Gigaplex - Saturday, August 23, 2014 - link
"A consumer can just buy high availability as a service (such as from Amazon services)"Not on ADSL2+ when dealing with multiple TBs of data I can't.
wintermute000 - Saturday, August 23, 2014 - link
QFTW
HangFire - Friday, August 22, 2014 - link
Losing a day's work is considered acceptable in most environments. In theory, yesterday (or last Friday) is fresh in everyone's mind, and the raw source material (emails, experimental data, faxes, etc.) is still available in its original form to redo any data entry.
What is interesting about the QSync situation is the cascading effect of failures. If caught early, through log examination, dashboards, whatever, true disaster can be averted. If minor issues like sync failures are allowed to continue, and a RAID failure follows, say, a month later, then a month's worth of work can be lost. That is not acceptable in any environment.
Kougar - Friday, August 22, 2014 - link
Thanks for the article! I have a ~6 year old TS-409 Pro that is still running great, but internal component failure has been on my mind for a while now. I'll be bookmarking this in case I ever need to use recovery options on it, as I wasn't aware of either of these tools!
kmmatney - Friday, August 22, 2014 - link
Nice article! I use a WHS 2011 server with Stablebit DrivePool for redundancy. The nice thing about DrivePool is that the drives are kept in standard NTFS format. You can just take a drive out and plug it into any computer to retrieve files, so data recovery is a piece of cake.
Impulses - Friday, August 22, 2014 - link
Shame WHS is now RIP
DanNeely - Friday, August 22, 2014 - link
Yeah. I'm really hoping ZFS or Btrfs NASes (without huge price premiums) will be available in the next year and a half as reasonable replacements for my current WHS 2011 box.
Impulses - Friday, August 22, 2014 - link
That'd be nice. I never bought my own, but I recommended and set up several for various clients & family members with small businesses...
No clue what I'd tell them to migrate to right now if one were to break down; the ease of recovery and expansion was one of the biggest draws of WHS, and in fact the reason many picked it over cheaper NAS boxes.
Gigaplex - Saturday, August 23, 2014 - link
There's a reason it was killed off. It has some serious design flaws that trigger data corruption, and Microsoft couldn't figure out how to resolve them. It has great flexibility, but I wouldn't trust it with my data.
Lerianis - Friday, September 5, 2014 - link
Links to the articles supporting that, please.
YoshoMasaki - Monday, August 25, 2014 - link
Hi there, I was wondering if you would recommend "Windows Server 2012 R2 Essentials with Update x64" as a home server for backups? I can get this for free through Dreamspark (https://www.dreamspark.com/Product/Product.aspx?pr... but I have never used WHS before, and I'm a little intimidated by it. Reading "Windows Server 2012 R2 Essentials ... continues to have the requirement that it must be an Active Directory domain controller and that it must be the root of the forest and domain" (source here: http://winsupersite.com/windows-server-2012/window... makes me think I'm in over my head, but I REALLY get lost when folks here talk Linux/Unix file systems and custom RAID stuff with a half-dozen drives. I'm a Windows guy for 20+ years now, so I think I can learn it, but I wonder if it'd be worth it. Thanks for your input.
YoshoMasaki - Monday, August 25, 2014 - link
Post above ate my links ... please remove the parenthesis from the end or click here:
https://www.dreamspark.com/Product/Product.aspx?pr...
http://winsupersite.com/windows-server-2012/window...
fatbong - Monday, August 25, 2014 - link
Completely agree. I wish there was a NAS available which used NTFS and simple disk mirroring. It would make data recovery extremely easy if the NAS were to suffer a hardware failure. I have an aging Buffalo LinkStation Quad, and hardware failure worries me. Is there any NAS out there which uses NTFS? And no, I don't want to build/buy a server. I want an appliance.
Stylex - Thursday, August 28, 2014 - link
Yeah, I migrated from WHS 2003 to Win8 with DriveBender, similar to DrivePool. I love that if it goes sideways, all my stuff is NTFS. I don't have the time or stress levels to deal with Linux command line stuff to get it working again.
BD2003 - Friday, August 22, 2014 - link
I recently decided to drop my home NAS (Synology DS212j), since I no longer have multiple PCs... and getting that data off was a nightmare. It had a backup drive that was formatted in ext4, since Synology didn't support incremental backups to NTFS.
Because of the way it stored the incremental backup, it was basically useless for reading directly through an ext driver for Windows. I had to completely wipe the backup drive and reformat it in NTFS to make a one-time backup, and cross my fingers that I didn't lose a drive during the damn near 24-hour process (thanks to the hyper-fragmented NAS drives, barely adequate NAS CPU, and USB 2.0). Then I had to pull the drives, reformat them, and pray the backup worked. Then transfer everything back. This process literally took days.
If it was a Windows-based box, I could have just pulled the drives, dropped them in the PC, and been done with it in 5 minutes, without even rebooting. I probably would never even have dropped the NAS, since I could have upgraded it without having to migrate anything.
Basically, the entire experience put me off ever using a Linux-based NAS again. Between the file system incompatibility and the potential for RAID array failure... it's just not worth it. My data has never felt as unsafe as during that process.
dabotsonline - Friday, August 22, 2014 - link
"In the end, I decided to go with a portable installed system, which, unlike a persistent install, can be upgraded / updated without issues. I used a Corsair Voyager GT USB 3.0 128 GB thumb drive to create a 'Ubuntu-to-go' portable installation in which I installed mdadm and lvm2 manually."Even though it wasn't specified in the Synology FAQ, wouldn't a portable installation of Parted Magic or SystemRescueCd work OK?
Christobevii3 - Friday, August 22, 2014 - link
From my experience with my Synology 413j for a while, I've learned a few things (a sketch of doing the first tip by hand follows after this list):
First: Turn on the SMART disk check to run weekly; otherwise, if you never restart the device, you won't have a hint of failure coming.
Second: 5 TB USB drives are cheap; set up a backup task to these.
Third: UPS. Always run a UPS. A $50 APC is enough and will hold the device up for a while, allowing it to shut down properly in a power outage.
Last: If you have a disk failure, it locks up the device. You will probably be able to detect which disk it is by pulling one at a time. When you pull the proper one, the device will be accessible again and you'll know which disk to replace.
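(DSM schedules the SMART tests from its GUI; done by hand over SSH, the first tip would look something like this sketch, with hypothetical device names:)

    # root crontab: weekly long SMART self-test on each member disk
    0 3 * * 0 smartctl -t long /dev/sda
    0 3 * * 0 smartctl -t long /dev/sdb
    # inspect the results later with: smartctl -l selftest /dev/sda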
jbm - Friday, August 22, 2014 - link
Very interesting article - I have often asked myself what to do if my NAS should ever die a sudden death, because it's a bit old (Thecus 4200) and I probably would neither manage to buy another one nor want to (not because it is bad, just because I'd rather switch to something newer). More articles like this, please!
Nogami - Saturday, August 23, 2014 - link
I'd be curious what exactly was the component that died - I've had yet another capacitor failure in the last week, which took out an old LGA 775 motherboard (though it happily enabled an upgrade to an i7-4790K).
All in all, the vast majority of hardware failures I've had in the last 5 years have been due to capacitor death, usually in power supplies, causing general flakiness and eventually becoming terminal. I'm curious if that was the case here as well.
zodiacfml - Saturday, August 23, 2014 - link
Is it still that hard to restore files from a NAS? Vendors should develop a better way.
colecrowder - Saturday, August 23, 2014 - link
At work we recently had two drives error out on a Synology RAID 5. It wasn't quite a total failure of two drives, but it crashed the volume. It's a 30+ TB system for our film digitization business, a 13-disk RAID (1812+ and 513 expansion), and we've tried just about everything to recover, to no avail. UFS didn't help, and an expert in the Ubuntu method we hired couldn't fix it either. Lesson learned: back everything up every night! Did find this useful guide for situations like ours, though:
http://community.spiceworks.com/how_to/show/24731-...
ganeshts - Saturday, August 23, 2014 - link
Shame about the lost data, but the link is definitely interesting.
In case of drive failures within accepted limits (1 for RAID-5, 2 for RAID-6), the NAS itself should be able to rebuild the array. If more drives are lost, RAID rebuild software can't help, since data is actually missing and there is no parity from which to recover the lost data.
That said, if there is a drive failure as well as a NAS failure, I would personally make sure to image the remaining live drives onto some other storage before attempting recovery using software (rather than trying to recover from the live disks themselves).
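(A minimal sketch of that imaging step with GNU ddrescue; the device and destination paths here are hypothetical:)

    # image each member disk before experimenting on it; the third argument
    # is a map file that lets an interrupted copy resume
    ddrescue -f -n /dev/sdb /mnt/backup/disk1.img /mnt/backup/disk1.map
    # then work from the image via a loop device instead of the live disk
    losetup --find --show /mnt/backup/disk1.img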
Navvie - Monday, September 1, 2014 - link
30TB RAID5? Please tell me you replaced that array with something more suitable.
DNABlob - Sunday, August 24, 2014 - link
Good article & research.These days for my personal stuff, I use cloud backups (CrashPlan) and a single disk or striped pair (space and/or performance). If quick recovery is imperative, I'll employ something like AeroFS to sync data between two hosts on the same LAN. Pretty decent setup if you don't need to maintain meta-data like owner & ACLs.
I'll spare you a long diatribe about software RAID5 and how a partial stripe write can silently corrupt data on a crash. As far as I can tell, this isn't fixed in Linux's RAID implementation. At the time, Sun was very proud of their ZFS / RAID-Z implementation, which fixed the partial write problem. For light write workloads, partial stripe writes are unlikely, but still a very real risk.
https://blogs.oracle.com/bonwick/en_US/entry/raid_...
KAlmquist - Monday, August 25, 2014 - link
In reply to DNABlob: As far as I know, no released version of Linux RAID has had a problem with silent data corruption, so there is no need for a fix.
The author of the article you've linked acknowledges that RAID 5 can be implemented correctly in software when he writes, "There are software-only workarounds for this, but they're so slow that software RAID has died in the marketplace." It is true that an incorrect implementation of RAID 5 could result in silent data corruption, but the same thing can be said of any software, including ZFS. ZFS includes checksums on all data, but those checksums don't do any good if a careless programmer has neglected to call the code that verifies them.
elFarto - Sunday, August 24, 2014 - link
The reason your mdadm commands weren't working is that you were attempting to use the disks themselves, not their partitions.
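(In other words, with assumed device names and partition numbering:)

    # wrong: the whole disk carries no md superblock
    mdadm --examine /dev/sdb        # reports "No md superblock detected"
    # right: the RAID member is the data partition on each disk
    mdadm --examine /dev/sdb5
    mdadm --assemble /dev/md2 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5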
mannyvel - Monday, August 25, 2014 - link
What the article shows is that if your device uses a Linux RAID implementation, you can get the data off of your drives when your device goes belly up by using free or commercial tools. While useful, you could have done the same thing by buying a new device and dropping your drives in - correct?
This isn't really data recovery, where your RAID craps out because of a two-drive failure or some other condition that whacks your data. This is recovery due to an enclosure failure. Show me a recovery where your RAID dies, not where your enclosure dies.
Lerianis - Friday, September 5, 2014 - link
Not always. Some machines are so badly designed that they INITIALIZE (wipe the drives) when old drives with data are put into them.
crashplan - Thursday, September 18, 2014 - link
True. Anyways, great article.
Filiprino - Monday, August 25, 2014 - link
Well, if you have a service managing a RAID volume (that is, you have the RAID volume mounted), you obviously have to unmount it first, so you have to stop the daemon...
Synology's FAQ assumes that the RAID volume is not mounted and that no daemon has interacted with the RAID device first.
So I would not recommend Windows either. And you can probably get by with more complicated cases with Ubuntu than with Windows and UFS Explorer.
Anyways, I'd use a GNU/Linux box with ZFS, which has no write holes.
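(So, on a system where something may already have touched the array, the safe order is roughly the following sketch; device names are assumptions:)

    umount /volume1            # make sure nothing is using the filesystem
    mdadm --stop /dev/md2      # stop the possibly half-assembled array
    # reassemble read-only for the recovery attempt
    mdadm --assemble --readonly /dev/md2 /dev/sdb5 /dev/sdc5 /dev/sdd5 /dev/sde5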
cyberguyz - Tuesday, August 26, 2014 - link
The best disaster recovery plan is backup, backup, backup. I've lost hundreds of gigabytes of data before, so I know how valuable having a solid backup process is.
My DS 412+ runs automated backups to external drives twice a week (rotated on a monthly basis).
batatudo - Tuesday, September 16, 2014 - link
I have a Synology DS211j, and last year I had a problem with one of the disks. At that moment I didn't have another disk to replace the failed one, so my priority was to back up the data from the good drive first. I searched the internet and found a program called "File Scavenger 4.2" for Windows, and I was able to recover the data using that program; everything was very easy. I think that program is very similar to UFS Explorer, and it is good to know we have several options to restore the data. After that event I also attached two external USB drives to my NAS and started making backups daily and weekly. I'm also backing up the most important folders to Amazon S3; it is very cheap and, best of all, you can access your data even through Internet Explorer or any other Amazon S3 client. Remember to make at least 3 backups, and one of them must be online or in another physical location.
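(With a tool such as the AWS CLI, that kind of offsite copy is a one-liner; the bucket name and paths here are hypothetical:)

    # mirror the most important folders to an S3 bucket
    aws s3 sync /volume1/photos s3://my-nas-backup/photos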
Midav - Thursday, June 16, 2016 - link
Only one month ago I said just the same thing: "I have a RAID, so my data is backed up." So now I know almost everything about this topic. This text is good, and it covers the most common RAID setups, but I can also recommend the blog hetmanrecovery (com). Hetman Software really helped me with data recovery, and if you are a beginner with these questions, it'll help you too.
mmuu296 - Sunday, January 21, 2018 - link
it is so interesting!!!!