1) "Of course this list is certainly not exhaustive, with many companies now offering online backup solutions. A quick search in your favorite search engine will provide dozens of options. Be sure to choose the one that works best for you."
2) Look at the 5th item on the bulleted list above the paragraph I just quoted...
CrashPlan is fantastic. I used to use JungleDisk with S3, but the software was neglected and became buggy and problematic. I gave it up and switched to CrashPlan. The client is easy to use, and backups seem to happen quickly and reliably.
CrashPlan's friend-to-friend option is amazing. I have 3 or 4 people backing up to my home NAS, and my personal pictures and important documents all back up to my PC at work.
For most regular consumers, CrashPlan is something I'd definitely recommend; it's pretty easy to use and has unlimited storage, plus if you like you can specify the encryption key that is used (though of course you then have to find a way to keep that safe instead). Given the pricing of cloud storage in general, it's also pretty well priced. I'm sure there are other cloud backup services, but CrashPlan is what I'm using.

Personally though, I've gone for the total overkill approach: my Mac's main system volume, which I'm about to switch over to RAID-5; a Time Machine backup volume on RAID-5; and a Synology NAS (no RAID, since it's only two-disk) that is also configured to run CrashPlan headlessly to back up my files. So I have a total of three redundant copies of my data, albeit one in the cloud that is usually a day or two behind and would take weeks to re-download. But in the event of a fire burning down everything else, I'd rather have that off-site protection.

Still, I'd personally recommend a local backup drive + NAS for most serious computer users, especially if working with that computer is your job. A single backup isn't enough IMO; the last thing you want is to be in the middle of restoring your system, only for the backup to fail as well.
Be really careful about RAID-5. It protects very well against a complete drive failure, but drive corruption or a drive that starts returning garbage will trash everything on the array. You need a RAID level that does double parity or checksums, such as RAID-6 or RAID-Z, to actually protect against almost all hardware failures. Of course, even then it still is not a backup.
The bigger problem is that with large modern disks, a drive failure in RAID5 means that you're extremely likely to encounter unreadable sectors trying to resilver a replacement disk. A drive that starts returning garbage during regular operation should cause no problem with any competent RAID implementation though.
Also, RAIDZ is single-parity - RAIDZ2, RAIDZ3, etc are the multi-parity versions. The other bonus with ZFS-based RAID implementations is full checksumming of all data and metadata on-disk, plus COW snapshots, and the latter means it can actually serve the role of a self-contained backup solution, using something like zfs-auto-snapshot to provide granular, aged snapshots of changed data.
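Since zfs-auto-snapshot came up, here is roughly what the manual equivalents look like; a minimal sketch assuming a pool named tank (the pool name and schedule are illustrative):

  # Read every block, verify checksums, and repair from redundancy
  zpool scrub tank
  zpool status tank
  # Recursive snapshot of every dataset in the pool, named by date
  zfs snapshot -r tank@backup-$(date +%Y-%m-%d)
  # Old file versions are then browsable under <dataset>/.zfs/snapshot/<name>/
  zfs list -t snapshot

A tool like zfs-auto-snapshot essentially just runs the snapshot step on a timer and prunes old snapshots by age.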
I haven't heard RAIDZ recommended for 10 years.

What are you even on about? ZFS was only widely available in November 2005.

Guess that's only eight and a half years, then.

ZFS RAIDZ implements strong checksums within each drive, such that it can reliably detect if a drive is returning bad data and ignore it. In some respects it's actually stronger than RAID6 in terms of its ability to deal with silent corruption. That's why NonSequitor wrote "double parity or checksums" (RAID6 is double-parity; RAIDZ is single-parity augmented with strong per-disk, per-chunk checksums).
If you're halfway competent, then it's not "extremely likely" that you'll encounter unreadable sectors during a RAID5 rebuild. There's a reason why both good RAID controllers and ZFS implement scrubbing (i.e. they can periodically read every disk end to end and remap any unreadable sectors). If you do that every couple of days, then the likelihood of encountering a new unreadable sector (one that has appeared since the last scrub) depends mainly on your rebuild time.
For example I have a 5-disk RAID5 array that I use for "cold" storage. I scrub it daily, and rebuilding to a hot-spare takes 6 hours (I've tested it several times, verifying the results against separate copies of the same files), which means that the maximum delay between the most recent scrub and the end of a rebuild is 30 hrs. The scrubs have only found one bad sector in ~2 years, so I respectfully submit that the likelihood of an additional failure within 30 hours of a scrub is pretty darned low.
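On Linux md RAID, that kind of scheduled scrub is simple to set up; a sketch, with the device name and schedule as examples:

  # Read and verify every sector of the array ('repair' also rewrites mismatches)
  echo check > /sys/block/md0/md/sync_action
  # Watch progress
  cat /proc/mdstat
  # Or schedule it nightly at 3am via /etc/crontab:
  # 0 3 * * * root echo check > /sys/block/md0/md/sync_action

Note that many distributions already ship a periodic md check cron job, so look before adding your own.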
Exactly. Anything above 2 TB per drive becomes really problematic in this regard. With 4 TB drives it's almost guaranteed that a RAID-5 rebuild will fail. IMHO, if you do RAID, do RAID-1.
RAID1 has exactly the same problems as RAID5: in the case of silent corruption it can't determine which disk is bad, and it's vulnerable to a single disk failure during a rebuild. The likelihood of such a failure is obviously lower (now you only have to worry about 1 other disk instead of 2 or more) but not hugely so. RAID6/RAIDZ2 is statistically much better until you get up to really high drive counts.
The "big boys" with truly mission-critical data do N-way replication, i.e. all critical data is replicated (n>=3) times on different systems.
+1 to CrashPlan. For my most important data, I have n+2 backups: n being the number of computers I have (OneDrive automatically syncs them all), plus a CrashPlan online subscription as well as a local CrashPlan backup. I also have Previous Versions running and use it on occasion, but I don't consider it a backup per se.
I plowed through all the competitors a few years ago, and Crashplan was the one I selected. It had the cheapest unlimited storage with version history, and the (for me) killer feature that there exists a Linux client. I have a FreeNAS box that I use for media storage. I can mount it as a drive on my Linux machine and the Crashplan client will back it up just as if it were a local drive. There is also an Android client that gives you access to all your files, functioning as a sort of personal Dropbox, without sharing but with better security.
I've had occasion to use my backups a couple of times and found it easy and speedy, much more so than I expected for a cloud service.
Everything in my house goes to a personally built server onto dedicated RAID storage drives. No accounts other than my personal administrator account have access to do anything but read.
Those drives are then backed up to the cloud via CrashPlan.
Simple, effective, and as foolproof as I can get for now.
One thing to note with Time Machine: You don't need to use the fancy interface for restoring files. Just browse your backup disk with the Finder (or on the command line), there's a directory for every backup from which you can just copy things over.
As to CrashPlan: I used that for a while, but found it to be utterly uncontrollable. The log files are a joke, the status mails arrive at random times (or not at all) and are useless ("Files: 117k, Backed Up: 99.9%"), and often enough when a backup didn't run for some reason, it's impossible to debug because there's no real error reporting. It may work somehow, but it has all the marks of something I don't want to rely on.
Yes, that's indeed a big point for Apple's Time Machine: that, and it being included free with every Mac. If necessary you can just go digging into a TM archive and pull out what you need.
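As an illustration of that, on an HFS+ Time Machine disk each backup is a plain directory tree (the volume, machine, and date names below are placeholders):

  # Every backup is a browsable snapshot of the source volume
  ls "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/"
  # Copy a folder straight out of a particular backup
  cp -R "/Volumes/Time Machine Backups/Backups.backupdb/MyMac/2014-05-21-090000/Macintosh HD/Users/me/Documents" ~/Restored/

The entries are mostly hard links, so they read like full copies while using little extra space.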
Hi Brett, great guide, but can you or others please check whether the OneDrive backup solution is throttled and capped at 355 KB/s under Windows 8.1 (desktop)? I've read many people complaining about that. Thanks.
I've been using Duplicati for a few years and it has been good. "It works with Amazon S3, Windows Live SkyDrive, Google Drive (Google Docs), Rackspace Cloud Files or WebDAV, SSH, FTP (and many more)." It uses rsync under the covers and I use it to backup to my RAID-5 NAS in the basement. I also perform offsite backups (rotate 1TB disks to and from my workplace).
After fifteen years of backing up, I can share the following:
- User-initiated backups (e.g. all USB, some network/cloud) agree with less than 0.1% of the human population (but hey, it is better than nothing).
- With consumer HDDs, RAID5 NAS are totally overrated and *in the real world* rarely protect data better than JBOD.
- RAID6 is better than JBOD, but with consumer disks the only real alternative to JBOD is ZFS (Linux md (which all NAS employ) + consumer HDDs = shaky unless you have a 24/7 sysadmin...).

So either build/buy a ZFS NAS or back up to the cloud. Or buy insanely expensive enterprise disks.
Are there any consumer grade NASes with ZFS support enabled by default now? The peanut gallery on yesterday's Synology review was arguing for ZFS being a key reason for rolling your own instead of buying an off-the-shelf NAS.

There's the FreeNAS Mini, which is a 4-bay NAS. I'm actually considering getting one to replace my current file server at home.

Yikes! At $1k diskless, that's well above the typical price for a consumer NAS.

But none of them have 16GB of ECC memory and a processor at this level, with dual Intel gigabit NICs and expansion ports to support 10G or other parts. It appears to be able to transcode 3 HD streams without the benefit of the acceleration shown in this product. So, perhaps a bit of a premium, but not that much.
I personally use RAID1 with a 2 bay NAS that's worked fine so far. Granted that I don't have very rigorous needs, but then this isn't for enterprise critical data.
I've had good luck with RAID5 on small scale arrays. The main reason to go to RAID6 is due to the chance of a disk failing during the rebuild process. Consumer NAS typically are not under that much load so the rebuild times are short. In an enterprise environment where disk counts are higher in an array as well as the load on the array, using RAID6 makes sense.
Personally I have an 8 drive RAIDZ2 array in a NAS4Free system that I use at home. Portability and reliability are some of the reasons I went with ZFS. So far it has been purely hands off once I got the system up and running. Admittedly it took a bit longer to get up and running as I'm doing some odd things like hosting virtual machines on the same system.
The reason for RAID 6 is that, statistically, 1 bit out of every 10^14 bits (~12.5 TB) read from a consumer hard drive is bad... with all drives operational, a RAID 5's parity can compensate for such a bad bit, but with a degraded RAID 5 the sector will be unrecoverable and you'll lose data. RAID 6 has double parity, so even with a drive down, if (when) a bad sector is encountered it is still possible to recover the data. RAID 5 is obsolete for large arrays.

If you know which sector is bad in RAID5, you can still recover the data. The trickier thing is silent corruption, where all blocks appear to be OK. There an error can be detected, but not necessarily which block contains it.

There is no way around it. If you are keeping data, you need to budget for 2x that space at a minimum; otherwise you cannot truly afford to keep that much data.
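To put that 10^14 figure in concrete terms, a back-of-envelope calculation (assuming the spec-sheet rate of one unrecoverable read error per 10^14 bits and independent errors; real drives vary):

  10^14 bits / 8 = 1.25 x 10^13 bytes, i.e. ~12.5 TB read per URE on average.
  A degraded 4-drive RAID 5 of 4 TB disks must read the 3 surviving drives: 12 TB = 9.6 x 10^13 bits.
  Expected UREs during the rebuild: 9.6 x 10^13 / 10^14 = ~0.96.
  Chance of a completely clean rebuild: e^(-0.96) = ~38%.

So at spec rates, a large RAID 5 rebuild is worse than a coin flip, which is the argument for double parity above.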
I just built my first system in nearly 20 years. I need a backup system. And so far, despite the fact that I have both Blu-ray and DVD burners in this box, Windows 8.1 does not appear to be willing to let me just burn a (series of) full system backup disc(s) once a week that I can take anywhere I want.

Isn't this 2014, or is Microsoft still stuck in 1988?
It's not a big consumer demand; that's why they are cutting back on backup. There are literally hundreds of third-party alternatives, from $0 up to any price you feel like paying for extra features and performance.
CrashPlan can back up to/from a NAS and/or network drives; it's not baked into the client out of the box, but there are workarounds. I pull files off a network drive at home, and we back up to our Synology NAS at work on servers ranging from 2003 R2 to 2012. Pre-2008 is more of a PITA (because you have to create a scheduled task), but it's still fairly easy to do.
The trouble with CrashPlan, or any of the other remote storage solutions, is that for larger backups you're severely limited by the physics of data transfer. For instance, at any given moment we have 1.5-2TB of active business files on our main volume and, depending on the day, at least 15GB to be backed up nightly. Sometimes, however, we have 100-200GB in the nightly backup.

It would take an eternity to upload our initial backup and an only slightly lesser eternity to download it again in case of total loss. One of the big backups probably could not be completed in one night, even though we have a reasonably fast 50/10 Mbps (nominal) connection. Instead we keep multiple redundant backups and regularly rotate them through off-site storage.
One of the things I've found from painful experience (mostly with Retrospect) is not to use backup software that stores in a proprietary format. There's simply too much risk of the software's recovery process not working as expected, at which point you're stuck.
Generally agreed. AWS has an option to do initial data import by shipping a box of hard drives. Any full system image or enterprise level cloud backup system needs to offer physical media import/export options.
If your business's daily volume of new data is high enough, you can still swamp nightly updates; but disk-based options would really extend the range of users who could effectively make use of such services.
Currently, for personal data, I've got full local backups, periodically rotated offsite USB drives, and document/media files backed up in Amazon's cloud. A full drive image in the cloud would be nice, but the recovery time is just too long. If my parents were running something faster than cheapskate DSL, I'd probably set up a NAS box at their house and sync to it; but currently I couldn't do that without crushing their connection.
Deduplication and compression would help here, but how much is entirely dependent on your data.
Though with such large data sets in a business environment, it sounds like a solution such as Commvault, CDP, or Avamar would be better suited. They still use proprietary formats, but at this level that is hard to avoid if you want features like deduplication.
As Kevin G said, deduplication would make that fairly easy.
There's nothing wrong with backups to tape for your situation, but tapes are a pain. Avamar or other backup systems would be able to handle that with ease though.
Availability, capacity, cost: pick two. It sounds like (for a business) you need an enterprise-grade solution, and if you need next-day availability, CrashPlan won't be able to deliver that. CrashPlan does offer initial drive seeding and backup-to-door; however, those also take a week. If a single day of downtime is unacceptable, you probably need something in-house combined with professional services that offer overnight rush delivery -- and that's $$$.
I took a much simpler approach. I have a hard drive which is just a clone of my entire computer, and I keep it in my desk at work. Once a week, I bring it home, run error checks, and do another clone onto the disk, take it back to work the next morning. I also have a local backup disk for files, a portable hard drive. The benefit is that one of my backups is off-site, and both of the backups are never plugged in during non-use, so there's no threat of power surges killing the drives. I'm only susceptible to fire or theft at this point, and that would have to happen to both my home and my work simultaneously to be a problem.
A drive at work is a good idea; an alternative (work isn't always an option) is to store the cloned drive in a safety deposit box which provides an extremely secure location. One probably wouldn't clone once a week, but once a quarter would protect against the worst case of total data loss.
I did the same: I'm using SyncToy between my internal data drives and FreeFileSync between my computer and two external HDDs. The external HDDs are entirely encrypted with TrueCrypt. I have a couple of external HDDs that are copies of my data drives (leaving Windows on C: alone). Then I just take a drive to work every week or two. Daily syncs (why bother with a backup program when one can use a sync program?) go to an encrypted USB stick. Works well, and with 2 TB HDDs costing around $100 there is no excuse for not having a couple of those.
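For what it's worth, the same one-way mirroring idea can be sketched with robocopy, which is built into Windows (the paths are examples; and note this is a sync, not a versioned backup, since /MIR propagates deletions):

  robocopy D:\Data E:\Backup\Data /MIR /R:2 /W:5 /LOG:E:\Backup\robocopy.log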
Backups using shadow volumes come with a limitation worth noting: you need enough free disk space to store another copy of your largest file. For example, say you downloaded a 10 GB installer for a new game; you'd need another 10 GB of free disk space for Windows' shadow volume to back it up. With the move to SSDs, this could be an issue in some cases.
I do agree that RAID IS NOT A BACKUP, but when backing up to a NAS, the NAS should be using RAID 1/5/6 etc. A paragraph on the introductory page does go into these points, but I've always felt the need to discuss backup reliability in this context. It helps clear up questions like 'if RAID isn't a backup, why are you backing up to RAID storage?' The answer is in that same paragraph: RAID protects against disk failure. In my experience, I typically need to hammer in the idea of 'what good is backing up to a hard drive if that hard drive dies?' as the case for RAID 1/5/6 on a NAS. This can be gleaned from the context of the article, but I've found it needs more emphasis.
The issue of RAID disk failure leads into one topic that I've found missing: media reliability. The article mentions hard drives, USB sticks, optical discs, and the cloud as targets for backup storage. (For consumer usage, I would say it is safe to omit tape, but it still exists.) How long the media is stored and its ability to retain data over time does matter. This is more of a long-term problem with USB and optical media, as corruption can creep in after several years. Hard drives can of course fail, but typically they're in an active environment, so you'd know exactly when one failed. With RAID it is possible to recover from failed media, but once an optical disc rots or a USB flash stick dies, the data on it is gone. The article does cover the media reliability of the cloud, which is unique: you continually have to pay. Stop paying and you lose your backup data. There is one open question with cloud backups, though, as none have been around for very long. Outages are also possible with the cloud, but so far many of the backup providers have been good in this regard.
For the built-in backup options for Windows 7/8 and OS X is there a way to limit the size of the backup without having to partition such that multiple computers can backup to one drive without directly competing with each other for space?
For Time Machine you mention that it'll automatically delay the scheduled backup if the backup location is unavailable. Does Windows 7/8 do the same? I'm thinking of laptops that are always out and about so hopefully they won't throw up distracting error prompts when the network store location is not available.
I don't have any Windows 7 machines to test, so I can't answer that. Windows 8 has an offline cache, though, which allows backups/restores while the backup device is disconnected:
Advanced Settings in File History
File History allows you to fine-tune how it works, including:
- Which target storage device is used
- How frequently files are checked and backed up
- How much space is used locally to cache backup versions of your files when the target backup device is disconnected
- How long backup files are retained
- Which folders in your libraries are excluded from backup
Have to say...I've been running Windows Server Essentials 2012 since I was sad about WHS going by the wayside, and I love it.
I'm running it in a VM on my ESXi server, it backs up all my clients with no issues. Then the WSE backs that up to a different storage pool (Solaris/ZFS), and then that gets kicked off-site.
Now I just need to find out a cheap solution to backup off-site the ~40TB worth of stuff on the file server (and then the upload speeds to actually back it up!).
I just lost 6TB to a failed RAID 5 array. Thank you, Seagate/China. The RMA drives are Malaysian, so maybe that gives hope. Anyway, you can talk backups all you want, but backing up 6TB is economical in neither time nor cost.
Backing up 6 TB is not an issue unless you modify all 6 TB between each backup, and even then it's really not a big deal. You can stand up 6 TB of network storage for not very much money.
It all comes down to whether or not the data is important to you.
Backing up 6TB is most definitely an issue. Local backups require several hundred dollars worth of additional hardware (closing in on $1000 depending on the type of NAS). Off site is even harder, cloud backup isn't even an option so you need some kind of sneakernet.
Gigaplex, what I'm saying is that it's not very difficult to achieve this. If you have 6 TB of data that's worth backing up, then I assume that data's worth the couple hundred dollars in hardware required to back it up.
Once you move into this kind of data storage requirement, obviously cloud backup is going to get expensive, but there are other options.
You just have to decide if it's worth backing up, or perhaps a subset of the data is worth backing up.
Before you dismiss any backup, all you have to do is decide how much the data is worth to you.
There's SyncBack, which has multiple feature tiers, the lowest being free. I guess it's technically not a backup solution but more of a backup helper; it allows you to schedule and back up specific files/folders (or whole drives) and much more. It was recommended by a friend.
SyncToy did not scale well to the hundred thousand picture files I had years ago... it started creeping along extremely slowly even on incremental backups. Maybe there has been some improvement somewhere? I didn't see any new version last time I looked.
SyncToy has been reliable for me. But lately I have shifted to FreeFileSync for syncs to my external HDDs. It seems to be faster and work better (sometimes the total file sizes on my internal backup drive and the external HDD differed with SyncToy - set to Echo - never figured out why). I have confidential work files on my external HDDs (WD Passports), so I encrypt everything with TrueCrypt. I can afford to lose an HDD, but wouldn't like the data to end up in the wrong hands.
Not so sure that is true. I have 'played' with RAID 0 for the last ten years and never (fingers crossed) had an array fail, neither HDD nor SSD. I run syncs 2-3 times per week to limit data loss should it happen. I'm using the built-in Intel RAID controllers on Asus motherboards. They seem to be very reliable.
However, I agree that it is a crying shame that Microsoft has left the home server market to wither on the vine. WHS was a good product. WHS 2011 was not the advance over the original that it should have been, because of Microsoft organizational politics. Still, it was a real bargain at $40. Windows Server Essentials 2012 is too expensive for home users to even consider.
I agree. I use Windows Server 2012 R2 because I got a license from DreamSpark, but there is absolutely no way I'd consider buying that for home server use. I'd probably just run a client version of Windows and use third-party programs to handle backups. It's unfortunate that they removed automatic system image backups from Windows 8.1.
WHS 2011 with StableBit DrivePool has been working for me. I plan on keeping it a while, but after that I'll just figure out a way to put a regular version of Windows on the machine and use third-party central backup programs. I also use my WHS server as a Minecraft server and media server.
Don't you mean 30TB, not 300TB? For Google Drive on 'Consumer Cloud and What I Do' page? If not, then 300TB is for sure the way to go. What a savings...
I mentioned Crashplan has a Linux client. I don't use Linux on my home computers, and neither do most people, so I didn't discuss it for the most part.
I plan on backing up my cousin's Windows computer (I owe him one; otherwise highly not recommended). Best I can tell, the way to do this (especially if you have some handy drives that will store the data compressed, but not uncompressed) is to install Linux on a removable drive (OpenSUSE looks promising, see below) and then use the dd command to copy the Windows drive completely, as an image, onto the Linux external drive. This gives you the option of either copying the entire system back (with everything already installed) or mounting the file via loopback and copying individual files. Note that you will likely want a compressed Linux filesystem to save space; this looks easiest with btrfs (thus OpenSUSE, and don't forget the force option, otherwise the compressor will give up before it hits all that empty space). Using this system for incremental backups requires a bit of programming (but is surprisingly easy with pyfuse).
Quite frankly, the dd "disk destroyer" command is so famously easy to get wrong (and thus write empty sectors over what you wanted to back up) that I would be afraid to include step-by-step instructions in something like this. You have been warned.
It would be nice to see if you could back up with something like Anaconda, especially for free.
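With that warning firmly in mind, here is roughly what the approach described above looks like. This is a sketch only; the device names are illustrative, so verify yours with lsblk before running anything:

  # Identify the Windows disk first; swapping if= and of= destroys data
  lsblk
  # Image the whole disk (assumed to be /dev/sdb here), compressing on the fly
  sudo dd if=/dev/sdb bs=4M conv=noerror,sync | gzip > /mnt/backup/windows.img.gz
  # Later: decompress, then loop-mount one partition read-only
  # (offset = partition start sector x 512; 2048 is a common default, check with fdisk -l)
  gunzip /mnt/backup/windows.img.gz
  sudo mount -o loop,ro,offset=$((2048*512)) /mnt/backup/windows.img /mnt/win

Note that conv=noerror,sync pads unreadable blocks with zeros rather than aborting, which is usually what you want for a last-resort image, but it silently loses bad sectors.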
Ah, sweet, I didn't know there was still a GUI method for doing a system backup in Windows 8.1! I thought you had to use wbadmin. I was wondering why the Windows 8.1 recovery environment still supported system image restores when I thought there was no way to create a system image without the command line.
Anyway, for Previous Versions on Windows 8.1: they're still there! The tab is just not shown for local drives. If you access a network share, you can still see the previous versions of files made with Volume Shadow Copy. Turn on System Protection for all your hard drives (it's on for the system drive by default) and then access your own computer via its UNC path (\\localhost\C$, or whatever other share you want to access). Then when you open Properties on a folder or file, the Previous Versions tab is present!
Another way to access volume shadow copies of files is to use ShadowExplorer.
Previously I used Dropbox to sync between three systems, including my file server, and then periodically do a manual backup to a dedicated backup drive. Recently this failed me for the exact reason stated early in this article: user error. After doing a restore on one of my computers, Dropbox then synced forward and wiped out one of my folders almost completely because the computer was restored to an earlier date. I didn't notice for a long time and the rollback period on Dropbox had elapsed. I would advocate a different solution or more frequent archiving.
Re. Windows 8.1: 'Unfortunately, you can’t add user defined folders here which you do want backed up.' True, but can't you add those folders to a library that gets backed up?
My system is a bit more complicated but it works for me:
1) Office documents and such are stored in OneDrive.
2) OneDrive syncs to my Synology NAS with BitTorrent Sync.
3) Time Machine backs up to my Synology NAS whenever I am home.
4) The Synology NAS backs up nightly to Amazon Glacier with Glacier Sync.
This provides multiple local backups as well as a cloud backup, and it's mostly automatic. I don't back up directly to the cloud from my MacBook Pro simply because when I am out and about, the internet connection usually sucks too much to bother.
I didn't really mention Glacier, but it's easily the most cost-effective cloud storage. Obviously it has its drawbacks, but price sure isn't one of them. Are you happy with Glacier?
Some have found Glacier pricing difficult to figure out. I have about 100 GB of pictures that I wanted to back up, but I was warned on some photo forums that the per-item pricing can get costly and that I should zip by year, or in some other form, so that fewer individual files are transferred. That sounded like too much human interaction, so I passed on Glacier inside my Synology NAS for now.
How come nobody mentioned Copy.com? You get a lot of space, especially if you invite people; at the moment I have 62GB, plus you get 5GB per referral.

I use it for my photos; this way I deliver photos to my clients via a public link.
I use Windows Home Server with DrivePool to duplicate data across random drives. The automatic backups work great: it stores backups for the last 3 days, a backup from 3 weeks ago, and one from 3 months ago, for each PC in the house. It works perfectly, I never have to think about it, and pulling data out of a backup is easy.
Okay, let me see if I understand this clearly: A "backup" is when you copy your important data to a different storage device, so that if anything happens to your original device, you still have a copy of your data, right? And there are various ways to do this, but they all basically involve copying your data from one device to another. I think even I can understand that.
Still, it's important to have articles on things like this. It keeps writers busy and off the streets—so naturally, crime goes down... Good thinking!
I understand why I need revision control for files, but what about say, my music collection, which I just transcoded from WMA lossless to FLAC? No hash based deduplication is going to realize that they're the same... if I had revision control working on that, I would have an extra 10 GB of stuff sitting around...
For anyone using the Windows 7 built in backup, have you noticed if it re-schedules backups if it misses a time? My machine is typically powered off if I'm not using it, so hopefully Windows is smart enough to just do the backup as soon as it get the chance.
From my experience, Windows 7/Vista built-in backup doesn't automatically delete old backups when the backup disk becomes full (and it fills up quickly). The user must manually delete old backups by clicking "Manage Space" and deleting the old backups.
That's easy for everybody reading this Anandtech article, but not so easy for my computer novice grandmother.
For novices, I install the free version of Crashplan and set it up to automatically delete old backups every 90 days (Settings -> Backup tab -> Frequency and Versions -> Remove deleted files).
There is one way to get around the Windows 8.1 backup limitations. File History also allows you to backup Library locations. So... All you have to do is make any desired backup (folder, drive, etc...) into a library location (right click drive/folder -> Include in library -> Create new library [or choose an existing one]).
SpiderOak is another Consumer Cloud backup service that has unlimited versioning with no time limits. Multiplatform support. It is slower than some of these other services because, since your files are encrypted, they don't deduplicate across different users the way that, e.g., Dropbox does.
I used to use Acronis for backups to external drives but late last year switched to Macrium Reflect and a File Server housing 24TB of storage inside and a NAS with 12TB.
If you're good with MS-DOS scripts Macrium has a lot of functionality that you can get access to. Not exactly user friendly but very useful.
You neglected to mention one of the best cloud backup solutions - SpiderOak. They ran a promotion earlier this year on "backup day" to give unlimited storage for $120 per year. They support Linux (GUI and headless CLI), Windows, Mac, iOS, Android, and Blackberry OS. And unlike a lot of cloud backup services, you can back up network locations - so you can run SpiderOak on one computer and back up data from other computers if they're on the same network and have shares accessible. They do versioning and deduplication (and pass the space savings of deduplication along to you). They also don't retain an encryption key to your data as part of their "zero knowledge" policy. They also allow you to specify a local target to use as a local repository so that when you need to restore something, it doesn't necessarily have to pull it down via the Internet, just that local device whether it's a NAS device or another hard drive in one of your computers. It doesn't do image backups, but if you're looking for image backups, just use the built in utility in Windows to create one and back up the location of those files to the SpiderOak cloud.
I was wondering what backup strategy you were using, and I'm happy to hear that it's a WHS 2011 solution. I'm rocking the same setup and added Time Machine support to my WHS to back up Macs.
The whole point of RAID *is* to protect you from things like bit rot. The difference between RAID5 and RAID6 is that RAID6 protects you from two rotted bits in a single sector (more specifically, two different drives with failures in the same location). You should be able to avoid this with RAID5 by periodically reading the entire drive and correcting any single error you find (called "scrubbing").
It's not really a sure thing with the RAID though. The array has no idea which version is correct, and which one is rotten. The best it can do is take a consensus and go with whatever version of the file the most drives agree is correct. They did an article about bit rot over at Ars Technica, and the author's RAID 5 happily used the rotten version.
Not really, wumpus. The whole point of RAID (minus 0) is to protect you from a disk failure. By itself it does not deal with bit rot at all. On a mirror, who is right? In typical implementations, disk 0 is presumed to have the correct copy. ZFS (and I believe MS's knockoff, ReFS) implemented scrubbing with checksumming to give a means of identifying the correct copy.
I use Microsoft's free tool SyncToy. With it you can synchronize folders to anywhere else, like an external HDD. Of course only updates are synced, and you can specify in which direction to sync. I use it to back up my media collection. The external hard drive can then be stored off-site (at work). The advantage I see with this is that the media files are copied over and are readable on the backup directly: you can take the external HDD on the road and have your full media collection at hand. With image backups you would first have to restore them before being able to use the files.
Important documents should be stored in the "cloud". This can be a simple encrypted zip sent by email and it will be stored on the email server (say gmail) or whatever. That was possible like over a decade ago already.
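For example, a minimal sketch with the Info-ZIP zip tool (the names and paths are illustrative):

  # -e prompts for a password and encrypts; -r recurses into the folder
  zip -er important-docs.zip ~/Documents/Important

One caveat: classic zip encryption is weak by modern standards, so for anything truly sensitive an AES-capable archiver (e.g. 7-Zip) is a better bet.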
I do most of my backups from Linux: I use rsync to sync my home directory and other relevant files outside of /home and ntfsclone to backup my Windows drives. The latter option is definitely slower than incremental backups or somesuch but allows me to restore a Windows installation very quickly w/o need for reinstalling. It's also handy when moving Windows from a hard drive to another.
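For anyone unfamiliar with those two tools, a minimal sketch of that workflow, with paths and device names as examples:

  # Incremental sync of /home to a backup disk; -a preserves permissions and
  # timestamps, --delete removes destination files that were deleted at the source
  rsync -a --delete /home/me/ /mnt/backup/home/
  # Image only the used blocks of an NTFS partition (much smaller than a dd image)
  sudo ntfsclone --save-image --output /mnt/backup/windows-c.img /dev/sda2
  # Restore it later (this overwrites the target partition)
  sudo ntfsclone --restore-image --overwrite /dev/sda2 /mnt/backup/windows-c.img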
Another aspect to backups is bit rot. Both on the backup media (are the files in the backup still good?) and on the live media (do I need to restore this file from backup, as it has become corrupted?)
For a decent backup system, I want checksums stored with the backed-up data and verified regularly. I also want the backup to actually read all files to be backed up from the source, even if they are not supposed to have been modified since the last backup, and check that they still have the same checksum. Unfortunately, this takes rather a long time, but I don't see any alternative if I don't want to discover months down the line that some rarely accessed files have become corrupted and, worse, been backed up in a corrupted state.
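A rough sketch of that idea with standard tools (paths and the manifest location are examples; a real setup would add scheduling and alerting):

  # After each backup, record a checksum for every file
  cd /mnt/backup && find . -type f -print0 | xargs -0 sha256sum > ~/manifests/backup-$(date +%F).sha256
  # Periodically re-read everything and compare against an earlier manifest
  cd /mnt/backup && sha256sum --check --quiet ~/manifests/backup-2014-05-21.sha256

This catches rot on the backup side; catching rot on the live side means keeping and re-checking a manifest of the source files too.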
>Windows 8 fixes that issue, but creates new ones by no longer allowing automated image backups

Well, I didn't think supposed IT pros at AnandTech would be so casual as to be afraid of the command line. If you cannot live in this world without regular image backups, what prevents you from adding a task in Task Scheduler with a wbadmin call? Come on now.
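For reference, here is a sketch of such a task (run from an elevated prompt; the target drive, task name, and schedule are examples):

  schtasks /Create /TN "WeeklyImageBackup" /SC WEEKLY /D SUN /ST 02:00 /RU SYSTEM /TR "wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet"

The -allCritical flag includes everything needed to restore a bootable system, and -quiet suppresses prompts so the task can run unattended.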
There are extended tutorials for Windows' native backup settings, but for Windows Server Essentials this is a very compressed description. Could you explain it a bit more? For example, "Once the connector software is installed" is a big shortcut: after installation, is the backup set up from the server or from the local machine? And how good is Linux/Mac backup support? That's where the real differences are; the Windows side of a backup solution isn't a big problem these days. In my experience the best solutions are from Acronis and Paragon, but they have lots of limitations and known issues.
We have two laptops and two desktops. Each has a boot drive and a separate physical backup drive for images using Acronis. All pictures/music/data reside on the server, which has separate backup drives for its OS and data (again with Acronis). I'll be looking into S3 again as a result of this article (last time I looked, I thought 2TB was too much). My wife has an external drive we use as off-site backup of her important data (the downside is that it holds only current data).
+1 for Acronis True Image. Amazing product, I've purchased the newer editions every time a major Windows OS is released (TrueImage 2008, TrueImage 2010, TrueImage 2013, now TrueImage 2014)
I purchased TrueImage 2014 because it introduced integrated cloud backup, which works extremely well, and they give you 250GB free for the first year ($50/year after that). Since I upgraded from 2013, it only cost $20 for the upgrade to 2014. You can also sneakily use the Intel SSD Migration software as your "upgrade" edition if you happen to have an Intel SSD, essentially getting you the full software and a year of cloud storage for $20.
The downside is that additional storage is expensive, whereas CrashPlan and Carbonite have what are virtually unlimited plans starting at $100, some plans even covering "unlimited" users, making them perfect for small business.
Either way, great article. We need to spread the backup knowledge so everyone does it, because I think the reason most people don't back up is that they don't know how.
Brett- the primary con of cloud services that I think must always be kept in mind is the consequence of your provider going out of business. We've seen this before, and so long as we see newcomers offering unlimited storage cheap as an initial lure to get customers, we'll continue to see it. If it's just your backup, then the cost is the effort required to identify another and get the first full dump done. If we're in the 100s of gigs and beyond, that is significant. So my philosophy is to pick a stable vendor who is making a profit on me, not finding the one offering terms I can exploit. Generally this means pricing based on data size, and a preference to very stable firms like Amazon or Google.
Notebook users should really, really, have a networked backup target as part of their mix.
External HDDs don't really cut it for Notebook users, unless they regularly "dock" with a monitor or USB hub connected to the drive. Some people do, but I know many many that don't, and while they may have the best of intentions, they will not remember to hook up the external drive on a regular basis.
A network target on their LAN will ensure that automatic backups happen in the course of regular use. A publicly accessible network target, like CrashPlan Cloud, AWS Glacier, or even CrashPlan's PTP with port mapping or UPnP enabled, allows automatic backups to happen whenever they have an internet connection. Anything less is a disaster waiting to happen.
One issue I had with Dropbox, Box, and Copy is that they all wanted to set up their own directory and do the backup from there. If I have a well-organized set of drives with various folders and subfolders, I'd like to be able to choose what to back up and what to skip (as I can do in CrashPlan). Have any of the cloud services mentioned above made it so you can choose your own directories to back up?
If you purchase the OS X Server app ($20) for one of your macs, you can enable networked Time Machine backups for the other macs on your network. I have an external drive connected to my iMac that my wife's Macbook Pro backs up to wirelessly. While it's not technically "built-in" and does come at a cost, it's not "third party" either.
I have been using Acronis for years to backup my main OS drive into my data drive and then do a copy of the whole data drive onto an external hard drive.
I switched to Windows 7's built-in backup tool once, to replace Acronis and see if I could get by with a free tool. Well, I corrupted my Windows 7 OS once, and after I restored the image, a ton of programs didn't work, including Microsoft Office. I tried uninstalling and reinstalling some programs, but for some reason there were still things messed up. I had to do a clean install of Windows 7, and I vowed never to use the Windows 7 built-in backup again. Since my Acronis version was the old 2009 one, I went ahead and got the 2013 version, and now that's what I use for backups. I have restored images from Acronis before (the 2009 version), so I know I can at least trust them.
I'm not too fond of using the cloud to backup files. I used to put some non-private files on megaupload, and we all know how that went - goodbye megaupload. Now I just fear any sort of cloud storage as a backup - I simply use it for syncing, and then I back up my cloud data locally.
I also tried a NAS once to back up both my and my girlfriend's computers, but that WD MyBook Live (before they went to this whole MyCloud thing) ended up dying after a random power outage we had. Granted, it was a single-drive NAS box, but I thought I could live with it. Nope; my external drive has been my main backup source ever since. It sure isn't any sort of advanced backup solution, but it does the job for me.
Great article, learnt a lot. I just copy my documents onto two USB hard drives on a weekly basis, one kept in my computer bag, the other in the office. I have a MacBook Pro and an iMac with files shared between them, so it's a bit of a nightmare to keep track of the most up-to-date ones.
A question: would things be easier if I invested in a Time Capsule and used it with Time Machine? Would Time Machine work with both computers on the one Time Capsule, or would I have to have one for each machine? If I need two, then it starts to get expensive.

Looking forward to getting some useful feedback to decide which way I should go.

Thanks Brett Howse, think that's the way I'll go then, as it's personal stuff and I have no real need for cloud storage. My off-site hard drive will be there in case I get broken into or there's a flood or fire.
If you use full disk encryption on your computer make sure your NAS/local drive backups are encrypted as well!
I turned off Time Machine and switched to using CrashPlan for both local and cloud backups. I get the same every-15-minutes snapshots as with Time Machine, but I found CrashPlan more reliable.
I also use SuperDuper! to make a bootable clone nightly.
While my main storage is RAID-5, the external drive I use for backups is RAID-0. With the redundancy of the RAID-5 and offsite of crashplan I figure the risk of losing the local backup is acceptable. I'm not in dire need of an infinite timeline of files, the important ones are in the offsite backup anyway. So losing a year of backups and starting over with new drives is no biggie.
Great article, Brett Howse. I've been using ShadowProtect software to back up my desktop PC's C: drive to a second internal drive for about 6 years. It never fails. I have 23 GB on my main drive, which takes 12 minutes to back up. OS: Windows 7.
I use Windows 8.1 and DriveBender to pool my drives, à la WHS, as my NAS. The awesome thing about Drive Bender is that it stores the data in NTFS, so if something craters I can still grab the data off the drives without worrying about RAID. Also, selective folder duplication across drives is awesome. Some stuff needs backup, some stuff does not.
I'm less looking for a backup tech and more for an archive tech. I want to put my data (photos/documents/PDFs) onto a server that can index it for metadata and full-text search, and ultimately offload the files onto DVD/Blu-ray discs for long-term storage.

I'd expect the metadata index to be fully backed up to the cloud and the files to be kept safe on media.
Except this: "Here is a list of several vendors offering their own take on cloud backups: Arq, Backblaze, Carbonite, Cloudberry, Crashplan, JungleDisk, Mozy."
Another one to consider: unRAID. It uses filesystem-level RAID-ing, with one parity disk. The biggest pro is that if two disks fail, you get to keep your data on the other disks, as opposed to having to resort to very expensive and not fully effective specialist recovery services with RAID-5. You can expand the cluster to (IIRC) 23 drives max, with a separate cache drive if it seems too slow for you. The cons are: it's not free, and you have to build your own NAS for it. But so far it turned out to be best for me.
I keep it simple. Once every month or 3 I'll backup my local TB media drive on my main computer to an external 2TB drive. The data is now duplicated and not in danger of electric surge. Fire/flood/etc. still not protected but OK.
About every 4-6 months I'll take the 2TB drive to my parents' house and back up the new files to their media computer. That takes the danger of disaster out of the equation. Unless both houses suffer catastrophe (we are only ~30 miles from each other...), there is little risk of data loss.

Since this is all mechanical HDDs, I'm wondering what people think about recopying files. I just continuously add and update the files but never "refresh" the drives. Is this something that should (very infrequently) be done? I.e., format a drive and then reload it with the same files?
Good article. And people, DO NOT think it won't happen to you. I was luckily able to recover a coworker's computer after a power outage. It fried his PSU but fortunately stopped there, and I was able to grab his family media without issue. They had NO backup, nothing. You would have thought that after this they would take my advice and back up? Nope. I doubt they will be as lucky next time.
I back up all my irreplaceable stuff (family photos and home movies for example) on LTO tape. Got a Dell SCSI LTO2 drive for next to nothing and the tapes are dirt cheap and last decades. LTO1 tapes are so cheap I even use them for less important stuff like backups of all my Steam game installers. Tapes are certainly not the most popular solution for consumers but for long term archival use I've yet to find anything better.
Time Machine is simple yes, but you can't boot from it. And if you're on a new system, needing files from an old backed up system, it gets awfully problematic.
Why no mention of StorageCraft ShadowProtect? 😩 Also, one has to figure out which data is really important enough to back up in multiple ways. A lot of people just back up everything: all their personal documents, wedding pictures, and all their movies, TV series, music, etc. All that entertainment media can be recreated from originals or redownloaded, so why even back it up? I think most people do it because, if disaster hits, they couldn't remember what they had lost, so they back up every unnecessary file.
Instead, include a list of all your easily recreatable files (with hashes) in the important backup that is replicated offsite, to the cloud, etc. That way you can easily go back, see that "Ohh, I lost my Days of Our Lives TV folder," and just redownload or re-rip the DVDs. So much space is saved by not backing up unimportant data you could easily recreate.
Computer Repair Computer repair is the process of identifying, troubleshooting and resolving problems and issues in a faulty computer. Computer repair is a broad field encompassing many tools, techniques, and procedures used to repair computer hardware, software or network/Internet problems. Our company provides the best service for computer repair Our company provides 24*7 services to the customer. For more details visit our site:
Cloud data backups can help you to protect you from your data getting stolen or corrupted. With the help of https://spinbackup.com/products/google-apps-backup... you will be able to do a google apps backups and preserve your G Suite. They offer an impressive variety of options for you. You can fast, easy search for your backed up items to help you recover lost data immediately and much more.
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
133 Comments
Back to Article
tribunal88 - Wednesday, May 21, 2014 - link
Any reason that CrashPlan wasn't considered?DanNeely - Wednesday, May 21, 2014 - link
1) "Of course this list is certainly not exhaustive, with many companies now offering online backup solutions. A quick search in your favorite search engine will provide dozens of options. Be sure to choose the one that works best for you."2) Look at the 5th item on the bulleted list above the paragraph I just quoted...
antef - Wednesday, May 21, 2014 - link
CrashPlan is fantastic. I used to use JungleDisk with S3, but the software was forgotten and became problematic and buggy. I gave it up and switched to CrashPlan. The client is easy to use and backups seem to happen fast and reliably.Kenazo - Tuesday, May 27, 2014 - link
Crashplan's friend to friend option is amazing. I have 3 or 4 people backing up to my home NAS, and my personal pictures and important documents all back up to my PC at work.Haravikk - Wednesday, May 21, 2014 - link
For most regular consumers, CrashPlan is something I'd definitely recommend; it's pretty easy to use and has unlimited storage, plus if you like you can specify the encryption key that is used (though of course you then have to find a way to keep that safe instead). Given the pricing of cloud storage it's also pretty well priced. I'm sure there are other cloud backup services, but CrashPlan is what I'm using.Personally though I've gone for the total overkill approach; I have my Mac's main system volume which I'm about to switch over to RAID-5, a Time Machine backup volume on RAID-5, a Synology NAS (no RAID since it's only two-disk), and the NAS is also configured to heedlessly run CrashPlan to backup my files. So I have a total of three redundant copies of my data, albeit one in the cloud that is usually a day or two behind, and would take weeks to re-download, but in the event of a fire burning down everything else I'd rather have that off-site protection.
Still, I'd personally recommend local back-up drive + NAS for most serious computer users, especially if working with that computer is your job, as a single backup isn't enough IMO, as the last thing you want is to be in the middle of restoring your system, only for the backup to fail as well.
NonSequitor - Wednesday, May 21, 2014 - link
Be really careful about RAID-5. It protects very well against a complete drive failure, but drive corruption or a drive that starts returning garbage will trash everything on the disk. You need a RAID level that does double parity or checksums, such as RAID-6 and RAID-Z, to actually protect against almost all hardware failures. Of course it still is not then a backup.pdf - Wednesday, May 21, 2014 - link
The bigger problem is that with large modern disks, a drive failure in RAID5 means that you're extremely likely to encounter unreadable sectors trying to resilver a replacement disk. A drive that starts returning garbage during regular operation should cause no problem with any competent RAID implementation though.Also, RAIDZ is single-parity - RAIDZ2, RAIDZ3, etc are the multi-parity versions. The other bonus with ZFS-based RAID implementations is full checksumming of all data and metadata on-disk, plus COW snapshots, and the latter means it can actually serve the role of a self-contained backup solution, using something like zfs-auto-snapshot to provide granular, aged snapshots of changed data.
Morawka - Wednesday, May 21, 2014 - link
i havent heard RAIDZ recommended for 10 yearspiroroadkill - Wednesday, May 21, 2014 - link
What are you even on about?ZFS was only widely available in November 2005.
Mr Perfect - Friday, May 23, 2014 - link
Guess that's only eight and a half years then.patrickjchase - Thursday, May 22, 2014 - link
ZFS RAIDZ implements strong checksums within each drive, such that it can reliably detect if a drive is returning bad data and ignore it. In some respects it's actually stronger than RAID6 in terms of its ability to deal with silent corruption. That's why NonSequitor wrote "double parity OR checkums" (RAID6 is double-parity, RAIDZ is single-parity augmented with strong per-disk, per-chunk checksums).If you're halfway competent then it's not "extremely likely" that you'll encounter unreadable sectors during a RAID5 rebuild. There's a reason why both good RAID controllers and ZFS implement scrubbing (i.e. they can periodically read every disk end to end and remap any unreadable sectors). If you do that every couple days then the likelihood of encountering a new (since the last scrub) unreadable sector may or may not be high depending on your rebuild time.
For example I have a 5-disk RAID5 array that I use for "cold" storage. I scrub it daily, and rebuilding to a hot-spare takes 6 hours (I've tested it several times, verifying the results against separate copies of the same files), which means that the maximum delay between the most recent scrub and the end of a rebuild is 30 hrs. The scrubs have only found one bad sector in ~2 years, so I respectfully submit that the likelihood of an additional failure within 30 hours of a scrub is pretty darned low.
beginner99 - Thursday, May 22, 2014 - link
exactly. anything above 2 TB drives becomes really problematic in this regard. With 4 TB drives it's almost guaranteed a RAID-5 rebuild will fail. IMHO if you do RAID, do RAID-1.patrickjchase - Thursday, May 22, 2014 - link
RAID1 has exactly the same problems as RAID5 - In the case of silent corruption it can't determine which disk is bad, and it's vulnerable to a single disk failure during a rebuild. The likelihood of such a failure is obviously lower (now you only have to worry about 1 other disks instead of 2 or more) but not hugely so. RAID6/RAIDZ2 is statistically much better until you get up to really high drive counts.The "big boys" with truly mission-critical data do N-way replication, i.e. all critical data is replicated (n>=3) times on different systems.
jimhsu - Wednesday, May 21, 2014 - link
+1 to crashplan. For my most important data, I have n+2 backups: n being the number of computers I have (meaning that Onedrive automatically syncs them); 2 being a crashplan online subscription as well as a local crashplan backup. I also have restore previous versions running, and use it on occasion, but don't consider it a backup per se.pjcamp - Wednesday, May 21, 2014 - link
I plowed through all the competitors a few years ago, and Crashplan was the one I selected. It had the cheapest unlimited storage with version history, and the (for me) killer feature that there exists a Linux client. I have a FreeNAS box that I use for media storage. I can mount it as a drive on my Linux machine and the Crashplan client will back it up just as if it were a local drive. There is also an Android client that gives you access to all your files, functioning as a sort of personal Dropbox, without sharing but with better security.I've had occasion to use my backups a couple of times and found it easy and speedy, much more so than I expected for a cloud service.
cknobman - Wednesday, May 21, 2014 - link
Everything in my house goes to a personally built server onto dedicated RAID storage drives. No accounts other than my personal administrator account have access to do anything but read.Those drives are then backed up to the cloud via CrashPlan.
Simple, effective, and as foolproof as I can get for now.
uhuznaa - Wednesday, May 21, 2014 - link
One thing to note with Time Machine: You don't need to use the fancy interface for restoring files. Just browse your backup disk with the Finder (or on the command line), there's a directory for every backup from which you can just copy things over.To CrashPan: I used that for a while, but found it to be utterly uncontrollable. The log files are a joke, the status mails arrive at random times (or not at all) and are useless ("Files: 117k, Backed Up: 99.9%") and often enough when a backup didn't run for some reason it's impossible to debug because there's no real error reporting. It may work somehow, but it has all the marks of something I don't want to rely on.
NCM - Wednesday, May 21, 2014 - link
Yes, that's indeed a big point for Apple's TimeMachine — that and it being included free with every Mac. If necessary you can just go digging into a TM archive and pull out what you need.SkateboardP - Wednesday, May 21, 2014 - link
Hi Brett,great guide but can you or others please check if the OneDrive backupsolution is throttled and capped at 355kb/s under win8.1 (Desktop)? I read that many people complaining about that. Thanks.
plext0r - Wednesday, May 21, 2014 - link
I've been using Duplicati for a few years and it has been good. "It works with Amazon S3, Windows Live SkyDrive, Google Drive (Google Docs), Rackspace Cloud Files or WebDAV, SSH, FTP (and many more)." It uses rsync under the covers and I use it to backup to my RAID-5 NAS in the basement. I also perform offsite backups (rotate 1TB disks to and from my workplace).bernstein - Wednesday, May 21, 2014 - link
after fifteen years of doing backups i can share the following:
- user-initiated backups (e.g. all usb, some network/cloud) suit less than 0.1% of the human population (but hey, it is better than nothing)
- with consumer hdds, raid5 nas are totally overrated and *in the real world* rarely protect data better than jbod.
- raid6 is better than jbod, but with consumer disks the only real alternative to jbod is zfs (linux md (which all nas employ) + consumer hdds = shaky unless you have a 24/7 sysadmin...)
so either build/buy a zfs nas or backup to the cloud.
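For illustration, a minimal sketch of what standing up such a pool might look like; the device names and the pool name 'tank' are assumptions, not a recipe:
$ sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg   # six-disk double-parity (RAIDZ2) pool
$ sudo zfs set compression=lz4 tank                                                     # optional: transparent compression
$ sudo zpool status tank                                                                # verify the pool is online and healthy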
bernstein - Wednesday, May 21, 2014 - link
or buy insanely expensive enterprise disks
DanNeely - Wednesday, May 21, 2014 - link
Are there any consumer grade NASes with ZFS support enabled by default now? The peanut gallery on yesterday's Synology review was arguing for ZFS being a key reason for rolling your own instead of buying an off the shelf NAS.
questionlp - Wednesday, May 21, 2014 - link
There's FreeNAS Mini, which is a 4-bay NAS. I'm actually considering getting one to replace my current file server at home.
DanNeely - Thursday, May 22, 2014 - link
Yikes! At $1k diskless, that's well above the typical price for a consumer NAS.
bsd228 - Thursday, May 22, 2014 - link
But none of them have 16GB of ECC memory and a processor at this level, with dual Intel gigabit NICs and expansion ports to support 10G or other parts. It appears to be able to transcode 3 HD streams without the benefit of the acceleration shown in this product. So, perhaps a bit of a premium, but not that much.
CadentOrange - Wednesday, May 21, 2014 - link
I personally use RAID1 with a 2-bay NAS that's worked fine so far. Granted I don't have very rigorous needs, but then this isn't enterprise-critical data.
Kevin G - Wednesday, May 21, 2014 - link
I've had good luck with RAID5 on small scale arrays. The main reason to go to RAID6 is the chance of a disk failing during the rebuild process. Consumer NAS typically are not under that much load, so rebuild times are short. In an enterprise environment, where disk counts in an array are higher as well as the load on the array, using RAID6 makes sense.
Personally I have an 8 drive RAIDZ2 array in a NAS4Free system that I use at home. Portability and reliability are some of the reasons I went with ZFS. So far it has been purely hands off once I got the system up and running. Admittedly it took a bit longer to get up and running, as I'm doing some odd things like hosting virtual machines on the same system.
MrBungle123 - Friday, May 23, 2014 - link
The reason for RAID 6 is that statistically 1 bit out of every 10^14 bits (about 12TB) read from a hard drive is bad... with all drives operational a RAID 5's parity can compensate for said bad bit; with a degraded RAID 5 the sector will be unrecoverable and you'll lose data. RAID 6 has double parity, so even with a drive down, if (when) a bad sector is encountered it is still possible to recover the data. RAID 5 is obsolete for large arrays.
Kevin G - Friday, May 23, 2014 - link
If you know which sector is bad in RAID5, you can still recover the data. The trickier thing is silent corruption where all blocks appear to be OK. There an error can be detected, but not necessarily which block contains it.
SirMaster - Wednesday, May 21, 2014 - link
There is no way around it. If you are keeping data, you need to budget for 2x that space at a minimum, otherwise you cannot truly afford to keep that much data.
Mark_gb - Wednesday, May 21, 2014 - link
I just built my first system in nearly 20 years. I need a backup system. And so far, despite the fact that I have both Blu-ray and DVD burners in this box, Windows 8.1 does not appear to be willing to let me just burn a (series of) full system backup disk(s) once a week that I can take anywhere I want.
Isn't this 2014, or is Microsoft still stuck in 1988?
theduckofdeath - Wednesday, May 21, 2014 - link
It's not a big consumer demand; that's why they are cutting back on backup. There are literally hundreds of 3rd party alternatives, from $0 up to any price you feel like paying for extra features and performance.
Duckeenie - Wednesday, May 21, 2014 - link
You almost answer your own question here. Discs in 2014?
zero2dash - Wednesday, May 21, 2014 - link
Crashplan can back up to/from a NAS and/or network drives; it's not baked in to the client out of the box, but there are workarounds to do so. I pull files off a network drive at home, and we back up to our Synology NAS at work on servers ranging from 2003 R2 to 2012. Pre-2008 is more of a PITA (because you have to create a scheduled task), but it's still fairly easy to do.
Brett Howse - Wednesday, May 21, 2014 - link
You can understand me not writing about workarounds. Also, this is 100% on Crashplan; not sure why they don't add the support, it's not very difficult.
NCM - Wednesday, May 21, 2014 - link
The trouble with CrashPlan, or any of the other remote storage solutions, is that for larger backups you're severely limited by the physics of data transfer. For instance at any given moment we have 1.5-2TB of active business files on our main volume, and depending on the day, at least 15GB to be backed up nightly. However sometimes we have 100-200GB in the nightly backup.
It would take an eternity to upload our initial backup and an only slightly lesser eternity to download it again in case of total loss. When there's one of the big backups to be made it probably could not be completed in one night, even though we have a reasonably fast 50/10 Mbps (nominal) connection. Instead we have multiple redundant backups, and regularly rotate them through off-site storage.
One of the things I've found from painful experience (mostly with Retrospect) is not to use backup software that stores in a proprietary format. There's simply too much risk of the software's recovery process not working as expected, at which point you're stuck.
DanNeely - Wednesday, May 21, 2014 - link
Generally agreed. AWS has an option to do initial data import by shipping a box of hard drives. Any full system image or enterprise level cloud backup system needs to offer physical media import/export options.
If your business's daily new data volume is high enough you can still swamp nightly updates; but disk based options would really extend the range of users who could effectively make use of such services.
Currently for personal data I've got full local backups, periodically rotated offsite USB drives, and document/media files backed up in Amazon's cloud. A full drive image in the cloud would be nice; but the recovery time is just too long. If my parents were running something faster than cheapskate DSL, I'd probably setup a nas box at their house and sync to it; but currently I couldn't do that without crushing their connection.
Kevin G - Wednesday, May 21, 2014 - link
Deduplication and compression would help out here, but how much is entirely dependent on your data.
Though with such large data sets in a business environment, it sounds like a solution like Commvault, CDP or Avamar would be better suited. They still use proprietary formats, but at this level that is hard to avoid if you want features like deduplication.
Brett Howse - Wednesday, May 21, 2014 - link
As Kevin G said, deduplication would make that fairly easy. There's nothing wrong with backups to tape for your situation, but tapes are a pain. Avamar or other backup systems would be able to handle that with ease though.
jimhsu - Thursday, May 22, 2014 - link
Availability, capacity, cost: pick two. Sounds like (for a business) you need an enterprise-grade solution, and if you need next-day availability, Crashplan won't be able to deliver that. Crashplan does offer initial drive seeding and backup-to-door; however those also take a week. If a single day of downtime is unacceptable, you probably need something in-house combined with professional services that offer overnight rush delivery -- and that's $$$.
dstarr3 - Wednesday, May 21, 2014 - link
I took a much simpler approach. I have a hard drive which is just a clone of my entire computer, and I keep it in my desk at work. Once a week, I bring it home, run error checks, do another clone onto the disk, and take it back to work the next morning. I also have a local backup disk for files, a portable hard drive. The benefit is that one of my backups is off-site, and both of the backups are never plugged in during non-use, so there's no threat of power surges killing the drives. I'm only susceptible to fire or theft at this point, and that would have to happen to both my home and my work simultaneously to be a problem.
rooman - Wednesday, May 21, 2014 - link
A drive at work is a good idea; an alternative (work isn't always an option) is to store the cloned drive in a safety deposit box, which provides an extremely secure location. One probably wouldn't clone once a week, but once a quarter would protect against the worst case of total data loss.
dstarr3 - Wednesday, May 21, 2014 - link
Yeah, I considered a safety deposit box as well, but in some areas (like mine), it's bloody impossible to get one. haha
BeethovensCat - Saturday, May 24, 2014 - link
I did the same - am using SyncToy between my internal data drives and FreeFileSync between my computer and two external HDDs. The external HDD is entirely encrypted with TrueCrypt. I have a couple of external HDDs that are copies of my data drives (leave Windows on C: alone). Then I just take a drive to work once a week or two. Daily syncs (why bother with a backup program, when one can use a sync program?) go to an encrypted USB stick. Works well, and with a 2TB HDD costing around $100 there is no excuse for not having a couple of those.
Kevin G - Wednesday, May 21, 2014 - link
Overall an excellent article!
Backups using Shadow Volumes should note some of their limitations: you'll need to have enough free disk space to store another copy of your largest file. For example, say you downloaded a 10 GB installer for a new game: you'd need another 10 GB of free disk space for the Windows shadow volume to back it up. With the move to SSDs, this could be an issue in some cases.
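As a concrete aside, Windows lets you inspect and cap that shadow copy space from an elevated command prompt; a minimal sketch (the 10GB cap is just an example value):
> vssadmin list shadowstorage                                 # show current shadow copy space usage and limits per volume
> vssadmin resize shadowstorage /for=C: /on=C: /maxsize=10GB  # cap how much space shadow copies may consume on C: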
I do agree that RAID IS NOT A BACKUP, but when backing up to a NAS, the NAS should be using RAID 1/5/6 etc. A paragraph on the introductory page does go into these points, but I've always felt the need to discuss backup reliability in this context. It helps clear up potential questions like 'if RAID isn't a backup, why are you backing up to RAID storage?' The answer is in the same paragraph: RAID protects against disk failure. In my experience, I typically need to hammer in the idea of 'what good is backing up to a hard drive if that hard drive dies?' as the case for RAID 1/5/6 on a NAS. This idea can be obtained from the context of the article, but I've found it needs more emphasis.
The issue of RAID disk failure leads into one topic that I've found missing: media reliability. The article mentions hard drives, USB sticks, optical and the cloud as targets for backup storage. (For consumer usage, I would say it is safe to omit tape, but it still exists.) How long the media is stored and its ability to be retained over time does matter. This is more of a long term problem with USB and optical media, as after several years corruption can creep in. Hard drives of course can fail, but typically they're in an active environment, so you'd know exactly when they failed. With RAID it is possible to recover from failed media, but once an optical disc rots or a USB flash stick is dead, the data on it is gone. This article does cover the media reliability of the cloud, which is unique: you continually have to pay. Stop paying and you lose your backup data. There is one open question though with cloud backups, as none have been around for a long time. Issues like outages are also possible with the cloud, but so far many of the backup providers have been good in this regard.
ltcommanderdata - Wednesday, May 21, 2014 - link
For the built-in backup options for Windows 7/8 and OS X, is there a way to limit the size of the backup without having to partition, such that multiple computers can back up to one drive without directly competing with each other for space?
For Time Machine you mention that it'll automatically delay the scheduled backup if the backup location is unavailable. Does Windows 7/8 do the same? I'm thinking of laptops that are always out and about, so hopefully they won't throw up distracting error prompts when the network store location is not available.
Brett Howse - Wednesday, May 21, 2014 - link
I don't have any Windows 7 machines to test, so I can't answer that. Windows 8 has an offline cache though, which allows backups/restores when the device is disconnected:
Advanced Settings in File History
File History allows you to fine tune how it works including:
Which target storage device is used
How frequently files are checked and backed up
How much space is used locally to cache backup versions of your files when the target backup device is disconnected
How long backup files are retained
Which folders in your libraries are excluded from backup
sepffuzzball - Wednesday, May 21, 2014 - link
Have to say... I've been running Windows Server Essentials 2012 since I was sad about WHS going by the wayside, and I love it. I'm running it in a VM on my ESXi server, and it backs up all my clients with no issues. Then the WSE backs that up to a different storage pool (Solaris/ZFS), and then that gets kicked off-site.
Now I just need to find a cheap solution to back up the ~40TB worth of stuff on the file server off-site (and then the upload speeds to actually back it up!).
coburn_c - Wednesday, May 21, 2014 - link
I just lost 6TB to a failed RAID 5 array. Thank you Seagate/China. The RMA drives are Malaysian, so maybe that gives hope. Anyway, you can talk backups all you want, but backing up 6TB is neither time nor cost economical.
Brett Howse - Wednesday, May 21, 2014 - link
Backing up 6 TB is not an issue unless you modify all 6 TB between each backup, and even then it's really not a big deal. You can stand up 6 TB of network storage for not very much money.
It all comes down to whether or not the data is important to you.
Gigaplex - Wednesday, May 21, 2014 - link
Backing up 6TB is most definitely an issue. Local backups require several hundred dollars worth of additional hardware (closing in on $1000 depending on the type of NAS). Off site is even harder; cloud backup isn't even an option, so you need some kind of sneakernet.
Brett Howse - Wednesday, May 21, 2014 - link
Gigaplex, what I'm saying is that it's not very difficult to achieve this. If you have 6 TB of data that's worth backing up, then I assume that data's worth the couple hundred dollars in hardware required to back it up.
Once you move into this kind of data storage requirement, obviously cloud backup is going to get expensive, but there are other options.
You just have to decide if it's worth backing up, or perhaps a subset of the data is worth backing up.
Before you dismiss any backup, all you have to do is decide how much the data is worth to you.
kamiyo - Wednesday, May 21, 2014 - link
There's SyncBack, which has multiple tiers of features, the lowest being free. I guess it's technically not a backup solution, more of a backup helper, but it allows you to schedule and back up specific files/folders (or whole drives) and much more. It was recommended by a friend.
Impulses - Wednesday, May 21, 2014 - link
SyncBack and MS SyncToy are both decent freeware options for the simpler backup jobs.
SeanFL - Friday, May 23, 2014 - link
SyncToy did not scale well to the hundred thousand picture files I had years ago... it started creeping along extremely slowly even on incremental backups. Maybe there has been some improvement somewhere? Didn't see any new version last time I looked.
dstarr3 - Wednesday, May 21, 2014 - link
I swear by SyncBack. It's a ridiculously simple way to manage file backups on an external drive.
BeethovensCat - Sunday, May 25, 2014 - link
SyncToy has been reliable for me. But lately I have shifted to FreeFileSync for syncs to my external HDDs. It seems to be faster and work better (sometimes the total file sizes on my internal backup drive and the external HDD differed with SyncToy - set to Echo - never figured out why). I have confidential work files on my external HDDs (WD Passports) so I encrypt everything with TrueCrypt. I can afford to lose a HDD, but wouldn't like the data to end up in the wrong hands.
hasseb64 - Wednesday, May 21, 2014 - link
For nonprofessionals: Avoid RAID!
BeethovensCat - Sunday, May 25, 2014 - link
Not so sure that is true. I have 'played' with RAID 0 for the last ten years and never (fingers crossed) had an array fail - neither HDD nor SSD. I try to run syncs 2-3 times per week to avoid data loss should it happen. Am using the built-in Intel RAID controllers on Asus motherboards. Seems to be very reliable.
Well, to be fair WHS 2011 is not yet "defunct". It is in mainstream support until April 2016:
http://support.microsoft.com/lifecycle/search/defa...
However, I agree that it is a crying shame that Microsoft has left the home server market to wither on the vine. WHS was a good product. WHS 2011 was not the advance over the original that it should have been, because of Microsoft organizational politics. Still, it was a real bargain at $40. Windows Server Essentials 2012 is too expensive for home users to even consider.
peterfares - Wednesday, May 21, 2014 - link
I agree. I use Windows Server 2012 R2 because I got a license from DreamSpark, but there is absolutely no way I'd consider buying that for home server use. I'd probably just run a client version of Windows and use third party programs to handle backups. It's unfortunate that they removed automatic system image backups from Windows 8.1.
kmmatney - Wednesday, May 21, 2014 - link
WHS 2011 with Stablebit DrivePool has been working for me. I plan on keeping it a while, but after that I'll just figure out a way to put a regular version of Windows on the machine, with third party central backup programs. I also use my WHS server as a Minecraft server and media server.
SirGCal - Wednesday, May 21, 2014 - link
100 GB / $24
1 TB / $120
10 TB / $1200
20 TB / $2400
300 TB / $3600
Don't you mean 30TB, not 300TB? For Google Drive on 'Consumer Cloud and What I Do' page? If not, then 300TB is for sure the way to go. What a savings...
Brett Howse - Wednesday, May 21, 2014 - link
Yes I did, fixed, tyvm!
Guinpen - Wednesday, May 21, 2014 - link
No mention of options for Linux? Why?
Brett Howse - Wednesday, May 21, 2014 - link
I mentioned Crashplan has a Linux client. I don't use Linux on my home computers, and neither do most people, so I didn't discuss it for the most part.
wumpus - Thursday, May 22, 2014 - link
I plan on backing up my cousin's Windows computer (I owe him one; otherwise highly not recommended). Best I can tell, the way to do this (especially if you have some handy drives that will store the data compressed, but not uncompressed) is to install Linux on some removable drives (openSUSE looks promising, see below) and then use the dd command to copy the Windows drive completely as an image to the Linux external drive (this gives you the option to either copy the entire system back, with everything already installed, or to mount the file with loopback and copy individual files). Note that you will likely want a compressed Linux filesystem (to save space). This looks easiest with btrfs (thus openSUSE, and don't forget the forcing option, otherwise the compressor will give up before it hits all that empty space). Using this system for incremental backups requires a bit of programming (but is surprisingly easy with pyfuse).
Quite frankly, the dd "disk destroyer" command is so famously easy to get wrong (and thus write empty sectors over what you wanted to back up) that I would be afraid to include step by step instructions in something like this. You have been warned.
It would be nice to see if you could back up with something like Anaconda, especially for free.
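With that warning firmly in mind, the rough shape of the approach looks like this; the device names are hypothetical, and reversing if= and of= destroys the source, so treat it strictly as a sketch:
$ sudo dd if=/dev/sda of=/mnt/backup/windows-disk.img bs=4M conv=noerror   # image the whole Windows disk to a file
$ sudo losetup -Pf /mnt/backup/windows-disk.img                            # attach the image as a loop device, scanning its partitions
$ sudo mount -o ro /dev/loop0p1 /mnt/restore                               # mount one partition read-only to pull out individual files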
peterfares - Wednesday, May 21, 2014 - link
Ah sweet, I didn't know there was still a GUI method for doing a system backup in Windows 8.1! I thought you had to use wbadmin. I was wondering why the Windows 8.1 recovery still supported system image restores when I thought there was no way to create a system image without the command line.
Anyways, for previous versions on Windows 8.1: they're still there! Just the tab is not shown for local drives. If you access a network share you can still see the previous versions of files done with Volume Shadow Copy. Turn on system recovery for all your hard drives (it's on for the system drive by default) and then access your own computer via its UNC path (\\localhost\C$ or whatever other share you want to access). Then when you open Properties on a folder or file, the Previous Versions tab is present!
Another way to access volume shadow copies of files is to use ShadowExplorer.
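A quick way to sanity-check that shadow copies actually exist is from an elevated prompt (sketch):
> vssadmin list shadows   # lists every Volume Shadow Copy snapshot with its creation time and source volume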
peterfares - Wednesday, May 21, 2014 - link
More details: http://winhowto.blogspot.com/2012/09/windows-8-how...
cgalyon - Wednesday, May 21, 2014 - link
Previously I used Dropbox to sync between three systems, including my file server, and then periodically do a manual backup to a dedicated backup drive. Recently this failed me for the exact reason stated early in this article: user error. After doing a restore on one of my computers, Dropbox then synced forward and wiped out one of my folders almost completely because the computer was restored to an earlier date. I didn't notice for a long time and the rollback period on Dropbox had elapsed. I would advocate a different solution or more frequent archiving.pirspilane - Wednesday, May 21, 2014 - link
Re. Windows 8.1: 'Unfortunately, you can’t add user defined folders here which you do want backed up.' True, but can't you add those folders to a library that gets backed up?
peterfares - Wednesday, May 21, 2014 - link
Yes. You can also create as many libraries as you want and put whatever folders you want in them.
Brett Howse - Wednesday, May 21, 2014 - link
Hi. I've updated the guide to reflect this and make it more clear. Thanks!
jeffkibuule - Wednesday, May 21, 2014 - link
My system is a bit more complicated, but it works for me:
1) Office documents and such are stored in OneDrive.
2) OneDrive sync to my Synology NAS with BittorrentSync.
3) Time Machine Backup on my Synology NAS whenever I am home.
4) Synology NAS backup nightly to Amazon Glacier with Glacier Sync.
This provides multiple local backups as well as a cloud backup that's mostly automatic. I don't directly back up to the cloud from my MacBook Pro simply because when I am out and about, the internet connection usually sucks too much to bother.
Brett Howse - Wednesday, May 21, 2014 - link
I didn't really mention Glacier, but it's easily the most cost effective cloud storage. Obviously it has its drawbacks, but price sure isn't one of them. Are you happy with Glacier?
SeanFL - Friday, May 23, 2014 - link
Some have found Glacier pricing difficult to figure out. I have about 100 gigs of pictures that I wanted to back up, but was warned on some photo forums that the per-item pricing can get costly and that I should zip by year or some other scheme so that fewer individual files were transferred. Sounded like too much human interaction, so I passed on Glacier inside my Synology NAS for now.
dado023 - Wednesday, May 21, 2014 - link
How come nobody mentioned https://copy.com?r=uABGaD ?
You get so much space, especially if you invite people to it; atm I have 62GB, and I will get more if you use the link above, plus you get 5GB via the referral link.
I use it for my photos; this way I deliver photos to my client via a public link.
kmmatney - Wednesday, May 21, 2014 - link
I use Windows Home Server, with DrivePool to duplicate data across random drives. The automatic backups work great - it stores backups for the last 3 days, a backup from 3 weeks ago, and one from 3 months ago, for each PC in the house. It works perfectly, I never have to think about it, and pulling data out of a backup is easy.
ander111 - Wednesday, May 21, 2014 - link
Okay, let me see if I understand this clearly: A "backup" is when you copy your important data to a different storage device, so that if anything happens to your original device, you still have a copy of your data, right? And there are various ways to do this, but they all basically involve copying your data from one device to another. I think even I can understand that.Still, it's important to have articles on things like this. It keeps writers busy and off the streets—so naturally, crime goes down... Good thinking!
Egg - Wednesday, May 21, 2014 - link
I understand why I need revision control for files, but what about, say, my music collection, which I just transcoded from WMA Lossless to FLAC? No hash based deduplication is going to realize that they're the same... if I had revision control working on that, I would have an extra 10 GB of stuff sitting around...
Brett Howse - Wednesday, May 21, 2014 - link
Two things I guess. First, hash based deduplication is awful on any media, other than to say the file is already copied, so it wouldn't really matter.
Second, most of the backup systems listed allow you to control how many days you keep deleted files.
Mr Perfect - Wednesday, May 21, 2014 - link
For anyone using the Windows 7 built-in backup: have you noticed if it re-schedules backups if it misses a time? My machine is typically powered off if I'm not using it, so hopefully Windows is smart enough to just do the backup as soon as it gets the chance.
Stanand - Wednesday, May 21, 2014 - link
From my experience, Windows 7/Vista built-in backup doesn't automatically delete old backups when the backup disk becomes full (and it fills up quickly). The user must manually delete old backups by clicking "Manage Space" and deleting the old backups.
That's easy for everybody reading this AnandTech article, but not so easy for my computer novice grandmother.
For novices, I install the free version of Crashplan and set it up to automatically delete old backups every 90 days (Settings -> Backup tab -> Frequency and Versions -> Remove deleted files).
SenilePlatypus - Wednesday, May 21, 2014 - link
There is one way to get around the Windows 8.1 backup limitations. File History also allows you to back up library locations. So... all you have to do is make any desired backup target (folder, drive, etc...) into a library location (right click drive/folder -> Include in library -> Create new library [or choose an existing one]).
johnthacker - Wednesday, May 21, 2014 - link
SpiderOak is another consumer cloud backup service that has unlimited versioning with no time limits. Multiplatform support. It is slower than some of these other services because, since your files are encrypted, they don't deduplicate across different users the way that, e.g., Dropbox does.
DeathReborn - Wednesday, May 21, 2014 - link
I used to use Acronis for backups to external drives, but late last year I switched to Macrium Reflect and a file server housing 24TB of storage inside, plus a NAS with 12TB.
If you're good with MS-DOS scripts, Macrium has a lot of functionality that you can get access to. Not exactly user friendly, but very useful.
Jeff7181 - Wednesday, May 21, 2014 - link
You neglected to mention one of the best cloud backup solutions - SpiderOak. They ran a promotion earlier this year on "backup day" to give unlimited storage for $120 per year. They support Linux (GUI and headless CLI), Windows, Mac, iOS, Android, and BlackBerry OS. And unlike a lot of cloud backup services, you can back up network locations - so you can run SpiderOak on one computer and back up data from other computers if they're on the same network and have shares accessible. They do versioning and deduplication (and pass the space savings of deduplication along to you). They also don't retain an encryption key to your data as part of their "zero knowledge" policy. They also allow you to specify a local target to use as a local repository, so when you need to restore something it doesn't necessarily have to pull it down via the Internet, just from that local device, whether it's a NAS device or another hard drive in one of your computers. It doesn't do image backups, but if you're looking for image backups, just use the built-in utility in Windows to create one and back up the location of those files to the SpiderOak cloud.
MrX8503 - Wednesday, May 21, 2014 - link
I was wondering what backup strategy you were using, and I'm happy to hear that it's a WHS 2011 solution. I'm rocking the same setup and added Time Machine support to my WHS to back up Macs.
iwod - Wednesday, May 21, 2014 - link
What about bit rot? I heard RAID doesn't protect you from it and you will basically have two bad copies of the data.
The whole point of RAID *is* to protect you from things like bit rot. The difference between RAID5 and RAID6 is that RAID6 protects you from two rotted bits in a single sector (more specifically, two different drives with failures in the same location). You should be able to avoid this with RAID5 by periodically reading the entire drive and correcting any single error you find (called "scrubbing").
Mr Perfect - Thursday, May 22, 2014 - link
It's not really a sure thing with RAID though. The array has no idea which version is correct and which one is rotten. The best it can do is take a consensus and go with whatever version of the file the most drives agree is correct. They did an article about bit rot over at Ars Technica, and the author's RAID 5 happily used the rotten version.
http://arstechnica.com/information-technology/2014...
bsd228 - Thursday, May 22, 2014 - link
Not really, wumpus. The whole point of RAID (minus 0) is to protect you from a disk failure. By itself it does not deal with bit rot at all. On a mirror, who is right? In typical implementations, disk 0 is presumed to have the correct copy. ZFS (and I believe MS's knockoff, ReFS) implemented scrubbing with checksumming to give a means of identifying the correct copy.
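To make the scrubbing discussed above concrete, a minimal sketch; the pool name 'tank' and array name 'md0' are assumptions:
$ sudo zpool scrub tank                                       # ZFS: re-read every block and verify it against its checksum
$ sudo zpool status tank                                      # shows scrub progress plus any repaired or unrecoverable errors
$ sudo sh -c 'echo check > /sys/block/md0/md/sync_action'     # Linux md: start a consistency check of the whole array
$ cat /sys/block/md0/md/mismatch_cnt                          # count of mismatched blocks the check found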
beginner99 - Thursday, May 22, 2014 - link
I use Microsoft's free tool SyncToy. With it you can synchronize folders to anywhere else, like an external HDD. Of course only updates are synced, and you can specify in which direction to sync. I use it to back up my media collection. The external hard drive can then be stored off-site (at work). The advantage I see with this is that the media files are copied over and are readable on the backup directly. You can take the external HDD on the road and have your full media collection at hand. With image files you would first have to restore them before being able to use them.
Important documents should be stored in the "cloud". This can be a simple encrypted zip sent by email, and it will be stored on the email server (say Gmail) or whatever. That was possible over a decade ago already.
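For what it's worth, a one-line sketch of that encrypted zip (classic zip encryption is weak by modern standards; 7-Zip's AES is the safer assumption):
$ zip -er documents.zip ~/Documents          # -e prompts for a password, -r recurses into the folder
$ 7z a -p -mhe=on documents.7z ~/Documents   # 7-Zip alternative: AES-256 with encrypted file names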
gsvelto - Thursday, May 22, 2014 - link
I do most of my backups from Linux: I use rsync to sync my home directory and other relevant files outside of /home, and ntfsclone to back up my Windows drives. The latter option is definitely slower than incremental backups or somesuch, but allows me to restore a Windows installation very quickly without needing to reinstall. It's also handy when moving Windows from one hard drive to another.
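A rough sketch of that kind of setup; the paths and the device name /dev/sda2 are assumptions, not gsvelto's actual commands:
$ rsync -aAX --delete /home/ /mnt/backup/home/                              # mirror /home, preserving ACLs and extended attributes
$ sudo ntfsclone --save-image --output /mnt/backup/windows.img /dev/sda2    # space-efficient image of the NTFS partition
$ sudo ntfsclone --restore-image --overwrite /dev/sda2 /mnt/backup/windows.img   # and the restore direction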
AlexIsAlex - Thursday, May 22, 2014 - link
Another aspect to backups is bit rot, both on the backup media (are the files in the backup still good?) and on the live media (do I need to restore this file from backup, as it has become corrupted?).
For a decent backup system, I want checksums stored with the backed up data, and verified regularly. I also want the backup to actually read all files to be backed up from the source, even if they are not supposed to have been modified since the last backup, and check that they still have the same checksum. Unfortunately, this takes rather a long time, but I don't see any alternative short of discovering months down the line that some rarely accessed files have become corrupted, and worse, been backed up in a corrupted state.
boomie - Thursday, May 22, 2014 - link
>Windows 8 fixes that issue, but creates new ones by no longer allowing automated image backups
Well, I didn't think the supposed IT pros at AnandTech would be so casual as to be afraid of the command line.
If you cannot live in this world without regular image backups, who prevents you from adding a task in Task Scheduler with a wbadmin call?
Come on now.
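For anyone who wants the concrete version, a sketch of such a scheduled wbadmin task; the target drive, day, and time are arbitrary examples:
> wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet
> schtasks /Create /TN "WeeklyImage" /SC WEEKLY /D SUN /ST 03:00 /RU SYSTEM /TR "wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet"
The first line runs a one-off full system image to E:; the second wraps the same command in a weekly Task Scheduler job running as SYSTEM.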
ruthan - Thursday, May 22, 2014 - link
There are extended tutorials on the Windows native backup settings, but for WinServer Essentials, here is only a very compressed description. Could you explain it more? For example, "Once the connector software is installed" - that is a big shortcut - after installation, is the backup set up from the server or from the local machine?
What about Linux/Mac backup support? Because if this is really different, the Windows backup solution isn't now a big problem. From my experience the best solutions are from Acronis and Paragon, but they have lots of limitations and known issues.
davidpappleby - Thursday, May 22, 2014 - link
We have two laptops and two desktops. Each has a boot drive and a separate physical backup drive for images using Acronis. All pictures/music/data reside on the server, which has separate backup drives for its OS and data (again with Acronis). I'll be looking into S3 again as a result of this article (last time I looked I thought 2TB was too much). My wife has an external drive we use as an off-site backup of her important data (the downside is that it's only as current as the last copy).
Mikuni - Thursday, May 22, 2014 - link
Mega gives 10GB for free, encrypted storage; why wasn't it mentioned?
Samus - Thursday, May 22, 2014 - link
+1 for Acronis True Image. Amazing product; I've purchased the newer editions every time a major Windows OS is released (TrueImage 2008, TrueImage 2010, TrueImage 2013, now TrueImage 2014).
I purchased TrueImage 2014 because it introduced integrated cloud backup, which works extremely well, and they give you 250GB for the first year free (and $50/year after that). Since I upgraded from 2013, it only cost $20 for the upgrade to 2014. You can also sneakily use the Intel SSD Migration software as your "upgrade" edition if you happen to have an Intel SSD, essentially getting you the full software and a year of cloud storage for $20.
The downside is that additional storage is expensive, whereas Crashplan and Carbonite have what are virtually unlimited plans starting at $100, some plans even covering "unlimited" users, making them perfect for small business.
Either way, great article. We need to spread the backup knowledge so everyone does it, because I think the reason most people don't back up is that they don't know how.
bsd228 - Thursday, May 22, 2014 - link
Brett - the primary con of cloud services that I think must always be kept in mind is the consequence of your provider going out of business. We've seen this before, and so long as we see newcomers offering unlimited storage cheap as an initial lure to get customers, we'll continue to see it. If it's just your backup, then the cost is the effort required to identify another provider and get the first full dump done. If we're in the hundreds of gigs and beyond, that is significant. So my philosophy is to pick a stable vendor who is making a profit on me, not to find the one offering terms I can exploit. Generally this means pricing based on data size, and a preference for very stable firms like Amazon or Google.
toyotabedzrock - Thursday, May 22, 2014 - link
I wish they would provide a way for families to store each other's data encrypted, so they could provide for their own disaster recovery.
It's called Crashplan. Backing up to another person's (friend's, family's, etc.) drive is a free option.
easp - Thursday, May 22, 2014 - link
Notebook users should really, really have a networked backup target as part of their mix.
External HDDs don't really cut it for notebook users, unless they regularly "dock" with a monitor or USB hub connected to the drive. Some people do, but I know many that don't, and while they may have the best of intentions, they will not remember to hook up the external drive on a regular basis.
A network target on their LAN will ensure that automatic backups happen in the course of regular use. A publicly accessible network target, like Crashplan Cloud, AWS Glacier, or even Crashplan's PTP with port mapping or UPnP enabled, allows automatic backups to happen whenever they have an internet connection. Anything less is a disaster waiting to happen.
Z15CAM - Thursday, May 22, 2014 - link
So a 1.32MB DOS app named GHOST.EXE writing to FAT32 is no longer applicable - as if.
Z15CAM - Thursday, May 22, 2014 - link
You can't migrate backups over networks through the GUI using Symantec's Ghost Console.
Rogatti - Thursday, May 22, 2014 - link
Already in the Linux world:
ISO from a DVD:
$ dd if=/dev/sr0 of=imagem.iso bs=2048   # if = DVD drive, of = output ISO name
MondoRescue:
http://www.mondorescue.org/docs/mondorescue-howto....
Z15CAM - Thursday, May 22, 2014 - link
Already in the Linux world? No, the DOS world.
SeanFL - Friday, May 23, 2014 - link
One issue I had with Dropbox, Box, or Copy is that they all wanted to set up their own directory and do the backup from there. If I have a well organized set of drives with various folders and subfolders, I'd like to be able to choose what to back up and skip (as I can do in Crashplan). Have any of the cloud ones mentioned above made it so you can choose your own directories to back up?
mrweir - Friday, May 23, 2014 - link
If you purchase the OS X Server app ($20) for one of your Macs, you can enable networked Time Machine backups for the other Macs on your network. I have an external drive connected to my iMac that my wife's MacBook Pro backs up to wirelessly.
While it's not technically "built-in" and does come at a cost, it's not "third party" either.
metayoshi - Friday, May 23, 2014 - link
I have been using Acronis for years to back up my main OS drive onto my data drive and then do a copy of the whole data drive onto an external hard drive.
I switched to Windows 7's built-in backup tool once, to replace Acronis and see if I could get by with a free tool. Well, I corrupted my Windows 7 OS once, and after I restored the image, a ton of programs didn't work, including Microsoft Office. I tried uninstalling and reinstalling some programs, but for some reason there were still some things messed up. I had to do a clean install of Windows 7, and I vowed to never use the Windows 7 built-in backup ever again. Since my Acronis version was old at the 2009 version, I went ahead and got the 2013 version, and now that's what I have for backups. I have had to restore images from Acronis before (the 2009 version), so I know I can at least trust them.
I'm not too fond of using the cloud to back up files. I used to put some non-private files on Megaupload, and we all know how that went - goodbye Megaupload. Now I just fear any sort of cloud storage as a backup - I simply use it for syncing, and then I back up my cloud data locally.
I also tried a NAS once to back up both mine and my girlfriend's computers, but that WD MyBook Live (before they went to this whole MyCloud thing) ended up dying after a random power outage we had. Granted, it was a single drive NAS box, but I thought I could live with it. Nope, my external drive has been my main backup source ever since. It sure isn't any sort of advanced backup solution, but it does the job for me.
KPNuts - Saturday, May 24, 2014 - link
Great article, learnt a lot, as I just copy my documents onto two USB hard drives on a weekly basis, one kept in my computer bag, the other in the office. I have a MacBook Pro and an iMac with files shared between them, so it's a bit of a nightmare to keep track of the most up to date ones.
A question: would things be easier if I invested in a Time Capsule and used it with Time Machine? Would Time Machine work with both computers on the one Time Capsule, or would I have to have one for each machine? If I need two then it starts to get expensive.
Looking forward to getting some useful feedback to decide which way I should go.
Brett Howse - Saturday, May 24, 2014 - link
You can back up multiple machines to a single Time Capsule, so that won't be an issue.
KPNuts - Saturday, May 24, 2014 - link
Thanks Brett Howse, think that's the way I'll go then, as it's personal stuff and I have no real need for cloud storage. My off-site hard drive will be there if I get broken into or there's a flood or fire.
titanmiller - Saturday, May 24, 2014 - link
Just putting in a plug for Backblaze. Great service. I store about 1.5TB on it for $3.96/month.
Kvanh - Saturday, May 24, 2014 - link
If you use full disk encryption on your computer, make sure your NAS/local drive backups are encrypted as well!
I turned off Time Machine and switched to using CrashPlan for both local & cloud backups. I get the same every-15-minutes snapshots as Time Machine, but I found CrashPlan more reliable.
I also use SuperDuper! to make a boot drive clone nightly.
While my main storage is RAID-5, the external drive I use for backups is RAID-0. With the redundancy of the RAID-5 and the offsite copy at CrashPlan, I figure the risk of losing the local backup is acceptable. I'm not in dire need of an infinite timeline of files; the important ones are in the offsite backup anyway. So losing a year of backups and starting over with new drives is no biggie.
nytopcat98367 - Monday, May 26, 2014 - link
Great article Brett Howse. I've been using ShadowProtect software to back up my desktop PC, the C: drive to a 2nd internal drive, for about 6 years. It never fails. I have 23 GB on my main drive, which takes 12 minutes to back up. OS: Windows 7.
Stylex - Monday, May 26, 2014 - link
I use Windows 8.1 and DriveBender to pool my drives, a la WHS, as my NAS. The awesome thing about Drive Bender is that it stores the data in NTFS, so if something craters I can still grab the data off the drives without worrying about RAID. Also, selective folder duplication across drives is awesome. Some stuff needs backup, some stuff does not.
Conficio - Monday, May 26, 2014 - link
I'm less looking for a backup tech, more for an archive tech. I want to put my data (photos/documents/PDFs) onto a server that can index it for metadata and full text search, and ultimately offload the files onto DVD/Blu-ray discs for long term storage.
I'd expect the metadata index to be fully backed up to the cloud and the files to be kept safe on media.
Any pointers?
HachavBanav - Tuesday, May 27, 2014 - link
Not a single word about my current UNLIMITED ONLINE BACKUP solution... $50/year:
http://www.backblaze.com/partner/af2141
Brett Howse - Tuesday, May 27, 2014 - link
Except this:
Here is a list of several vendors offering their own take on cloud backups:
◾ Arq
◾ Backblaze
◾ Carbonite
◾ Cloudberry
◾ Crashplan
◾ JungleDisk
◾ Mozy
nagi603 - Wednesday, May 28, 2014 - link
Another one to consider: unRAID. It uses filesystem-level RAIDing, with one parity disk. The biggest pro is that if two disks fail, you get to keep your data on the other disks, as opposed to having to resort to very expensive and not fully effective specialist recovery services as with RAID-5. You can expand the cluster to (IIRC) 23 drives max, with a separate cache drive if it seems too slow for you. The cons are: it's not free, and you have to build your own NAS for it. But so far it has turned out to be the best for me.
7Enigma - Wednesday, May 28, 2014 - link
I keep it simple. Once every month or 3, I'll back up the local TB media drive on my main computer to an external 2TB drive. The data is then duplicated and not in danger from an electric surge. Fire/flood/etc. still not protected against, but OK.
About every 4-6 months I'll take the 2TB drive to my parents' house and back up the new files to their media computer. That takes the danger of disaster out of the equation. Unless both houses suffer catastrophe (we are only ~30 miles away from each other...) there is little risk of data loss.
Since this is all mechanical HDDs, I'm wondering about people's thoughts on recopying files. I just continuously add and update the files but never "refresh" the drives. Is this something that should (very infrequently) be done? i.e. format a drive and then reload it with the same files?
Good article. And people, DO NOT think it won't happen to you. I was luckily able to recover a coworker's computer after a power outage. It fried his PSU, but fortunately stopped there. I was able to grab his family media without issue. They had NO backup, nothing. You would have thought after this they would take my advice and back up? Nope. I doubt they will be as lucky next time.
rickydlam - Wednesday, May 28, 2014 - link
Google Drive is $1.99/month for 100GB now.
Archipelago - Saturday, June 7, 2014 - link
And from what I can tell, Google Drive does offer versioning through the "manage revisions" feature.
Thrackerzod - Thursday, May 29, 2014 - link
I back up all my irreplaceable stuff (family photos and home movies, for example) on LTO tape. Got a Dell SCSI LTO2 drive for next to nothing, and the tapes are dirt cheap and last decades. LTO1 tapes are so cheap I even use them for less important stuff like backups of all my Steam game installers. Tapes are certainly not the most popular solution for consumers, but for long term archival use I've yet to find anything better.
Hauken - Saturday, May 31, 2014 - link
Time Machine is simple, yes, but you can't boot from it. And if you're on a new system, needing files from an old backed-up system, it gets awfully problematic.
MDX - Wednesday, June 4, 2014 - link
I wish there was a backup service that was immune to national security letters, but alas. Mega.co.nz will have to do for now.
pslind69 - Monday, July 7, 2014 - link
Why no mention of StorageCraft ShadowProtect? 😩 Also, one has to figure out which data is really important enough to back up in multiple ways. A lot of people just back up everything: all their personal documents, wedding pictures, and all their movies, TV series, music, etc. All that entertainment media can be recreated from originals or redownloaded, so why even back it up? I think most people do that because if disaster hits, they couldn't remember what they have lost, so they back up every unnecessary file.
Instead, include a list of all your easily recreatable files (with hashes) in your important backup that is replicated offsite, to the cloud, etc. That way you can easily go back and see that "Ohh, I lost my Days of Our Lives TV folder" and then just redownload or rerip the DVDs. So much space is saved by not backing up unimportant data you could easily recreate.