  • Ryan Smith - Tuesday, April 27, 2010 - link

    Just to start off the comments here, there's an active feature request on Microsoft Connect for Microsoft to do something about the fact that non-WHS v2 computers currently can't read storage pool disks due to the new low-level file system. For those of you unfamiliar with Connect, feature requests can be voted on by registered members. So if you are (or will be) a WHS v2 user and want to get on Microsoft's case about the issue, Connect is the right place to do it.

    https://connect.microsoft.com/WindowsHomeServer/fe...

    Conceivably, they should be able to port WHS v2's Drive Extender driver to Win 7 without too much trouble, although it would be nice to have it for Linux too, since Linux seems to do a better job of trying to read from failing disks.
  • davepermen - Wednesday, April 28, 2010 - link

    I guess that driver will come to Windows one day anyway; at least on the server side for full usage, and on the client side at least for read access for administration.

    If nothing else, it would be a waste of resources not to use that technology for servers bigger than home servers.

    Maybe in Windows 8 / Server 2011 (or whatever it ends up being), even the boot disk will be based on it? (Which would then be great for WHS v3.)

    At least I hope so; I like the tech, and the more widespread it is, the more reliable it will get.
  • Ryan Smith - Wednesday, April 28, 2010 - link

    Bits of DE showing up in a future desktop OS is a distinct possibility. However, most of DE is only useful for data storage pools, which don't typically apply to a desktop usage scenario. Besides the ECC checks, nothing comes to mind that would be immediately useful.
  • theeldest - Wednesday, April 28, 2010 - link

    They could conceivably modify the technology to manage mirrored blocks between a mechanical disk and an SSD to improve the speed of boot and frequently used files in a desktop environment.

    Also, first post!
  • DanNeely - Wednesday, April 28, 2010 - link

    Is the URL you posted correct? I'm getting an error after logging into Connect and trying to access it: "The content that you requested cannot be found or you do not have permission to view it."
  • Ryan Smith - Wednesday, April 28, 2010 - link

    It's working here, and I got this link from the MVP that created the suggestion. So there shouldn't be any issues.
  • Basilisk - Thursday, April 29, 2010 - link

    It worked for me. I would expect a problem if one lacks an MS Connect account, and a hiccup if the identifying Cookie is missing.
  • djos - Tuesday, April 27, 2010 - link

    Does this mean that you can finally use the built-in software mirroring function of Windows Server 2008 to mirror the system partition, and not have to rebuild your entire WHS install from scratch if Disk 1 fails?

    If so, it's about damn time, as there is no way in hell I'm letting my family members buy WHS boxes until they fix this! (Because I don't want to rebuild their boxes. If they could just slap a new drive in and WHS rebuilt itself a la RAID 1, WHS would be perfect!)
  • Ryan Smith - Tuesday, April 27, 2010 - link

    Sadly, no. It does have a built-in backup system for the system partition so that you can schedule regular backups to a non-pooled drive, but it doesn't have live duplication. The system partition is still regular NTFS; it's not DE v2-based like the data partitions are.
  • MadMan007 - Wednesday, April 28, 2010 - link

    One of the previews mentioned a ten-drive limit. Is that true or not? If so, it's a terrible move by MS and one that I very much hope is changed.
  • Ryan Smith - Wednesday, April 28, 2010 - link

    Supposedly it's a scalability issue that they hope to lessen for RTM. I don't think it's going to be a settled issue until then.
  • clex - Wednesday, April 28, 2010 - link

    It's just a recommendation so that people using the beta are aware that it has only been tested internally with 16 drives totaling 16TB. The WHS team has mentioned that errors begin to happen beyond that. They said that they are trying to fix this "bug" before WHS v2 ships.
  • MadMan007 - Wednesday, April 28, 2010 - link

    Great, another WHS Drive Extender bug...'hopefully' solved before RTM. I'm not too sure the upsides outweigh the downsides here. 12% lost space is nothing to sneeze at, in addition to the other negatives.
  • funkyd99 - Wednesday, April 28, 2010 - link

    ...but the addition of error correction is huge in my opinion, assuming it works correctly. Take the following situation I had to deal with last year:

    1. WHS reports a disk as unhealthy. Repairing the disk returns it to healthy. Running a manual chkdsk /r finds a handful of bad sectors, but I pass it off as a one-time error.

    2. A week later, the same problem crops up. I repeat step one and also run the disk through the manufacturer's diagnostic utility, which reports the disk as "healthy". I return the disk to the server and examine the shared folders.

    3. I notice a decent number of corrupted files on the server. I replace the drive and restore all files from a backup.

    I can only assume WHS was overwriting valid files with versions that were "fixed" by chkdsk. I'd gladly take a 12% storage hit if I don't have to worry about situations like this cropping up again.
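
    To illustrate why per-block checksums help here (a toy sketch of the general idea only; it doesn't reflect how Drive Extender actually stores or verifies anything): with a stored hash for each block, the system can tell which copy of a duplicated block has gone bad and repair from the good one, instead of silently propagating a corrupted version.

    import hashlib

    # Toy sketch: each stored block keeps a checksum alongside its data, so a
    # read can detect silent corruption and fall back to the duplicate copy.
    def store(data):
        return {"data": bytearray(data), "sha1": hashlib.sha1(data).hexdigest()}

    copy_a = store(b"family photos")
    copy_b = store(b"family photos")     # the duplicate on another disk

    copy_a["data"][0] ^= 0xFF            # simulate silent corruption on one disk

    def read_verified(primary, mirror):
        if hashlib.sha1(bytes(primary["data"])).hexdigest() == primary["sha1"]:
            return bytes(primary["data"])
        # Primary failed its checksum: repair it from the still-good mirror.
        assert hashlib.sha1(bytes(mirror["data"])).hexdigest() == mirror["sha1"]
        primary["data"] = bytearray(mirror["data"])
        return bytes(mirror["data"])

    print(read_verified(copy_a, copy_b))   # b'family photos'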
  • gipper - Wednesday, April 28, 2010 - link

    I'm dealing with this right now. However, I have four drives and can't find the culprit. File corruption is not a feature I'd expect from a system that includes "duplication" and "redundant storage". Plus, WHS has no way of identifying the offending disk for you.
  • funkyd99 - Wednesday, April 28, 2010 - link

    I thought the offending disk was shown in the Server Storage tab? Or are you saying you aren't sure which physical drive has the errors? If that's the case, you can download the Disk Management plugin (you may still be able to find the older free version, but it's worth $10 IMO), which will tell you which channel the offending drive is connected to.
  • Brian Klug - Wednesday, April 28, 2010 - link

    I love the concept of WHS, I really do. In fact, it's impossible to argue that this kind of model isn't the future, simply because as you add work devices (we're up to a desktop, laptop, netbook and possibly an iPad now), you need to centralize storage somewhere common to preserve sanity. But until Microsoft can find a way to span files across volumes in a pseudo-RAID5 fashion with similar overhead and data integrity, I can't see myself using it.

    I mean, that's the real inherent danger they're creating here; more drives increase your chance of a drive failure resulting in data loss. Creating huge volumes that span more than 3 drives without using RAID 5 just scares me. I'm unclear about how the data duplication works with the new Drive Extender, though it'd seem that breaking files up into blocks would make it much easier to, say, take all your data and back it up redundantly across unused space on the other drives. Microsoft argues that WHS is still faster than their own software RAID 5; is there any validity to those claims? In practice, I always see hardware limitations before software on my server running software RAID.

    Full disclosure here is that I currently run a home server of my own with 2 large RAID5 arrays (4x750GB), (4x1.5 TB). Although I'd love to switch (just because frankly I don't need a fully featured Windows Server install with all the overhead), I feel like power users that are going to want WHS really demand that kind of solid data duplication.

    I feel like my views on WHS might be outdated; does V2 address those concerns?
  • clex - Wednesday, April 28, 2010 - link

    The differences between v1 and v2 are awesome. I've been running a WHS since it was released and love it (8 HDDs totaling 6TB). The new web interface is fantastic. When more people realize that they need this and how easy it is to use and administer, WHS will no longer be a "best kept secret."
  • davepermen - Wednesday, April 28, 2010 - link

    Indeed.

    I hope they find some way to market it better, maybe starting with a more home-friendly name? "Server" doesn't fit. Media Hub maybe (fitting the new Windows Phone 7 hubs?), Home Center, or whatever.

    Sadly, "Media Center" is gone, which would actually fit quite well :)
  • ATimson - Wednesday, April 28, 2010 - link

    Awesome as v2 might be, unless there's an in-place upgrade for my v1 server it'll be a hard sell for me...
  • rrinker - Wednesday, April 28, 2010 - link

    That's pretty much a given, since there is no in-place upgrade capability to move from a 32-bit to a 64-bit OS. I am definitely planning to upgrade; my existing box has become maxed out anyway, so I need new hardware to add capacity. I'm thinking an i3 Clarkdale would make a reasonable processor, with the power-saving features, even if the video might actually be overkill since you only need video for the OS install.
    At any rate, I'm really looking forward to Vail being finished and released. I'm making do with what I have now, but I really need more space.
  • clex - Wednesday, April 28, 2010 - link

    An option to upgrade was my #1 feature request for WHS v2. It's going to be a huge pain for me to put the 4.5TB of data currently on my WHS v1 somewhere while I install WHS v2 on my server. I guess I'm going to have to buy three 2TB drives and some SATA-to-USB dongles in order to upgrade. The bad part is that the chance of a failure on new hardware is high. I might have to buy the HDDs now, run them for 6 months and then do the transfer.
  • davepermen - Wednesday, April 28, 2010 - link

    Well, WHS does (with duplication on) something like RAID 1, which is just as secure as RAID 5: your data is always on two disks.

    Where is which superior?

    In data-loss terms, they are equal: one failing drive in the drive pool == no loss; two failing drives == loss.

    In storage overhead for that security, RAID 5 is superior. If all your data is duplicated on WHS, it needs 2x the storage space; RAID 5 only needs "one additional disk" (quick numbers below).

    In flexibility, WHS wins (and this is why WHS uses that method). Need more space? Drop in a new disk of any size, and the storage pool grows.

    You can't simply do that with RAID 5. You can't set up a RAID 5 array with a 500GB disk, a 1TB disk, a 1.5TB disk, a 750GB disk, two 2TB disks, etc., and swap them around whenever you see fit.

    The home server is designed to be easy to use, and expanding a RAID 5 array isn't that easy (especially not with varying disk sizes).

    So what you lose is size efficiency on the data-duplication side; what you gain is massive flexibility.

    You don't lose any data security.
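
    As a back-of-the-envelope sketch of the space trade-off, with made-up drive sizes (this is just the arithmetic, nothing WHS-specific; with equal-size disks RAID 5 comes out ahead, with badly mismatched disks duplication can actually win):

    # Usable capacity with hypothetical, mismatched drive sizes.
    # RAID 5 is limited to (n - 1) x the smallest member; duplication roughly
    # halves the total pool, but accepts any mix of disk sizes.
    drives_gb = [500, 750, 1000, 1500, 2000, 2000]   # example sizes, not a real pool

    raid5_usable = (len(drives_gb) - 1) * min(drives_gb)   # 5 * 500  = 2500 GB
    duplication_usable = sum(drives_gb) // 2               # 7750 / 2 = 3875 GB

    print(f"RAID 5 usable:      {raid5_usable} GB")
    print(f"Duplication usable: {duplication_usable} GB")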
  • Nomgle - Wednesday, April 28, 2010 - link

    "in data-loss-security terms, they are equal. one failing drive in the drivepool == no loss. two failing drives == loss."

    The level of loss is vastly different, though.
    Two failing drives in a RAID 5 array == total loss of *everything*!
    Two failing drives in a Drive Extender pool == loss of only the data on those drives. The other drives in the pool will still be readable; if you've got a lot of drives, that can add up to a *lot* of data :)
  • davepermen - Wednesday, April 28, 2010 - link

    Oh right! So it's actually a gain in security :)
  • Nomgle - Thursday, April 29, 2010 - link

    It was, yes.

    That feature is now gone with Drive Extender v2, though; hence the confusion.

    If you have two drives die in Vail, you could potentially lose everything... even WITH duplication enabled!
    (Because your files may be spread over all your drives.)
  • -=Hulk=- - Wednesday, April 28, 2010 - link

    "The NTFS volumes that Drive Extender v2 generates can be treated like any other NTFS volumes, enabling support for NTFS features such as Encrypted File System and NTFS compression that WHS v1 couldn’t handle."

    Are you sure that encryption is supported?

    http://social.microsoft.com/Forums/en-US/whsvailbe...
  • Ryan Smith - Wednesday, April 28, 2010 - link

    EFS (per file encryption), not BitLocker (whole drive encryption).
  • Ryan Smith - Wednesday, April 28, 2010 - link

    With respect to RAID 5, that's certainly an option. Going back to our ZFS comparison, ZFS has a not-quite-RAID mode called RAID-Z that functions very similarly to RAID 5 while maintaining the storage pool concept. So it would be entirely possible to implement this on MS's new low-level file system if they wanted to do the work (and boy, it would be a lot of work!).

    However they won't, and I'll tell you why. Parity RAID is not user friendly. If you lose a drive, not only does performance suffer, but you have to go through a long and slow rebuilding process. Drives fail and MS knows this, which is why they have duplication; in a fully duplicated storage pool it's just as good as (if not better than) RAID 5 when it comes to handling single drive failures. This issue has been hashed over inside and outside of MS since WHS became a product, and Microsoft's decision has been that it's Windows HOME Server: they want something that a novice can handle, and in their eyes duplication is easier than any kind of parity RAID system.

    Duplication also involves a lot less CPU overhead, which is a very big deal for the price range MS is targeting. WHS is intended to run decently on Atom processors in order to keep system costs down along with power consumption and heat dissipation. Parity RAID is going to have a large amount of overhead, and while that shouldn't be a huge deal for a proper dual-core CPU (C2D, Athlon/Phenom II), it would be a big deal on an Atom (see the sketch at the end of this comment). So for most of these OEM WHS systems, there's a great deal of truth to any claims that WHS duplication is faster than RAID 5.

    With that said, I don't doubt that parity RAID is still going to be the most popular option for techies. It takes more work to set up and ideally you want a dedicated controller ($$), but it's more space efficient than duplication.

    As for the consumer side, there's a pretty vocal faction within MS that wants the WHS storage pool to always be duplicated (i.e. you can't turn duplication off); these are the server guys looking at how to maximize uptime and minimize the chance of data loss. They won't get their way for WHS v2, largely thanks to the fact that such a requirement would drive up WHS computer costs due to the need for a second (or more) hard drive. But depending on how things go with WHS v2, I would not find it surprising if WHS v3 were forcibly redundant.
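
    On the parity overhead mentioned above, here's a toy sketch of the kind of math a parity array does on every write, versus duplication just writing the block twice. It's illustrative only; real RAID 5 implementations use read-modify-write shortcuts, and none of this reflects WHS internals.

    # Toy illustration of parity RAID write overhead vs. duplication.
    def xor_parity(blocks):
        """Byte-wise XOR of all blocks in a stripe; this is the parity block."""
        parity = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                parity[i] ^= byte
        return bytes(parity)

    stripe = [b"\x10" * 4096, b"\x22" * 4096, b"\x35" * 4096]   # 3 hypothetical data blocks

    # Parity RAID: every write means recomputing parity across the stripe.
    parity = xor_parity(stripe)

    # Duplication: no math at all, the same block is simply written to a second disk.
    duplicate = stripe[0]

    # Recovery after losing one block: XOR the survivors with the parity block.
    rebuilt = xor_parity([stripe[1], stripe[2], parity])
    assert rebuilt == stripe[0]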
  • strikeback03 - Wednesday, April 28, 2010 - link

    Speaking of wanting it to run on Atom processors, I read that for streaming video it is going to transcode the stream on the fly. I can't see an Atom processor doing much transcoding.
  • QChronoD - Wednesday, April 28, 2010 - link

    Considering how many formats most of the new set-top boxes can handle natively, I don't see the system needing to do much transcoding anymore. (Unless you are using a 360 and can't use anything but MPEG-2 and WMV.)
  • strikeback03 - Thursday, April 29, 2010 - link

    The review I saw said it was for remote viewing, so I would assume they mean so you aren't streaming a full-size DVD rip over the internet.
  • strikeback03 - Wednesday, April 28, 2010 - link

    My understanding from another review is that any computer can only be part of one WHS network at a time. So to properly test WHS v2, wouldn't you need not only a spare machine to run it on, but also other systems that don't need to stay on the main WHS network?
  • Roland00 - Wednesday, April 28, 2010 - link

    With how cheap flash memory is, and the fact that they are no longer using the system partition as a landing zone, I don't see why OEMs don't put the system partition on a bundled flash drive. Make the system partition on a bundled flash drive, then have a spot for a second flash drive you plug in, and schedule automatic backups of the system partition onto the second flash drive. If the system partition drive fails, you move the bundled second drive up a USB port, then buy a generic USB flash drive to plug into the second slot and use that as the backup for your system partition.

    4GB of flash memory goes for $10 on the open market. There is little reason not to do this.
  • funkyd99 - Wednesday, April 28, 2010 - link

    These were my thoughts exactly! A small, quick 40GB SSD (although it sounds like 60GB is written into the system requirements) containing the OS, tucked away inside a Chenbro ES34069 mini-ITX case, leaving 4 hot-swappable bays available for the data disks. That's my dream home server, anyway :) I'm not sure where some of you find room for 10+ drive cases!
  • clex - Wednesday, April 28, 2010 - link

    I put mine in the guest room. When people stay in there they usually say, "I can barely hear it." Then, at about 1 in the morning the backups begin and all those drives start buzzing. After that I shut it off at night and forgo backups while anyone is visiting.
  • Ryan Smith - Wednesday, April 28, 2010 - link

    The flash used in an SSD is much more expensive than the cheap stuff that goes into USB flash drives and such, since it needs to be far more reliable and faster. Right now, for the price of a 64GB SSD you can pick up a 1.5TB hard drive.

    For that reason I don't expect to see OEMs go the SSD route on WHS boxes any time soon. They'll simply stick in one large, cheap hard drive.
  • Roland00 - Wednesday, April 28, 2010 - link

    I understand that SSD memory is the higher-binned stuff, but is it really necessary for a WHS system drive? If you aren't using the system drive as a landing drive, then you don't really need a big system drive, nor are you writing to it all the time like a traditional OS drive. You don't need an SSD. I don't see why you couldn't limit the system drive to 4 or 8GB of data and just use cheap flash memory.

    And if the cheap flash memory fails, it doesn't matter, because the OEMs have an automatic backup on another cheap flash drive that the user provides. If the flash drive fails, just plug in a new one.

    It would be so much easier than using one big hard drive as both a storage drive and the system drive. If that "combo" drive fails, it is a pain in the ass for the non-technically inclined to rebuild the array (it is easy if it is just a storage drive). My mom can easily move a flash drive from slot B to slot A if the computer tells her a problem is occurring with slot A. My mom is also educated enough to go buy a new 4 or 8GB flash drive from Best Buy or another retail store if the OS tells her she needs a new storage device.

    From an HP or Acer standpoint, it just makes no sense to me not to do this.
  • AMv8(1day) - Thursday, April 29, 2010 - link

    That would be perfect. I was planning on separating the OS and storage so that I could run up to six 2TB HDDs exclusively for storage, with a 32-64GB 2.5" SSD as the primary OS drive and a cheap USB stick as a backup.
    By the way, anyone looking for a good small box capable of holding 5+ HDDs should look at the mini-ITX/DTX-capable Lian Li PC-Q08 or the Fractal Design Array mini-ITX case. There are a few mini-ITX boards that have come out recently that support 6 SATA ports + 1 eSATA port.
  • gipper - Wednesday, April 28, 2010 - link

    Please please please let this fix the file conflicts issue.

    These corrupt files are KILLING me. Duplication is virtually worthless right now, and how in the heck can it NOT tell you WHICH physical disk is giving it trouble?
  • clex - Wednesday, April 28, 2010 - link

    Supposedly it's fixed in this version. You can even remove hard drives that have files open on them; files open on another computer can now be moved around the drive pool.
  • Dug - Wednesday, April 28, 2010 - link

    Try Disk Management Add-In.
  • rrinker - Wednesday, April 28, 2010 - link

    I've had my system running for a year and a half, storing everything from software installers to movies and music, and have not had one problem with file corruption on duplicated folders - and I have EVERYTHING duplicated. I only log in to the box once in a while to check on disk space; otherwise it simply sits there and runs unattended on the floor alongside my desk. I used all WD Green drives in it, and with 5 drives it's quieter than the single 750GB Black drive in my desktop.
    The Disk Management add-in is probably the best thing ever; it displays all the stats on each drive, including SMART statistics, so you can see drive activity and temperatures. The wireframe model is great when you roll your own and have 5 or 6 drives inside a case - it gives you an easy way to relate a specific device to its physical location inside the box.
    I'll say it again: the new version can't come soon enough. I want to use 2TB drives so I don't have to find a motherboard with 10+ SATA ports on it. It's going to take a while to copy all my data to a new box, but starting over will be worth it to take advantage of AHCI mode, large drives, and the latest in low-power processors (my existing server has an AMD 4050e dual core). I can't see how I would have been happy with a plain old NAS device in this role; I started with 1.5TB in this box and expanded it to over 4TB. Combined with a media player (I have a Popcorn Hour), it can't be beat. Stable, reliable, and nearly invisible.
  • koshling - Wednesday, April 28, 2010 - link

    One thing this level of abstraction could provide is copy-on-write semantics, allowing file hierarchy copies to be near instantaneous (just duplicate the block references), with block changes only happening (splitting the modified copies) on subsequent writes. This is not only a lot faster but also a great deal more storage efficient, since duplicated files that are NOT subsequently modified are only stored once. I wonder how this would play with SandForce if the underlying storage pool is made up of SandForce SSDs - it might eliminate a lot of the redundancy SandForce uses to extract its performance?
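
    A minimal sketch of what block-level copy-on-write looks like in general (purely illustrative; the class names are made up and nothing here reflects how Drive Extender actually stores blocks):

    # Files are lists of references into a shared block store; copying a file just
    # copies the references, and a block is only physically duplicated when one of
    # the copies modifies it ("split on write").
    class BlockStore:
        def __init__(self):
            self.blocks = {}      # block id -> bytes
            self.next_id = 0

        def put(self, data):
            self.blocks[self.next_id] = data
            self.next_id += 1
            return self.next_id - 1

    class File:
        def __init__(self, store, block_ids):
            self.store = store
            self.block_ids = list(block_ids)

        def copy(self):
            # Near-instant "copy": only the block references are duplicated.
            return File(self.store, self.block_ids)

        def write_block(self, index, data):
            # The modified block gets new physical storage; the rest stays shared.
            self.block_ids[index] = self.store.put(data)

    store = BlockStore()
    original = File(store, [store.put(b"AAAA"), store.put(b"BBBB")])
    clone = original.copy()
    clone.write_block(1, b"CCCC")

    print(len(store.blocks))   # 3 physical blocks backing 4 logical block references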
  • -=Hulk=- - Wednesday, April 28, 2010 - link

    Do you think it is still possible to use a hardware RAID 1 solution (like the Intel chipset's) instead of folder duplication?
    I find the folder duplication feature totally useless (for my usage); all my folders would have to be duplicated anyway, and the downside would be that the system partition still wouldn't be duplicated.

    That explains why I use RAID 1 with my WHS v1, and I would like to continue to use it with WHS v2, even if there seems to be the ability to back up the system partition twice a day if needed. But I still prefer real-time full mirroring of the HDDs; two system backups a day don't seem like enough in my opinion.

    But the Drive Extender in WHS v2 works at a lower level than the NTFS file system, unlike the one in WHS v1.

    Do you think I will be able to use RAID 1 with Vail?
  • davepermen - Wednesday, April 28, 2010 - link

    You should still not use RAID. What if you want to expand your pool to 3, 4, 5 disks? Of different sizes?

    The disk model they use is all about flexibility. Once you understand that, you never want to touch RAID again.

    As for the system disk, well, you can back it up now. I'll have to test that out soon; still thinking about a plan for how to test the beta :)
  • -=Hulk=- - Thursday, April 29, 2010 - link

    "you should still not use raid. what if you want to expand your pool to 3, 4, 5 disks? of different sizes?"

    I don't need it. I prefer to buy new high capacity discs than having too many discs that increases the probability of HDD failure.

    I bought 2 1TB HDD for WHS v1 for 2 years, and there are still enough space left. With WHS v2 I will buy 2 2TB HDD which are not too expensiv and use the 2 old 1TB ones as external HDD for server backup.
  • AMv8(1day) - Thursday, April 29, 2010 - link

    So I'll be honest, I'm more of a network/hardware guy; I'm not familiar with a lot of the nuances of file storage and server implementation. But I was hoping to run the OS on a 32GB SSD and keep it separate from the storage pool, keeping the critical OS functions relatively safe from drive failure and allowing all of the HDDs to go to sleep while the OS-specific processes run exclusively on the lower-powered SSD. Will this be feasible with the new Drive Extender v2? As far as I know it wasn't possible with v1.
  • -=Hulk=- - Thursday, April 29, 2010 - link

    It won't work with a 32GB SSD.

    The system partition has to be at least 160GB.
  • Ryan Smith - Thursday, April 29, 2010 - link

    160GB for the beta, but we don't know what the final requirements will be.

    As it stands, a full install is only around 12GB after installing the latest round of Windows updates. If you're willing to get your hands dirty, it's possible to do a WHS v2 install such that it installs to a system partition smaller than 60GB (the partition size it currently creates) and avoids using the system disk as a storage pool disk, so long as you have another disk already in the system. This would let you install WHS v2 to a smaller SSD right off the bat, and right now it looks entirely practical that you could do this with a 30GB/32GB SSD.

    However, it's a lot more effort than a regular WHS v2 install, and of course even small SSDs are expensive. MS also won't support it, but since this is an OEM product in the first place, there is no meaningful end-user support anyway.
  • Marsha - Sunday, May 2, 2010 - link

    This is just a weird addition. There is plenty of error checking and correction at the lowest level on drives. No MS, Sun, EMC, or NetApp storage system needs anything like this at all. It wastes an enormous amount of space (12%) and will certainly slow server functions on gigabit networks. Why is nobody raising this as the big issue it really is?
