7 Comments
RU482 - Monday, June 25, 2018 - link
Are EDSFF (Intel) and NF1/NGSFF (Samsung) competing form factors for this type of server storage? From the pictures, they are physically different: the Samsung solution is more M.2-compatible, with the connector interface centered on the PCB, while the Intel solution seems more "designed for 1U servers" and the associated thermal challenges, but also has an offset connector.

The confusing part (for me at least) is that both companies are listed as EDSFF consortium members... so, hey, why not pick a standard form factor and then produce the product?
Kristian Vättö - Monday, June 25, 2018 - link
Ruler and NF1 are competing standards. Ruler is being standardised at SNIA, whereas NF1 is at JEDEC. Both are designed for 1U servers and enable a maximum of 576TB in 1U using 16TB SSDs (Intel's "long" ruler could enable twice that, if/when it comes to market). NF1 uses the M.2 connector to lower enabling cost, while ruler uses a new connector, which is a bit more versatile (supports up to x8).

Samsung is a member of the SNIA and EDSFF board, but that doesn't mean Samsung is a supporter of the ruler spec. Similarly, Intel is a JEDEC member, but they are not supporting NF1.
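The 576TB figure above is simple multiplication; a quick sketch, assuming the slot count and per-drive capacity quoted in the comment (36 slots, 16TB SSDs):

```python
# Sanity check of the 1U density figure quoted in the comment.
# Assumptions (from the comment): 36 drive slots in a 1U chassis,
# 16 TB per SSD; the "long" ruler doubles per-drive capacity.
slots_per_1u = 36
tb_per_ssd = 16

total_tb = slots_per_1u * tb_per_ssd
print(total_tb)        # 576 TB, matching the figure above

long_ruler_tb = total_tb * 2
print(long_ruler_tb)   # 1152 TB if/when the "long" ruler ships
```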
The_Assimilator - Tuesday, June 26, 2018 - link
tl;dr Ruler will win because of its higher bandwidth.

close - Tuesday, June 26, 2018 - link
Firewire is shedding a tear reading your comment :).

The_Assimilator - Tuesday, June 26, 2018 - link
Firewire's mistake was that it tried to compete in the consumer market, where cheap always wins over fast - hence USB. In the server market, speed and form factor are worth more, ergo Ruler should win - assuming Intel isn't stupid enough to tie it to only their chipsets, charge a licensing fee for using it, or commit some other greedy idiocy.

Kristian Vättö - Wednesday, June 27, 2018 - link
Bandwidth of a single drive is meaningless when you have 36 drives sitting behind two CPUs that don't even have 4 lanes to give each drive, let alone trying to get all that bandwidth through a network. Even if that problem were solved, you would next run into thermal issues, because x8 means twice the power, and 36 drives pulling over 20W each is not realistic to cool with the limited airflow between the drives.

tl;dr Ruler merely adds complexity to server designs and supply chain management by introducing too many options with very limited viability in the real world.
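The lane-budget and thermal points above can be checked with back-of-the-envelope arithmetic; a minimal sketch, assuming a hypothetical dual-socket platform with 48 usable PCIe lanes per CPU (a platform figure not stated in the comment; the 36-drive count and >20W-per-drive figure are from the comment):

```python
# Lane budget: how many lanes each drive gets if every CPU lane
# went to storage (an optimistic assumption in practice).
drives = 36
cpus = 2
lanes_per_cpu = 48   # assumption: typical dual-socket server CPU

lanes_per_drive = (cpus * lanes_per_cpu) / drives
print(lanes_per_drive)   # ~2.67 lanes per drive: less than x4, let alone x8

# Thermal: total drive power in 1U at the >20 W per x8 drive
# figure quoted in the comment.
watts_per_drive = 20
print(drives * watts_per_drive)   # 720 W of drive power alone
```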
stifan - Tuesday, July 3, 2018 - link
Thanks for this great post. This is really helpful for me. Also, see https://epsxeapp.wordpress.com/