52 Comments

  • lilmoe - Tuesday, August 9, 2016 - link

    They're holding back too darn much with this one. Get it out in the wild already.............
    IO is the next big thing in computing, it seems.
  • rahvin - Tuesday, August 9, 2016 - link

    There are several potential competing technologies on the edge of production. Intel/Micron don't want to tip their hand until the product is shipping. This creates doubt around the competing techs and gives Intel/Micron an edge. This is business 101. They probably won't say a word until you can buy it.
  • frenchy_2001 - Tuesday, August 9, 2016 - link

    True, but still disappointing from Intel and Micron.
    They announced it LAST YEAR, at the same show, saying it would come to market "fairly soon".
    That was rather disingenuous, as we can see a year later, and done purely to handicap competing techs (many companies are working on similar products, all claiming "soon").
    I expected better from Intel. They are usually much better with their launch estimates, and this announcement feels like they abused that trust.
  • Kevin G - Wednesday, August 10, 2016 - link

    The expectation was that NVMe solutions would arrive in 2016, with NVDIMM solutions launching shortly after SkyLake-EP (due in 2017).
  • Refuge - Wednesday, August 10, 2016 - link

    I mean... I love and adore Intel because they have fed/clothed/housed me for the first half of my life (Dad just retired this year).

    But being close to the company and its goings-on, I can say that I would never expect better than this from them.

    They are good, too good at times even. But they didn't get to where they are now by playing fair and nice. They will cock-shot the competition anytime they can get away with it.
  • ddriver - Wednesday, August 10, 2016 - link

    Don't waste your love; Intel has definitely taken more from your dad than they've paid him. If they hadn't, they wouldn't be posting such high profits. Intel is a big, greedy and quite frankly lazy monopolist. They've barely made any improvements in the last few years, and where they did, it isn't really worth it. iGPUs have improved quite a lot, but most people out there would prefer that silicon be spent on twice the core count rather than mediocre graphics. Sure, they have added more cores to certain product lines, but at very high cost and poor purchase value. They also failed miserably in mobile devices and practically gave up on that market, IMO because their business model cannot cope with the kind of slim profit margins ARM chips have made standard for mobile. Intel is in lazy mode, and if by chance they release something that's actually new and better, it will undoubtedly cost an arm and a leg, not because it is that expensive to manufacture, but because of shameless profit margins.
  • Samus - Wednesday, August 10, 2016 - link

    I agree. Intel has been incredibly lazy since the Core microarchitecture, and completely threw in the towel around Nehalem/Sandy Bridge (2008-2010), and we all know why.

    No competition. AMD challenged Intel back when we were being fed crap like NetBurst, and Intel, which works fantastically under pressure, fired back hard while starving AMD of OEM contracts. The problem now is obviously that there is no pressure, but there is also no sign Intel is going to take the kind of risks it took with NetBurst again and give the competition the "in" they need to compete.
  • ddriver - Thursday, August 11, 2016 - link

    At this point it looks like XPoint is about 95% hype, meaning it will offer about 5% of the benefits Intel claimed. Density is pathetically low, much lower than 3D NAND; endurance is only about 2.5x better than MLC (which would make it lower than SLC); and real-world tests indicate it will be 2-3x better than current enterprise PCIe SSDs, largely thanks to the controller, which SSDs can easily improve upon. At this rate, Intel's claims of "affordability" will end up being just as bogus as the performance claims, and by the time actual products arrive, Samsung's stacked SLC will be trashing it.
  • Samus - Wednesday, August 10, 2016 - link

    I'd suspect the reason for delaying the launch is demand. There is little real-world application for this right now because platforms and software are not really designed to take advantage of ultra-fast IO... perhaps Micron is waiting on Intel's partners for a full-scale launch.

    Sadly, the majority of customers from SMB to enterprise are satisfied with SATA solutions, hence the rarity of SAS SSDs. For those that need more, there are PCIe SSDs in a variety of form factors offering 2000+ MB/sec performance, and even this is grossly underutilized in almost all non-application-specific scenarios.

    What Intel and Micron have here is, at the end of the day, bottlenecked by the interconnects of the platform, specifically common network infrastructure. Even teamed 10-gigabit implementations will be easily saturated by a single PCIe x8 SSD. So there is no realistic application for datacenters, either.

    As of right now, the only markets for this thing are ultra-heavy databases (like... millions of IOPS) and linear 8K video editing, an equally niche market. This is, after all, the first real jump in storage innovation in over a decade.
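A quick back-of-the-envelope check of the teamed-10GbE claim above. The PCIe 3.0 rate and 128b/130b encoding are standard figures; the comparison itself is just an illustrative sketch, not a benchmark.

```python
# Rough bandwidth comparison: teamed 10GbE links vs. a PCIe 3.0 x8 SSD.

def pcie3_bandwidth_gbs(lanes: int) -> float:
    """Raw PCIe 3.0 bandwidth in GB/s, before protocol overhead."""
    gt_per_s = 8.0            # 8 GT/s per lane
    encoding = 128 / 130      # 128b/130b line encoding
    return gt_per_s * encoding * lanes / 8  # bits -> bytes

teamed_10gbe_gbs = 2 * 10 / 8    # two bonded 10GbE links, in GB/s
ssd_x8_gbs = pcie3_bandwidth_gbs(8)

print(f"Teamed 10GbE:        {teamed_10gbe_gbs:.2f} GB/s")
print(f"PCIe 3.0 x8 ceiling: {ssd_x8_gbs:.2f} GB/s")
```

Even before protocol overhead, the x8 slot offers roughly three times the bandwidth of two bonded 10GbE links, so the network does become the choke point.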
  • Anato - Thursday, August 11, 2016 - link

    They were sampling these a year ago. I seriously doubt they can keep the performance secret from competitors (like Samsung, SK Hynix, IBM). I'd be surprised if competitors haven't had XPoint under a microscope since soon after sampling, or at least seen a report from microscopy analysis.
  • beginner99 - Wednesday, August 10, 2016 - link

    True. We want eDRAM/L4 caches. Why no Skylake-C, Intel? WTF? The only hope is that Kaby Lake-X will have it, but I'm not holding my breath. We want faster RAM and we want faster storage. With an L4 cache and 3D XPoint sitting between RAM and storage as a cache, SSDs for consumers could become obsolete.
  • JoeyJoJo123 - Wednesday, August 10, 2016 - link

    >We want faster RAM
    Doesn't make a huge difference in most applications, and particularly not games (unless you're using an integrated graphics package with no eDRAM).

    >and we want faster storage
    Faster storage makes a smaller and smaller impact as it gets faster through the generations. Improving read/write access over HDDs by 70% was a big thing for SSDs and the way they affect the general snappiness of a system; improving it by another 70% has much less of an upfront impact than before.

    Eventually those big improvements will amount to a few dozen nanoseconds, if anything, which is hardly a big deal.
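The diminishing-returns point above can be sketched with a toy load-time model: total time is non-storage work plus serialized I/O waits, so once the I/O term is small, halving it again barely registers. Every number below is invented purely for illustration.

```python
# Toy model: total load time = fixed non-storage work + serialized I/O waits.

def load_time(io_ops: int, io_latency_s: float, other_work_s: float) -> float:
    """Total time in seconds for a load dominated by CPU work plus I/O."""
    return other_work_s + io_ops * io_latency_s

OTHER = 5.0   # seconds of CPU-bound work during a hypothetical game load
OPS = 2000    # number of serialized storage requests (assumed)

for name, lat in [("HDD ~10ms", 10e-3), ("SATA SSD ~100us", 100e-6),
                  ("NVMe ~20us", 20e-6), ("XPoint-class ~10us", 10e-6)]:
    print(f"{name:20s} total load: {load_time(OPS, lat, OTHER):6.2f} s")
```

The HDD-to-SSD step cuts the total dramatically; every step after that shaves fractions of a second because the fixed work dominates.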
  • wumpus - Friday, August 12, 2016 - link

    You can't have "faster RAM/faster Storage", but your suggestion seems about right.

    I'd rather have HBM[2/3]-connected DRAM as a cache and use this "other stuff" (similar to memristors, but not quite) as primary storage. eDRAM is bigger than it should be and tends to disappoint.

    Can you get lower-latency DRAM? I remember back in the dawn of time (especially for 3D graphics, before GDDR[n] took over) there was stuff called MDRAM. Basically it was DRAM organized in smaller squares that had lower latency. I wonder if they could fill the HBM DRAM caches with low-latency DRAM (not really required, but it should work better). Otherwise it would mainly be for high-thread-count CPUs (and software) and iffy elsewhere.
  • Eden-K121D - Tuesday, August 9, 2016 - link

    Let's see how it compares with NAND on price and performance. The SM961 sure looks sweet to me right now.
  • MrSpadge - Tuesday, August 9, 2016 - link

    The performance is significantly better (otherwise they would not introduce it to the market), but it will require new controllers and probably faster interconnects to fully reveal its potential. For that simple reason, expect higher prices than for NAND, especially initially. Somewhere between NAND and DRAM seems like a fair guess.
  • saratoga4 - Tuesday, August 9, 2016 - link

    FWIW, Intel has been planning NVMe with their custom XPoint controllers from the beginning, so that shouldn't be a problem. I don't think NVMe latency will be too big a bottleneck either (at least initially): the main advantage Intel seems to be pushing is latency, and NVMe interface latency is an order of magnitude lower than the NAND delay, so they've got at least a factor-of-10 improvement to work with.
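The latency-stack argument above, spelled out: total read latency is media latency plus interface/software overhead, so a much faster medium still helps until the interface dominates. The figures below are illustrative assumptions, not measurements.

```python
# Latency stack: total = fixed interface overhead + media read time.

NVME_OVERHEAD_US = 10.0    # controller + driver + interrupt path (assumed)
NAND_MEDIA_US = 100.0      # typical NAND read (assumed round number)
XPOINT_MEDIA_US = 1.0      # XPoint-class media read (assumed)

nand_total = NVME_OVERHEAD_US + NAND_MEDIA_US
xpoint_total = NVME_OVERHEAD_US + XPOINT_MEDIA_US

print(f"NAND over NVMe:   {nand_total:.0f} us")
print(f"XPoint over NVMe: {xpoint_total:.0f} us "
      f"(~{nand_total / xpoint_total:.0f}x better, capped by the interface)")
```

With these numbers the device improves about 10x end to end; shrinking media latency further mostly just exposes the NVMe overhead.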
  • frenchy_2001 - Tuesday, August 9, 2016 - link

    XPoint slots between NAND and DRAM.
    It's:
    + faster than NAND (about 50% slower than DRAM, so 10s if not 100s of times faster than NAND)
    + much better write endurance than NAND (100Ks if not millions of cycles)
    + direct addressing like DRAM (enabled by the endurance)
    - less dense than NAND (but much denser than DRAM)
    - hence its cost will be between NAND and DRAM

    Its first function will be as an additional level of storage, between DRAM and NAND (PCIe bus, NVMe protocol). They had talked about replacing DRAM in some cases (NVDIMMs), but this seems pushed back.
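The tiering described above, as a rough table. All figures are ballpark assumptions for illustration, not vendor specs.

```python
# Memory/storage hierarchy with XPoint as the middle tier (ballpark numbers).

hierarchy = [
    # (tier, read latency, relative $/GB, byte-addressable)
    ("DRAM",      "~0.1 us",  "high",   True),
    ("3D XPoint", "~1-10 us", "medium", True),
    ("NAND SSD",  "~100 us",  "low",    False),
]

for tier, latency, cost, byte_addr in hierarchy:
    print(f"{tier:10s} latency={latency:9s} cost={cost:7s} "
          f"byte-addressable={byte_addr}")
```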
  • p1esk - Tuesday, August 9, 2016 - link

    Did you write this after reading the marketing slides, or the actual real-world test results?
  • Xanavi - Tuesday, August 9, 2016 - link

    It's denser than NAND, and it is bit-addressable because of the cross-point design.
  • Maxx Hoo - Tuesday, August 9, 2016 - link

    Don't miss the new requirements.
    Great surprise in latency performance; will help IoT-enabled apps.
  • beginner99 - Wednesday, August 10, 2016 - link

    My guess is the DIMMs are delayed because endurance is an issue. You would need trillions of cycles, or even more, to make it usable as RAM. It's just not comparable to what NAND flash currently does.
  • Kevin G - Wednesday, August 10, 2016 - link

    The DIMM form factor needs host memory controller support. The first chip scheduled to include that is SkyLake-EP which is set to arrive sometime next year.
  • TheWrongChristian - Wednesday, August 10, 2016 - link

    So, as the cache memory in the SSD controller, replacing the DRAM cache used in existing designs? It could also replace the pseudo-SLC in the NAND. Something like 32GB of XPoint cache in a 1TB SSD would provide useful latency benefits for cached data, and buffer writes to improve performance and preserve erase cycles much as pseudo-SLC does now, only better.

    Hell, stick that 32GB XPoint cache in a hybrid SMR HDD. The cache could all but hide the shingling overhead for most common usage patterns.
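A toy write-back cache sketches the idea above: a small fast tier (XPoint) absorbs and coalesces writes, flushing them to the slow medium (NAND or an SMR HDD) in batches. All class names and sizes here are invented for illustration.

```python
# Toy write-back cache: fast tier absorbs writes, flushes in batches.

class WriteBackCache:
    def __init__(self, capacity_blocks: int):
        self.capacity = capacity_blocks
        self.dirty = {}      # block -> data held in the fast (XPoint) tier
        self.backing = {}    # the slow medium (NAND / SMR HDD)
        self.flushes = 0     # batch writes that actually hit the slow tier

    def write(self, block: int, data: bytes) -> None:
        self.dirty[block] = data   # rewrites of a hot block coalesce here
        if len(self.dirty) >= self.capacity:
            self.flush()

    def flush(self) -> None:
        # One sequential batch write: friendly to SMR shingles and
        # NAND erase blocks alike.
        self.backing.update(self.dirty)
        self.dirty.clear()
        self.flushes += 1

    def read(self, block: int) -> bytes:
        return self.dirty.get(block) or self.backing.get(block, b"")

cache = WriteBackCache(capacity_blocks=4)
for i in range(10):
    cache.write(i % 5, bytes([i]))   # hot blocks get rewritten repeatedly
print("flushes to slow media:", cache.flushes)
```

Ten logical writes land on the slow medium as only two batch flushes, which is the erase-cycle and shingling win being described.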
  • emn13 - Wednesday, August 10, 2016 - link

    At some point, Intel/Micron claimed DRAM-like performance. However, based on their *own* slides here, it's around 1000 (!) times slower in terms of latency: not quite that slow for reads, much slower for writes.
  • limitedaccess - Wednesday, August 10, 2016 - link

    It was presented as slower than DRAM originally.

    NVMe may also be a bottleneck in this case as well.
  • Flunk - Tuesday, August 9, 2016 - link

    I don't think we're going to see consumer products based on this tech for a while.
  • frenchy_2001 - Tuesday, August 9, 2016 - link

    Indeed. The cost and limited availability mean the first few years will be reserved for higher-margin enterprise products. A good way to recoup development costs.
  • zodiacfml - Tuesday, August 9, 2016 - link

    Even if it were available at twice consumer SSD prices, the advantages would not be noticeable in real-world use.

    This is for enterprises or businesses with very particular needs.
  • beginner99 - Wednesday, August 10, 2016 - link

    I'm not so sure. The biggest advantage of SSDs vs. HDDs isn't bandwidth/max speed but latency. Even my old Intel G2 made a huge difference, and at 80GB it actually had slower max write speed than most HDDs of that time, yet that wasn't noticeable at all.
  • hansmuff - Wednesday, August 10, 2016 - link

    Agreed. Now, for HDDs we had what, ~20ms service times (seek plus rotational latency), and NAND SSDs are about 100us (microseconds) for reads, according to the slides. So it's about a factor of 200 between those technologies. Like you said, even an old SSD makes a huge difference, where the factor may only be 100-150.

    But that's coming from milliseconds times thousands of files for booting Windows, for instance. We're at microseconds times thousands now, and we can already see that newer, much faster SSDs don't make as much of a difference over first-gen ones as one would like to believe. Games load maybe a second or three faster. Some niches like Windows hibernation can really benefit from the higher transfer speeds. But it's kind of "meh".

    Databases will certainly love it; much higher IOPS and lower latency are what they live for. But the performance gain in the client space, I think, is going to be somewhat muted.
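The factor-of-200 arithmetic above, spelled out with the round numbers from the comment; the XPoint device figure is an assumption, not a spec.

```python
# Service-time ratios between storage generations (round numbers).

HDD_SERVICE_S = 20e-3     # ~20 ms seek + rotational latency
SSD_READ_S = 100e-6       # ~100 us NAND read (per the slides)
XPOINT_READ_S = 10e-6     # assumed XPoint-class device read

print(f"HDD -> SSD:    {HDD_SERVICE_S / SSD_READ_S:.0f}x")
print(f"SSD -> XPoint: {SSD_READ_S / XPOINT_READ_S:.0f}x")
# The first jump saved milliseconds per file; the second saves only
# tens of microseconds, which is why it feels muted on the desktop.
```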
  • Maxx Hoo - Tuesday, August 9, 2016 - link

    Well, watch out for video-crazy consumers. Great surprise in latency performance; will help IoT-enabled apps.
  • psychobriggsy - Tuesday, August 9, 2016 - link

    Hmm, QuantX sounds far snazzier than Optane.
  • zodiacfml - Tuesday, August 9, 2016 - link

    Not holding my breath for consumer/enthusiast use, as the improvement versus cost is not substantial.
    Something is cooking in the computing world, where they want more memory despite having SSD/flash performance. I guess there is a big market now for deep learning/AI, where as much data as possible needs to be held in memory.
  • jjj - Tuesday, August 9, 2016 - link

    Any PCIe 4.0 talk?
    And any new hints about pricing and timing? Not asking for old info or speculation, just whether there was any new info during this presentation.
  • HollyDOL - Wednesday, August 10, 2016 - link

    If the slides can be trusted, this thing gets bottlenecked by commonly available PCIe 3 slots, which means it will take at least Cannon Lake to get PCIe 4 (though since that's a die shrink, I'd expect it on Ice Lake instead). Although Kaby Lake claims to have 'Optane support', whatever they mean by that (it obviously runs on currently available CPUs), if Wikipedia can be trusted.
    That is... for common desktops, where PCIe lanes are quite limited.
  • extide - Wednesday, August 10, 2016 - link

    Don't forget we got PCIe 3.0 on Ivy Bridge, which was the 22nm shrink of Sandy Bridge. It is definitely not unprecedented to get a new PCIe rev on a die-shrink generation.
  • jjj - Wednesday, August 10, 2016 - link

    Wasn't really asking about timing, just whether they made any comments on XPoint and PCIe 4.0.
    Toshiba said yesterday they'll have PCIe 4.0 SSDs in 2019, but I hope we see it in PCs much sooner, at least in the Intel E series.
  • fanofanand - Wednesday, August 10, 2016 - link

    It was either Ian or Ryan, but one of them tweeted yesterday about having to choose between three presentations, one of which was on PCIe 4.0. I don't know the timelines, but clearly there is some sort of progress in that area. Hopefully the presentation was to say it will be ready for Kaby/Zen :)
  • jjj - Wednesday, August 10, 2016 - link

    In theory PCIe 4.0 is more or less ready; I was curious if Micron mentioned anything about it as it relates to XPoint.
    As for when we'll see it in PCs, maybe Intel talks about it next week at IDF. Maybe they'll rush it a bit if they expect XPoint to see substantial adoption in the consumer space.
  • Maxx Hoo - Tuesday, August 9, 2016 - link

    Great surprise in latency performance; will help IoT-enabled apps.
  • AnnonymousCoward - Wednesday, August 10, 2016 - link

    Huh? IoT doesn't need speed.
  • Wardrop - Tuesday, August 9, 2016 - link

    Can we get rid of this Maxx Hoo guy in the comments?
  • ddriver - Tuesday, August 9, 2016 - link

    Nah, IoT lives matter!
  • Lolimaster - Wednesday, August 10, 2016 - link

    With the Radeon SSG, I think AMD is developing their own version of 3D XPoint, hence the "next gen memory" after HBM2.

    It would be awesome if they changed the CPU market with an integrated APU+(VRAM+NVDIMM) all in one package.
  • HollyDOL - Wednesday, August 10, 2016 - link

    I understand the reasons for integrated components: it's much faster, more compact, etc. On the other hand, old-school modularity had its benefits, and I kind of liked it more; you bought exactly what you needed. These days you get plenty... and while for some people that has benefits, for others it's just wasted silicon. I wish I could configure a CPU at the hardware level to have no iGPU at all, or a motherboard to have a specific NIC, six SATA-3 ports, U.2, a pump-capable 4-pin fan header, this and that PCIe slot, but no FireWire or sound. Often you pay quite a bit extra to get the one feature you wanted, or you sacrifice the feature.

    Sounds like I'm getting old, doh.
  • BrokenCrayons - Wednesday, August 10, 2016 - link

    You can get iGPU-less chips from AMD. Athlon X4 CPUs have no onboard graphics (though they're probably all harvested A-series processors with faulty graphics). Of course, you'll sacrifice some processor performance for it. I own an X4 860K, which I purchased to replace an old Q6600. The X4 was faster, but surprisingly not by as large a margin as I expected.
  • fanofanand - Wednesday, August 10, 2016 - link

    This is nostalgia for the sake of nostalgia. Having everything on the package decreases interconnect lengths, simplifies wiring, reduces complexity for motherboard manufacturers, decreases latency, and reduces electrical usage. There is absolutely zero reason for having the "modularity" you describe.
  • BrokenCrayons - Wednesday, August 10, 2016 - link

    I admit that I'm conflicted between the benefits of greater component integration and the flexibility of end-user-replaceable components. There's no doubt the reasons for shifting more functions into the CPU package are really good ones, but I disagree with the idea that there's no reason at all to retain the modularity Holly's pointed out. Dispersing heat generation over a larger physical area is a good idea: look at how hot Skylake processors run when they have 72 EUs and eDRAM in the chip package. Those parts are extremely fast for iGPUs, but they still fall short, giving end users justification for adding a dedicated graphics card when they hit a GPU performance wall while still having more than enough CPU power for their workloads. Such is the case with many owners of Sandy Bridge desktops right now, where a GPU and an SSD upgrade will keep them happy for a year or three more. Build-to-order is another good reason to retain some flexibility: OEMs have an easier time swapping or adding parts to suit the needs of their customers, rather than hoping to sell everyone the same system or keeping many differently configured devices for sale that cannot be changed just prior to shipping.
  • FunBunny2 - Wednesday, August 10, 2016 - link

    -- I admit that I'm conflicted when it comes to greater component integration's benefits and the flexibility of end user replaceable components.

    "Integration" is just another word for monopoly. And folks wonder why it's so easy to attack x86 machines?
  • fanofanand - Wednesday, August 10, 2016 - link

    He has made the same stupid IoT comment several times. Nobody gives a rip about IoT. Maybe in 5 years, but the computing power in such tiny devices is simply inadequate today.
  • bronan - Wednesday, August 10, 2016 - link

    lol, this will not be available to consumers at all for the next few years.
    Maybe in the future some heavily cut-down products will hit the consumer market, but don't be surprised if it never gets to consumers at all.
  • fanofanand - Wednesday, August 10, 2016 - link

    I disagree. There are fewer and fewer reasons to pick the EP platform lately, and with a mainstream 6-core part being rumored there will be even fewer. Limiting Optane/QuantX to EP for 2 years could be a shot in the arm for their higher-margin products.
