136 Comments


  • SaturnusDK - Wednesday, January 30, 2019 - link

    The price is the only big surprise here. At $3000 for the CPU alone, and three times that in system price, it's actually pretty decently priced. The performance is as expected, but it will soon be eclipsed. The only question is what price AMD will charge for its coming Zen 2-based processors in the same performance bracket; until then we won't know whether the W-3175X is a worthwhile investment.
  • HStewart - Wednesday, January 30, 2019 - link

    I thought the rumors were that this chip was going to be $8000. I am curious how the Cove version of this chip will perform and when it comes out.

    But let's be honest: unless you are extremely rich or crazy, buying any processor with a large number of cores is crazy. To me it seems like the high-end gaming market is being taken for a ride with all this core-war stuff: buy a high-end chip now just to say you have the highest performance, then purchase a new one next year. Of course there is all the ridiculous process stuff too. It's just interesting that a 28-core Intel chip on 14nm Skylake beats a 32-core AMD chip.

    As for the server side, I would think it more cost-effective to blade together multiple lower-core units than fewer higher-core units.
  • jakmak - Wednesday, January 30, 2019 - link

    It's not really surprising to see a 28-core Intel beating a 32-core AMD. After all, it is no hidden mystery that the Intel chips not only have a small IPC advantage, but are also able to run at a higher clock rate (never mind the power draw). In this case, the Xeon-W excels because these 2 advantages combine across 28 cores, so the 2 extra cores on the AMD side won't cut it.
    It is also obvious that the massive advantage shows up mostly in those cases where clock rate is the most important factor.
  • MattZN - Wednesday, January 30, 2019 - link

    Well, it depends on whether you care about power consumption or not, jakmak. Traditionally the consumer space hasn't cared so much, but it's a bit of a different story when whole-system power consumption starts reaching for the sky. And it's definitely reaching for the sky with this part.

    The stock intel part burns 312W on the Blender benchmark while the stock threadripper 2990WX burns 190W. The OC'd Intel part burns 672W (that's right, 672W without a GPU) while the OCd 2990WX burns 432W.

    Now I don't know about you guys, but that kind of power dissipation in such a small area is not something I'm willing to put inside my house unless I'm physically there watching over it the whole time. Hell, I don't even trust my TR system's 330W consumption (at the wall) for continuous operation when some of the batches take several days to run. I run it capped at 250W.

    And... I pay for the electricity I use. It's not cheap to run machines far away from their maximally efficient point on the curve. Commercial machines have lower clocks for good reason.

    -Matt
  • joelypolly - Wednesday, January 30, 2019 - link

    Do you not have a hair dryer or vacuum or oil heater? They can all push up to 1800W or more
  • evolucion8 - Wednesday, January 30, 2019 - link

    That is a terrible example if you ask me.
  • ddelrio - Wednesday, January 30, 2019 - link

    lol How long do you keep your hair dryer going for?
  • philehidiot - Thursday, January 31, 2019 - link

    Anything up to one hour. I need to look pretty for my processor.
  • MattZN - Wednesday, January 30, 2019 - link

    Heh. That is a pretty bad example. People don't leave their hair dryers turned on 24x7, nor floor heaters (I suppose, unless it's winter). Big, big difference.

    Regardless, a home user is not likely to see a large bill unless they are doing something really stupid like crypto-mining. There is a fairly large distinction between the typical home-use of a computer vs a beefy server like the one being reviewed here, let alone a big difference between a home user, a small business environment (such as popular youtube tech channels), and a commercial setting.

    If we just use an average electricity cost of around $0.20/kWh (actual cost depends on where you live and the time of day, and can range from $0.08/kWh to $0.40/kWh or so)... let's just say $0.20/kWh.

    For a gamer who spends 4 hours a day burning 300W, the cost of operation winds up being around $7/month. Not too bad. Your average gamer isn't going to break the bank, so to speak. Mom and Dad probably won't even notice the additional cost. If you live in a cold environment, your floor heater will indeed cost more money to operate.

    If you are a solo content creator, you might be spending 8 to 12 hours a day in front of the computer, running, say, Blender or encoding jobs in the background. 12 hours of computer use a day @ 300W costs around $22/month.

    If you are GN or Linus or some other popular YouTube site and you are running half a dozen servers 24x7, plus workstations for employees, plus numerous batch encoding jobs on top of that, the cost begins to become very noticeable. Now you are burning, say, 2000W 24x7 (pie-in-the-sky rough average), costing around $290/month ($3480/year). That content needs to be making you money.
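    Those monthly figures are easy to sanity-check; a quick back-of-the-envelope sketch (the flat $0.20/kWh rate and 30-day months are simplifying assumptions, not anyone's actual billed rate):

```python
# Back-of-the-envelope electricity cost for a continuously or
# intermittently loaded PC. Assumes a flat $0.20/kWh and 30-day months.
RATE_PER_KWH = 0.20  # dollars, assumed average

def monthly_cost(watts, hours_per_day, rate=RATE_PER_KWH, days=30):
    """Monthly electricity cost in dollars for a given draw and duty cycle."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * rate

print(monthly_cost(300, 4))    # gamer, 4 h/day @ 300 W  -> 7.2
print(monthly_cost(300, 12))   # creator, 12 h/day @ 300 W -> 21.6
print(monthly_cost(2000, 24))  # studio, 24x7 @ 2000 W   -> 288.0
```

    The 24x7 2000W case works out to about $288/month, or roughly $3,456/year, matching the rounded figures above.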

    A small business or commercial setting can wind up spending a lot of money on energy if no care at all is taken with regards to power consumption. There are numerous knock-on costs, such as A/C in the summer which has to take away all the equipment heat on top of everything else. If A/C is needed (in addition to human A/C needs), the cost is doubled. If you are renting colocation space then energy is the #1 cost and network bandwidth is the #2 cost. If you are using the cloud then everything has bloated costs (cpu, network, storage, and power).

    In any case, this runs the gamut. You start to notice these things when you are the one paying the bills. So, yes, Intel is kinda playing with fire here trying to promote this monster. Gaming rigs that aren't used 24x7 can get away with high burns, but once you are no longer a kid in a room playing a game these costs start to matter. As machine requirements grow, running the machines closer to their maximum point of efficiency (which is at far lower frequencies) begins to trump other considerations.

    If that weren't enough, there is also the lifespan of the equipment to consider. A $7000 machine that remains relevant for only one year and has a $3000/year electricity bill is a big cost compared to a $3000 machine that is almost as fast and only has a $1500/year electricity bill. Or a $2000 machine. Or a $1000 machine. One has to weigh convenience of use against the total cost of ownership.

    When a person is cognizant of the costs then there is much less of an incentive to O.C. the machines, or even run them at stock. One starts to run them like real servers... at lower frequencies to hit the maximum efficiency sweet spot. Once a person begins to think in these terms, buying something like this Xeon is an obvious and egregious waste of money.

    -Matt
  • 808Hilo - Thursday, January 31, 2019 - link

    Most servers run at idle speed. That is a sad fact. The sadder fact is that they have no discernible effect on business processes, because they are in fact projected and run by people in a corp who have a negative cost-to-benefit ratio. Most important apps still run on legacy mainframes or minicomputers - you know, the ones that keep the electricity flowing, planes up, ticketing running, aisles restocked, power plants from exploding, ICBMs tracked. Only social constructivists need an overclocked server. Porn, youtubers, traders, data collectors come to mind. Not making much sense.
  • johngardner58 - Monday, February 24, 2020 - link

    Again, it depends on the need. If you need speed, there is no alternative. You can't get it by just running blades, because not everything can be broken apart into independent parallel processes. Our company once ran an analysis that took a very long time; when time is money, this is the only thing that will fit the bill for certain workloads. Having shared high-speed resources (memory and cache) makes the difference. That is why 255 clustered Raspberry Pis will not outperform most home desktops unless they are running highly independent parallel processes. Actually, the MIPS per watt on such a cluster is probably lower than on individual processors because of the combined inefficiencies of duplicated support circuitry.
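    The intuition that "not everything can be broken apart" is exactly Amdahl's law: the serial fraction of a job caps the speedup no matter how many cores (or blades, or Pis) you throw at it. A small sketch, with a 10% serial fraction chosen purely for illustration:

```python
# Amdahl's law: speedup on n cores when a fraction of the work is serial.
# The 10% serial fraction used below is an illustrative assumption.
def amdahl_speedup(serial_fraction, n_cores):
    """Ideal speedup ignoring communication overhead between cores."""
    return 1 / (serial_fraction + (1 - serial_fraction) / n_cores)

for n in (4, 28, 255):
    print(n, round(amdahl_speedup(0.10, n), 2))
# With 10% serial work, even 255 cores can never exceed 1/0.10 = 10x.
```

    Real clusters do worse than this, since the formula ignores the inter-node communication costs that shared caches and memory avoid.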
  • SanX - Friday, February 1, 2019 - link

    Every second home has a few 1500W space heaters running in winter.
  • johngardner58 - Monday, February 24, 2020 - link

    Server side: depends on workload. Usually a bladed or multiprocessor setup is better for massively parallel (independent) tasks, but cores can talk to each other much, much faster than blades can, as they share caches and memory. So for less parallel workloads (single process, multiple threads: e.g. rendering, numerics & analytics) this can provide far more performance at reduced cost. Probably the best example of the need for core count is GPU-based processing. Intel also had specialized high-core-count Xeon-based accelerator cards with 96 cores at one point. There is a need, even if limited.
  • Samus - Thursday, January 31, 2019 - link

    The problem is that in the vast majority of applications, an $1800 CPU from AMD running on a $300 motherboard (that's an overall platform savings of $2400!) either matches or beats the Intel Xeon. You have to cherry-pick the benchmarks Intel leads in, and yes, it leads by a healthy margin, but they basically come down to 7-zip, random rendering tasks, and Corona.

    Disaster strikes when you consider there is ZERO headroom for overclocking the Intel Xeon, whereas the AMD Threadripper has some headroom, probably enough to narrow the gap in these few-and-far-between defeats.

    I love Intel but wow what the hell has been going on over there lately...
  • Jimbo2K7 - Wednesday, January 30, 2019 - link

    Baby's on fire? Better throw her in the water!

    Love the Eno reference!
  • repoman27 - Wednesday, January 30, 2019 - link

    Nah, I figure Ian for more of a Die Antwoord fan. Intel’s gone zef style to compete with AMD’s Zen style.
  • Ian Cutress - Wednesday, January 30, 2019 - link

    ^ repoman gets it. I actually listen mostly to melodic/death metal and industrial. Something fast paced to help overclock my brain
  • WasHopingForAnHonestReview - Wednesday, January 30, 2019 - link

    My man
  • IGTrading - Wednesday, January 30, 2019 - link

    Was testing done with mitigation for the specific Windows bug that affects AMD's CPUs with more than 16 cores? Or was it done with no attempt to ensure normal processing conditions for ThreadRipper, despite the known bug?
  • eva02langley - Thursday, January 31, 2019 - link

    Insomnium, Kalmah, Hypocrisy, Dark Tranquility, Ne Obliviscaris...

    By the way, Saor and Rotting Christ are releasing their albums in two weeks.

    You might want to check out Carpenter Brut - Leather Teeth and Rivers of Nihil - Where Owls Know My Name.
  • eva02langley - Thursday, January 31, 2019 - link

    Forgot to mention Omnium Gatherum - The Burning Cold
  • PeachNCream - Thursday, January 31, 2019 - link

    "On the power side of the equation, again the W-3175X comes in like a wrecking ball, and this baby is on fire."

    It's more like a Miley Cyrus licking a sledgehammer thing to me.
  • sgeocla - Wednesday, January 30, 2019 - link

    Computex 2018: Intel 28 core 5 Ghz out by end of year.
    February 2019: Intel 28 core 4.5 Ghz, costs 70% more than competing product.
    Intel is early on promises and late on delivery as always.
  • BigMamaInHouse - Wednesday, January 30, 2019 - link

    The CPU is $3000 + a $1500 motherboard + ECC + an extreme case/PSU/AIO.
    Thanks Ian Cutress for the honest review!
    (unlike "JustBuyIt", which gave this fail product (total system) a 4.5/5 rating vs 3.5/5 for the 2990WX because it's expensive!)
  • Morawka - Wednesday, January 30, 2019 - link

    oh wow, i didn't realize the Dominus Extreme was so expensive.
  • tamalero - Wednesday, January 30, 2019 - link

    We're getting a ton of cynical "sponsored" BS articles lately.
  • eva02langley - Thursday, January 31, 2019 - link

    WCCFtech gave the MSI 2080 TI lightning 1600$ GPU a 10/10 for value...
  • FMinus - Friday, February 1, 2019 - link

    RX 570 is 10/10 along with maybe the GTX 1060, everything else is going down the value ladder pretty fast from that point on. For any consumer/gaming oriented GPU that passes the $500 mark I'd give it -1/10 value score.
  • DanNeely - Wednesday, January 30, 2019 - link

    The on-stage demo used a chilled-water setup; that they managed to push that system higher than Ian could with room-temperature water is only to be expected.
  • jardows2 - Wednesday, January 30, 2019 - link

    This seems like a really good processor for a productivity station. I think, especially at the expected price, it would sell really well. That has me puzzled as to why Intel would have such a limited run. The supposed figures are barely enough to send to review sites around the world, let alone support a profitable product line. If they produced 10x as many, they'd probably sell them all. Why is Intel leaving easy money on the table? Something doesn't seem right about this picture.
  • Yorgos - Wednesday, January 30, 2019 - link

    I don't know what's more delusional, Intel selling a 500-watt CPU
    or fanboys thinking that this product will sell well at $3000.

    Listen buddy, most sales of these systems come from Dell, HP and/or Lenovo workstations.
    Nobody is going to bother messing with a CPU that's a fire hazard, gives you the same number of PCIe lanes as a $300 CPU, and obliges you, as Dell/HP/Lenovo, to buy special motherboards from Asus or another manufacturer.
    Finally, the product placement here from Purch Media targets gamers, and Intel targets gamers.
    This is just a shelf product, just like the 8700K and 9900K, there to fight for first place in some benchmarks against Zen, due to the unfortunate circumstance of Zen being that good.
    This product takes binned dies away from much higher-priced Xeons. They are not going to make it available in great numbers, by default, even if they could supply the market.
  • eddman - Wednesday, January 30, 2019 - link

    "delusional", "fanboys", "Listen buddy", "fire hazard"

    You don't have to get so emotional to make a point.
  • Yorgos - Wednesday, January 30, 2019 - link

    I like my comments to be vivid.
    I don't write NPC comments.
  • Arbie - Wednesday, January 30, 2019 - link

    Then you should move to somewhere like Wccftech, where you won't even have to rationalize picking fights and being gratuitously rude. People like you ruin the tone of a quality forum.
  • eddman - Wednesday, January 30, 2019 - link

    Being vulgar and crass isn't the same as being "vivid". If you cannot reply without resorting to name-calling, then this is not the place for you.
  • WasHopingForAnHonestReview - Wednesday, January 30, 2019 - link

    What part of his comment insulted you, snowflake?
  • eddman - Wednesday, January 30, 2019 - link

    I didn't say he's not right.

    None. The point is this is a tech site. There is no need for such remarks.

    "Snowflake"

    This is what I'm talking about. Randomly calling people names with no reason. You don't even know me.
  • PeachNCream - Thursday, January 31, 2019 - link

    I've run a lot of paper and pencil RPGs over the years and I'm disappointed to say that a number of my cookie cutter NPCs had more personality than some of the player characters, but I'm one of those story first GM types.
  • WasHopingForAnHonestReview - Wednesday, January 30, 2019 - link

    Buzzwords or not the man is right.
  • BGADK - Wednesday, January 30, 2019 - link

    You have no idea what professional software costs. In the end my clients don't care if the PC costs $5000, $7000, or $12000.
    The difference disappears when you add the software costs and my fee.
  • Icehawk - Thursday, January 31, 2019 - link

    Yup. At the desktop level we have things like Adobe for $1k/seat/yr.

    Our big iron costs an order of magnitude more than these machines (recent orders were $150k ea and were mid-spec HP boxes). In the end most of the costs of a big server are memory and storage (SSDs). The high heat/energy consumption of this setup would be a concern, especially if in a colo.
  • jardows2 - Wednesday, January 30, 2019 - link

    What are you rambling on about? It's a solid performing product, at a much reduced price than Intel's normal markup. I don't get where you come off thinking this is a fanboy post, and you totally missed my point - why is it limited to so few pieces? In Intel's lineup, it's a winner, and there are plenty of people in workstation markets who will only buy systems with Intel CPUs. So for Intel to make a good performing product, at a much lower than normal for Intel price, but only make a couple thousand of them? What's going on over there?
  • edzieba - Thursday, January 31, 2019 - link

    Because this is a cherry-picked part from a low-volume die production run. Intel doesn't make many XCC dies, and only a handful will tolerate the high voltages and frequencies of this part across all 28 cores. It's also not going to be a big earner at $3000; that may break even on production costs, but it's probably a loss overall once you take R&D into account.
  • mapesdhs - Saturday, February 2, 2019 - link

    A movie company I know buys systems in such bulk, a CPU/system like this wouldn't even show up on their radar. They prefer systems they can buy lots of, for multiple sites with a common setup.

    People are arguing here about A vs. B, about the CPU cost, but as many have pointed out it's often the sw cost and availability which determine what a company will purchase. As for workstation use, especially the prosumer market, that has its own set of issues, especially whether a particular app is written well enough to exploit so many cores. Blender is, but Premiere isn't.
  • FMinus - Friday, February 1, 2019 - link

    Or you can get two TR 2970WX systems and make them work in tandem for what I would think would be almost half the price at this point, considering you can get this Intel gem only pre-built, probably at well-bloated prices.
  • SanX - Friday, February 1, 2019 - link

    Intel is killing it at particle movement -- 4x faster than the TR2. Until AMD adds AVX-512, they are still dead for science.
  • ET - Wednesday, January 30, 2019 - link

    I find it amazing how application dependent performance is. Whether a product is a good buy depends so much on precisely what you're going to do with it, down to the application level.

    Still, on the whole, it looks like Intel has little to offer over AMD's much cheaper Threadripper platform.
  • BigMamaInHouse - Wednesday, January 30, 2019 - link

    I think we're soon gonna see "leaks" about a new 64-core TR. This "5GHz 28C" stunt made AMD release the 2990WX instead of just the 24C 2970WX; now, after this failed attempt by Intel, we're gonna see new leaks :-).
  • FMinus - Friday, February 1, 2019 - link

    Considering AMD was attending the same trade show, where Intel announced this 28 core chip and AMD a day later announced the new TR lineup, I'd say AMD planned to release the 2990WX regardless of what Intel had.
  • mapesdhs - Saturday, February 2, 2019 - link

    Yes, but the tinfoil hat industry is strong. :D
  • Yorgos - Wednesday, January 30, 2019 - link

    It's not only program dependent, it's also scheduler dependent.
    It's been found that the Windows scheduler doesn't treat TR very well and throttles it down. (ref. L1T)
  • MattZN - Wednesday, January 30, 2019 - link

    Yup, in a nutshell. When Microsoft finally fixes that scheduler issue, all of these sites will have to rerun their benchmarks. While it won't run away on performance, the results will start to look more like they should given the HW capabilities. Not a problem for me with Linux, but it's kinda amusing that Windows users are so beholden to bugs like these, and even the professional reviewers get lost when there isn't a convenient UI button that explains what is going on.

    -Matt
  • mapesdhs - Saturday, February 2, 2019 - link

    Is that the same issue as the one referring to running on core zero? I watched a video about it recently but I can't recall if it was L1T or elsewhere.
  • jospoortvliet - Sunday, February 3, 2019 - link

    It is that issue, yes. Blocking use of core 0 is a workaround that kinda works.
  • jospoortvliet - Sunday, February 3, 2019 - link

    (in some workloads, not all)
  • Coolmike980 - Monday, February 4, 2019 - link

    So here's my thing: why can't we have good benchmarks? Nothing here on Linux, and nothing in a VM. I'd be willing to bet good money I could take a 2990, run Linux, run 5 VMs of 6 cores each, run these benchmarks (the non-GPU-dependent ones), and collectively beat the pants off this CPU under any condition you want to run it. Also, this Civ 6 thing - the only benchmark that would be of any value is the CPU one, and they've been claiming to want to make it work for 2 years now. Either get it working or drop it altogether. Rant over. Thanks.
  • FlanK3r - Wednesday, January 30, 2019 - link

    Where are the Cinebench R15 results? They're in the testing methodology, but I can't find them in the results :)
  • MattsMechanicalSSI - Wednesday, January 30, 2019 - link

    der8auer did a delid video, and a number of CB runs. https://www.youtube.com/watch?v=aD9B-uu8At8 Also, Steve at GN has had a good look at it. https://www.youtube.com/watch?v=N29jTOjBZrw
  • MattZN - Wednesday, January 30, 2019 - link

    @MattsMechanicalSSI Yup... both are very telling.

    I give the 3175X a pass on DDR connectivity (from the der8auer video) since he's constantly having to socket and unsocket the chip, but I agree with him that there should be a carrier for a chip that large. Depending on the user to guess the proper pressure is a bad idea.

    But, particularly the GN review around 16:00 or so where we see the 3175X pulling 672W at the wall (OC) for a tiny improvement in time over the 2990WX. Both AMD and Intel goose these CPUs, even at stock, but the Intel numbers are horrendous. They aren't even trying to keep wattages under control.

    The game tests are more likely an issue with the windows scheduler (ala Wendel's work). And the fact that nobody in their right mind runs games on these CPUs.

    The Xeon is certainly a faster CPU, but the price and the wattage cost kinda make it a non-starter. There's really no point to it, not even for professional work. Steve (GN) kinda thinks that there might be a use case with Premiere but... I don't really. At least not for the ~5 months or so before we get the next node from AMD (and ~11 months for Intel).

    -Matt
  • mapesdhs - Saturday, February 2, 2019 - link

    Cinebench is badly broken at this level of core count; it's not scaling properly anymore. See:

    https://www.servethehome.com/cinebench-r15-is-now-...
  • Kevin G - Wednesday, January 30, 2019 - link

    For $3000 USD, a 28-core unlocked Xeon chip isn't terribly bad. The real issue is its incredibly low-volume nature, and the fact that, in effect, only two motherboards are going to support it. LGA 3647 is a widespread platform, but the high 255W TDP keeps it isolated.

    Oddly I think Intel would have had better success if they also simultaneously launched an unlocked 18 core part with even higher base/turbo clocks. This would have threaded the needle better in terms of per thread performance and overall throughput. The six channel memory configuration would have assisted in performance to distinguish itself from the highend Core i9 Extreme chips.

    The other aspect is that there is no clear upgrade path from the current chips: pretty much a one-chip-to-one-board ratio for the lifetime of the product. There is a lot on the Xeon side Intel has planned, like on-package FPGAs, Omni-Path fabric, and Nervana accelerators, which could stretch their wings within a 255 W TDP. The Xeon Gold 6138P is an example of this, as it comes with an Arria 10 FPGA inside, but a slightly reduced-clock 6138 die as well at a 195 W TDP. At 255 W, that chip wouldn't have needed to compromise the CPU side. For the niche market Intel is targeting, an FPGA solution would be interesting if they pushed ideas like OpenCL and DirectCompute to run on the FPGA alongside the CPU. Doing something really bold like accelerating PhysX on the FPGA would have been an interesting demo of what that technology could do. Or leverage the FPGA for DSP audio effects in a full 3D environment. That'd give these users something to look forward to.

    Well there is the opportunity to put in other LGA 3647 parts into these boards but starting off with a 28 core unlocked chip means that other offering are a downgrade. With luck, Ice Lake-SP would be an upgrade but Intel hasn't committed to it on LGA 3647.

    Ultimately this looks like AMD's old 4x4/QuadFX efforts that'll be quickly forgotten by history.

    Speaking of AMD, Intel missing the launch window by a few months places this closer to the imminent launch of new Threadripper designs leveraging Zen 2 and AMD's chiplet strategy. I wouldn't expect AMD to go beyond 32 cores for Threadripper, but the common IO die should improve performance overall on top of the Zen 2 improvements. Intel has some serious competition coming.
  • twtech - Wednesday, January 30, 2019 - link

    Nobody really upgrades workstation CPUs, but it sounds like getting a replacement in the event of failure could be difficult if the stock will be so limited.

    If Dell and HP started offering this chip in their workstation lineup - which I don't expect to happen given the low-volume CPU production and needing a custom motherboard - then I think it would have been a popular product.
  • DanNeely - Wednesday, January 30, 2019 - link

    Providing the replacement part (and thus holding back enough stock to do so) is on Dell/HP/etc via the support contract. By the time it runs out in a few years the people who buy this sort of prebuilt system will be upgrading to something newer and much faster anyway.
  • MattZN - Wednesday, January 30, 2019 - link

    I have to disagree re: upgrades. Intel has kinda programmed consumers into believing that they have to buy a whole new machine whenever they upgrade. In the old old days we actually did have to upgrade in order to get better monitor resolutions because the busses kept changing.

    But in modern times that just isn't the case any more. For Intel, it turned into an excuse to get people to pay more money. We saw it in spades with offerings last year where Intel forced people into a new socket for no reason (a number of people were actually able to get the cpu to work in the old socket with some minor hackery). I don't recall the particular CPU but it was all over the review channels.

    This has NOT been the case for Intel's commercial offerings. The Xeons traditionally have had a whole range of socket-compatible upgrade options; it's Intel's shtick, "Scalable Xeon CPUs", for the commercial space. I've upgraded several 2S Intel Xeon systems by buying CPUs on eBay... it's an easy way to double performance on the cheap, and businesses will definitely do it if they care about their cash burn.

    AMD has thrown cold water on this revenue source on the consumer side. I think consumers are finally realizing just how much money Intel has been squeezing out of them over the last decade and are kinda getting tired of it. People are happily buying new AMD CPUs to upgrade their existing rigs.

    I expect that Intel will have to follow suit. Intel traditionally wanted consumers to buy whole new computers but now that CPUs offer only incremental upgrades over prior models consumers have instead just been sticking with their old box across several CPU cycles before buying a new one. If Intel wants to sell more CPUs in this new reality, they will have to offer upgradability just like AMD is. I have already upgraded two of my AM4 boxes twice just by buying a new CPU and I will probably do so again when Zen 2 comes out. If I had had to replace the entire machine it would be a non-starter. But since I only need to do a BIOS update and buy a new CPU... I'll happily pay AMD for the CPU.

    Intel's W-3175X is supposed to compete against Threadripper, but while it supposedly supports ECC, I do not personally believe the socket has any longevity, and it is a complete waste of money and time to buy into it versus Threadripper's far more stable socket and far saner thermals. Intel took a Xeon design that is meant to run closer to the maximally efficient performance/power point on the curve and tried to turn it into a prosumer or small-business competitor to Threadripper by removing OC limits and running it hot, on an unstable socket. No thanks.

    -Matt
  • Kevin G - Thursday, January 31, 2019 - link

    I would disagree with this. Workstations around here are being retrofitted with old server hand-me-downs from the data center as that equipment is quietly retired. Old workstations make surprisingly good developer boxes, especially considering that the cost is just moving parts from one side of the company to the other.

    Though you do have a point that the major OEMs themselves are not offering upgrades.
  • drexnx - Wednesday, January 30, 2019 - link

    Wow, I thought (and I think many people did) that this was just a vanity product: limited release, ~$10k price, totally a "just because we're chipzilla and we can" type of thing.

    looks like they're somewhat serious with that $3k price
  • MattZN - Wednesday, January 30, 2019 - link

    The word 'nonsensical' comes to mind. But setting aside the absurdity of pumping 500W into a socket and trying to pass it off as a usable workstation for anyone, I have to ask Anandtech ... did you run with the scheduler fixes necessary to get reasonable results out of the 2990WX in the comparisons? Because it kinda looks like you didn't.

    The Windows scheduler is pretty seriously broken when it comes to both the TR and EPYCs and I don't think Microsoft has pushed fixes for it yet. That's probably what is responsible for some of the weird results. In fact, your own article referenced Wendel's work here:

    https://www.anandtech.com/show/13853/amd-comments-...

    That said, of course I would still expect this insane monster of Intel's to put up better results. It's just that... it is impractical and hazardous to actually configure a machine this way and expect it to have any sort of reasonable service life.

    And why would Anandtech run any game benchmarks at all? This is a 28-core Xeon... actually, it's two 14-core Xeons haphazardly pasted together (but that's another discussion). Nobody in their right mind is going to waste it by playing games that would run just as well on a 6-core cpu.

    I don't actually think Intel has any intention of selling very many of these things. This sort of configuration is impractical on 14nm, and nobody in their right mind would buy it with AMD coming out with 7nm high-performance parts in 5 months (and Intel's 10nm probably a bit later this year). Intel has no business putting a $3000 price tag on this monster.

    -Matt
  • eddman - Thursday, January 31, 2019 - link

    "it's two 14-core Xeons haphazardly pasted together"

    Where did you get that info? Last time I checked each xeon scalable chip, be it LCC, HCC or XCC, is a monolithic die. There is no pasting together.
  • eddman - Thursday, January 31, 2019 - link

    Didn't you read the article? It's right there: "Now, with the W-3175X, Intel is bringing that XCC design into the hands of enthusiasts and prosumers."

    Also, der8auer delidded it and confirmed it's an XCC die. https://youtu.be/aD9B-uu8At8?t=624
  • mr_tawan - Wednesday, January 30, 2019 - link

    I'm surprised you put the Duron 900 in the image. That makes me expect test results from that CPU too!!
  • eastcoast_pete - Wednesday, January 30, 2019 - link

    @Ian: Thanks for the review. I guess the "lower" price of this 28-core Xeon shows the benefit of having strong competition in the market - without the large Threadrippers, that price wouldn't have come down from the $ 8,000 mark.
    Two questions: I am still struck by how often the higher-end "consumer" grade CPUs beat the pants off the many-core monsters. Is high single-thread performance still that dominant in the applications in which the 9900K or 2700x lead the pack?
    Second, did Intel really recommend plugging this monster directly into a wall outlet? If yes, wow. Guess you need a surge-protected, line-conditioned house line then, so not exactly standard equipment. Having encountered brownouts and voltage spikes, I wouldn't plug even a $500 PC straight into an unprotected household socket, never mind a $7,000 rig. I guess if that's what they recommend, it doesn't void the warranty when stuff happens.
    My other comment is that this chip is really about workstation-type tasks, and while I know that coming up with more workstation-specific test suites is too specialized, that's where these Xeons and the big Threadrippers start making sense.
    Regarding gaming: As you also hint at, much of that $3,000 budget for the CPU would be better spent on two or more high-end graphics cards (RTX 2080), all-liquid cooling, a hand-selected eight-core CPU, and a large, ultra-wide, fast-refresh, HDR-capable monitor.
  • zepi - Wednesday, January 30, 2019 - link

    Ian is working in the UK. He most likely has something like a 230V single-phase 80A feed into his house, if not 100 or even 120A, depending on whether he has electric heating or gas.

    One main fuse for that surely. Then that phase is split to some smaller circuits feeding into separate rooms & sockets etc. probably 8-16A fuses. Some stronger ones (30+A) if he has electric heaters in the taps / shower without using a boiler & heating circuits.

    Then another fuse in each wall socket. And most likely a fourth fuse inside the actual cable.

    And @230V, the cable "only" needs to support 7A, so it is actually nothing spectacular.

    1500W devices are perfectly fine in Europe, mostly because of the 230V voltage. It just makes things much easier.
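    The arithmetic behind that "only 7A" figure is easy to check; a minimal sketch, assuming a roughly resistive load (power factor ~1) and the I = P / V relation:

    ```python
    # Sanity check of the current draw cited above: I = P / V.
    def current_amps(power_w: float, voltage_v: float) -> float:
        """Current drawn by a load of power_w watts at voltage_v volts."""
        return power_w / voltage_v

    # A 1500 W device on 230 V European mains draws about 6.5 A,
    # comfortably under a 13 A UK socket rating; the same device
    # on 120 V US mains would draw about 12.5 A.
    print(round(current_amps(1500, 230), 1))
    print(round(current_amps(1500, 120), 1))
    ```

    Which is exactly why high-wattage devices are unremarkable in Europe but push the limits of a standard 15 A circuit in North America.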
  • SaturnusDK - Wednesday, January 30, 2019 - link

    Many if not most European households have 3 phase 230V 16A power, so you can power standard 400V appliances.
  • BushLin - Wednesday, January 30, 2019 - link

    In the UK a standard wall outlet is rated for 13A. Our kettles are nearly all 3KW. We value our tea and have built our homes around it.
  • eastcoast_pete - Wednesday, January 30, 2019 - link

    But then, your kettle doesn't require clean sine wave AC current, and won't suffer much if the voltage drops or spikes. In contrast, an expensive rig like this might. My comment wasn't about overall power need of this setup, but my surprise over the "unfiltered wall socket is fine" instruction from Intel.
  • eastcoast_pete - Wednesday, January 30, 2019 - link

    I am quite familiar with the situation in Europe. But, even there, I wouldn't just trust a regular power outlet (220 or 230 V) to provide clean sine power free from interference, voltage drops (brownouts) and voltage spikes, and neither do several friends of mine who live and work in Europe. They also use, at minimum, a good surge protector, and, for expensive systems, a UPS and line conditioner, just like we do here in the States.
  • SaturnusDK - Thursday, January 31, 2019 - link

    Surge protection is built into all regulatory fuse boxes so you don't need that in Europe since 2003 unless the building hasn't been updated to the current building code. Also before 2003 it was 220V in Europe and 240V in the UK. Now it's just 230V everywhere. Last there was a registered brown out in the area I live and work was February 1987... almost 32 years ago. In many areas of Europe it's not even worth considering as a risk anymore. You still need an UPS for obvious reasons though.
  • maroon1 - Wednesday, January 30, 2019 - link

    At least it is faster and has more consistent performance than the 2990WX. Gaming performance is also much better, without the need to disable cores like you do on the 2990WX.
  • tamalero - Wednesday, January 30, 2019 - link

    I'm still scratching my head on who would buy this thing for "gaming" o_O
  • alacard - Wednesday, January 30, 2019 - link

    Damn Ian you're on a roll with this on the heels of your incredible Intel's 10nm Cannon Lake and Core i3-8121U Deep Dive Review. Do you ever sleep?

    There's so much talent here that all you guys really should quit working for Purch and start your own independent tech site where the ads are reasonable and not exploitative. I can imagine everyone running straight to it and supporting it. Make it run on small ads and donations and you'd probably make out like kings.

    Purch doesn't deserve you, period. Take your talents elsewhere.
  • silverblue - Thursday, January 31, 2019 - link

    Future bought the bulk of Purch not too long ago, but if it's same old same old, then I agree.
  • Cooe - Wednesday, January 30, 2019 - link

    Until these benches are all repeated in Linux, all of these results are worthless. Nobody would buy these CPUs for a Windows machine, and the 2990WX is totally borked running in Windows as well.
  • WasHopingForAnHonestReview - Wednesday, January 30, 2019 - link

    True, but the 2990WX is $1300 cheaper and gives roughly the same performance. No one is going to buy this Intel chip unless they have money to burn.
  • maroon1 - Wednesday, January 30, 2019 - link

    Same performance?! Did you look at the benchmarks?! The W-3175X is clearly winning in the majority of benchmarks (and some benchmarks show a big advantage for the W-3175X).

    I agree about the price, but the performance is not the same.
  • MattZN - Wednesday, January 30, 2019 - link

    It depends on what precise application(s) you are running. But yes, the performance is about the same. Those benchmarks are pretty broken for a host of reasons... Windows scheduler notwithstanding, looking at a bunch of benchmarks doesn't really tell you a whole lot about how a machine will work in your actual environment.

    All that matters is whether the machine's performance affects your workflow in a noticeable way or not. Nobody is going to justify buying something like this if all they get out of it is a 15 minute faster encoding on a 2-hour job. Imagine that! Let alone a few seconds here and there, or a slightly slower or faster frame rate. For longer jobs you'll notice if something takes half the time. You won't notice if something is 20% slower or 20% faster. You just won't.

    Many video encoding workloads are GPU accelerated, for example. Many are run as overnight batches, for example. If you go through all the benchmarks in this article, almost none of them are even remotely relevant to actual use cases. The Blender one maybe, Handbrake, and Adobe Premier and that's just about it. And surprise, surprise, the TR2950X or TR2990WX actually wins some of those.

    For example, does anyone actually care how fast 7-zip runs? I sure as heck don't! I zip something up, and it's well nigh instantaneous on just about any machine. Encryption? Nobody cares; it isn't a use case that anyone will notice. Office applications like spreadsheets? Come on... that's ridiculous. A 2-core mobile CPU can update a spreadsheet just as fast as one of these behemoths.

    -Matt
  • GreenReaper - Thursday, January 31, 2019 - link

    It does kinda matter for server-level activities. Say you have a SQL dump that you want to back up without using too much transfer. You can't run nightly backups until it's done. Even compressed it's 4GB. I use xz but it's essentially the same as 7-zip. More threads and faster threads can make a significant difference in run time, and in turn this impacts when you can run backups or how much data you can handle on the system.

    I think you may be mistaken about the 20% difference if it effectively means you have to pay 20% more people. The question, as always, is whether the price and other associated costs are worth the improvement.
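    The backup step described above can be sketched in one line (filenames hypothetical; `xz -T0` spawns one compression worker per core, which is where the extra threads pay off):

    ```shell
    # Nightly backup sketch: compress an existing SQL dump using all cores.
    # -T0 autodetects the thread count (one worker per core), -6 is the
    # default compression level, and -k keeps the original dump around
    # until the compressed copy is verified.
    xz -T0 -6 -k backup.sql    # produces backup.sql.xz alongside backup.sql
    ```

    One caveat: multithreaded xz splits the input into independent blocks, so very small dumps still compress on a single thread; the core count only matters once the file is large enough to split.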
  • FMinus - Friday, February 1, 2019 - link

    Why would you do any of that on this chip that sucks 600W? For all of your listed tasks a dedicated server chip would be better; you run them in batches overnight, as said, so speed really does not matter at that point.

    This chip here is intended as a workstation workhorse, and yes, the price of the chip alone plus the expensive motherboards (which we still don't know will be available to end-users directly) makes this a fairly pointless platform, unless you are running Adobe Premiere 24/7.

    For everything else you are better off with the cheaper AMD and Intel solutions; you can get multiple such systems for the price of one 3175X system, split the workload or make them work together, and they deliver faster results.
  • tamalero - Friday, February 1, 2019 - link

    Not everyone has access to full blown server rendering farms. A lot of remote workers or freelancers would render with this "behemoth". Not everyone can blow 10,000 USD to have a bunch of EPYC servers just standing by.

    Still, this thing doesn't seem THAT good compared to AMD's (in price, performance, and power usage) to justify it.
    Only the AVX-512 benches, I guess.

    But then, Zen 2 is supposed to double AVX throughput, if I remember correctly.
  • WasHopingForAnHonestReview - Thursday, January 31, 2019 - link

    7-zip and some specific renders... the time saving isn't much. It's not even close to making this a worthwhile buy. When you take into account the Windows scheduler bug fix coming, the AMD TR, at $1300 less, is still the obvious winner.
  • eddman - Wednesday, January 30, 2019 - link

    "No one is going to buy this intel chip unless they have money to burn."

    So it's not "No one" then.

    I suppose 3D modeling and rendering studios or individuals that have lots of customers will probably be quite ok with buying these. That price is nothing compared to their income. They probably care more about reducing rendering time than saving a few thousand dollars, which they can recoup in probably a week or two.
  • FMinus - Friday, February 1, 2019 - link

    Not really; 3D rendering is done on specialized render farms. The modeling work, keyframing, etc. can be done on any decent modern mainstream CPU, and especially well on any modern HEDT chip, for prototyping and preview; once satisfied, send it out to render properly.
  • eastcoast_pete - Wednesday, January 30, 2019 - link

    The only scenario where this or similar Xeons do outperform the AMD lineup is if (!) the key application (s) in question make good use of AVX512. In those situations, Intel is still way ahead. In all others, a similar or lower priced Threadripper will give more bang for the buck.
  • Tango - Wednesday, January 30, 2019 - link

    There are scenarios in which this is perfect, and in fact my research department is looking into acquiring two of them. Our algorithms include both highly parallelized instructions and completely non-parallelizable ones where clock speed dominates. We estimate models that take a whole weekend to spit out a result, and the alternative is paying top money for supercomputer time.
    At $3000 it is a steal. The problem is half Wall Street will be sending orders to get one, since the use case is similar for high frequency trading applications.
  • MattZN - Wednesday, January 30, 2019 - link

    I expect all the review sites will redo their 2990WX benchmarks once Microsoft is able to fix the scheduler. The question is really... how long will it take Microsoft to fix their scheduler? That said, nobody should be expecting massive improvements. Some of the applications will improve a ton, but not all of them. It will be more like a right-sizing closer to expected results and less like hitting the ball out of the park.

    -Matt
  • BGADK - Wednesday, January 30, 2019 - link

    Little professional software exists for Linux, so these machines WILL run Windows for the most part.
  • cmcl - Thursday, January 31, 2019 - link

    Agree that there is more professional software for Windows, but in visual effects (where I work), 90% of our workstations (and all render) runs on Linux (24-core workstations, with P6000s), running Nuke, Maya etc. Apart from the gaming benchmarks (and who would buy one of these for gaming), a lot of the tests could be done in Linux as that software runs on Linux
  • Icehawk - Thursday, January 31, 2019 - link

    Workstations aside, these mega-core beasts are run as VM hosts on bare metal. I don't have a single server here that just runs an OS & app suite, it's not 2000 anymore everything is virtualized as much as possible.
  • WasHopingForAnHonestReview - Wednesday, January 30, 2019 - link

    Holy shit. AMD absolutely bent Intel over on this one. The price-for-performance ratio is overwhelmingly in AMD's favor! Intel would have released this for $8K if the 2990WX wasn't so competitive!

    WOW!
  • GreenReaper - Thursday, January 31, 2019 - link

    They probably wouldn't have released it at all. As noted, most of these could easily be server cores on which they could make plenty more money. This appears to be largely a PR effort.
  • jcc5169 - Wednesday, January 30, 2019 - link

    Who in the world would buy this over-priced piece-of-crap?
  • tamalero - Wednesday, January 30, 2019 - link

    Aaah yes.. the presenter "forgot" to say it was heavily overclocked..
  • arh2o - Wednesday, January 30, 2019 - link

    Hey Ian, nice review. But you guys really need to stop testing games with an ancient GTX 1080 from 1H 2016...it's almost 3 years old now. You're clearly GPU bottle-necked on a bunch of these games you've benchmarked. At least use a RTX 2080, but if you're really insistent on keeping the GTX 1080, bench at 720p with it instead of your IGP. For example:

    Final Fantasy XV: All your CPUs have FPS between 1-4 frames of difference. Easy to spot GPU bottleneck here.

    Shadow of War Low: Ditto, all CPUs bench within the 96-100 FPS range. Also, what's the point of even including the medium and high numbers? It's decimal point differences on the FPS, not even a whole number difference. Clearly GPU bottle-necked here even at 1080p unfortunately.
  • eddman - Wednesday, January 30, 2019 - link

    Xeons don't even have an IGP. That IGP in the tables is simply the name they chose for that settings, which includes 720 resolution, since it represents a probable use case for an IGP.

    Anyway, you are right about the card. They should've used a faster one, although IMO game benchmarks are pointless for such CPUs.
  • BushLin - Wednesday, January 30, 2019 - link

    I'm glad they're using the same card for years so it can be directly compared to previous benchmarks and we can see how performance scales with cores vs clock speed.
  • Mitch89 - Friday, February 1, 2019 - link

    That’s a poor rationale, you wouldn’t pair a top-end CPU with an outdated GPU if you were building a system that needs both CPU and GPU performance.
  • SH3200 - Wednesday, January 30, 2019 - link

    For all the jokes it's getting, doesn't the 7290F actually run at a higher TDP using the same socket? Couldn't Intel have just taken the coolers from the Xeon DAP WSes and used those instead?
  • evernessince - Wednesday, January 30, 2019 - link

    How is $3K priced right? You can purchase a 2990WX for half that price and 98% of the performance. $1,500 is a lot of extra money in your wallet.
  • GreenReaper - Thursday, January 31, 2019 - link

    Maybe they thought since it was called the 2990WX it cost $2990...
  • tygrus - Wednesday, January 30, 2019 - link

    1) A few cases showed the 18-core Intel CPU beating their 28-core. I assume the benchmark and/or OS is contributing to reduced performance for the 28-core Intel and the 32-core AMD (the TR 2950X beats the TR 2990WX a few times).

    2) Do you really want to use 60% more power for <25% increase of performance?

    3) This chip is a bit like the 1.13GHz race: such a small release at such a high cost that most of us should ignore it as a marketing stunt.
  • GreenReaper - Thursday, January 31, 2019 - link

    Fewer cores may be able to boost faster and have less contention for shared resources such as memory bandwidth. This CPU tends to only win by any significant margin when it can use all of its cores. Heck, you have the 2700X up there in many cases.
  • abufrejoval - Wednesday, January 30, 2019 - link

    So with the limited edition production numbers, quite clearly the price can be symbolically low, just so Intel can claim bragging rights: They are not interested in satisfying market demands, especially since far too many workstation and server customers might switch over from a Xeon Scalable offering, they just want to claim victory... everywhere... including 10nm

    Just pathetic!

    And honestly, you shouldn't even report about it. Your mission is to inform consumers on products they can buy. If consumers cannot buy it, you should treat it very, very differently, if at all.

    You're just being abused by Intel to push a brand that suffers for reasons.
  • SH3200 - Wednesday, January 30, 2019 - link

    This product cannibalizes half the current Xeon lineup as it provides ecc/rdimm at a fraction of the cost. I’d be amazed if FSI customers don’t prebuy every single one ever made before it even hits the public.
  • br83taylor - Wednesday, January 30, 2019 - link

    I find it odd Ian is happy to run and report benchmarks with this processor going against Intel's recommended BIOS settings, yet would not do it for the AMD processors with their PBO setting, both of which seem to do very similar things. Letting the 2990WX run with PBO would give it a much fairer chance against this.
  • GreenReaper - Thursday, January 31, 2019 - link

    It's not unreasonable to test the system as given to you, especially when it is provided as a complete system, since that may represent what end-users get. That said, it seems like he kinda called that out in a sideways manner by highlighting the fact that the system Intel shipped to him was not actually using those specifications.
  • outsideloop - Wednesday, January 30, 2019 - link

    This reminds me of the FX-9590. Massively overclocking silicon to keep up. Except now, the shoe is on the other foot.
  • wow&wow - Wednesday, January 30, 2019 - link

    Xeon W-3175X: $2999+$1500= $4499
    Ryzen TR 2990WX: $1799+$300= $2099

    Is Intel trying to market it to the “stupids”, or those whose left brains aren't right and whose right brains have nothing left :-D
  • ksec - Wednesday, January 30, 2019 - link

    I think we need some new innovation in thermal cooling: how to cool more with less, more easily and more cheaply. We've pushed to near 600W for the CPU and GPU alone.
  • mapesdhs - Saturday, February 2, 2019 - link

    Ye cannae beat the laws of physics. :D Thermal density is a hard problem. Check out AdoredTV's video on the subject, it explains things nicely.
  • The_Assimilator - Wednesday, January 30, 2019 - link

    This is Pentium 4 days all over again.
  • bananaforscale - Thursday, January 31, 2019 - link

    Kitty approves of heat output.
  • cmcl - Thursday, January 31, 2019 - link

    Hi Ian,

    Your testing mentions running Linux 'when feasible' and when in 'full swing' - is that any time soon? I think benchmarks would benefit from cross-platform tests (remember the debacle of Threadripper performance on Windows..)

    Cheers
  • DARK_BG - Thursday, January 31, 2019 - link

    This review sucks: there is not a single data point for how the Socket A Duron 900 stacks up against the rest :D After all, 19 years have passed; we should know how far CPUs have advanced :D
  • Tesseramous - Thursday, January 31, 2019 - link

    I would not want to buy a limited-edition piece of hardware with only 1500 sold, as there is likely to be a lack of support and compatibility issues.
  • evilpaul666 - Friday, February 1, 2019 - link

    Console emulation tests?
  • mocseg - Friday, February 1, 2019 - link

    I think I'll wait for an updated linux comparison
    https://www.phoronix.com/scan.php?page=article&...
  • Beaver M. - Friday, February 1, 2019 - link

    I just want an Intel 12-core 5 GHz chip. Let's hope the next generation delivers that.
  • mapesdhs - Saturday, February 2, 2019 - link

    Why the brand loyalty? That's not logical. Just buy the best solution for the intended task, no matter who makes it. If right now that's Intel, because of AVX or 1080p high-refresh gaming, then so be it, but for Blender atm AMD is king. It depends on the task. Some absolute notion of branded desire is bizarre.
  • Beaver M. - Sunday, February 3, 2019 - link

    Mixed. Too many bad experiences with stability and software support with AMD.
  • Madvocal1 - Monday, February 4, 2019 - link

    Ian and readers, the ASUS ROG motherboard looks great and the Intel 28-core seems like a beast too. Your scores might be way better if you contact ASUS and have them help you, because you sound confused about what to set in the BIOS and how to run a high-end system correctly.
  • MikeV8 - Wednesday, February 6, 2019 - link

    The 3175X Intel's biggest chip ever? Not really. You're missing its predecessor from 1995, the legendary Pentium Pro for Socket 8, which is even bigger than a Threadripper. Or maybe you're too young to remember even the Pentium II Xeon, which superseded the Pentium Pro in 1998.
    Ah, I miss good old days with Anand Lal Shimpi.
  • MackerVII - Tuesday, March 19, 2019 - link

    I'm sure someone mentioned this already....

    Everyone is complaining so much about how expensive this processor is but I have two points to mention.
    - It's pretty much the same as Intel's top Xeon, the 8180, which is a $10,000 processor.
    - AMD's processor is basically a fake: four processors in one (and I love AMD).
    So now a consumer can buy Intel's best server processor for $7,000 less.
  • ADVenturePO - Saturday, May 4, 2019 - link

    Well, this is a heavy price. Maybe fair, but the availability is madness. Taking liquid cooling into account, the i9-7980XE is the best here; it kills the competition on speed. On LC it can be clocked, after precise regulation of voltages, up to 4.8GHz, and up to 4.7GHz with 128GB of 3200MHz RAM.
    I'm selling stations like that. Easy to build, easy to run, easy to cool. MBs at stock.
    That chip is just a showoff.
  • urbanman2004 - Saturday, May 18, 2019 - link

    AMD's "EPYC" is gonna cause Intel an epic fail