The whole "pricetag" thing is not really an issue when you start to look at what *really* costs money in many servers in the real world. Especially when you consider that there's really no need to pay for the highest-end Xeon Platinum parts to compete with Epyc in most real-world benchmarks that matter. In general even the best Epyc 7601 is roughly equivalent to similarly priced Xeon Gold parts there.
If you seriously think that even $10000 for a Xeon Platinum CPU is "omg expensive"... try speccing out a full load of RAM for a two or four socket server sometime and get back to me with what actually drives up the price.
In addition to providing genuinely new features like AVX-512, Intel has already shown us very exciting technologies like Optane that will allow for lower overall prices by reducing the need to buy gobs and gobs of expensive RAM just to keep the system running.
One thing is clear, you've never touched enterprise hardware in your life. You don't build servers, you typically buy them from companies like Dell, etc. Also RAM prices are typically not a big issue. Outfitting any of these systems with 128GB of ECC memory costs around $1,500 tops, and that's before any volume discounts that the company in question may get. Altogether a server with a 32 core AMD EPYC, 128 GB of RAM, and an array of 2-4 TB drives will cost under $10,000 and may typically be less than $8,000 depending on the drive configuration, so YES price is a factor when the CPU makes up half the machine's budget.
"One thing is clear, you've never touched enterprise hardware in your life. You don't build servers, you typically buy them from companies like Dell, etc"
Well, maybe *you* don't build your own servers. But my point is 100% valid for pre-bought servers too, just check Dell's prices on RAM if you are such an expert.
Incidentally, you also fell into the trap of assuming that the MSRPs of Intel CPUs are actually what those big companies like Dell are paying & charging. That clearly shows that *you* don't really know much about how the enterprise hardware market actually works.
As someone who has purchased servers from Dell for 'enterprise', I know exactly how much large enterprise users pay. I was using the prices quoted here for comparison. I don't know of a single large enterprise company that builds their own servers. I have a very long work history working with multiple companies.
There are plenty. I've been in data centers filled with thousands of Supermicro or Chenbro home-built and self-maintained servers. The cost savings are immense, even if you partner with a systems integrator to piece together and warranty the builds you design. Anyone doing scalable cloud/web/VM probably has thousands of cheap servers rather than HP/Dell.
When AMD had total IPC and power consumption superiority people said "yeah buh software's optimized for Intel so better buy Xeon" .
When AMD had complete superiority with higher IPC and a platform so mature they were building Supercomputers out of it, people said "yeah buh Intel has a tiny bit better power consumption and over the long term..."
Over the long term "nothing" would be the truth. :)
Then Bulldozer came and AMD's IPC took a hit, power consumption as well, but they were still doing OK in general and had some applications where they did excel, such as encryption and integer workloads, plus they had the cost advantage.
People didn't even listen ... and went Xeon.
Now Intel comes along and effectively admits that they've lost the power consumption crown, the core count crown, the PCIe I/O crown, the RAID crown, the RAM capacity crown, the FPU crown and the platform cost crown.
But they come with this compilation of particular cases where Xeon has a good showing, and people say: "uh oh, you see?! EPYC is still immature, we go Xeon".
What ?!
Is this even happening?! How many crowns does AMD need to win to be accepted as the better choice overall?! :)
Really ?! Intel writes a mostly marketing compilation of particular use cases and people take it as gospel ?!
Honestly ... how many crowns does AMD need to win ?! :)
In the end, I would like to point out that in an AMD EPYC article there was no mention of the two SPEC world records AMD's EPYC just set .....
Not that it does anything to affect Intel's particular benchmarking, but really ?!
You write an AMD EPYC article less than a week after HP announced two world records with EPYC, and you don't even put it in the article?!?
This is the nth AMD-bashing piece from AnandTech :(
I still remember how they said that AMD's R9 290 for 550 USD was "not recommended" despite beating nVIDIA's 1000 USD Titan, because "the fan was loud" :)
This coming after nVIDIA's driver update resulting in dead GPUs ....
But AMD's fan was "loud" :)
WTF ?! ...
For 21 years I've been in this industry and I'm really tired of having to stomach these things ...
And yeah .. I'm not bashing AnandTech at all. I'll keep reading it like I have since 1998, when I first found it.
But I really see the difference between an independent Anandtech article and this one where I'm VERY sure some Intel PR put in "a good word" and politely asked for a "positive spin" .
I agree with a lot of your points in general, though not really directed toward Anandtech.
The article sounded pretty technical and unbiased, and the final page was listing facts: the server CPUs are similar, and Intel showed a lot of benches that highlight the similarities while ignoring the fact that their CPU costs twice as much.
In all things, which CPU works best comes down to the actual app used. I was browsing the benches the other day, the FX six core actually beats the Ryzen quad and six core in a couple of benches (like literally one or two) so if that is your be-all end-all program, Ryzen isn't worth it.
So far it looks like AMD has a good server product. Quite like the FX line it looks like the Epyc is going to be better at load bearing than maximum speed and honestly I'm okay with that.
I've just realized something even more despicable in this marketing compilation of particular use cases :
1) Intel built a 2S Intel-based server that is comparable in price with the AMD build.
2) That Intel build gets squashed in almost all benchmarks, or barely overtakes AMD in some.
3) But then Intel added to all the graphs a build that is 11,000 USD more expensive and which also performed better, without clearly stating just how much more expensive that system is.
4) Also, Intel says that its per-core performance in some special use cases is 38% better, without saying that AMD offers 33% more cores that, overall, outperform the Intel build.
In conclusion, the more you look at it, the more this becomes clear as an elaborate marketing trick that has been treated like "technical" information by websites like AnandTech.
It is not. It is an elaborate marketing trick that doesn't clearly state that the system that looks so good in these particular benchmarks is 11,000 USD more expensive. That's over 60% extra money.
Like I've said, AnandTech needs to be more critical of these marketing ploys.
They should be informing and educating us, their readers, not dishing out marketing info and making it look technical and objective when it clearly is not.
The only reason I still sometimes read AnandTech's articles is because some of the readers, like you, are not stupid and don't fall for this rubbish. I get more info from the comments than from the articles themselves. WCCF has great news posts, but the comments are like they're from 12-year-olds.
Anandtech used to be a top rated review site and therefore some of the old die hard readers are still commenting on these articles.
I think that the performance crown which AMD has just won has some catches unfortunately.
First of all, I'll probably start working with ARM and Intel as opposed to AMD and Intel... not because AMD is not a good source, but because from a business infrastructure perspective, Intel is better positioned to provide support. In addition, I'm looking into FPGA + CPU solutions, which are not offered or even on the roadmap for AMD.
Where AMD really missed the mark this time is that whether AMD delivers better performance on 64 cores or Intel does on 48, with current storage technologies both CPUs are probably facing data starvation anyway. The performance difference doesn't count anymore: if there's no data to process, there's no point in upping the core performance.
The other huge problem is that software is licensed on core count, not on sockets. As such, requiring 33% more cores to accomplish the same thing... even if it's faster... can cost A LOT more money. I suppose if AMD can get Microsoft, Oracle, SAP, etc. to license on users or flops, AMD would be better off here. But with software costs far outweighing hardware costs, fake cores (hyperthreading) are far more interesting than real cores from a licensing angle.
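To put rough numbers on that, here's a quick sketch (the prices are approximate 2017 list prices from memory, not quotes, so treat the whole thing as illustrative):

```python
# Rough per-core licensing math (illustrative list prices, not quotes).
WIN_DC_PER_16_CORES = 6_155    # Windows Server Datacenter, 16-core pack (approx. list)
SQL_ENT_PER_2_CORES = 14_256   # SQL Server Enterprise, 2-core pack (approx. list)

def license_cost(cores: int) -> int:
    win = WIN_DC_PER_16_CORES * max(cores, 16) // 16  # sold in 16-core minimums
    sql = SQL_ENT_PER_2_CORES * cores // 2            # sold per 2-core pack
    return win + sql

for cores in (48, 64):
    print(f"{cores} cores: ${license_cost(cores):,}")
# 48 cores: $360,609 vs 64 cores: $480,812 -- the 16 extra cores cost
# more in software than an entire dual-socket server does in hardware.
```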
We already have virtualization sorted out. We have our Windows Servers running as VMs, and unless you have far too many VMs for no particular reason, or you're simply wasting resources, you're probably already far over-provisioned. I know very few enterprises that really need more than two half-racks of servers to handle their entire enterprise workload... and I work with 120,000+ employee enterprises regularly.
So, then there's the other workload. There's Cloud. Meaning business systems running on a platform which developers can code against and reliably expect it to run long term. For these systems, all the code is written in interpreted or JITed languages. These platforms look distinctly different than servers. You would never for example use a 64-core dual socket server in a cloud environment. You would instead use extremely low cost, 8-16 core ARM servers with low cost SSD drives. You would have a lot of them. These systems run "Lambdas" as Amazon would call them and they scale storage out across many nodes and are designed for massive failure and therefore don't need redundancy like a virtualized data center. A 256 node cluster would be enough to run a massive operation and cost less than 2 enterprise servers and a switch. It would have 64TB aggregate storage or about 21TB effective storage with incredible response times that no VM environment could ever dream of providing. You'd buy nodes with long term support so that you can keep buying them for 10-20 years and provide a reliable platform that a business could run on for decades.
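The storage math behind those cluster numbers, for anyone checking (my assumptions: a 256 GB SSD per node and 3-way replication, which is what the figures imply):

```python
# Cluster storage math implied by the numbers above.
# Assumptions (mine): 256 GB SSD per node, 3-way replication.
nodes = 256
ssd_per_node_gb = 256
replication = 3

aggregate_tb = nodes * ssd_per_node_gb / 1024   # 64.0 TB raw
effective_tb = aggregate_tb / replication       # ~21.3 TB usable
print(f"aggregate: {aggregate_tb:.0f} TB, effective: {effective_tb:.1f} TB")
```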
So, again, AMD doesn't really have a home here... not unless they will provide a sub-$100 cloud node (storage included).
I'm a huge fan of competition, but outside of HPC I just don't see where AMD really fits right now. I don't know of any businesses looking to increase their Windows/VMware licensing costs based on core count (each dollar spent on another core will cost $3 on software licensed for it). It's a terrible fit for OpenStack, which really performs best on $100 ARM devices. I suppose it could be used in workstations, but those are generally GPU beasts, not CPU; and if you need CPU, you'd prefer MIC, which is much faster. There are too many storage and RAM bottlenecks to run 64-core AMD or 48-core Intel systems as well.
Maybe it would be suitable for a VDI environment. But we've learned that VDI doesn't really fit anywhere in enterprise. And to be fair, this is another place where we've learned that GPU is far more valuable than CPU as most CPU devoted to VDI goes to waste when there is GPU present.
You could have a point... but I wonder if it's just too little too late.
I also question the wisdom of spending tens of thousands of dollars on an untested platform. Consider that even if AMD is a better choice, running even a basic test would require investment in 12 sockets' worth of these chips. To test it properly would require a minimum of $500,000 worth of hardware, plus let's assume about $200,000 in human resources to get a lab up and running. If software is licensed (not trial), consider another $900,000 for VMware, Windows and maybe Oracle or SQL Server. That's a $1.6 million investment to see if you can save a few thousand on CPUs.
While Fortune 500s could probably waste that kind of money on an experiment, I don't see it making sense to smaller organizations who can go with a proven platform instead.
I think these processors will probably find a good home in Microsoft's Azure, Google's Cloud and Amazon AWS and I wish them well and really hope AMD and the cloud beasts profit from it.
In the mean time, I'll focus on moving our platforms to Cloud systems which generally work best on Raspberry Pi style systems.
I have stated before that AnandTech is on Intel's payroll. You could see it especially with the first Threadripper review; it was horrendous, to say the least. This article goes the same route. You see, two people can say the same thing but project a completely different picture. I do not disagree that Intel has its strengths over EPYC, but this article basically just agrees with Intel's presentation. Ha ha, that would have been funny, but it is not.
Intel is a corrupt company, and AnandTech is missing the point on how they present their "facts." I now very rarely read anything AnandTech publishes. In the 90's they were excellent - those were the days...
Even mom and pop shops shouldn't have servers built from scratch. Who's going to support and validate that hardware for the long haul?
HP and Dell have the best servers in my opinion. Top to bottom. Lenovo servers are at best just rehashes of their crappy workstations. If you want to get exotic (I don't) one could consider Supermicro...friends in the industry have always mentioned good luck with them, and good support. But my experience is with the big three.
You are both wrong in my experience. These days the software that runs on servers usually costs more (often by a wide margin) than the hardware it runs on. I was once running a software package the company paid $320K for on a VM environment of five two socket Dell servers and a SAN where the total hardware cost was $165K. But that was for the whole VM environment that ran many other servers besides the two that ran this package. Even the $165K for the VM environment included VMWare licensing so that was part software too. Considering the resources the two VMs running this package used, the total cost for the project was probably somewhere around 10% hardware and 90% software licensing. For my particular usage, the virtualization numbers are the most important so if we accept these numbers, Intel seems to be the way to go. The $10K CPU's seem pretty outlandish though. For virtualization purposes it seems like there might be more bang for the buck by going with the 8160 and just adding more hosts. Would have to get down to actually doing the math to decide on that one.
Go over to ServeTheHome and check results from someone who is not paid to pimp Intel. EPYC enjoys an ample lead against similarly priced Xeons.
The only niche where it is at a disadvantage is the low-core-count, high-clock-speed SKUs, simply because for some inexplicable reason AMD decided not to address that important market.
Lastly, nobody buys those 10+k $$$ xeons with his own money. Those are bought exclusively with "others' money" by people who don't care about purchase value, because they have deals with intel that put a percent of that money right back into their pockets, which is their true incentive. If they could put that money in their pockets directly, they would definitely seek the best purchase value rather than going through intel to essentially launder it for them.
Er, yes, if you want just 128GB of RAM it may cost you $1,500, but if you actually want to use the capacity of those servers you'll want a good deal more than that.
The server mentioned in the Intel example can take 1.5TB of ECC RAM, at a total cost of about $20k - at which point the cost of the CPU is much less of an impact.
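That $20k figure checks out if you fill all the slots with 64 GB LRDIMMs (the part choice and the ~$13/GB street price are my assumptions):

```python
# Rough check on the 1.5 TB / ~$20k figure.
# Assumptions (mine): 24 DIMM slots, 64 GB DDR4 LRDIMMs, ~$13/GB street.
slots = 24
dimm_gb = 64
price_per_gb = 13

capacity_tb = slots * dimm_gb / 1024          # 1.5 TB
total_cost = slots * dimm_gb * price_per_gb   # ~$20k
print(f"{capacity_tb:.1f} TB of RAM: ~${total_cost:,}")  # 1.5 TB: ~$19,968
```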
As CajunArson said, a full load of RAM on one of these servers is expensive. Your response of "yes, well, if you only buy 128GB of RAM it's not that expensive", while true, is a tad asinine - you're not addressing the point he made.
Not every workload requires that the RAM be topped off. We are currently in the middle of building our own private cloud on Hyper-V to replace our AWS presence, which involves building out at multiple datacenters around the country. Our servers have half a terabyte of RAM. Even with that much RAM, CPUs like this would still be (and are) a major factor in the overall cost of the server. The importance for our use case is the ability to scale, not the ability to cram as many VMs into one machine as possible. 2 servers with half a terabyte of RAM are far more valuable to us than 1 server with 1-1.5 terabytes due to redundancy.
CPU price or server price is almost always irrelevant, because the software running on them costs at least an order of magnitude more than the hardware itself. So you get the fastest server that you need and that the software can profit from.
Not necessarily, there is a lot of free and opensource software that is enterprise-capable.
Also "the fastest server" actually sell in very small quantities. Clearly the cpu cost is not irrelevant as you claim. And clearly if it was irrelevant, intel would not even bother offering low price skus, which actually constitute the bulk of it sales, in terms of quantity as well as revenue.
128GB for 32 cores is suspiciously low... For that kind of core count, the server generally has 512GB or above.
Also, 128GB of memory in this day and age is definitely not $1,500 tops. Maybe in early 2016, but definitely not this year, and definitely not next year.
And from what I've seen, the two biggest cost factors in an enterprise-grade server are the SSDs and memory. Generally memory accounts for 20% of the server cost, while SSDs account for about 30%.
CPU generally accounts for 10% of the cost. Not insignificant, but definitely not "makes up half of the machine's budget".
AMD has a very hard battle to get back into the datacenter. Intel is already competing aggressively.
Care to share with us your "correct ram amount per cpu core" formula? There I was, thinking that the amount of ram necessary was determined by the use case, turns out it is a product of core count.
Not necessarily. It depends on what kind of work those VMs will be doing. Virtualized or bare metal, configuration details are dictated by the target use case. Sure, you can also build universal machines and cram them full of as many cores and as much memory as they can take, but that is very cost-ineffective.
I can think of a usage scenario that will be most balanced with a quad core cpu and 1 terabyte of ram. Lots of data, close to no computation taking place, just data reads and writes. A big in-memory database server.
I can think of a usage scenario that will be most balanced with a 32 core cpu and 64 gigabytes of ram. An average sized data set involved in heavy computation. A render farm node server.
It is certainly determined by the use cases, but after interacting with hundreds of companies and their respective workloads, generally higher core counts are mapped to higher memory capacity.
Of course, there are always a few fringe use cases that focus significantly on compute.
What about large players like Microsoft Azure or AWS? I have worked with both, and neither uses anything close to what you guys talk about in terms of RAM or CPU. It's all about getting the most performance per watt. When your data center has its own substation, your electric bill might be kinda high.
I will overlook the rudeness of your comment. I actively work with enterprise hardware and would probably not make comments like that and then recommend outfitting a server with 128GB of RAM. I don't think I've been near anything with as little as that in a long while. 128GB is circa 2012-2013.
An enterprise needs 6 servers to ensure one operational node in a redundant environment. This is because across two data centers you have 3 servers each: in a catastrophe you lose a full data center, then a server is in maintenance, and then finally another server fails - which still leaves one node running. Therefore, you need precisely 6 servers to provide a reasonable SLA. 9 servers is technically more correct, in a proper 3-data-center design.
If you know anything about storage, you would prefer more servers, as more servers provide better storage response times... unless you're using a SAN, which is pretty much reserved strictly for people who simply don't understand storage and are willing to forfeit price, performance, reliability, stability, etc. to avoid actually getting a computer science education.
In enterprise IT, there are many things to consider. But for your virtualization platform, it's pretty simple. Fit as much capacity as possible in to as few U as possible while never dropping below 6 servers. Of course, I rarely work with less than 500 servers at a time, but I focus on taking messy 10,000+ server environments and shrinking them to 500 or less.
See, each server you add adds cost to operation. This means man-hours. Storage costs. Degradation of performance in the fabrics, etc... it introduces meaningless complexity and requires IT engineers to waste more and more hours building illogical platforms more focused on technology than the business they were implemented for.
If I approach a customer, I tend to let them know that unless they are prepared to invest at least $50,000 per server for 6 servers and $140,000 for the appropriate network, they should deploy using an IaaS solution (not cloud, never call IaaS cloud) where they can share a platform that was built to these requirements. The breaking point where IaaS is less economical than DIY is at about $500,000 with an OpEx investment of $400,000-$600,000 for power, connectivity, human resources, etc... annually and this doesn't even include having experts on the platform running on the data center itself.
So with less than a minimum of $1 million a year investment in just providing infrastructure (VMware, Nutanix, KVM, Hyper-V), not even providing a platform to run on it, you're just pissing the wrong way in the wind tunnel and wasting obscene amounts of money for no apparent reason on dead-end projects run by people who spend money without considering the value provided.
In addition, the people running your data center for that price are increasing in cost and their skillset is aging and decreasing in value over that time.
I haven't even mentioned power, cooling, rack space, cabling, managed PDUs, electricians, plumbers, fire control, etc...
Unless you're working with BIG DATA, an array of 2-4 TB drives for under $10,000 to feed even one 32-core AMD EPYC is such an insanely bad idea, it's borderline criminal stupidity. Let's not even discuss feeding the pipelines of six 32-core current-generation CPUs per data center. It would be like trying to feed a blue whale with a teaspoon. In a virtualized configuration, a dual EPYC server would probably need 100GB/s+ of storage bandwidth to barely keep ahead of process starvation.
If you have any interest at all in return on investment in enterprise IT, you really need to up your game to make it work on paper.
Now... consider that if you're running a virtual data center... plain vanilla. Retail license cost of Windows Enterprise and VMware (vCenter, NSX, vSAN) for a dual 32-core EPYC is approximately $125,000 a server. Cutting back to dual 24-core with approximately the same performance would save about $30,000 a server in software alone.
I suppose I can go on and on... but let's be pretty clear: CajunArson made a fair comment, and he is probably considering the cost of 1-2TB of RAM per server. Not 128GB, which is more of a graphics workstation in 2017.
EPYC's single-socket 32-core/64-thread CPU is ~$2,000. There is no Intel equivalent here, which is disappointing, as Intel's single-socket systems top out around 22 cores and have no 205-watt parts.
I'd pay extra to have extra physical cores when I'm speccing a server holding VMs, but AMD gives us more cores for less money.
I also love AMD's RAID, which works absolutely great and is free, while Intel's is annoyingly a paid-for solution.
Intel doesn't say one peep about fully encrypted RAM, because they don't have it.
Intel doesn't say a peep about power consumption, because their platform loses in every test.
Intel doesn't say a peep about EPYC 1.1 or EPYC Plus or whatever which will be a drop-in upgrade for the current platforms.
I was put in the shitty situation of speccing Xeon-based machines because the per-core licenses were extremely expensive and the Xeon solution offered us better performance, but outside that situation, we're doing everything we can to avoid working with Intel.
We still have servers that started out with dual Opterons and grew to Hexa-Core over the years.
That saved our clients a ton of money and their jaws dropped when we advised that they need to move back to Xeon if they want to upgrade (EPYC was still 2 years away then) .
It may be fashionable as a young lad to root for the "cool winner" like Ferrari, Bugatti or Intel, but when you've worked multiple decades in the industry and had to swallow all the crap Intel was pulling, you start rooting for the little guy.
Paying anywhere between $12K-$50+K more per machine just to have the Intel logo tends to add up. Ending up with up to 200W more per machine also incurs some extra costs.
If you said the cost fades when compared to licensing costs of many software solutions I would understand. But the metal itself... no, the extra cost for that Xeon is either stupidity or protection tax.
How much server software really uses AVX-512? Can you give us a list (excluding AI and machine-learning apps, because those run better on GPUs/dedicated hardware)?
I haven't come across any personally, but something else to add is the amount of heat these chips generate when running AVX-512 under load. Running any AVX benchmark on Intel chips usually results in throttling.
"The whole "pricetag" thing is not really an issue": No? Is that why the volume sales in the server market is the mid-section of the former Xeon E5? Wouldn't people be buying top end E7s (Platinum in today's lingo)? Of course pricetag matters, and matters even more when you're deploying tens of thousands of nodes.
Incidentally, for anybody who thinks Intel "cheated" with those numbers, there's concrete proof from independent third-party reviewers that at least the GROMACS benchmark results that Intel itself is showing are not fully accurate... as in, they are not fully accurate in *AMD's favor*.
Here's a link to GROMACS results from Serve the Home that are actually using the newest version that finally turns on the AVX-512 support to show you what the Xeon platform was actually designed to do: https://www.servethehome.com/wp-content/uploads/20...
So just remember that Intel is being pretty conservative with these numbers if their own published GROMACS results are anything to go by.
I would hope they’d be conservative in this sector. I’m guessing very knowledgeable people will be making the buying decisions here, and there may even be contractual expectations from the purchasing companies. Over promising and under delivering on an internal report might not just cost a few big sales, they might even result in lawsuits.
I think the problem is that while Intel usually uses the most optimized compilers and settings in its own testing, most benchmarks don't optimize for Intel systems at all - at least on the consumer side.
Not so sure about these enterprise ones, because I have no idea what most of these tests do.
The biggest lie is through omission: the bulk of the volume is at rather low ASPs, so if you are going to test, test $1k-and-below SoCs, and use the I/O offered by each.
Wrong. You clearly don't understand how Epyc works. Literally every Epyc chip has the same number of NUMA nodes regardless of core count from the 7601 all the way down to the super-stripped down parts.
Each chip has 4 dies that produce the same number of NUMA nodes, AMD just turns off cores on the lower-end parts.
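If you have a box handy, this is easy to verify yourself - a quick sketch (assumes a Linux host; on a 1S EPYC 7601 you should see 4 nodes, one per die):

```python
# Counts NUMA nodes via sysfs (Linux only). On a single-socket
# EPYC 7601 this should report 4 nodes, one per die, with the
# same layout on the cut-down SKUs -- just with cores disabled.
import os
import re

node_dir = "/sys/devices/system/node"
nodes = sorted(d for d in os.listdir(node_dir) if re.fullmatch(r"node\d+", d))
print(f"NUMA nodes: {len(nodes)}")
for n in nodes:
    with open(os.path.join(node_dir, n, "cpulist")) as f:
        print(f"  {n}: cpus {f.read().strip()}")
```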
Maybe you should have actually learned about what Epyc was instead of wasting your time posting ignorant attacks on other people's posts.
The same goes for you. My ignorance with EPYC stems from poor availability and the lack of desire to learn about EPYC. You seem to have a full time job trolling on AnandTech. Go troll somewhere else.
Reading your posts I see you're both right, just using examples of different use cases.
P.S. Cajun seems like a bit of an avid Intel supporter as well, but he's right: in AVX-512 and in some particular software, Intel offers excellent performance.
But that comes at a price, plus some more power consumption, plus the inability to upgrade (considering what Intel usually does to its customers) .
"The benchmarking scenario also has a big question mark, as in the footnotes to the slides Intel achieved this victory by placing 58 VMs on the Xeon 8160 setup versus 42 VMs on the EPYC 7601 setup."
Given how well AMD's SMT scales, a real client can put up to 128 single-vCPU VMs on the EPYC 7601, and the 58 VMs on the Xeon 8160 would be trampled ridiculously. Here Intel just had to rely on shenanigans so obvious it is simply fraud.
Yeah, that really stuck out for me too. "We outperform AMD when running a different benchmark!" And to be frank, it casts a pall over Intel's entire PR release since it IS blatantly not how benchmarks work.
Many HPC tasks are memory-bandwidth limited, and then AVX-512 is of little help. In SPEC CFP2006, none of the recent results use AVX-512; instead they rely on AVX2. The few tests posted using AVX-512 come out worse than the tests on similar systems using AVX2. For memory-bandwidth-limited tasks, EPYC has an advantage with its 8 memory channels compared to Intel's 6 channels. For both architectures, a high-end processor is not needed for bandwidth-limited tasks, since it doesn't offer more memory channels.
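For a sense of scale, the theoretical peaks per socket (assuming DDR4-2666 on both platforms, and ignoring real-world efficiency losses):

```python
# Theoretical peak memory bandwidth per socket.
# Assumption (mine): DDR4-2666 on both platforms, 8 bytes per transfer.
def peak_bw_gbs(channels: int, mts: int = 2666) -> float:
    return channels * mts * 8 / 1000  # MT/s * bytes/transfer -> GB/s

print(f"EPYC, 8 channels:    {peak_bw_gbs(8):.0f} GB/s")  # ~171 GB/s
print(f"Xeon-SP, 6 channels: {peak_bw_gbs(6):.0f} GB/s")  # ~128 GB/s
# A ~33% raw bandwidth edge that no amount of AVX-512 can close
# once the cores are waiting on memory.
```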
I myself was confused and disappointed reading the summary, where the authors seem to agree with Intel. Using phrases like "there is no denying that the Intel Xeon is a 'safer bet' for VMware virtualization" without testing it pushes AT into the realm of paid shills. Independent reviewers wouldn't trust anyone's marketing, and even if they were to publish an article on benchmarks from a competitor, they would fill the thing with hefty amounts of skepticism until they could test it themselves. What Intel presents could very realistically be true (personally, I don't doubt that their benchmarks are within the ballpark of being legit), but I want my independent review sites to have as little bias as possible, and that means objectively testing the hardware and ignoring the marketing.
These types of servers are rarely bought by customers for personal use. Instead, they are bought for a 'real job' where CYA decisions outweigh any performance benefits (to a degree; the end product has to work). If something really goes wrong, you can always expect to get the blame for buying the "off brand" instead of following the sheep, regardless of what really caused the failure (typically from highly annoyed management who can't tell *anything* about the server other than that it is the "off brand").
If this isn't a consideration, you have a "great job". Expect the owner to sell at some point, or expand to the point that it is controlled by MBAs and everybody's job gets downgraded to a "real job". Sorry to say, but at least in the USA that is life.
People sometimes really surprise me. What support do you want from AMD? Yes, if there is a booboo like Intel has (present tense) with its security flaw, you need support from them. I have sold numerous systems and servers in my life, and never did I go to AMD or Intel to ask for support. It is either the OEM, component supplier or component manufacturer (like motherboards etc.) who you go to for support.
If the CPU works as it should, you do not need support. CPUs were, in my experience, the one component that rarely if ever dies on you. So if you trust Tyan to make good products, which they do, they are the ones to give you support, not AMD. AMD has to help with BIOSes etc., with which they are very good.
So please stop with this support issue and "safer bet". If the system runs unstable because of hardware issues, sure, they have to sort it out, but until now, none have been reported.
What has Intel done about the bug recently found? Did they come to you to fix it and support you? Nope, you have to fix it yourself, that is if the motherboard manufacturer has a BIOS update. So, for me it looks like AMD might just be the safer bet after all...
Yeah, I want to give them the benefit of the doubt, and I have no problem with them posting numbers, even as an analysis of Intel in regards to EPYC. But a full-page "review" of Intel's EPYC benchmarks as a product is kind of shilly. I mean, where are their tests to back up the information? Where are the counterpart tests where they test something similar that wasn't handpicked by Intel? How can any company assess the validity of a product based solely off of its competitor's testing of that product?
To throw some context in here, the purpose of this article isn't to publish Intel's benchmarks. Rather, it's commentary on what has been a very unusual situation.
Up until now, neither AMD nor Intel have engaged in any serious Skylake Xeon vs. Zen EPYC technical marketing.
"AMD's technical marketing of the new CPU has been surprisingly absent, as the company not published any real server benchmarks. The only benchmarks published were SPEC CPU and Stream, with AMD preferring for its partners and third parties to promote performance"
This despite the fact that AMD and Intel's server products haven't been competitive like this in nearly a decade. Normally you'd expect there to be case studies flying out left and right, which has not been the case. And it's especially surprising since, as the underdog, AMD needs to claw back lost ground.
Consequently, Intel's own efforts are, to date, the first efforts by a server vendor to do a comprehensive set of benchmarks over a range of use cases. And let's be clear here: this is Intel doing this for Intel's own benefit. Which is why we've already previously reviewed the two CPUs, as have other 3rd party groups.
Still, I think it's very interesting to look at what Intel has chosen to represent, and what their numbers show. Intel has more resources than pretty much everyone else when it comes to competitive analysis, after all. So their choices and where they show themselves falling behind AMD says a lot about the current situation.
TL;DR: We thought this stuff was interesting, especially since neither vendor until now has done a Xeon-SP vs. EPYC comparison. And since we've already done our own independent review (https://www.anandtech.com/show/11544/intel-skylake... ), it gives us a set of data to compare to our own conclusions (and to be clear, this isn't a review nor are we trying to call it one)
Yeah, and where were your righteous complaints when AnandTech did literally the same thing for AMD, when AMD went out and misconfigured Intel boxes to pretend that EPYC was better than it actually was?
The problem is AT's coverage being heavily biased towards Intel, you clod. How could anyone complain about the opposite when AT has never displayed pro-AMD bias? I have a problem with bias, and I point it out when I see it. You can bet your ass the moment AT shows unfair bias toward AMD I will be there to point it out. But I cannot point it out if it doesn't exist.
If Intel suddenly feels the need to compete with AMD, that's news (practically "man bites dog" news, judging from the last decade or so).
The fact that they have to pick carefully contrived benchmarks to appear superior to AMD is even more telling. Totally ignoring power consumption (one of the biggest concerns for datacenters) is even more telling.
When Skylake runs AVX-512 and AVX2 instructions it causes both the clock frequency to go down *and* the voltage to go up (https://www.intel.com/content/dam/www/public/us/en... ). However, it can only bring the voltage back down within 1ms. If you get a mix of AVX2 and regular instructions, like you do in the POV-Ray test, then it's going to be running at the higher voltage the whole time. That probably explains why the Xeon 8176 drew so much more power than the EPYC in your energy consumption test. The guys at Cloudflare also observed a similar effect (although they only noticed the performance degradation): https://blog.cloudflare.com/on-the-dangers-of-inte...
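A toy model of why the instruction mix matters so much (my own back-of-the-envelope, assuming the ~1 ms voltage relaxation window mentioned above):

```python
# Toy model: if AVX bursts arrive more often than once per ~1 ms,
# the core never relaxes back to the lower voltage at all.
# Assumption (mine): a fixed 1 ms relaxation window, per the Intel doc.
RELAX_MS = 1.0

def high_voltage_fraction(burst_interval_ms: float) -> float:
    """Fraction of wall time spent at elevated AVX voltage."""
    return min(1.0, RELAX_MS / burst_interval_ms)

for interval in (0.1, 0.5, 1.0, 10.0):
    pct = high_voltage_fraction(interval)
    print(f"AVX burst every {interval:>4.1f} ms -> {pct:.0%} at AVX voltage")
# Anything bursting at millisecond granularity (like POV-Ray's mixed
# code) pins the core at 100% -- the scalar work pays the AVX power bill.
```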
In the HPC section, the article indicates that NAMD is faster on the Epyc system but the accompanying graphic points toward a draw with the Xeon Gold 6148 and a win for the Xeon Platinum 8160. Epyc does win a few benchmarks in the list prior to NAMD though.
Highly questionable article. "A lot of time the software within a system will only vaguely know what system it is being run on, especially if that system is virtualised" - why do you say this if you publish HPC results? There the software knows exactly what type of processor, in what kind of configuration, it is running on.
"The second is the compiler situation: in each benchmark, Intel used the Intel compiler for Intel CPUs, but compiled the AMD code on GCC, LLVM and the Intel compiler, choosing the best result" - more important, what type of math library did they use? The Intel MKL has unmatched optimization; did they use the same for the AMD system?
"Firstly is that these are single node measurements: One 32-core EPYC vs 20/24-core Intel processors." - why don't you make it clear that by doing this, the benchmark became useless!!! Performance doesn't scale linearly with core count: http://www.gromacs.org/@api/deki/files/240/=gromac... So it makes a huge difference whether I compare a simulation run on 32 cores with one run on 20 cores. If I then calculate per-core performance, the lower-core-count CPU always looks much, much faster, because of scaling issues in the simulation software. You haven't disclosed how Intel got their 'relative performance' value.
Do we know for sure that the Omni-Path Skylake CPUs actually use PCIe internally for the fabric port? If you look at Intel's "ark" database, all of the "F" parts have one fewer UPI links, which seems weird.
I think this was a realistic article analyzing the two systems. And it does point to something important: the Intel system is more mature than the AMD EPYC system. My personal feeling is that AMD's was thrown together so they could claim the core count without realistically thinking through the design.
But it does give Intel a good shot in the arm with competition, and I expect Intel's next revision to show a significant leap in technology.
I did like that the systems were similarly configured as to cost. I built myself, 10 years ago, a dual Xeon 5160 that was about $8,000 - a serious machine at the time, significantly faster than a normal desktop, and it lasted much longer. It was also from Supermicro and a fine machine - for the longest time it was still faster than a lot of machines you could get at Best Buy. It has Windows 10 on it now and still runs today, but I rarely use it because I like the portability of laptops.
GROMACS is a very narrow niche product and also very biased - they heavily optimize for Intel and Nvidia and push AMD products onto an inefficient code path.
AVX-512 is twice as wide as AVX2 and draws significantly more power - so yes, it's very possible in this test that a CPU with a quarter of the cores can deliver more throughput because of AVX-512.
Also, I've heard AMD's implementation of AVX2 actually executes 256-bit operations as two 128-bit halves - these results could show that is true.
And the hilarity continues. So AMD posts in house benchmarks and the crowd goes: Derp, these are AMD supplied benchmarks, best wait for third party benchmarks. Intel posts in house benchmarks and the crowd goes: Wow awesome dude, that's the shitsors! Who needs third party benchmarks, AMD should post more in house benchmarks. derp derp
Guerrilla marketing at its finest? The hilarity is that when Intel was dominating... they never mentioned AMD, nor did they need to.
Now that AMD has a compelling product, they suddenly started doing "comparisons" left and right and claiming how bad the "glue" in AMD CPUs is (while ignoring the drama about using cheap TIM instead of solder).
Yeah, I don't get it. I mean, even Ryzen Mobile launched well before we saw it. Announcing EPYC early was important to build up demand with OEMs - something that mattered less for a consumer product, where the announcement needs to coincide with availability. EPYC's announcement wasn't for the end purchaser. Both need long testing periods and seed supply. EPYC then has ODM builds for the cloud services that they supply. Ryzen Mobile launched when OEMs had products to ship; EPYC launched when they had products to ship to manufacturers. When those manufacturers offer EPYC depends completely on their development cycle.
Can we get ANSYS Structural or Comsol benchmarks for the HPC sections? Building machines using Xeons for these applications is beyond expensive for engineering design on fixed price contracts.
(And no, this doesn't qualify for #ad, as Intel hasn't paid us. That's not how this works; that's not how any of this works.)
@Ryan Smith: "Up until now, neither AMD nor Intel have engaged in any serious Skylake Xeon vs. Zen EPYC technical marketing." I think that's largely because the market is different from a decade ago. Hyperscalers do their own testing and aren't swayed by Intel's or AMD's whitepapers; they do their own thing. There are still many companies that buy and maintain their own servers, but my understanding is that this market is shrinking, or at least not growing. Cloud is where the money is, and they know what they want. I don't think AMD is trying to go after enterprise this time around (I'm sure they'll take that business, but the main target seems to be hyperscalers; the CCX, MCM, large memory footprint etc. all point to them targeting scale-out as opposed to scale-up, and AMD does quite well in scale-out while taking a hit in scale-up).
Also, AMD might still be in the process of doing minor firmware tweaks as evidenced by tier-1 server availability (HP/Dell) coming online only end of Q4.
I am so glad people are realising AnandTech's rubbish, probably led by Ian, who wrote that terrible Threadripper review. Maybe he will realise it as more people complain. It all depends on how much Intel is paying him...
ANSYS is one of those cases where having massive RAM really matters. I doubt if any site would bother speccing out a system properly for that. One ANSYS user told me he didn't care about the CPU, just wanted 1TB RAM, and that was over a decade ago.
The pricetag discussion really needs to include software licensing as well. Windows Datacenter and SQL server on a machine with 64 cores will cost more than the hardware itself. This is the reason that the Xeon 5122 exists.
Also, isn't it kind of silly to invest in a server platform with limited PCIe performance when faster and faster storage and networking is becoming commonplace?
It really seems that AMD has crushed Intel this time. Also, Charlie has some interesting points about security (has this topic even been analyzed here? https://www.semiaccurate.com/2017/11/20/epyc-arriv... ). Software WILL be tuned for EPYC, so the safe bet will not be getting Xeon but EPYC, for sure. And power consumption and heat really matter, as they're a significant part of datacenter maintenance costs. I really don't get how the article ends up with this conclusion.
As usual, Intel cheated. Clients won't use their proprietary compiler or software, but the GNU ones. Now let me show you the difference: https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-s... Other than that, this is boring, as ARM NUMA-based server chips are coming, with backup from good old veterans when it comes to supercomputing, and this time around Intel won't even have a compiler advantage to brag about. Sources: https://www.nextplatform.com/2017/11/27/cavium-tru... http://www.cavium.com/newsevents-Cray-Catapults-Ar... Now this is the real news - melancholic for me, as it brings back memories of how it all started. And guess what? We are back there at the start again.
Linux 4.15 has code to increase EPYC performance and enable the memory encryption features. 4.16 will have the code to enable the virtual machine memory encryption.
thx for sharing the article Johan, as usual those are the ones I will always read.
Interesting to get feedback from Intel on benchmark comparisons; this tells you how scared they really are of the competition. There is no way around it - I've been to many OEM and large-vendor events lately. One thing is for sure: the blue team was caught with their pants down, and there is real interest from IT in this new competitor.
Now, talking a bit about what's under the hood, having had both systems since the beta stages:
I am sure Intel will be more than happy to tell you whether they were running the systems with jitter control. Of course they won't tell the world about this and its related performance issues.
Second, will they also share with the world that their so-called AVX enhancements have major clock-speed disadvantages for the whole socket? Really nice in virtual environments :)
Third, the turbo boosting that is nowhere near the claimed values when running virtualization? Yes, the benchmarking results are nice, but they don't reflect real-world behavior; they're based on synthetic benches. The real world gets way less turbo boost due to core hot spots and their correlated TDP.
There are reasons why the large OEMs have not yet introduced EPYC solutions: they are still optimizing BIOS and microcode because they want to bring out a solid, performing platform. The early tests from Intel show why.
Even the shared VMware bench can be debated, with no version info shared, as 6.5 U1 brought major updates to the hypervisor with optimizations for EPYC.
Sure, DB benches are an Intel advantage; there is no magic to it looking at the die configurations - there are trade-offs. But this is ONLY when the DB spans more than a certain number of dies, so we are talking here about 16+ cores on the 32-core/socket systems, for example; anything lower will actually have more memory bandwidth than the Intel part. So how reliable are these benchmarks for day-to-day production? Not everyone is running the huge sizes, and those who do should not just compare based on the synthetic benches provided, but do real-life testing.
Ain't it nice that a small company brings out a new CPU line and Intel already needs to select its top-bin parts as the comparison to show better benchmarks? There are 44 other bins available in the Intel portfolio; you can probably already start guessing how well they really fare against their competitor....
@Ian Cutress & Johan De Gelas - Could you please update this by running your own numbers AFTER the full implementation of Spectre and Meltdown fixes. That would be so helpful in showing how much these have effected both platforms and whether your conclusions remain after the fixes. Thank you!
What this doesn't really address is memory configurations. RAM configurations are very much limited with Intel, given the 2-DIMMs-per-channel layout and 6 channels versus 8 with AMD.
With Intel you can only get 384GB with 16GB DIMMs, compared to 512GB with AMD. If you need 512GB, you have to use 32GB DIMMs on Intel, which again pushes the price up considerably. This is why customers often choose a Broadwell system over Skylake: to keep memory costs down.
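The arithmetic behind those capacity numbers (assuming a 2-socket server with both platforms fully populated at 2 DIMMs per channel):

```python
# Max RAM with 16 GB DIMMs on a 2-socket server, all slots populated.
# Assumption (mine): 2 DIMMs per channel on both platforms.
def max_ram_gb(channels: int, sockets: int = 2, dpc: int = 2, dimm_gb: int = 16) -> int:
    return channels * dpc * dimm_gb * sockets

print(f"Xeon-SP, 6 channels: {max_ram_gb(6)} GB")  # 384 GB
print(f"EPYC, 8 channels:    {max_ram_gb(8)} GB")  # 512 GB
# Matching EPYC's 512 GB on the Xeon means stepping up to 32 GB DIMMs,
# which cost disproportionately more per gigabyte.
```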
Раньше пользователи CryptoTab PRO могли ускорять майнинг токмо быть помощи функции Cloud.Boost. Мы решили начинать дальше и предоставить вам доступ к новейшей опции Super.Boost, чтобы вы зарабатывали больше — и быстрее! Оцените преимущества сверхскоростного майнинга с расширением Super.Boost, доступным в PRO-версии Android приложения. Используйте разом порядочно ускорений Super.Boost и увеличьте общую прыть майнинга для всех ваших устройствах с включеннойй функцией Cloud.Boost. https://bit.ly/33KzbIO
Скажем, проворство вашего устройства — 100H/s, с Cloud.Boost X10 живость составит уже 1000H/s. А с двумя дополнительными ускорениями Super.Boost +80% и +60% суммарная скорость майнинга будет равна 2400H/s (1000H/s + 1000H/s*0,8 + 1000H/s*0.6 = 1000 + 800 + 600). Скачай CryptoTab PRO и открой ради себя сверхбыстрый майнинг! https://clck.ru/QAzh8
Устанавливая PRO-версию Android-приложения помимо опции Super.Boost, вы также получите расширенный ассортимент функций: Cloud Expel — часто ускоренный майнинг Занятие SDP: сервер-зависимый майнинг не тратит заряд батареи Быстрый доступ к кабинету ради хранения и вывода криптовалюты Неограниченное число подключённых удалённых устройств Обновление баланса некогда в 10 минут Неограниченный нравоучение средств от 0.00001 BTC Отдельные профили чтобы разных пользователей Защищенный профиль для публичного Wi-Fi Приоритетная техническая поддержка Адаптивная новостная лента Никакой надоедливой рекламы Специальные промо-материалы и распродажи Майнинг на мобильных устройствах снова сроду не был таким быстрым! Скачай Android-приложение, включи Cloud.Boost и открой для себя суперскоростной майнинг с функцией Super.Boost. Перейти сообразно ссылке: https://clck.ru/QAzyL
We’ve updated our terms. By continuing to use the site and/or by logging into your account, you agree to the Site’s updated Terms of Use and Privacy Policy.
105 Comments
Back to Article
CajunArson - Tuesday, November 28, 2017 - link
The whole "pricetag" thing is not really an issue when you start to look at what *really* costs money in many servers in the real world. Especially when you consider that there's really no need to pay for the highest-end Xeon Platinum parts to compete with Epyc in most real-world benchmarks that matter. In general even the best Epyc 7601 is roughly equivalent to similarly priced Xeon Gold parts there.If you seriously think that even $10000 for a Xeon Platinum CPU is "omg expensive"... try speccing out a full load of RAM for a two or four socket server sometime and get back to me with what actually drives up the price.
In addition to providing actually new features like AVX-512, Intel has already shown us very exciting technologies like Optane that will allow for lower overall prices by reducing the need to buy gobs and gobs of expensive RAM just to keep the system running.
eek2121 - Tuesday, November 28, 2017 - link
One thing is clear, you've never touched enterprise hardware in your life. You don't build servers, you typically buy them from companies like Dell, etc. Also RAM prices are typically not a big issue. Outfitting any of these systems with 128GB of ECC memory costs around $1,500 tops, and that's before any volume discounts that the company in question may get. Altogether a server with a 32 core AMD EPYC, 128 GB of RAM, and an array of 2-4 TB drives will cost under $10,000 and may typically be less than $8,000 depending on the drive configuration, so YES price is a factor when the CPU makes up half the machine's budget.CajunArson - Tuesday, November 28, 2017 - link
"One thing is clear, you've never touched enterprise hardware in your life. You don't build servers, you typically buy them from companies like Dell, etc"Well, maybe *you* don't build your own servers. But my point is 100% valid for pre-bought servers too, just check Dell's prices on RAM if you are such an expert.
Incidentally, you also fell into the trap of assuming that the MSRPs of Intel CPUs are actually what those big companies like Dell are paying & charging. That's clearly shows that *you* don't really know much about how the enterprise hardware market actually works.
eek2121 - Tuesday, November 28, 2017 - link
As someone who has purchased servers from Dell for 'enterprise', I know exactly how much large enterprise users pay. I was using the prices quoted here for comparison. I don't know of a single large enterprise company that builds their own servers. I have a very long work history working with multiple companies.sor - Tuesday, November 28, 2017 - link
There are plenty. I’ve been in data centers filled with thousands of super micro or chenbro home built and maintained servers. The cost savings are immense, even if you partner with a systems integrator to piece together and warranty the builds you design. Anyone doing scalable cloud/web/VM probably has thousands of cheap servers rather than HP/Dell.IGTrading - Tuesday, November 28, 2017 - link
This is one thing I HATE :)When AMD had total IPC and power consumption superiority people said "yeah buh software's optimized for Intel so better buy Xeon" .
When AMD had complete superiority with higher IPC and a platform so mature they were building Supercomputers out of it, people said "yeah buh Intel has a tiny bit better power consumption and over the long term..."
Over the long term "nothing" would be the truth. :)
Then Bulldozer came and AMD's IPC took a hit, power consumption as well, but they were still dong ok in general and had some applicatioms where they did excell such as encrypting and INT calculus , plus they had the cost advantage.
People didn't even listen ... and went Xeon.
Now Intel comes and says that they've lost the power consumption crown, the core count crown, the PCIe I/O crown, the RAID crown, the RAM capacity crown, the FPU crown and he platform cost crown.
But they come with this compilation of particular cases where Xeon has a good showing and people say: "uh oh you see ?! EPYC is still imature, we go Xeon" .
What ?!
Is this even happening ?! How many crowns AMD needs to win to be accepted as the better choice overall ?! :)
Really ?! Intel writes a mostly marketing compilation of particular use cases and people take it as gospel ?!
Honestly ... how many crowns does AMD need to win ?! :)
In the end, I would like to point out that in an AMD EPYC article there was no mention of AMD's EPYC 2 SPEC World Records .....
Not that it does anything to affect Intel's particular benchmarking, but really ?!
You write an AMD EPYC article within less than a week since HP announces 2 World Records with EPYC and you don't even put it in thw article ?!?
This is the nTH AMD bashing piece from AnandTech :(
I still remeber how they've said that AMD's R290 for 550 USD was "not recommended" despite beating nVIDIA's 1000 USD Titan because "the fan was loud" :)
This coming after nVIDIA's driver update resulting in dead GPUs ....
But AMD's fan was "loud" :)
WTF ?! ...
For 21 years I've been in this industry and I'm really tired to have to stomach these ...
And yeah .. I'm not bashing AnandTech at all. I'll keep reading it like I did since back in 1998 when I've first found it.
But I really see the difference between an independent Anandtech article and this one where I'm VERY sure some Intel PR put in "a good word" and politely asked for a "positive spin" .
0ldman79 - Tuesday, November 28, 2017 - link
I agree with a lot of your points in general, though not really directed toward Anandtech.The article sounded pretty technical, unbiased and the final page was listing facts, the server CPU are similar, Intel showed a lot of benches that show the similarities and ignore the fact that their CPU costs twice as much.
In all things, which CPU works best comes down to the actual app used. I was browsing the benches the other day, the FX six core actually beats the Ryzen quad and six core in a couple of benches (like literally one or two) so if that is your be-all end-all program, Ryzen isn't worth it.
So far it looks like AMD has a good server product. Much like the FX line, it looks like EPYC is going to be better at load bearing than at maximum speed, and honestly I'm okay with that.
IGTrading - Friday, December 1, 2017 - link
I've just realized something even more despicable in this marketing compilation of particular use cases:
1) Intel built a 2S Intel-based server that is comparable in price with the AMD build.
2) That Intel build gets squashed in almost all benchmarks, or barely overtakes AMD in some.
3) But then Intel added to all graphs a build that is 11,000 USD more expensive, which also performed better, without clearly stating just how much more expensive that system is.
4) Also, Intel says that its per-core performance in some special use cases is 38% better without saying that AMD offers 33% more cores that, overall, overtake the Intel build.
In conclusion, the more you look at it, the more this becomes clear as an elaborate marketing trick that has been treated like "technical" information by websites like AnandTech.
It is not. It is an elaborate marketing trick that doesn't clearly state that the system that looks so good in these particular benchmarks is 11,000 USD more expensive. That's over 60% extra money.
Like I've said, AnandTech needs to be more critical of these marketing ploys.
They should be informing and educating us, their readers, not dishing out marketing info and making it look technical and objective when it clearly is not.
Johan Steyn - Monday, December 18, 2017 - link
The only reason I still sometimes read Anandtech's articles is because some of the readers, like you, are not stupid and don't fall for this rubbish. I get more info from the comments than from the articles themselves. WCCF have great news posts, but the comments are like from 12 year-olds.
Anandtech used to be a top-rated review site, and therefore some of the old die-hard readers are still commenting on these articles.
submux - Thursday, November 30, 2017 - link
I think that the performance crown which AMD has just won has some catches, unfortunately.
First of all, I'll probably start working with ARM and Intel as opposed to AMD and Intel... not because AMD is not a good source, but because from a business infrastructure perspective, Intel is better positioned to provide support. In addition, I'm looking into FPGA + CPU solutions which are not offered or even on the road map for AMD.
Where AMD really missed the mark this time is that if AMD delivers better performance on 64 cores as Intel does on 48 cores with current storage technologies, both CPUs are probably facing starvation anyway. The performance difference doesn't count anymore. If there's no data to process, there's no point in upping the core performance.
The other huge problem is that software is licensed on core count, not on sockets. As such, requiring 33% more cores to accomplish the same thing... even if it's faster can cost A LOT more money. I suppose if AMD can get Microsoft, Oracle, SAP, etc... to license on users or flops... AMD would be better here. But with software costs far outweighing hardware costs, fake cores (hyperthreading) are far more interesting than real cores from a licensing angle.
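To put rough numbers on the per-core licensing effect, here is a minimal sketch in Python, assuming a hypothetical $770 per 2-core license pack and a 16-core per-server minimum (illustrative placeholders, not any vendor's actual price list):

    # Sketch of per-core licensing math. PACK_PRICE and the 16-core
    # minimum are illustrative assumptions, not vendor list prices.
    PACK_PRICE = 770.0      # assumed cost of one 2-core license pack
    MIN_CORES = 16          # assumed per-server licensing minimum

    def license_cost(cores: int) -> float:
        billable = max(cores, MIN_CORES)
        packs = -(-billable // 2)   # ceiling division to whole packs
        return packs * PACK_PRICE

    for name, cores in [("dual 32-core EPYC", 64), ("dual 24-core Xeon", 48)]:
        print(f"{name}: {cores} cores -> ${license_cost(cores):,.0f}")
    # 33% more cores means ~33% higher license cost per server, every
    # year the software is under subscription or support.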
We already have virtualization sorted out. We have our Windows Servers running as VMs and unless you have far too many VMs for no particular reason or if you're simply wasting resources for no reason, you probably are far over-provisioned anyway. I know very few enterprises who really need more than two half-racks of servers to handle their entire enterprise workload... and I work with 120,000+ employee enterprises regularly.
So, then there's the other workload. There's Cloud. Meaning business systems running on a platform which developers can code against and reliably expect it to run long term. For these systems, all the code is written in interpreted or JITed languages. These platforms look distinctly different than servers. You would never for example use a 64-core dual socket server in a cloud environment. You would instead use extremely low cost, 8-16 core ARM servers with low cost SSD drives. You would have a lot of them. These systems run "Lambdas" as Amazon would call them and they scale storage out across many nodes and are designed for massive failure and therefore don't need redundancy like a virtualized data center. A 256 node cluster would be enough to run a massive operation and cost less than 2 enterprise servers and a switch. It would have 64TB aggregate storage or about 21TB effective storage with incredible response times that no VM environment could ever dream of providing. You'd buy nodes with long term support so that you can keep buying them for 10-20 years and provide a reliable platform that a business could run on for decades.
So, again, AMD doesn't really have a home here... not unless they will provide a sub-$100 cloud node (storage included).
I'm a huge fan of competition, but I just simply don't see outside of HPC where AMD really fits right now. I don't know of any businesses looking to increase their Windows/VMWare licensing costs based on core count. (each dollar spent on another core will cost $3 on software licensed for it). It's a terrible fit for OpenStack which really performs best on $100 ARM devices. I suppose it could be used in Workstations, but they are generally GPU beasts not CPU. If you need CPU, you'd prefer MIC which is much faster. There are too many storage and RAM bottlenecks to run 64-core AMD or 48-core Intel systems as well.
Maybe it would be suitable for a VDI environment. But we've learned that VDI doesn't really fit anywhere in enterprise. And to be fair, this is another place where we've learned that GPU is far more valuable than CPU as most CPU devoted to VDI goes to waste when there is GPU present.
You could have a point... but I wonder if it's just too little too late.
I also question the wisdom of spending tens of thousands of dollars on an untested platform. Consider that even if AMD is a better choice, to run even a basic test would require investment in 12 sockets' worth of these chips. To test it properly would require a minimum of $500,000 worth of hardware, and let's assume about $200,000 in human resources to get a lab up and running. If software is licensed (not trial), consider another $900,000 for VMware, Windows and maybe Oracle or SQL Server. That's a $1.6 million investment to see if you can save a few thousand on CPUs.
While Fortune 500s could probably waste that kind of money on an experiment, I don't see it making sense to smaller organizations who can go with a proven platform instead.
I think these processors will probably find a good home in Microsoft's Azure, Google's Cloud and Amazon AWS and I wish them well and really hope AMD and the cloud beasts profit from it.
In the meantime, I'll focus on moving our platforms to Cloud systems, which generally work best on Raspberry Pi-style systems.
Johan Steyn - Monday, December 18, 2017 - link
I have stated before that Anandtech is on Intel's payroll. You could see it especially with the first Threadripper review; it was horrendous to say the least. This article goes the same route. You see, two people can say the same thing but project a completely different picture. I do not disagree that Intel has its strengths over EPYC, but this article basically just agrees with Intel's presentation. Ha ha, that would have been funny, but it is not.
Intel is a corrupt company, and Anandtech is missing the point on how they present their "facts." I now very rarely read anything Anandtech publishes. In the 90's they were excellent - those were the days...
Jumangi - Tuesday, November 28, 2017 - link
Maybe you have heard of Google... or Facebook. Not only do they build, but they design their own rack systems to suit their massive needs.
Samus - Wednesday, November 29, 2017 - link
Even mom and pop shops shouldn't have servers built from scratch. Who's going to support and validate that hardware for the long haul?
HP and Dell have the best servers in my opinion. Top to bottom. Lenovo servers are at best just rehashes of their crappy workstations. If you want to get exotic (I don't) one could consider Supermicro... friends in the industry have always mentioned good luck with them, and good support. But my experience is with the big three.
Ratman6161 - Wednesday, November 29, 2017 - link
You are both wrong in my experience. These days the software that runs on servers usually costs more (often by a wide margin) than the hardware it runs on. I was once running a software package the company paid $320K for on a VM environment of five two-socket Dell servers and a SAN where the total hardware cost was $165K. But that was for the whole VM environment that ran many other servers besides the two that ran this package. Even the $165K for the VM environment included VMware licensing, so that was part software too. Considering the resources the two VMs running this package used, the total cost for the project was probably somewhere around 10% hardware and 90% software licensing.
For my particular usage, the virtualization numbers are the most important, so if we accept these numbers, Intel seems to be the way to go. The $10K CPUs seem pretty outlandish though. For virtualization purposes it seems like there might be more bang for the buck by going with the 8160 and just adding more hosts. Would have to get down to actually doing the math to decide on that one.
meepstone - Thursday, December 7, 2017 - link
So I'm not sure who has the bigger e-peen between eek2121 and CajunArson. The drama in the comments was more entertaining than the article!
ddriver - Tuesday, November 28, 2017 - link
Take a chill pill, you Intel shill :)
Go over to servethehome and check results from someone who is not paid to pimp Intel. EPYC enjoys an ample lead against similarly priced Xeons.
The only niche where it is at a disadvantage is the low-core-count, high-clock-speed SKUs, simply because for some inexplicable reason AMD decided not to address that important market.
Lastly, nobody buys those $10k+ Xeons with their own money. Those are bought exclusively with "others' money" by people who don't care about purchase value, because they have deals with Intel that put a percent of that money right back into their pockets, which is their true incentive. If they could put that money in their pockets directly, they would definitely seek the best purchase value rather than going through Intel to essentially launder it for them.
iwod - Tuesday, November 28, 2017 - link
This. Go to servethehome and make up your own mind.
lazarpandar - Tuesday, November 28, 2017 - link
It's one thing to sound like a dick, it's another thing to sound like a dick and be wrong at the same time.
mkaibear - Tuesday, November 28, 2017 - link
Er, yes, if you want just 128GB of RAM it may cost you $1,500, but if you actually want to use the capacity of those servers you'll want a good deal more than that.
The server mentioned in the Intel example can take 1.5TB of ECC RAM, at a total cost of about $20k - at which point the cost of the CPU is much less of an impact.
As CajunArson said, a full load of RAM on one of these servers is expensive. Your response of "yes well if you only buy 128GB of RAM it's not that expensive", while true, is a tad asinine - you're not addressing the point he made.
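For what it's worth, both figures survive a quick sanity check, assuming a ballpark of $12/GB for registered ECC DDR4 (a rough late-2017 street price, not a quote):

    # Rough RAM cost check; $12/GB is an assumed ballpark, not a quote.
    PRICE_PER_GB = 12.0

    for capacity_gb in (128, 512, 1536):
        print(f"{capacity_gb:>5} GB -> ${capacity_gb * PRICE_PER_GB:>9,.2f}")
    # 128 GB lands near the $1,500 figure, while a full 1.5 TB load is
    # closer to $18-20k: both claims hold, they just describe very
    # different configurations.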
eek2121 - Tuesday, November 28, 2017 - link
Not every workload requires that the RAM be topped off. We are currently in the middle of building our own private cloud on Hyper-V to replace our AWS presence, which involves building out at multiple datacenters around the country. Our servers have half a terabyte of RAM. Even with that much RAM, CPUs like this would still be (and are) a major factor in the overall cost of the server. The importance for our use case is the ability to scale, not the ability to cram as many VMs into one machine as possible. 2 servers with half a terabyte of RAM are far more valuable to us than 1 server with 1-1.5 terabytes due to redundancy.
beginner99 - Tuesday, November 28, 2017 - link
CPU price or server price are almost always irrelevant because the software running on them costs at least an order of magnitude more than the hardware itself. So you get the fastest server you need / that the software profits from.
ddriver - Tuesday, November 28, 2017 - link
Not necessarily, there is a lot of free and open-source software that is enterprise-capable.
Also, "the fastest servers" actually sell in very small quantities. Clearly the CPU cost is not irrelevant as you claim. And clearly, if it were irrelevant, Intel would not even bother offering low-price SKUs, which actually constitute the bulk of its sales, in terms of quantity as well as revenue.
yomamafor1 - Tuesday, November 28, 2017 - link
128GB for 32 cores is suspiciously low... For that kind of core count, generally the server has 512GB or above.
Also, 128GB of memory in this day and age is definitely not $1,500 tops. Maybe in early 2016, but definitely not this year, and definitely not next year.
And from what I've seen, the two biggest cost factors in an enterprise grade server is the SSDs and memory. Generally memory accounts for 20% of the server cost, while SSD accounts for about 30%.
CPU generally accounts for 10% of the cost. Not insignificant, but definitely not "makes up half of the machine's budget".
AMD has a very hard battle to get back into the datacenter. Intel is already competing aggressively.
ddriver - Tuesday, November 28, 2017 - link
Care to share with us your "correct RAM amount per CPU core" formula? There I was, thinking that the amount of RAM necessary was determined by the use case; turns out it is a product of core count.
bcronce - Tuesday, November 28, 2017 - link
In general a server running VMs is memory limited well before CPU limited.
ddriver - Tuesday, November 28, 2017 - link
Not necessarily. It depends on what kind of work those VMs will be doing. Visualized or bare metal, configuration details are dictated by the target use case. Sure, you can also build universal machines and cram them full of as many cores and as much memory as they can take, but that is very cost ineffective.
I can think of a usage scenario that will be most balanced with a quad-core CPU and 1 terabyte of RAM. Lots of data, close to no computation taking place, just data reads and writes. A big in-memory database server.
I can think of a usage scenario that will be most balanced with a 32 core cpu and 64 gigabytes of ram. An average sized data set involved in heavy computation. A render farm node server.
ddriver - Tuesday, November 28, 2017 - link
*virtualized not visualized LOL, did way too many visualizations back in the day, hands now type on autopilot...
yomamafor1 - Tuesday, November 28, 2017 - link
It is certainly determined by the use cases, but after interacting with hundreds of companies and their respective workloads, generally higher core counts are mapped to higher memory capacity.
Of course, there are always a few fringe use cases that focus significantly on compute.
Holliday75 - Saturday, December 9, 2017 - link
What about large players like Microsoft Azure or AWS? I have worked with both and neither uses anything close to what you guys talk about in terms of RAM or CPU. It's all about getting the most performance per watt. When your data center has its own substation, your electric bill might be kinda high.
submux - Thursday, November 30, 2017 - link
I will overlook the rudeness of your comment. I actively work with enterprise hardware and would probably not make comments like that and then recommend outfitting a server with 128GB of RAM. I don't think I've been near anything with as little as that in a long while. 128GB is circa 2012-2013.
An enterprise needs 6 servers to ensure one operational node in a redundant environment. This is because in two data centers, you have 3 servers each. In case of a catastrophe, a full data center is lost, then a server is in maintenance, and then finally another server fails. Therefore, you need precisely 6 servers to provide a reasonable SLA. 9 servers is technically more correct, in a proper 3-data-center design.
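That sizing rule can be written out explicitly; a small sketch of the reasoning above, using the same allowances (one site lost, one node in maintenance, one failed):

    # Sketch of the sizing rule described above: each data center keeps
    # one operational node + one in maintenance + one failed spare.
    def min_servers(data_centers: int, operational: int = 1,
                    spare_maintenance: int = 1, spare_failure: int = 1) -> int:
        per_site = operational + spare_maintenance + spare_failure
        return data_centers * per_site

    print("2 data centers:", min_servers(2), "servers")  # 6, as above
    print("3 data centers:", min_servers(3), "servers")  # 9, as above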
If you know anything about storage, you would prefer more servers, as more servers provide better storage response times... unless you're using a SAN, which is pretty much reserved strictly for people who simply don't understand storage and are willing to forfeit price, performance, reliability, stability, etc... to avoid actually getting a computer science education.
In enterprise IT, there are many things to consider. But for your virtualization platform, it's pretty simple. Fit as much capacity as possible into as few U as possible while never dropping below 6 servers. Of course, I rarely work with fewer than 500 servers at a time, but I focus on taking messy 10,000+ server environments and shrinking them to 500 or less.
See, each server you add adds cost to operation. This means man-hours. Storage costs. Degradation of performance in the fabrics, etc... it introduces meaningless complexity and requires IT engineers to waste more and more hours building illogical platforms more focused on technology than the business they were implemented for.
If I approach a customer, I tend to let them know that unless they are prepared to invest at least $50,000 per server for 6 servers and $140,000 for the appropriate network, they should deploy using an IaaS solution (not cloud, never call IaaS cloud) where they can share a platform that was built to these requirements. The breaking point where IaaS is less economical than DIY is at about $500,000 with an OpEx investment of $400,000-$600,000 for power, connectivity, human resources, etc... annually and this doesn't even include having experts on the platform running on the data center itself.
So with less than a minimum of $1 million a year investment in just providing infrastructure (VMware, Nutanix, KVM, Hyper-V), not even providing a platform to run on it, you're just pissing the wrong way in the wind tunnel and wasting obscene amounts of money for no apparent reason on dead-end projects run by people who spend money without considering the value provided.
In addition, the people running your data center for that price are increasing in cost and their skillset is aging and decreasing in value over that time.
I haven't even mentioned power, cooling, rack space, cabling, managed PDUs, electricians, plumbers, fire control, etc...
Unless you're working with BIG DATA, an array of 2-4 TB drives for under $10,000 to feed even one 32-core AMD EPYC is such an insanely bad idea, it's borderline criminal stupidity. Let's not even discuss feeding pipelines of six 32-core current-generation CPUs per data center. It would be like trying to feed a blue whale with a teaspoon. In a virtualized configuration a dual-EPYC server would probably need 100GB/s+ of bandwidth to barely keep ahead of process starvation.
If you have any interest at all in return on investment in enterprise IT, you really need to up your game to make it work on paper.
Now... consider that if you're running a virtual data center... plain vanilla. Retail license cost of Windows Enterprise and VMware (vCenter, NSX, vSAN) for a dual 32-core EPYC is approximately $125,000 a server. Cutting back to dual 24-core with approximately the same performance would save about $30,000 a server in software alone.
I suppose I can go on and on... but let's be pretty clear CajunArson made a fair comment and probably is considering the cost of 1-2TB per server of RAM. Not 128GB which is more of a graphics workstation in 2017.
sharath.naik - Tuesday, November 28, 2017 - link
EPYC's single-socket 32-core/64-thread CPU is ~$2,000. There is no Intel equivalent here, which is disappointing, as Intel's single-socket systems are only ~22 cores max, with no 205-watt parts.
IGTrading - Tuesday, November 28, 2017 - link
You're talking nonsense, mate :)
I'd pay extra to have extra physical cores when I'm speccing a server holding VMs, but AMD gives us more cores for less money.
I also love AMD's RAID which works absolutely great and it's free while Intel's is annoyingly a paid-for solution.
Intel doesn't say one peep about Fully Encrypted RAM, because they don't have it.
Intel doesn't say a peep about power consumption, because their platform loses in every test.
Intel doesn't say a peep about EPYC 1.1 or EPYC Plus or whatever which will be a drop-in upgrade for the current platforms.
I was put in the shitty situation of speccing Xeon based machines because the per-core licenses were extremely expensive and the Xeon solution is offering us better performance, but other than this situation, we're doing everything to avoid working with Intel.
We still have servers that started out with dual Opterons and grew to Hexa-Core over the years.
That saved our clients a ton of money, and their jaws dropped when we advised that they need to move back to Xeon if they want to upgrade (EPYC was still 2 years away then).
It may be fashionable as a young lad to root for the "cool winner" like Ferrari, Bugatti or Intel, but when you've worked multiple decades in the industry and had to swallow all the crap Intel was pulling, you start rooting for the little guy.
ddrіver - Tuesday, November 28, 2017 - link
Paying anywhere between $12K-$50+K more per machine just to have the Intel logo tends to add up. Ending up with up to 200W more per machine also incurs some extra costs.
If you said the cost fades when compared to licensing costs of many software solutions, I would understand. But the metal itself... no, the extra cost for that Xeon is either stupidity or protection tax.
Geranium - Tuesday, November 28, 2017 - link
How many server software packages really use AVX-512? Can you give us a list (excluding AI and machine learning apps, because those run better on GPUs/dedicated hardware)?
SaltyVincent - Wednesday, November 29, 2017 - link
I haven't come across any personally, but something else to add is the amount of heat these chips generate when running AVX-512 under load. Running any AVX benchmarks on Intel chips usually results in throttling.
"The whole "pricetag" thing is not really an issue": No? Is that why the volume sales in the server market is the mid-section of the former Xeon E5? Wouldn't people be buying top end E7s (Platinum in today's lingo)? Of course pricetag matters, and matters even more when you're deploying tens of thousands of nodes.Ro_Ja - Tuesday, November 28, 2017 - link
Head title needs a wee bit edit.
negusp - Tuesday, November 28, 2017 - link
Your comment needs a big bit edit.
Ryan Smith - Tuesday, November 28, 2017 - link
Head title? I'm not sure I follow.
IGTrading - Tuesday, November 28, 2017 - link
These TSX instructions have a lot in common with AMD's own proposed ASF instructions, which were discussed 3 years before TSX.
Don't you think so?
smilingcrow - Tuesday, November 28, 2017 - link
TSX has been implemented, but how about ASF?
CajunArson - Tuesday, November 28, 2017 - link
Incidentally, for anybody who thinks Intel "cheated" with those numbers, there's concrete proof from independent third-party reviewers that at least the GROMACS benchmark results that Intel itself is showing are not fully accurate... as in, they are not fully accurate in *AMD's favor*.
Here's a link to GROMACS results from Serve the Home that are actually using the newest version that finally turns on the AVX-512 support, to show you what the Xeon platform was actually designed to do: https://www.servethehome.com/wp-content/uploads/20...
So just remember that Intel is being pretty conservative with these numbers if their own published GROMACS results are anything to go by.
MonkeyPaw - Tuesday, November 28, 2017 - link
I would hope they'd be conservative in this sector. I'm guessing very knowledgeable people will be making the buying decisions here, and there may even be contractual expectations from the purchasing companies. Over-promising and under-delivering on an internal report might not just cost a few big sales; it might even result in lawsuits.
tamalero - Tuesday, November 28, 2017 - link
I think the problem is that while Intel usually uses the most optimized compilers and systems for their own hardware, they usually do not optimize the competing systems at all. At least in the consumer benchmarks.
Not so sure about these enterprise ones, because I have no idea what most of these tests do.
jjj - Tuesday, November 28, 2017 - link
The biggest lie is through omission: the bulk of the volume is at rather low ASPs, so if you are gonna test, test $1k-and-below SoCs and use the I/O offered by each.
eek2121 - Tuesday, November 28, 2017 - link
I would be interested to see how AMD EPYC processors with lower core counts perform in the database benchmarks, as they should have fewer NUMA nodes.
CajunArson - Tuesday, November 28, 2017 - link
Wrong. You clearly don't understand how EPYC works. Literally every EPYC chip has the same number of NUMA nodes regardless of core count, from the 7601 all the way down to the super-stripped-down parts.
Each chip has 4 dies that produce the same number of NUMA nodes; AMD just turns off cores on the lower-end parts.
Maybe you should have actually learned about what Epyc was instead of wasting your time posting ignorant attacks on other people's posts.
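For what it's worth, the die layout claim is easy to sketch: first-generation EPYC packages four dies per socket, each die its own NUMA node, so the node count is fixed by the package rather than the core count (core counts below are the public SKU figures; the helper itself is just illustrative):

    # Illustrative sketch: EPYC's NUMA node count comes from the number
    # of dies (4 per socket), not from how many cores are enabled.
    DIES_PER_SOCKET = 4

    def numa_nodes(sockets: int) -> int:
        return sockets * DIES_PER_SOCKET

    for sku, total_cores in [("EPYC 7601", 32), ("EPYC 7351", 16), ("EPYC 7251", 8)]:
        per_die = total_cores // DIES_PER_SOCKET
        print(f"{sku}: 2S = {numa_nodes(2)} NUMA nodes, {per_die} active cores/die")
    # Lower-end SKUs thin out each die rather than removing dies, so a
    # dual-socket box always presents 8 NUMA nodes to the OS.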
eek2121 - Tuesday, November 28, 2017 - link
The same goes for you. My ignorance with EPYC stems from poor availability and the lack of desire to learn about EPYC. You seem to have a full-time job trolling on AnandTech. Go troll somewhere else.
IGTrading - Tuesday, November 28, 2017 - link
Chill guys :)
Reading your posts I see you're both right, just using examples of different use cases.
P.S. Cajun seems like a bit of an avid Intel supporter as well, but he's right: in AVX-512 and in some particular software, Intel offers excellent performance.
But that comes at a price, plus some more power consumption, plus the inability to upgrade (considering what Intel usually does to its customers).
iwod - Tuesday, November 28, 2017 - link
Poor availability - do Dell offer AMD now?
Ashari - Tuesday, November 28, 2017 - link
LOL, "GloFo 16nm"... tsts, one would think people like Johan De Gelas and Ian Cutress would know which node is GloFo and which one is TSMCIan Cutress - Tuesday, November 28, 2017 - link
That's my brain fart. I've been writing about other things recently. Edited.peevee - Tuesday, November 28, 2017 - link
"The benchmarking scenario also has a big question mark, as in the footnotes to the slides Intel achieved this victory by placing 58 VMs on the Xeon 8160 setup versus 42 VMs on the EPYC 7601 setup."Given how well AMDs SMT scales, a real client can put up to 128 single-CPU VMs on the EPIC 7601, and 58 VMs on Xeon 8160 would be tramped ridiculously.
Here Intel just had to rely on shenanigans so obvious that it is just fraud.
LordOfTheBoired - Tuesday, November 28, 2017 - link
Yeah, that really stuck out for me too. "We outperform AMD when running a different benchmark!"
And to be frank, it casts a pall over Intel's entire PR release, since it IS blatantly not how benchmarks work.
Andresen - Tuesday, November 28, 2017 - link
Many HPC tasks are memory bandwidth limited, and then AVX-512 is of little help. In Spec.org CFP2006, none of the recent results use AVX-512; instead they rely on AVX2. The few tests posted using AVX-512 come out worse than the tests on similar systems using AVX2. For memory-bandwidth-limited tasks EPYC has an advantage with its 8 memory channels compared to Intel's 6 channels. For both architectures, a high-end processor is not needed for bandwidth-limited tasks, since they don't offer more memory channels.
Johan Steyn - Monday, December 18, 2017 - link
AVX also heats up the CPU a lot and it has to throttle down. With AVX, Intel cannot run high clock speeds.
Just when you think AT cannot possibly sink any lower, they now directly publish Intel benchmarks of a competing product.
Coldfriction - Tuesday, November 28, 2017 - link
I myself was confused and disappointed reading the summary, where agreement with Intel seems to be presented by the authors. Using phrases like "there is no denying that the Intel Xeon is a 'safer bet' for VMware virtualization" without testing it pushes AT into the realm of paid-for shills. Independent reviewers wouldn't trust anyone's marketing, and even if they were to publish an article on benchmarks from a competitor, they would fill the thing with hefty amounts of skepticism until they could test it themselves. What Intel presents could very realistically be true (personally, I don't doubt that their benchmarks are within the ballpark of being legit), but I want my independent review sites to have as little bias as possible, and that means objectively testing the hardware and ignoring the marketing.
wumpus - Wednesday, November 29, 2017 - link
These types of servers are rarely bought by customers for personal use. Instead, they are bought for a "real job" where CYA decisions outweigh any performance benefits (to a degree; the end product has to work). If something really goes wrong, you can always expect to get the blame for buying the "off brand" instead of following the sheep, regardless of what really caused the failure (typically with highly annoyed management who can't tell *anything* about the server other than that it is the "off brand").
If this isn't a consideration, you have a "great job". Expect the owner to sell at some point or expand to the point it is controlled by MBAs and downgrade everybody's job to a "real job". Sorry to say, but at least in the USA that is life.
Johan Steyn - Monday, December 18, 2017 - link
People sometimes really surprise me. What support do you want from AMD? Yes, if there is a booboo like Intel has (present tense) with its security flaw, you need support from them. I have sold numerous systems and servers in my life and never did I go to AMD or Intel to ask for support. It is either the OEM, component supplier or component manufacturer (like motherboards etc.) who you go to for support.
If the CPU works as it should, you do not need support. CPUs were, in my experience, the one component that rarely if ever dies on you. So if you trust Tyan to make good products, which they do, they are the ones to give you support, not AMD. AMD has to help with BIOSes etc., with which they are very good.
So please stop with this support issue and safer bet. If the system runs unstable because of hardware issues, sure they have to sort it out, but till now, none has been reported.
What has Intel done about the bug recently found? Did they come to you to fix it and support you? Nope, you have to fix it yourself, that is if the motherboard manufacturer has a bios update. So, for me it looks like AMD might just be the safer bet after all...
Topweasel - Tuesday, November 28, 2017 - link
Yeah, I want to give them the benefit of the doubt, and I have no problem with them posting numbers, even as an analysis of Intel in regard to EPYC. But a full-page "review" of Intel's EPYC benchmarks as a product is kind of shilly. I mean, where are their tests to back up the information? Where are the counterpart tests where they test something similar that wasn't handpicked by Intel? How can any company assess the validity of a product based solely off of its competitor's testing of the product?
bmf614 - Tuesday, November 28, 2017 - link
If you could actually get ahold of Epyc they would probably review the hardware themselves, but as of yet it is a paper launch.
supdawgwtfd - Wednesday, November 29, 2017 - link
It's not a paper launch, dipshit. They can barely keep up with orders from large companies.
Ryan Smith - Wednesday, November 29, 2017 - link
To throw some context in here, the purpose of this article isn't to publish Intel's benchmarks. Rather, it's commentary on what has been a very unusual situation.
Up until now, neither AMD nor Intel have engaged in any serious Skylake Xeon vs. Zen EPYC technical marketing.
"AMD's technical marketing of the new CPU has been surprisingly absent, as the company not published any real server benchmarks. The only benchmarks published were SPEC CPU and Stream, with AMD preferring for its partners and third parties to promote performance"
This despite the fact that AMD and Intel's server products haven't been competitive like this in nearly a decade. Normally you'd expect there to be case studies flying out left and right, which has not been the case. And it's especially surprising since, as the underdog, AMD needs to claw back lost ground.
Consequently, Intel's own efforts are, to date, the first efforts by a server vendor to do a comprehensive set of benchmarks over a range of use cases. And let's be clear here: this is Intel doing this for Intel's own benefit. Which is why we've already previously reviewed the two CPUs, as have other 3rd party groups.
Still, I think it's very interesting to look at what Intel has chosen to represent, and what their numbers show. Intel has more resources than pretty much everyone else when it comes to competitive analysis, after all. So their choices and where they show themselves falling behind AMD says a lot about the current situation.
TL;DR: We thought this stuff was interesting, especially since neither vendor until now has done a Xeon-SP vs. EPYC comparison. And since we've already done our own independent review (https://www.anandtech.com/show/11544/intel-skylake... ), it gives us a set of data to compare to our own conclusions (and to be clear, this isn't a review nor are we trying to call it one)
CajunArson - Tuesday, November 28, 2017 - link
Yeah, you were so doing your righteous complaints when Anandtech did literally the same thing for AMD, when AMD went out and misconfigured Intel boxes to pretend that Epyc was better than it actually was.
Oh wait, you weren't.
ddriver - Tuesday, November 28, 2017 - link
The problem is AT's heavily Intel-biased coverage, you clog. How could anyone complain about the opposite when AT has never displayed pro-AMD bias? I have a problem with bias, and I point it out when I see it. You can bet your ass the moment AT shows unfair bias toward AMD I will be there to point it out. But I cannot point it out if it doesn't exist.
Hurr Durr - Tuesday, November 28, 2017 - link
He was too busy ordering specific platters for his thousands of HDDs with one hand and screaming in threads about hypetane with the other.lkuzmanov - Tuesday, November 28, 2017 - link
I've frequented the site for what must be over 10 years, but I fully agree this is, at the very least, a terrible idea.
bmf614 - Tuesday, November 28, 2017 - link
Toms and many other sites also covered this.
wumpus - Wednesday, November 29, 2017 - link
If Intel suddenly feels the need to compete with AMD, that's news (practically "man bites dog" news, judging from the last decade or so).
The fact that they have to pick carefully contrived benchmarks to appear superior to AMD is even more telling. Totally ignoring power consumption (one of the biggest concerns for datacenters) is even more telling still.
lefty2 - Tuesday, November 28, 2017 - link
When Skylake runs AVX-512 and AVX2 instructions, it causes both the clock frequency to go down *and* the voltage to go up (https://www.intel.com/content/dam/www/public/us/en... ). However, it can only bring the voltage back down within 1ms. If you get a mix of AVX2 and regular instructions, like you do in the POV-Ray test, then it's going to be running at higher voltage the whole time. That probably explains why the Xeon 8176 drew so much more power than the EPYC in your energy consumption test.
The guys at Cloudflare also observed a similar effect (although they only noticed the performance degradation): https://blog.cloudflare.com/on-the-dangers-of-inte...
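A toy model illustrates why sparse wide-vector bursts hurt mixed code so much; the clocks and the 1ms recovery window below are assumed round numbers for illustration, not Intel's actual frequency tables:

    # Toy model of AVX frequency licenses. BASE_GHZ, AVX512_GHZ and the
    # 1 ms recovery window are assumptions for illustration only.
    BASE_GHZ, AVX512_GHZ = 3.8, 2.8
    RECOVERY_MS = 1.0   # time needed to return to the faster license

    def effective_ghz(ms_between_bursts: float) -> float:
        # If bursts arrive faster than the recovery window, the core
        # never regains the higher clock.
        if ms_between_bursts <= RECOVERY_MS:
            return AVX512_GHZ
        slow_frac = RECOVERY_MS / ms_between_bursts
        return AVX512_GHZ * slow_frac + BASE_GHZ * (1.0 - slow_frac)

    for gap_ms in (0.5, 2.0, 10.0):
        print(f"one AVX-512 burst every {gap_ms:4.1f} ms -> ~{effective_ghz(gap_ms):.2f} GHz")
    # Even fairly sparse bursts shave a lot off the average clock.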
Kevin G - Tuesday, November 28, 2017 - link
In the HPC section, the article indicates that NAMD is faster on the Epyc system, but the accompanying graphic points toward a draw with the Xeon Gold 6148 and a win for the Xeon Platinum 8160. Epyc does win a few benchmarks in the list prior to NAMD though.
Frank_han - Tuesday, November 28, 2017 - link
When you ran those tests, did you bind CPU threads, and how did you take care of the different layers of NUMA domains?
UpSpin - Tuesday, November 28, 2017 - link
Highly questionable article:
"A lot of time the software within a system will only vaguely know what system it is being run on, especially if that system is virtualised". Why do you say this if you publish HPC results? There the software knows exactly what type of processor, in what kind of configuration, it is running on.
"The second is the compiler situation: in each benchmark, Intel used the Intel compiler for Intel CPUs, but compiled the AMD code on GCC, LLVM and the Intel compiler, choosing the best result". More important: what type of math library did they use? The Intel MKL has unmatched optimization; did they use the same for the AMD system?
"Firstly is that these are single node measurements: One 32-core EPYC vs 20/24-core Intel processors." Why don't you make it clear that, by doing this, the benchmark became useless!!! Performance doesn't scale linearly with core count: http://www.gromacs.org/@api/deki/files/240/=gromac...
So it makes a huge difference if I compare a simulation which runs on 32 cores with one which runs on 20 cores. If I then calculate the performance per core, I always see that the lower-core-count CPU is much, much faster, because of scaling issues in the simulation software. You haven't disclosed how Intel got their 'relative performance' value.
Elstar - Tuesday, November 28, 2017 - link
Do we know for sure that the Omni-Path Skylake CPUs actually use PCIe internally for the fabric port? If you look at Intel's "ark" database, all of the "F" parts have one fewer UPI link, which seems weird.
HStewart - Tuesday, November 28, 2017 - link
I think this was a realistic article analyzing the two systems. And it does point to something important: the Intel system is a more mature system than the AMD EPYC system. My personal feeling is that AMD's was thrown together to claim the core-count crown without realistically thinking about the design.
But it does give Intel a good shot in the arm with competition, and I expect Intel's next revision to have a significant leap in technology.
I did like that the systems were similarly configured. As for the cost, about 10 years ago I built myself a dual Xeon 5160 that was about $8,000 - it was a serious machine at the time, significantly faster than a normal desktop, and it lasted much longer. It was also from Supermicro, a fine machine - for the longest time it was still faster than a lot of machines you could get at Best Buy. It has Windows 10 on it now and still runs today, but I rarely use it because I like the portability of laptops.
gescom - Tuesday, November 28, 2017 - link
https://www.servethehome.com/wp-content/uploads/20...
And suddenly - 8-core 6134 Skylake-SP - equals - 32-core Epyc 7601.
Amazing. Really amazing.
gescom - Tuesday, November 28, 2017 - link
Huh, I forgot - and that is Skylake at 130W vs Epyc at 180W.
ddriver - Tuesday, November 28, 2017 - link
Gromacs is a very narrow niche product and also very biased - they heavily optimize for Intel and Nvidia and push AMD products to take an inefficient code path.
HStewart - Tuesday, November 28, 2017 - link
This is a comparison of AVX2 vs AVX-512.
AVX-512 is twice as wide as AVX2 and significantly more powerful, so yes, it is very possible in this test that a CPU with 1/4 the cores can deliver more performance because of AVX-512.
Also, I heard AMD's implementation of AVX2 actually ties two 128-bit units together - these results could show that is true.
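Back-of-the-envelope peak-FLOPS math is consistent with both observations; the 2.7 GHz clock below is an assumed round number (real AVX-512 clocks run lower, as discussed elsewhere in these comments):

    # Peak double-precision vector FLOPS per core per cycle:
    # lanes x FMA units x 2 (a fused multiply-add counts as two FLOPs).
    def peak_gflops(cores: int, simd_bits: int, fma_units: int, ghz: float) -> float:
        dp_lanes = simd_bits // 64
        return cores * dp_lanes * fma_units * 2 * ghz

    # Zen 1 executes 256-bit AVX2 as two 128-bit halves -> 128-bit effective.
    print("32-core EPYC (128-bit eff.):", peak_gflops(32, 128, 2, 2.7), "GFLOPS")
    print("8-core Xeon (AVX-512)      :", peak_gflops(8, 512, 2, 2.7), "GFLOPS")
    # Similar peaks from 8 cores vs 32: in a dense, well-vectorized
    # kernel like GROMACS, vector width can fully offset core count.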
piesquared - Tuesday, November 28, 2017 - link
And the hilarity continues. So AMD posts in-house benchmarks and the crowd goes: Derp, these are AMD-supplied benchmarks, best wait for third-party benchmarks.
Intel posts in-house benchmarks and the crowd goes: Wow awesome dude, that's the shitsors! Who needs third-party benchmarks, AMD should post more in-house benchmarks. derp derp
tamalero - Tuesday, November 28, 2017 - link
Guerrilla marketing at its finest? The hilarity is that when Intel was dominating, they never mentioned AMD, nor did they need to.
Now that AMD has a compelling product, they suddenly started doing "comparisons" left and right and claiming how bad the "glue" in AMD CPUs is (while ignoring the drama about using cheap TIM instead of solder).
bmf614 - Tuesday, November 28, 2017 - link
Epyc really hasn't even launched yet. Try buying a Dell or HP with Epyc. Nope.
supdawgwtfd - Wednesday, November 29, 2017 - link
It's launched. Demand has outstripped supply. They are now starting to get on top of it.
Maybe stop being an Intel-biased dickhead and go look at what is actually happening?
Topweasel - Wednesday, November 29, 2017 - link
Yeah, I don't get it. I mean, even Ryzen Mobile launched well before we saw it. Announcing EPYC early was important to build up demand with OEMs - something that wasn't as important for a consumer product that needed announcement with availability. EPYC's announcement wasn't for the end purchaser. Both of these need long testing periods and seed supply. EPYC then has ODM builds for the cloud services that it does supply. Ryzen Mobile launched when OEMs had products to ship; EPYC launched when AMD had products to ship to manufacturers. When those manufacturers offer EPYC depends completely on their development cycle.
Johan Steyn - Monday, December 18, 2017 - link
Haha so true
Ninhalem - Tuesday, November 28, 2017 - link
Can we get ANSYS Structural or Comsol benchmarks for the HPC sections? Building machines using Xeons for these applications is beyond expensive for engineering design on fixed-price contracts.
anactoraaron - Tuesday, November 28, 2017 - link
No, because AT didn't test anything here. They are just 'publishing' Intel's benchmarks and calling it an 'analysis'.
Doesn't this qualify for the #ad in the title?
Ryan Smith - Wednesday, November 29, 2017 - link
To throw some context in here: as I said in my reply above, the purpose of this article isn't to publish Intel's benchmarks, but to offer commentary on what has been a very unusual situation, and to weigh Intel's chosen numbers against our own independent review.
(And no, this doesn't qualify for #ad, as Intel hasn't paid us. That's not how this works; that's not how any of this works.)
deltaFx2 - Wednesday, November 29, 2017 - link
@Ryan Smith: "Up until now, neither AMD nor Intel have engaged in any serious Skylake Xeon vs. Zen EPYC technical marketing." I think that's largely because the market is different from a decade ago. Hyperscalers do their own testing and aren't swayed by Intel's or AMD's whitepapers. They do their own thing. There are still many companies that buy and maintain their own servers, but my understanding is that this market is shrinking or at least not growing. Cloud is where the money is, and they know what they want. I don't think AMD is trying to go after enterprise this time around (I'm sure they'll take their business but the main target seems to be hyperscalers. The CCX, MCM, large memory footprint etc all point to them saying we'll target scale-out as opposed to scale-up. AMD does quite well in scale-out, while taking a hit in scale-up.).Also, AMD might still be in the process of doing minor firmware tweaks as evidenced by tier-1 server availability (HP/Dell) coming online only end of Q4.
Johan Steyn - Monday, December 18, 2017 - link
I am so glad people are realising Anandtech's rubbish, probably led by Ian, who wrote that terrible Threadripper review. Maybe he will realise it as more people complain. It all depends on how much Intel is paying him...
mapesdhs - Wednesday, November 29, 2017 - link
ANSYS is one of those cases where having massive RAM really matters. I doubt any site would bother speccing out a system properly for that. One ANSYS user told me he didn't care about the CPU, just wanted 1TB RAM, and that was over a decade ago.
rtho782 - Tuesday, November 28, 2017 - link
> Xeon Platinum 8160 (24 cores at 2.1 - 3.7 GHz, $4702k)
$4,702,000? Intel really have bumped up their pricing!!
bmf614 - Tuesday, November 28, 2017 - link
The pricetag discussion really needs to include software licensing as well. Windows Datacenter and SQL Server on a machine with 64 cores will cost more than the hardware itself. This is the reason that the Xeon 5122 exists.
bmf614 - Tuesday, November 28, 2017 - link
Also, isn't it kind of silly to invest in a server platform with limited PCIe performance when faster and faster storage and networking is becoming commonplace?
Polacott - Tuesday, November 28, 2017 - link
It really seems that AMD has crushed Intel this time. Also, Charlie has some interesting points about security (has this topic even been analyzed here? https://www.semiaccurate.com/2017/11/20/epyc-arriv... )
Software WILL be tuned for EPYC, so the safe bet will be getting EPYC, not Xeon, for sure.
And power consumption and heat really matter, as they are an important part of datacenter maintenance costs.
I really don't get how the article ends up with this conclusion.
Johan Steyn - Monday, December 18, 2017 - link
Intel's financial support helps them reach this conclusion. Very sad
ZolaIII - Tuesday, November 28, 2017 - link
As usual, Intel cheated. Clients won't use either their proprietary compiler or software, but the GNU ones. Now let me show you the difference:
https://3s81si1s5ygj3mzby34dq6qf-wpengine.netdna-s...
Other than that, this is boring, as ARM NUMA-based server chips are coming, with some backup from good old veterans when it comes to supercomputing, and this time around Intel won't even have a compiler advantage to brag about.
Sources:
https://www.nextplatform.com/2017/11/27/cavium-tru...
http://www.cavium.com/newsevents-Cray-Catapults-Ar...
Now this is the real news - & a melancholic one for me, as it brings back memories of how it all started. & guess what? We are back there at the start again.
toyotabedzrock - Tuesday, November 28, 2017 - link
Linux 4.15 has code to increase EPYC performance and enable the memory encryption features. 4.16 will have the code to enable virtual machine memory encryption.
duploxxx - Friday, December 1, 2017 - link
Thx for sharing the article Johan, as usual those are the ones I will always read.
Interesting to get feedback from Intel on benchmark comparisons; this tells how scared they really are of the competition. There is no way around it - I've been to many OEM and large vendor events lately. One thing is for sure: the blue team was caught with their pants down, and there is definite interest from IT in this new competitor.
Now talking a bit under the hood, having had both systems from beta stages.
I am sure Intel will be more than happy to tell you if they were running the systems with jitter control. Of course they won't tell the world about this and its related performance issues.
Second, will they also share with the world that their so-called AVX enhancements have major clock speed disadvantages for the whole socket? Really nice in virtual environments :)
Third, the turbo boosting that is nowhere near the claimed values when running virtualization?
Yes, the benchmarking results are nice, but they don't reflect real-world reality; they're based on synthetic benches. The real world gets way less turbo boost due to core hot spots and their correlated TDP.
There are reasons why large OEMs have not yet introduced EPYC solutions: they are still optimizing BIOS and microcode because they want to bring a solid performing platform. The early tests from Intel show why.
Even the shared VMware bench can be debated with no shared version info as the 6.5u1 has got major updates to the hypervisor with optimizations for EPYC.
Sure, DB benches are an Intel advantage; there is no magic to it looking at the die configurations - there are trade-offs. But this is ONLY when the DBs are bigger than a certain number of dies, so we are talking here about 16+ cores on the 32-core/socket systems, for example; anything lower will actually have more memory bandwidth than the Intel part. So how reliable are these benchmarks for day-to-day production? Not everyone is running the huge sizes. And those who do should not just compare based on the synthetic benches provided, but do real-life testing.
Ain't it nice that a small company brings out a new CPU line and Intel already needs to select their top-bin parts as a counterpart to show better benchmarks. There are 44 other bins available in the Intel portfolio; you can probably already start guessing how well they really fare against their competitor....
hsupengjun - Sunday, December 3, 2017 - link
Wow, the first few pages are sooo biased, but damn, are they rightfully so.
ajc9988 - Tuesday, January 16, 2018 - link
@Ian Cutress & Johan De Gelas - Could you please update this by running your own numbers AFTER the full implementation of the Spectre and Meltdown fixes? That would be so helpful in showing how much these have affected both platforms and whether your conclusions remain after the fixes. Thank you!
FentonW - Wednesday, January 17, 2018 - link
What this doesn't really address is memory configurations.
RAM configurations are much more limited with Intel, given the 2-DIMMs-per-channel layout and 6 channels vs 8 with AMD.
With Intel you can only get 384GB with 16GB DIMMs, compared to 512GB with AMD.
If you need 512GB then you have to use 32GB DIMMs on Intel, which again pushes the price up considerably.
Which is why customers often choose a Broadwell system over Skylake, to keep memory costs down.
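The capacity figures follow directly from the channel math; a quick sketch, assuming 16GB RDIMMs at 2 DIMMs per channel on a dual-socket box:

    # Max capacity = sockets x channels x DIMMs/channel x DIMM size.
    def max_memory_gb(sockets, channels, dimms_per_channel, dimm_gb):
        return sockets * channels * dimms_per_channel * dimm_gb

    print("2S Skylake-SP, 16GB DIMMs:", max_memory_gb(2, 6, 2, 16), "GB")  # 384
    print("2S EPYC,       16GB DIMMs:", max_memory_gb(2, 8, 2, 16), "GB")  # 512
    # Hitting 512 GB on the Intel box forces a step up to 32 GB DIMMs,
    # which is where the extra cost comes from.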