HStewart - Tuesday, April 2, 2019 - link
I think the news of the Intel Agilex FPGA is significant.
1. Intel 10nm in Q3
2. PCIe 5.0 (not 4.0) in 2019
3. EMIB is used
4. 40 TFLOPS
I wonder if this chip is planned to be used in new Cray supercomputers.
I am also curious how a 56-core 9200 would compare to a dual-socket 28-core Xeon. It just seems to me that the industry is stuck on putting more cores on the CPU - but at least these modules have more than 8. For example, with the same total core count, what is the difference between the following:
1. a single 16-core CPU
2. a single die with two 8-core CPUs
3. dual-socket 8-core CPUs.
My guess - and it is only a guess - is that 1 performs best, followed by 2 and then 3.
bubblyboo - Tuesday, April 2, 2019 - link
If you mean the new German Cray system, then they already decided on Intel Stratix 10.
HStewart - Tuesday, April 2, 2019 - link
https://www.anandtech.com/show/14112/intels-xeon-x...
ksec - Tuesday, April 2, 2019 - link
The 10nm is likely from Custom Foundry, i.e. not the same 10nm used in Ice Lake. But Altera literally gave up competing with Xilinx ever since Intel acquired them. Can't wait to see how it compares to Xilinx's Everest.
FreckledTrout - Tuesday, April 2, 2019 - link
Yes, but that is just sampling in 2019.
CharonPDX - Friday, April 5, 2019 - link
From other articles about the 56-core 9200, it sounds like it is essentially a dual-socket-in-one-mount solution. It's also BGA-only. It's designed for ultra-high-density servers (blade servers, etc.) to provide the effective nature of dual-socket in far less physical space. It has 12 memory channels per socket, compared to 6 per socket for the 8200-series, so it's almost certainly just two 8200-series dies in one package. It's also nearly double the wattage.

And it can have a two-package solution (I would say "two-socket," but it's soldered, not socketed) to give a four-socket-equivalent system in less space than a two-socket system.
As for performance on the various types, it depends on how the CPUs are architected, and the workload. If your workload isn't heavily multithreaded, but is *VERY* memory intensive, then fewer cores-per-socket with more sockets (each with their own memory channels) may be better for you than more cores in one socket. And again, the 9200 is more like your option 3, just in a single physical package, than even like your option 2.
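To put a rough number on the memory-bound case, here is a back-of-the-envelope sketch in Python. The per-channel bandwidth figure (DDR4-2666) is my assumption, not something from the article, and the core/channel counts assume 28-core 8280-class dies:

```python
# Rough bandwidth-per-core arithmetic for the configurations discussed above.
# Assumes DDR4-2666 at ~21.3 GB/s per channel (2666 MT/s x 8 bytes); the
# comment quotes no memory speeds, so treat this as illustrative only.
CHANNEL_GBPS = 21.3

configs = {
    "1 socket, 28 cores, 6 channels (8280)":      (28, 6),
    "2 sockets, 56 cores, 12 channels (2x 8280)": (56, 12),
    "1 package, 56 cores, 12 channels (9282)":    (56, 12),
}

for name, (cores, channels) in configs.items():
    per_core = channels * CHANNEL_GBPS / cores
    print(f"{name}: {per_core:.1f} GB/s per core")
# The last two rows come out identical, which is the point: the 9200 is
# two 8200-class dies sharing one package, not a bigger monolithic die.
```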
Dolan - Monday, April 8, 2019 - link
1. Um, no.
2. The SerDes is still 16/20 nm, so it will have the same feature level as competitors.
3. Sure, but any volume product would be a success, not these prototype parts.
4. The 40 TFLOPS is in bfloat16. In regular 32-bit IEEE 754 it is just 10 TFLOPS!!! That is what they originally advertised for Stratix 10.
Btw: the FPGA is pretty small, with only 30% more elements compared to Stratix 10. Where is the claimed 2.7x better density?
The 40% performance gain is based on estimates for the highest speed grade compared to a Stratix 10 in the -2 grade, with no voltage mentioned. The point is that these settings by themselves can cause a double-digit performance difference.
This is a shame. It is probably the worst FPGA in Altera's history.
People, please. Stop eating Intel's propaganda.
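On Dolan's point 4: bfloat16 keeps float32's 8-bit exponent but only 7 mantissa bits, which is how a 4x headline number like 40 vs 10 TFLOPS arises. A minimal Python sketch of the format (my own illustration, nothing Intel published):

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE 754 float32 to bfloat16: keep the sign bit, all
    8 exponent bits, and only the top 7 of the 23 mantissa bits."""
    f32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return f32_bits >> 16  # bfloat16 is literally the upper half of float32

def from_bfloat16_bits(b: int) -> float:
    """Expand bfloat16 bits back to float32 by zero-padding the mantissa."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

x = 3.14159265
b = to_bfloat16_bits(x)
print(f"{x} -> 0x{b:04x} -> {from_bfloat16_bits(b)}")
# 3.14159265 -> 0x4049 -> 3.140625: same dynamic range as float32,
# but only ~2-3 significant decimal digits survive.
```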
Diogene7 - Tuesday, April 2, 2019 - link
@Ian Cutress: I am really looking forward to seeing Storage Class Memory (SCM) like 3D XPoint being used on the memory bus of consumer laptops and smartphones, where it could be used as a bootable RAMDisk. In theory, it should lower latency by several orders of magnitude and bring some noticeable responsiveness improvements, a bit like replacing a Hard Disk Drive (HDD) with a Solid State Drive (SSD) did.

I know it is hard to predict the future, but approximately what year do you think it would begin to be possible for consumers to buy a laptop / smartphone with 256GB or more of SCM plugged into a memory bus channel?
I would think 2022 / 2023 at the earliest (probably even later than that), as it probably requires some maturing of a memory-agnostic protocol like Gen-Z...
What do you think, Ian?
nandnandnand - Tuesday, April 2, 2019 - link
There have already been laptops with configurations like 4 GB RAM + 16 GB 3D XPoint. I don't know if the XPoint is on the memory bus, but even that small amount should be enough to hold the OS and some applications.

I don't know if describing XPoint as "slightly slower" than DRAM is accurate. That seems like Intel marketing speak. Certainly, there is room for another post-NAND technology to improve beyond XPoint and bridge the gap between memory and storage.
And while XPoint would be good for increasing the amount of "memory", the real hotness in the next 10 years will be stacking and later integrating memory directly into the CPU. This could allow CPU performance to increase by orders of magnitude even with small amounts (4 GB) of DRAM. Source: https://www.darpa.mil/attachments/3DSoCProposersDa...
In addition to DRAM, XPoint or another post-NAND technology could also be added for another level of cache.
Diogene7 - Tuesday, April 2, 2019 - link
@nandnandnand: The configuration that you are describing for the laptop is 16GB DDR RAM + 16GB NVMe Storage Class Memory (SCM) (Optane storage memory).

Although the NVMe protocol is faster than the SATA protocol, from what I read on different websites, it still adds much more latency than the same SCM plugged into the memory bus.
Launching a game application that takes 60s to load from an NVMe SSD could be 5 to 10 times faster without much software optimization, so it could take 6s to 12s...
Launching any current-size application / rebooting any Operating System (OS) would feel significantly shorter / near instantaneous, which in terms of overall customer experience would be amazing!!!
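For a sense of scale on the bus-versus-NVMe point: the figures below are order-of-magnitude numbers I'm assuming from public reviews, not from this article, but they show why the same media behaves so differently depending on how it is attached.

```python
# Ballpark access latencies (assumed order-of-magnitude figures, not
# measurements from this article) for the attachment options above.
latency_ns = {
    "DRAM (memory bus)":           100,
    "Optane DIMM (memory bus)":    350,
    "Optane SSD (NVMe)":        10_000,
    "NAND SSD (NVMe)":          80_000,
    "HDD":                  10_000_000,
}

for medium, ns in latency_ns.items():
    print(f"{medium:26s} ~{ns:>10,} ns")
# The protocol and controller overhead of NVMe dwarfs Optane's media
# latency, which is why bus-attached SCM is the interesting configuration.
```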
rahvin - Tuesday, April 2, 2019 - link
16GB is more than enough for a standard Linux install, but you need almost 30GB for an unmodified base Windows 10 install.
Diogene7 - Tuesday, April 2, 2019 - link
For a better overall customer experience, I think at least 64GB would be needed in the coming years to have the full Windows install plus some of the most-used applications stored in the RAMDisk.

If you also want to store some big applications (like games) and some video data (like 4K / 8K videos) for more responsiveness, I think you can quickly need 256GB of RAMDisk storage...
abufrejoval - Tuesday, April 2, 2019 - link
Well, since this is a cache, the difference may not be that big: much of the bloat in Windows is perhaps just stuff that rarely ever gets used and is far from the critical path during reboots or normal operation. BTW, optimizing boot is optimizing a failure; using persistent memory to support zero-power standby is the much more attractive usage IMHO.
Diogene7 - Wednesday, April 3, 2019 - link
@abufrejoval: I am interested both in using Persistent Memory (PM) to significantly lower the latency of software launches and most-used data accesses, AND in supporting zero-power standby, as the combination of both has the potential to significantly improve the consumer end-user experience.

In theory, with the development of <10ns very-low-latency persistent memory like Spin-Orbit Torque Magnetic RAM (SOT-MRAM) / Spin-Transfer Torque MRAM (STT-MRAM), of <1000ns low-latency Storage Class Memory (SCM), and of technologies like 3D System-on-Chip (3D-SoC), we may around 2025 begin to see chips combining compute logic, several gigabytes of MRAM cache replacing L2 / L3 SRAM cache, and several hundreds of gigabytes / a few terabytes of SCM replacing storage, with no need for DRAM. I do believe this could significantly lower latency and power consumption and, in the end, greatly improve the user experience, but one of the challenges is to do it in a way that is reasonably cost-competitive...
It is really the kind of innovation I would like to see happen as soon as possible in smartphones, but as of 2019, smartphone manufacturers are investing more in flexible displays (Huawei Mate X), which should cost ~2000€ at launch, roughly 1000€ more than Huawei's premium P30 Pro smartphone...
Personally, I would have no issue paying 1500€ (so a 500€ premium) for a much-lower-latency / zero-power-standby smartphone that would provide a much better consumer experience than what we have nowadays...
abufrejoval - Wednesday, April 3, 2019 - link
I like the notion of NV-RAM enabling energy-proportional compute on the server side. Terabytes of memristor NV-RAM really got me excited, especially because they promised stacking at linear cost and no issues with energy density. I'm not sure the other technologies will be able to deliver what the memristor failed to provide.

But I see the smartphone as the least of worries, or the least in need of improvement. Mobile DRAM, even when active, seems a minor energy draw compared to all the on-screen time batteries have to support, and it is quite unnoticeable in standby: with networks all shut off, I've seen my Androids last weeks without charging on suspended DRAM, while I never need them to last more than a day without charging.
The only reason they ever seem to commit application state to flash is that they run out of memory. I still manage that on really old devices like my Nexus 10, which combines a high-resolution display with just 1GB of OS-usable DRAM.
On my €500 phone with 8GB I have had real trouble just trying to reach the 50% mark. I only managed to fill a significant part by running an Ubuntu userland in a chroot() container with a full MATE desktop, running a major compile job with ccache via X2Go from my desktop.
Unless we're talking games or HPC, I actually have zero performance complaints about my smartphone, even if, as an 835, it's already two generations behind. Ad blockers and zero Facebook tolerance seem to keep the CPU-cycle suckers away.
My major gripe there is that I'd really like to use it as a desktop, and I need an external screen, an Ethernet port (security), and proper software support for desktop mode and dynamic DPI.
DRAM power consumption is a concern on really small IoT devices that need to last a decade on a small battery, or things like a pacemaker. That's where MRAM may have a real impact, especially because you can get bigger caches on the same process node than with SRAM, and you save transferring the cache contents for a logic that feels almost non-volatile itself.
And it's an issue on large servers in data centers, where I have seen DRAM consume more power than the CPUs. I wonder if some of the power-saving techniques invented for notebooks and mobile devices can be, or have been, applied to server RAM just yet: reactivating DRAM from standby may simply be too problematic for server latencies.
There, NV-RAM mostly allows getting rid of all those HDD latencies, getting more compute into your 100ms interactive response-time slot. And it allows putting those servers you cannot sell off as excess compute capacity, like AWS does, into a real standby, where they a) consume much less power than the 50% idle power that still seems normal today, and b) come back into full service within milliseconds, not minutes, allowing much more energy-proportional computing.
Servers may need to resume in a few milliseconds before we can allow them to go idle, because response times are why we buy them or they might be doing thousands of transactions in the blink of an eye. But my phone only needs to be as fast as I am and I don’t notice nanoseconds unless you accumulate millions of them.
If we can get NV-RAM at DRAM latency, density and cost (or better), I’ll be the last one to be sorry, but in the mean-time I’ll just be happy to get two out of three and better than spinning rust.
But even if terabytes of NV-RAM cost zero power to maintain state and fit into a mobile-phone form factor, they may be useless to keep around if you can't ever do a single full scan without running out of juice or patience. At 25MB/sec, a single terabyte would take eleven hours and far more energy just for the CPU than any current battery could provide.
That's why, as you hint, NV-RAM needs to do part of the compute itself to be useful.
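The eleven-hour figure above checks out if you assume exactly one terabyte (the comment says "terabytes", so the capacity is my assumption):

```python
# Sanity check of the full-scan estimate above, assuming exactly 1 TB.
capacity_bytes = 1e12   # 1 TB (assumed; the comment says "terabytes")
rate_bps = 25e6         # 25 MB/s, the figure quoted above

hours = capacity_bytes / rate_bps / 3600
print(f"{hours:.1f} hours")  # -> 11.1 hours for one full pass
```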
DigitalFreak - Tuesday, April 2, 2019 - link
The 56 core CPU is 400W! Holy shite
abufrejoval - Tuesday, April 2, 2019 - link
You have 200 Watt Skylake SKUs today; it's really just two of those under a single hood. Package density is all it is.
xrror - Tuesday, April 2, 2019 - link
"it even convinced the very skeptical Charlie from SemiAccurate" ....(checks calendar again to make sure it's still not the 1st)
wow. And I'm not being sarcastic.
abufrejoval - Tuesday, April 2, 2019 - link
I immediately checked to see how he admitted being wrong, but while his visible output is at an all-time low anyway, he tends to wait until he has proof. I'd like Intel to be able to deliver, and I'd like AMD and IBM to be able to deliver too, but all that NV-RAM has disappointed far too often, and it's hard to overcome that experience with little more than new slideware.
xrror - Thursday, April 4, 2019 - link
I didn't mean my comment as a slam against Charlie. Quite the opposite.

To convince Charlie, one of Optane's harshest critics - I'd really love to have a transcript of that Q&A just to read what Intel specifically said.
MFinn3333 - Tuesday, April 2, 2019 - link
The speed of Optane DIMMs is really kind of interesting. It ranges from 20% slower to 17x faster depending on the file system and workload.
https://arxiv.org/pdf/1903.05714.pdf
Samus - Tuesday, April 2, 2019 - link
400 WATTS!? LMFAO.
DannyH246 - Tuesday, April 2, 2019 - link
A 400W Intel power consumption rating implies a much higher peak power draw. A PSU with over 1200W of output is needed for each 9282 chip (an 8-socket system would need over 10kW of power supply - BEFORE peripherals!!!).

Haha - who would buy this crap?
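Taking the commenter's 1200W-per-package figure at face value (their estimate, not an Intel specification), the eight-socket arithmetic works out as:

```python
# Worst-case PSU budget using the 1200 W-per-package estimate above
# (the commenter's figure, not an official Intel number).
sockets = 8
psu_watts_per_package = 1200

total = sockets * psu_watts_per_package
print(f"{total} W")  # 9600 W, i.e. roughly 10 kW before any peripherals
```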
FunBunny2 - Wednesday, April 3, 2019 - link
"Haha - who would buy this crap?"any Big Corp seeking to dump IBM mainframe. cheaper at twice the price. now, going from COBOL/file apps to RDBMS/*nix isn't a walk in the park. yes, the death of Big Iron has been just a couple of years away for a couple (or three) decades. may be now is the time.
duploxxx - Wednesday, April 3, 2019 - link
Lollolol, Ian, for your comments regarding the CPU side. What Intel marketing weed have you been smoking before posting this optimistic stuff?

A glued-socket 56 cores consuming 400W. 12 DIMMs... no, it is 6+6, just like the dual-socket solution :) Only 80 PCIe lanes... OEM customer-specific, no retail pricing...
You are aware that by mid-2019 there will be a 64-core, 128-thread, single-socket 250W CPU to counter this shit? Drop-in compatible with already-existing servers? And at a way lower price than the top-bin Platinum 8000 SKU.
Whoever thinks Intel has a massive CPU improvement this year and next is totally crazy; whoever continues to order Intel CPUs for general server IT at the end of this year and next year is also totally crazy. The only reason you might think of buying an Intel part is a specific high-GHz, low-core-count SKU for specific software applications.
Icehawk - Wednesday, April 3, 2019 - link
In 23 years of being a sysadmin... I have worked with exactly ZERO AMD systems. No company I have worked for has been willing to use non-Intel chips regardless of cost, efficiency, etc. Intel has a hard lock on corporate IT infrastructure, somewhat like Apple still dominates the "creatives". Not saying it's right, but it is reality.
abufrejoval - Wednesday, April 3, 2019 - link
I've heard that said about the 360 and 370, and then came the PDP-11 and the VAX.

I've heard that said about the VAX, and then came the PC.
I've jumped over Suns, eroded z-Series and p-Series using Linux until they bled and I will make sure AMD will have its moments in our data centers while it makes sense.
During my 35 years in IT I have typically been right, but rarely convinced anyone in the first or even the second round. By the time upper management believes that the ideas I feed them are their own, they typically do what I want.
Once you manage to get your ego out of the way, you can enjoy that it's easy and pays the bills.
FunBunny2 - Thursday, April 4, 2019 - link
"eroded z-Series and p-Series using Linux "those machines have been running linux for some years. one might wonder how many are running *only* linux? at the user-facing OS level, of course.
DannyH246 - Wednesday, April 3, 2019 - link
Weird, as Opteron back in the day had like 20%+ market share. We were still using Opteron servers in our HPC environment in 2009.

Admittedly, AMD has had no worthwhile offering for the last 10 years - and in those 10 years of no competition, Intel has fleeced its customers.
Very shortly you will be able to get higher core counts, higher performance, lower power, and a much more secure processor for LESS money than anything Intel has to offer.
Couple this with how Intel has treated its customers, and there will be a huge uptake of EPYC processors in the data centre.
Intel advises turning off Hyper-Threading in these new Cascade Lake-AP chips due to security vulnerabilities!
Lololololololololollolololololololololololololololololololo
FunBunny2 - Wednesday, April 3, 2019 - link
Just out of curiosity... do these 'chips' implement more address lines? Or is Intel/AMD still at the same number (48? 52?) below the 64-bit address space?
Billy Tallis - Saturday, April 6, 2019 - link
/proc/cpuinfo reports 46 bits physical, 48 bits virtual on Cascade Lake-SP, same as for Skylake-SP. (Desktop Skylake reports 39 bits physical, 48 bits virtual.)
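If you want to check this on your own Linux box, the same figures are one read of /proc/cpuinfo away; a small Python equivalent (the output shown is an example, not from the article's test system):

```python
# Read the physical/virtual address widths quoted above on a Linux system.
with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("address sizes"):
            # e.g. "address sizes : 46 bits physical, 48 bits virtual"
            print(line.strip())
            break
```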
"128GB DDR4 with 512 GB Optane for 768 GB"math.