The Benjamins - Monday, November 12, 2018 - link
My key takeaways are:
IC: The IO die as shown in the presentation looked very symmetrical, almost modular in itself. Does that mean it can be cut into smaller versions?
MP: No details at this time.
Makes me think that TR 3000 will be a cut-down version with 4 or 6 channels of RAM and 32 or 48 cores (max).
IC: With the FP units now capable of doing 256-bit on their own, is there a frequency drop when 256-bit code is run, similar to when Intel runs AVX2?
MP: No, we don’t anticipate any frequency decrease. We leveraged 7nm. One of the things that 7nm enables for us is scale in terms of cores and FP execution. It is a true doubling because we didn’t only double the pipeline width, but we also doubled the load-store and the data pipe into it.
This sounds great; it looks like a strong move forward.
The manufacturing confirmation is nice to know.
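To put that "true doubling" into rough numbers, here is a back-of-the-envelope sketch in C. The two 256-bit FMA pipes per core and the clock are assumptions of mine, not confirmed figures, so treat the output as purely illustrative.

#include <stdio.h>

int main(void) {
    /* Illustrative assumptions only -- not confirmed AMD specs. */
    double ghz       = 2.0;       /* placeholder base clock, GHz        */
    int    cores     = 64;        /* cores per socket (Rome top SKU)    */
    int    fma_pipes = 2;         /* assumed 256-bit FMA pipes per core */
    int    dp_lanes  = 256 / 64;  /* double-precision lanes per pipe    */
    int    fma_flops = 2;         /* one FMA counts as multiply + add   */

    double gflops = ghz * cores * fma_pipes * dp_lanes * fma_flops;
    printf("Peak DP throughput: %.0f GFLOPS per socket\n", gflops);
    return 0;
}

Running the same formula with 128-bit pipes gives half the per-core peak, which is roughly the Zen 1 starting point the doubling is measured against.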
FirstStrike - Monday, November 12, 2018 - link
I suppose we got confirmation on the FP pipelines? Because AMD only vaguely said double load/store/throughput but not FPU width at the announcement.
wumpus - Sunday, November 18, 2018 - link
I think they've left it open that they could have separate multiply and add units and not two multiply/add units. Either way, it should close a lot of the gap on benchmarks previously favorable to Intel.
I still think that *most* of what works on AVX would work better on a GPU, although any double-precision work is likely to require a specialized GPU with non-crippled double-precision throughput.
kd_ - Monday, November 12, 2018 - link
Excellent interview as usual. Also, your efforts to get MP to give more details on the IO are appreciated, but it was always going to be a long shot to get more info on that :)
I have one question: are the two 256-bit execution units confirmed? You mentioned that in one of your questions, but it wasn't verified. Any chance you can send a follow-up question on whether they will now follow Intel's approach of 2x multiply/add 256-bit vector execution units, or whether their 256-bit implementation will be analogous to their current 128-bit design with 2x add + 2x multiply execution units?
Thanks.
GreenReaper - Monday, November 12, 2018 - link
"At the full launch we’ll share with you exact configurations and what customers want to deploy around that." -- to me this reads as "it depends". For AVX-512 it's also a matter of "which ones do you want to implement?" Intel used this for market segmentation; AMD could do much the same. Possibly without so much cost, if they're not using 512-byte registers, but also not so much benefit.kd_ - Monday, November 12, 2018 - link
I'm mostly interested in their 256-bit execution units as they are far more relevant, but the exact implementation will also determine the expected peak AVX-512 performance. I would guess they followed the same approach as previously, but the mention of 2 execution units suggests they followed Intel's approach of fused multiply/add units; otherwise it wouldn't make sense to have 2.
Rudde - Monday, November 12, 2018 - link
From WikiChip: "The FP has a single pipe for 128-bit load operations. In fact, the entire FP side is optimized for 128-bit operations. Zen supports all the latest instructions such as SSE and AVX1/2. The way 256-bit AVX was designed was so that they can be carried out as two independent 128-bit operations. Zen takes advantage of that by operating on those instructions as two operations; i.e., Zen splits up 256-bit operations into two µOPs so they are effectively half the throughput of their 128-bit operation counterparts. Likewise, stores are also done on 128-bit chunks, making 256-bit stores have an effective throughput of one store every two cycles. The pipes are fairly well balanced, therefore most operations will have at least two pipes to be scheduled on, retaining the throughput of at least one such instruction each cycle. As this implies, 256-bit operations will use up twice the resources to complete (i.e., 2x register, scheduler, and ports). This is a compromise AMD has taken which helps conserve die space and power. By contrast, Intel's competing design, Skylake, does have dedicated 256-bit circuitry. It's also worth noting that Intel's contemporary server-class models have extended this further to incorporate dedicated 512-bit circuitry supporting AVX-512, with the highest performance models having a whole second dedicated AVX-512 unit."
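To make the split concrete, here is a minimal AVX sketch (illustrative only). The source code is identical on both designs; the difference described above is purely in how the hardware issues it.

#include <immintrin.h>

/* y[i] = a*x[i] + y[i], 8 floats per iteration using 256-bit AVX.
   Per the description above, Zen 1 cracks each 256-bit instruction
   into two 128-bit uops; a native 256-bit datapath issues it as one. */
void saxpy_avx(float a, const float *x, float *y, int n) {
    __m256 va = _mm256_set1_ps(a);
    for (int i = 0; i + 8 <= n; i += 8) {          /* tail loop omitted */
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
        vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy);
        _mm256_storeu_ps(y + i, vy);
    }
}

Compile with -mavx; whether each vector instruction costs one µOP or two is entirely down to the cracking described in the quote.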
kd_ - Monday, November 12, 2018 - link
Yes. That is what happens with Zen 1. I am asking about Zen 2.
jospoortvliet - Tuesday, November 13, 2018 - link
I guess the implication is they might have done the same thing, but then for 256-bit to 512-bit. Or even double up on 128-bit units, but I am not sure if they can split up 512-bit operations into 4x 128-bit... is that even feasible?
kd_ - Tuesday, November 13, 2018 - link
No. And they don't with Zen 1. The software or compiler simply takes an alternative and less efficient path. But with multiple 256-bit execution units they could support it directly.
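A rough sketch of what that alternative path usually looks like in practice: runtime dispatch that only takes the AVX-512 kernel when the CPU reports support, and otherwise does the same work as two 256-bit halves. The function names are made up for illustration; the CPU check is the standard GCC/Clang builtin.

#include <immintrin.h>

/* Add 16 floats: one 512-bit op where supported, else two 256-bit ops. */
__attribute__((target("avx512f")))
static void add16_avx512(const float *a, const float *b, float *out) {
    _mm512_storeu_ps(out, _mm512_add_ps(_mm512_loadu_ps(a),
                                        _mm512_loadu_ps(b)));
}

__attribute__((target("avx")))
static void add16_avx(const float *a, const float *b, float *out) {
    _mm256_storeu_ps(out,     _mm256_add_ps(_mm256_loadu_ps(a),
                                            _mm256_loadu_ps(b)));
    _mm256_storeu_ps(out + 8, _mm256_add_ps(_mm256_loadu_ps(a + 8),
                                            _mm256_loadu_ps(b + 8)));
}

void add16(const float *a, const float *b, float *out) {
    if (__builtin_cpu_supports("avx512f"))   /* false on Zen 1 */
        add16_avx512(a, b, out);
    else
        add16_avx(a, b, out);                /* the fallback path */
}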
KAlmquist - Monday, November 12, 2018 - link
Papermaster was at least hinting that all EPYC processors would have all cores enabled, and reduced-core variants would be implemented by including fewer chips in the package. That makes sense because the consumer demand for six- and four-core processors is probably more than enough to use all of the defective chips where the defects are confined to a small number of cores.
jospoortvliet - Tuesday, November 13, 2018 - link
I understood what he said the same way, but if that is so, they will go the modular route in the consumer market too, which is far from certain. My understanding is that the performance and power hit of having a chiplet communicate with an I/O module would be very significant. It might make sense for a new Threadripper, but I don't know if volume is enough for those to take all the partially disabled chiplets...
All in all, I am extremely curious to learn about the upcoming new consumer Zen...
halfbean01 - Monday, November 12, 2018 - link
@Ian Cutress Great read dude :-) thanks.
Just one question: why fuse off portions of the IO die? Could the required sections be disabled at the foundry, or are they fused to stop re-enabling of functions/cores?
Even if I could not afford a proper cooler (let alone use one effectively in my home PC :-) ), I STILL want a Rome CPU!
Fool; Old fool.
anexanhume - Monday, November 12, 2018 - link
In this context, "cluster" is like a Zeppelin entity, or a CCX?
Rudde - Monday, November 12, 2018 - link
“IC: Under the situation where we now have a uniform main memory architecture, for on-chip compared to chip-to-chip there is still a near and a far latency…
MP: I know exactly where you’re going and as always with AnandTech it’s the right question! I can honestly say that we’ll share this info with the full product announcement.”
To (not) answer your question:
"It's the right question!"
SaturnusDK - Monday, November 12, 2018 - link
Zeppelin entity would be my guess. It doesn't look like they made any changes to what constitutes a CCX in any respect.
iwod - Monday, November 12, 2018 - link
Finally a confirmation of 14nm from GF. Now we know where all those WSA wafers are going. I really can't wait for EPYC 2 to come out. I hope all the cloud vendors will be on board.
ilt24 - Monday, November 12, 2018 - link
@iwod
I doubt the wafer volume for the Rome I/O die will be particularly high. Unless there is a similar version for the consumer die, I imagine AMD will be way short of the wafer volume the current WSA requires for 2019 and beyond... but this really doesn't matter, as AMD and GlobalFoundries are currently working on an update to the WSA to account for GF deciding to no longer pursue leading-edge process tech.
Alexvrb - Monday, November 12, 2018 - link
I would bet all their Zen 2-based designs use 14nm I/O. They probably get additional power savings over the integrated IF of today, since there are no high-speed CPU dies and they can tweak for power accordingly. Now, for future generations... who knows. A tweaked 14nm might be good enough. But if it's not, they can redesign the next-gen I/O block for Samsung or TSMC nodes as needed for best pricing or capacity.
The design is incredibly flexible (attach various chiplets for CPUs, APUs, custom designs), and it allows you to crank out leading-edge wafers chock FULL of chiplets while using more mature processes for the I/O die. Brilliant.
Alexvrb - Monday, November 12, 2018 - link
12FDX might even be an option for the I/O block in the future.
iwod - Tuesday, November 13, 2018 - link
Remember, Ryzen and Threadripper are still being produced by GF, which fills much of the minimum orders the WSA requires. Still waiting for confirmation whether Polaris 30 is on TSMC as well (idiots on the Wxxtech site are laughing about this idea). GF has been at 100% capacity since the launch of Zen.
>12FDX might even be an option for the I/O block in the future.
I also believe that is the reason the I/O block remains at GF; 12FDX in theory should be well suited for this. And judging from the I/O die size, it will be much cheaper than going with a leading-edge node.
Targon - Tuesday, November 13, 2018 - link
GlobalFoundries giving up on 7nm after that agreement was made could also eliminate any existing requirements that GF may have on AMD orders.
Freeb!rd - Friday, November 16, 2018 - link
Yes, since as part of the 6th amendment of the WSA there was a section (8) concerning both working on and cooperating on 7nm, which probably was going to be a significant portion of the WSA beginning in 2019 and forward. When GF ended their 7nm roadmap, that pretty much killed the current WSA.
"8. 7NM OPERATIONAL PLAN
a. The Parties shall work in a spirit of partnership and good faith to focus resources to assist FoundryCo to develop its 7nm process technology in accordance with its time schedule. AMD shall provide such cooperation as reasonably required to enable FoundryCo to manufacture 7nm products for AMD consistent with AMD’s time schedule for 7nm Products. The details of such cooperation will be mutually agreed and set forth in an operational plan, which plan shall be based on the elements further described in Exhibit A (the “7nm Operational Plan”). The Parties acknowledge that certain elements of the 7nm Operational Plan will be updated from time to time per the Parties’ mutual agreement in order to fulfill the objectives set forth in this Section 8 until the 7nm Operational Plan is complete."
PaoDeTech - Monday, November 12, 2018 - link
Nice interview, Ian. One more question: RAM needs will increase in lockstep with core count. Is AMD considering new RAM types, akin to Intel's announced 3D XPoint (Optane) memory? Supposedly this kind of RAM would deliver lower cost and higher density, and support non-volatile use cases.
Rudde - Monday, November 12, 2018 - link
“That ecosystem will have others over time [snip] who can predict what new memory configuration might provide an advantage. Those companies that are agile and adaptable will win and this is the crux of AMD’s strategy.”
Optane is Intel-proprietary and they seem reluctant to share it. When Intel was asked (at SC18 this week) about Optane compatibility with competitors' (read: AMD) processors, the answer was: "currently only those using Xeons will work with the persistent memory."
Alexvrb - Monday, November 12, 2018 - link
They're not the only player, though. There's MRAM to consider.
eva02langley - Tuesday, November 13, 2018 - link
Micron is buying back their share in 3D XPoint and is going to render the tech available for everyone. As of now, Optane is not really a game changer; however, multi-core CPUs are. Supercomputers will just benefit from it.
Freeb!rd - Friday, November 16, 2018 - link
To be more exact: Micron is only buying out Intel's "share" of their joint venture that produces 3D XPoint...
https://www.anandtech.com/show/13499/what-next-for...
I would imagine both have the rights to produce 3D XPoint memory and provide products based on it. Development of the tech going forward will be done separately, however.
Ian Cutress - Monday, November 12, 2018 - link
They'll be supporting the new 16Gb DDR4 ICs coming out for up to 4TB per socket.
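For anyone wondering how 16Gb ICs translate into 4TB in a socket, a rough sketch of the arithmetic; the channel and DIMM-per-channel counts are the usual EPYC layout, and the 256GB module size is my assumption rather than a quoted spec.

#include <stdio.h>

int main(void) {
    /* Rough arithmetic; the module size is an assumption. */
    int channels       = 8;    /* memory channels per socket       */
    int dimms_per_chan = 2;    /* DIMMs per channel                 */
    int gb_per_dimm    = 256;  /* large module built from 16Gb ICs  */

    int total_gb = channels * dimms_per_chan * gb_per_dimm;
    printf("%d GB (%d TB) per socket\n", total_gb, total_gb / 1024);
    return 0;
}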
Targon - Tuesday, November 13, 2018 - link
Since socket AM4 is linked to DDR4 memory, I expect that in 2020 or 2021 we will see a new socket that may support DDR5 (not to be confused with GDDR5).
Freeb!rd - Friday, November 16, 2018 - link
The benefits of 3D XPoint as a memory DIMM are very limited, at least in its current state. That may change in future versions, but that remains to be seen.
Carmen00 - Monday, November 12, 2018 - link
This was amazing, Ian. Very insightful, probing and relevant questions on many different fronts while not being pushy at all! You have a real knack for this. Loved this interview, loved the interactions - "as always with AnandTech it’s the right question!", I couldn't have said it better myself. Kudos to you and Mark.
A quick question on the transcription. In one of Mark's answers, it's transcribed as "adapt to the workloads as they merge". Did he say "merge", or "emerge"? Both are plausible. Could you confirm that your transcription is correct?
eva02langley - Tuesday, November 13, 2018 - link
What I get from it is that Infinity Fabric is, maybe, what will really push AMD forward in the future. I am hoping they have IP and patents in place. If Intel had to pay license fees for their EMIB design, it would be hilarious.
Kevin G - Tuesday, November 13, 2018 - link
Mark dances around an obvious upgrade path: to bring Zen 3 to market, AMD might be able to leverage the existing IO hub chip. Ditto for Zen 4. He is correct that DDR5/PCIe 5.0 may require a new socket, but the chiplets are now effectively independent of that restriction (only the on-package bus matters here!). This strategy permits asynchronous development of the CPU side and the IO side now.
oleyska - Thursday, November 15, 2018 - link
Hey, you want Zen 3 on DDR4? Use the last-gen I/O chip. Zen 3 on DDR5? Oh, here is the next-gen I/O chip.
The 7nm+ chips are the same: one production line to rule all the CPUs.
peevee - Tuesday, November 13, 2018 - link
"So one might assume that an intercept might occur with Zen 4?"What does the question mean? "Intercept"?
GreenReaper - Wednesday, November 14, 2018 - link
It's a weird way of asking "So Zen 4 might be the point where you need a new socket?"
You could view it in the context of the subsequent graph, where the intercept is the point where one line meets the other. Although it's not clear that what they're talking about there (promoted feature size) has anything to do with when they move to a new socket with DDR5 and PCIe 5.0 support.
yagma - Tuesday, November 13, 2018 - link
Have you patched RYZENFALL, FALLOUT, CHIMERA, and MASTERKEY in these new chipsets?
It's been 8 months since Mark Papermaster, Senior Vice President and Chief Technology Officer at AMD, acknowledged the hardware-level RYZENFALL, FALLOUT, CHIMERA, and MASTERKEY vulnerabilities, claiming "AMD will provide additional updates on both our analysis of these issues and the related mitigation plans in the coming weeks." Well, it's been 8 months and they opted for a total blackout; 8 months, and the greatest chipset-level vulnerabilities in AMD's history are apparently spreading freely and openly around the globe. It appears AMD doesn't have any concern for their customers' safety, be it corporate, business, or power-users.
Intel made their vulnerabilities front-page news and initiated highly publicized steps to mitigate them. Is AMD working on recalling these chipsets to have them refurbed? What are you doing to educate people and fix this problem?
Is it possible to disable the PSP? Snowden asked AMD to open-source the PSP for this very reason.
Fritzkier - Tuesday, November 13, 2018 - link
I've seen this post on Reddit and some other sites too.
Don't copy-paste your comments on EVERY AMD-related thread... At least make a new one or something instead of copy-pasting...
1. The vulnerabilities aren't significant (exploiting them needs admin rights).
2. They already sent the patches to their partners (see the Tom's Hardware article from May 2nd).
3. You're a shill; stop copy-pasting the exact same post. (At least write your own paragraph, geez.)
quadibloc - Thursday, November 22, 2018 - link
Indeed, AMD can't get complacent just because Intel was having problems with 10nm. For one thing, Intel is now claiming it has overcome those problems. For another, Intel has already hired TSMC to make chips for them, so they can get 7nm chips from TSMC just as easily as AMD if worst comes to worst.