Ironchef3500 - Friday, May 17, 2019 - link
So they call Navi a GPU architecture, yet it is GCN based? How does that work? Would a new architecture not be based on itself?
T1beriu - Friday, May 17, 2019 - link
Vega and Polaris were also GCN architectures. They're codenames and don't imply anything.
Opencg - Friday, May 17, 2019 - link
To be fair, even Nvidia's architectures were designed to be GCN-like. There is a lot of complaining going around, but deviating from the blueprint without significant gains elsewhere will lose out on all the GCN and GCN-like optimised games.
mode_13h - Saturday, May 18, 2019 - link
There's "GCN-like" (by which I guess you mean SIMD + SMT) and there's GCN. The former is just an architectural approach (and I wouldn't really characterize Nvidia's architectures that way), while the latter includes a specific ISA and toolchain support (including in LLVM and GCC). The distinction has meaningful and very practical implications.Based on commits to the opensource Linux driver stack, I think we already know that Navi won't break continuity with GCN.
https://www.phoronix.com/scan.php?page=news_item&a...
However, I've seen some hints that post-Navi will be a departure from current GCN, but no firm confirmation.
From https://wccftech.com/amd-navi-radeon-rx-gpu-rumors... :
"Navi is meant to be the last GCN based GPU design by AMD"
We'll see...
Korguz - Saturday, May 18, 2019 - link
mode_13h, but can wccftech be trusted as reliable? Seems from a few posts on here in other threads/articles that that site is a rumor site... IMO, better to just wait and see how Navi performs.
Opencg - Sunday, May 19, 2019 - link
To give you an example since you clearly can't come up with one on your own: with GCN, shaders are grouped 64 per CU. Games are optimised to dispatch work in a way that fills groups of 64 shaders with work more often. That way the shaders don't sit idle while they wait for either the CU to queue up different work or for the other shaders to free up resources.
With Pascal, Nvidia similarly grouped shaders into groups of 64, vs. 128 on Maxwell and more on earlier generations.
Some games that switch between compute-based effects and the graphics pipeline had a much higher penalty before Pascal. Making Pascal more GCN-like helped Nvidia.
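A quick way to see why dispatch sizes tuned to groups of 64 matter, as described above: a rough sketch (the thread-group sizes are made-up examples, and real schedulers are far more involved than this).

```python
# Sketch: how many SIMD lanes go idle when a thread group's size is not a
# multiple of the hardware's grouping width (64 per the GCN/Pascal figure
# above, 128 for the Maxwell-era grouping mentioned).
import math

def idle_lanes(threads_per_group: int, wave_width: int) -> int:
    """Threads are packed into full waves; the last wave's unused lanes sit idle."""
    waves = math.ceil(threads_per_group / wave_width)
    return waves * wave_width - threads_per_group

for wave_width in (64, 128):
    for group in (64, 96, 192, 256):          # hypothetical dispatch sizes
        print(f"wave width {wave_width:3d}, thread group {group:3d}: "
              f"{idle_lanes(group, wave_width):3d} idle lanes")
```

A dispatch tuned to multiples of 64 wastes nothing on 64-wide grouping but can leave half a group idle on 128-wide grouping, which is the penalty being described.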
mode_13h - Tuesday, May 21, 2019 - link
So, just the SIMD-width is enough to qualify it as GCN-like? That's a silly thing to say!
tipoo - Friday, May 17, 2019 - link
Navi is the code name for the evolution of the architecture, i.e. like every Skylake derivative.
zmatt - Friday, May 17, 2019 - link
There is more than one "architecture" when talking about a chip. First you have the ISA, like x86 or ARM, which defines how the instruction set works. Then you have the actual chip architecture, commonly known as the microarch or uarch, which defines the chip itself. ISAs are pretty clear-cut things, but the microarch can be pretty loosely defined.
If we use Intel as an example, they generally organize architectures on three lines: the ISA, which is x86; the family architecture, which is currently just known as Core; and the microarch of the specific CPU generation, i.e. Haswell, Skylake, Icelake, etc.
GCN is interesting because AMD uses the term to define both the ISA and the family, so you can think of it as akin to x86 + Core. But it does not define the microarch, so Navi, Polaris and Vega are all different. They work similarly, but that's like saying an Ivy Bridge Core i3 works similarly to a Coffee Lake i9. It doesn't mean much in terms of real-world performance.
cerberusss - Friday, May 17, 2019 - link
Fun fact: "Navi" is the informal name for Gamma Cassiopeiae, a star that's 550 lightyears from the Sun. For some reason, I remember that it's actually the name "Ivan", spelled backwards. But I have no idea if this is true, or who this Ivan is :(lifeguard1986 - Friday, May 17, 2019 - link
"The star was used as an easily identifiable navigational reference point during space missions and American astronaut Virgil Ivan "Gus" Grissom nicknamed the star Navi after his own middle name spelled backwards." -- WikipediaN Zaljov - Friday, May 17, 2019 - link
"Ivan" is Gus Grissom's (he was one of the Mercury Seven and died in the Apollo 1 fire) actual middle name. The names of the stars were intended to be a joke by Gus Grissom:"Several years later, I met with one of the original Mercury astronauts, Wally Schirra, at a conference and, in the course of the conversation, the renamed stars were mentioned as a private joke made by Gus Grissom."
See: https://www.hq.nasa.gov/alsj/a15/a15.postland.html...
Since "Gamma Cassiopeiae" is a kind of pain in the ass-y name to spell and the three stars mentioned were used as navigational reference points, they renamed them for convenience at some point.
TristanSDX - Friday, May 17, 2019 - link
"BEST YEAR IN AMD HISTORY" - let they release these products first. Typically they can't avoid delays and disappointments.'shortly in Q3' - pretty typical behaviour, they release products alt last week(s) of declared quarter
Targon - Friday, May 17, 2019 - link
Or first week of the quarter. May 27th for Computex where AMD will be more specific about specs and launch dates/schedule. If it lands on July 1st, that's technically third quarter, and if AMD said second quarter, that would not be accurate.
schujj07 - Friday, May 17, 2019 - link
There have been some theories that AMD will want to launch things on 7/7 since everything will be 7nm.
HardwareDufus - Friday, May 17, 2019 - link
That didn't work out well for Boeing on 07/08/2007 (787)…
mode_13h - Saturday, May 18, 2019 - link
Except for UK, I'd imagine.
https://en.wikipedia.org/wiki/7_July_2005_London_b...
mode_13h - Saturday, May 18, 2019 - link
> Best Year in AMD History
Meanwhile, Trump and Xi Jinping have different ideas...
duploxxx - Friday, May 17, 2019 - link
Also due in Q3 are AMD’s 2nd Gen EPYC processors, known as Rome. These server CPUs are set to feature up to 64 cores powered by the company’s Zen 2 microarchitecture. It is worth keeping in mind that it takes server users some time to adopt new CPUs, and the launch of AMD’s 2nd Gen EPYC in Q3 does not necessarily mean that these processors will be used by a massive amount of designs straight away. Nonetheless, it is still important to release them rather sooner than later in a bid to ramp them up as soon as possible.
Dear writer, is this your opinion or did AMD tell you this? Zen was a new introduction back into the IT enterprise world, where adoption was slow and carefully evaluated. Zen 2 is the follow-up and the volume part to regain market share.
Valantar - Friday, May 17, 2019 - link
That doesn't change the fact that server development cycles are longer than those of consumer products nor that the vendors involved are generally more conservative.
rahvin - Sunday, May 19, 2019 - link
I wonder if he listened to the call because I distinctly remember them saying they'd have Threadripper in 2019 but gave the impression it would be after Epyc ramps.
sorten - Friday, May 17, 2019 - link
These products are going to be "formally introduced" in 10 days, in Q2, and sold starting early Q3.
HardwareDufus - Friday, May 17, 2019 - link
hmmm, I might be building my first desktop since 2012 soon (my i7-3770K has served me well and remained perfectly relevant...).
I imagine it will be the last desktop I will build in my IT career (I hope to hang up coding in 2025... not retire... just do something else w/ less stress).
I haven't bought a discrete video card in forever (Athlon days), as I've been on the Core architecture and just used integrated graphics for a long time. Maybe I'll treat myself to a monster video card, monster CPU, high-capacity M.2 drives, tons of DDR4 memory... this third quarter.
My i7-3770K will make a nice little server for development work.
peevee - Friday, May 17, 2019 - link
At this point, for platform longevity, I'd wait for a DDR5 and maybe PCIe 5-based platform. DDR4 is already very old, DDR5 chips @ 6400 MT/s have been in mass production for months, and PCIe 4 looks to be a short-lived standard.
mode_13h - Saturday, May 18, 2019 - link
How does PCIe 5.0's power consumption compare with 4.0? Early reports are that 4.0 runs hot. If 5.0 is even hotter, that might be a reason for 4.0 to stick around on the desktop, for a while.
Intel won't even have 4.0 on desktop chips for a couple more generations.
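For a rough sense of the bandwidth steps being discussed above (DDR4-3200 vs the DDR5-6400 parts mentioned, and PCIe 3.0 through 5.0), here is a back-of-the-envelope sketch; these are peak theoretical figures, not measured ones.

```python
# Peak theoretical bandwidth: one 64-bit DRAM channel and a x16 PCIe link.
def dram_channel_gbps(mt_per_s: int, bus_bytes: int = 8) -> float:
    """GB/s for one 64-bit (8-byte) channel at the given transfer rate."""
    return mt_per_s * bus_bytes / 1000

print(f"DDR4-3200, one channel: {dram_channel_gbps(3200):.1f} GB/s")
print(f"DDR5-6400, one channel: {dram_channel_gbps(6400):.1f} GB/s")

# PCIe is roughly 1 GB/s per lane per direction at gen 3,
# doubling each generation (after encoding overhead).
for gen, gbps_per_lane in (("3.0", 0.985), ("4.0", 1.969), ("5.0", 3.938)):
    print(f"PCIe {gen} x16: ~{gbps_per_lane * 16:.0f} GB/s per direction")
```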
peevee - Monday, May 20, 2019 - link
"Intel won't even have 4.0 on desktop chips for a couple more generations."What is your source for that? I am pretty sure as soon as they separate cores from I/O again, they will implement 4 or even directly 5. Maybe even in the next few months.
Qasar - Monday, May 20, 2019 - link
peevee... where is your source for that?? Or is this just your own speculation?
mode_13h - Tuesday, May 21, 2019 - link
I probably saw it in the latest batch of Intel roadmap leaks.
0ldman79 - Saturday, May 18, 2019 - link
I've got a Xeon Ivy Bridge clocked at 3770K speeds.
Just to make sure you know, it is nearly identical in performance to a Ryzen 1500x. On most games there's little difference even upgrading to an 8 core.
If you do other things as well, like I do, that 8 core is looking pretty good, but I try to hold out until I get 3 to 4x my performance before I upgrade. 12 and 16 cores are my target at the moment.
webdoctors - Friday, May 17, 2019 - link
"Strongest portfolio in AMD's history"?Isn't every company's product portfolio the strongest in its companies' history? Wouldn't a better comparison be to industry or a 3rd party rather than itself? It'd be like saying I've never been older than I am today LOL.
peevee - Friday, May 17, 2019 - link
"Isn't every company's product portfolio the strongest in its companies' history?"No, because strength is measured against competitive landscape.
Lord of the Bored - Saturday, May 18, 2019 - link
Honestly, I think it is hyperbolic overstatement. I'd argue the K8 era was probably AMD's strongest historical portfolio, because they were at a high point while the competition was at a low point.
On the other hand, I admit that they didn't have a very BROAD portfolio at the time.
Ryzen is, taken in isolation, very nice. Navi is, in isolation, probably very nice (I certainly hope it is, anyways). But they don't exist in a vacuum. They're up against very strong competition from Intel and nVidia, both technologically and marketing-wise.
peevee - Friday, May 17, 2019 - link
The main interest in Threadripper for me is 4 memory channels instead of 2. As far as I understand, it would not be hard to do with a chiplet design at all. Their AM4 socket probably does not support 4 channels, am I right?
What I want is a new socket design with SODIMM slots ON ITS 4 SIDES (oriented parallel to the board), each on its own channel, to minimize distance (and so latency and voltage required) and maximize bandwidth. MB design would be greatly simplified too, as they'd only need paths for power and PCIe on the bottom, possibly even in a dirt-cheap 1-layer design.
Lord of the Bored - Saturday, May 18, 2019 - link
That actually makes a lot of sense.
SO-DIMM in general makes a lot of sense, and I'm kinda confused why it hasn't made inroads in full-size systems. (I know, I know. Less room for DRAMs, requires more expensive, higher-capacity chips to reach the same capacities. Can still put two SO-DIMM slots in the same space as a single full DIMM.)
peevee - Monday, May 20, 2019 - link
For my specific proposal, SODIMMs are there because they are short enough to be put on all 4 sides of a socket without making the socket too large. The contacts should protrude into the socket from the sides, so the socket has contacts for the CPU on the top and slots for SODIMMs on the sides, directly connected to the CPU (best if the CPU package itself has memory contacts on its sides to shorten the paths further and simplify the socket itself).
High-throughput, low-latency memory might even make an integrated GPU decent. Or eliminate the need for an LLC.
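To put a rough number on the integrated-GPU point: four DDR4 channels land in the same neighborhood as an entry-level discrete card's memory bandwidth. A sketch, assuming DDR4-3200 and a typical 128-bit, 7 Gbps GDDR5 configuration for comparison:

```python
# Aggregate CPU memory bandwidth vs a typical entry-level discrete GPU (peak figures).
def ddr_channel_gbps(mt_per_s: int) -> float:
    return mt_per_s * 8 / 1000              # 64-bit channel = 8 bytes per transfer

dual_channel = 2 * ddr_channel_gbps(3200)   # common desktop setup
quad_channel = 4 * ddr_channel_gbps(3200)   # Threadripper-style setup
gddr5_128bit = 7000 * 16 / 1000             # 7 Gbps GDDR5 on a 128-bit bus

print(f"dual-channel DDR4-3200 : {dual_channel:.1f} GB/s")   # ~51 GB/s
print(f"quad-channel DDR4-3200 : {quad_channel:.1f} GB/s")   # ~102 GB/s
print(f"128-bit 7 Gbps GDDR5   : {gddr5_128bit:.1f} GB/s")   # ~112 GB/s
```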
mode_13h - Saturday, May 18, 2019 - link
For GPUs, it makes sense to have lots of channels. For CPUs, I don't know how much you'll really gain by cutting your channel width below 128 bits. Remember that most transactions are cacheline-sized (typically 64 bytes).
Anyway, I have a suspicion they'll just program the Rome chiplets to route memory to the 4 enabled channels, on TR4 boards. Then, existing TR users can simply upgrade to a Rome CPU.
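For reference, the arithmetic behind "most transactions are cacheline-sized": a standard DDR4 burst on one 64-bit channel moves exactly one 64-byte line. A small sketch, assuming the standard burst length of 8:

```python
# One DDR4 burst (length 8) on a 64-bit channel moves exactly one cache line.
CHANNEL_WIDTH_BYTES = 8     # 64-bit channel
BURST_LENGTH = 8            # DDR4 burst length
CACHE_LINE_BYTES = 64

bytes_per_burst = CHANNEL_WIDTH_BYTES * BURST_LENGTH
print(f"bytes per burst: {bytes_per_burst}")                              # 64
print(f"bursts per cache line: {CACHE_LINE_BYTES // bytes_per_burst}")    # 1
```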
peevee - Monday, May 20, 2019 - link
"I don't know how much you'll really gain by cutting your channel width below 128 bits."I think it has already been cut a few CPU generations ago.
" Remember that most transactions are cacheline-sized (typically 64 bytes)."
Multiple cores typically require memory from different cache lines. And data required immediately is often 64 bits or less. So multiple smaller channels leave fewer cores idle waiting for data. Efficiency is not reduced as each channel can continue to read its own cache line if necessary.
mode_13h - Tuesday, May 21, 2019 - link
> I think it has already been cut a few CPU generations ago.
Source?
> So multiple smaller channels leave fewer cores idle waiting for data.
It just means lower data rates (i.e. you have to wait longer for your data).
> Efficiency is not reduced as each channel can continue to read its own cache line if necessary.
What? I don't understand. Cache lines are filled by reading memory via a channel. The way you said it makes no sense to me.
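Both points in this exchange can be put into simple numbers with a toy model; it ignores command/row overhead and contention, so it's only a sketch. Halving the channel width doubles the time to move any single cache line, while independent channels let more lines be in flight at once.

```python
# Toy model: time to complete N independent cache-line fills over channels of
# different widths. Per-line transfer time scales inversely with channel width;
# independent channels work on different lines in parallel. Overheads ignored.
import math

def total_time_ns(lines: int, channels: int, width_bits: int, rate_mtps: int = 3200) -> float:
    bytes_per_transfer = width_bits // 8
    transfers_per_line = 64 / bytes_per_transfer
    ns_per_line = transfers_per_line / (rate_mtps / 1000)    # MT/s -> transfers per ns
    return math.ceil(lines / channels) * ns_per_line

for channels, width in ((1, 128), (2, 64), (4, 32)):
    print(f"{channels} x {width:3d}-bit: one line {total_time_ns(1, channels, width):5.2f} ns, "
          f"eight lines {total_time_ns(8, channels, width):5.2f} ns")
```

In this model, narrower channels make a single fill slower (mode_13h's point) while the aggregate time for many independent fills stays the same (peevee's point).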
Pinn - Friday, May 17, 2019 - link
Not even trying against nVidia.
Korguz - Friday, May 17, 2019 - link
Pinn, says who?? Do you know how Navi will perform? Do you have links that show this? Seems most of it... is still rumors and speculation.
Lord of the Bored - Saturday, May 18, 2019 - link
There's a lot of games that say nVidia is the way it is meant to be played. AMD's gonna have to give us a REALLY good card to make up for the fact that games aren't playing right.
(I am being 100% sarcastic here. Just to clarify.)
Maxiking - Sunday, May 19, 2019 - link
And there are lots of AMD Crashing Evolved titles which tend to run slower on Nvidia GPUs, and does Nvidia cry about it in the media like AMD constantly does? No. So yeah, grow up. AMD has been doing the same thing for years, especially since DX11 was out, and no one gives a **** because that poor cheating AMD is an underdog.
Maxiking - Sunday, May 19, 2019 - link
God, give me the proper mobile site, can't see a thing, lots of/tend/gives.
Korguz - Sunday, May 19, 2019 - link
Maxiking, would you happen to have links for this?? Ever think that, like Intel, maybe Nvidia did the same with the game studios that Intel did with the hardware makers??
I've seen very little of AMD complaining that nVidia has a much more robust marketing team, which is what the whole "way it's meant to be" campaign boiled down to. Not any sort of in-game difference, just nVidia paying publishers to include a commercial that played every time the game was started.
I am sorry I hurt your feelings by making fun of nVidia's marketing campaign.
Maxiking - Sunday, May 19, 2019 - link
We already know Navi is slower than Radeon VII, confirmed by the AMD CEO.
Korguz - Monday, May 20, 2019 - link
I don't remember Lisa Su making that statement anywhere; would you remember where you read that?
Xyler94 - Wednesday, May 22, 2019 - link
Navi is also meant as a mainstream card, not an enthusiast card like the R7 is. Navi is meant to be in the sub-$300 range. Don't know why you expected it to be better than the Radeon VII.
It'd be like calling the 1660 a failure because it's not at RTX 2080 levels or more...
mode_13h - Saturday, May 18, 2019 - link
At the high end, probably not. But AMD can do a lot of damage at entry-level and mid-range.
And maybe with a multi-chiplet GPU, they can even compete in the high-end.
I think the article reads too much into AMD's plans to keep Radeon VII - that's probably just to cater to people who need the fp64, the memory capacity, or the memory bandwidth. It doesn't mean that Navi won't (eventually) surpass it, in gaming.
Sychonut - Saturday, May 18, 2019 - link
Cool. Looking forward to Intel's 14++++.
Maxiking - Sunday, May 19, 2019 - link
Are you mocking the fact that AMD couldn't significantly outperform the 14nm++ i9-9900K in their public demo whilst demonstrating their new uber 7nm products, and that a close tie in an AMD-favoured benchmark was the best fakeout they could set up to hype the masses? If this is the best they could show, imagine how slow it will be in real-life usage. But I am sure their CPUs will be sold out thanks to the people visiting Fridays for Future kekekekek
Korguz - Sunday, May 19, 2019 - link
Maxiking, he is probably mocking Intel and their delays with 10nm; after all, they have been promising 10nm for what... 3 or 4 years now.
They could be comparing engineering samples vs retail release samples... so performance could change.
Sounds like you are just hurt because Intel isn't the tech leader it once was...
Smell This - Sunday, May 19, 2019 - link
Is there really such a thing as an *AMD-flavored benchmark*? I'm thinkin' poor ol' AMD has spent less on R&D over the years than the tens of billions of dollars Intel *subsidized* in marketing the Bay Trail Fails.
Maybe Jimbo Keller can provide Chipzilly a *Thermal Velocity Boost* in the pants. Huh?
Rοb - Sunday, May 19, 2019 - link
Huh?! - "It is worth keeping in mind that it takes server users some time to adopt new CPUs, and the launch of AMD’s 2nd Gen EPYC in Q3 does not necessarily mean that these processors will be used by a massive amount of designs straight away. Nonetheless, it is still important to release them rather sooner than later in a bid to ramp them up as soon as possible."
-- Hawk: https://www.hlrs.de/en/whats-new/news/detail-view/... will have 640K cores and installation starts end of May: https://kb.hlrs.de/platforms/index.php/Hawk_instal... and the BullSequana XH2000 Supercomputer: https://atos.net/en/products/high-performance-comp... needs 200K cores --- 1M/64 = 15,625 CPUs.
-- TSMC 6nm is a 'die shrink': https://www.tsmc.com/tsmcdotcom/PRListingNewsActio... offering 18% greater density, so they could easily upgrade the 64 core CPUs to 72 cores.
The 7nm ones appear to be partially sold out (thus, no Threadripper); we hate waiting any longer, but at the same time it makes sense for the 'general release' to have more cores using less power. And more PCIe 4 lanes wouldn't hurt, since 5*32 = 160.
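The socket math behind those core counts, taking the figures in the links above at face value (a quick sketch):

```python
# CPUs needed to reach a given core count with 64-core EPYC "Rome" parts.
def sockets_needed(total_cores: int, cores_per_cpu: int = 64) -> int:
    return -(-total_cores // cores_per_cpu)    # ceiling division

for name, cores in (("Hawk (640K cores)", 640_000),
                    ("BullSequana XH2000 order (200K cores)", 200_000),
                    ("1M cores", 1_000_000)):
    print(f"{name}: {sockets_needed(cores):,} CPUs")
```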