68 Comments
extide - Friday, August 14, 2020 - link
Gigabit over Copper uses PAM5, even. You don't need to look into exotic networking standards to see basic modulation schemes such as PAM being used. One day we will be using QAM across motherboard traces, now that will be nuts.
Santoval - Friday, August 14, 2020 - link
PAM-5 is not much different from PAM-4. PAM-5 is PAM-4 + 1 bit for FEC (Forward Error Correction). PAM-4 might still require FEC, depending on the signal quality, distance, SNR levels etc., but it is implemented differently.
By the way, there cannot be a digital PAM with an odd number: by definition the digital PAM values are always a power of 2. So the only possible values are 4, 8, 16*, 32, etc. In other words the 5th bit of PAM-5 (which was, by the way, first employed in the now deprecated 100BASE‑T2, not gigabit Ethernet) is not really counted. Analogue PAM, in contrast, can have an infinite number of amplitude values, both even and odd.
*PAM-16 is used in 2.5GBASE-T ethernet and above, though there are variants with a different signaling scheme.
jim bone - Friday, August 14, 2020 - link
PAM5 is not PAM4 + 1 bit. It's a signaling scheme with one more symbol level than PAM4. That extra level gives an additional ~0.32 bits per symbol; not 1 bit.
nevcairiel - Saturday, August 15, 2020 - link
Incidentally Ethernet actually uses PAM5 with just 2 bits of payload and the additional data for error correction.
willis936 - Friday, August 14, 2020 - link
All PAM4 specs include the use of FEC. If you’ve seen the eyes I’ve seen you’d think it was a good thing too.
qap - Saturday, August 15, 2020 - link
There sure can be digital odd PAM encoding. You just can't think of each symbol as a separate sequence of bits (2 bits in PAM4). For example, using 4 PAM-5 symbols you can encode 9 bits (and use the unused combinations for some level of error detection).
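A minimal sketch of that packing in Python (my own illustration, not from any spec): four PAM-5 symbols give 5^4 = 625 combinations, enough for 9 bits (512 values), and log2(5) ≈ 2.32 also matches the ~0.32-bits-per-symbol figure mentioned above.

import math

# One extra level over PAM4: log2(5) ≈ 2.32 bits/symbol vs. 2.0 for PAM4.
print(math.log2(5) - math.log2(4))  # ~0.32

def encode_pam5(value: int) -> list[int]:
    """Pack a 9-bit value (0..511) into four base-5 digits (PAM-5 symbols)."""
    assert 0 <= value < 512
    symbols = []
    for _ in range(4):
        symbols.append(value % 5)
        value //= 5
    return symbols  # least-significant symbol first

def decode_pam5(symbols: list[int]) -> int:
    """Recover the value; combinations decoding to >= 512 never occur in valid
    data, so receiving one can be flagged as a transmission error."""
    value = 0
    for s in reversed(symbols):
        value = value * 5 + s
    return value

assert all(decode_pam5(encode_pam5(v)) == v for v in range(512))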
albertmamama - Saturday, August 15, 2020 - link
We humans can detect a neutron star collision from millions of light years away... now that's exotic.
ArcadeEngineer - Friday, August 14, 2020 - link
12GB for the 3090 seems a bit stingy given the Titan X from 2016 already had that much. Yes, strictly not a gaming card, but I'd bet good money this thing will have a higher price tag.
whatthe123 - Friday, August 14, 2020 - link
They probably don't want to do it since they want to charge through the nose for the Titan. Not much value in the Titan compared to their mainstream GPUs other than more VRAM.
kpb321 - Friday, August 14, 2020 - link
It's more memory than any other card they are comparing it to. Granted, only 1GB more than either of the Ti cards, but it's still the most memory. Additionally, memory size on GPUs has always been tied to the bus width. 16GB of memory may have been nice, but there's also no good way to do that on a 384-bit bus and retain the full memory bandwidth across the entire memory space. 24GB was probably out of the question for a variety of reasons: supply is going to be limited on the new memory and costs are going to be high. So, much like Vega was limited by its memory design, I expect 12GB was the only practical answer here.
imaheadcase - Friday, August 14, 2020 - link
That is the low end one, they will come in higher memory configs.
DigitalFreak - Friday, August 14, 2020 - link
Apparently that has changed since the video was made. New PCB photos show that the 3090 will have 12 GDDR6X chips on the front and 12 on the back, for a total of 24GB. I guess Nvidia expects the upcoming Radeon "high end" card to have 16GB and didn't want to have their top end card have less than that.
Santoval - Friday, August 14, 2020 - link
I don't get how or why they would do that, for a number of reasons. First, they would need to stamp a Titan-level price on it, at least. Second, double the number of chips (die stacks) would mean double the number of bits, i.e. a bus width twice as wide. It would be 768-bit wide rather than 384-bit.
Assuming 21 Gb/s per pin, the card would have a staggering 2+ TB/sec of bandwidth. Even if the GPU memory controllers supported that (by the way, double the memory controllers in the GPU would be required as well...) it would be unheard of in a consumer card. That's precisely 3 times (3 x 672 GB/s = 2016 GB/s) the memory bandwidth of the Titan RTX Turing card, which is insane.
In contrast if they used 16 Gb (2 GB) chips they could double the memory without doubling the memory bandwidth or requiring twice as many memory controllers. This is what they did with the Titan RTX. Thus I think that photo with the 12 + 12 chips is a fake rendering.
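As a quick sanity check of the arithmetic in the comment above, here is a sketch assuming peak bandwidth is simply bus width times per-pin rate (an approximation, not a spec derivation):

def bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: one data pin per bus bit, 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

print(bandwidth_gbs(384, 21))  # 1008.0 GB/s -> 384-bit bus at 21 Gb/s per pin
print(bandwidth_gbs(768, 21))  # 2016.0 GB/s -> hypothetical 768-bit bus
print(bandwidth_gbs(384, 14))  # 672.0 GB/s  -> Titan RTX (GDDR6 at 14 Gb/s)

2016 / 672 does indeed come out to exactly 3x.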
Santoval - Friday, August 14, 2020 - link
p.s. Evidently they cannot use 16 Gb chips because GDRR6X is a brand new memory and is still being produced only at a 8 Gb size (according to Micron). Still, that and "Big Navi" would be very bizarre reasons for Nvidia to double down and use 24 of such chips in the 3090.
Santoval - Friday, August 14, 2020 - link
edit : *GDDR6X*
nevcairiel - Friday, August 14, 2020 - link
You don't need to double the bus width to double the memory, you can have two chips on each 32-bit memory controller.
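A rough sketch of that point (my own illustration; the chip density and counts are taken from the discussion above). As I understand it, in clamshell mode each chip drives only half of its data pins, so capacity doubles while the bus width, and therefore peak bandwidth, stays put:

def gddr_config(bus_width_bits: int, chips_per_channel: int, chip_gbit: int, pin_rate_gbps: float):
    """Capacity (GB) and peak bandwidth (GB/s) for a GDDR setup with 32-bit channels."""
    channels = bus_width_bits // 32
    capacity_gb = channels * chips_per_channel * chip_gbit / 8  # Gbit -> GByte
    bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8          # set by bus width alone
    return capacity_gb, bandwidth_gbs

print(gddr_config(384, 1, 8, 21))  # (12.0, 1008.0) -> 12 chips, 12 GB
print(gddr_config(384, 2, 8, 21))  # (24.0, 1008.0) -> 24 chips in clamshell, 24 GB, same bandwidth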
WaltC - Saturday, August 15, 2020 - link
It should never be forgotten that gains in actual bandwidth (as opposed to theoretical) do not relate to GPU performance on a 1:1 basis. So if the bandwidth rises by 20%, let's say, that does not automatically equate to a 20% gain in performance from the GPU...;) It all depends on how much bandwidth the GPU requires to keep its instruction pipes operating at max performance. It's the same for CPUs--I remember some old Athlons in which the memory bandwidth far exceeded the ability of the full pipelines to process. As to this announcement, I'm sure if nVidia had objected to it then Micron would have removed the references.
Kangal - Sunday, August 16, 2020 - link
Memory progression in GPUs has been weird:
0.5GB, 0.75GB, 1GB, 1.5GB,
2GB, 3GB, 3.5GB, 4GB,
6GB, 8GB, 10GB, 11GB, 12GB
...like I was expecting a tick-tock doubling cadence:
1GB, 2GB, 4GB, 8GB, 16GB, etc etc.
(at least these are powers of two/binary)
Spunjji - Monday, August 17, 2020 - link
It's because those numbers are obscuring the differences in memory bus width behind them. Some of those numbers came out around about the same times as each other (1.5GB / 2GB, 3GB / 4GB) so they're not really "progressions" as such.
There were never any 3.5GB GPUs either - they were 4GB with weird partitioning 👍
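Those "odd" capacities do fall straight out of the channel count times chip density; a sketch (the chip densities per era are my own rough pairings, for illustration only):

# Capacity = (bus width / 32-bit channels) x one chip per channel x chip density.
# Odd-looking totals come from odd channel counts, not odd chip sizes.
for bus_width, chip_gbit in [(384, 1), (384, 2), (352, 8), (384, 8)]:
    channels = bus_width // 32
    print(f"{bus_width}-bit bus, {chip_gbit} Gb chips -> {channels * chip_gbit / 8} GB")
# 384-bit, 1 Gb -> 1.5 GB (roughly the GTX 480/580 era)
# 384-bit, 2 Gb -> 3.0 GB (roughly the GTX 780 era)
# 352-bit, 8 Gb -> 11.0 GB (1080 Ti / 2080 Ti)
# 384-bit, 8 Gb -> 12.0 GB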
Spunjji - Monday, August 17, 2020 - link
It would definitely be expensive, but they already did the "Titan-level price for the high-end card" with the 2080 Ti, so I doubt that factor alone would dissuade them.
Using twice the chips doesn't have to double the bus width, though. There's precedent for doing things this way.
CiccioB - Saturday, August 15, 2020 - link
12GB for the gaming card, probably 24GB for the Titan version (doubling the chip).
Santoval - Friday, August 14, 2020 - link
If they add more memory they will eat some share from their prosumer / workstation (Titan) and professional cards. They like to sell lots of those due to their super fat profit margins.
nevcairiel - Friday, August 14, 2020 - link
The original table from Micron seems more like a set of example configurations and not actual products. It also lists a Titan RTX as an example for GDDR6 with 12GB - but that card had 24GB. So I wouldn't put that much into that table.
boredsysadmin - Friday, August 14, 2020 - link
Micron stock has been tracking negatively vs. SOX (the PHLX Semiconductor Index) since this April. And then:
"Micron this morning has accidentally spilled the beans on the future of graphics card memory technologies."
It seems suspicious, as in not very accidental.
webdoctors - Friday, August 14, 2020 - link
LOL I was thinking the same thing! Investing in Micron was the worst thing I did 2+ years ago, it's gone absolutely nowhere. AMD/NVDA are killing it in the market and MU is just dead wood even though all 3 are needed in the final product. So bizarre.
Amazing specs in the table, 21Gb/s per pin seems unreal.
grant3 - Sunday, August 16, 2020 - link
Memory prices peaked about 2 years ago, so how's it bizarre that a company which makes most of its profits from memory prices would also fail to show stock price increases?
ikjadoon - Friday, August 14, 2020 - link
Completely unrelated to the marvels of engineering here:
The cover photo for this article is absolutely trippy when you scroll. Scroll up & down while looking at the photo.
The intersecting grids & mirrors look like they're vibrating / jumping as you scroll. This is on a 60Hz laptop panel, scrolling with the touchpad. Or is this my lack of sleep?
brucethemoose - Friday, August 14, 2020 - link
I see it too, on a 120Hz phone. The top dies and the bottom 2 dies look like they're shaking horizontally, relative to each other.
ikjadoon - Friday, August 14, 2020 - link
I feel less old. Heh, all right, glad it's not just me. Trippy for sure...
wrkingclass_hero - Sunday, August 16, 2020 - link
Went back up to check it out. I can confirm.
Kangal - Monday, August 17, 2020 - link
Here on 90Hz, can confirm too.
(but 90Hz feels more like 61Hz with some added fake frames, unlike the buttery smoothness of a straight doubling we see on 120Hz displays)
PopinFRESH007 - Tuesday, August 18, 2020 - link
Optical illusion. The refresh rate doesn't really matter much; it will just seem more pronounced at higher refresh rates. Our brains don't do well with slanted lines moving vertically.
If you take a picture of something with vertical edges/lines, angle the camera slightly down (so the lines converge toward the top of the image), and then scroll up and down while staring at it, you will get the sensation that you are moving toward your monitor.
Spunjji - Monday, August 17, 2020 - link
Ugh, damn. Why did I do that 🤣
13xforever - Friday, August 14, 2020 - link
The first table says RTX 3080 instead of RTX 3090.
Diogene7 - Friday, August 14, 2020 - link
Not directly related, but I am wondering why a normal CPU doesn't use GDDR memory instead of standard DDR4 DRAM, for example?
I mean, why not simply put in one unified pool of memory (e.g. 24GB of GDDR6X) to be shared between both the CPU and GPU, instead of 12GB of DDR4 DRAM that will be used only by the CPU and 12GB of GDDR6X RAM that will be used only by the GPU?
whatthe123 - Friday, August 14, 2020 - link
whatthe123 - Friday, August 14, 2020 - link
GDDR is higher latency, higher bandwidth. Good for GPUs that love eating up bandwidth.
DDR4 is lower latency and lower bandwidth. Good for CPUs that do a lot of small operations very quickly on local cache and can suffer substantially from latency when pulling from system RAM.
Lower latency is more useful for most CPU operations.
Diogene7 - Friday, August 14, 2020 - link
Thanks for the explanation, whatthe123.
Alright, so GDDR RAM has more latency than DDR RAM.
However, I am wondering how much more latency. I mean that, as a non-power-user, I could be willing to consider a laptop with one pooled GDDR6X RAM shared between the CPU and GPU, rather than having 2 separate pools of memory, each accessible only to either the CPU or the GPU...
I guess it is not really possible to get any benchmark, because CPU memory controllers are always something like DDR4 / DDR5 (LPDDR4 / LPDDR5) and don't support GDDR memory...
whatthe123 - Friday, August 14, 2020 - link
There are ways to code around the latency, but the tradeoffs generally aren't worth it for desktops.
Consoles for example run a unified pool, and most games have pretty deliberate job queuing/tiering to deal with the added latency. On desktops, where power/form factor isn't as big of a problem, there's no real benefit from unifying everything. It's useful for mobile, though, where space and power draw matter.
MrVibrato - Friday, August 14, 2020 - link
Now, if you want to end up with a really poor performing CPU, well, okay, you could realize the system RAM as GDDR6X. But then what's the point of having GDDR6X for the GPU. It would be like combining a rather slow low-end CPU with a somewhat higher-end-ish GPU that would benefit from GDDR6X. Would that make sense?
CiccioB - Saturday, August 15, 2020 - link
If you unify the memories on the same bus you will end up with half the bandwidth (on average) for both the CPU and GPU when they work together.
Moreover, having the controller satisfy two command queues coming from two different sources will still add latency. If you instead have two separate chips, you have two memory controllers competing for access to the bus, which is even more uncontrollable latency. To mask this you would need a lot of cache at different levels.
Also, the big speeds you can obtain from GDDR memory are partly due to the fact that the memory chips sit very near the GPU. So they can be near either the GPU or the CPU if you mount them on the same PCB, but not both. Getting that kind of bandwidth is out of the question if you install the GPU on a separate card, so this won't work with discrete GPUs even if you are not using PCIe as the connection bus.
kobblestown - Friday, August 14, 2020 - link
I don't think there should be a substantial latency difference. Sure, GDDR has higher latency if expressed in cycles, but it's probably comparable if expressed in time (as it should be!). The latency comes from the capacitor array inside the chip and the electronics that drive it, not from the interface. It is possible that GPUs better tolerate higher latency so there isn't much pressure to optimize heavily for it. But I would need some real numbers before I can be convinced that there is some meaningful difference in latency between the two.
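For the cycles-versus-time point, a back-of-the-envelope sketch (the DDR4 figures are typical retail timings; the second line uses an assumed command clock and CAS value purely for illustration, not datasheet numbers):

def latency_ns(cycles: int, clock_mhz: float) -> float:
    """Convert a latency given in clock cycles into wall-clock nanoseconds."""
    return cycles / clock_mhz * 1000

print(latency_ns(16, 1600))  # DDR4-3200 CL16: 1600 MHz I/O clock -> 10.0 ns
print(latency_ns(24, 1750))  # assumed GDDR-style part: higher CL, higher clock -> ~13.7 ns

A bigger CL number doesn't automatically mean more wall-clock latency if the clock it is counted against is also faster.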
webdoctors - Friday, August 14, 2020 - link
I think it's also a system capacity issue: GPU memory can't scale up to, say, 64GB or 32GB, whereas in the home PC market 32GB systems are almost common now.
I'm not sure if GPU memory latency is that terrible compared to regular DDR4 memory anymore, seems they're not that far off.
brucethemoose - Friday, August 14, 2020 - link
AMD did exactly that for a Chinese customer: https://www.anandtech.com/tag/zhongshan-subor
It ended up being vaporware, which is a shame, as it's a fascinating idea.
DigitalFreak - Friday, August 14, 2020 - link
The PS5 and XsX both use 16GB GDDR6 for their CPU / GPU. However, those chips were designed from the ground up to support GDDR6.
Santoval - Friday, August 14, 2020 - link
"designed from the ground up" is kind of an exaggeration. AMD just added GDDR6 support to the memory controllers of the SoCs, which by the way I have no idea if they are also unified (the same controllers for both the CPU and GPU block) or distinct (different controllers). "Semi-custom" parts like these SoCs tend to be less custom than the word might imply.defaultluser - Friday, August 14, 2020 - link
The downside of using video card RAM is higher latency. The other downside is ludicrously low maximum capacity, and the need to solder the memory chips to the motherboard.
VRAM has always used system DRAM as a cache, because it has always sacrificed capacity for speed. On consoles, they are stuck with only limited VRAM for gaming, so there's a balance that has to be struck over how to best utilize the limited resources.
brucethemoose - Friday, August 14, 2020 - link
The link to that 2nd document is up now.
I don't see any more spilled beans, but there's some EE porn and a few interesting details, like how it goes back to two-state encoding at lower clock frequencies.
There's also an obscure link to JEDEC in the CRC section of a table, but the JEDEC site itself doesn't have anything on "half data rate cyclic redundancy check".
Kamen Rider Blade - Friday, August 14, 2020 - link
Ryan Smith, please use GiB/s vs GB/s to avoid confusing the reader.
The sooner we can educate the masses on decimal vs. binary prefixes, the better.
https://en.wikipedia.org/wiki/Binary_prefix
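For anyone wondering how much the two units differ, it's about 7% at this scale; a quick sketch using the 21 Gb/s-per-pin, 384-bit figure from the article:

GB = 10**9    # decimal gigabyte
GiB = 2**30   # binary gibibyte

bytes_per_s = 384 * 21e9 / 8   # 384-bit bus at 21 Gb/s per pin
print(bytes_per_s / GB)        # 1008.0 GB/s
print(bytes_per_s / GiB)       # ~938.7 GiB/s
print(GiB / GB)                # ~1.074, i.e. a ~7.4% gap between the two units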
SSTANIC - Friday, August 14, 2020 - link
masses know and don't care
ksec - Friday, August 14, 2020 - link
True. Much prefer to stick to GB/s.
nandnandnand - Friday, August 14, 2020 - link
GB/s is fine as long as the value given is actually correct.
MrVibrato - Friday, August 14, 2020 - link
I am more confused about that table there using both Gb/s and Gbps in the same row. The masses won't understand what is going on there...
alphasquadron - Friday, August 14, 2020 - link
Hey guys, masses here, I didn't even know there was a table. I just wanna play Skyrim on max settings. Upvote if you agree!
Ryan Smith - Friday, August 14, 2020 - link
Whoops. Thanks for catching that. Whatever we do, it's important to be consistent about it.
Hul8 - Friday, August 14, 2020 - link
Except every time they rage about losing part of their purchased HDD/SSD capacity "from formatting".
Arbie - Friday, August 14, 2020 - link
+1. This is very much a tech site.
OTOH and OT: I'd rather see prices rounded to at least the nearest 1%. It's far easier to mentally compare $200 to $400 than $199 to $399. Of course vendors do it precisely because it does confuse - but we don't need to further that.
Kamen Rider Blade - Friday, August 14, 2020 - link
Imagine if we had regulation to prevent that kind of Non-Sense.
PopinFRESH007 - Tuesday, August 18, 2020 - link
except when the data is actually decimal values.
Kamen Rider Blade - Friday, August 14, 2020 - link
Imagine when HBM gets to implement PAM4 signaling across its data pins.
That's going to be BONKERS fast =D
CiccioB - Saturday, August 15, 2020 - link
And it will end up being a transistor sucker, as you would have to implement PAM4 logic (an encoder + decoder) on each of its 1024 pins, times the number of channels.
HBM has gone wide precisely in order to remain simple and to use as little power and as few transistors as possible. It is already expensive enough as it is.
casperes1996 - Sunday, August 16, 2020 - link
Hm. Am I missing something? DDR in GDDR stands for Double Data Rate, yeah? ...But it's actually QDR? And nobody bothered renaming it GQDR?
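For what it's worth, the per-pin rate is just the symbol rate times bits per symbol, which is where the naming stopped keeping up. A sketch (the GDDR6X symbol rate below is inferred from the 21 Gb/s figure, not quoted from a spec):

def pin_rate_gbps(symbol_rate_gbaud: float, bits_per_symbol: int) -> float:
    """Per-pin data rate = symbols per second x bits carried by each symbol."""
    return symbol_rate_gbaud * bits_per_symbol

print(pin_rate_gbps(14.0, 1))  # GDDR6: two-level (NRZ) signaling, 1 bit/symbol -> 14 Gb/s
print(pin_rate_gbps(10.5, 2))  # GDDR6X: PAM4, 2 bits/symbol at ~10.5 Gbaud -> 21 Gb/s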
Rudde - Monday, August 17, 2020 - link
GDDR5 would be succeeded by GQDR6. I believe consumers already have a hard time remembering GDDR6; there's no need to make it even more complicated.
Axiomatic - Monday, August 17, 2020 - link
I love the "NVIDIA GeForce Specification Comparison" chart.Many
Shiny
:-)
isthisavailable - Tuesday, August 18, 2020 - link
Wait a sec. RTX 3090? The 90 series is supposed to be a two-GPUs-on-a-single-PCB type of card, right?