I wonder if the HBM models will still support Optane. That could lead to a really interesting and complicated memory hierarchy: the HBM as the smallest and fastest pool of memory, able either to be addressable memory or to act as a cache; DDR5 as a middle tier; and Optane as a final persistent tier. I'm sure making good use of all that will take some custom work, but I wouldn't be too surprised if someone decides it is worthwhile.
Possible yes, practical no. In that setup you would most likely be using Optane in Memory Mode. According to the VMware documentation, you want a 1:4 ratio for Optane in Memory Mode, since the RAM acts as a cache for the Optane. Following best practice would mean that your host could only have 256GB of Optane with 64GB of HBM. The problem is that the smallest Optane DIMMs are 128GB, so you would end up with 512GB of Optane and a 1:8 ratio, which is against best practices. On top of that, 512GB isn't that much in a server host nowadays. The more RAM you can have in a virtual environment, the more VMs you can easily run.
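A quick back-of-the-envelope of the ratios being discussed above (a minimal sketch: the 1:4 guideline, the 128GB minimum DIMM size, and the 64GB of HBM come from the comment, while the DIMM count per host is a hypothetical population):

```python
# Rough arithmetic for Optane in Memory Mode, where the DRAM (here HBM) acts as
# a cache in front of the Optane capacity. The DIMM count is an assumed population.

hbm_cache_gb = 64            # on-package HBM acting as the near-memory cache
recommended_ratio = 4        # cited VMware guidance: 1:4 cache-to-Optane
smallest_dimm_gb = 128       # smallest Optane PMem DIMM capacity
dimms_per_host = 4           # hypothetical, e.g. one DIMM per memory controller

ideal_optane_gb = hbm_cache_gb * recommended_ratio        # 256 GB per the guideline
actual_optane_gb = smallest_dimm_gb * dimms_per_host      # 512 GB in practice
actual_ratio = actual_optane_gb / hbm_cache_gb            # 8.0 -> a 1:8 ratio

print(f"Guideline says {ideal_optane_gb} GB; smallest real config is {actual_optane_gb} GB (1:{actual_ratio:.0f})")
```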
If someone did run only HBM and Optane memory, I'm sure it would be for some custom server software tuned specifically for that, not for something as standard as a VMware server. Assuming it's supported, HBM plus maxed-out Optane would theoretically be the highest possible memory config. Previous systems required at least one rank of DDR4 memory so the system had some normal memory to work with; being able to eliminate that by using the HBM would allow more total Optane memory. That could be handy for some extremely large memory-set situations.
Yeah, just what I was thinking. There are surely some too-big-for-ddr5 datasets/workloads that would benefit from running on Optane instead of an NVMe drive, and that could still utilize 64GB of cache/scratch space.
SAP HANA does benefit from Optane in App Direct mode. SAP HANA is an in-memory DB, so every GB of storage needed for the DB requires a GB of RAM. I've seen HANA DBs that are 1.7TB in size, and they can be bigger than that. App Direct mode can make the startup and shutdown process much faster. That said, App Direct mode is usually done at a 1:1 ratio according to best practices, though it can be done at up to 1:4. Again, you run into a maximum amount of Optane of 512 GB if you have 64GB of HBM.
Optane is already slower than DDR4, so you might as well move it out to CXL. There, it can at least scale up more and be symmetrically shared by multiple CPUs or accelerators.
It would make more sense for a laptop to use HBM as main memory and then swap to Optane. You wouldn't need very much HBM to make that workable. It could give you instant S5 sleep/wake.
As for servers, I think a large, in-memory DB is probably the use case that makes sense to me.
However, if you add DDR5 to make a 3-tier memory hierarchy, I was thinking along the same lines as JayNor about putting the Optane in a CXL module.
Especially useful considering that the HBM certainly has cache tags, and I'd never expect them with DDR5. If you could add cache tags to DDR5, then HBM+DDR5+Optane would be even better, but also expensive to make.
Putting the tags in the DDR5 would be iffy, unless the CPU is designed to accept motherboards built specifically for this (with at least twice the minimum DDR5 width), reading both banks of DDR5, checking the tags, and only sending the right cache lines. Maybe you'd use a slew cache or something (limited to 1-way), as the DDR5 "cache" would be absolutely enormous compared to any other cache.
But from what I've seen from Intel, don't expect any way to add tags to DDR5. And don't really expect Optane "DDR5" to be compatible with DDR5 (your motherboard probably wouldn't be able to set the timings for the latency anyway).
edit: with Micron throwing in the towel, I suspect there are real issues in making the whole thing work that Intel simply isn't talking about. They have all the rights to make this stuff and simply aren't interested.
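To make the tags-in-DRAM idea above a bit more concrete, here is a minimal sketch of what a 1-way (direct-mapped) memory-side cache lookup has to do per access. This is purely illustrative; the line size, capacity, and the use of HBM as the near tier are assumptions, not anything Intel has described:

```python
# Illustrative model of a direct-mapped memory-side cache (e.g. HBM fronting a
# larger, slower tier), with one tag per cache set. All sizes are assumptions.

LINE_BYTES = 64
CACHE_BYTES = 64 * 2**30                 # 64 GB near-memory tier used as a cache
NUM_SETS = CACHE_BYTES // LINE_BYTES

tags = {}                                # set index -> tag of the resident line

def split(addr):
    line = addr // LINE_BYTES
    return line % NUM_SETS, line // NUM_SETS    # (set index, tag)

def access(addr):
    """Return 'hit' or 'miss'; on a miss the set's old line is simply replaced (1-way)."""
    idx, tag = split(addr)
    if tags.get(idx) == tag:
        return "hit"                     # tag matched: data comes from the near tier
    tags[idx] = tag                      # miss: fetch from far memory, install line
    return "miss"

print(access(0x1000), access(0x1000))    # miss, then hit
```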
It would make more sense if HBM accesses didn't have to plug up L3. Something like the cxl direct attached memory on GPUs seems to be the use case, so why not implement it the same way...?
I don't know if journalists are the progenitors of the naming scheme or not, but it seems to be an Intel thing. I don't remember if every CPU generation/iteration received a three-letter abbreviation, but IVB and KBL come to mind. By your logic, which is completely sound in my opinion, those should have been IB and KL.
In the pharmaceutical industry, "SR" stands for "slow release"; maybe Intel was tired of being the butt of even more jokes; they have been quite slow to release anything really new for a while now.
Is it certain that the long names came first and the abbreviations second, rather than something generating a random letter pair to start with and then finding a pronounceable word to wrap around it?
I was under the impression they were naming things after actual map features (initially map features near HQ, but one has to cast a wider net after a while).
The better question is: when will Intel finally drop the Lake and Cove crap and name their CPUs with something that makes it easy to tell which core it is? As it stands, most of those I know have NO idea which iteration/version is what.
Honestly, you shouldn't have to do that, and it's hard to do when you are talking about CPUs and you don't have access to do that. A lot of the time it's "whichever Cove or Lake Intel is on right now, who knows which one is which."
You like needing a decoder ring and slide rule to figure out which is which? 😂😂😂😂😂😂 As I said, no one I know does, and some are getting sick of Lakes and Coves. Seven CPUs with "Lake" in their names is getting old.
Intel just wants you & your friends to buy the latest and greatest model number. Never mind that Gen11 laptop and desktop CPUs are made on different nodes and have different generation cores!
In truth, you really *could* just forget about what's inside the CPU and just base your decisions on the benchmarks. If the benchmarks are thorough and represent your needs, then the results they measure are what you should actually be worrying about. The internal details are really just stuff for geeks to argue over.
" Intel just wants you & your friends to buy the latest and greatest " heh, and thats why most, if not all of them, are getting ryzen instead of intel when they upgrade in the next few months 😂😂😂
" Never mind that Gen11 laptop and desktop CPUs are made on different nodes and have different generation cores! " and thats why they hate intel current way of naming their cpus.
yep, and thats why they are going with amd, as amd clearly is the better cpu over all right now.
No. That's only what they tell the shareholders. It might work for Intel in the sense that, if you only hear them talking about a location name, you won't always know whether it is the next-generation CPU or just some support chip (I think).
But they really are for drumming up support for your project, getting a team on board, and crushing your (internal) competition. A cool code name goes a long way. See DoD and NASA project acronyms for examples.
The coolest codenames probably aren't that hard to troll. Like, by circulating a meme of a rocket crashing into a lake (either a cartoon or a botched SpaceX landing). Or maybe a pic of a comet crashing into a lake and a bunch of dinosaurs fleeing.
> They'll probably stop using Lake when they quit using Skylake derived cores.
For CPUs, Ice Lake already broke that concept. Further violations include: Rocket Lake, Alder Lake, Meteor Lake, Jasper Lake, Elkhart Lake... I could probably find more.
> Notice the new non Skylake cores have Cove as the code name.
Yes, the cores do. AFAIK, the desktop and server cores didn't have a name before they started with the Cove-based naming. At the low end, Apollo Lake's Goldmont cores were the first time I noticed a separate name being given to the cores.
They seem to have adopted the "cove" naming scheme for core uArch. I appreciate having a separate naming convention for those, since it's now clear whether someone is talking about a core vs. entire CPU.
This will be the HEDT processor base: an LGA4xxx socket with DDR5 and PCIe Gen 5. So AMD has to be getting ready, but sadly on that side they didn't even launch new Threadripper processors based on Zen 3. Ultimate shame.
Why is Intel not using serial DRAM? AFAIK it has the same latency issues as HBM and the same bandwidth/controller die-space benefits as HBM; you just trade overall energy efficiency for less energy/heat on the die.
But with serial DRAM you'd need fewer models, and you could customize, upgrade, and repair the system's memory (you could still use DRAM+Optane, for instance). I'd also assume it's easier to integrate compute-in-memory, as you're already essentially putting a processor between the memory controller and the DRAM.
Quote: The Aurora supercomputer is expected to be delivered by the end of 2021, and is anticipated to not only be the first official deployment of Sapphire Rapids, but also SPR-HBM. We expect a full launch of the platform sometime in the first half of 2022, with general availability soon after.
---
Hey guys, you forgot to account for the time delay Intel has EVERY SINGLE TIME a chip comes out. Move each year up by 1 and we should be good. If you're feeling nice, move it up by two quarters instead.
Ah, EPYC may run up to around 1GB of L3 - whereas this is an order of magnitude more.
Dunno where I got myself mixed up thinking the stacked L3 would also be multiples of GB. Maybe confused in how many layers they were using or something.
I suppose going off at a tangent - is this seen as L4 cache by CPU memory controller? Or is it seen as system RAM?
Oh - and did you get my email about using the OpenFOAM benchmark for your server/workstation comparisons? It would be a very useful addition that would stress both the memory controller and the FPU.
HBM2 provides slower access than AMD's stacked-L3 innovation, but scales to about 100x larger data sizes. For instance, if your dataset is around 40 GB and you're accessing it fairly randomly, Intel's HBM2 is probably going to help a lot more than AMD's larger L3 cache.
The way I look at them is complementary. For some purposes, the best option would certainly be to have both!
To understand why this scales better, remember that HBM is DRAM, which is denser and more power-efficient than SRAM. Also, HBM can currently be stacked up to 12-high, whereas AMD's approach only appears to work for adding one extra layer to the compute die.
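A crude way to see why capacity wins for that kind of workload: for roughly uniform random accesses, a cache's hit rate is approximately its size divided by the working set. A minimal sketch with assumed, illustrative sizes:

```python
# Why a big on-die L3 barely helps with a large, randomly accessed data set:
# for roughly uniform random accesses, hit rate ~= cache size / working set.
# The cache sizes below are illustrative assumptions.

working_set_gb = 40       # data set size from the example above
l3_gb = 0.75              # ~768 MB of stacked L3 (assumed)
hbm_gb = 64               # on-package HBM tier

l3_hit = min(1.0, l3_gb / working_set_gb)      # ~2% of accesses served by the L3
hbm_hit = min(1.0, hbm_gb / working_set_gb)    # 100%: the data fits in HBM entirely

print(f"L3 hit rate:  {l3_hit:.1%}  -> nearly every access still goes out to memory")
print(f"HBM hit rate: {hbm_hit:.0%} -> memory bandwidth, not capacity, sets the pace")
```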
HBM is very power-efficient, as you say, because it is basically multiple DRAM dies stacked on top of each other, but to avoid overheating, the memory has to run much slower than regular DRAM. HBM compensates for the slow memory speed by having a very wide interface, but there is no way to get around the increased memory latency that slower RAM entails.
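The width-versus-clock trade is easy to see with round numbers; the per-pin rates below are ballpark assumptions for HBM2E and DDR5-4800, not a description of the actual SPR-HBM configuration:

```python
# Peak bandwidth = interface width * per-pin transfer rate.
# Per-pin rates are approximate, assumed figures for illustration.

def peak_gbs(width_bits, gigatransfers_per_sec):
    return width_bits / 8 * gigatransfers_per_sec    # GB/s

hbm2e_stack = peak_gbs(1024, 3.2)   # one HBM2E stack: very wide, relatively slow pins
ddr5_channel = peak_gbs(64, 4.8)    # one DDR5-4800 channel: narrow, fast pins

print(f"One HBM2E stack:  ~{hbm2e_stack:.0f} GB/s")   # ~410 GB/s
print(f"One DDR5 channel: ~{ddr5_channel:.1f} GB/s")  # ~38 GB/s
```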
As for AMD's approach: AMD has only announced details of adding a single additional layer of SRAM to its chips. TSMC, on the other hand, has said that this approach can be used to stack up to 8 additional layers of SRAM.
> HBM compensates for the slow memory speed by having a very wide interface
I thought it was just that they used a wider interface because they *could*. Since it's in-package, there are multiple reasons it's practical to have a much wider interface than external DRAM.
As for the width vs. frequency, I thought that was just about the interface - not the memory, itself.
> there is no way to get around the increased memory latency that having slower RAM will entail
Do you have any latency figures on HBM2? It'd be nice to see how it compares with DDR4 and DDR5.
> TSMC on the other hand, has said that this approach can be used to stack up to 8 additional layers of SRAM.
I thought there was some discussion that, because the signals were traveling from silicon-to-silicon (i.e. not through TSVs), it was only good for a second layer, but the article definitely suggests more layers could be possible. I still wonder about heat, if going beyond that.
> I still wonder about heat, if going beyond that.
Stacked SRAM has some major advantages compared to stacked DRAM when it comes to heat. Firstly, it's a lot less dense than DRAM, and second, it's static: it does not need constant refreshing like DRAM. Sure, it takes more power when it comes to fetching memory, but that's not such a big problem.
> That's not transistor density, right? That's just cell density, because each cell requires more transistors.
Yes, SRAM cell density is lower
> Thing is, L3 cache sees a ton of activity, compared to a DRAM die. AMD was claiming like 40x the bandwidth of DRAM.
Yes indeed, but you were concerned about power usage, if you stack SRAM you end up distributing the power usage over a large area (lower cell density again) and over multiple levels, so heat is probably even less of a concern than it would be for a single layer of SRAM.
Another point about scalability is that AMD's stacked SRAM is going to put a huge amount of strain on their interconnect fabric. It will work very well as essentially an extension of a die's L3 cache, but won't help other dies nearly as much.
I can already tell that AnandTech is going to have fun benchmarking multi-die CPUs with this stacked SRAM, especially if AMD puts it in an EPYC or Threadripper!
Intel included stacked SRAM on their Ponte Vecchio chip, which Raja called Rambo Cache. They said the HBM was too slow. We should get more detail from Hot Chips, but perhaps AnandTech already has the presentations as part of the early-release privilege.
They also announced a hybrid-bonding stacked SRAM that was in the lab back last August. I haven't seen any announcement of whether or not the hybrid-bonding version is the same as the Rambo Cache.
hahahaha While many of you switched into Waiting Mode for the new "miracle" design, counting the days until mid-2022, then the delay to end of 2022, and finally getting it in mid-2023... I will tell you that I am not interested right now. My apps which use parallel algebra will get exactly 0 (zero) benefit from this DRAM memory, no matter how fast and what size.
So why is there no mention of latency in the article? Sure, HBM will improve bandwidth, but memory latency is and always has been the Achilles heel of HBM technologies. HBM 1.0 ran the memory at just 500MHz, and although newer versions have improved the speed, they are still well behind DDR4 and DDR5 when it comes to clock speed and latency.
Oh sure, HBM has dramatically improved the latency situation with the more recent versions, but due to the power constraints of stacking multiple memory dies they generally run at lower clock speeds than regular memory, which seems to be why the latency suffers. As for supporting info, there are a few studies out there, e.g. https://arxiv.org/pdf/2005.04324.pdf A quote from that article: "Shuhai identifies that the latency of HBM is 106.7 ns while the latency of DDR4 is 73.3 ns."
It's going to come down to the type of application being run, some will prefer the high bandwidth of HBM but others will suffer from higher latency. If only someone could combine HBM and Stacked SRAM...
> "Shuhai identifies that the latency of HBM is 106.7 ns while the latency of DDR4 is 73.3 ns,"
That's hardly night-and-day. Also, that DDR4 figure is a lot better than what I've seen in AnandTech's own memory latency benchmarks. I think their test isn't accounting for the added latency of the DDR4 memory sitting on an external DIMM.
Finally, the latency of HBM should scale better at higher queue depths. And a 56-core / 112-thread server CPU is going to have some very deep queues in its memory hierarchy.
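The queue-depth point can be made concrete with Little's Law: sustained bandwidth is roughly the number of outstanding requests times the request size, divided by latency. A minimal sketch using the Shuhai latencies quoted above; the 64-byte line size and the request counts are assumptions:

```python
# Little's Law: bandwidth ~= (outstanding requests * request size) / latency.
# Latencies are the Shuhai figures quoted above; everything else is assumed.

LINE_BYTES = 64

def bandwidth_gbs(outstanding, latency_ns):
    return outstanding * LINE_BYTES / latency_ns     # bytes per ns == GB/s

for outstanding in (16, 64, 256):
    ddr4 = bandwidth_gbs(outstanding, 73.3)
    hbm = bandwidth_gbs(outstanding, 106.7)
    print(f"{outstanding:>3} requests in flight: DDR4 ~{ddr4:5.1f} GB/s, HBM ~{hbm:5.1f} GB/s")

# A single DDR4 channel tops out around 25 GB/s no matter how deep the queue gets,
# while an HBM stack can keep scaling toward hundreds of GB/s, so deep queues favor
# HBM despite its higher per-access latency.
```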
No, there's not a massive difference between HBM and DDR4 any more but barring some kind of breakthrough HBM will continue to have higher latency.
I think it's going to come down to the application being run more than things like queue depths. One of the downsides of the HBM approach now is that many of the workloads that would have taken advantage of it have already migrated over to GPUs and won't be returning any time soon.
Still, I'm sure it'll only be a few years before some company gives us a combination of stacked SRAM and HBM on chip with DDR5 for further memory expansion. Can't wait
Ian, any info on the Intel 3nm chips coming in 2022 from TSMC (in hand according to the Linus Tech Tips video 24hrs ago - already working, so not some joke; I knew Gelsinger was holding back... LOL)? Linus said niche servers and laptop stuff at least. Probably due to having to leave out the SuperFin stuff, so niche parts where that missing piece won't matter, perhaps. Otherwise, I'd just buy all the 3nm I could and produce the crap out of those servers that fit (without giving up Intel secrets, a 3nm TSMC server from Intel isn't going to lose to 5nm AMD at TSMC, even without Intel's special sauce), or GPUs until the cows come home. Or simply revive your dead HEDT platform and soak up as much 3nm as you can for 2022 and 2023. Every wafer not sold to AMD/NV is a win for Intel, and you can make MASS income on GPUs right now.
AMD is so short on wafers they're having to fire up 12nm chips again. So yeah, pull an Apple and buy up more than you need if possible, and even bid on 2nm ASAP. As long as you can keep your own fabs 100% full, take every wafer you can from TSMC and flood the GPU market with 3nm GPUs. Price them how you like; they will sell out completely anyway. You can price to kill AMD/NV NET INCOME, or price to take all that eBay scalper money by just selling direct for 3x the normal launch price, etc. :)
I don't know why anyone thinks AMD is winning. It doesn't matter if your chip is the best if you can't make enough, because you keep making consoles at 500mm^2 on the best nodes and making $10-15 on a $100 SoC. Those wafers should be SERVER/HEDT/PRO GPU. You'd be making BILLIONS per year instead of ~500mil or losses (see the last 15yrs). No, one-time tax breaks don't count as a 1B+ NET INCOME quarter. They are 4yrs into this victory dance and still can't crack their Q4 2009 1B+ NET INCOME quarter. Yet the stock has gone up 10-15x from then, while shares outstanding have doubled (meaning each share is worth half whatever the stock price was then, $2-10 for a decade), and assets have dropped basically in half (though they are coming back slowly). Their stock crashed the same way when the quarters dropped back and people punished the stock from $10 to $2 again in 2009+. You are looking at the same story now, people. If AMD can't get wafers to make chips, they're stuck with great tech that can't be sold. There's nothing illegal about Intel buying all the 3nm they can to enter the GPU market for round 2 (round 1 was 6nm, and it killed AMD's Warhol IMHO... LOL).
Intel can write billions in checks for wafers from TSMC and make money on NEW products (discrete GPUs, for example). They pissed away 4B+ a year for 4-5yrs on mobile with contra revenue, which is why the fabs ended up where they are today (that 20B should have gone into 10/7nm and we'd be in a whole other game today). Either way, AMD can't stop Intel's checks, and AMD pissed away 4yrs chasing share instead of INCOME. Now it's time to pay the BIG CHECKS, and, well, only Apple, Intel, NV, etc. have them, in that order (others are in there after Intel, but you get the point).
3nm Intel HEDT chips could do the exact thing to Threadripper that Threadripper did to Intel HEDT (which doesn't really exist today). Priced right, Threadrippers would become hard to sell, and with Intel's volumes 3nm is surely already cheaper for them. Time to turn the tables, and it's even legal this time, and much easier with so many options to slow AMD down, cause issues for NV too, and take a shot across Apple's bow while at it, since they're coming directly at everyone anyway (CPU and GPU and GAMING, big time). TSMC 3nm won't need Intel's SuperFin stuff to beat AMD/NV 5nm parts, so there's no risk of giving that up to theft from China. Business is war; surely Pat is on this angle.
Let me tell you one thing. Do you think AMD doesn't know what is happening in the industry? Especially when Lisa Su showcased the 5900X with V-Cache out in public. Intel doesn't have any hardware that shows they beat AMD; let that sink in first.
Next: TSMC 3nm is not going to be released in 2022. What are you smoking, man? Samsung is facing issues with its GAAFET 3nm node, and as for TSMC's 3nm, we do not yet know what it is or how much it differs from 5nm. No high-powered silicon has been made on TSMC 5nm yet, and you expect a jump to 3nm?
Intel is just securing orders, and I believe those are for Intel Xe first, then CPUs. Intel has never, in decades, made their processors at TSMC or any other foundry. Making them work without issues, and being able to beat an AMD that has been crushing Intel in the DC market for 3 years, is not a simple "hey, we pay BIG checks and we win the damn game" lol. Do you think it's that easy, eh?
Sapphire Rapids HEDT is coming in 2022 on 10nm+; this 3nm magic ain't coming. By that time Zen 4 on AM5 is going to crush Intel, and ADL is already dead against 3D V-Cache Zen; the proof is EVGA making X570 boards in 2021 Q3, almost the end of the cycle. They are making them because they know AMD is going to whack Intel hard. Also, let Zen 3 come; HEDT Sapphire Rapids will definitely lose to that, lol. As for the Zen 4 based DC processor, it's 80+ cores; Intel is not going to beat them any time soon.
The console scam certainly isn’t helping AMD right now.
Go ahead, AMD... compete against PC gamers after shoveling out garbage releases like Frontier and the Radeon VII, and after peddling ‘Polaris forever’ so Nvidia could, with the help of AMD's pals MS and Sony, artificially (via nearly a monopoly) inflate ‘PC’ GPU prices (and, of course, half-hearted shovelware garbage like the Radeon VII, 590, Frontier, and Vega). Vega had identical IPC versus the Fury X.
AMD deserves a good rant, regardless of how lackluster Intel has been (very). Of course, we plebeians get the best quasi-monopoly trash the first world can conjure. More duopolies/oligarchies in tech, please.
What is this "scam"? And please post some evidence.
What's funny about your post is that this thread isn't even about AMD GPUs. It sounds like you're still sore from the pre-RDNA era, but we should be looking ahead to RDNA3 -- not behind, to Vega and Polaris. By the time Sapphire Rapids reaches the public, that's the generation AMD should be on.
> More duopolies/oligarchies in tech, please.
Odd timing for such a comment, when ARM is ascendant and RISC-V is finally starting to grow some legs.
I have written extensively wherein the irrefutable elementary logic of the situation has been explained.
AMD, Sony, Microsoft, Nvidia, and Nintendo all compete against the PC gaming platform. That’s your first clue.
As for opinions, you can try to use that word pejoratively but opinions are essential for understanding the world in a rational factual manner. Moreover, opinion has nothing to do with the fact that the logic and facts I have presented have never, not once, been rebutted substantively. Posting emojis and ad homs doesn’t cut it.
> AMD, Sony, Microsoft, Nvidia, and Nintendo all compete against the PC gaming platform.
It's not like that. It's like AMD and Nvidia are pioneering the high-end, cutting-edge stuff on the PC, where there's demand for the highest performance, newest features, and people willing to pay for it. Since most gamers are more budget-constrained, consoles bring most of those benefits to a more accessible price point that's also more consumer-friendly.
I wonder if maybe you don't understand the concept of market tiers.
I have posted extensively about the scam and the nature of it.
It's abundantly clear from the responses, which have included ad homs, emojis, and false claims (including the hilarious claim that it's easier to create the Google empire from scratch than to create a third GPU player that's actually serious about selling quality hardware to PC gamers, versus spending more time developing machinations to keep prices artificially high), and which have offered zero substantive rebuttal, that I'm unlikely to find much discourse on the subject.
Many very large monied interests want to keep the scam going via favorable posts (e.g. stonewalling), and refusal to challenge status-quo thinking is also par for the course. Samsung was caught hiring astroturfers. Judging by the refusal to engage me substantively on this subject, that doesn't surprise me. Even without a paycheck being involved, though, it is clearly emotionally satisfying to critique without doing any good-faith work.
Instead of the emojis, ad homs, and lazy false claims, it would be amazing to see someone put forth the effort to explain, in detail, how the existence of the ‘consoles’ as they are today, and have been since the start of the Jaguar-based era, is not more harmful to the PC gamer's wallet and more parasitic on the industry as a whole. Explain how having four artificial walled software gardens is in the interest of the consumer, particularly in light of how everything has changed to run the same hardware platform. Explain how having wafers allocated to ‘consoles’ isn't a way to undermine the competitiveness of the PC gaming platform, keeping prices high by creating demand artificially for the ‘consoles’. Explain how having just one and a half companies producing PC gaming GPUs (since AMD has been making a half-hearted effort for years, and sandbagging doesn't disprove my point at all) has led to a healthy PC gaming platform rather than one that falls prey to mining, wafer unavailability, lack of competitiveness, etc. None of this is difficult to comprehend, and there is plenty more to point out.
Suffice it to say that Linux with OpenGL, Vulkan, and the ITX form factor in particular make consoles pure parasitic redundancy. It's so obvious that apparently no one can be bothered to see it.
And don't forget all the years of selling GPUs designed for mining as if they're PC gaming parts. It just goes on and on... all these too-obvious examples.
Meanwhile, when Nvidia was selling Maxwell at reduced pricing (versus its appetite for extreme pricing) because GCN cards were more targeted toward mining, the company responded by achieving mining attractiveness parity with AMD.
But the company really wants to only sell the cards to gamers. Ha.
Meanwhile, AMD isn’t even feeling it’s worth the bother to try to feign disinterest in maximizing its profits via mining.
> Evidently more than reading posts for comprehension.
Your sensitivity to "ad hom" is dialed up to 11, but you seem to throw shade on the slightest whim. Something's wrong with that.
Trust that if I respond to a post, I've read it enough times to believe I understand it (unless I ask about something not clear to me). If I miss something in your intent or meaning, there are numerous other explanations. And rather than speculate why I missed something, you need only point out when I do, so we can continue.
" I have written extensively wherein the irrefutable elementary logic of the situation has been explained. " " I have posted extensively about the scam and the nature of it. " and i dont recall AT ALL you posting ANY links to this BS claim. all i have seen is personal options of this BS claim, oxford guy, post some links or admit is just your opinion. the FACT that mode 13 asked you for proof, and ALL you did was ramble, and post no links, at this time, proves it is just your opinion.
instead of posting ramble, post links, this way some of us, can also see where you are geting this BS from.
Oxford Guy, I can't stand consoles myself, but if I may venture my opinion, I'd say people like consoles because one just puts a game in, presses a button, and plays, whereas PCs take a lot more work (supposedly). Sony and MS churn out consoles to make money; AMD saw an opportunity for business and didn't waste time. As a side effect of that, PC gaming has taken some knocks. It's more a symptom of the mess, not the motive. Also, consoles have been in homes for decades, even the old TV games with those cartridges.
There is nothing about the ‘console’ since Jaguar that offers anything other than a duplicate parasitic walled software garden.
Every feature except for the specific DRM is available on the PC. That’s because the ‘consoles’ are PCs.
The faux convenience people always cite is nothing more than DRM. The bygone days where consoles were unique and offered actual advantages have been over since the Jaguar machines. Everything is standard hardware. And if convenience were really the motivation of the ‘console’ designers they wouldn’t be selling them with the same junk joystick mechanism (drift) and all optical media would have been put inside a protective shell (e.g. DVD-RAM).
As someone who saw what actual consoles were since the 70s, the breathtaking passivity of the more modern ‘console’ consumer (unprotected high-density optical media, for machines used by kids) was something. Now, though, even though there is 0% substantive hardware difference to justify the claims about convenience and everything else involved in claiming these DRM vehicles add value for the consumer — people continue to recycle the same archaic points. Back when a console had cartridges and a power button (and computer joysticks were inferior to console digital pads) the convenience argument held some water. More importantly was the fact that the hardware had to be much different due to high cost for PC-level equipment. (Has not been true since Jaguar.)
It’s impossible to credibly argue that evolution that resulted in a 100% identical x86 common hardware platform simultaneously justifies a bunch of duplicate DRM gardens. The ‘exclusive’ releases don’t add one iota of value, considering the costs.
> There is nothing about the ‘console’ since Jaguar that offers anything
The PS3 offered Blu-ray playback, and the Cell CPU was genuinely faster than any PC CPU of its day (hard to *use* that power, but its raw compute power was off the charts). Add to that Bluetooth controllers that had farther range than anything for the PC.
> all optical media would have been put inside a protective shell (e.g. DVD-RAM).
PCs stopped doing that even before consoles adopted optical media!
> even though there is 0% substantive hardware difference to justify the claims about convenience
It is more convenient, since it's better assembled. And it's also cheaper for the horsepower you get.
> The faux convenience people always cite is nothing more than DRM.
That's not true. People can buy a console game knowing it'll work exactly the same on their console as everybody else's, including the reviewers who might've motivated them to buy it.
As for DRM, you act as if that doesn't exist on the PC.
Okay. I’ll respond to this post first. The only substantive point you made is praise for the PS3, which is not x86 and is therefore more irrelevant than not.
You even quoted me saying ‘Jaguar’ but didn’t seem to read carefully enough to know what to try to rebut.
Your other claims, like build quality, are total nonsense. The only thing special about the ‘consoles’ is their particular DRM. That’s it. That makes all the claims about specialness unfounded in fact. They are PCs being peddled as items captured by special DRM, DRM that is parasitic for consumers.
I will also add that your rebuttal attempt regarding protecting the optical media completely flies in the face of your attempt to justify ‘consoles’ on the basis of better build quality and ease of use.
I also already pointed out the sad sorry cold hard fact that the ‘console’ peddlers are peddling the same defective joystick mechanism. Extremetech covered that.
If you're going to try to make arguments, take more time to make them hold some water. The reality is that software DRM doesn't increase hardware build quality or differentiate that hardware at all, other than that the existence of so much redundancy reduces the incentive for third parties to try to compete. The market is diluted.
Parasitic redundancy is a positive for certain corporations and their investors. It’s entropy for consumers.
> If you’re going to try to make arguments take more time to make them hold some water.
If you're going to reject my points, then simply reject them. You don't need to blame me for not making a better case, when you were probably going to reject them no matter what.
I make points to be considered. I don't pretend they will change anyone's mind who's already staked out a position. When is the last time you saw that, on the Internet?
> Your other claims, like build quality, are total nonsense.
I understand the point was rather vague, so I'll explain. If you build a mini-ITX PC with horsepower comparable to a modern console, the fact is that it doesn't ship as well, and the case has to be beefed up to support the PC's expandability. The mechanical (and thermal) design of consoles is optimized for what they are, saving weight, cost, size, packing material, and damage in transit.
Let's say Sony and MS both supported the same OS, APIs, and games as PCs. So, you could run any game on any of the three. They might cost a little more, because they can no longer be sold at a slight loss, but they would still be cheaper than a PC with comparable specs.
> Sony and MS churn out consoles to make money; AMD saw an opportunity
And let's not forget that Sony and MS were going to make consoles no matter what. And those consoles were going to compete for wafer supply. So, that's just another reason that whole point falls flat.
> explain how the existence of the ‘consoles’ as they are today ... is not more harmful to the PC gamer's wallet
I don't understand this argument. If you could magically remove the silicon demands of consoles over night, then we'd certainly see chip prices drop, for a while. However, that's not what would actually happen, if there were never consoles. What would happen is that there'd be even more demand for gaming PCs, which use more silicon and other components than consoles. So, by eliminating the console tier, you could actually just increase silicon demand.
> Explain how having four artificial walled software gardens is in the interest of the consumer
Yeah, that's not great. I don't like walled gardens on phones, cloud, or elsewhere, either. Seems to me like more of an indictment of modern tech than specific to consoles, however.
> Explain how having just one and a half companies producing PC gaming GPUs has led to a healthy PC gaming platform
AMD gets valuable input & feedback from Sony and MS that influence their development. The market for console gaming also incentivizes them to continue their GPU development beyond the profit they make from PC gamers. It's easier to make those investments when you have a fairly stable income stream, which the PC GPU market hasn't been. I don't know how long you've been following PC GPUs, but it's very much been a tale of boom and bust cycles.
> than one that falls prey to mining, wafer unavailability,
That's the whole tech industry.
> lack of competitiveness
That's the overhang of AMD's multiple generations of uncompetitive CPUs, IMO. AMD was in bad financial shape for several years, leading to lack of investment and innovation. RDNA is the first we saw of the newly-invigorated AMD, from their graphics division.
The problem is that Nvidia is moving much faster than Intel's CPU division, so it will be harder for AMD to surpass Nvidia than it was for Zen3 to beat Intel. Still, they're making very strong progress.
> None of this is difficult to comprehend
What's difficult to comprehend is how you think the world would be so much better without consoles. It would cause more supply problems and higher prices, and those who couldn't pay would be stuck gaming on phones and tablets.
It's also hard to see why you're tilting at windmills, like this. Nobody has the power to simply make consoles go away. I could understand focusing on the walled garden thing. That's your strongest argument and it's conceivable that governments could actually force some meaningful change, there.
> Suffice it to say that Linux with OpenGL, Vulkan, and the ITX form factor ... it's so obvious that apparently no one can be bothered to see it.
Valve has invested a lot into development of the Linux graphics stack, but I doubt it's paid off, for them. I mean, it works as a little bit of a threat against MS, but the idea of steam boxes as an alternative console hasn't been a thing for quite some time. Of course, a lot of people have gaming PCs hooked up to their living room TV, so it's not as if people don't know that it's possible. Nvidia has even sold GSync Ultimate monitors that are essentially gaming TVs.
Regarding silicon supply, there's another point to consider, and that's how often PC gamers upgrade vs. console users. If you've got a console, there's no sense upgrading it until the next generation comes out. Even the half-steps we saw in the last generation probably garnered a relatively small number of upgrades.
In PC gaming, 7 years is multiple lifetimes to be using the same graphics card, if not also the same CPU. So, that's another way that forcing more people into PC gaming would just put even more strain on the silicon supply chain.
‘Yeah, that's not great. I don't like walled gardens on phones, cloud, or elsewhere, either. Seems to me like more of an indictment of modern tech than specific to consoles, however.’
Specious hand-waving.
1. Parasitic redundant walled gardens are the ONLY defining quality of consoles since Jaguar.
Since I know you won’t comprehend this I will explain it even more.
A console is hardware that is very different from common x86 hardware (home PCs). That is its defining quality. That hardware difference stems from the hardware needing to be different in order to deliver the console gaming experience.
The PS3 you mentioned, erroneously, was the last hurrah for the console: its (sadly very harvested) Cell CPU was greatly inferior to an x86 or PPC desktop machine for general-purpose computing/AI, as opposed to streaming, the thing it was optimized for, and Blu-ray was the other oddity about the PS3 hardware. It's dead, Jim.
What’s lacking flesh is the horse all the specious archaic arguments about the ‘consoles’ rode in on. Jaguar junk came out a long time ago. The facts have been clear for much longer than anyone should need to have to comprehend the state of affairs.
Oxford Guy, I would say that consoles allow one to play games on a TV and not worry about computers. Arguably, they look neater and are smaller. And the game is guaranteed to work. That's differentiation enough, from the consumer's point of view. (Whether they're x86 or not, is more of an implementation detail.) Many folk have a preference for them, too, having grown up on the Xbox or PS. Of course, we computer enthusiasts hold to the idea that PC gaming is number one, which is just a preference, like theirs. Yes, consoles are computers tricked up in simple garb, locked up in DRM jail, but if the consumer sees them as accessible gaming devices in the sitting-room, they will be sold.
On a side note, gaming as a whole has gone down. No doubt, consoles have had a hand in that. Many games today are the equivalent of Hollywood-manufactured junk. (Is there an analogy between TV destroying "the pictures" and consoles doing the same to games? Perhaps.) I sigh but don't worry too much about it. Just like old films, there are fantastic old games we can still play.
I hear this, but I also sometimes hear about great indie titles. I think there are still good games being made, if you look for them, but I haven't played anything for at least 6 years or so.
100%. Well, thing is, I don't really play any more. I'm working my way through BioShock Infinite though (but haven't touched it in a year). When Half-Life 3 comes out, I'll be very excited too.
> Since I know you won’t comprehend this I will explain it even more.
Ah, warm fuzzies all around!
Just because I don't agree with you doesn't necessarily mean I don't understand. While we might disagree on the benefits of hardware streamlining and homogenization, I think we can agree that it's unfortunate consoles and PCs don't share OS, APIs, and software.
How so? Am I wrong that it's a broader problem? Or that there could be broader solutions? Or that it's actually conceivable to do something about walled gardens, when there's literally nothing you can do about consoles beyond this lonely, inconsequential rant on the internet?
Maybe if the walled garden issue could be somehow tackled by regulators, it would have impacts on consoles that you deem positive?
mode_13h, don't waste your time with Oxford Guy any more. He hates consoles for whatever reason he has and, from what I can tell from his posts, always will. But to call them a scam with NO proof whatsoever, 17 posts from him with no proof other than what looks to be his PERSONAL OPINION, is where the BS comes into play. Others on here have asked him for proof of the console scam, as he puts it, and that has resulted in the same type of replies from him, and no proof.
> but to call them a scam, with NO proof what so ever, 17 posts from him
Yeah, I was just curious to hear his case. Now that I have, I think we can move on.
I think there's a missed opportunity, somewhere in here, to explore the what-ifs. So far, his only answer to consoles seems to be that people should just use mini-ITX PCs, and with little apparent appreciation of what that would mean for the industry or consumers.
I think there are other possibilities that are more interesting to explore, such as what if regulators blew open the doors on the walled gardens, by forcing platforms to authorize 3rd party signature authorities and app stores, as well as a requirement to open their APIs to all developers?
‘So far, his only answer to consoles seems to be that people should just use mini-ITX PCs, and with little apparent appreciation of what that would mean for the industry or consumers.’
Once again, speculation rather than factual substance. It’s easy to ‘win’ arguments involving one’s fictional opponents.
I have said more than that but reading for comprehension rather than dismissal is not your modus operandi.
> It’s easy to ‘win’ arguments involving one’s fictional opponents.
I wouldn't say I'm trying to "win" anything, other than trying to get to the heart of your case and see if it's based on anything that withstands scrutiny.
" Once again, speculation rather than factual substance. It’s easy to ‘win’ arguments involving one’s fictional opponents " hello pot, meet kettle. " I have said more than that but reading for comprehension rather than dismissal is not your modus operandi " maybe, but you have posted no proof of this console scam, and just personal opinion. maybe its you that needs to work on their reading comprehension. but considering you also seem to resort to insults and name calling, i wouldnt expect much.
just like you are your bs console scam posts, right ? look, either post proof of this bs, admit its just your person opinion, and that you hate consoles.. cause that all it looks like it is.
again, so far, you have posted NO proof of this bs.
It's comical how many of you see my AMD posts as negative. I'm trying to get them to make more money by making HIGHER MARGIN chips. How is it negative to tell someone, for the love of GOD, to start MAKING NET INCOME, CHARGING MORE, etc.?
Too bad they don't have a block button on here like wccftech (the only good thing about their system).
I gave tons of data for you to look at. See all those numbers in my post? That is DATA. Learn to debate the data, instead of attacking the messenger.
But your premise is flawed. AMD cannot stop the production of console chips. Do you think MS and Sony are stupid? Whatever arrangements they have for wafer supply, you can bet that it's out of AMD's hands.
Also, the notion that Intel can buy up the 3 nm wafer supply before anyone else, or that it can justify doing so for it shareholders, or that TSMC would even be obligated to sell the wafers in volumes that could damage the prospects for its other customers are all laughable.
You're living in a fantasy land, not the real world.
You are under the impression that you win just because you have the best perf/chip. AMD had that ages ago for some 3-4yrs, and the same thing happened then that is happening now, just for different reasons. The first time, Intel cheated, bribed, etc., and AMD also had a hard limit of 20% share, as that was all their fabs could make. Today, is it much different? AMD has CHOSEN to blow wads of wafers (on the best nodes each time) on consoles, which is limiting the amount of HIGH MARGIN server/HEDT/GPU sales they could be getting instead. It is that simple.
You are in fantasy land and don't read enough. Apple is launching 3nm products in Sept 2022 and chips are being made and tested for it now. Google "TSMC 3nm Apple Intel" and you should get 100 articles talking about 3nm for 2022, with Intel either in 2022 as well or Q1 2023. Samsung's issues have nothing to do with TSMC. Intel has made chips at TSMC for AGES; you are incorrect. They are literally about 8% of TSMC's total output yearly. You really don't read. They rank about 1% behind AMD this year if it all turns out as we've been told. And yes, I think it is as easy as Apple writing the largest check, so they get every process first. Intel just has to write one large enough to be 2nd in line, and with 18.6B TTM they certainly have the cash to pay a premium for wafers, and it's legal. You must be too young to remember the last time AMD was here. I was a reseller for AMD then... ROFL.
HEDT wasn't mentioned by Linus; he said niche server and mobile at least, but didn't know about other stuff. Did you read my post, or just freak when you saw the wall of data, not text? Jeez, I spent half the post telling AMD how to beat Intel, but you just dismissed it all. https://hothardware.com/news/apple-intel-racing-de... https://www.gizmochina.com/2021/07/02/apple-intel-... Intel is already testing 3nm designs from TSMC... and it goes on to say TSMC 3nm mass production in H2 2022. Read more; much of this data has been out for AGES, for example: https://www.pcgamer.com/tsmc-confirms-3nm-tech-for... Mass production of 3nm in H2 2022 again, from Dec 2020. I could probably go back further, but you are wasting my time. You are claiming stuff isn't coming when 3 companies 100% involved in this stuff are ALL claiming 3nm devices next year. You claim all 3 companies (TSMC, Apple, Intel) are lying... OK.
I don't have to beat you, I just have to stop you from getting wafers, which stops you from getting NET INCOME. I can beat you later or simply bankrupt you and buy you out, what with 2.1B units of ARM mobile (a PC in a hand; hooked up to a monitor/kb/ms it IS a PC), Apple now making ARM Macs, etc. You have an AMD with no defense from the FTC now. It will be ARM vs. x86 vs. RISC-V.
Show me some NET INCOME, or STFU. https://www.youtube.com/watch?v=NCYNftA4EYM Linus: 3nm CPUs incoming in 2022. It takes until about 50 minutes into the WAN Show before they get to 3nm; they wait the entire vid to talk about Intel 3nm in 2022. The comments section gives the timestamps for each topic.
Again, Intel doesn't have to win. I described more than one way to cause damage, and it is easy and legal for them to do. 6nm Warhol is already an example of wafers lost to Intel ;) That costs you too: if you have to cancel a design, a team's time just got wasted, R&D just got wasted, and there's probably some take-or-pay crap involved, etc. Now imagine Intel does what I said for all of H2 2022 and 2023. This is easy for Intel to win; it would be different if they weren't pulling 21B for 2018-2020 and a TTM of 18.6B NET INCOME (not revenue; do you even know the difference?).
Are you just ignoring the news on purpose? Without dropping consoles, I don't see how AMD gets more wafers to assault servers or any product line heavily for a few more years. But by that time everyone is basically on the same field again, except with an AMD who forgot to cash in for 4yrs so far. Share means nothing if you make no income before I take it back... LOL.
> AMD has CHOSEN to blow wads of wafers (the best nodes each time) on consoles
At the time AMD signed up to design the console chips, they had no reason to believe that TSMC couldn't scale capacity to meet all of AMD's demands on top of those console chip orders. Now that the situation has changed, it's too late. AMD doesn't get to decide which wafers are used for console chips.
> You are in fantasy land and don't read enough.
Um, maybe you're reading the wrong sort of stuff. Maybe you need to read more about how business actually works.
> HEDT wasn't mentioned by linus
You listen too much to Linus Tech Tips. He and WccfTech profit by being sensationalist. They just want views, clicks, and followers.
> you are wasting my time.
You're free to leave and stop posting. We won't miss you.
> 6nm warhol is already an example of wafers lost to intel ;)
Proof? There are other reasons it could have been canceled.
https://www.cnet.com/news/apple-intel-will-be-firs... "Apple's iPad may be the first device from the company to be powered by processors using 3nm technology, Nikkei reported, while Intel is working on designs for notebooks and data center servers. TSMC reportedly plans to manufacture more 3nm chips for Intel than Apple."
Reportedly more 3nm for INTEL than Apple... ROFL. No 3nm coming... it's just a ruse, I swear. We are done here. I didn't realize how much news is out on it; I'd rather read some more than waste my time on fools like you. :) Good day. Linus mentioned niche servers; well, I guess that could be DC... I guess your DC comment seems moot now, huh? Intel doesn't have anything to beat AMD with? TSMC's 3nm is better than their 5nm, so if it's 3nm Intel vs. 5nm AMD, both at TSMC, TSMC seems to think Intel will win this contest if their figures are correct. Either way, it is massively better than 10nm Intel, right? Are we done here?
I reported months ago, IIRC, that Intel was buying over 50% of TSMC 3nm. People laughed. Nikkei seems to be proving what I said previously. As the ramp goes on, obviously many others will join Apple/Intel on 3nm, but the first run went to Apple, so the second must be quite a bit larger, with more going Intel's way. I'm shocked you don't get that this is how it would go when someone has ~20B yearly NET INCOME to blow on the wafers YOU need. So easily flipped with a check for the next upcoming process. I expect Intel to fight for every wafer they can get that Apple won't pay a premium for. Intel can use them all and still keep their own fabs 100% full. Just more info; not sure if the post I'm talking about was as thejian or nobodyspecial (or one of my other nicks out there on stock sites).
15% faster perf, or 30% lower watts, than TSMC 5nm. So Intel's answer to AMD's 5nm DC assault is 3nm DC server chips from TSMC... you get the point here, right? https://www.windowscentral.com/intel-apple-tsmc-3n... Is that enough Windows and tech sites telling you to rethink your position here? LOL. Again, this one says they're making DC server chips and notebooks, both areas where 3nm would be better used than a 10nm Intel chip, right? ROFLMAO
Just because Intel might be on a smaller node doesn't mean their chip on 3nm will be faster than an AMD on 5nm. Even Tiger Lake, on Intel's 10nm that is better than TSMC's 7nm, is at an IPC disadvantage compared to Zen 3. Most people in the tech industry think Intel will not be able to reach performance parity with AMD until 2025. That is how far Intel is behind right now due to their own 10nm fiasco.
I'm not sure what to make of this TSMC 3 nm hype, but I wouldn't worry about it. We know AMD and TSMC are collaborating closely and it seems unlikely to me that Intel would be able to swoop in and buy up all the 3 nm wafer supply before AMD got a bite.
Reading well-designed peer-reviewed studies from outside of one’s field is a good way to be humbled.
I suppose jargon has gotten out of control, though. One article said sex researchers during the W administration invented new obscurantist jargon in order to try to maintain funding.
Agreed. I'd say anything outside of sociology and such fields can be decent reading. In physics, etc., there's not much scope for hogwash like "gravity is a social construct." Still, there's a tendency for academics to lead one's mind through a maze, till it's not possible even to think clearly any more. And it all begins with language. Soon, spectres begin to operate as realities in people's minds.
Clarity of thought is what writing should be about. Setting down truth in simple, dignified terms. And truth is nature, or a reflection of her. She is plain, simple, modest, straightforward. Hard to find, and blushing like a rose.
Intel can't justify paying way more for those wafers than they can make in revenue from the chips, and the price per transistor is going to be prohibitive for GPUs or probably even server chips, for years. What you're missing about Apple is that their phone chips tend to be smaller than others and higher-margin as well. This makes a new process node viable for them (or other leading-edge phone SoCs) sooner than it is for other types of chips. That's why they bought the initial 5 nm capacity.
To the extent that 3 nm does make economic sense, I'm sure AMD has as much opportunity to place orders as Intel does.
Costing more per mm^2 than Intel can sell them for.
> 2. Citation requested.
Based on my observations that small, high-margin mobile chips are always the ones on leading process nodes. This is partly due to low yield of new process nodes (thus, favoring smaller chips). Also, because it takes time to bring online more production lines, the initial demand overwhelms supply, which bids up prices.
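The small-die-first pattern follows directly from the classic Poisson die-yield approximation, yield ≈ e^(-defect_density × die_area). A minimal sketch; the defect density and die areas below are assumed, illustrative numbers for a young node, not published figures:

```python
# Poisson yield approximation: yield = exp(-defect_density * die_area).
# The defect density and die areas are assumed values for illustration only.

from math import exp

defects_per_cm2 = 0.5                    # assumed early-life defect density

def yield_pct(die_area_mm2):
    return exp(-defects_per_cm2 * die_area_mm2 / 100) * 100

phone_soc = yield_pct(100)               # roughly the size of a mobile SoC
server_die = yield_pct(400)              # roughly a large server/GPU compute die

print(f"100 mm^2 mobile die: ~{phone_soc:.0f}% yield")    # ~61%
print(f"400 mm^2 server die: ~{server_die:.0f}% yield")   # ~14%
```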
I wonder if the HBM models will still support Optane. That could lead to a really interesting and complicated memory hierarchy. The HBM as probably the smallest and fastest pool of memory that can probably either be addressable memory or act as a cache. DDR5 as a middle tier and Optane as a final persistent tier. I'm sure making good benefit of that all will take some custom work but I wouldn't be too surprised if someone decides that it is worthwhile.brucethemoose - Monday, June 28, 2021 - link
HBM + Optane, with no DDR5 in between, would be an interesting configuration if possible.schujj07 - Monday, June 28, 2021 - link
Possible yes, practical no. In that idea you would be using Optane in memory mode most likely. According the VMware documentation, you want to have a 1:4 ratio for Optane in memory mode as the RAM acts as a cache for the Optane. Following best practice would mean that your host would only have 256GB Optane with 64GB HBM. Problem there is the smallest Optane DIMMs are 128GB so you would be at a 1:8 ratio for 512GB RAM which is against best practices. On top of that 512GB RAM isn't that much is a server host now of days. The more RAM you can have in a virtual environment the more VMs you can easily run.kpb321 - Monday, June 28, 2021 - link
I someone did run only HBM and Optane memory I'm sure it would be for some custom server software tuned specifically for that and not for something as standard as a VMware server. Assuming it's supported, HBM + maxed out Optane memory would theoretically be the highest possible memory config. Previous systems required at least one rank of DDR4 memory so the system had some normal memory to work with. Being able to eliminate that by using the HBM would allow more total Optane memory. That could be handy for some extremely large memory set situations.brucethemoose - Monday, June 28, 2021 - link
Yeah, just what I was thinking. There are surely some too-big-for-ddr5 datasets/workloads that would benefit from running on Optane instead of an NVMe drive, and that could still utilize 64GB of cache/scratch space.schujj07 - Monday, June 28, 2021 - link
SAP HANA does benefit from Optane in App Direct mode. SAP HANA is an in RAM DB so every GB of storage needed for the DB requires a DB of RAM. I've seen some HANA DBs that are 1.7TB in size and they can be bigger than that. App Direct mode can make the startup & shutdown process much faster. That said App Direct mode is done in a 1:1 ratio usually according to best practices. However, it can be done in up to 1:4 ratios. Again you run into a max amount of Optane being 512 GB if you have 64GB HBM.JayNor - Monday, June 28, 2021 - link
An Intel CXL presentation indicated they will move Optane to a CXL memory pool, which should spur adoption.
brucethemoose - Monday, June 28, 2021 - link
Surely there would be a latency and power hit vs. hanging it off the IMC, right?
mode_13h - Monday, June 28, 2021 - link
Optane is already slower than DDR4, so you might as well move it out to CXL. There, it can at least scale up more and be symmetrically shared by multiple CPUs or accelerators.
mode_13h - Monday, June 28, 2021 - link
> HBM + Optane, with no DDR5 in between
Yes, I thought that as well.
It would make more sense for a laptop to use HBM as main memory and then swap to Optane. You wouldn't need very much HBM to make that workable. It could give you instant S5 sleep/wake.
As for servers, I think a large, in-memory DB is probably the use case that makes sense to me.
However, if you add DDR5 to make a 3-tier memory hierarchy, I was thinking along the same lines as JayNor about putting the Optane in a CXL module.
wumpus - Thursday, July 1, 2021 - link
Especially useful considering that the HBM certainly has cache tags and I'd never expect it with DDR5. If you could add cache tags to DDR5, then HBM+DDR5+Optane would be even better, but also expensive to make.
Putting the tags in the DDR5 would be iffy, unless the CPU is designed to accept motherboards designed specifically for this (with at least twice the minimum DDR5 width) and reading both banks of DDR5, checking the tags and only sending the right cachelines. Maybe you'd use a slew cache or something (limited to 1-way), as the DDR5 "cache" would be absolutely enormous compared to any other cache.
But from what I've seen from Intel, don't expect any way to add tags to DDR5. And don't really expect Optane "DDR5" to be compatible with DDR5 (your motherboard would probably not be able to set the timings for the latency anyway).
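To put a rough number on how enormous the tag store for a DRAM-sized cache would be, here is a back-of-envelope sketch for a direct-mapped (1-way) cache with 64-byte lines; the 512 GB capacity and 46-bit physical address width are assumptions for illustration, not anything Intel has described.

```python
# Back-of-envelope tag sizing for treating a whole DRAM tier as a cache.
import math

CACHE_BYTES = 512 * 2**30   # 512 GB of DDR5 used as a direct-mapped cache (assumed)
LINE_BYTES = 64
PHYS_ADDR_BITS = 46         # typical server physical address width (assumed)

lines = CACHE_BYTES // LINE_BYTES
index_bits = int(math.log2(lines))
offset_bits = int(math.log2(LINE_BYTES))
tag_bits = PHYS_ADDR_BITS - index_bits - offset_bits

# Round each entry up to whole bytes, with 2 extra bits for valid/dirty state.
tag_store_bytes = lines * math.ceil((tag_bits + 2) / 8)
print(f"{lines:,} lines x {tag_bits} tag bits -> ~{tag_store_bytes / 2**30:.0f} GiB of tag storage")
```

Even with only a handful of tag bits per line, the result is on the order of tens of GiB, which is why the tags would have to live in the DDR5 itself rather than on the die.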
wumpus - Thursday, July 1, 2021 - link
edit: with Micron throwing in the towel, I suspect there are real issues in making the whole thing work that Intel simply isn't talking about. They have all the rights to make this stuff and simply aren't interested.
mode_13h - Friday, July 2, 2021 - link
> Putting the tags in the DDR5 would be iffy
Yeah, so just use the DDR5 as RAM and then swap to the Optane memory. That essentially gives you a software version of caching.
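As a minimal sketch of what "software caching" between tiers could look like, here is a toy two-tier reader: an ordinary in-RAM LRU cache in front of a memory-mapped file that stands in for an Optane-backed (e.g. DAX-mounted) store. The path, block size, and read_block helper are all hypothetical, chosen just to illustrate the idea.

```python
# Toy software tiering: hot blocks live in DRAM/HBM via an LRU cache,
# cold blocks fall through to a slower persistent tier.
import mmap
from functools import lru_cache

BACKING_FILE = "/mnt/pmem0/dataset.bin"  # hypothetical pmem-backed file
BLOCK = 4096                             # software "cache line" granularity

f = open(BACKING_FILE, "rb")
backing = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)

@lru_cache(maxsize=16384)                # ~64 MB of hot blocks kept in fast memory
def read_block(idx: int) -> bytes:
    """Fetch one block; repeated accesses are served from the in-RAM cache."""
    off = idx * BLOCK
    return backing[off:off + BLOCK]

first = read_block(0)   # cold: faults through to the backing store
again = read_block(0)   # hot: served from the LRU cache in RAM
```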
JayNor - Monday, June 28, 2021 - link
It would make more sense if HBM accesses didn't have to plug up L3. Something like the CXL direct-attached memory on GPUs seems to be the use case, so why not implement it the same way...?
mode_13h - Monday, June 28, 2021 - link
> It would make more sense if HBM accesses didn't have to plug up L3.
What do you mean by that? Are you saying you want HBM accesses to bypass L3?
JayNor - Wednesday, June 30, 2021 - link
I mean act as direct-attached memory for the cores, as on CXL GPU slaves.
https://www.youtube.com/watch?v=OK7_89zm2io
James5mith - Monday, June 28, 2021 - link
Why the shortening of Sapphire Rapids to SPR? Sapphire is one word. Shouldn't it be SR?
erotomania - Monday, June 28, 2021 - link
I don't know if journalists are the progenitors of the naming scheme or not, but it seems to be an Intel thing. I don't remember if every CPU generation/iteration received a three-letter abbreviation, but IVB and KBL come to mind. By your logic, which is completely sound in my opinion, those should have been IB and KL.
eastcoast_pete - Monday, June 28, 2021 - link
In the pharmaceutical industry, "SR" stands for "slow release"; maybe Intel was tired of being the butt of even more jokes; they have been quite slow to release anything really new for a while now.
mode_13h - Monday, June 28, 2021 - link
Yeah, maybe Intel needs a project to accelerate their production cycle named after a laxative?
Ian Cutress - Monday, June 28, 2021 - link
Intel's naming:
Sandy Bridge = SNB
Ivy Bridge = IVB
Has-Well = HSW
Broad-Well = BDW
Sky-Lake = SKL
Kaby Lake = KBL
Coffee Lake = CFL
Rocket Lake = RKL
Cannon Lake = CNL
Ice Lake = ICL
Tiger Lake = TGL
Intel's code names have always been two letters from the first word, and one letter from the second.
TomWomack - Monday, June 28, 2021 - link
Is it certain that the long names came first and the abbreviations second? Rather than something generating a random letter pair to start with, and then they find a pronounceable word to wrap around it?
Lord of the Bored - Monday, June 28, 2021 - link
I was under the impression they were naming things after actual map features (initially map features near HQ, but one has to cast a wider net after a while).
mode_13h - Monday, June 28, 2021 - link
> they were naming things after actual map features
Yeah, place names. But, they also probably check that each prospective place name has its own unique three-letter abbreviation.
Ian Cutress - Wednesday, June 30, 2021 - link
They use place names because they can't be trademarked/copyrighted and Intel can't be sued for using them, even as an internal name.
The Hardcard - Monday, June 28, 2021 - link
I want to know why the processors with Cove cores don’t have Cove names instead of more Lakes.
Qasar - Monday, June 28, 2021 - link
the better question is, when will intel finally drop the lake and cove crap, and name their cpus with something that is easy to tell which core it is on ? as it stands, most of those i know, have NO idea which iteration/version is what.
mode_13h - Monday, June 28, 2021 - link
> when will intel finally drop the lake and cove crap, and
> name their cpus with something that is easy to tell which core it is on ?
That's getting tricky, with CPUs starting to have a mix of cores.
> as it stands, most of those i know, have NO idea which iteration/version is what.
Send them to ark.intel.com. They simply have to look it up.
Qasar - Tuesday, June 29, 2021 - link
honestly, shouldnt have to do that, and hard to do when you are talking about cpu's, and you dont have access to do that. alot of times its "which ever cove or lake intel is on right now, who knows which one is which." you like needing a decoder ring and slide rule to figure out which is which ? 😂😂😂😂😂😂 as i said, no one i know does, and some are getting sick of lakes and coves. 7 cpus with lakes in their names, is getting old.
mode_13h - Wednesday, June 30, 2021 - link
Intel just wants you & your friends to buy the latest and greatest model number. Never mind that Gen11 laptop and desktop CPUs are made on different nodes and have different generation cores!
In truth, you really *could* just forget about what's inside the CPU and just base your decisions on the benchmarks. If the benchmarks are thorough and represent your needs, then the results they measure are what you should actually be worrying about. The internal details are really just stuff for geeks to argue over.
Qasar - Wednesday, June 30, 2021 - link
" Intel just wants you & your friends to buy the latest and greatest " heh, and thats why most, if not all of them, are getting ryzen instead of intel when they upgrade in the next few months 😂😂😂" Never mind that Gen11 laptop and desktop CPUs are made on different nodes and have different generation cores! " and thats why they hate intel current way of naming their cpus.
yep, and thats why they are going with amd, as amd clearly is the better cpu over all right now.
dullard - Tuesday, June 29, 2021 - link
That is the point of business code names. You are NOT supposed to easily know.
Qasar - Tuesday, June 29, 2021 - link
and the point of that would be what ? to confuse people ? at least with amd and their cpus, its easy and straight forward.
dullard - Wednesday, June 30, 2021 - link
Yes, code names are to confuse the competition. https://en.wikipedia.org/wiki/Code_name
Qasar - Wednesday, June 30, 2021 - link
and customers, mostly
wumpus - Wednesday, June 30, 2021 - link
No. That's only what they tell the shareholders. It might work for Intel as if you only hear them talking about a location name, you won't always know if it is the next generation CPU or just some support chip (I think).
But they really are for drumming up support for your project, getting a team on board, and crushing your (internal) competition. A cool code name goes a long way. See DoD and NASA project acronyms for examples.
mode_13h - Thursday, July 1, 2021 - link
> A cool code name goes a long way.
The coolest codenames probably aren't that hard to troll. Like, by circulating a meme of a rocket crashing into a lake (either a cartoon or a botched SpaceX landing). Or maybe a pic of a comet crashing into a lake and a bunch of dinosaurs fleeing.
29a - Thursday, July 1, 2021 - link
They'll probably stop using Lake when they quit using Skylake-derived cores. Notice the new non-Skylake cores have Cove as the code name.
Qasar - Thursday, July 1, 2021 - link
the cove names aren't any better.
mode_13h - Friday, July 2, 2021 - link
> They'll probably stop using Lake when they quit using Skylake derived cores.
For CPUs, Ice Lake already broke that concept. Further violations include: Rocket Lake, Alder Lake, Meteor Lake, Jasper Lake, Elkhart Lake... I could probably find more.
> Notice the new non Skylake cores have Cove as the code name.
Yes, the cores do. AFAIK, the desktop and server cores didn't have a name before they started with the Cove-based naming. At the low end, the Goldmont cores of Apollo Lake were the first time I noticed a separate name given to the cores.
bigboxes - Monday, July 5, 2021 - link
The problem is that there isn't much difference between the lakes and coves. Lame.
mode_13h - Monday, June 28, 2021 - link
They seem to have adopted the "cove" naming scheme for core uArch. I appreciate having a separate naming convention for those, since it's now clear whether someone is talking about a core vs. entire CPU.
Qasar - Tuesday, June 29, 2021 - link
glad you think so 😁
mode_13h - Monday, June 28, 2021 - link
Intel's gotta have a TLC (Three Letter Code) for everything.
JayNor - Monday, June 28, 2021 - link
"The exact launch of SPR-HBM is unknown,"Intel's newsroom PR says Aurora is using the HBM version.
Ian Cutress - Monday, June 28, 2021 - link
Funnily enough that wasn't addressed as part of our specific briefing.
Silver5urfer - Monday, June 28, 2021 - link
This will be the HEDT processor base. LGA4xxx socket with DDR5 and Gen 5. So AMD has to be getting ready, but sadly on that side they didn't even launch new Threadripper processors based on Zen 3. Ultimate shame.
yeeeeman - Tuesday, June 29, 2021 - link
this actually looks good, lots of new stuff, if temps, power and costs are in check, then it might just be a worthy adversary to the zen 4 based epyc.
mode_13h - Wednesday, June 30, 2021 - link
Depends on what you're doing, but it should give Intel an edge in *some* benchmarks.
Wereweeb - Tuesday, June 29, 2021 - link
Why is Intel not using Serial DRAM? AFAIK it has the same latency issues as HBM, same bandwidth/controller die space benefits as HBM, etc... you just trade overall energy-efficiency for less energy/heat on the die.
But with Serial DRAM you'd need fewer models, and you can customize, upgrade, repair, etc... the memory of the system (you can still use DRAM+Optane, for instance), and I'd assume it's easier to integrate compute-in-memory as you're already essentially putting a processor between the memory controller and the DRAM.
mode_13h - Wednesday, June 30, 2021 - link
You mean like CAPI or CXL memory devices? Or are you talking about something else?
Wereweeb - Monday, July 5, 2021 - link
Not memory through PCIe, Jesus, I mean something like IBM's OMI.
Geef - Tuesday, June 29, 2021 - link
Quote: The Aurora supercomputer is expected to be delivered by the end of 2021, and is anticipated to not only be the first official deployment of Sapphire Rapids, but also SPR-HBM. We expect a full launch of the platform sometime in the first half of 2022, with general availability soon after.---
Hey guys, you forgot to account for the time delay Intel has EVERY SINGLE TIME a chip comes out. Move each year up by 1 and we should be good. If you're feeling nice, move it up by two quarters instead.
Atari2600 - Tuesday, June 29, 2021 - link
So what makes this not a day late and dollar short when compared to AMD's stacked L3??
Looks slower, more energy intensive and of similar size.... am I missing something?
Ian Cutress - Wednesday, June 30, 2021 - link
This HBM will be alongside the CPU, not on top of it.
HBM is higher capacity than AMD's stacked SRAM.
Atari2600 - Wednesday, June 30, 2021 - link
Ah, EPYC may run up to around 1GB of L3 - whereas this is an order of magnitude more.
Dunno where I got myself mixed up thinking the stacked L3 would also be multiples of GB. Maybe confused in how many layers they were using or something.
I suppose going off at a tangent - is this seen as L4 cache by CPU memory controller? Or is it seen as system RAM?
mode_13h - Thursday, July 1, 2021 - link
> is this seen as L4 cache by CPU memory controller? Or is it seen as system RAM?
Didn't they say it supported both modes? Supposedly, that's how Xeon Phi implemented MCDRAM.
Atari2600 - Wednesday, June 30, 2021 - link
Oh - and did you get my email about using the openFOAM benchmark for your server/workstation comparisons? Be a very useful addition that would stress both memory controller and FPU.
mode_13h - Wednesday, June 30, 2021 - link
HBM2 provides slower access than AMD's stacked L3 innovation, but scales to about 100x larger data sizes. For instance, if your dataset is like 40 GB, and you're accessing it pretty randomly, Intel's HBM2 is probably going to help a lot more than AMD's larger L3 cache.
The way I look at them is complementary. For some purposes, the best option would certainly be to have both!
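A quick back-of-envelope on that point, assuming fully random 64-byte accesses so that essentially every request misses the on-die caches and throughput is simply bandwidth divided by line size; the bandwidth figures below are ballpark assumptions, not measured numbers for this part.

```python
# Random access over a working set far bigger than any SRAM cache is roughly
# bandwidth-bound; numbers below are ballpark, not measured.
LINE_BYTES = 64

tiers_gb_per_s = {
    "8-channel DDR5-4800": 307,   # 8 x 38.4 GB/s
    "4 stacks of HBM2E":   1600,  # ~400 GB/s per stack (assumed)
}

for name, bw in tiers_gb_per_s.items():
    fetches_per_s = bw * 1e9 / LINE_BYTES
    print(f"{name}: ~{fetches_per_s / 1e9:.1f} billion random line fetches/s")
```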
mode_13h - Wednesday, June 30, 2021 - link
To understand why this scales better, remember that HBM is DRAM, which is denser & more power-efficient than SRAM. Also, HBM can currently be stacked up to 12-high, whereas AMD's approach only appears to work for adding one extra layer to the compute die.
raddude9 - Thursday, July 1, 2021 - link
HBM is very power efficient, as you say, because it is basically multiple DRAM dies that are stacked on top of each other, but to avoid overheating, the memory has to run much slower than regular DRAM. HBM compensates for the slow memory speed by having a very wide interface, but there is no way to get around the increased memory latency that having slower RAM will entail.
As for AMD's approach: AMD only announced details of adding a single additional layer of SRAM to its chips. TSMC, on the other hand, has said that this approach can be used to stack up to 8 additional layers of SRAM.
mode_13h - Friday, July 2, 2021 - link
> HBM compensates for the slow memory speed by having a very wide interface
I thought it was just that they used a wider interface because they *could*. Since it's in-package, there are multiple reasons it's practical to have a much wider interface than external DRAM.
As for the width vs. frequency, I thought that was just about the interface - not the memory, itself.
> there is no way to get around the increased memory latency that having slower RAM will entail
Do you have any latency figures on HBM2? It'd be nice to see how it compares with DDR4 and DDR5.
> TSMC on the other hand, has said that this approach can be used to stack up to
> 8 additional layers of SRAM.
I thought there was some discussion that, because the signals were traveling from silicon-to-silicon (i.e. not through TSVs), it was only good for a second layer, but the article definitely suggests more layers could be possible. I still wonder about heat, if going beyond that.
raddude9 - Saturday, July 3, 2021 - link
> I still wonder about heat, if going beyond that.
Stacked SRAM has some major advantages compared to stacked DRAM when it comes to heat. Firstly, it's a lot less dense than DRAM, and second, it's static, so it does not need constant refreshing like DRAM. Sure, it takes more power when it comes to fetching memory, but that's not such a big problem.
mode_13h - Sunday, July 4, 2021 - link
> Firstly it's a lot less dense than DRAM
That's not transistor density, right? That's just cell density, because each cell requires more transistors.
> it's static, it does not need constant refreshing like DRAM.
This is an interesting point. How much of DRAM's idle power is due to refreshes? And how does its active power compare with idle power?
> Sure it takes more power when it comes to fetching memory
Thing is, L3 cache sees a ton of activity, compared to a DRAM die. AMD was claiming like 40x the bandwidth of DRAM.
raddude9 - Monday, July 5, 2021 - link
> That's not transistor density, right? That's just cell density, because each cell requires more transistors.
Yes, SRAM cell density is lower.
> Thing is, L3 cache sees a ton of activity, compared to a DRAM die. AMD was claiming like 40x the bandwidth of DRAM.
Yes indeed, but you were concerned about power usage. If you stack SRAM, you end up distributing the power usage over a large area (lower cell density again) and over multiple levels, so heat is probably even less of a concern than it would be for a single layer of SRAM.
mode_13h - Wednesday, June 30, 2021 - link
Another point about scalability is that AMD's stacked SRAM is going to put a huge amount of strain on their interconnect fabric. It will work very well as essentially an extension to a die's L2 cache, but won't help other dies nearly as much.
I can already tell that AnandTech is going to have fun benchmarking multi-die CPUs with this stacked SRAM, especially if AMD puts it in an EPYC or Threadripper!
JayNor - Wednesday, June 30, 2021 - link
Intel included stacked SRAM on their Ponte Vecchio chip, which Raja called Rambo Cache. They said the HBM was too slow. We should get more detail from Hotchips, but perhaps Anandtech already has the presentations as part of the early release privilege.
They also announced a hybrid bonding stacked SRAM that was back in the lab last August. I haven't seen any announcement whether or not the hybrid bonding version is the same as the Rambo Cache.
mode_13h - Thursday, July 1, 2021 - link
> Intel included stacked SRAM on their Ponte Vecchio chip
I remember the term Rambo Cache, but either missed or forgot that detail. Thanks.
webdoctors - Wednesday, June 30, 2021 - link
Is the HBM supposed to replace the DRAM in the hierarchy?
"leading to 64 GB of HBM."
That seems far too little for servers that would have 100s of GBs of main memory.
mode_13h - Thursday, July 1, 2021 - link
Or even TBs. Yeah, it can supposedly be used as L4 cache.
SanX - Wednesday, June 30, 2021 - link
hahahaha While many of you switched into Waiting Mode for the new "miracle" design, counting days till the mid 2022, delayed till end 2022, and finally getting it in mid 2023... i will tell that i am not interested exactly right now. My apps which use parallel algebra will have exactly 0 (=zero) benefits from this DRAM memory no matter how fast and what size.
mode_13h - Thursday, July 1, 2021 - link
> My apps which use parallel algebra will have exactly 0 (=zero) benefits from this DRAM memory
It's true that some apps will not benefit from it.
The key questions are: "how many will?" and "by how much?" I'm sure the precise config details will also have a lot to do with it.
Tomatotech - Friday, July 2, 2021 - link
My apes also will not benefit from this. This new chip is useless for them. They prefer to swing from tree branches and eat bananas.
raddude9 - Thursday, July 1, 2021 - link
So why no mention of latency in the article? Sure, HBM will improve bandwidth, but memory latency is and always has been the Achilles heel of HBM technologies. HBM 1.0 ran the memory at just 500MHz, and although newer versions have improved the speed, they are still well behind DDR4 and DDR5 when it comes to clock speed and latency.
mode_13h - Friday, July 2, 2021 - link
> HBM 1.0 ran the memory at just 500MHz, and although newer versions have improved the speed
Yeah, so why are you talking about HBM 1.0? In version 2.0, they at least doubled it, and then there's HBM2E and now HBM3.
> they are still well behind DDR4 and DDR5 when it comes to ... latency
I'd like to see some data supporting that claim.
raddude9 - Saturday, July 3, 2021 - link
Oh sure, HBM has dramatically improved the latency situation with the more recent versions, but due to the power constraints of stacking multiple memory dies they generally run at lower clock speeds than regular memory, which seems to be why the latency suffers.
As for supporting info, there are a few studies out there, e.g.:
https://arxiv.org/pdf/2005.04324.pdf
A quote from that article: "Shuhai identifies that the latency of HBM is 106.7 ns while the latency of DDR4 is 73.3 ns,"
It's going to come down to the type of application being run, some will prefer the high bandwidth of HBM but others will suffer from higher latency.
If only someone could combine HBM and Stacked SRAM...
mode_13h - Sunday, July 4, 2021 - link
> "Shuhai identifies that the latency of HBM is 106.7 ns while the latency of DDR4 is 73.3 ns,"That's hardly night-and day. Also, the DDR4 is a lot better than what I've seen on Anandtech's own memory latency benchmarks. I think their test isn't accounting for the added latency of the DDR4 memory sitting on an external DIMM.
Finally, the latency of HBM should scale better at higher queue depths. And a 56-core / 112-thread server CPU is going to have some very deep queues in its memory hierarchy.
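A minimal Little's Law sketch of that queue-depth point: the concurrency needed to keep a memory tier saturated is bandwidth times latency divided by request size. The latencies are loosely based on the figures quoted in this thread; the per-channel and per-stack bandwidths are illustrative assumptions.

```python
# Little's Law: outstanding requests = bandwidth * latency / request size.
LINE_BYTES = 64

def outstanding_lines(bandwidth_gb_s: float, latency_ns: float) -> float:
    return bandwidth_gb_s * 1e9 * (latency_ns * 1e-9) / LINE_BYTES

print(f"One DDR4 channel, 25.6 GB/s @ 73 ns  -> ~{outstanding_lines(25.6, 73):.0f} lines in flight")
print(f"One HBM2 stack,   400 GB/s  @ 107 ns -> ~{outstanding_lines(400, 107):.0f} lines in flight")
```

In other words, HBM-class bandwidth only pays off when hundreds of cache-line requests are in flight, which is exactly what the deep queues of a many-core server CPU can generate.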
raddude9 - Monday, July 5, 2021 - link
No, there's not a massive difference between HBM and DDR4 any more, but barring some kind of breakthrough HBM will continue to have higher latency.
I think it's going to come down to the application being run more than things like queue depths. One of the downsides of the HBM approach now is that many of the workloads that would have taken advantage of that approach have already migrated over to GPUs and won't be returning any time soon.
Still, I'm sure it'll only be a few years before some company gives us a combination of stacked SRAM and HBM on chip with DDR5 for further memory expansion. Can't wait
TheJian - Sunday, July 4, 2021 - link
IAN, any info on Intel 3nm chips coming 2022 from TSMC (in hand according to Linus TT vid 24hrs ago - already working, so not some joke, I knew Gelsinger was holding back...LOL)? Linus said niche servers and laptop stuff at least. Probably due to having to leave out superfin stuff, so niche where that missing part won't matter perhaps. Otherwise, I'd just buy all 3nm I could and produce the crap out of those servers that fit (without giving up Intel secrets, a 3nm TSMC server from Intel isn't going to lose to 5nm AMD TSMC, even without Intel special sauce), or gpus until the cows come home. Or simply revive your dead HEDT platform and soak up as much 3nm as you can for 2022 and 2023. Every wafer not sold to AMD/NV is a win for Intel and you can make MASS income on gpus right now.
AMD is so short on wafers they're having to fire up 12nm chips again. So yeah, pull an apple and buy up more than you need if possible and even bid on 2nm ASAP. As long as you can keep your fabs 100%, take every wafer you can from TSMC and flood the gpu market with 3nm gpus. Price how you like, they will sell out completely anyway. You can price to kill AMD/NV NET INCOME, or price to take all that EBAY scalper money by just selling direct for 3x normal launch price etc. :)
I don't know why anyone thinks AMD is winning. It doesn't matter if your chip is the best if you can't make enough because you keep making consoles for 500mm^2 on the best nodes and making $10-15 on a $100 soc. Those should be SERVER/HEDT/PRO GPU. You'd be making BILLIONS per year instead of ~500mil or losses (see last 15yrs). No, one time tax breaks don't count as a 1B+ NET INCOME Q. They are 4yrs into this victory dance and still can't crack Q4 2009 1B+ NET INCOME Q. Yet the stock has went up 10-15x from then, while shares outstanding have doubled (meaning worth half whatever stock price was then, $2-10 for a decade), and assets have dropped basically in half (though it is coming back slowly). Their stock crashed the same way when Q's dropped back and the people punished the stock from $10-2 again 2009+. You are looking at the same story now people. IF AMD can't get wafers to make chips they're stuck with great tech that can't be sold. Nothing illegal about Intel buying all the 3nm they can to enter the gpu market for round 2 (round 1 was 6nm, and it killed AMD warhol IMHO...LOL).
Intel can write billions in checks for wafers from TSMC and make money on NEW products (discrete gpu for example). They pissed away 4B+ a year for 4-5yrs on mobile with contra revenue, which is why the fabs ended up where they are today (that 20B should have been in 10/7nm and we'd be in a whole other game today). Either way, AMD can't stop Intel's checks, and they pissed away 4yrs chasing share instead of INCOME. Now it's time to pay the BIG CHECKS, and, well, only Apple, Intel, NV etc, have them in that order (others in there after Intel, but you get the point).
3nm Intel HEDT chips could do the exact thing to threadripper that it did to Intel HEDT (doesn't exist today really). Priced right, threadrippers would become hard to sell and with Intel volumes surely 3nm cheaper for them already. Time to turn the tables, and it's even legal this time, and much easier with so many options to slow AMD down, cause issues for NV too, and take a shot at apple's bow while at it, since they're coming directly at everyone anyway (cpu and gpu and GAMING big time). 3nm TSMC won't need Intel's superfin stuff to beat AMD/NV 5nm stuff, so no risk there giving it up to china theft. Business is war, surely Pat is on this angle.
Silver5urfer - Sunday, July 4, 2021 - link
What is this wall of text, lol.
Let me tell one thing. Do you think AMD doesn't know what is happening in the industry ? Esp when Lisa Su showcased 5900X with V-Cache out in public. Intel doesn't have any damn hardware that showcases they beat AMD, let that sink in first.
Next is, TSMC 3nm is not going to be released in 2022. What are you smoking man ? Samsung is facing issues with GAAFET 3nm node, just to think what is this TSMC 3nm we do not know yet, and how much it varies with 5nm we do not know. No high powered silicon has been made on TSMC 5nm yet, and jump to 3nm ?
Intel is just securing orders, I believe those are Intel Xe first then CPUs, Intel to date since decades never made their processors on TSMC or any other foundries, making them work without issues and ability to beat AMD who are crushing Intel since 3 years in DC market it's not a simple hey we pay BIG checks and we won the damn game lol. Do you think it's that easy eh.
Sapphire Rapids HEDT is coming in 2022 which is on 10nm+ this 3nm magic aint coming. By that time Zen 4 AM5 is going to crush Intel, ADL is already dead with 3D V-Cache Zen, proof is EVGA making X570 in 2021 Q3, almost end of cycle. But they are making it because they know AMD is going to whack Intel hard. Also let Zen 3 come, HEDT Sapphire Rapids, will definitely lose to that lol. As for Zen 4 based DC processor, it's 80+ cores, Intel is not going to beat them any time soon.
Qasar - Monday, July 5, 2021 - link
" What is this wall of text lol. " come one silver5urfer, you know exactly what this is, the usual anti amd BS rant from the jian, what else is it ?Oxford Guy - Monday, July 5, 2021 - link
The console scam certainly isn’t helping AMD right now.
Go ahead, AMD... compete against PC gamers after shoveling out garbage releases like Frontier and Radeon VI — after peddling ‘Polaris forever’ so Nvidia could, with the help of AMD’s pals MS and Sony, artificially (via nearly a monopoly) inflate ‘PC’ GPU prices (and, of course, half-hearted shovelware garbage like Radeon VI, 590, Frontier, and Vega). Vega had identical IPC versus Fury X.
AMD deserves a good rant, regardless of how lackluster Intel has been (very). Of course, we plebeians get the best quasi-monopoly trash the first world can conjure. More duopolies/oligarchies in tech, please.
mode_13h - Monday, July 5, 2021 - link
> The console scam
What is this "scam"? And please post some evidence.
What's funny about your post is that this thread isn't even about AMD GPUs. It sounds like you're still sore from the pre-RDNA era, but we should be looking ahead to RDNA3 -- not behind, to Vega and Polaris. By the time Sapphire Rapids reaches the public, that's the generation AMD should be on.
> More duopolies/oligarchies in tech, please.
Odd timing for such a comment, when ARM is ascendant and RISC-V is finally starting to grow some legs.
Qasar - Monday, July 5, 2021 - link
mode 13, he wont post any proof, as there is none, its his own opinion.
almost could add most of his posts as anti amd bs as well.
Oxford Guy - Monday, July 5, 2021 - link
An ad hom and a false claim in addition to it.
I have written extensively wherein the irrefutable elementary logic of the situation has been explained.
AMD, Sony, Microsoft, Nvidia, and Nintendo all compete against the PC gaming platform. That’s your first clue.
As for opinions, you can try to use that word pejoratively but opinions are essential for understanding the world in a rational factual manner. Moreover, opinion has nothing to do with the fact that the logic and facts I have presented have never, not once, been rebutted substantively. Posting emojis and ad homs doesn’t cut it.
mode_13h - Wednesday, July 7, 2021 - link
> AMD, Sony, Microsoft, Nvidia, and Nintendo all compete against the PC gaming platform.
It's not like that. It's like AMD and Nvidia are pioneering the high-end, cutting-edge stuff on the PC, where there's demand for the highest performance, newest features, and people willing to pay for it. Since most gamers are more budget-constrained, consoles bring most of those benefits to a more accessible price point that's also more consumer-friendly.
I wonder if maybe you don't understand the concept of market tiers.
Oxford Guy - Monday, July 5, 2021 - link
I have posted extensively about the scam and the nature of it.
It’s abundantly clear from the responses, which have included ad homs, emojis, and false claims (including the hilarious claim that it’s easier to create the Google empire from scratch than to create a 3rd GPU player that’s actually serious about selling quality hardware to PC gamers versus spending more time developing machinations to keep prices artificially high) — zero substantive rebuttal — that I’m unlikely to find much discourse on the subject.
Many very large monied interests want to keep the scam going via favorable posts (e.g. stonewalling) and refusal to challenge status quo thinking is also par for the course. Samsung was caught for hiring astroturfers. Judging by the refusal to engage me substantively on this subject that doesn’t surprise. Even without a paycheck being involved, though, it is clearly emotionally satisfying to critique without doing any good-faith work.
Instead of the emojis, ad homs, and lazy false claims — it would be amazing to see someone put forth the effort to explain, in detail, how the existence of the ‘consoles’ as they are today and have been since the start of the Jaguar-based era are not more harmful to the PC gamer’s wallet and more parasitic on the industry as a whole. Explain how having four artificial walled software gardens is in the interest of the consumer, particularly in light of how everything has changed to run the same hardware platform. Explain how having wafers allocated to ‘consoles’ isn’t a way to undermine the competitiveness of the PC gaming platform, keeping prices high by creating demand artificially for the ‘consoles’. Explain how having just one 1/2 companies producing PC gaming GPUs (since AMD has been doing a half-hearted effort for years and sandbagging doesn’t disprove my point at all) has led to a healthy PC gaming platform rather than one that falls prey to mining, wafer unavailability, lack of competitiveness, etc. None of this is difficult to comprehend and there is plenty more to point out.
Suffice to say that Linux with Open GL, Vulkan, and the ITX form factor in particular make consoles put parasitic redundancy. It’s so obvious that apparently no one can be bothered to see it.
Oxford Guy - Monday, July 5, 2021 - link
‘Pure’, not ‘put’. Typing on this phone, with its aggressive auto-defect, is enough to put me in a home.
Oxford Guy - Monday, July 5, 2021 - link
And don’t forget all the years of selling GPUs designed for mining as if they’re PC gaming parts. It just goes on and on... all these too-obvious examples.
mode_13h - Wednesday, July 7, 2021 - link
> GPUs designed for mining as if they’re PC gaming parts.
I don't agree with this, at all. As Nvidia has recently demonstrated, it's actually *hard* to make a GPU that's bad at mining!
Oxford Guy - Thursday, July 8, 2021 - link
‘I don't agree with this, at all.’
I’m completely shocked.
Meanwhile, when Nvidia was selling Maxwell at reduced pricing (versus its appetite for extreme pricing) because GCN cards were more targeted toward mining, the company responded by achieving mining attractiveness parity with AMD.
But the company really wants to only sell the cards to gamers. Ha.
Meanwhile, AMD isn’t even feeling it’s worth the bother to try to feign disinterest in maximizing its profits via mining.
mode_13h - Wednesday, July 7, 2021 - link
> Typing on this phone
My sympathies. I do enjoy the irony of a pc master-race post being typed on a phone, though. Thanks for that.
Oxford Guy - Thursday, July 8, 2021 - link
‘I do enjoy the irony of a pc master-race post being typed on a phone, though. Thanks for that.’
Evidently more than reading posts for comprehension.
mode_13h - Thursday, July 8, 2021 - link
> Evidently more than reading posts for comprehension.
Your sensitivity to "ad hom" is dialed up to 11, but you seem to throw shade on the slightest whim. Something's wrong with that.
Trust that if I respond to a post, I've read it enough times to believe I understand it (unless I ask about something not clear to me). If I miss something in your intent or meaning, there are numerous other explanations. And rather than speculate why I missed something, you need only point out when I do, so we can continue.
Qasar - Tuesday, July 6, 2021 - link
" I have written extensively wherein the irrefutable elementary logic of the situation has been explained. " " I have posted extensively about the scam and the nature of it. " and i dont recall AT ALL you posting ANY links to this BS claim. all i have seen is personal options of this BS claim, oxford guy, post some links or admit is just your opinion. the FACT that mode 13 asked you for proof, and ALL you did was ramble, and post no links, at this time, proves it is just your opinion.instead of posting ramble, post links, this way some of us, can also see where you are geting this BS from.
GeoffreyA - Tuesday, July 6, 2021 - link
Oxford Guy, I can't stand consoles myself, but if I may venture my opinion, I'd say people like consoles because one just puts a game in, presses a button, and plays, whereas PCs take a lot more work (supposedly). Sony and MS churn out consoles to make money; AMD saw an opportunity for business and didn't waste time. As a side effect of that, PC gaming has taken some knocks. It's more a symptom of the mess, not the motive. Also, consoles have been in homes for decades, even the old TV games with those cartridges.
Oxford Guy - Wednesday, July 7, 2021 - link
There is nothing about the ‘console’ since Jaguar that offers anything other than a duplicate parasitic walled software garden.Every feature except for the specific DRM is available on the PC. That’s because the ‘consoles’ are PCs.
The faux convenience people always cite is nothing more than DRM. The bygone days where consoles were unique and offered actual advantages have been over since the Jaguar machines. Everything is standard hardware. And if convenience were really the motivation of the ‘console’ designers they wouldn’t be selling them with the same junk joystick mechanism (drift) and all optical media would have been put inside a protective shell (e.g. DVD-RAM).
As someone who saw what actual consoles were since the 70s, the breathtaking passivity of the more modern ‘console’ consumer (unprotected high-density optical media, for machines used by kids) was something. Now, though, even though there is 0% substantive hardware difference to justify the claims about convenience and everything else involved in claiming these DRM vehicles add value for the consumer — people continue to recycle the same archaic points. Back when a console had cartridges and a power button (and computer joysticks were inferior to console digital pads) the convenience argument held some water. More importantly was the fact that the hardware had to be much different due to high cost for PC-level equipment. (Has not been true since Jaguar.)
It’s impossible to credibly argue that evolution that resulted in a 100% identical x86 common hardware platform simultaneously justifies a bunch of duplicate DRM gardens. The ‘exclusive’ releases don’t add one iota of value, considering the costs.
mode_13h - Wednesday, July 7, 2021 - link
> There is nothing about the ‘console’ since Jaguar that offers anything
PS3 offered blu-ray playback and the Cell CPU was genuinely faster than any PC CPU of its day (hard to *use* that power, but its raw compute power was off the charts). Add to that bluetooth controllers that had farther range than anything for the PC.
> all optical media would have been put inside a protective shell (e.g. DVD-RAM).
PCs stopped doing that even before consoles adopted optical media!
> even though there is 0% substantive hardware difference to justify the claims about convenience
It is more convenient, since it's better assembled. And it's also cheaper for the horsepower you get.
> The faux convenience people always cite is nothing more than DRM.
That's not true. People can buy a console game knowing it'll work exactly the same on their console as everybody else's, including the reviewers who might've motivated them to buy it.
As for DRM, you act as if that doesn't exist on the PC.
Oxford Guy - Thursday, July 8, 2021 - link
Okay. I’ll respond to this post first. The only substantive point you made is praise for the PS3, which is not x86 and is therefore more irrelevant than not.
You even quoted me saying ‘Jaguar’ but didn’t seem to read carefully enough to know what to try to rebut.
Your other claims, like build quality, are total nonsense. The only thing special about the ‘consoles’ is their particular DRM. That’s it. That makes all the claims about specialness unfounded in fact. They are PCs being peddled as items captured by special DRM, DRM that is parasitic for consumers.
Oxford Guy - Thursday, July 8, 2021 - link
I will also add that your rebuttal attempt regarding protecting the optical media completely flies in the face of your attempt to justify ‘consoles’ on the basis of better build quality and ease of use.
If you’re going to try to make arguments, take more time to make them hold some water. Reality is that software DRM doesn’t increase hardware build quality nor differentiate that hardware at all, other than that the existence of so much redundancy reduces the incentive 3rd parties have to try to compete. The market is diluted.
Parasitic redundancy is a positive for certain corporations and their investors. It’s entropy for consumers.
mode_13h - Thursday, July 8, 2021 - link
> If you’re going to try to make arguments take more time to make them hold some water.If you're going to reject my points, then simply reject them. You don't need to blame me for not making a better case, when you were probably going to reject them no matter what.
I make points to be considered. I don't pretend they will change anyone's mind who's already staked out a position. When is the last time you saw that, on the Internet?
mode_13h - Thursday, July 8, 2021 - link
> Your other claims, like build quality, are total nonsense.
I understand the point was rather vague, so I'll explain. If you build a mini-ITX PC with comparable horsepower to modern consoles, the fact is that they don't ship as well and the cases have to be beefed up to support the PC's expandability. The mechanical (and thermal) design of consoles is optimized for what they are, saving weight, cost, size, packing material, and damage in transit.
Let's say Sony and MS both supported the same OS, APIs, and games as PCs. So, you could run any game on any of the three. They might cost a little more, because they can no longer be sold at a slight loss, but they would still be cheaper than a PC with comparable specs.
> DRM that is parasitic for consumers.
PC games have DRM, as well.
GeoffreyA - Thursday, July 8, 2021 - link
"And it's also cheaper for the horsepower you get."Quite an important point nowadays.
mode_13h - Wednesday, July 7, 2021 - link
> Sony and MS churn out consoles to make money; AMD saw an opportunity
And let's not forget that Sony and MS were going to make consoles no matter what. And those consoles were going to compete for wafer supply. So, that's just another reason that whole point falls flat.
mode_13h - Wednesday, July 7, 2021 - link
> Samsung was caught for hiring astroturfers.
Sounds interesting. Source?
> explain how the existence of the ‘consoles’ as they are today ...
> are not more harmful to the PC gamer’s wallet
I don't understand this argument. If you could magically remove the silicon demands of consoles over night, then we'd certainly see chip prices drop, for a while. However, that's not what would actually happen, if there were never consoles. What would happen is that there'd be even more demand for gaming PCs, which use more silicon and other components than consoles. So, by eliminating the console tier, you could actually just increase silicon demand.
> Explain how having four artificial walled software gardens is in the interest of the consumer
Yeah, that's not great. I don't like walled gardens on phones, cloud, or elsewhere, either. Seems to me like more of an indictment of modern tech than specific to consoles, however.
> Explain how having just one 1/2 companies producing PC gaming GPUs
> has led to a healthy PC gaming platform
AMD gets valuable input & feedback from Sony and MS that influence their development. The market for console gaming also incentivizes them to continue their GPU development beyond the profit they make from PC gamers. It's easier to make those investments when you have a fairly stable income stream, which the PC GPU market hasn't been. I don't know how long you've been following PC GPUs, but it's very much been a tale of boom and bust cycles.
> than one that falls prey to mining, wafer unavailability,
That's the whole tech industry.
> lack of competitiveness
That's the overhang of AMD's multiple generations of uncompetitive CPUs, IMO. AMD was in bad financial shape for several years, leading to lack of investment and innovation. RDNA is the first we saw of the newly-invigorated AMD, from their graphics division.
The problem is that Nvidia is moving much faster than Intel's CPU division, so it will be harder for AMD to surpass Nvidia than it was for Zen3 to beat Intel. Still, they're making very strong progress.
> None of this is difficult to comprehend
What's difficult to comprehend is how you think the world would be so much better without consoles. It would cause more supply problems and higher prices, and those who couldn't pay would be stuck gaming on phones and tablets.
It's also hard to see why you're tilting at windmills, like this. Nobody has the power to simply make consoles go away. I could understand focusing on the walled garden thing. That's your strongest argument and it's conceivable that governments could actually force some meaningful change, there.
> Suffice to say that Linux with Open GL, Vulkan, and the ITX form factor ...
> it’s so obvious that apparently no one can be bothered to see it.
Valve has invested a lot into development of the Linux graphics stack, but I doubt it's paid off, for them. I mean, it works as a little bit of a threat against MS, but the idea of steam boxes as an alternative console hasn't been a thing for quite some time. Of course, a lot of people have gaming PCs hooked up to their living room TV, so it's not as if people don't know that it's possible. Nvidia has even sold GSync Ultimate monitors that are essentially gaming TVs.
mode_13h - Wednesday, July 7, 2021 - link
Regarding silicon supply, there's another point to consider and that's how often PC gamers do upgrades vs. console users. If you've got a console, there's no sense upgrading it until the next generation comes out. Even the half-steps we saw in the last generation probably garnered a relatively small number of upgrades.
In PC gaming, 7 years is multiple lifetimes to be using the same graphics card, if not also the same CPU. So, that's another way that forcing more people into PC gaming would just put even more strain on the silicon supply chain.
Oxford Guy - Thursday, July 8, 2021 - link
‘Yeah, that's not great. I don't like walled gardens on phones, cloud, or elsewhere, either. Seems to me like more of an indictment of modern tech than specific to consoles, however.’
Specious hand waiving.
1. Parasitic redundant walled gardens are the ONLY defining quality of consoles since Jaguar.
2. The end.
Oxford Guy - Thursday, July 8, 2021 - link
Since I know you won’t comprehend this I will explain it even more.
A console is hardware that is very different from common x86 hardware (home PCs). That is its defining quality. That hardware difference stems from the hardware needing to be different in order to deliver the console gaming experience.
The PS3 you mentioned erroneously — with its (sadly very harvested) Cell CPU that was greatly inferior to an x86 or PPC desktop machine for general-purpose computing/AI — versus streaming, the thing it was optimized for — and Blu-Ray, the other oddity about the PS3 hardware — was the last hurrah for the console. It’s dead, Jim.
What’s lacking flesh is the horse all the specious archaic arguments about the ‘consoles’ rode in on. Jaguar junk came out a long time ago. The facts have been clear for much longer than anyone should need to have to comprehend the state of affairs.
GeoffreyA - Thursday, July 8, 2021 - link
Oxford Guy, I would say that consoles allow one to play games on a TV and not worry about computers. Arguably, they look neater and are smaller. And the game is guaranteed to work. That's differentiation enough, from the consumer's point of view. (Whether they're x86 or not, is more of an implementation detail.) Many folk have a preference for them, too, having grown up on the Xbox or PS. Of course, we computer enthusiasts hold to the idea that PC gaming is number one, which is just a preference, like theirs. Yes, consoles are computers tricked up in simple garb, locked up in DRM jail, but if the consumer sees them as accessible gaming devices in the sitting-room, they will be sold.
GeoffreyA - Thursday, July 8, 2021 - link
On a side note, gaming as a whole has gone down. No doubt, consoles have had a hand in that. Many games today are the equivalent of Hollywood-manufactured junk. (Is there an analogy between TV destroying "the pictures" and consoles doing the same to games? Perhaps.) I sigh but don't worry too much about it. Just like old films, there are fantastic old games we can still play.
mode_13h - Thursday, July 8, 2021 - link
> gaming as a whole has gone down
I hear this, but I also sometimes hear about great indie titles. I think there are still good games being made, if you look for them, but I haven't played anything for at least 6 years or so.
GeoffreyA - Thursday, July 8, 2021 - link
100%. Well, thing is, I don't really play any more. I'm working my way through BioShock Infinite though (but haven't touched it in a year). When Half-Life 3 comes out, I'll be very excited too.
GeoffreyA - Thursday, July 8, 2021 - link
Lastly, walled gardens, whether in software or gaming, are something we should stand against.
mode_13h - Thursday, July 8, 2021 - link
+1 against walled gardens.
mode_13h - Thursday, July 8, 2021 - link
> Since I know you won’t comprehend this I will explain it even more.
Ah, warm fuzzies all around!
Just because I don't agree with you doesn't necessarily mean I don't understand. While we might disagree on the benefits of hardware streamlining and homogenization, I think we can agree that it's unfortunate consoles and PCs don't share OS, APIs, and software.
mode_13h - Thursday, July 8, 2021 - link
> Specious hand waiving.
How so? Am I wrong that it's a broader problem? Or that there could be broader solutions? Or that it's actually conceivable to do something about walled gardens, when there's literally nothing you can do about consoles beyond this lonely, inconsequential rant on the internet?
Maybe if the walled garden issue could be somehow tackled by regulators, it would have impacts on consoles that you deem positive?
Qasar - Friday, July 9, 2021 - link
mode_13h, dont waste your time with oxford guy any more, he hates consoles for what ever reason he has, and from what i can tell in his posts, always will. but to call them a scam, with NO proof what so ever, 17 posts from him, with no proof of this other then what looks to be his PERSONAL OPINION, is where the BS comes into play. others on here have asked him for proof of the console scam, as he puts it, and that has resulted in the same type of replies from him, and, no proof.
mode_13h - Friday, July 9, 2021 - link
> but to call them a scam, with NO proof what so ever, 17 posts from him
Yeah, I was just curious to hear his case. Now that I have, I think we can move on.
I think there's a missed opportunity, somewhere in here, to explore the what-ifs. So far, his only answer to consoles seems to be that people should just use mini-ITX PCs, and with little apparent appreciation of what that would mean for the industry or consumers.
I think there are other possibilities that are more interesting to explore, such as what if regulators blew open the doors on the walled gardens, by forcing platforms to authorize 3rd party signature authorities and app stores, as well as a requirement to open their APIs to all developers?
GeoffreyA - Friday, July 9, 2021 - link
Games that would run on either console.
Oxford Guy - Sunday, July 11, 2021 - link
‘So far, his only answer to consoles seems to be that people should just use mini-ITX PCs, and with little apparent appreciation of what that would mean for the industry or consumers.’
Once again, speculation rather than factual substance. It’s easy to ‘win’ arguments involving one’s fictional opponents.
I have said more than that but reading for comprehension rather than dismissal is not your modus operandi.
mode_13h - Sunday, July 11, 2021 - link
mode_13h - Sunday, July 11, 2021 - link
Well, then paint us your vision of a world without consoles.mode_13h - Sunday, July 11, 2021 - link
> It’s easy to ‘win’ arguments involving one’s fictional opponents.
I wouldn't say I'm trying to "win" anything, other than trying to get to the heart of your case and see if it's based on anything that withstands scrutiny.
Qasar - Sunday, July 11, 2021 - link
" Once again, speculation rather than factual substance. It’s easy to ‘win’ arguments involving one’s fictional opponents "hello pot, meet kettle.
" I have said more than that but reading for comprehension rather than dismissal is not your modus operandi "
maybe, but you have posted no proof of this console scam, and just personal opinion. maybe its you that needs to work on their reading comprehension. but considering you also seem to resort to insults and name calling, i wouldnt expect much.
Oxford Guy - Sunday, July 11, 2021 - link
Qasar, that you believe posts like that are worthwhile says it all.
Qasar - Sunday, July 11, 2021 - link
just like you and your bs console scam posts, right ? look, either post proof of this bs, or admit its just your personal opinion, and that you hate consoles.. cause thats all it looks like it is.
again, so far, you have posted NO proof of this bs.
TheJian - Monday, July 5, 2021 - link
It's comic how many of you see my AMD posts as negative. I'm trying to get them to make more money by making HIGHER MARGIN chips. How is it negative to tell someone, for the love of GOD, start MAKING NET INCOME, CHARGING MORE etc?
Too bad they don't have a block button on here like wccftech (only good thing about their system).
I gave tons of data for you to look at. See all those numbers in my post? That is DATA. Learn to debate the data, instead of attacking the messenger.
https://kubraconsult.files.wordpress.com/2019/07/p...
https://www.macrotrends.net/stocks/charts/AMD/amd/...
https://www.macrotrends.net/stocks/charts/AMD/amd/...
https://www.macrotrends.net/stocks/charts/AMD/amd/...
Do your own homework, I've given enough data to support my points. You just choose to remain stupid. I'd say ignorant, but, you don't seem to learn no matter how much data is put in front of your face. Compare 2009 to today. Great Q's for 2008, then realization you aren't growing and plummet 2009-2015... I could go on with data all day vs. Intel, NV, etc.
Feel free to ignore the wall of text and move along. Or grow a pair and try to debate my data.
mode_13h - Wednesday, July 7, 2021 - link
> I gave tons of data for you to look at.
But your premise is flawed. AMD cannot stop the production of console chips. Do you think MS and Sony are stupid? Whatever arrangements they have for wafer supply, you can bet that it's out of AMD's hands.
Also, the notion that Intel can buy up the 3 nm wafer supply before anyone else, or that it can justify doing so for it shareholders, or that TSMC would even be obligated to sell the wafers in volumes that could damage the prospects for its other customers are all laughable.
You're living in a fantasy land, not the real world.
TheJian - Monday, July 5, 2021 - link
You start with my wall, then write one...LOL. OK.
You are under the impression you win just because you have the best perf/chip. AMD had that ages ago for some 3-4yrs, and the same thing happened that is happening now, just for different reasons. The first time Intel cheated, bribed, etc, and AMD also had a hard limit of 20% share as that was all their fabs could make. Today, is it much different? AMD has CHOSEN to blow wads of wafers (the best nodes each time) on consoles, which is limiting the amount of HIGH MARGIN server/HEDT/GPU sales they could be getting instead. It is that simple.
You are in fantasy land and don't read enough. Apple is launching 3nm products in Sept 2022 and chips are being made/tested now for it. Google TSMC 3nm apple intel and you should get a 100 articles talking about 3nm for 2022 and Intel either in 2022 also or Q1 2023. Samsung issues have nothing to do with TSMC. Intel has made chips at TSMC for AGES, you are incorrect. They are literally about 8% of TSMC's total output yearly. You really don't read. They rank about 1% behind AMD this year if it all turns out as we've been told. And yes, I think it's as easy as apple writes the largest check, so they get every process first. Intel just has to write one large enough to be 2nd in line and with 18.6B TTM, they certainly have the cash to pay a premium for wafers, and it's legal. You must be too young to remember the last time AMD was here. I was a reseller for AMD then...ROFL.
HEDT wasn't mentioned by Linus, he said niche server and mobile at least, but didn't know about other stuff. Did you read my post or just freak when you saw the wall of data, not text? Jeez, I spent half the post telling AMD how to beat Intel but you just dismissed it all.
https://hothardware.com/news/apple-intel-racing-de...
https://www.gizmochina.com/2021/07/02/apple-intel-...
Intel already testing 3nm designs from TSMC...And goes on to say TSMC 3nm mass production h2 2022. Read more, much of this data has been out for AGES, for example:
https://www.pcgamer.com/tsmc-confirms-3nm-tech-for...
Mass production of 3nm h2 2022 again, from Dec 2020. I could probably go back further but you are wasting my time. You are claiming stuff isn't coming that 3 companies 100% involved in this stuff, are ALL claiming next year 3nm devices. You claim all 3 companies, TSMC, Apple, Intel all are lying...OK.
I don't have to beat you, I just have to stop you from getting wafers, which stops you from getting NET INCOME. I can beat you later, or simply bankrupt you and buy you out, given 2.1B units of ARM mobile (a PC in your hand; hooked up to a monitor/kb/mouse, it IS a PC) and Apple now making ARM Macs, etc. You have an AMD with no defense from the FTC now. It will be ARM vs. x86 vs. RISC-V.
Show me some NET INCOME, or STFU.
https://www.youtube.com/watch?v=NCYNftA4EYM
Linus: 3nm CPUs incoming 2022. About 50 minutes into the WAN Show they get to 3nm; they wait the entire video to talk about Intel 3nm in 2022. The comments section gives the timestamps for each topic.
Again, Intel doesn't have to win. I described more than one way to cause damage, and it is easy and legal for them to do. 6nm Warhol is already an example of wafers lost to Intel ;) That costs you too: if you have to cancel a design, a team's time just got wasted, R&D just got wasted, and there's probably some take-or-pay crap involved, etc. Now imagine Intel does what I said for all of H2 2022 and 2023. This is easy for Intel to win; it would be different if they weren't pulling 21B for 2018-2020 and a TTM of 18.6B NET INCOME (not revenue, do you even know the difference?).
https://www.hardwaretimes.com/intels-5nm-process-n...
Are you just ignoring news on purpose? Without dropping consoles, I don't see how AMD gets more wafers to assault server or any other product line heavily for a few more years. But by that time, everyone is basically on the same field again, only with an AMD that forgot to cash in for 4 years so far. Share means nothing if you make no income before I take it back... LOL.
mode_13h - Wednesday, July 7, 2021 - link
> AMD has CHOSEN to blow wads of wafers (the best nodes each time) on consoles
At the time AMD signed up to design the console chips, they had no reason to believe that TSMC couldn't scale capacity to meet all of AMD's demands on top of those console chip orders. Now that the situation has changed, it's too late. AMD doesn't get to decide which wafers are used for console chips.
> You are in fantasy land and don't read enough.
Um, maybe you're reading the wrong sort of stuff. Maybe you need to read more about how business actually works.
> HEDT wasn't mentioned by linus
You listen too much to Linus Tech Tips. He and WccfTech profit by being sensationalist. They just want views, clicks, and followers.
> you are wasting my time.
You're free to leave and stop posting. We won't miss you.
> 6nm warhol is already an example of wafers lost to intel ;)
Proof? There are other reasons it could have been canceled.
TheJian - Monday, July 5, 2021 - link
https://www.msn.com/en-us/news/technology/intel-wi...
https://www.cnet.com/news/apple-intel-will-be-firs...
"Apple's iPad may be the first device from the company to be powered by processors using 3nm technology, Nikkei reported, while Intel is working on designs for notebooks and data center servers. TSMC reportedly plans to manufacture more 3nm chips for Intel than Apple."
Reportedly more 3nm for INTEL than Apple... ROFL. No 3nm coming... it's just a ruse, I swear... We are done here; I didn't realize how much news is out on it, and I'd rather read some more than waste my time on fools like you. :) Good day. Linus mentioned niche servers; well, I guess that could be DC... I guess your DC comment seems moot now, huh? Intel doesn't have anything to beat AMD with? TSMC's 3nm is better than their 5nm, so if it's 3nm Intel vs. 5nm AMD, both on TSMC, TSMC seems to think Intel will win this contest if their figures are correct, but either way, it is massively better than 10nm Intel, right?? We done here?
TheJian - Monday, July 5, 2021 - link
I reported months ago, IIRC, that Intel was buying over 50% of TSMC 3nm. People laughed. Nikkei seems to be proving what I said previously. As the ramp progresses, obviously many others will join Apple/Intel on 3nm, but the first run went to Apple, so the second must be quite a bit larger, with more going Intel's way. I'm shocked you don't get that this is how it would go when someone has ~20B yearly NET INCOME to blow on the wafers YOU need. So easily flipped with a check for the next upcoming process. I expect Intel to fight for every wafer they can get that Apple won't pay a premium for; Intel can use them all and still keep their own fabs 100% full. Just more info, not sure if that post I'm talking about was as thejian or nobodyspecial (or one of my other nicks out there on stock sites). TSMC 3nm is 15% faster perf, or 30% lower watts, than TSMC 5nm. So Intel's answer to AMD's 5nm DC assault is 3nm DC server chips from TSMC... you get the point here, right?
https://www.windowscentral.com/intel-apple-tsmc-3n...
Is that enough Windows and tech sites, etc., telling you to rethink your position here? LOL. Again, this one says Intel is making DC server chips and notebooks. Both are areas where 3nm would be put to better use than a 10nm Intel chip, right? ROFLMAO
mode_13h - Wednesday, July 7, 2021 - link
> Just more info, not sure if that post I'm talking about was as thejian or nobodyspecial
> (or one of my other nicks out there on stock sites).
Thanks for confirming. Remember folks: free investment advice is worth exactly what you pay for it!
schujj07 - Friday, July 9, 2021 - link
Just because Intel might be on a smaller node doesn't mean their chip on 3nm will be faster than an AMD chip on 5nm. Even Tiger Lake, on Intel's 10nm, which is better than TSMC's 7nm, is at an IPC disadvantage compared to Zen 3. Most people in the tech industry think Intel will not be able to reach performance parity with AMD until 2025. That is how far behind Intel is right now due to their own 10nm fiasco.
mode_13h - Friday, July 9, 2021 - link
I'm not sure what to make of this TSMC 3 nm hype, but I wouldn't worry about it. We know AMD and TSMC are collaborating closely and it seems unlikely to me that Intel would be able to swoop in and buy up all the 3 nm wafer supply before AMD got a bite.
GeoffreyA - Tuesday, July 6, 2021 - link
"What is this wall of text lol."Well, I don't think he subscribes to Polonius's "brevity is the soul of wit."
mode_13h - Wednesday, July 7, 2021 - link
Nice!
Oxford Guy - Thursday, July 8, 2021 - link
Instead of dissertations, elite doctoral programs are reportedly going to transition to MadLibs.
GeoffreyA - Thursday, July 8, 2021 - link
Doesn't surprise me. The postmodern generator at Elsewhere already shows how "valuable" many academic papers are.
Oxford Guy - Sunday, July 11, 2021 - link
Reading well-designed peer-reviewed studies from outside of one’s field is a good way to be humbled.
I suppose jargon has gotten out of control, though. One article said sex researchers during the W administration invented new obscurantist jargon in order to try to maintain funding.
GeoffreyA - Sunday, July 11, 2021 - link
Agreed. I'd say anything outside of sociology and such fields can be decent reading. In physics, etc., there's not much scope for hogwash, like gravity is a social construct. Still, there's a tendency for academics to lead one's mind through a maze, till it's not possible even to think clearly any more. And it all begins with language. Soon, spectres begin to operate as realities in people's minds.
Clarity of thought is what writing should be about. Setting down truth in simple, dignified terms. And truth is nature, or a reflection of her. She is plain, simple, modest, straightforward. Hard to find, and blushing like a rose.
mode_13h - Monday, July 5, 2021 - link
Intel can't justify paying way more for those wafers than they can make in revenue from the chips, and the price per transistor is going to be prohibitive for GPUs or probably even server chips, for years. What you're missing about Apple is that their phone chips tend to be smaller than others and higher-margin as well. This makes a new process node viable for them (or other leading-edge phone SoCs) sooner than it is for other types of chips. That's why they bought the initial 5 nm capacity.
To the extent that 3 nm does make economic sense, I'm sure AMD has as much opportunity to place orders as Intel does.
Oxford Guy - Monday, July 5, 2021 - link
‘the price per transistor is going to be prohibitive for GPUs or probably even server chips, for years’
1. Define ‘prohibitive’.
2. Citation requested.
mode_13h - Monday, July 12, 2021 - link
> 1. Define ‘prohibitive’.
Costing more per mm^2 than Intel can sell them for.
> 2. Citation requested.
Based on my observations that small, high-margin mobile chips are always the ones on leading process nodes. This is partly due to low yield of new process nodes (thus, favoring smaller chips). Also, because it takes time to bring online more production lines, the initial demand overwhelms supply, which bids up prices.
https://www.anandtech.com/show/16732/tsmc-manufact...
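To put a rough shape on that yield argument, here is a minimal, purely illustrative sketch of the standard Poisson yield model. None of the numbers come from the posters above or from TSMC; the wafer price, defect density, and die areas are all made-up assumptions, and the model ignores edge loss, scribe lines, and binning. The point is only qualitative: at a fixed defect density, cost per good die grows much faster than die area, which is why small, high-margin phone SoCs can absorb early-node wafer prices long before large GPU or server dies can.

import math

def cost_per_good_die(wafer_price_usd, die_area_mm2, defects_per_cm2, wafer_diameter_mm=300):
    """Rough per-good-die cost; all inputs are illustrative assumptions."""
    # Crude gross-die count: wafer area / die area (ignores edge loss and scribe lines).
    wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2
    gross_dies = wafer_area_mm2 / die_area_mm2
    # Poisson yield model: fraction of dies expected to have zero defects.
    yield_fraction = math.exp(-defects_per_cm2 * die_area_mm2 / 100.0)
    return wafer_price_usd / (gross_dies * yield_fraction)

# Hypothetical inputs: $17,000 per wafer, 0.2 defects/cm^2 on an immature node.
for area in (100, 250, 600):  # roughly: phone SoC, desktop CPU, big GPU
    print(f"{area:4d} mm^2 die -> ${cost_per_good_die(17000, area, 0.2):8.2f} per good die")

With these assumed inputs, the 600 mm^2 die works out to more than an order of magnitude pricier per good die than the 100 mm^2 one, even though the wafer price and defect density are identical, which is the economics being described in the comment above.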
Kipii45 - Friday, July 23, 2021 - link
HBM + Optane, with no DDR5 in between, would be an interesting configuration if possible.