18 Comments

  • Alistair - Friday, December 17, 2021 - link

    It's fun going backwards. 12nm in 2019, moving to 8nm in 2021, and 7nm for AMD, 5nm for Apple... then back to 12nm in 2022... NO thanks
  • RU482 - Friday, December 17, 2021 - link

    Intel be like "Hey Guys, wait up"
  • Alistair - Friday, December 17, 2021 - link

    Imagine what a 5nm RTX 3060 would be like. Call it the GTX 4050 and charge $200.
  • meacupla - Saturday, December 18, 2021 - link

    Low-end Nvidia products have never used the best process node available.
    In fact, I'm surprised they used 12nm at all.
    I wouldn't have expected 16nm on Fermi architecture.
  • Kangal - Tuesday, December 21, 2021 - link

    I would hardly call the RTX 3050 Ti low-end.

    This position has always been Nvidia's high-end offering for cards that run at 70W and don't need an external power connector. It's been the go-to option for HTPCs and office PCs, and they generally carry a price premium: the original GTX 650 in 2012, then the GTX 750 Ti in 2014, the GTX 1050 Ti in 2016, and lastly the GTX 1650 back in 2019.
  • Kangal - Tuesday, December 21, 2021 - link

    This market has been underserved by the OEMs for a long while now. It's nice seeing the RTX 3050 Ti looking competitive on paper. Hopefully they will be available to buy, come in a low-profile (half-height) variant, and cost a fair price. Though I doubt it. Otherwise this market will continue to stagnate, or possibly be served by Intel's Xe and dominated by Apple's own SoCs/APUs.

    Would've been great to grab an ex-office PC like a Dell OptiPlex 7070 SFF with an i7-8700 for something like USD $270, throw in an $80 NVMe drive and $20 of extra RAM, then complete the system with an RTX 3050 Ti for USD $130, for a total of USD $500 give or take. Not too bad for roughly the performance of the Xbox Series S. That is, if the PCIe x4 slot doesn't cause problems, since the x16 slot is placed wrongly/too low.
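
    For what it's worth, a quick sanity check on that parts math (rough prices as quoted above):

    ```python
    # Rough tally of the hypothetical ex-office build above (USD, approximate)
    parts = {
        "Dell OptiPlex 7070 SFF (i7-8700)": 270,
        "NVMe SSD": 80,
        "Extra RAM": 20,
        "RTX 3050 Ti (hoped-for low-profile card)": 130,
    }
    total = sum(parts.values())
    print(f"Total: ~${total}")  # ~$500, give or take
    ```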

    ...but I'm dreaming here, those types of deals are gone.
  • Samus - Saturday, December 18, 2021 - link

    It seems like putting 4GB of VRAM on a 64-bit bus is a total waste on Ampere. My GTX 970 has DOUBLE that memory bandwidth using ancient memory technology on a 7-year-old card.
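
    Rough math on that, assuming the 2050's announced 64-bit / 14 Gbps GDDR6 versus the 970's 256-bit / 7 Gbps GDDR5 (nominal peak figures, ignoring the 970's slower 0.5GB segment):

    ```python
    # Peak memory bandwidth (GB/s) = bus width in bytes * effective data rate (Gbps)
    def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gbps: float) -> float:
        return (bus_width_bits / 8) * data_rate_gbps

    gtx_970 = peak_bandwidth_gbs(256, 7.0)    # GDDR5 -> 224 GB/s nominal
    rtx_2050 = peak_bandwidth_gbs(64, 14.0)   # GDDR6 -> 112 GB/s

    print(f"GTX 970:  {gtx_970:.0f} GB/s")
    print(f"RTX 2050: {rtx_2050:.0f} GB/s")
    print(f"Ratio:    {gtx_970 / rtx_2050:.1f}x")  # 2.0x, i.e. double
    ```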
  • Slyr762 - Saturday, December 18, 2021 - link

    Agreed. Actually, 970 was what I had before a 1080, lol.
  • Oxford Guy - Saturday, December 18, 2021 - link

    Over 3.5 GB.
  • Samus - Sunday, December 19, 2021 - link

    We used to think the 3.5GB thing mattered, especially when BF2042 was in beta and there was clear stutter when more than 3.5GB was being utilized. But like all the games released in the years since the GTX 970's segmented-memory issue became known, 2042 takes it into account. Which says a lot, considering 2042 doesn't even officially support cards prior to the GTX 10-series.

    I've been running 2042 at 1080p (technically 1200p) on low detail just fine on my 970, and monitoring its memory usage in GPU-Z, I see no stutter when more than 3.5GB is used.

    Posts on forums and Reddit from beta testers confirm something happened during the beta that rectified this: even when the 512MB segment is used (which only gets a 32-bit memory interface to the crossbar), it is still exponentially faster than going to system RAM. What needed to be done to prevent the stutter was for DICE to make sure textures didn't get cached into that segment. You can monitor the memory allocation in detail using GPU-Z to see the layout, and the game does what some other recent games do: it uses two memory partitions. This is actually done for all GPUs in BF2042, and presumably other games, but there appears to be a specific memory allocation for the GTX 970 that, you guessed it, makes a 3.5GB and a 512MB partition, with the 512MB partition used for non-texture data that still benefits from having high-speed memory.

    Keep in mind that many video cards (like those in this AT post) only have 64-bit memory buses, so 32-bit is totally fine for logging, z-buffering, pre-rendering, meshes, shades/colors, even HUD/static data. The goal is to keep the data quickly accessible by the crossbar. DICE obviously didn't prioritize supporting unofficial hardware until the game released, as that wasn't really the focus of the beta.

    Basically the only time the 0.5GB segment is ever an issue is when devs don't use it right and textures get stored indiscriminately across the entire memory space. Nearly all games handle it properly, as they should, because the GTX 970 was one of the best-selling GPUs in history and a great many of them are still in use. Pretty crazy when you consider they are 7 years old, but it's also worth mentioning why some people like myself hold onto them: one of the last GPUs with a compact blower, the last generation with analog output and a native HD15 connector, low power consumption, still relevant for 1080p gaming, and actively supported drivers.
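
    To make the placement idea above concrete, here's a toy sketch (not DICE's or Nvidia's actual code; the segment handling and resource names are invented for illustration) of an allocator that keeps textures in the fast 3.5GB segment and lets small non-texture data use the slow 0.5GB segment:

    ```python
    # Toy model of segmented VRAM placement on a GTX 970-style layout:
    # textures stay in the fast segment, while small latency-tolerant buffers
    # (HUD, logging, aux/mesh data) are steered into the slow segment.
    FAST_SEGMENT_MB = 3584   # 3.5 GB behind the full-width crossbar links
    SLOW_SEGMENT_MB = 512    # 0.5 GB behind the single 32-bit crossbar link

    class SegmentedVram:
        def __init__(self) -> None:
            self.fast_free = FAST_SEGMENT_MB
            self.slow_free = SLOW_SEGMENT_MB

        def alloc(self, name: str, size_mb: int, is_texture: bool) -> str:
            # Non-texture data tries the slow segment first so the fast
            # segment stays free for textures; textures never land there.
            if not is_texture and self.slow_free >= size_mb:
                self.slow_free -= size_mb
                return f"{name}: {size_mb} MB -> slow 0.5GB segment"
            if self.fast_free >= size_mb:
                self.fast_free -= size_mb
                return f"{name}: {size_mb} MB -> fast 3.5GB segment"
            return f"{name}: {size_mb} MB -> spill to system RAM (much slower)"

    vram = SegmentedVram()
    print(vram.alloc("terrain_textures", 2800, is_texture=True))
    print(vram.alloc("character_textures", 700, is_texture=True))
    print(vram.alloc("hud_and_static_data", 96, is_texture=False))
    print(vram.alloc("depth_and_aux_buffers", 256, is_texture=False))
    ```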
  • Oxford Guy - Tuesday, December 21, 2021 - link

    Fraud isn't something I would spend so many words trying to sugarcoat.

    'exponentially faster than going to system RAM'

    That's what a GPU's RAM speed is measured by. Absolutely.
  • meacupla - Saturday, December 18, 2021 - link

    Personally, I am waiting to see how well AMD's Ryzen 6000 U- and H-series with Navi 2 graphics and DDR5 will perform.
    The engineering-sample benchmarks popping up show them performing at GTX 1650 levels.
  • Samus - Sunday, December 19, 2021 - link

    Wow that would be ridiculous.
  • Rookierookie - Saturday, December 18, 2021 - link

    Early benchmarks put the 2050 on the same level as the 1650 Ti.
  • TheinsanegamerN - Saturday, December 18, 2021 - link

    Maybe at 720p, or 1080p at low detail in non-demanding games. That 64-bit bus is going to be catastrophic for frametime consistency.
  • TheinsanegamerN - Saturday, December 18, 2021 - link

    The 2050 having more cores than a 2060, but a third of the memory bandwidth and 2GB less VRAM... just... WTF, Nvidia. Talk about gimping your GPU.
  • IBM760XL - Sunday, December 19, 2021 - link

    After reading this article, I now understand the RTX 2050's placement. Previously I wondered why you'd bother with ray tracing on that spec of a chip, but now I see that the ray tracing isn't the point; it's to be able to make use of chips that would otherwise have been discarded. And with the market the way it is, it's worthwhile for Nvidia and laptop partners to do that, whereas maybe it wouldn't have been before.

    I'm still not a fan of the MX branding, though. Why not call the MX570/MX550 the 3030 GT and 3020 G, or something like that? I can never remember which MX cards equate to which low-end GT/G/GTX cards, and having the same naming scheme would help enormously.

    Although part of it might also be that I bought an MX440 back around 2003. The MX name has never been the same since I realized a Ti 4000-series card (or even a 3000-series) would have been better. But at least then the numbers lined up, 4000/400; now it's 3000/500.
  • FatFlatulentGit - Monday, December 20, 2021 - link

    Come on Nvidia, or AMD, or even Intel: laptops are nice, but when are we gonna get a good 75W desktop card? I'm still waiting for HDMI 2.1 on my HTPC...
