Original Link: https://www.anandtech.com/show/17583/the-nvidia-geforce-project-beyond-and-gtc-fall-2022-keynote-live-blog-starts-at-8am-pt1500-utc



Kicking off a bit later this morning will be NVIDIA’s GTC 2022 fall keynote, which should prove to be a very interesting event.

Besides NVIDIA’s usual run-through of business announcements, the first part of this GTC’s keynote will be focused on NVIDIA’s GeForce products, making for a very rare appearance at NVIDIA’s increasingly enterprise-focused event. NVIDIA has been teasing the GeForce portion of the event as “Project Beyond” for about the past month, and in traditional secretive NVIDIA fashion, that’s all we officially know ahead of the show.

Given the timing of this event, the announcement of NVIDIA’s next generation of consumer video cards (GeForce RTX 40 series?) and associated GPUs is a very safe bet. The GeForce RTX 30 series premiered just over two years ago, which is right in line with NVIDIA’s usual two-year consumer product cadence.

Significant performance improvements are (hopefully) in the cards, but it will be interesting to see what NVIDIA does in light of the current crypto hangover, which hit a fever pitch last week with the long-awaited completion of the Ethereum Merge, eliminating the need for video cards to mine the popular cryptocurrency. The market for video cards is almost certain to be saturated for the next several months, especially at the performance levels covered by the current RTX 30 series cards. That means cards faster than the RTX 3090 and its ilk are the most likely to succeed in the current climate.

At the same time, NVIDIA has been relatively stagnant on the graphics feature front since the release of the Turing architecture (RTX 20 series) in 2018, which introduced hardware support for what would become DirectX 12 Ultimate (FL 12_2). So a more feature-focused release would not be unusual for NVIDIA, though we’re not immediately aware of any new features under development for DirectX.

Following CEO Jensen Huang’s GeForce presentation, we’re expecting the GTC keynote to then dovetail into a more traditional enterprise presentation. NVIDIA’s H100 Hopper accelerator will no doubt be a big focus, as it’s slated to ship soon. As well, NVIDIA has been increasingly focused on robotics, medical, automotive, and of course their Omniverse simulation environment. So there should be no shortage of other things to talk about – even if we’re here first and foremost for the gaming cards.

NVIDIA's keynote starts at 8am Pacific (15:00 UTC), so please join us for our live blog coverage of the green machine’s latest announcements.


10:59AM EDT - And here we go. There's just over a minute left on the stream timer

10:59AM EDT - A thank you to my colleague Gavin Bonshor, who picked up on the fact that I wrote the wrong year in the article title (d'oh!)

11:00AM EDT - Like NVIDIA's other GTC presentations, I'm expecting this to be entirely pre-recorded

11:00AM EDT - So it should be a tight show

11:00AM EDT - And here we go

11:01AM EDT - "Today we will show you new advances in NVIDIA RTX, NVIDIA AI, and NVIDIA Omniverse"

11:02AM EDT - "Future games will not have pre-baked worlds. Future games will be simulations"

11:03AM EDT - Currently rolling some demo footage that is apparently being run on a single GPU

11:03AM EDT - "Racer X"

11:05AM EDT - Discussing how NVIDIA made the demo possible

11:05AM EDT - Emphasis on RTX and ray tracing

11:05AM EDT - Announcing "Ada Lovelace", third generation RTX

11:06AM EDT - NVIDIA engineers worked with TSMC on the 4N process used for these GPUs

11:06AM EDT - 76B transistors!

11:06AM EDT - New SM with 90 TFLOPS

11:06AM EDT - Major new technology: shader execution reordering

11:06AM EDT - SER

11:07AM EDT - Out of order execution for GPU shaders?

11:07AM EDT - New RT core with 2x the ray-triangle intersection throughput

11:07AM EDT - New tensor core with transformer engine and FP8 support

11:08AM EDT - Shader execution reordering improves shader execution efficiency by reordering shaders to better take advantage of coherency

11:09AM EDT - Thereby reducing one of the issues that makes ray tracing less efficient on GPUs
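
(To make the coherency point concrete, here's a conceptual CPU-side sketch in Python. This is emphatically not NVIDIA's SER API or implementation; it just shows why grouping hits by material means each batch of 32 "threads" runs far fewer distinct shaders.)

```python
# Conceptual illustration only, not NVIDIA's SER API: group divergent
# ray-tracing work so that neighboring "threads" run the same shader.
import random
from collections import namedtuple

Hit = namedtuple("Hit", ["ray_id", "material_id"])

def shaders_per_batch(hits, batch_size=32):
    """Count distinct materials (i.e. distinct shaders) in each batch of
    hits; on a real GPU this is roughly how divergent a warp would be."""
    batches = [hits[i:i + batch_size] for i in range(0, len(hits), batch_size)]
    return [len({h.material_id for h in b}) for b in batches]

random.seed(0)
# Hits as they come out of traversal: materials are effectively random
hits = [Hit(i, random.randrange(8)) for i in range(1024)]
# "Reordered" hits: sorted so each batch mostly sees a single material
reordered = sorted(hits, key=lambda h: h.material_id)

unsorted_counts = shaders_per_batch(hits)
sorted_counts = shaders_per_batch(reordered)
print("avg shaders per batch, unsorted: ", sum(unsorted_counts) / len(unsorted_counts))
print("avg shaders per batch, reordered:", sum(sorted_counts) / len(sorted_counts))
```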

11:09AM EDT - DLSS 3 is on Jensen's slide

11:10AM EDT - Yep, DLSS 3

11:10AM EDT - Talking about how DLSS is used to get images up to 4K resolution without breaking the bank on performance

11:10AM EDT - DLSS 3 is a new AI that can generate entire frames rather than just pixels

11:10AM EDT - So frame interpolation/projection, then?

11:11AM EDT - Optical flow accelerator provides NN with pixel motion and geometry to generate intermediate frames

11:11AM EDT - "Boosting game performance by up to four times over brute-force rendering"

11:11AM EDT - Benefits both CPU and GPU heavy games, since you're not having to render frames on either side
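
(For a rough sense of the frame generation idea, here's a toy Python/OpenCV sketch that interpolates a middle frame from estimated motion. DLSS 3 does this with a dedicated optical flow accelerator, game-provided motion vectors, and a neural network; nothing below is NVIDIA-specific.)

```python
# Toy frame interpolation: estimate motion between two frames and warp one of
# them halfway along that motion. This is only the rough concept behind frame
# generation, not DLSS 3's actual method.
import cv2
import numpy as np

def interpolate_midframe(frame_a, frame_b):
    """frame_a/frame_b: grayscale uint8 images of the same size."""
    # Dense optical flow from frame_a to frame_b
    flow = cv2.calcOpticalFlowFarneback(frame_a, frame_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = frame_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    # Crude approximation: sample frame_a half a motion vector "back" so
    # pixels land roughly where they would be at the halfway point in time
    map_x = (grid_x - 0.5 * flow[..., 0]).astype(np.float32)
    map_y = (grid_y - 0.5 * flow[..., 1]).astype(np.float32)
    return cv2.remap(frame_a, map_x, map_y, cv2.INTER_LINEAR)
```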

11:12AM EDT - Now showing a demo of Cyberpunk with DLSS 3

11:13AM EDT - Recapping the need for DLSS. Transistor budgets/performance have not kept up with resolution and image quality demands

11:14AM EDT - (Or rather, the image quality gains are slowing way down if you stick with just what you can buy with GPU performance gains)

11:14AM EDT - Announcing Portal RTX

11:14AM EDT - Remastered Valve's Portal with Omniverse tools

11:15AM EDT - Rebuilt with RT and DLSS 3

11:15AM EDT - This looks like it's Portal 2 as well?

11:15AM EDT - Coming in November as DLC for current Portal owners

11:16AM EDT - New application: RTX Remix

11:16AM EDT - Captures the game into USD (Universal Scene Description)

11:16AM EDT - Can then play with the scene to alter lighting and other attributes

11:17AM EDT - And can then play those modified games
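
(For a sense of what "captured into USD" makes possible, here's a minimal sketch using Pixar's open source pxr Python bindings. The file name and prim path are hypothetical, and this is standard USD tooling rather than the Remix application itself.)

```python
# A minimal sketch of working with a captured scene as a standard USD file,
# using Pixar's reference `pxr` bindings (not NVIDIA's Remix tooling).
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("captured_scene.usd")  # hypothetical capture output

# Walk the captured scene graph and list the prims (meshes, lights, etc.)
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())

# Nudge one object to tweak the scene, then save the edit back out
prim = stage.GetPrimAtPath("/World/Props/Crate")  # hypothetical path
if prim:
    UsdGeom.XformCommonAPI(prim).SetTranslate((0.0, 1.0, 0.0))
    stage.GetRootLayer().Save()
```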

11:17AM EDT - Available shortly after the Lovelace launch

11:17AM EDT - 1400 TOPS tensor performance

11:17AM EDT - 2x faster than Ampere for rasterized games. 4x for RT games

11:17AM EDT - NVIDIA has pushed clockspeeds over 3GHz in the labs

11:18AM EDT - And now to cards

11:18AM EDT - GeForce RTX 4090, 24GB GDDR6X, $1600. Available October 12th

11:18AM EDT - Expected to be 2-4x faster than RTX 3090 Ti

11:19AM EDT - GeForce RTX 4080, 16GB and 12GB GDDR6X versions. $899 for 12GB, $1199 for 16GB

11:19AM EDT - RTX 3060 now starting at $329 (again?)

11:20AM EDT - Start saving your pennies now

11:21AM EDT - Now rolling another video

11:22AM EDT - And that's it for GeForce

11:22AM EDT - We're now on to enterprise matters, starting with the Omniverse

11:23AM EDT - Jensen is talking up Omniverse for multiple tasks

11:24AM EDT - Several new features for Omniverse and simulations in general

11:24AM EDT - Omniverse JT connector

11:25AM EDT - For connecting to Siemens software

11:27AM EDT - Racer X was created with 30 artists in 3 months using Omniverse for collaboration

11:27AM EDT - Continuing to talk about Omniverse and quickly going through all of the different groups/customers using it

11:28AM EDT - "In the future, everything made will have a digital twin"

11:29AM EDT - Charter and Heavy.AI are using Omniverse to make digital twins of their cellular networks to simulate their RF propagation and resulting network performance

11:31AM EDT - Now rolling a video of further users and uses of Omniverse/digital twins

11:32AM EDT - (Jensen's pre-recorded presentations are less a written speech and more going down a list of bullet points, so he switches subjects very quickly)

11:32AM EDT - Already 150 connectors to Omniverse

11:33AM EDT - NVIDIA has built a GDN - graphics delivery network - as part of building out GeForce Now

11:34AM EDT - NVIDIA is using this to build out a global omniverse cloud service

11:34AM EDT - Anything running on GDN can be streamed to any client device

11:35AM EDT - Omniverse as a cloud service

11:35AM EDT - Omniverse Cloud

11:35AM EDT - Using OVX servers

11:37AM EDT - Announcing Omniverse Cloud. Infrastructure as a Service (IaaS)

11:37AM EDT - Cloud, Replicator, and Farm containers available on AWS today

11:37AM EDT - Also offering them as managed services

11:37AM EDT - Now on to robotics

11:38AM EDT - Looks like an update on NVIDIA's SoCs?

11:38AM EDT - Atlan is dead!

11:38AM EDT - Being replaced with another SoC: Thor

11:39AM EDT - (Fun fact: in the comics, Atlan is known as the "Dead King")

11:39AM EDT - Why the change? Thor is faster, and NVIDIA has decided to pull in newer features from its latest architectures, such as multi-instance GPU

11:39AM EDT - 2000 TFLOPS FP8

11:39AM EDT - 77B transistors

11:40AM EDT - Tensor cores feature transformer engines

11:40AM EDT - Complete multi-domain isolation (running 3 OSes on one computer for different tasks, for example)

11:41AM EDT - As with Atlan, NVIDIA is aiming to let automakers do all of their in-car computation, from self-driving to infotainment, on a single processor

11:41AM EDT - Now on to NVIDIA DRIVE

11:42AM EDT - NV has developed an AI to make a 3D scene from imported sensor data

11:42AM EDT - Neural Reconstruction Engine

11:42AM EDT - Now rolling a video explaining the feature

11:43AM EDT - Scenes and assets are reconstructed

11:43AM EDT - Take recordings and modify them to easily create new/tweaked scenarios

11:45AM EDT - All of this is part of NVIDIA's big goal of training self-driving AIs using sims, rather than having to do vast amounts of training in real-time on the roads

11:47AM EDT - And of course, all of this is being developed with a focus on safety

11:48AM EDT - Now rolling a video showing NVIDIA's various DRIVE technologies and features being used in concert on a drive

11:48AM EDT - (Where is my self-driving car?)

11:49AM EDT - "Robotic computers are the newest type of computers"

11:50AM EDT - NVIDIA is currently on their Orin SoC for both cars/DRIVE and robotics

11:50AM EDT - Now talking about the Jetson platform

11:50AM EDT - And how NVIDIA's partners are using Jetson robotics

11:50AM EDT - Announcing Jetson Orin Nano

11:51AM EDT - 80x faster than previous Jetson Nano

11:51AM EDT - This is the 6th Orin SKU for Jetson. The slowest, but also the cheapest

11:51AM EDT - NVIDIA is also using Orin for a new platform called IGX

11:52AM EDT - MicroATX motherboards with an Orin SoC and ConnectX NIC

11:52AM EDT - Just add a GPU on a video/accelerator card

11:53AM EDT - This is for edge computing devices in multiple fields

11:53AM EDT - Medical, robotics, etc

11:53AM EDT - Several new surgical robotics systems are being announced that will be using IGX and NVIDIA's Clara software stack

11:54AM EDT - Now on to the Isaac robotics platform

11:55AM EDT - Driving autonomous robots and more

11:55AM EDT - Now rolling a video of Isaac in action

11:59AM EDT - Now we're on to AI software frameworks

11:59AM EDT - NV boasts 3.5 million developers

12:01PM EDT - NVIDIA RAPIDS now has a plug-in for Spark 3

12:02PM EDT - Updates to NVIDIA's Triton software as well

12:05PM EDT - NVIDIA can now accelerate Deep Graph Library and PyTorch Geometric graph neural networks
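
(As a reference point for the kind of workload being accelerated, here's a minimal PyTorch Geometric graph neural network on a toy graph; this is just stock PyG, nothing NVIDIA-specific.)

```python
# A tiny two-layer GCN on a toy graph, the sort of GNN workload in question.
import torch
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 4 nodes with 3-dim features, undirected edges listed both ways
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)
x = torch.randn(4, 3)
data = Data(x=x, edge_index=edge_index)

class TinyGCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(3, 16)
        self.conv2 = GCNConv(16, 2)  # e.g. 2-class node classification

    def forward(self, data):
        h = self.conv1(data.x, data.edge_index).relu()
        return self.conv2(h, data.edge_index)

model = TinyGCN()
out = model(data)  # shape: [4, 2], one logit pair per node
print(out.shape)
```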

12:06PM EDT - New project: CV-CUDA, an open source library for imaging and computer vision

12:06PM EDT - Also shipping an updated version of NVIDIA's cuQuantum software for simulating quantum computers

12:07PM EDT - Being used by both Amazon and Oracle

12:07PM EDT - As well as QODA for hybrid quantum-classical computing

12:08PM EDT - Fully emulating a quantum-classical computer

12:09PM EDT - Now on to the subject of large language models

12:09PM EDT - GPT-3, etc

12:11PM EDT - Large models can be used for multiple tasks. They have to be, as fully retraining them is non-viable given their size

12:11PM EDT - But there are ways to tweak existing large models by using "prompt learning"

12:12PM EDT - Which NVIDIA's NeMo software can do
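
(Roughly speaking, prompt learning keeps the big pretrained model frozen and only trains a handful of "soft prompt" embeddings that get prepended to the input. Below is a plain-PyTorch sketch of that idea; it is not NeMo's API, and base_model/embed() in the usage comments are hypothetical names.)

```python
# Sketch of soft prompt tuning in plain PyTorch, not NeMo's actual API.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, embed_dim: int, prompt_len: int = 8):
        super().__init__()
        # The only trainable parameters: a few "virtual token" embeddings
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: [batch, seq_len, embed_dim]
        batch = token_embeddings.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, token_embeddings], dim=1)

# Usage sketch with a hypothetical frozen transformer `base_model`:
#   for p in base_model.parameters():
#       p.requires_grad = False               # the big model never changes
#   soft_prompt = SoftPrompt(embed_dim=4096)
#   optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-4)
#   inputs = soft_prompt(base_model.embed(tokens))  # hypothetical embed()
```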

12:12PM EDT - Announcing NeMo LLM Service

12:12PM EDT - Cloud service that trains a model based on example tasks

12:14PM EDT - Announcing BioNeMo large language model service

12:16PM EDT - Now talking about NVIDIA's Hopper architecture, used in the H100 accelerator

12:17PM EDT - Which is also the first NVIDIA architecture to offer transformer engines

12:17PM EDT - (And now we're quoting Star Trek II)

12:17PM EDT - Hopper/H100 can serve 30x as many users in large language models

12:17PM EDT - H100 is available now on LaunchPad

12:18PM EDT - DGX H100 pre-orders starting now. Shipping in Q1'2023

12:18PM EDT - OEM systems with H100 available in October

12:18PM EDT - H100 is now in full production

12:19PM EDT - Now on to recommender systems

12:20PM EDT - NVIDIA believes that Grace Hopper, NVIDIA's Grace CPU + Hopper H100 GPU superchip, is well suited for executing recommender systems

12:20PM EDT - A 120-node Grace Hopper system can process a 70TB recommender system

12:21PM EDT - Recapping Grace Hopper

12:21PM EDT - 72 Neoverse V2 cores, 900GB/sec NVLink C2C, 500 GB/sec LPDDR5X w/ECC, 117MB L3 cache, and a 3.2 TB/sec coherency fabric

12:22PM EDT - Grace and Grace Hopper are designed for high performance computing systems

12:22PM EDT - Grace systems will be available in the first half of 2023 as HGX and OVX systems

12:25PM EDT - Now talking about the second generation of OVX systems

12:25PM EDT - Based around the L40 datacenter GPU

12:25PM EDT - Based on the Ada Lovelace architecture, of course

12:25PM EDT - L40 GPUs are in full production

12:27PM EDT - Now rolling another demo video, this time on avatar creation and NVIDIA ACE

12:30PM EDT - (Why does it seem like voice synthesis hasn't progressed much in the last few years? It still sounds so stilted)

12:31PM EDT - Now recapping today's announcements

12:32PM EDT - Lovelace GPUs, Hopper in full production, Grace Hopper in the first half of next year, second-generation OVX servers and L40 GPUs

12:32PM EDT - And Thor SoC replaces Atlan. For 2025 autos

12:33PM EDT - And a slew of software library/framework updates

12:34PM EDT - 200 talks scheduled for this GTC event

12:36PM EDT - And that's a wrap. Thanks for joining us. Now to look into more of that GeForce news...
