490 annotations
We are expanding our supply quite significantly.
Transcript
2024 Q3
2 Dec 23
Grace Hopper is in high-volume production now.
We are on a very, very fast ramp with our first data center CPU to a multibillion-dollar product line.
It adds to Ethernet with an end-to-end solution with BlueField as well as our Spectrum switch
we invented this new platform that extends Ethernet
with InfiniBand and with software-defined networks, we could do congestion control, adaptive routing, performance isolation and noise isolation, not to mention, of course, the data rate and the low latency
we don't expect their contribution to be material or meaningful as a percentage of the revenue in Q4
We are, though, working to expand our data center product portfolio to possibly offer new regulation-compliant solutions that do not require a license.
driven by higher data center sales and lower net inventory reserves
including a 1 percentage point benefit from the release of previously reserved inventory related to the Ampere GPU architecture products
We launched a new line of desktop workstations based on NVIDIA RTX Ada Lovelace generation GPUs and ConnectX SmartNICs, offering up to 2x the AI processing, ray tracing and graphics performance of the previous generation.
NVIDIA RTX is the workstation platform of choice for professional design
We just released TensorRT LLM for Windows, which speeds up on-device LLM inference by 4x.
We now have enterprise AI partnerships with Adobe, Dropbox, Getty, SAP, ServiceNow, Snowflake and others to come.
Spectrum-X can achieve 1.6x higher networking performance for AI communication compared to traditional Ethernet offerings.
Networking now exceeds a $10 billion annualized revenue run rate. Strong growth was driven by exceptional demand for InfiniBand, which grew fivefold year-on-year. InfiniBand is critical to gaining the scale and performance needed for training LLMs.
Microsoft made this very point last week highlighting that Azure uses over 29,000 miles of InfiniBand cabling, enough to circle the globe.
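The "enough to circle the globe" comparison above is easy to sanity-check: Earth's equatorial circumference is roughly 24,901 miles, so 29,000 miles of cabling does slightly exceed one full lap. A minimal arithmetic check (the circumference figure is an outside fact, not from the transcript):

```python
# Sanity check on Azure's "29,000 miles of InfiniBand cabling,
# enough to circle the globe" claim.
infiniband_cable_miles = 29_000
earth_circumference_miles = 24_901  # approximate equatorial circumference

laps = infiniband_cable_miles / earth_circumference_miles
print(f"{laps:.2f} laps around the Earth")  # ≈ 1.16, so the claim holds
```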
At last week's Microsoft Ignite, we deepened and expanded our collaboration with Microsoft across the entire stack.
Amazon Web Services, Google Cloud, Microsoft Azure and Oracle Cloud will be among the first CSPs to offer H200-based instances starting next year.
Compared to the [ A100 ], H200 delivers an 18x performance increase for inference on models like GPT-3
It speeds up inference by another 2x compared to H100 GPUs for running LLMs like Llama 2. Combined, TensorRT LLM and H200 increased performance or reduced cost by 4x in just 1 year without customers changing their stack.
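The combined 4x figure quoted above follows from multiplying the two independent gains: roughly 2x from the TensorRT LLM software optimizations and roughly 2x from the H200 hardware. A minimal sketch of that arithmetic (the factorization into two clean 2x multipliers is the transcript's framing, not a measured benchmark):

```python
# Combined speedup from two independent, multiplicative gains,
# as described on the call: software (TensorRT LLM) x hardware (H200).
tensorrt_llm_speedup = 2.0  # TensorRT LLM vs. prior software stack
h200_speedup = 2.0          # H200 vs. H100 for LLM inference

combined_speedup = tensorrt_llm_speedup * h200_speedup
print(f"combined: {combined_speedup:.0f}x")  # 4x performance, or ~4x lower cost
```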