513 annotations
Our AI models are called AI Foundations.
Transcript
2024 Q3
2 Dec 23
second, you have to have the best practice
we have just an incredible depth of AI technology capability
the third thing is you need factories, and that's what DGX Cloud is.
Customers build custom models with their proprietary data on NVIDIA DGX Cloud and deploy the AI applications on enterprise-grade NVIDIA AI Enterprise. NVIDIA is essentially an AI foundry.
We have the AI technology, expertise and scale to help customers build custom models.
NVIDIA software and services are on track to exit the year at an annualized run rate of $1 billion.
This week, we announced an Ethernet for AI platform for enterprises. NVIDIA Spectrum-X is an end-to-end solution of BlueField SuperNIC, Spectrum-4 Ethernet switch and software that boosts Ethernet performance by up to 1.6x for AI workloads.
We highlighted three elements of our new growth strategy that are hitting their stride: CPU, networking, and software and services.
Nations and regional CSPs are building AI clouds to serve local demand.
Our strong growth reflects the broad industry platform transition from general purpose to accelerated computing and Generative AI.
The list of markets and segments for which we have domain-specific libraries is incredibly broad.
InfiniBand networking, Ethernet networking, x86, Arm: just about every permutation and combination of technology solutions and software stacks is provided.
now we have an end-to-end solution for data centers
We have an installed base that is the largest in every single cloud.
the reason everybody likes our inference engine is our installed base
It's used by companies in just about every industry.
TensorRT-LLM on the same GPU, without anybody touching anything, improves the performance by a factor of 2.
And then on top of that, of course, the pace of our innovation is so high. H200 increases it by another factor of 2.
And so our inference performance, another way of saying inference cost, just reduced by a factor of 4 within about a year's time.
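The compounding in that passage can be checked with a quick back-of-the-envelope calculation. The multipliers below are the round figures quoted in the call, not measured benchmarks:

```python
# Round multipliers as quoted on the call (not benchmarks).
tensorrt_llm_speedup = 2.0  # TensorRT-LLM on the same GPU, software only
h200_speedup = 2.0          # additional gain moving to H200

# Speedups compound multiplicatively.
total_speedup = tensorrt_llm_speedup * h200_speedup

# At a fixed cost per GPU-hour, cost per inference scales inversely
# with throughput.
relative_cost = 1.0 / total_speedup

print(total_speedup)  # 4.0
print(relative_cost)  # 0.25
```

That is the factor-of-4 cost reduction claimed: double the throughput twice, and the cost per inference falls to a quarter of its starting point.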