
NVIDIA (NVDA)

Participants
Simona Jankowski Investor Relations
Jensen Huang President and Chief Executive Officer
Colette Kress Executive Vice President and Chief Financial Officer
Vivek Arya Bank of America
Timothy Arcuri UBS
Aaron Rakers Wells Fargo
C.J. Muse Evercore ISI
Toshiya Hari Goldman Sachs
Stacy Rasgon Bernstein Research
Joseph Moore Morgan Stanley
Call transcript
Operator

Good afternoon. My name is David and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA’s Financial Results Conference Call. [Operator Instructions] Thank you. Simona Jankowski, you may begin your conference.

Simona Jankowski

Thank you. Good afternoon, everyone and welcome to NVIDIA’s conference call for the second quarter of fiscal 2021. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer and Colette Kress, Executive Vice President and Chief Financial Officer. I would like to remind you that our call is being webcast live on NVIDIA’s Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the third quarter of fiscal 2021. The content of today’s call is NVIDIA’s property. It can’t be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties and our actual results may differ materially.

For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today’s earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 19, 2020 based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call, we will discuss non-GAAP financial measures.

You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.

Colette Kress

Thanks, Simona. Q2 was another extraordinary quarter. The world continued to battle the COVID-19 pandemic and most of our employees continued to work from home, but through the team’s agility and dedication, we successfully combined Mellanox into NVIDIA while also delivering a very strong quarter. Revenue was $3.87 billion, up 50% year-on-year, up 26% sequentially and well ahead of our outlook. Starting with gaming, revenue was $1.65 billion, up 26% year-on-year and up 24% sequentially, significantly ahead of our expectations. The upside was broad-based across geographic regions, products and channels. Gaming’s growth amid the pandemic highlights its emergence as a leading form of entertainment worldwide.

For example, the number of daily gamers on Steam, a leading PC game online distributor, is up 25% from pre-pandemic levels and NPD reported that U.S. consumer spending on videogames grew 30% in the second calendar quarter to a record $11 billion. NVIDIA’s PCs and laptops are ideal for the millions of people who are now working, learning and gaming at home. At the outset of the pandemic, many retail outlets were closed and demand shifted to online channels.

As the quarter progressed and the stores reopened, retail demand picked up, iCafes largely reopened and online sales continued to thrive. Gaming laptop demand is very strong as students and professionals turn to GeForce-based systems to improve how they work, learn, and game from home. We ramped over 100 new models with our OEM partners focused on both premium and mainstream price points. In the premium laptop segment, we delivered unparalleled performance with the GeForce RTX 2080 and the 2070 SUPER GPUs [indiscernible] form factors.

We also brought ray tracing to gaming laptops for the first time at price points as low as $999 with the GeForce RTX 2060. In the mainstream segment, we brought the GeForce GTX to laptop price points as low as $699. Momentum continues for our Turing architecture, which enables stunning new visual effects in games and is driving a powerful upgrade cycle among gamers. Its RTX technology adds ray tracing and AI to programmable shading and has quickly redefined the standard for computer graphics. DLSS uses the AI capabilities of Turing to boost frame rates by almost 2x while generating crisp image quality. RTX support in blockbuster games continues to grow, including the mega-hit DEATH STRANDING, the highly anticipated Cyberpunk 2077 and the upcoming release of Watch Dogs. These games join Minecraft and other major titles that support NVIDIA RTX ray tracing and DLSS.

We are in the midst of a 21-day countdown campaign promoting a GeForce special event on September 1, with each day highlighting a year in the history of GeForce. We don’t want to spoil the surprise, but we encourage you to tune in.

We are very pleased with the traction of our GeForce NOW cloud gaming service, now in its second quarter of commercial availability. GFN offers the richest content of any game streaming service through partnerships with leading digital game stores, including Valve Steam, Epic Games and Ubisoft Uplay. GeForce NOW enables users with underpowered PCs, Macs or Android devices to access powerful GPUs to play their libraries of PC games in the cloud, expanding the universe of gamers that we can reach with GeForce.

Just yesterday, we announced that GFN is now supported on Chromebooks further expanding our reach into tens of millions of users.

In addition to NVIDIA’s own service, GFN is available or coming soon to a number of telecom partners around the world, including SoftBank and KDDI in Japan, Rostelecom and Beeline in Russia, LG U+ in South Korea and Taiwan Mobile.

Moving to ProVis, Q2 revenue was $203 million, down 30% year-on-year and down 34% sequentially, with declines in both mobile and desktop workstations. Sales were hurt by lower enterprise demand amid the closure of many offices around the world. Industries negatively impacted during the quarter include automotive, architectural engineering and construction, manufacturing, media and entertainment and oil and gas. Initiatives by enterprises to enable remote workers drove demand for virtual and cloud-based graphics solutions. Accordingly, our Q2 vGPU bookings accelerated, increasing 60% year-on-year. Despite near-term challenges, we are winning new business in areas such as healthcare, including Siemens, Philips and General Electric, and the public sector.

We continue to expand our market opportunity with over 50 leading design and creative applications that are NVIDIA RTX-enabled, including the latest releases from Foundry, Chaos Group and Maxon. These applications provide faster ray tracing and accelerated performance, improving creators’ design workflows. The pandemic will have a lasting impact on how we work.

Our revenue mix going forward will likely reflect this evolution in enterprise workforce trends with a greater focus on technologies, such as NVIDIA laptops and virtual workstations that enable remote work and virtual collaboration.

Moving to automotive, automotive revenue was $111 million, down 47% year-over-year and down 28% sequentially. This was slightly better than our outlook of a 40% sequential decline as the impact of the pandemic was less pronounced than expected, with auto production volumes starting to recover after bottoming in April.

Some of the decline is also due to the roll-off of legacy infotainment revenue, which will remain a headwind in future quarters. In June, we announced a landmark partnership with Mercedes-Benz, which, starting in 2024, will launch software-defined intelligent vehicles across its entire fleet using end-to-end NVIDIA technology. Mercedes will utilize NVIDIA’s full technology stack, including the DRIVE AGX computer, DRIVE AV autonomous driving software and NVIDIA’s AI infrastructure spanning from the core to the cloud. Centralizing and unifying computing in the car will make it easier to integrate and upgrade advanced software features as they are developed. With over-the-air updates, vehicles can receive the latest autonomous driving and intelligent cockpit features, increasing value and extending the joy of ownership with each software upgrade. This is a transformative announcement for the automotive industry, marking the turning point of traditional vehicles becoming high-performance, updatable data centers on wheels. It’s also a transformative announcement for NVIDIA’s evolving business model, as the software content of our platforms grows, positioning us to build a recurring revenue stream.

Moving to data center. Our data center business is diverse, consisting of cloud service providers, public cloud providers, supercomputing centers, enterprises, telecom and industrial edge. Q2 revenue was a record $1.75 billion, up 167% year-on-year and up 54% sequentially. In Q2, we incorporated a full quarter of contribution from the Mellanox acquisition, which closed on April 27, the first day of our quarter. Mellanox contributed approximately 14% of company revenue and just over 30% of data center revenue. Both compute and networking within data center set a record with accelerating year-on-year growth. The biggest news in data center this quarter was the launch of our Ampere architecture.

We are very proud of the team’s execution in launching and ramping this technological marvel especially amid the pandemic. The A100 is the largest chip ever made with 54 billion transistors. It runs our full software stack for accelerating the most compute-intensive workloads.

Our software releases include CUDA 11, new versions of over 50 CUDA-X libraries and new application frameworks for major AI workloads, such as Jarvis for conversational AI and Merlin for deep recommender systems. The A100 delivers NVIDIA’s greatest generational leap ever, boosting AI performance by 20x over its predecessor. It is also our first universal accelerator, unifying AI training and inference and powering workloads such as data analytics, scientific computing, genomics, edge video analytics, 5G services and graphics.

The first Ampere GPU, the A100, has been widely adopted by all major server vendors and cloud service providers. Google Cloud Platform was the first cloud customer to bring it to market, making it the fastest GPU to come to the cloud in our history. And just this morning, Microsoft Azure announced the availability of massively scalable AI clusters, which are based on the A100 and interconnected with 200-gigabit-per-second Mellanox InfiniBand networking. The A100 is also being incorporated into offerings from AWS, Alibaba Cloud, Baidu Cloud and Tencent Cloud. And we announced that the A100 is going to market in more than 50 servers from leading vendors around the world, including Cisco, Dell, Hewlett-Packard Enterprise and Lenovo. Adoption of the A100 into leading server makers’ offerings is faster than any prior launch, with 30 systems expected this summer and over 40 more by the end of the year.

The A100 is already winning industry recognition. In the latest AI training benchmark, MLPerf 0.7, NVIDIA set 16 records, sweeping all categories for commercially available solutions in both per-chip and at-scale performance based on the A100. MLPerf is the industry’s first and only objective AI benchmark. Since the benchmark was introduced 2 years ago, NVIDIA has consistently delivered leading results and record performance for both training and inference.

NVIDIA also topped the charts in the latest TOP500 list of the fastest supercomputers. The ranking, released in June, showed that 8 of the world’s top 10 supercomputers use NVIDIA GPUs, NVIDIA’s networking or both. They include the most powerful systems in the U.S. and Europe. NVIDIA, now combined with Mellanox, powers two-thirds of the TOP500 systems on the list, compared with less than half for the two companies in total 2 years ago. In energy efficiency, systems using NVIDIA GPUs are pulling away from the pack: on average, they are nearly 2.8x more energy efficient than systems without NVIDIA GPUs, measured in gigaflops per watt. The incredible performance and efficiency of the A100 GPU is best exemplified by NVIDIA’s own new Selene supercomputer, which debuted as number seven on the TOP500 list and is the only top 100 system to cross the 20 gigaflops per watt barrier.
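For reference, the efficiency metric cited here, gigaflops per watt, is simply sustained performance divided by power draw. A quick editorial illustration: the GPU-system figures below are approximate, loosely modeled on Selene's published June 2020 TOP500 entry, and the CPU-only system is invented purely for comparison.

```python
# Gigaflops per watt: sustained performance divided by power draw.
# The GPU-system numbers loosely echo Selene's June 2020 TOP500 entry;
# the CPU-only system is an invented comparison point.

def gflops_per_watt(rmax_tflops, power_kw):
    """TOP500-style efficiency: sustained TFLOP/s over kW, in GF/W."""
    return (rmax_tflops * 1000) / (power_kw * 1000)

gpu_system = gflops_per_watt(rmax_tflops=27_580, power_kw=1_344)
cpu_system = gflops_per_watt(rmax_tflops=20_000, power_kw=2_700)

print(round(gpu_system, 1))               # ~20.5 GF/W
print(round(gpu_system / cpu_system, 1))  # roughly the 2.8x gap cited
```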

Our engineers were able to assemble Selene in less than 4 weeks using NVIDIA’s open modular DGX SuperPOD reference architecture instead of the typical build time of months or even years. This is the system that we will use to win the MLPerf benchmarks and it is a reference design that’s available for our customers to quickly build a world class supercomputer.

We also brought GPU acceleration to data analytics, one of the largest and fastest growing enterprise workloads. We enabled end-to-end acceleration of the entire data analytics workload pipeline for the first time with NVIDIA’s GPUs and software stack in the latest version of Apache Spark released in June. Spark is the world’s leading data analytics platform used by more than 500,000 data scientists and 16,000 enterprises worldwide. And we have two major milestones to share.

We have now shipped a cumulative total of 1 billion CUDA GPUs, and the total number of developers in the NVIDIA ecosystem just reached 2 million. It took over a decade to reach the first million and less than 2 years to reach the second million. Mellanox had fantastic results across the board in its first quarter as part of NVIDIA. Mellanox revenue growth accelerated, with strength across Ethernet and InfiniBand products.

Our Ethernet shipments reached a new record. Major hyperscale builds drove the upside in the quarter, as growth in cloud computing and AI is fueling increased demand for high-performance networking. Mellanox networking was a critical part of several of our major new product introductions this quarter. These include the DGX AI system, the DGX SuperPOD clusters for our Selene supercomputer and the EGX Edge AI platform.

We also launched the Mellanox ConnectX-6 Ethernet NIC, the 11th-generation product in the ConnectX family, designed to meet the needs of modern cloud and hyperscale data centers, where 25, 50 and 100 gigabit per second speeds are becoming the standard. We expanded our switch networking capabilities with the addition of Cumulus Networks, a privately held network software company that we purchased in June. Cumulus augments our Mellanox acquisition in building out open, modern data centers. The combination of NVIDIA accelerated computing, Mellanox networking and Cumulus software enables data centers that are accelerated, disaggregated and software-defined to meet the exponential growth in AI, cloud and high-performance computing.

Moving to the rest of the P&L. Q2 GAAP gross margin was 58.8% and non-GAAP gross margin was 66%. GAAP gross margin declined year-on-year and sequentially due to costs associated with the Mellanox acquisition. Non-GAAP gross margin increased by almost 6 points year-on-year, reflecting a shift in product mix, with higher data center sales and lower automotive sales. Q2 GAAP operating expenses were $1.62 billion and non-GAAP operating expenses were $1.04 billion, up 67% and 38% from a year ago, respectively. Q2 GAAP EPS was $0.99, up 10% from a year earlier. Non-GAAP EPS was $2.18, up 76% from a year ago. Q2 cash flow from operations was $1.57 billion. With that, let me turn to the outlook for the third quarter of fiscal 2021.

We expect revenue to be $4.4 billion, plus or minus 2%. Within that, we expect gaming to be up just over 25% sequentially and data center to be up in the low to mid-single digits sequentially.

We expect both ProVis and auto to be at similar levels as in Q2. GAAP and non-GAAP gross margins are expected to be 62.5% and 65.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $1.54 billion and $1.09 billion, respectively. Full year GAAP and non-GAAP OpEx is tracking in line with our outlook of $5.7 billion and $4.1 billion, respectively. GAAP and non-GAAP OI&E are both expected to be an expense of approximately $55 million. GAAP and non-GAAP tax rates are both expected to be 8%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $225 million to $250 million. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community.

We will be at the BMO Virtual Technology Summit on August 25, Citi’s 2020 Global Technology Conference on September 9, Deutsche Bank’s Technology Conference on September 14 and the Evercore’s Virtual Memo Forum, The Future of Mobility, on September 21.

We will also host a financial analyst Q&A with Jensen on October 5 as part of our next virtual GTC.

Our earnings call to discuss our third quarter’s results is scheduled for Wednesday, November 18.

We will now open the call for questions. Operator, would you please poll for questions? Thank you.

Operator

Certainly. [Operator Instructions] Your first question comes from the line of Vivek Arya with Bank of America.

Your line is open.

Vivek Arya

Thanks for taking my question and congratulations on the strong growth and execution. Jensen, I am wondering how much of the strength that you are seeing in gaming and data center is maybe more temporary, because of COVID or some customer pull-ins in the data center or so forth? And how much of it is more structural and secular and can continue even once we get, hopefully sooner rather than later, into a more normalized period for the industry?

Jensen Huang

Yes, Vivek, thank you.

So first of all, we didn’t see pull-ins, and we are at the beginning of a brand-new product cycle with Ampere, and so the vast majority of the data center growth came from that. As for the gaming industry, with all that’s happening around the world, and it’s really unfortunate, but it has made gaming the largest entertainment medium in the world. More than ever, people are spending time digitally, spending their time in videogames. The thing that people haven’t realized about videogames is that it’s not just the game itself anymore. There is a variety of different ways that you can play: you can hang out with your friends in Fortnite, go to a concert in Fortnite, build virtual worlds in Minecraft. You are spending time with your friends; you are using it to create, to realize your imagination. People are using it for broadcast, for sharing ideas and techniques with other people.

And then, of course, it’s just an incredibly fun way to spend time, even just by playing a videogame.

And so there is just so much that you can do with videogames now. And I think that this way of enjoying entertainment digitally has been accelerated as a result of the pandemic, but I don’t think it’s going to go back. Videogame adoption has been going up over time, pretty steadily. And the PC is now the single largest gaming platform. And GeForce is now the largest gaming platform in the world. And as I mentioned, it’s not just about gaming; there are just so many different ways that you can enjoy games. With data center, the structural changes that are happening are coupled with different dynamics happening at the same time.

The first dynamic, of course, is the movement to the cloud, the way that a cloud data center is built and the way that an enterprise data center or a cluster is built is fundamentally different. And it’s really, really beneficial to have the ability to accelerate applications that cloud service providers would like to offer, which is basically everything. And we know that one of the most important applications of today is artificial intelligence. It’s a type of software that really wants acceleration. And NVIDIA’s GPU acceleration is the perfect medium, perfect platform for it. And then the last reason about the data center is the – this architectural change from hosting applications to hosting services that’s driving this new type of architecture called disaggregation versus hyper converged. And the original name of hyperscalers is a large data center of a whole bunch of hyperconverged computers. But the computers of today are really disaggregated. A single application service could be running on multiple servers at the same time, which generates a ton of east-west traffic, and a lot of it is artificial intelligence neuro network models.

And so, because of this type of architecture, two components, two types of technologies are really important to the future of cloud.

One of them, as I mentioned, is acceleration, and our GPU is ideal for it. The other one is high-speed networking. And the reason for that is because the server is now disaggregated: the application is fractionalized, broken up into a bunch of small pieces that are running across the data center. And whenever an application needs to send part of an answer to another server for a microservice to run, that transmission is called east-west traffic. And the most important thing you could possibly do for yourself is to buy really high-speed, low-latency networking. And that’s what Mellanox is fantastic at.
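An editorial sketch of the traffic pattern described above: one external request ("north-south") fans out into many server-to-server calls inside the data center ("east-west"). The service names and call graph below are invented purely for illustration.

```python
# Toy model of a disaggregated application: one external request
# triggers a cascade of internal microservice calls (east-west traffic).
# All service names and the fan-out pattern are hypothetical.

from collections import Counter

# Each microservice lists the other services it calls internally.
CALL_GRAPH = {
    "frontend": ["auth", "recommender", "catalog"],
    "recommender": ["feature-store", "model-server"],
    "catalog": ["inventory"],
    "auth": [],
    "feature-store": [],
    "model-server": [],
    "inventory": [],
}

def handle_request(service, traffic):
    """Recursively 'run' a request, counting internal server-to-server hops."""
    for callee in CALL_GRAPH[service]:
        traffic["east_west"] += 1      # one internal hop across the data center
        handle_request(callee, traffic)

traffic = Counter(north_south=1)        # the single external request
handle_request("frontend", traffic)
print(dict(traffic))                    # far more east-west than north-south
```

Even this tiny call graph turns one external request into six internal hops, which is why low-latency networking matters so much in this architecture.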

And so, we find ourselves really in this perfect condition where the future is going to be more virtual, more digital, and that’s why – that’s the reason why GeForce is so successful. And then we find ourselves in a world where the future is going to be more autonomous and more AI-driven. And that’s the benefit of our GPUs. And then, lastly, cloud microservice transactions really benefit high-speed networking, and that’s where Mellanox comes in.

And so, I think the dynamics that I am describing are permanent; they have just been accelerated to the present because of everything that’s happening to us. But this is the future, and there’s no going back. Everything has just been accelerated.

Operator

Your next question comes from the line of Timothy Arcuri with UBS.

Your line is open.

Timothy Arcuri

Thanks a lot. Jensen, I guess I had a question on both architecture and also manufacturing. And I think on the manufacturing side, you have been radical on that for some time. And when you have been asked in the past about moving to more of a tiled or chiplet approach, you sort of made light of that. But the CPU guys are clearly picking that approach.

So, I guess, the question is why do you think you won’t have to make a similar move? And then on the side of architecture, the theme of Hot Chips this week was really how compute demand is far outstripping computing power. And then we see this talk about you and ARM.

So, I guess can you talk about whether GPU is the future and maybe the broader opportunity to integrate CPU and GPU? Thanks.

Jensen Huang

Yes. We push architecture really hard, and the way we push architecture is we start from the system. We believe that the future computer company is a data center-scale company. The computing unit is no longer a microprocessor or even a server or even a cluster; the computing unit is an entire data center now. And as I was explaining to Vivek just now, a service that we are enjoying, with hundreds of billions of transactions a day, is broken up into a whole bunch of microservices that are running across the entire data center.

And so the entire data center is running an application. I mean, that’s a pretty remarkable thing, and it happened in the last several years. We were ahead of this trend, and we recognized that, as a computing company, we have to be a data center-scale company, and we architect from that starting point.

If you look at our architecture, we were the first in the world to create the concept of NVLink, with eight processors being fully synchronized across the computing node, and we created the DGX. We recognized the importance of high-speed, low-latency networking, and that’s why we bought Mellanox. And the amount of software that we invented along the way to make low-latency communications possible, whether it’s GPUDirect or, recently, the invention of GPUDirect Storage, all of that technology was inspired by the idea that you have to think about the data center in one holistic way. And then, in this current generation with Ampere, we invented the world’s first multi-instance GPU, which means that our Ampere GPU could simultaneously be one GPU or, with NVLink, 8 GPUs combined, working together.

So you could think of them as being tiled, so those 8 GPUs are working harmoniously together. Or each one of the GPUs could fractionalize itself: if you don’t need that much GPU working on your workload, it fractionalizes into multi-instance GPUs, which we call MIG, little tiny instances. And each one of those tiny instances is more powerful and more performant than our entire Volta GPU of the last generation.

And so, whether you like to fractionalize the GPU, which happens oftentimes, create a larger GPU using NVLink, or create an even larger GPU, the size of DGX POD, connected together with high-speed, low-latency networking with Mellanox, we could architect it any way you would like.
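An editorial back-of-the-envelope model of the scaling options described above: the constants (up to 7 MIG instances per A100, 8 NVLink-connected GPUs per DGX-style node) match NVIDIA's published A100 specifications, while the helper function itself is hypothetical, purely to make the arithmetic concrete.

```python
# Illustrative arithmetic for the A100 scaling options described above:
# fractionalize each GPU into MIG instances, or treat the NVLink-pooled
# node as one big GPU. The helper is hypothetical; the constants match
# NVIDIA's published A100 / DGX A100 specifications.

MIG_INSTANCES_PER_A100 = 7   # maximum MIG slices on one A100
GPUS_PER_NVLINK_NODE = 8     # A100s joined by NVLink in a DGX-style node

def workload_slots(nodes, fractionalize):
    """How many independent workloads a deployment can host."""
    gpus = nodes * GPUS_PER_NVLINK_NODE
    if fractionalize:
        return gpus * MIG_INSTANCES_PER_A100  # many small MIG instances
    return nodes                              # one big NVLink-pooled GPU per node

# One node: either a single big 8-GPU workload, or 56 small ones.
print(workload_slots(1, fractionalize=False))  # 1
print(workload_slots(1, fractionalize=True))   # 56
```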

You made a comment about – you asked a question about ARM.

We have been a long-term partner of ARM, and we use ARM in a whole bunch of applications, whether it’s autonomous driving, a robotics application or the Nintendo Switch console business that we are in. And then, recently, we brought CUDA to ARM to bring accelerated computing to the ARM ecosystem.

And so, we work with the ARM team very closely. They are really great guys. And one of the special things about the ARM architecture, as you know very well, is that it’s incredibly energy-efficient. And because it’s energy-efficient, it has the headroom to scale into very high performance levels over time.

And so, anyways, we love working with the ARM guys.

Operator

Your next question comes from the line of Aaron Rakers with Wells Fargo.

Your line is open.

Aaron Rakers

Yes. Thanks for taking the question. Congratulations on the quarter.

Just building on some prior questions.

The first one, I was just curious if you could help us appreciate the installed base of the gaming GPU business relative to where we are in the Turing upgrade cycle: what do we still see on prior generations, be it Pascal or before? And then secondly, I was wondering, Colette, could you give us updated commentary or views on visibility in the data center business? How has that changed over the last 3 months? What does that look like as you look through the back half of the calendar year? Thank you.

Jensen Huang

Yes. Thanks a lot, Aaron.

We are still in the ramp of the RTX generation. Turing, the current generation that we are in, is the world’s first ray-tracing GPU. The RTX technology fuses three fundamental technologies: the programmable shader, which we introduced a long time ago and which revolutionized computer graphics, plus two new technologies. One is a ray-tracing acceleration core that makes the tracing of rays, looking for intersections between a ray and the objects in the scene, super, super fast. And that’s a super complicated problem. We wanted it to run concurrently with shading, so that the ray traversal and the shading of the pixels could be done independently and concurrently.

The second thing is we invented technology to bring AI, artificial intelligence, using this new type of algorithm called deep learning, to computer graphics. One example of its capability is the algorithm we introduced called DLSS, Deep Learning Super Sampling, which allows us to essentially synthesize images by learning from previous examples, remembering what beautiful images look like, so that when you take a low-resolution image and run it through the deep neural network, it synthesizes a high-resolution image that’s really, really beautiful. People have commented that it’s even more beautiful than images rendered at the native resolution. And the benefit is not only is it beautiful, it’s also super fast. We essentially nearly doubled the performance of RTX as a result.

So, you can have the benefit of ray tracing as well as very high resolution and very high speed.
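An editorial sketch of the data flow described above: render at low resolution, then reconstruct a high-resolution frame. In DLSS the reconstruction step is a trained deep neural network; the trivial nearest-neighbor upscaler below is only a stand-in for that model, so the shapes of the problem are concrete.

```python
# Conceptual sketch of super sampling: reconstruct a high-resolution
# image from a cheap low-resolution render. DLSS uses a trained deep
# neural network for the reconstruction; this nearest-neighbor upscale
# is only a placeholder so the data flow is visible.

def upscale_2x(low_res):
    """Stand-in for the learned model: 2x nearest-neighbor upsample."""
    high = []
    for row in low_res:
        wide = [px for px in row for _ in range(2)]  # double the width
        high.append(wide)
        high.append(list(wide))                      # double the height
    return high

low_res = [[0.1, 0.9],
           [0.5, 0.3]]            # a tiny 2x2 'render'
high_res = upscale_2x(low_res)    # the 'reconstruction' step
print(high_res)                   # a 4x4 image; each pixel covers a 2x2 block
```

The payoff DLSS exploits is that the expensive ray-traced shading runs at the small resolution, while the reconstruction step fills in the large frame.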

And so that’s called RTX. And Turing is probably not even one-third of the total installed base of all of our GeForce GPUs, which is, as you know, the single largest installed base of gaming platforms in the world.

And so, we support this large installed base, and we are in the process of bringing them to the future with RTX. And now, with the new console generation coming, every single game developer on the planet is going to be doing ray tracing, and they are going to be creating much, much richer content. And because of multi-platform, cross-platform play, and because of the size of the gaming platform, PC gaming platform, it’s really important that these game developers bring the latest generation content to PCs, which is great for us.

Aaron Rakers

And then on the data center visibility?

Colette Kress

Yes.

Let me see if I can answer this one for you. Yes, we have been talking about our visibility in data center. And as you have seen in our Q2 results, overall adoption of the NVIDIA computing portfolio has accelerated quite nicely. But keep in mind, we are still really early in the product cycle. The A100 is ramping, and it’s ramping very strongly into our existing installed base but also into new markets. Right now, the A100 probably represents less than a quarter of our data center revenue.

So we still have a lot to grow.

We have good visibility looking into Q3 with our hyperscalers.

We have a little bit more of a mixed outlook in terms of our vertical industries, given a lot of the uncertainty in the market and in the overall economy. On-premises demand is challenged because of COVID. But remember, industries are quickly continuing to adopt and move to the cloud. Overall, we do expect a very strong Q3.

Operator

Your next question comes from the line of C.J. Muse with Evercore ISI.

Your line is open.

C.J. Muse

Yes, hi. Thank you for taking the questions. I guess two questions. If I look at your outstanding inventory purchase obligations, they grew 17% sequentially. Is that as you prepare for the September 1 launch? And can you comment on gaming visibility into the back half of the year? And then the second question: Jensen, I know you are very focused on platforms and driving recurring revenues. I would love to hear if there are any particular platforms over the last 3 months where you have made real headway or that get you excited, whether Jarvis, Merlin, Spark or whatever. Thank you.

Jensen Huang

Yes. Thanks so much, C.J.

We are expecting a really strong second half for gaming.

I think this may very well be one of the best gaming seasons ever. And the reason for that is because PC gaming has become such a large format. There’s the combination of amazing games like Fortnite and Minecraft, and the way people game now: they are gaming, they are e-sporting (even F1 is an e-sport now), they are hanging out with friends. They are using it to create other content. They are using game captures to create art. They are sharing it with the community. It’s a broadcast medium. The number of different ways you could game has just really exploded. And it works on PCs because all the things I described require cameras or keyboards or streaming systems, and it requires an open system that is multitasking.

And so, the PC has just become such a large platform for gaming. And the second thing is that RTX is a home run. We really raised the bar on computer graphics, and the games are so beautiful; it's really the next level. It hasn't been this amazing since we introduced programmable shaders about 15 years ago.

And so, for the last 15 years, we have been making programmable shaders better and better, and they have kept getting better. But there has never been a giant leap like this. RTX brought both artificial intelligence and ray tracing to PC gaming. And then the third factor is the console launch. The game developers are really gearing up for a big leap. And because of how vibrant the gaming market is right now and how many people around the world are depending on gaming at home, I think it's going to be the most amazing season ever.

We are already seeing amazing numbers from our console partner, Nintendo. The Switch is about to sell more than the Super Nintendo, more than the Famicom, which was one of the best gaming consoles of all time. I mean, they are on their way to making the Switch the most successful gaming platform of all time.

And so, I am super excited for them.

And so, I think it’s going to be quite a huge second half of the year.

Operator

Your next question comes from the line of Toshiya Hari of Goldman Sachs.

Your line is open.

Jensen Huang

Colette, I felt like I didn’t – I missed C.J.’s second question. Can we jump on and answer it?

Colette Kress

I think your – I think the question was regarding our inventory purchases on that piece. Is that the part that you are referring to?

Jensen Huang

Yes, that’s the one. Yes.

Colette Kress

Yeah. Keep in mind, C.J., that when you think about the complexity of the products that we are building, we have extremely long lead times, both in terms of what we produce for the data center, our full systems that we need to do, as well as what you are seeing now between the sequential growth between Q2 and Q3 for overall gaming.

So, all of that is in preparation for the second half. Nothing unusual about it other than, yes, we have got to hit those revenue numbers that are in our Q3 guidance.

C.J. Muse

Okay.

Operator

Your next question comes from the line of Toshiya Hari with Goldman Sachs.

Your line is open.

Toshiya Hari

Hi. Good afternoon and thank you so much for taking the question. I had one for Jensen and another one for Colette. Jensen, just following-up on the data center business.

As you probably know, quite a few of your peers have been talking about potential digestion of capacity on the part of your hyperscale customers over the next call it, 6 to 12 months. Curious, is that something that you think about, worry about in your data center business? Or do you have enough idiosyncratic growth drivers like the A100 ramp? And I guess the breadth that you have built within your data center business across compute and networking, are those enough for you to buck the trend within data center over the next 6 to 12 months? And then the second one for Colette, just on gross margins, you are guiding October quarter gross margins down 50 basis points sequentially. Based on the color that you provided for the individual segments, it looks like mix remains pretty positive.

So just curious what’s driving the marginal decline in gross margins in the October quarter? Thank you.

Jensen Huang

Yes. Thank you.

So, thanks for the question. Our data center trend is really tied to a few factors. One is the proliferation of deep learning and artificial intelligence in all the services offered by the cloud service providers. And I think it's fair to say that over the last several years, the number of breakthroughs in artificial intelligence has been really terrific. And we are seeing anywhere from 10x more computational requirement each year to more than that.

And so, in the last three years, we have seen somewhere between 1,000x to 3,000x increase in the size of models, the computational requirement necessary to create these AI models and to deploy these AI models.
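A quick sanity check on those figures (purely illustrative arithmetic, not NVIDIA data): roughly 10x more compute per year, compounded over three years, lands right at the low end of the 1,000x to 3,000x range cited above.

```python
# Compounding a 10x annual growth in compute requirement over three
# years gives the 1,000x figure quoted in the call.
per_year = 10
years = 3
growth = per_year ** years
print(growth)  # 1000
```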

And so, the number one trend that we are probably indexed to is the breakthroughs of AI, the usefulness of AI and how people are using it. And I remember C.J.'s question now, and I'll answer this along with that.

One of the things that we look for, and you should look for, is what kind of breakthroughs based on deep learning and AI these services all demand. And there are three big ones, just gigantic ones.

Of course, one of them is natural language understanding: the ability to take very complicated text and use deep learning to create essentially a dimension reduction, called a deep embedding, of that body of text, so that you could use that vector to teach a recommender system, which is the second major breakthrough, how to predict and make a recommendation to somebody. Recommendations on ads and videos; there are trillions of videos on the web.

You need ways to recommend them, and both the news and the sheer amount of information that is in truly dynamic form require these recommenders to be instantaneous.

And so, the first one is natural language understanding and the second one is the recommender system; both are gigantic breakthroughs of the last several years. And the third is conversational AI. I mean, we are going to have conversational engines that are just super clever. They can predict what you're about to ask; they're going to predict the right answer for you and make recommendations to you based on the pillars that I just described. And I haven't even started talking about robotics, the breakthroughs that are happening there with all the factories that need to automate, and the breakthroughs that we're seeing in self-driving cars, where the models are really improving fast.
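Editor's illustrative sketch (not NVIDIA's software) of the embedding-to-recommender flow described above: a text is reduced to a dense "deep embedding" vector, and that vector is scored against item embeddings to rank recommendations. The hashing encoder, catalog size and random item vectors are all toy stand-ins for the learned deep-learning components the call refers to.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(text, dim=64):
    """Toy stand-in for a learned text encoder: hash tokens into a
    fixed-size dense vector, then L2-normalize it."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Item "catalog": each video/ad gets a (here random) unit-length embedding.
items = rng.normal(size=(1000, 64))
items /= np.linalg.norm(items, axis=1, keepdims=True)

def recommend(query, k=5):
    """Dot-product similarity of the query embedding against every
    item embedding; return the top-k item ids."""
    scores = items @ embed(query)
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

top = recommend("gpu accelerated ray tracing games")
print(top)  # five item ids, most similar first
```

In production-scale recommenders the encoder is a trained model and the catalog scoring uses approximate nearest-neighbor search, but the shape of the pipeline is the same.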

And so, the answer to you, Toshiya, and C.J. are kind of similar, that on the first one, we’re indexed to AI.

The second, we’re indexed to breakthroughs of AI.

So that it can continue to consume more and more capability and more technology. And then the third thing that we’re indexed to is the movement of workloads to the cloud. It is now possible to do rendering in the cloud, remote graphics workstations in the cloud. And NVIDIA virtual workstations is in every single cloud.

You could do big data analytics in the cloud; you can do scientific computing in the cloud. I have just given you a few applications, and these applications all have fundamentally different computing architectures. NVIDIA is the only accelerated architecture that lets you do everything from microservices for conversational AI and other types of AI applications, to scale-up applications like high-performance computing, training and big data analytics, to virtualized applications like workstations.

Our platform is universal. And these three forms that I just described, virtualized, microservices-based and bare-metal scale-up, are supremely complex. They are complicated, and that is one of the reasons why we bought Mellanox: they are at the core and at the intersection of all of that. The storage, the networking, the security, the virtualization, they are at the intersection of all of that. And I just described three dynamics that are very, very powerful and are still at an early stage.

And so those are the things that we're really indexed to. And then lastly, when we introduce a new platform like Ampere, we are at the beginning of a multiyear product cycle. Ampere is such a gigantic breakthrough. It's the first universal GPU we ever created. It is able both to scale up and to scale out: scale up, as in multiple GPUs; scale out, as in fractionalization with multi-instance GPU. And it saves a tremendous amount of money for people who use it. It speeds up their applications. It reduces their TCO. Their TCO value just goes through the roof.

And so, we’re in the beginning of this multiyear cycle and the enthusiasm has been fantastic. This is the fastest ramp we’ve ever had.

And so, we are going to keep on racing through the second half.
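Editor's toy model (not NVIDIA's API) of the two directions contrasted above: "scale up" pools several GPUs behind one big job, while "scale out" fractionalizes one GPU into isolated instances, as A100's Multi-Instance GPU (MIG) feature does with up to seven slices per GPU. The class, memory figures and functions here are hypothetical illustrations only.

```python
from dataclasses import dataclass

@dataclass
class GPU:
    name: str
    memory_gb: int

def scale_up(gpus):
    """One large job spans every GPU: memory and compute are pooled."""
    return {"jobs": 1, "total_memory_gb": sum(g.memory_gb for g in gpus)}

def scale_out(gpu, slices=7):
    """One GPU is split into isolated instances (MIG allows up to 7)."""
    per_slice = gpu.memory_gb // slices
    return [{"instance": i, "memory_gb": per_slice} for i in range(slices)]

a100 = GPU("A100", 40)
print(scale_up([a100] * 8))   # one pooled job across eight GPUs
print(len(scale_out(a100)))   # seven isolated instances from one GPU
```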

Colette Kress

Okay. And, Toshiya, you asked a question regarding our guidance going forward regarding gross margin. And within our Q3 guidance, we have just a small decline in our gross margin from Q2. Most of that is really associated with mix, but also a little bit in terms of the ramping of our new Ampere architecture products that we have.

So, keep in mind, our data center will likely be a lower percentage of total revenue, given the strong overall gaming growth that we expect between Q2 and Q3. Within that gaming growth, keep in mind, consoles are also included, which will continue to be below our company's average gross margin, and console shipments are expected to be up strongly quarter over quarter.

We are going to be ramping those new architectures, and over time we will have the ability to expand our gross margin as the Ampere GPUs mature, too.
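A back-of-the-envelope illustration of the mix effect described above: if lower-margin segments (for example, console-heavy gaming) grow faster than higher-margin ones, the blended gross margin dips even though no segment's own margin falls. All revenue and margin figures below are hypothetical, not NVIDIA's reported numbers.

```python
def blended_margin(segments):
    """segments maps name -> (revenue, gross_margin); returns the
    revenue-weighted blended gross margin."""
    total_rev = sum(rev for rev, _ in segments.values())
    return sum(rev * gm for rev, gm in segments.values()) / total_rev

# Hypothetical quarters: gaming grows faster than data center in q3,
# and its margin is dragged down by a heavier console mix.
q2 = {"data_center": (1750, 0.70), "gaming": (1650, 0.62)}
q3 = {"data_center": (1800, 0.70), "gaming": (2100, 0.60)}

print(round(blended_margin(q2), 4))  # higher blended margin
print(round(blended_margin(q3), 4))  # lower: mix shifted toward gaming
```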

Operator

Your next question comes from the line of Stacy Rasgon with Bernstein Research.

Your line is open.

Stacy Rasgon

Hi, guys. Thanks for taking my question. I wanted to dig into data center a little bit. This is a question for Colette.

So, in the quarter, ex Mellanox, data center was up, core data center, maybe 6%, 7%. The guide looks to be roughly similar to that into Q3. Can you talk to us a little bit about what’s driving the trajectory? Are you more demand- or more supply limited at this point? What does your supply situation look like? And what are the lead times especially on the A100 products for data center look like at this point? Like if you have more capacity available, do you think you’d have like a stronger trajectory than you have right now?

Colette Kress

Yes. Stacy, so thanks for the question.

Let me first start with our Q3 outlook and what we're seeing. When we think about our demand and our supply, we are very comfortable with the supply that we have. Keep in mind, our products are quite complex, and a lot of our time is spent procuring every aspect of that supply multiple quarters in advance.

So that’s how we work. But we are very confident with the overall supply that we have across the board in data center. Keep in mind that it’s not just A100.

We are continuing to sell V100 and T4, and we are also bringing new versions of the A100 to market.

So, I hope that helps you understand our statements on where we are at in terms of the Q3 guidance.

We will see if Jensen wants to add a little bit more to that.

Jensen Huang

Well, when we are ramping, we would sure love to have more, and sooner. But this is our plan, and we are executing to the plan. It is a very complicated product, as Colette mentioned. It is the most complicated.

Stacy Rasgon

Got it. Got it. And just a quick follow-up, within the data center guidance, how do you think about like the core data center sequential growth versus Mellanox?

Colette Kress

Yes.

So, in terms of moving from Q2 to Q3, we believe that most of that low single-digit to mid-single-digit growth will likely stem from NVIDIA compute, which will be the largest driver.

Operator

Your next question comes from the line of Joseph Moore with Morgan Stanley.

Your line is open.

Joseph Moore

Great. Thank you. I wonder if I could ask a longer-term question about how you see the importance of process technology. There's been a lot of discussion around that in the CPU domain. But you haven't really felt the need to be first on seven-nanometer, and you have done very well.

Just how important do you think it is to be early in the new process node? And how does that factor into the cycle of innovation at NVIDIA?

Jensen Huang

Yes.

First of all, thanks, Joe. The process technology is a lot more complex than a number.

I think people have simplified it down to almost a ridiculous level, alright? On process technology, we have a really awesome process engineering team. World-class. Everybody will recognize that it's absolutely world-class. And we work with the foundries, we work with TSMC really closely, to make sure that we engineer transistors that are ideal for us and we engineer metallization systems that are ideal for us. It's a complicated thing, and we practice it at a high level. Then the second part of it is architecture: where the process technology meets the rest of the design process, the architecture of the chip. In the final analysis, what NVIDIA is paid for is architecture, not procurement of transistors.

We are paid for architecture. And there’s a vast difference between our architecture and the second-best architecture and the rest of the architectures. The difference is incredible.

We are easily twice the energy efficiency all the time, irrespective of the nanometer number on the transistor side.

And so, it must be more complicated than that.

And so, we put a lot of energy into that. And then the last thing I would say is that going forward, it’s really about data center-scale computing.

Going forward, you optimize at the data center scale. And the reason why I know this for a fact is that if you're a software engineer, you would be sitting at home right now writing a piece of software that runs on an entire data center in the cloud.

You have no idea what’s underneath it, nor do you care.

And so, what you really want is to make sure that that data center is as high-throughput as possible. There is a lot of code in there.

And so, what NVIDIA has decided to do over the years is to take our game to a new level.

Of course, we start with building the world's best processors, and we use the world's best foundries, and we partner with them very closely to engineer the best process for us. We partner with the best packaging companies to create the world's best packaging.

We were the world's first user of CoWoS, and I'm pretty sure we are still by far the highest-volume user of 2.5D and 3D packaging.

And so, we start from a great chip. We start from a great chip, but we don’t end there. That’s just the beginning for us.

Now we take this thing all the way through systems, system software, algorithms, networking, all the way up to the entire data center. And the difference is absolutely shocking. We built our data center, Selene, and it took us four weeks. We put up Selene in four weeks' time. It is the seventh-fastest supercomputer in the world and one of the fastest AI supercomputers in the world. It's the most energy-efficient supercomputer in the world, and it broke every single record in MLPerf. And that kind of shows you something about the scale at which we work and the complexity of the work that we do. And this is the future. The future is about data centers.

Operator

We have no further questions at this time. Jensen Huang, I turn the call back over to you.

Jensen Huang

Thank you. The accelerated computing model we pioneered has clearly passed the tipping point. Adoption of NVIDIA computing is accelerating. On this foundation, and leveraging one architecture, we have transformed our company in three dimensions.

First, NVIDIA is a full-stack computing platform company, offering the world's most dynamic industries the chips, systems, software and libraries like NVIDIA AI to tackle their most pressing challenges. Second, NVIDIA is a data center-scale company with the capabilities to architect, build and operate the most advanced data centers. The data center is the new unit of computing. With this capability, we can create modern data center architectures with our computer-maker partners, and then scale out to the world's industries.

Third, NVIDIA is a software-defined company today, with rich software content like GeForce NOW, NVIDIA virtual workstation in the cloud, NVIDIA AI and NVIDIA Drive that will add recurring software revenue to our business model. In the coming years, AI will revolutionize software. Robotics will automate machines, and the virtual and physical worlds will become increasingly integrated through VR and AR. Industry advancements will accelerate, and NVIDIA-accelerated computing will play an important role.

Our next GTC will be coming on October 5, again from my kitchen. Join me. I have some exciting developments to share with you. Thanks, everyone.

Operator

This concludes today’s conference call.

You may now disconnect.