Good afternoon. My name is Sumitra, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's Financial Results Conference Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question-and-answer session. [Operator Instructions] Thank you. Simona Jankowski, you may begin your conference.
Thank you. Good afternoon, everyone, and welcome to NVIDIA's conference call for the first quarter of fiscal 2022. With me on the call today from NVIDIA are Jensen Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the second quarter of fiscal 2022. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.
During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially.
For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, May 26, 2021, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.
During this call, we will discuss non-GAAP financial measures.
You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.
Thanks, Simona. Q1 was exceptionally strong with revenue of $5.66 billion and year-on-year growth accelerating to 84%. We set records in total revenue, gaming, data center and professional visualization, driven by our best-ever product lineups and structural tailwinds across our businesses. Starting with gaming, revenue of $2.8 billion was up 11% sequentially and up 106% from a year earlier. This is the third consecutive quarter of accelerating year-on-year growth, beginning with the fall launch of our GeForce RTX 30 Series GPUs. Based on the Ampere GPU architecture, the 30 Series has been our most successful launch ever, driving incredible demand and setting records for both desktop and laptop GPU sales. Channel inventories are still lean, and we expect to remain supply constrained into the second half of the year. With our Ampere GPU architecture now ramping across the stack in both desktops and laptops, we expect the RTX upgrade cycle to kick into high gear as the vast majority of our GPU installed base needs to upgrade. Laptops continued to drive strong growth this quarter as we started ramping the Ampere GPU architecture across our lineup. Earlier this month, all major PC OEMs launched GeForce RTX 30 Series laptops based on the 3080, 3070 and 3060 as part of their spring refresh.
In addition, mainstream versions based on the 3050 and 3050 Ti will be available this summer, just in time for back-to-school, starting at price points as low as $799. This is the largest-ever wave of GeForce gaming laptops, over 140 in total, as OEMs address the rising demand from gamers, creators and students for NVIDIA-powered laptops. The RTX 30 Series delivers our biggest generational leap in performance ever. It also features our second-generation ray tracing technology and frame-rate-boosting, AI-powered DLSS. RTX is a reset for graphics, with over 60 accelerated games.
This quarter, we added many more, including Call of Duty Modern Warfare, Crysis Remastered and Outriders.
We also announced that DLSS is now available in Unreal Engine 4 and soon in the Unity game engine, enabling game developers to accelerate frame rates with minimal effort. The RTX 30 Series also offers NVIDIA Reflex, a new technology that reduces system latency. Reflex is emerging as a must-have feature for e-sports gamers who play competitive titles like Call of Duty Warzone, Fortnite, Valorant and Apex Legends. We estimate that about 75% of GeForce gamers play e-sports games, and 99% of e-sports pros compete on GeForce. We believe gaming also benefited from crypto mining demand, although it's hard to determine to what extent. We've taken actions to optimize GeForce GPUs for gamers, while separately addressing mining demand with Cryptocurrency Mining Processors, or CMPs. Last week, we announced that newly manufactured GeForce RTX 3080, RTX 3070 and RTX 3060 Ti graphics cards will have their Ethereum mining capabilities reduced by half and carry a low hash rate, or LHR, identifier. Along with the updated RTX 3060, this should allow our partners to get more GeForce cards into the hands of gamers at better prices. To help address mining demand, the CMP products launched this quarter are optimized for mining performance and efficiency. Because they don't meet the specifications required of a GeForce GPU, they don't impact the supply of GeForce GPUs to gamers. CMP revenue was $155 million in Q1, reported as part of the OEM and other category. And our Q2 outlook assumes CMP sales of $400 million.
Our GeForce NOW cloud gaming platform passed 10 million registered members this quarter. GFN offers nearly 1,000 PC games from over 300 publishers, more than any other cloud gaming service, including 80 of the most popular free-to-play games. GFN expands the reach of GeForce to billions of underpowered Windows PCs, Macs, Chromebooks, Android devices, iPhones and iPads. GFN is offered in over 70 countries, with our latest expansions including Australia, Singapore and South America.
Moving to Pro Vis. Q1 revenue was $372 million, up 21% both sequentially and year-on-year. Strong notebook growth was driven by new sleek and powerful RTX-powered mobile workstations with Max-Q technology, as enterprises continued to support remote workforce initiatives. Desktop workstations rebounded as enterprises resumed spending that had been deferred during the lockdown, with continued growth likely as offices open. Key verticals driving Q1 demand include manufacturing, health care, automotive and media and entertainment. At GTC, we announced the coming general availability of NVIDIA Omniverse Enterprise, the world's first technology platform that enables global 3D design teams to collaborate in real time in a shared space, working across multiple software suites. This incredible technology builds on NVIDIA's entire body of work and is supported by a large, rapidly growing ecosystem. Early adopters include sophisticated design teams at some of the world's leading companies, such as BMW Group, Foster + Partners and WPP. Over 400 companies have been evaluating Omniverse, and nearly 17,000 users have downloaded the open beta. Omniverse is offered as a software subscription on a per-user and a per-server basis.
As the world becomes more digital, virtual and collaborative, we see a significant revenue opportunity for Omniverse.
We also announced powerful new Ampere architecture GPUs for next-generation desktop and laptop workstations. The new RTX-powered workstations will be available from all major OEMs.
Moving to Automotive. Q1 revenue was $154 million, up 6% sequentially and down 1% year-on-year. Growth in AI cockpit revenue was partially offset by the expected decline in legacy infotainment revenue. We extended our technology leadership with the announcement of the next-generation NVIDIA DRIVE Atlan SoC. Atlan will deliver an unrivaled 1,000 trillion operations per second of performance and integrate data center-class NVIDIA BlueField networking and security technologies to enhance vehicle performance and safety, making it a true data center on wheels. Atlan, which targets automakers' 2025 models, will follow the NVIDIA DRIVE Orin SoC, which delivers 254 TOPS and has been selected by leading vehicle makers for production timelines starting next year. The NVIDIA DRIVE platform has achieved global adoption across the transportation industry.
Our automotive design win pipeline now exceeds $8 billion through fiscal 2027. Most recently, Volvo Cars announced that it will use NVIDIA DRIVE Orin, building on our momentum with some of the largest automakers, including Mercedes-Benz, SAIC and Hyundai Motor Group. In robotaxis, we added GM Cruise to the growing number of companies adopting the NVIDIA DRIVE platform, which includes Amazon Zoox and DiDi. We've also had great traction with new energy vehicle makers.
Our latest wins include Faraday Future, R Auto, IM Motors and VinFast, which join previously announced wins with SAIC, NIO, XPeng and Li Auto. And in trucking, Navistar has partnered with TuSimple in selecting NVIDIA DRIVE for autonomous driving, joining previously announced autonomous trucking wins such as Plus. NVIDIA is helping to revolutionize the transportation industry.
Our full-stack, software-defined AV and AI cockpit platform spans silicon, systems, software and AI data center infrastructure, enabling over-the-air upgrades that enhance safety and the joy of driving throughout the vehicle's lifetime. Starting with our lead partner Mercedes-Benz, NVIDIA DRIVE can transform the automotive industry with amazing technologies delivered through new software and services business models.
Moving to data center. Revenue topped $2 billion for the first time, growing 8% sequentially and up 79% from the year-ago quarter, which did not include Mellanox. Hyperscale customers led our growth this quarter as they built infrastructure to commercialize AI in their services.
In addition, cloud providers have adopted the A100 to support growing demand for AI from enterprises, startups and research organizations. Customers have deployed NVIDIA's A100 and DGX platforms to train deep neural networks with rising computational intensity, led by two of the fastest-growing areas of AI: natural language understanding and deep recommender systems. In March, Google Cloud Platform announced general availability of the A100, with early customers including Square for its Cash App and Alphabet's DeepMind. The A100 is deployed across all major hyperscale and cloud service providers globally, and we see strengthening demand in the coming quarters. Every industry is becoming a technology industry and accelerating investments in AI infrastructure, both through the cloud and on-premise.
Our vertical industries grew both sequentially and year-on-year led by consumer internet companies.
For example, NAVER, a leading internet technology company in Korea and Japan is training giant AI language models at scale on DGX SuperPOD to pioneer new services across e-commerce, search, entertainment and payment applications.
We continue to gain traction in inference with hyperscale and vertical industry customers across a broadening portfolio of GPUs. We had record shipments of GPUs used for inference. Inference growth is driving not just the T4, which was up strongly in the quarter, but also the universal A100 Tensor Core GPU as well as the new Ampere architecture-based A10 and A30 GPUs, all excellent at training as well as inferencing. Customers are increasingly migrating from CPUs to GPUs for AI inference for two chief reasons.
First, GPUs can better keep up with the exponential growth in the size and complexity of deep neural networks and respond with the required low latency. In April's MLPerf AI inference benchmark, NVIDIA achieved the top results across every category, including computer vision, medical imaging, recommender systems, speech recognition and natural language processing. And second, NVIDIA's full-stack inference platform, including the Triton Inference Server software, simplifies the complexity of deploying AI applications by supporting models from all major frameworks and optimizing for different query types, including batch, real-time and streaming. Triton is supported by several cloud service partners, including Amazon, Google, Microsoft and Tencent. Examples of how customers use NVIDIA's inference platform include Microsoft for grammar checking in Office, the United States Postal Service for real-time package analytics, T-Mobile for customer service, Pinterest for image search, and GE Healthcare for heart disease detection.
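The batch-versus-real-time trade-off described above can be illustrated with a toy dynamic-batching scheduler. This is a minimal sketch of the general technique, not Triton's actual implementation; the function name, queue sizes and timeout are hypothetical:

```python
import time
from collections import deque

def dynamic_batcher(requests, max_batch_size=8, timeout_s=0.05):
    """Toy dynamic-batching loop: group queued requests into batches of up
    to max_batch_size, flushing a partial batch when the timeout expires so
    per-request latency stays bounded. Returns the batches that would be
    handed to the GPU for a single inference launch each."""
    queue = deque(requests)
    batches = []
    while queue:
        batch = []
        deadline = time.monotonic() + timeout_s
        while queue and len(batch) < max_batch_size:
            batch.append(queue.popleft())
            if time.monotonic() >= deadline:
                break  # timeout hit: send what we have rather than wait
        batches.append(batch)
    return batches

# 20 queued requests become batches of at most 8
print([len(b) for b in dynamic_batcher(range(20))])  # -> [8, 8, 4]
```

Larger batches amortize the cost of each GPU kernel launch, which is one reason GPU inference throughput scales well; the timeout keeps interactive queries from waiting for a full batch.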
We also had strong results with Mellanox networking products. Like our compute business, strong growth was driven by hyperscale customers across both Ethernet and InfiniBand. We achieved key design wins and proof-of-concept trials for the NVIDIA BlueField-2 DPU with cloud service providers and consumer internet companies.
We also unveiled BlueField-3, the first DPU built for AI and accelerated computing, with support from VMware, Splunk, NetApp, Cloudflare and others. BlueField-3 is the industry's first 400-gigabit DPU and delivers the equivalent data center services of up to 300 CPU cores. It transforms traditional server infrastructure into zero-trust environments in which every user is authenticated, by offloading and isolating data center services from business applications. With BlueField-3, our DPU roadmap will deliver an unrivaled 100x performance increase over a 3-year period.
As we look back at the first full year since closing the Mellanox acquisition, we are extremely pleased with how the business has performed. It has not only exceeded our financial projections, but it has been instrumental in key new platforms like the DGX SuperPOD and the BlueField DPU, enabling our data center-scale computing strategy. In April, we held our largest-ever GPU Technology Conference, with more than 200,000 registrants from 195 countries. Jensen's keynote has over 14 million views. At GTC, we announced our first data center CPU, NVIDIA Grace, targeted at processing massive next-generation AI models with trillions of parameters. The Arm-based processor will enable 10x the performance and energy efficiency of today's fastest servers. With Grace, NVIDIA has a three-chip strategy with GPU, DPU and now CPU. The Swiss National Supercomputing Center and the US Department of Energy's Los Alamos National Laboratory are the first to announce plans to build Grace-powered supercomputers. Grace will be available in early 2023. GTC is first and foremost for developers. We announced the availability of NVIDIA-developed and optimized pre-trained models on the NVIDIA GPU Cloud registry. Developers can choose a pre-trained model and adapt it to fit their specific needs using NVIDIA TAO, our transfer learning software. TAO fine-tunes the model with the customer's own small data set to get models customized without the cost, time and massive data sets required to train a neural network from scratch. Once a model is optimized and ready for deployment, users can integrate it with an NVIDIA application framework that fits their use case.
For example, the NVIDIA Jarvis framework for interactive conversational AI is now generally available and used by customers such as T-Mobile and Snap, and the NVIDIA Merlin framework for deep recommender systems is in open beta with customers such as Snap and Tencent.
With the chosen application framework, users can launch NVIDIA Fleet Command software to deploy and manage the AI application across a variety of NVIDIA GPU powered devices.
For enterprise customers, we unveiled a new enterprise grade software offering available as a perpetual license or subscription. NVIDIA AI Enterprise is a comprehensive suite of AI software that speeds development and deployment of AI workloads and simplifies management of enterprise AI infrastructure. Through our partnership with VMware hundreds of thousands of vSphere customers will be able to purchase NVIDIA AI Enterprise with the same familiar pricing model that IT managers use to procure VMware infrastructure software.
We also made several announcements at GTC about accelerating the delivery of both NVIDIA AI and accelerated computing to enterprises and edge users among the world's largest industries. Leading server OEMs launched NVIDIA-Certified Systems, which are industry-standard servers based on the NVIDIA EGX platform. They run NVIDIA AI Enterprise software and are supported by the NVIDIA A30 and A10 GPUs. Initial customers include Lockheed Martin and Mass General Brigham.
In addition, we announced the NVIDIA AI on 5G platform, supported on NVIDIA EGX servers, to enable high-performance 5G RAN and AI applications. The AI on 5G platform leverages the NVIDIA Aerial software and the NVIDIA BlueField-2 A100 converged card, which combines our GPUs and DPUs.
We are teaming with Fujitsu, Google Cloud, Mavenir, Radisys and Wind River in developing solutions based on our AI on 5G platform to speed the creation of smart cities and factories, advanced hospitals and intelligent stores.
Another highlight at GTC was the announcement of a broad range of initiatives to strengthen the Arm ecosystem across cloud data centers, HPC, enterprise and edge, and PCs. In the cloud, we are bringing together AWS Graviton2 processors and NVIDIA GPUs to provide a range of benefits, including lower costs, support for virtual game streaming experiences and greater performance for Arm-based workloads. In HPC, we are bringing together the Ampere Altra CPU with NVIDIA GPUs, DPUs and the NVIDIA HPC software development kit. Initial supercomputing center deployments include Oak Ridge and Los Alamos National Labs. In enterprise and edge, we're bringing together Marvell's Arm-based OCTEON processors and NVIDIA GPUs to accelerate video analytics and cybersecurity solutions. And in PCs, we are bringing together MediaTek's Arm-based processors with NVIDIA RTX GPUs to enable realistic ray-traced graphics and cutting-edge AI in a new class of Arm-based laptops. On our Arm acquisition, we are making steady progress in working with the regulators across key regions. We remain on track to close the transaction within our original timeframe of early 2022. Arm's IP is widely used, but the company needs a partner that can help it achieve new heights. NVIDIA is uniquely positioned to enhance Arm's capabilities, and we are committed to investing in developing the Arm ecosystem, enhancing R&D, adding IP and turbocharging its development to grow into new markets in the data center, IoT and embedded devices, areas where it has only a light footprint or, in some cases, none at all.
Moving to the rest of the P&L. GAAP gross margin for the first quarter was down 100 basis points from a year earlier and up 100 basis points sequentially. Non-GAAP gross margin was up 40 basis points from a year earlier and up 70 basis points sequentially. The sequential non-GAAP increase was largely driven by a more favorable mix within data center and the addition of CMP products. Q1 GAAP EPS was $3.03, up 106% from a year earlier. Non-GAAP EPS was $3.66, up 103% from a year ago. Q1 cash flow from operations was $1.9 billion.
Let me turn to the outlook for the second quarter of fiscal 2022.
We expect broad-based sequential and year-on-year revenue growth across all of our market platforms.
Our outlook includes $400 million in CMP.
Aside from CMP, the sequential revenue increase in our Q2 outlook is driven largely by data center and gaming. In data center, we expect sequential growth in both compute and networking. In gaming, with the move to low hash rate GeForce GPUs and an increased supply of CMP products, we are making a significant effort to serve miners with CMPs and provide more GeForce cards to gamers. If there is additional CMP demand, we have supply flexibility to support it. We believe these actions, combined with strong gaming demand, will drive an increase in our core gaming business for Q2.
Now to look at our outlook for Q2. Revenue is expected to be $6.3 billion plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 64.6% and 66.5%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $1.76 billion and $1.26 billion, respectively. GAAP and non-GAAP, other income and expenses are both expected to be an expense of approximately $50 million. GAAP and non-GAAP tax rates are both expected to be 10%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $300 million to $325 million. Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight that Jeff Fisher and Manuvir Das will keynote Computex on the evening of May 31 U.S time, as well as several upcoming events for the financial community.
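As a quick sanity check on the guidance figures above, the implied non-GAAP gross profit and operating income at the revenue midpoint can be computed directly. This is back-of-the-envelope arithmetic on the stated outlook, not a company-provided projection:

```python
# Q2 FY2022 outlook figures from the call (non-GAAP where noted)
revenue_mid = 6.30e9                               # revenue midpoint, +/- 2%
revenue_lo, revenue_hi = revenue_mid * 0.98, revenue_mid * 1.02

gross_margin = 0.665                               # non-GAAP gross margin, +/- 50 bps
opex = 1.26e9                                      # non-GAAP operating expenses

gross_profit = revenue_mid * gross_margin          # implied gross profit at midpoint
operating_income = gross_profit - opex             # implied operating income at midpoint

print(f"revenue range:    ${revenue_lo/1e9:.3f}B to ${revenue_hi/1e9:.3f}B")
print(f"gross profit:     ${gross_profit/1e9:.2f}B at the midpoint")
print(f"operating income: ${operating_income/1e9:.2f}B implied")
```

At the midpoint this works out to roughly $4.19 billion of non-GAAP gross profit and about $2.93 billion of implied non-GAAP operating income, before the ~$50 million of other expense and taxes noted above.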
We will be virtually attending the Evercore TMT Conference on June 7, the BofA 2021 Global Technology Conference on June 9, and the NASDAQ Virtual Investor Conference on June 16.
Our earnings call to discuss our second quarter results is scheduled for Wednesday, August 18. With that, we will open the call for questions. Operator, would you please poll for questions?
[Operator Instructions] And your first question comes from Timothy Arcuri with UBS.
Thanks a lot. Colette, I was wondering if you can double-click a little more on the guidance. I know of the $600 million to $650 million in growth, you said $250 million is coming from CMP, and both gaming and data center will be up. Can we assume that they're up about equally, so you're getting roughly $200 million from each of those? And I guess the second part of that is, within data center, I'm wondering if you can speak to the networking piece. It sounds like maybe it was up a bit more modestly than it has been the past few quarters. I'm just wondering what the outlook is there. Thanks.
Yes. Thanks so much for the question on our guidance.
So I first want to start off with we see demand really across all of our markets.
All of our different market platforms we do plan to grow sequentially.
You are correct that we are expecting an increase in our CMP. And outside of our CMP growth, we expect the lion's share of our growth to come from our data center and gaming. In our data center business, right now our product lineup couldn't be better.
We have a strong overall portfolio, both for training and for inferencing. And we're seeing strong demand across our hyperscales and vertical industries. We've made a deliberate effort on the gaming perspective to supply to our gamers the cards that they would like, given the strong demand that we see.
So that will also support the sequential growth that we are expecting.
So you are correct, that we do see growth sequentially coming from data center and gaming, both contributing quite well to our growth.
Thanks a lot, Colette.
I didn't answer your second question, my apologies, on Mellanox.
Additionally, Mellanox is an important part of our data center. It is quite integrated with our overall products. We did continue to see growth this last quarter and we are also expecting them to sequentially grow as we move into Q2. They are a smaller part of our overall data center business, but again, we do expect them to grow.
And your next question comes from C.J. Muse with Evercore ISI.
Yes, good afternoon. Thank you for taking the question. In your prepared remarks, I think I heard you talk about a vision for acceleration in data center as we go through the year. And as you think about the purchase obligations that you reported up 45% year-on-year, how much of that is related to long lead time data center and how should we interpret that in terms of what kind of ramp we could see in the second half, particularly as you think about perhaps adding more growth from enterprise on top of what was hyperscale driven growth in the April quarter? Thank you.
Let me take the first part of your question regarding our purchasing of inventory and what we're seeing in just both our purchase commitments and our inventory. The market has definitely changed to where long lead times are required to build out our data center products.
So we're on a steady stream to both commit longer term so that we can make sure that we can serve our customers with the great lineup of products that we have.
So yes, a good part of those purchase commitments is really about those long lead times, the components to create the full systems. I will turn the second part of the question over to Jensen.
What was the second part of the question, Colette?
The second part of the question was, what do we see in the second half as it relates to the lineup for enterprise. And we articulated in our prepared remarks that we're seeing an acceleration. Thank you.
Yes. We're seeing strength across the board in data centers, and we're seeing strengthening demand. Our data center, as you know, is accelerated with a range of applications: scientific computing, both physical and life sciences; data analytics and classical machine learning; cloud computing and cloud graphics, which is becoming more important because of remote work; and, very importantly, AI, both for training as well as inferencing, from classical machine learning models like XGBoost all the way to deep learning-based models like conversational AI, natural language understanding, recommender systems, and so on.
And so we have a large suite of applications in our NVIDIA AI and NVIDIA HPC SDKs that accelerate these applications in data centers. They run on systems that range from HGX for the hyperscalers to DGX for on-prem to EGX for enterprise and edge, all the way out to AGX autonomous systems. And this quarter, at GTC, we announced one of our largest initiatives, and it's taken us several years.
You've seen us working on it in the open over the course of the last several years, and it's called EGX, our enterprise AI platform. We're democratizing AI; we're bringing it out of the cloud, we're bringing it to enterprises, and we're bringing it out to the edge. And the reason for that is because the vast majority of the automation the world has to do involves data with data sovereignty issues or data gravity issues that can't move to the cloud easily.
And so we have to move the computing to their premises and oftentimes all the way out to the edge. The platform has to be secure, it has to be confidential, it has to be remotely manageable. And of course, it has to be high performance, and it has to be cloud-native; that is, built like the cloud, the modern way of doing cloud data centers.
And so this stack has to be modern.
On the one hand, it has to be integrated into classical enterprise systems, which is the reason why we work so closely with VMware and accelerated VMware's data center operating system, its software-defined data center stack, on BlueField. Meanwhile, we ported NVIDIA AI and NVIDIA HPC onto VMware so that they could run distributed, large-scale accelerated computing for the very first time. That partnership was announced at VMworld and at GTC, and we're in the process of going to market with all of our enterprise partners, their OEMs, their value-added resellers and their solution integrators all over the world.
And so this is a really large endeavor, and the early indications of it are really exciting. And the reason for that is because, as you know, our data center business is already more than 50% vertical industry enterprises. By creating this easy-to-adopt and easy-to-integrate stack, it's going to allow them to move a lot faster.
And so this is the next major wave of AI. This is a very exciting part of our initiative. And it's something that I've been working on for -- we've been working on for quite a long time.
And so I'm delighted with the launch this quarter at GTC. The rest of the data center is doing great, too.
As Colette mentioned, hyperscale demand is strengthening. We're seeing that for computing and networking.
You know that the world's cloud data centers are moving to deep learning, because every small percentage that they get out of predictive inference drives billions and billions of dollars of economics for them.
And so the movement towards deep learning shifts the data center workload away from CPUs, because accelerators are so important.
And so in hyperscale, we're seeing great traction and great demand. And then lastly, supercomputing. Supercomputing centers all over the world are building out. And we're really in a great position there to fuse, for the very first time, simulation-based approaches with data-driven approaches, what is called artificial intelligence.
And so across the board, our data center is gaining momentum. And we see -- we just see great strength right now and it's growing strength. And we're really set up for years of growth in data center. This is the largest segment of computing as you know, and this segment of computing is going to continue to grow for some time to come.
And your next question comes from Aaron Rakers with Wells Fargo.
Yes, thanks for taking the questions. Congratulations on the results. I'm going to first slip in two of them here.
First of all, Colette, I think in the past you've talked about how much of your gaming install base is on the pre-ray tracing platforms, really kind of the context behind the upgrade cycle that's still in front of us. That's question one. And then, on the heels of the last question, I was just curious, with things like VMware's Project Monterey, as we think about the BlueField-2 product and BlueField-3, how should we think about those starting to become, or when should they become, really material incremental revenue growth contributors for the company? Thank you.
So, yes, we have definitely discussed the great opportunity that we have in front of us of folks moving to our ray-traced GPUs. And we're in the early stages of that. We've had a strong cycle already, but still only approximately 15% of the installed base has moved, ticking up a little bit from that at this time.
So it's a great opportunity for us to continue to upgrade a good part of that install base, not only just with our desktop GPUs, but the RTX laptops are also a great driver of growth and upgrading folks to RTX.
Colette, do you want me to take the second one?
Aaron, a good -- great question on BlueField.
First of all, the modern data center has to be rearchitected for several reasons. There are several fundamental reasons that makes it very, very clear that the architecture has to change.
The first insight is cloud-native, which means that a data center is shared by everybody. [Indiscernible] you don't know who's coming and going, and it's exposed to everybody on the internet. Number two, you have to assume that it's a zero-trust environment because you don't know who's using it. It used to be that we had perimeter security, but those days are gone because it's cloud-native, it's remote access, it's multi-tenant, it's public cloud; the infrastructure is used for internal and external applications.
So number two has to be -- it has to be zero trust.
The third reason is something that started a long time ago, which is software-defined in every way, because you don't want a whole bunch of bespoke custom gear inside a data center; you want to define the data center with software.
You want to be software defined. The software defined data center movement enabled this one pane of glass, a few IT managers orchestrating millions and millions of nodes of computers at one place. And the software runs what used to be storage, networking, security, virtualization and all of that -- all of those things have become a lot larger and a lot more intensive. And it's consuming a lot of the data center.
In fact, the estimate -- depending on how you want to think about it and how much security you want to put on it, if you assume that it's a zero-trust data center -- is that probably half of the CPU cores inside the data center are not running applications. And that's kind of strange, because you created the data center to run services and applications, which is the only thing that makes money.
The other half of the computing is completely soaked up running the software-defined data center just to provide for those applications. And you could imagine even accepting that, if you like, as the cost of doing business.
However, it commingles the infrastructure, the security plane and the application plane and exposes the data center to attackers.
And so you fundamentally want to change the architecture as a result of that: to offload that software-defined virtualization, the infrastructure operating system, if you will, and the security services, and to accelerate them. Because Moore's Law has ended, moving software that was running on one set of CPUs -- which is really, really good already -- to another set of CPUs isn't going to make it more effective; separating it doesn't make it more effective.
And so you want to offload that, take that software and accelerate it using accelerators, a form of accelerated computing.
And so these things are fundamentally what BlueField is all about. We created the processor that allows us to do it: BlueField-2 replaces approximately 30 CPU cores, and BlueField-3 replaces approximately 300 CPU cores, just to give you a sense of it. And BlueField-4, we're in the process of building already.
And so, we've got a really aggressive pipeline to do this.
Now, how big is this market? The way to think about it is that every single networking chip in the world will be a smart network interface card [indiscernible]. It will be a programmable, accelerated infrastructure processor. And that's what the DPU is: it's a data center on a chip. And I believe every single server node will have one. It will replace today's NIC with something like BlueField, and it will offload about half of the software processing that's consuming data centers today. But most importantly, it will enable this future world where every single packet and every single application is being monitored in real time, all the time, for intrusion.
And so, how big is that application? How big is that market? Just the servers: 25 million servers a year. That's the size of the market. And we know that servers are growing, so that gives you a feeling for it. And then in the future, servers are going to move out to the Edge, and all of those Edge devices will have something like BlueField. And then, how are we doing? We're doing POCs now with just about every internet company. We're doing really exciting work there. We've included it in high performance computing, so that it's possible for supercomputers in the future to be cloud-native, to be zero trust, to be secure and still be a supercomputer. And then we expect next year to have meaningful, if not significant, revenue contribution from BlueField, and this is going to be a really large growth market for us.
You can tell, I'm excited about this. And I put a lot of my energy into it. The company is working really hard on it. And this is a form of accelerated computing that's going to really make a difference.
And your next question comes from Vivek Arya with Bank of America Securities.
Thanks for taking my question. Jensen, is NVIDIA able to ring-fence this crypto impact with your CMP product? So even if, let's say, crypto goes away for whatever reason, the decline is a lot more predictable and manageable than what we saw in the 2018-'19 cycle. And then part B of that is, how do you think about your core PC gamer demand? Because when we see these kinds of 106% year-on-year growth rates, it raises questions of sustainability.
So give us your perspectives on these two topics: how does one ring-fence the crypto effect, and what do you think about the sustainability of your core PC gamer demand? Thank you.
Sure. Thanks a lot.
First of all, it's hard to estimate exactly how much and where crypto mining is being done.
However, we can only assume that the vast majority of it is contributed by professional miners, especially when the amount of mining increases tremendously like it has now [ph].
And so we created the CMP. And CMP and GeForce are not fungible.
You could use GeForce for mining, but you can't use CMP for gaming. CMP yields better, and producing those doesn't take away from the supply of GeForce.
And so it protects our GeForce supply for the gamers. And the question that you have is, what happens on the tail end of this? There are several things that we hope. We learned a lot from the last time, but you never learn enough about this dynamic. What we hope is that the CMPs will satisfy the miners, and that the CMPs at work will stay in the professional mines. We're trying to produce a fair amount of them, we have secured a lot of demand for the CMPs, and we will fulfill it. And what makes it different this time is several things. One, we're at the beginning of our RTX cycle, whereas Pascal was the last GTX, and the last crypto cycle hit at the tail end of the GTX cycle. We're at the very beginning of the RTX 30 cycle. And because we reinvented computer graphics, we reset the computer graphics industry. And after 3 years, the entire graphics industry has followed: every game developer needs to do ray tracing, every content developer and every content tool has moved to ray tracing.
And so if you use ray tracing, these applications are so much better, and they simply run too slow on GTXs. So we're seeing a reset of the installed base, if you will, at a time when the gaming market is the largest ever. We've got this incredible installed base of GeForce users. We've reinvented computer graphics, reset the installed base and created an upgrade opportunity that's really exciting at a time when the gaming industry is really large. And what's really exciting on top of that is that gaming is no longer just gaming. It's infused into sports, e-sports. It's infused into art. It's infused into social.
And so gaming has such a large cultural impact now; it's the largest form of entertainment. And I think the experience we're going through is going to last a while.
And so, one, I hope that the CMP will steer our GeForce supply to gamers. We see strong demand, and I expect to see strong demand for quite some time because of the dynamics that I described. And hopefully, in the combination of those two, we'll see strong growth in our core gaming business through the year.
And your next question comes from John Pitzer with Credit Suisse.
Yes. Good afternoon, guys. Thanks for letting me ask the question. Jensen, I had two hopefully quick questions.
First, I hearken back to the mantra you guys put out a couple of analyst days ago: the more you spend, the more you save. And you've always been very successful at bringing down the cost of doing something to really drive penetration growth.
And so I'm curious, with the NVIDIA Enterprise AI software stack, is there a sense you can give us of how much that brings down the cost of deploying AI inside the enterprise? And do you think, whether COVID-lockdown related or cost related, there's pent-up demand that this unlocks? And then my second question is just around government subsidies. There's a lot of talk out of Washington about subsidizing the chip industry, and a lot of that goes towards building fabs domestically. But when I look at AI, I can't think of anything more important to maintaining leadership relative to national security. How do we think about NVIDIA and the impact that these government subsidies might have on you, your customers or your business trends?
The more you buy, the more you [indiscernible] -- there's no question about that. And the reason for that is because we're in the business of accelerated computing; we don't accelerate every application.
However, for the applications we do accelerate, the acceleration is so dramatic. And although we sell a component, the TCO of the entire system -- all the services, all the people, the infrastructure and the energy cost -- is reduced by X factors, sometimes 5x, sometimes 10x, sometimes 15x.
And so we set our mind on accelerating certain classes of applications. Recently we worked on cuQuantum so that we could help the quantum computing industry build simulators, so that they could discover new algorithms and invent future computers, even though those won't happen until 2030.
For the next 15 to 20 years, we're going to have some really, really great work that we can do using NVIDIA GPUs for quantum simulations. We recently did a lot of work in natural language understanding and in computational biology, so that we could decode biology, understand it, predictively improve upon it and design new proteins. That work is so vital. And that's what accelerated computing is all about.
Our Enterprise software, and I really appreciate the question.
Our enterprise software used to be just about vGPU, which is virtualizing the GPU inside the VMware environment or inside the Red Hat environment, making it possible for multiple users to use one GPU, which is the nature of enterprise virtualization. But now with NVIDIA AI, NVIDIA Omniverse and NVIDIA Fleet Command -- whether you're doing collaboration or virtual simulations for robotics and digital twins to design your factory, or you're doing data analytics to learn the predictive features that could create an AI model, a predictive model that you can deploy out at the Edge using Fleet Command -- we now have an entire suite of software that is consistent with today's enterprise service agreements. It's consistent with today's enterprise business models, and it allows us to support customers directly and provide them with the service promises that they expect, because they're trying to build mission-critical applications on top. And, more importantly, by productizing our software, we give our large network of partners -- OEM partners, value-added resellers, system integrators, solution providers, this large network of hundreds of thousands of IT sales professionals that we are connected to -- a product that they can take to market.
And so the distribution channel -- the sales channel of VMware, the sales channel of Cloudera, the sales channels of all of our partners, [indiscernible] and design, Autodesk, [indiscernible] and so on and so forth -- all of these sales channels and all of these partners are now taking our stacks to market. And we have fully integrated systems that are open to the OEMs, so that they can create systems that run the stack. And it's all certified, all tested, all benchmarked and, of course, very importantly, all supported.
And so this is a new way of taking our products to market. Our cloud business is going to continue to grow, and that part of AI is going to continue to grow; that business is direct. We sell components directly to them, and we support them directly. But there are 10 of those customers in the world.
For enterprises, there are thousands of them, in industries far and wide.
And so I think we now have a great software stack that allows us to take it to the world's enterprise market, so that everybody can buy more and save more.
And your final question comes from Stacy Rasgon with Bernstein.
Hi, guys. Thanks for taking my questions. [Indiscernible] Colette.
So Colette, last quarter you had suggested that Q1 would be the trough, I guess for gaming as well as the rest of the company, and that it would grow sequentially through the year. Given the strength we're seeing in the first half, do you still believe that is the case? I heard you guys, I think, dance around that point a little bit in response to one of the other questions, but could you clarify: is it still your belief that the core gaming business can grow sequentially through the rest of the year? And I guess the same question is for data center, especially since it sounds like hyperscale is now coming back after a few quarters of digestion, plus all of the other tailwinds you've talked about. Is there any reason to think that data center itself shouldn't also grow sequentially through the rest of the year?
Yes, Stacy, thanks for the question.
So first of all, let me start with what we talked about with our Q1 results. When we were looking at Q1, we were really discussing a lot about what we expected between Q4 and Q1.
Given what we knew was still high demand for gaming, we believed we would continue to grow between Q4 and Q1, which often we don't. And we absolutely had the strength in overall demand to grow. What that then led to was, again, continued growth from Q1 to Q2, as we are working hard to provide more supply for the strong demand that we see.
We have talked about that we have additional supply coming.
We expect to continue to grow as we move into the second half of the year as well for gaming.
Now, we only guide one quarter at a time, but our plan is to take the supply, serve the overall gamers and work on building out the channel, as we know the channel is quite lean.
And so yes, we do still expect growth in the second half of the year, particularly when we see the lineup of games, the holidays coming and back-to-school, all very important cycles for us. And there's a great opportunity to upgrade this installed base to RTX.
Now, in terms of data center, we'll speak in terms of our guidance here.
We have growth from Q1 to Q2 planned in our overall guidance. And we do see, as things continue to open up, an opportunity to accelerate in the second half of the year for data center.
We have, again, a great lineup of products here. It couldn't be a better lineup, now that we've also added the inferencing products and the host of applications that are using our software.
So this could be an opportunity as well to see that continued growth.
We will work on serving the supply that we need for both of these markets. But yes, we definitely can see growth in the second half of the year.
There are no further questions at this time. CEO Jensen Huang, I'll turn the call back over to you.
Well, thank you. Thank you for joining us today. The NVIDIA computing platform is accelerating. We are now ramping the new platforms and initiatives launched at GTC. There are several that I'll mention.
First, enabled by the fusion of NVIDIA RTX, NVIDIA AI and NVIDIA [indiscernible], we built Omniverse, a platform for virtual collaboration and virtual worlds, to enable tens of millions of artists and designers to create together in their own metaverse.
Second, we laid the foundation to be a three-chip data center scale computing company with GPUs, DPUs and CPUs.
Third, AI is the most powerful technology force of our time. We partner with cloud and consumer internet companies to scale out and commercialize AI-powered services. And we're democratizing AI for every enterprise and every industry. With NVIDIA AGX certified systems, the NVIDIA Enterprise AI suite, pre-trained models for conversational AI, language understanding and recommender systems, and our broad partnerships across the IT industry, we are removing the barriers for every enterprise to access state-of-the-art AI. Fourth, the work of NVIDIA Clara in using AI to revolutionize genomics and biology is deeply impactful for the health care industry, and I look forward to telling you a lot more about this in the future. And fifth, the electric, self-driving and software-defined car is coming. With NVIDIA DRIVE, we are partnering with the global transportation industry to reinvent the car architecture, reinvent mobility, reinvent driving and reinvent the business model of the industry. Transportation is going to be one of the world's largest technology industries. From gaming, metaverses, cloud computing, AI, robotics, self-driving cars, genomics and computational biology, NVIDIA is doing important work and innovating in the fastest growing markets today.
As you can see, on top of our computing platforms that span PC, HPC, Cloud and Enterprise to the Autonomous Edge, we've also transformed our business model beyond chips. NVIDIA vGPU, NVIDIA AI Enterprise, NVIDIA Fleet Command and NVIDIA Omniverse add enterprise software licenses and subscriptions to our business model. And NVIDIA GeForce Now and NVIDIA DRIVE, with Mercedes-Benz as the lead partner, are end-to-end services on top of that. I want to thank all of the NVIDIA employees and partners for the amazing work you're doing. We look forward to updating you on our progress next quarter. Thank you.
This concludes today's conference call.
You may now disconnect.