Podcast
The Future of AI Chips
In this episode of The Future Of, Dr. Ronen Dar, Co-Founder and CTO of Run:ai, joins host Jeff Dance to discuss the future of AI chips. They dive into the increasing demand for GPUs and the importance of optimizing GPU usage. Ronen also shares his insights on the future of specialized chips for specific AI workloads, the ethical considerations in AI, and upcoming advancements in the field.
Dr. Ronen Dar (Guest) – 00:00:01:
The entire space of how chips are being manufactured is changing right now. You see a few things. You see Intel is starting to manufacture chips for other companies. That’s a new move by Intel. Until now, they manufactured chips in their own fabs just for themselves. And now, Intel’s CEO announced that they’re going to manufacture chips for Microsoft worth $15 billion, I think. It’s a big move from Intel.
Jeff Dance (Host) – 00:00:36:
Welcome to The Future Of, a podcast by Fresh Consulting, where we discuss and learn about the future of different industries, markets, and technology verticals. Together, we’ll chat with leaders and experts in the field and discuss how we can shape the future human experience. I’m your host, Jeff Dance. In this episode of The Future Of, we’re talking about the future of AI chips, which have become quite a hot commodity recently. With me is Dr. Ronen Dar, co-founder and CTO of Run:ai, an NVIDIA preferred partner that optimizes and orchestrates GPU compute resources for AI and deep learning workloads. Run:ai has been recognized by Wired, Forrester, and Gartner, and in 2022 raised $75 million of funding. Their platform offers GPU optimization, cluster management, and AI/ML workflow management. So Dr. Ronen is really familiar with the topic, and we’re grateful to have him on the show. He comes with a relevant undergrad, master’s, and PhD, so he really has the depth of expertise to help us understand what’s going on in this hot space. Grateful to have you with us, Ronen.
Dr. Ronen Dar (Guest) – 00:01:52:
Yeah. Hey, Jeff, good to be here. Thank you for inviting me. AI, and chips for AI, are an amazing, amazing topic with a lot of importance. A lot of things are happening in that space. So I’m happy to be here and speak about it.
Jeff Dance (Host) – 00:02:08:
Tell us a bit more about your journey. How did you come to be so focused on this area? Help the audience understand, as we think about the future of AI chips, how you fit in.
Dr. Ronen Dar (Guest) – 00:02:17:
So obviously, since we started Run:ai in 2018, I’ve been focused on AI, on running AI workloads, and on GPUs. We’ve been in that space for the last six years, and we saw the innovation happening in the space, everything that’s happening with the demand for GPUs and other AI chips. And that’s amazing. Before Run:ai, I came with a background that mixes both academia and industry. I was in academia for many years: I did my bachelor’s, my master’s, my PhD, and my postdoc, all in electrical engineering. In parallel, I also worked in the industry in a few chip companies. I worked for Intel, and I worked for a startup here in Israel where we built a chip that optimizes the performance of flash storage. That startup was bought by Intel, so Intel came into Israel and started an R&D center around chip design, and a lot of Intel’s hardware is being designed here in Israel based on that acquisition. So I got familiar with how chips are being designed, what it takes, and what it takes to have an R&D organization around that. And in 2018, I started Run:ai together with my co-founder Omri. Omri is the CEO. He’s an amazing person. We met in academia; he also did his master’s, and we worked together a lot back then. And we saw back then, in 2018, two very important things. One, that AI is going to change the world, right? There is an amazing new technology that is going to be transformative. I believe it’s the most transformative technology that humankind has ever created. That’s one. And two, we saw that GPUs and chips, and actually compute power, are going to be critical for AI, and that people will need more and more compute power to build better and better AI. We saw that there was a gap between what’s needed to run AI on GPUs and what’s out there, and we went and started Run:ai around that, around building solutions for running AI workloads on GPUs. A lot of things have happened since then. Yeah, it’s been a good journey so far.
Jeff Dance (Host) – 00:04:40:
Great journey. And it’s amazing to think of all the growth that’s happened in the last year. But the fact that you had the depth of expertise, plus the foresight, five years prior to all this massive growth, to be in the space and be preparing for it is quite an incredible journey. So that’s awesome. Let’s start with some of the basics. We all know AI chips, specifically GPUs, are very hot right now. Why is there such a demand? I was just looking at NVIDIA now being valued as the fourth largest company in the world. And they were kind of under the radar there for a little bit. They were smaller than Intel; now Intel is a small company compared to NVIDIA’s valuation. So we’ve seen this in the stock market. We’re hearing about it. But why is there such a big demand for AI chips right now?
Dr. Ronen Dar (Guest) – 00:05:27:
What happened with NVIDIA in the last year is absolutely amazing, specifically in the last year, but the hype and the growth of NVIDIA as a company, and the demand for their product, the GPU, have actually been growing tremendously for the last decade. I think a few things happened. So let’s speak first about what happened in 2023. In 2023, the demand for GPUs grew amazingly fast, much faster than ever before. And it followed ChatGPT and OpenAI, right? OpenAI introduced ChatGPT to the world in November 2022, and suddenly the entire world had access to the amazing capabilities of large language models, of generative AI models. My mother is using ChatGPT, right? Everyone has access to that. So people saw what amazing capabilities GenAI holds, how transformative it can be to industries, and what impact it can have. And that triggered a race among many companies to build generative AI applications, to train LLMs, to build applications and workloads on top of LLMs. So, a lot of buzz and excitement around generative AI. Now, to build generative AI applications and to use large language models, you need compute power, and you need a lot of compute power. You need GPUs. And with that race came a huge demand for GPU power. And the demand got so high that it was much higher than what the supply chain of GPUs could supply. So last year a GPU shortage was created, right? People were looking for GPUs, for the newer GPUs, and couldn’t find them. The GPU shortage is still somewhat real. Still, if you go to AWS, for example, or one of the other cloud providers, and try to spin up the newest H100 GPU, you might wait four days until you get one GPU machine like that. So the shortage is still there. I think the supply chain is much better now, right? NVIDIA had to increase the pace of manufacturing those GPUs, together with TSMC and the other chip manufacturers. And not only is the situation much better, but NVIDIA grew significantly with it. All of us heard about the NVIDIA stock and how it did in the last 18 months. So that’s what happened in the last year. But I think the growth and the trends around GPUs have existed for the last 10 years already. Since 2012, 2013, we’ve seen this trend of bigger and bigger models. We also saw it in crypto somewhat, but in AI specifically, we saw this trend of bigger and bigger models. In AI, bigger is better. So in the last 10 years, people have developed bigger and bigger models with more and more parameters: models that are more capable, that are trained on more data, that can get to much better accuracy and do new stuff. And with all of that came the need for more and more GPUs, right? If you train bigger models on more data, you need more computing power; you’re doing more processing. So in the last decade, we saw the requirement in terms of computing power to train state-of-the-art models grow 100 million times. That’s an eight-orders-of-magnitude increase in just the amount of compute power that you need to train state-of-the-art models. GPT-4 was trained last year on 100 million times more compute than state-of-the-art models 10 years ago. That’s crazy. The trends have been there for the last decade: more and more compute power, more and more GPUs needed.
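To put that eight-orders-of-magnitude figure in perspective, here is a quick back-of-the-envelope calculation (an illustration added for readers, not something computed in the episode):

```python
import math

growth = 1e8   # ~100 million x more training compute, per the episode
years = 10     # over roughly a decade

doublings = math.log2(growth)                 # ~26.6 doublings of compute
months_per_doubling = years * 12 / doublings
print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
# -> 26.6 doublings, one every ~4.5 months (vs. ~24 months for Moore's law)
```

A 100-million-fold increase over ten years implies training compute doubled roughly every four and a half months, which is why GPU demand outran the supply chain.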
Jeff Dance (Host) – 00:09:31:
The GPU was originally created as a graphics processing unit, to render graphics, but it just turns out they’re really good at parallel processing, right? And at doing mathematical equations, and at really data-intensive applications. And so it was kind of happenstance that they became so needed and powerful. Was it sort of an accident, because they weren’t originally created for what AI has come to be? Is that the reality? Did NVIDIA have this foresight, or did they just stumble upon the idea, like, oh, these just so happen to be one of the most important things for these LLMs?
Dr. Ronen Dar (Guest) – 00:10:10:
Yeah, it’s an amazing question, Jeff. I think you know the answer. NVIDIA is an amazing company with a big vision, and they had that vision already 20 years ago. Actually, NVIDIA is not a new company, right? NVIDIA has existed for the last 30 years, 30 years that they’ve been selling GPUs. And the main application for GPUs 20 years ago was gaming, right? Graphical applications. GPUs were really good at that. And gaming is wonderful, but gaming is relatively a niche workload, a niche industry compared to all the cloud workloads out there. But NVIDIA had this vision that, as you say, GPUs can accelerate not just graphical workloads, but any workload with linear algebra calculations. Their vision was that they could accelerate scientific computing. So they created CUDA in 2006 to make it much easier for developers to accelerate their workloads with GPUs. Because before CUDA, you needed to be an expert to program a workload to run on a GPU. GPUs are really complex. So they created a software framework called CUDA to make it much, much easier for people to run workloads on GPUs. And that enabled, several years afterwards, the big breakthrough of AI in the industry. The big breakthrough of deep learning happened because of a few things, but one of the most critical reasons is that GPUs were out there. The researchers from the University of Toronto were the first to show this big breakthrough, that you can train deep learning models on GPUs. They used CUDA for that, and they were able to accelerate their workloads by orders of magnitude. They could train bigger models, and they could achieve results that no one had seen before. They just broke records. And that, in 2012, was the big breakthrough of deep learning on GPUs. GPUs enabled it.
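To give a feel for the kind of acceleration being described, here is a minimal sketch using PyTorch, one of the frameworks built on top of CUDA (an illustration added for readers; the episode doesn’t reference this code):

```python
import torch

# Matrix multiplication is the core linear algebra operation in deep learning.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

cpu_result = a @ b  # runs on the CPU

if torch.cuda.is_available():
    # Moving the work to the GPU is a one-line change. CUDA, via frameworks
    # like PyTorch, hides the low-level kernel programming that used to make
    # GPUs so hard to use for general computation.
    a_gpu, b_gpu = a.cuda(), b.cuda()
    gpu_result = a_gpu @ b_gpu    # same math, executed massively in parallel
    torch.cuda.synchronize()      # GPU calls are asynchronous; wait for them
```

This is the kind of workload where a GPU’s thousands of parallel cores pay off, and why accessibility, not raw hardware, was the pre-CUDA bottleneck.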
Jeff Dance (Host) – 00:12:24:
It wasn’t just memory anymore, not just CPU; GPUs sort of brought computing to a new level. There’s talk about this GPU revolution. We’ve heard about Sam Altman wanting to work with countries to raise $7 trillion, with a T, to help build more AI chips. As we think about the future, that’s just mind-boggling. What are your thoughts on Sam’s ask and getting involved at the country level? Is this realistic? Tell us more.
Dr. Ronen Dar (Guest) – 00:12:52:
Sam Altman was the first to raise $10 billion for a startup, right? I don’t think any startup before raised such an amount. And they raised $10 billion because they needed a lot of GPUs. All of that money doesn’t go to people, right? It goes to computing power.
Jeff Dance (Host) – 00:13:10:
Microsoft’s servers, to supply ChatGPT.
Dr. Ronen Dar (Guest) – 00:13:14:
Exactly. It goes directly to NVIDIA to buy GPUs, most of it. And now that’s a problem, because NVIDIA dominates the market. And I’m sure that the OpenAI people look forward and see the amount of GPUs and the amount of computing power that they will need to continue to grow as they’ve been growing until now, to continue to innovate in AI, to continue to build bigger and bigger models, and to host ChatGPT and all of their applications. So they will need a lot of GPUs. And right now they’re in a place where, I guess, they really depend on NVIDIA: on the supply chain of NVIDIA, the prices of NVIDIA, and how fast NVIDIA and TSMC are manufacturing GPUs for them. I’m just guessing, but my guess is that it’s around that. It’s a very strategic move around manufacturing chips, and I think that’s what Sam Altman is going after. And there are even bigger stories around geopolitical issues. The entire space of how chips are being manufactured is changing right now. You see a few things. You see Intel starting to manufacture chips for other companies; that’s a new move by Intel. Until now, they manufactured chips in their own fabs just for themselves. And now Intel’s CEO announced that they’re going to manufacture chips for Microsoft worth $15 billion, I think. So it’s a big move from Intel. The entire manufacturing space is changing, also because of the CHIPS Act under Biden and the chip war that is happening between countries right now. So it’s a big, big, big story. I think it’s important, you know, at the country level.
Jeff Dance (Host) – 00:15:10:
There’s a war for chips, and there’s warfare happening where those chips actually matter. So it becomes a strategic component for a country; it relates to their national security and their competitive advantage in the future. And that’s where I could see where Sam has a play in saying, okay, I raised $10 billion. Let me just level up a little bit, round it up to $7 trillion, and start working at the country level instead of with the biggest tech companies in the world. Let me start tapping some of the bigger countries with some extra T’s sitting around. It’s pretty fascinating. But we’re talking a lot about GPUs, and I understand your company, Run:ai, also helps optimize, because it’s not just the GPUs. You’ve got to have GPUs, but there’s a lot that goes into optimizing the solutions for AI. Tell me more about some of those layers and some of the things that you do to optimize performance.
Dr. Ronen Dar (Guest) – 00:16:02:
So we’re doing a lot around optimizing the usage of GPUs. We started from there, right? We started from building technologies. I’m the CTO, so I’ll have to speak about the technology that we’ve built, right? We’ve built a lot of capabilities around optimizing the usage of GPUs, and we go from there: we go on to provide more tools for data scientists and AI engineers to build and train models much more easily, and so on. But I think GPUs became so performant in the last 10 years that it became really difficult to actually utilize all of their power, all of their performance. The performance and the throughput of GPUs, how fast they can calculate things, how fast they can process data, improved by orders of magnitude in the last decade. GPUs also became much bigger, with more GPU memory, because the models became so big. Models just increased in size, and you need more and more memory to store those models, to store their parameters. GPUs had 12 gigabytes of memory 10 years ago; now they have more than 80 gigabytes of memory. So, a big increase in GPUs. There was also an increase in networking; networking became much faster. So the entire infrastructure became much faster, much more performant, and it became really difficult to get all that performance out of the infrastructure. So that’s one thing. The second thing is that GPUs also became very expensive. If you look at AWS prices over the last 10 years, they went from below $1 per hour for a GPU machine with eight GPUs around eight years ago to now, where a machine with eight H100s on AWS costs almost $100 for an hour; so, 100x more expensive, right? But the performance per dollar just kept getting better and better.
Jeff Dance (Host) – 00:18:13:
Order of magnitude.
Dr. Ronen Dar (Guest) – 00:18:14:
Yeah, the performance improved, and the GPUs also became more expensive, but performance per dollar became better and better. But still, now you have users who are getting access to a machine with GPUs that is so costly and so performant that companies are realizing they need to start taking control of how those resources are being utilized, because it’s so expensive. OpenAI spent $10 billion on GPUs. So companies are now looking to get more control over how GPUs are utilized, how people are using them, how those GPUs are being allocated. So that’s one thing. The second thing is that with AI, if I have a project and I get a lot of GPUs for my project, I won’t use those GPUs all the time. I’ll probably use just a fraction of them, and only a little bit of the time will I use all of those GPUs. Meaning, those GPUs are sitting idle for long periods, and they are really expensive hardware. So what we’ve built is a mechanism for pooling computing resources, pooling all the GPUs into a centralized cluster, and essentially our software runs on top of that. And we allow different teams, different users, to share those expensive GPUs. If one team is not using their GPUs, someone else can take them. So GPUs can be shared, and then with our software, all of that very powerful compute can actually be utilized close to 100%. So we enable companies to get much more out of their GPU infrastructure, because we pool those resources and share them in a smart way. That’s the optimization that we bring.
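To make the pooling idea concrete, here is a toy sketch of quota-plus-borrowing scheduling over a shared GPU cluster. This is a hypothetical illustration added for readers; the names and logic are invented and are not Run:ai’s actual implementation:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Team:
    name: str
    quota: int                              # GPUs guaranteed to this team
    queued_jobs: deque = field(default_factory=deque)

def schedule(teams: list[Team], total_gpus: int) -> dict[str, int]:
    """Honor each team's guaranteed quota first, then lend idle GPUs
    to teams whose demand exceeds their share."""
    # Phase 1: give each team up to its quota, but only for real demand.
    allocation = {t.name: min(t.quota, len(t.queued_jobs)) for t in teams}
    idle = total_gpus - sum(allocation.values())
    # Phase 2: unused quota is lent out instead of sitting idle.
    for t in teams:
        extra_demand = len(t.queued_jobs) - allocation[t.name]
        lent = min(idle, max(0, extra_demand))
        allocation[t.name] += lent
        idle -= lent
    return allocation

# Team A has no work queued, so Team B can borrow A's unused share.
teams = [Team("team-a", quota=4),
         Team("team-b", quota=4, queued_jobs=deque(range(7)))]
print(schedule(teams, total_gpus=8))   # {'team-a': 0, 'team-b': 7}
```

A real scheduler adds preemption so a team can reclaim its quota when new work arrives, but the core idea is the same: pooled GPUs plus over-quota borrowing is what pushes utilization toward 100%.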
Jeff Dance (Host) – 00:20:04:
It sounds important for maximizing performance, but also for the efficiency of the costs that go into that equation. That’s pretty cool. So you can get more power to run your models, but you can also share the costs, essentially, with others at the same time.
Dr. Ronen Dar (Guest) – 00:20:22:
Exactly.
Jeff Dance (Host) – 00:20:23:
As we think about AI chips, there’s the GPU, there’s the application-specific integrated circuit (ASIC), the FPGA, and then there’s the CPU. Are there other aspects of AI chips that are really changing, or is it all centering around the GPU?
Dr. Ronen Dar (Guest) – 00:20:37:
So there are the GPUs. NVIDIA is the dominant player; Intel and AMD also have their own GPUs. But there are also initiatives by other companies to offer new types of chips that are more specialized for AI. Google, for example, offers their own chip, which is called the Tensor Processing Unit, and they offer it in their cloud. It’s built somewhat differently than a GPU, and it’s very oriented to deep learning models, to language models. Amazon has their own chips that they build and design and offer on their cloud. Intel has their own. So most of the big companies are building their own offering for AI chips. And there are also startups with more innovative solutions around chips that can compete with the NVIDIA GPUs.
Jeff Dance (Host) – 00:21:42:
It’s interesting. It reminds me of the platforms that we rely on to grow. I don’t know if you’ve read Platform Revolution, but it talks about the railroad and how that was a platform, and then on and on. And then how the cloud in the last 10 years has become a platform. And it seems like now these AI chips are becoming a platform. And these companies seem to be taking matters into their own hands, going, I can’t rely on the ecosystem; I need to build my own; I need to have some control. These are big tech companies with the resources. But it seems like countries are doing the same thing, going, hey, I need access; I need to know that I can get these chips. And so I can see why you’re saying the ecosystem and the supply chain are evolving and changing really quickly. We saw a lot of problems in the pandemic that have now seen some resolution, for example, some of the supply chain problems. But, you know, when you have these big pains in the ecosystem, it creates a lot of innovation and a lot of change and a lot of investment as well. We’ve seen a lot of news articles around the billions of dollars that people are investing in chips. What other innovation are you seeing, as an expert in the field, as we think about the next 10 to 20 years? And that’s a long time. I know you’ve been in this space for a while, and we can all look at NVIDIA’s last 10 to 20 years, look at the stock price, to see how quickly things can change. So knowing that things can change, what are some of the things you’re looking forward to in the future?
Dr. Ronen Dar (Guest) – 00:23:07:
I think, first of all, what people underestimate when speaking about chips for AI is the power of software, the software ecosystem. People are speaking a lot these days about new chips that can maybe beat the NVIDIA GPUs on certain benchmarks and attain better performance. NVIDIA is very dominant in the market today, very much because of the software ecosystem they have been building since 2006, when they started to build CUDA. They built CUDA, and they’re building a lot of layers in the software stack on top of it. So right now it’s really difficult for new companies to get in, because all the models are being trained today on GPUs, so all the innovation is happening on GPUs. But that’s one thing. I do think that we’re going to see new chip offerings that will take some market share. I think inference is really interesting. Inference is different from training, right? Inference is the phase where you take a trained model and actually deploy it; you give it new data, and the model, inferencing, gives you results out of that new data. ChatGPT, for example: when we prompt it, that’s inference. The trained model gives you a result. So inference looks different from training, and the hardware requirements, I think, are different; you need different types of chips. So if I see an opportunity now, it’s around inference. In the next few years, inference is going to be a big, big market. We’ll see more and more AI running in production and more and more applications based on AI. And those applications will probably need to run on accelerators, on hardware, right? On new chips. And NVIDIA’s GPUs might not be the perfect match. So I do think that inference might be an interesting use case. I think we’re going to see, in the next 10 to maybe 20 years, more and more chips that are specialized for specific workloads, like we’re seeing now with chips that are specialized for AI.
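As a small illustration of the inference phase Ronen describes, here is a sketch using the Hugging Face transformers library with a small public model (an example added for readers; it isn’t tied to any product discussed in the episode):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Inference: a trained model is deployed and fed new data (a prompt).
# No weights are updated; this is the phase that runs in production.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("AI chips are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Each generated token requires a forward pass through the model, so production inference is latency-sensitive and often runs at small batch sizes; that profile differs from throughput-oriented training, which is the gap the specialized inference chips aim at.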
Jeff Dance (Host) – 00:25:22:
There’s a lot of specialized clouds, the government cloud, for example. So, a lot more specifically tuned hardware for those specific use cases.
Dr. Ronen Dar (Guest) – 00:25:30:
I think it will become economically viable to build chips specific to workloads that run in the clouds at scale, workloads that have a lot of market, a lot of demand. So economically, it might make sense: more specialized chips. But you know, let’s see. It’s so, so hard to predict what will happen, right?
Jeff Dance (Host) – 00:25:53:
A lot changes in a year in this space. I think Bill Gates said that we overestimate what happens in a year but underestimate what happens in 10 years. With AI, I think it’s more like we overestimate what can happen in three months but underestimate what can happen in a single year; the timescale has come down by an order of magnitude, because the pace of change is so fast. And you mentioned how your mother is using ChatGPT. A lot of people have said ChatGPT is actually an order of magnitude better than a lot of other platforms. I’m curious whether you believe they’re going to be able to keep their edge, because there’s the back-end compute, and there’s also the inference, but it seems from an inference perspective, ChatGPT still has an edge. We work heavily with generative AI for robotics, and so we’re seeing lots of interesting things there, given how you have these specialized models. So I’m in agreement that we’re going to see a lot of specialization as we go forward, around different use cases, different bodies of knowledge, different inference. But do you think that ChatGPT will be able to remain a leader?
Dr. Ronen Dar (Guest) – 00:26:59:
I think OpenAI are doing an amazing job right now. They keep innovating. We saw what they did with Sora, with text-to-video. Those capabilities are out there, and there are some companies doing similar stuff, but when OpenAI do it, they do it much better than anyone else. I haven’t actually used Sora, but from the results I saw out there on the internet, it’s amazing. OpenAI keep doing an amazing job, and they keep creating amazing stories that everyone will remember, from ChatGPT to Sora to Sam Altman being fired and then coming back. So a lot of amazing stories are happening around OpenAI. And at the end of the day, there are only about 500 people in the company. That’s crazy; that’s such a small company, and they’re doing more than, I think, $2 billion in revenue in ARR. So I believe in them. I think they have this big challenge in that right now AI is very costly, but I think they’re working to reduce the cost. And that’s also, I think, an interesting trend with inference: the cost of inference is going down as people optimize their models and make sure they run better, and as GPUs get better. So innovation is driven by bigger models, more parameters, and so on, but then inference cost goes down with more specialized, smaller models. I think OpenAI are doing an amazing job. Let’s see how it goes with the $7 trillion investment, right?
Jeff Dance (Host) – 00:28:50:
Yeah, yeah. Maybe that will decide their fate, whether they retain their leadership. It’s clear they have a lot of leadership, a lot of clout. And sometimes that works out for companies. You look at Apple with the iPhone, and how they evolved into being predominantly a smartphone company, where most of their revenue is; and Tesla is similar. And it’s like, will OpenAI be the first that remains first, or will they be the first that paves the way for someone else to leapfrog them? Because sometimes it’s the second mover that picks up. I think if the capital keeps flowing, then they can stay ahead. There’s an element of them having the capital, but I also think their size may be part of their competitive advantage; they’re operating like a crazily funded startup that has the true innovators on the edge. Talking to others about some of the differences between models, they definitely have an edge. I think it’s possible to sustain as long as the capital keeps flowing. That’s my hunch; it’s easy to say right now, but that’s my hunch as well. Tell me more about ethics. As we think about these AI resources and chips, another big question: the environmental impact is huge. What else do we need to consider from an ethical perspective? AI has a big range of ethics. There are people that are for it, there are people that are against it, and there always will be; as with any change involving people, there will always be those for and against. Any thoughts around the ethical things that we should be considering?
Dr. Ronen Dar (Guest) – 00:30:21:
It’s a hard, hard topic, and a very important one. I think we as humanity experienced how it looks when technology goes bad with social networks. In my opinion, at least, social networks brought a lot of good things, but also a lot of harm. So we saw that, and people have concerns around AI. I also have concerns around AI. It’s such a powerful technology. As I said before, I think it’s the most transformative technology that humankind has ever created; it’s going to change everything. And I think it’s on the tech industry, on the people who are actually building that technology and building applications on top of it, to make sure that the technology brings mainly good to humanity. At the end, it’s on the community itself, on the industry itself, to put the right DNA in place, the right guardrails, the right mindset, and to make sure that this technology brings good things and minimizes the bad things.
Jeff Dance (Host) – 00:31:37:
I think with any technology, there can be a lot of good, and there can be a dark side as well; almost all tech has that. But as we look at some of these transformative things, technology moves so fast that it takes on a life of its own. Sometimes we wake up and we don’t realize what we’ve created or how much change we’ve caused. And humans don’t adapt as fast as technology does. I think the foresight of experts like you and those in the space is important, so that we consider the intent of what we’re doing and consider humanity at the same time. So it’s good to hear your perspective on that. Three more questions as we wrap up. One is: any other advancements in AI that you’re personally excited about that you haven’t shared so far?
Dr. Ronen Dar (Guest) – 00:32:24:
Text-to-video, right? That’s the new stuff by OpenAI. That’s amazing. There are so many things that we’re going to see with AI, so many things that I’m actually excited about. I think everything that we do with data is going to change: how we search things in data, how we get insights from data. It can all move into natural language. I think avatars, like digital images of us on the internet; I’m seeing startups doing those things, and it actually works really, really well. There are a few startups working on that. You mentioned robotics. Robotics might be further down the road, but it’s also really exciting. So a lot of things are going to happen in the next years, right?
Jeff Dance (Host) – 00:33:13:
AI seems to be one of the great accelerators. With our robotics team, we’ve been able to make some great leaps forward as we think about the usability between humans and machines, the translation between things that you need and things that a robot does, and the ability for the robot to see, often leveraging these large language models. And that’s helped us accelerate quite a bit. You’re a leader in the space, and you’re a partner with NVIDIA as well, the dominant GPU leader in the space. Who do you look to for insights as we think about the future? Can you name any leaders or books or resources that listeners might be interested in?
Dr. Ronen Dar (Guest) – 00:33:57:
Yeah, there are a few. First of all, I love to listen to Sam Altman speaking. Sam Altman and Ilya Sutskever. Ilya Sutskever is the chief scientist of OpenAI, so more on the technology side, I love to hear and listen to him. On the infrastructure side, Jensen. Jensen from NVIDIA, the NVIDIA CEO, he’s amazing. He has a big vision. I always listen to his keynotes. Actually, NVIDIA’s conference, GTC, is happening towards the end of March. That’s their biggest conference, and there is so much excitement around it. We, Run:ai, are going to be at that conference. We’re Diamond sponsors; we’re going to have a big booth with a lot of people, and I’m going to give a couple of talks there. And Jensen is going to give a keynote. Jensen’s keynotes are amazing, so I encourage people to listen to his keynote. Two hours, and a lot of great stuff is coming there.
Jeff Dance (Host) – 00:34:58:
Where is it going to be at?
Dr. Ronen Dar (Guest) – 00:34:59:
It’s in San Jose, live. It hasn’t been live since 2019, I think, since COVID hit. But there were virtual keynotes from Jensen in between; they were always there. His keynotes are amazing.
Jeff Dance (Host) – 00:35:13:
Good stuff. Anything else on the future that you think would be beneficial for our listeners?
Dr. Ronen Dar (Guest) – 00:35:19:
Chips are going to continue to be important, right? They’re going to be more and more important. We see everything that is happening with the geopolitical stuff. Chips are going to be critical and strategic to companies and to countries, as you say. And it’s just going to grow. We’re going to see more and more compute out there, more and more data centers out there. So the footprint of what we’re doing with compute is just going to increase in the next 10 years, for sure, 20 years. Compute is really important.
Jeff Dance (Host) – 00:35:53:
Tell me more about what you’re going to be talking about at the NVIDIA conference. What are some of the core topics that you’re going to be covering?
Dr. Ronen Dar (Guest) – 00:35:58:
So one talk is going to be a sponsored talk by us, because we’re Diamond sponsors. I’m going to speak about Run:ai, about challenges with GPUs: how to manage and orchestrate GPUs, how to give your data scientists and AI engineers the best tools out there to do their work as fast as they can, how to get the most out of your GPUs, and so on. That’s one talk. The second talk is about benchmarks. It’s more of a deep dive, more technical, for developers. It was accepted to GTC, so it’s not sponsored, and I’m hardly going to speak about Run:ai at all. It’s going to be about LLM training, about how to train big large language models on GPUs. We did some benchmarking, and I’m going to speak about best practices. So, a really deep dive into how to train LLMs on GPUs.
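For readers curious what a simple training benchmark looks like, here is a hypothetical micro-benchmark of training throughput in tokens per second. It is an invented illustration, not the benchmarks from the GTC talk:

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# One transformer layer stands in for a full LLM; real benchmarks scale up.
layer = torch.nn.TransformerEncoderLayer(
    d_model=1024, nhead=16, batch_first=True).to(device)
opt = torch.optim.AdamW(layer.parameters())
batch, seq = 8, 2048
x = torch.randn(batch, seq, 1024, device=device)

def step():
    opt.zero_grad()
    loss = layer(x).pow(2).mean()   # dummy loss; real runs use next-token loss
    loss.backward()
    opt.step()

for _ in range(5):                  # warm-up: kernels are compiled/cached here
    step()
if device == "cuda":
    torch.cuda.synchronize()        # GPU work is async; wait before timing

start = time.time()
steps = 20
for _ in range(steps):
    step()
if device == "cuda":
    torch.cuda.synchronize()
elapsed = time.time() - start
print(f"{steps * batch * seq / elapsed:,.0f} tokens/sec")
```

Tokens per second per GPU is the usual headline number in such benchmarks; best-practice work then tracks how it changes with batch size, precision, and parallelism strategy.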
Jeff Dance (Host) – 00:36:51:
Dr. Ronen, it’s been so good to have you. Thank you for your leadership here, your expertise, the career that you’ve devoted to this relevant space, and your insights today. We’re looking forward to seeing what you do with Run:ai, given all the momentum you have and how important this topic is. And it’s clear, as a key partner of NVIDIA, that you guys have some amazing, amazing insight. So thanks for joining us today.
Dr. Ronen Dar (Guest) – 00:37:15:
Thanks, Jeff. I had fun.
Jeff Dance (Host) – 00:37:16:
Thanks. The Future Of podcast is brought to you by Fresh Consulting. To find out more about how we pair design and technology together to shape the future, visit us at freshconsulting.com. Make sure to search for The Future Of in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found, and click subscribe so you don’t miss any of our future episodes. On behalf of our team here at Fresh Consulting, thank you for listening.