Scale AI’s Alexandr Wang on the most powerful technological advancement of our time
It was only recently that Scale AI stepped into the limelight. But since 2016, it has been one of Silicon Valley’s quiet but indispensable forces – the data infrastructure powering the entire AI industry. Today, every significant large language model (LLM) is built on top of Scale’s data engine. Because of Scale, AI has the potential to transform the global economy.
At its helm is founder and CEO Alexandr (Alex) Wang. Born in Los Alamos, New Mexico, as the child of two physicists, Alex saw the profound impact of science and technology demonstrated many times. But he rarely followed anyone else’s playbook.
Before Scale, Alex tinkered with artificial intelligence and deep learning. He created AI algorithms that could detect people’s emotions based on facial expressions or power a refrigerator camera to catch if his roommates were eating his food. Alex understood that progress in AI would not hinge on algorithms or technical limitations but on data availability.
“AI is the most important technological advancement of our time.” - Alexandr Wang
Alex believed he could build the data infrastructure platform to support the entire AI ecosystem. So, at 19 years old, Alex dropped out of MIT to launch Scale. Around this time, the Accel team met Alex, and we were struck by his determination. In the years following, Scale has been at the forefront of every significant advancement in AI, contributing to projects such as autonomous vehicle programs at General Motors and Toyota, the US Government's AI initiatives, and several leading AI labs at Meta, Microsoft, and more.
At just 26 years old, Alex is now a seasoned veteran in the AI space. In this episode, he joins us to reflect on Scale’s founding story and, in his own words, the advantage of naïveté and how a fresh perspective enabled him to achieve things no other artificial intelligence company has. He also shares advice for founders on how to get a head start in a developing industry and gives his thoughts on how the AI ecosystem has developed faster than he ever expected.
- 00:00 - How Alex’s upbringing as the child of two physicists shaped his view of the world
- 03:00 - The significant bottleneck for AI development that inspired Alex to launch Scale
- 04:40 - Defining AI infrastructure and how to spot a great infrastructure opportunity
- 10:00 - Alex’s advice for founders on critical early entry into new markets before they are “cool”
- 13:00 - Why “naïveté” can help new founders accomplish things that have left other companies in a rut
- 23:00 - Future predictions on how AI models could become highly personalized
- 27:00 - Exploring the potential impact of AI on the national and global economy
- 33:44 - Alex’s opinion on why “algo-raving” is the best way to experience music
Featuring: Alexandr Wang, CEO and Founder of Scale
Learn more: www.accel.com/spotlighton/scale-alexandr-wang
Explore more episodes from the season:
- Episode 01: AssemblyAI's Dylan Fox on building an AI company during a period of radical change
- Episode 02: Roblox’s Daniel Sturman on Building Great Teams in the AI Era
- Episode 03: Ada’s Mike Murchison on how AI is revolutionizing customer service
- Episode 04: Merge’s Shensi Ding on powering the next generation of AI SaaS companies
- Episode 05: Scale AI’s Alexandr Wang on the most powerful technological advancement of our time
- Episode 06: Bard’s Jack Krawczyk on the birth of Google’s AI chatbot and the creative potential that lies ahead
- Episode 07: Synthesia’s Victor Riparbelli on creating an environment to harness AI benefits and reduce harms
- Episode 08: Ironclad's Cai GoGwilt on a decade of anticipating the transformative power of AI
- Episode 09: Checkr’s Daniel Yanisse on tackling bias in people and AI
- Episode 10: Cinder’s Glen Wise on trust and safety threats and holding AI accountable
- Episode 11: Transcend’s Kate Parker on putting data back into the hands of users in an AI-driven world
- Episode 12: Arm’s Rene Haas on building the brain of artificial intelligence
Learn more about Accel’s relationship with Scale:
- Scale & Accel
- The Next Step of AI: Accel’s Continued Investment in Scale
- Accelerating & Democratizing Access to AI: Accel’s Ongoing Collaboration with Scale AI
*Scale's episode was recorded in October 2023 and references events that have now passed as “upcoming”
*Due to the Thanksgiving holiday next week, the next episode will air on 11/28/23
Daniel Levine (00:04):
Welcome to Spotlight On. I'm Dan Levine, and I'm excited to interview Alex Wang, founder and CEO of Scale.
Alexandr Wang (00:14):
Excited to be here, Dan.
Scale’s core value proposition and Alex’s background
Daniel Levine (00:20):
Thanks so much. Alex, can you tell us just a little bit about Scale so that people know what it is, what you do?
Alexandr Wang (00:26):
Yeah, so at Scale we power the data infrastructure for the entire AI industry. Every major large language model (LLM) is built on top of Scale's data engine. And then we also partner with enterprises and governments, particularly the US government, in helping them implement and use this technology specifically on their data sets, their problems, their businesses, their unique challenges to basically power the future of AI development.
Daniel Levine (00:51):
Very cool. Very cool. Okay, so one of my favorite things about you is your origin story. Not that you're a comic book character, but tell us a little bit about where you're from, why you're from there, and then get us to how you decided to work on Scale.
Alexandr Wang (01:04):
I was born in Los Alamos, New Mexico. For those of you who have seen Oppenheimer,
Daniel Levine (01:10):
Oppenheimer's really taken it to the next level. I feel like now everyone knows Los Alamos.
Alexandr Wang (01:13):
Everyone knows Los Alamos. Yeah. Anyway, so I grew up in Los Alamos, New Mexico,
Alexandr Wang (01:17):
Both my parents were weapons physicists working for Los Alamos National Lab. My mom is one of the country's experts in plasma physics and plasma fluid dynamics, very relevant for nuclear weapons. And so I grew up in this hotbed where everybody was talking about science and technology all the time. And when I went to college, when I went to MIT, this was right around when DeepMind came out with AlphaGo, Google came out with TensorFlow. It was like the start of when machine learning engineers were starting to get paid a lot more than regular engineers in the valley.
Alex’s introduction to Machine Learning and early projects
Alexandr Wang (01:54):
So I basically dedicated myself to studying AI/machine learning. And at that point, it was quite primitive compared to what it is today. It was the very beginning of deep learning and deep neural networks starting to take off. And basically in the course of being in school, I built a few side projects. There was an algorithm that I tried to build that could detect people's emotions based on their facial expression. Another one where I tried to build a camera inside a refrigerator that would tell me when my roommates were taking my food. And I realized pretty quickly that there's this incredible feature of these models, which is that they're just like sponges for data, basically. So you take a model, you take the exact same model – in most cases, it's the exact same algorithm – and you just load it up with different data, and it learns very surprising cognitive traits from that data. So for example, in the facial recognition case that I just talked about and the camera inside my fridge, it's the exact same algorithm, and they just sucked up different data. And I basically realized that the bottleneck for AI development and deployment wasn't the intuitive things. It wasn't the algorithm. It wasn't something more technical. It was actually just data.
Alexandr Wang (03:16): And availability of data. Our beginning thesis was that there needed to be an infrastructure platform like an AWS or Stripe to power the AI industry, specifically focused on data. And in the past seven years since starting Scale, I think we've been able to be at the forefront of every major development in AI. So we worked on all the major autonomous vehicle programs, from General Motors to Toyota to many of the leading startups. We worked with the US government on the first major AI programs of record and their first major deployments of artificial intelligence technology. And then we worked really closely with all the major labs on large language models. We've worked with OpenAI since 2019, and closely with many of the other labs like Meta, Microsoft, Character, and many others. And so the industry has progressed much faster than I could have even predicted over the course of the past 10 years roughly, and it's been an incredibly wild ride.
AI Infrastructure and how models have evolved over time
Daniel Levine (04:19):
Very cool. I mean, one of the things that you and I have always talked about that I think is so fundamental, and you kind of alluded to it, is this infrastructure aspect of it. What does it mean to you to be AI infrastructure, to be infrastructure in the data side? What specifically is the difference between being kind of some random tool that people use and being an infrastructure company in the space?
Alexandr Wang (04:41):
One of the key things that I think sets apart infrastructure is that you're deeply focused on primitives. If you look at the history of AWS, I think one of the strokes of genius in AWS's history was that they spent a lot of time boiling it down to the core APIs that are necessary for cloud computing broadly, or just computing in a general sense. They invented cloud computing, and they really boiled it down: the first two APIs were EC2, the ability to have compute on demand, and S3, the ability to have storage on demand. And so if you think about the AI paradigm, these models, everything that we see, whether it's ChatGPT or Bard or all these advanced models, they all boil down to the same base primitives. Those base ingredients are: there's the compute.
(05:38): So now that's basically high-end GPUs and accelerators that are able to run a lot of the training and inference at extremely high speed. There's data, which is the data that these models are training themselves on, fundamentally fueling these models' improvement and teaching them what various concepts are. There's talent. Talent is obviously the missing ingredient in a lot of technology, very, very high-end talent, and AI talent is certainly very scarce. And then there's the algorithms, and the algorithms, like the transformer, have actually become open source. Most of the algorithmic development is broadly open sourced and broadly accessible. So then you look at these primitives, high-end compute and data, and you think about which primitives will continue scaling as the industry scales.
(06:29): And that, I think, is the heart of what a great infrastructure opportunity is. It's something where, A, it's a base primitive, so there's no way around it. You can't build AI without data, fundamentally. And then it's something that has to scale exponentially as the entire field develops. So in particular, the entire AI industry is governed by what are called the scaling laws, which were discovered around 2018 or 2019 by OpenAI. But the basic observation is that as you make models way bigger, they get way better. More technically, every 10x increase in model size, every exponential increase in model size, creates a predictable increase in the performance of the models. So one of the things we've seen is, GPT-2 was a 2 billion parameter model. GPT-4, based on most of the reports, is something close to a one and a half to 2 trillion parameter sparse model. Almost a thousand times bigger.
Alexandr Wang (07:38): And most companies are on the record as seeking another roughly hundred times, certainly in terms of the amount of compute they want to throw at these models. And so we're just in this incredible renaissance period. You're very rarely in the midst of a hundred thousand X scale-up of anything in humanity. Maybe RNA vaccines are humanity's last example. But we're in the midst of this incredible scaling of models, and I think that's governing a lot of the performance we're seeing. If you went back to when we were working with OpenAI on GPT-2, the models could every once in a while spit out a word where you're like, oh, that's cool. It was very, very primitive. And now, talking to GPT-4, I have a friend who mentioned that they use GPT-4 as an executive coach. They just talk to it at night and it teaches them how to think about things. It's pretty incredible.
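The scaling-law relationship Alex describes, a predictable performance gain for every 10x increase in model size, is usually modeled as a power law. Here is a minimal sketch in Python; the exponent and scale constant are invented for illustration and are not the coefficients OpenAI actually published:

```python
# Toy power-law "scaling law": loss falls smoothly as models get bigger.
# The constants below are illustrative only, not published values.

def loss(params: float, alpha: float = 0.076, scale: float = 8.8e13) -> float:
    """Hypothetical loss L(N) = (scale / N) ** alpha for N parameters."""
    return (scale / params) ** alpha

# Each step is a 10x scale-up, roughly the billions-to-trillions
# trajectory from GPT-2 to GPT-4 that Alex mentions.
for n in [2e9, 2e10, 2e11, 2e12]:
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

The point of the power-law form is the predictability: each 10x in size buys a similar multiplicative improvement, which is what lets labs commit enormous compute budgets in advance.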
Alex’s thoughts and advice for emerging AI founders
Daniel Levine (08:41):
That's awesome. One of the things I always think about, because I've been talking with a lot of folks recently and they've been asking me, foolishly, for my thoughts on AI. I have no idea what's going on, but I come back to this framing that you used, where the core primitives are basically talent, compute, and data. Everyone knows that AI talent is extremely expensive, and it's worth it. Everyone knows there's a shortage of GPUs, and you can look at Nvidia and see just how well that business has gone; I think their data center business grew a hundred percent quarter over quarter. And yet maybe the best kept secret to the broader world might be the value of data. Now of course, if you ask people at all these companies who are building these models, they would all tell you it's extremely valuable: our usage has scaled up, our cost structure has scaled up, and Scale is our favorite vendor. But the broader world maybe hasn't quite figured it out yet. So it's one of my favorite best kept secrets in Silicon Valley, which is great, and I think the word's getting out. So, you were once a young 19-year-old founder in the AI space, and you're a whole 25, 26? All of 26 now. I mean, you're basically a seasoned veteran, but there are more and more founders every day starting AI companies right now, and you've been through that. What's some advice you would give them about starting a company in the space?
Alexandr Wang (09:57):
Yeah, so in general, one of my observations from studying successful companies is that most of the time you start companies in an area or a trend long before it becomes cool. And the thing that allows you to actually win is that by the time the thing becomes cool or interesting, you've had this running head start to be able to take advantage of the opportunity. So that might cause founders to be slightly dissuaded from starting AI companies today. Another way to think about this is that for founders starting AI companies today, there's just an incredible amount of competition. A few friends of mine were working on an idea for AI-powered sales, and there are probably literally 50, if not more, companies working on AI-powered sales. And in that kind of environment, even if you're an incredibly good product person, even if you're an incredibly good engineer, you're still probably only going to build something that's marginally better than your competition. So it's really difficult to thrive in those kinds of environments. I mean, in Zero to One, Peter Thiel writes about this concept, which is that the best capitalists actually skate away from competition.
(11:16): I'm of two minds about this, though. On the other hand, AI is probably the most important technological advancement of our time. I think we'll look back on this in a few decades and realize that AI was the greatest economic engine for revitalizing growth in the developed world. And so I certainly think there's a lot of opportunity. So I'd probably boil it down to: I would pick areas which are not very popular, but in which AI can have a large impact. And the other thing I would caution folks about in building these companies is don't try to start something that's too far ahead of where the capability of the technology is.
Alexandr Wang (11:59): I think a lot of people, there's a lot of hype in the industry. I think in many cases for good reason, the technology is very impressive, but the models are still not good at various things. They still hallucinate, they're still not great at reasoning. Tool use doesn't work particularly well. And so I wouldn't start a company assuming that these problems will go away. They might go away, but that's a lot to risk your entire company on.
Breaking open new industries and the value of entrepreneurial naivete
Daniel Levine (12:22):
It's almost interesting. One of the things we talk about internally is that there were people at various points who started internet companies, or you started a mobile company, but the reality is you had to start a particular company that happened to be mobile or happened to be about the internet. And of course the internet really changes things if you're thoughtful about it, but at the end of the day, what does the actual company do? What are you excited about? What are you passionate about? And right now, there's probably a glut of people who are starting AI companies who maybe haven't been as thoughtful about what they're doing. And by the way, of course the second thing you would tell them is you need a lot of data. It's expensive, the talent, the GPUs, and of course getting a lot of data. I'm curious, one of my favorite things about you is when you started at 19, you were obviously very deep into the AI space, having been at MIT and having thought about it a lot. But people have been studying and researching AI for decades, since before you were born, and yet, I like to think it was almost a benefit to you that you had, call it, the right combination of experience, thoughtfulness, passion, and some of the inexperience.
(13:25): You hadn't learned some of the lessons that people had learned 20 years ago. So when a new technology comes along, you can back it. I think this is true of people at OpenAI as well, among the first to jump on transformers and say, wow, this is totally different. Obviously they had deep expertise in the space, but it probably helped that they hadn't been doing it for 30 years, so they could take a fresh view and appreciate something new. What are some of the things that you think founders should not learn from the last decade, or really the last five years, of AI? What are the things where you think there are opportunities to be a little naive, a little silly maybe?
Alexandr Wang (14:01):
Yeah, in general, I believe there's a huge premium on naivete. You look at the field of AI, you look at a lot of Elon's companies, and there's just this incredible track record of approaching industries with a totally blank slate. Not having a fine-grained understanding of what makes things hard is actually part of what allows you to accomplish things that other companies or other efforts, stuck in ruts, are unable to accomplish. And I think OpenAI is a great example of that. The thing I would probably encourage founders to be more open-minded about is the business case, not being so dogmatic about the exact HBS-business-case-style framework for thinking about the business. We talked about this a lot in the early days of Scale: in one of the funding rounds when we were pitching Scale, we got a lot of no's.
(15:12): We talked to a lot of investors, and a lot of them were just like, yeah, we just don't see this being that important, or we just don't see it being that big, or the TAM's a little small. And I think you made a great point, which is that the TAM was small for every important company when it was getting started. The whole point is that it's growing super fast, so you kind of just have to ignore them. And I remember being quite discouraged at the time, but I think you helped encourage me in the moment. And I think it's true in this case: the reason an investor should have invested in Scale is because you believed that the AI industry was going to fundamentally grow many, many, many fold, which is obviously playing out today, but you had to have that conviction early on. I think as a founder, you need to have similar kinds of convictions. Another good example of this is that I, like many others, witnessed the growth and explosion of Midjourney as a platform. I remember when Midjourney first came out, it seemed like, oh, this is just super weird.
(16:18): This doesn't seem like a business. Who knows what's going on here? And then fast forward to now, it's a bigger business than most SaaS businesses, dramatically bigger. And it's still one of the strangest user experiences, I would say.
How AI is rewriting the rules of building a business, and the wisdom of thinking big and being a little irrational
Daniel Levine (16:34):
It's a great example of this, by the way, because it's actually one of my case examples. People come to me, and the first thing they say is, if only Midjourney had its own UI, how much bigger could it be? And I'm always like, maybe, but have you ever built a business that's as big as Midjourney? I feel like the UI is working pretty well for them, and they seem pretty thoughtful about it. But it's one of my favorite, incidentally, shock and awe things to say to people. A lot of people come in, chest puffed out, and go, wow, but what if we built a great mobile app UI for Midjourney? And I'm always like, I don't know that you're in a good position to give feedback to Midjourney. I admire the gall of these extremely proud, full-of-bravado people who are like, I could do it better. And I'm like, there's no evidence to suggest that you could do it better, and I think it's going quite well. They're my case example of that piece.
Alexandr Wang (17:30):
Totally. I mean, another way to think about this is that the rules are being totally rewritten right now because of these models. No sane business school graduate would have advised OpenAI to spend hundreds of millions of its billion dollars, or whatnot, on training GPT-4 speculatively, without understanding what the business plan was going to be. So the rules of physics of AI businesses are different.
Daniel Levine (18:00):
One of my favorite isms, or thoughts about starting companies, is that you have to accept a level of being a little irrational. Prospective founders ask me, how do I pick an idea? I'm like, it's going to be something you're irrationally excited about, which for very smart, hardworking, thoughtful people is almost the worst thing you can tell them. Many of them are like, what do you mean, be irrationally excited? Be irrational? And I'm like, well, it's just hard to know how well it's going to go, and there aren't great examples. I had the pleasure of working at Dropbox, and I think it picks up on a couple of the trends that you've mentioned. The first is that I don't think Dropbox could have existed without a technology like S3, which had just launched. A lot of AWS was built on the back of startups; big companies, or startups of a prior era, would've said, I'm not going to use this cloud thing.
(18:47): But Drew didn't give it a second thought. He was like, well, this is the logical choice. Sure, I should just use S3. It's exactly what I need. And that was true for Instagram, Pinterest, and a few other folks. And then the second thing is that Dropbox was always a product for people, for customers, for users; that's how they thought about it. But everyone always tried to bucket the company as one of these legacy enterprise storage companies or something like that. We just never thought that way as a business, at least not in the first decade. And it always baffled me, but of course, Dropbox is a great company with a great product and incredible talent, including some wonderful folks at Scale. And so it's just such a good example, and this is true of pretty much every business I can think of. It's one of my favorite things about Silicon Valley, and I think it's why there are so many younger founders, so many founders who are passionate, curious, interesting people with diverse hobbies and passions.
(19:35): So it's one of my favorite parts. This is all great. One of my favorite things when I get to talk to you, and I think one of the great privileges of being an investor, is you get to invest in these great companies when they're really early and nobody knows about them, and in these great people before people know about them. And even now, as I alluded to earlier, Scale is a little bit of a secret to a certain extent. I don't think people fully appreciate the impact it has on the category and on the world. And of course, you run the company and you're something of a futurist in this space. You're somebody who spends a lot of your time thinking about AI. You get to see companies and what they're building from the very beginning using Scale. What are some of your favorite predictions for the future of AI, for how the world's going to evolve, that you like to talk about over drinks or dinner with friends?
The data locked in video, multimodality, and Alex’s predictions for the future of AI
Alexandr Wang (20:29):
Yeah, I think we're in a period of quite high uncertainty in a number of dimensions. And I think there's sort of like you could debate about, I often debate with other people in AI around what the outcome of some of these fronts of uncertainty look like. And just to say a few, I think in terms of the evolution of model capability. So how are the models going to get better? I think there's some things that are pretty clear. I think it's very clear that multimodality is going to continue improving. So models are going to get better at understanding images, they're going to get better at creating images, they're going to be better at the audio stuff is going to get better. And then maybe perhaps most impactfully, I think video, video models are going to keep improving pretty dramatically over time as well. So we're going to continue seeing not just text, but basically all the data formats that we as humans create and consume are going to be very convincingly generated by AI systems.
Daniel Levine (21:26):
And just to dive in on that, you mentioned video, and it's something that I think about a lot as well. A lot of people have a view of how big the internet is, generally speaking, and it's quite large. But when you think about the corpus of data that YouTube has, or TikTok, or aspects of Facebook and Instagram, it's really massive. So to the extent that you can understand video, you really grow the internet by orders of magnitude. I mean, there were always these fun legacy studies from Cisco, which powered, and still powers, a lot of the networking and switches for the internet. They would talk about video, and that's why they did the Flip camera, and that's why they got into the video conferencing business, because they were like, most of the internet is video; Netflix and YouTube take up a huge amount of engagement.
(22:09): So yeah, I just want to double click, which is, I think, again, this is a good example of a thing that you and I spent a lot of time thinking about, but I think people don't quite realize how large video is, how much information is there, how much language, content, understanding is in video, and to the extent that some of these companies can unlock that video content, it will be a meaningful driver. Not only is it a meaningful area for companies to target, but it's also a meaningful area to drive the future of how these models can perform and get better.
Alexandr Wang (22:34):
Totally, yeah. And I think one way to think about multimodality is that these models are basically going to be able to see and understand all the things that we as humans see and understand. So you're going to get to worlds where, and this is something that is not possible today, you have what a lot of us wish we always had: a friend looking over our shoulder in all of our social situations. In any technical situation, you wish you had a mechanic you could call whenever you hit a tricky issue. And that's going to be, I think, very feasible with this technology, having an AI assistant that's always present and can look at basically anything that you see. So I think that's one exciting direction. The other direction I think is going to be really key is models going deeper and deeper into proprietary data sets. Today, most models are trained predominantly off of internet data. And the internet seems like a lot of data, but 99.99% of the data out there is actually private and proprietary.
(23:41): One simple way to think about this: of the photos that you take on your phone, how many end up on the public internet? For me at least, it's like 0.001% or something crazy. So because of that, I think there's going to be a huge amount of model improvement from enterprises, governments, and large organizations in general taking these models and, whether it's training them, fine-tuning them, or using RAG (retrieval-augmented generation), somehow intermingling them with your own corpus of data to create AI systems that are uniquely capable, uniquely customized, uniquely yours. A third trend that I think the jury is still out on is what the misuse of models is going to look like. In particular, there's this UK AI summit happening next week focused on frontier risks. There are different threads on this. There are people who are worried about the classic AI doom argument, which is that AI takes over and is a humanity-level risk. There's the other thread, which I subscribe to more, which is that AI will be used by bad people to do bad things. There are real risks around weaponry and biosecurity, and real risks around use in cybersecurity and cyber attacks. And then maybe the last thread that is interesting, where I think again the jury is still out, is what is the global story of AI, so to speak. OpenAI is an American company.
(25:21): Google is an American company. Many of the leading companies today are American companies; DeepMind is maybe the one notable exception, in London. And so far it's been a predominantly American, and certainly anglophone, technology. And I think ChatGPT spurred this. It was like the starting gun of a race, something like the modern space race, in which now you see some of the highest performing open source models coming out of Europe. Mistral, a French company, created some of the best open source models. The UAE and Saudi Arabia have been incredibly aggressive on AI; they've both announced large supercomputing clusters, and the UAE has trained multiple LLMs at this point, including some of the largest Arabic LLMs. China's obviously extremely aggressive; they've bought something like $5 billion worth of high-end GPUs from Nvidia. So there's clearly this global proliferation and global race element, where many, many countries are viewing this as a key priority for them as countries. And I don't know exactly how that falls out long-term.
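The RAG (retrieval-augmented generation) approach Alex mentions for proprietary data can be sketched in a few lines. This is a toy illustration, not Scale's or any lab's actual pipeline: real systems use vector embeddings and an actual LLM, while here retrieval is naive word overlap and the documents are invented:

```python
import re

# Toy retrieval-augmented generation (RAG): fetch the most relevant
# private documents for a query, then prepend them to the model prompt.

def tokens(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by shared-word count with the query."""
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble an LLM prompt with retrieved context ahead of the question."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Invented documents, standing in for a private enterprise corpus.
docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping is free on orders over 50 dollars.",
    "Support hours are 9am to 5pm on weekdays.",
]
print(build_prompt("What is the refund policy for returns?", docs))
```

The prompt, not the model weights, carries the proprietary knowledge, which is why RAG is the lightest-weight of the three options Alex lists (training, fine-tuning, RAG).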
How enterprise AI can disrupt industries and change the world
Daniel Levine (26:46):
Okay, so let me just recap this, then I might dig in on some of these. So one, multimodal, which I think is a super interesting area; I think it'll unlock massive data sets that'll bring more use cases online. So multimodal is a big one. Enterprise with proprietary data sets is a really good one, and I want to double click on that. The sovereign side of things, so the different geopolitical risks. And then of course the kind of offense versus defense, good for the world, bad for world peace. I'm curious, a lot of people talk about enterprise AI, and it's obviously a huge opportunity. What is the canonical example you use when you're trying to explain it? I shouldn't say to your parents, because your parents are overqualified. If you were trying to explain it to my parents, what is the use case that you would use to really crystallize things for somebody in enterprise AI?
Alexandr Wang (27:33):
Ultimately you have to go sector by sector. But I think one way to think about it from a top-down perspective is that of the $27 trillion of US GDP, $15 trillion of that is in services. I think one of the opportunities for AI, broadly speaking, is that you now have the ability to use technology to transform that $15 trillion of US GDP, the majority of US GDP. And if you look in those buckets, or peek under their covers, the biggest bucket is healthcare, and the second biggest is financial services. So going to healthcare for a second: it's very reactive. You have an incident, you have symptoms, and then you go into the doctor. And once in however many times, it ends up being something extremely severe that ends up being a huge risk to health and incredibly expensive. Fundamentally, this paradigm is just wrong. And the reason that we have to be so reactive in the healthcare system is ultimately a shortage of medical talent. If there were ten times as many doctors, then hypothetically doctors could be proactively checking in on you very, very frequently, and therefore preventing a huge amount of the very tail risks and the late-stage risks.
(28:59): And so I think one vision of how AI changes the world is AI-based healthcare: you will have a healthcare AI buddy constantly monitoring everything about you, and through that process, ensuring that you remain healthy, and if there's anything that might cause you to be unhealthy, catching that very, very early.
Daniel Levine (29:21):
And it's a good example, I think, in healthcare, where a lot of times in AI you need a couple of different things. We've always talked about this: you need the talent in the category, you need the infrastructure, the GPUs and data centers, and you need the data. And in healthcare, a good example of this is that more healthcare data is becoming accessible. Even things like the Apple Watch that I'm wearing right now and continuous glucose monitoring; scan prices are coming down. And it forms this beautiful virtuous cycle where the cost to acquire data goes lower. But historically people would've said, well, what use is more data? We don't have the people who can make sense of that data in a thoughtful way. But that's obviously a huge opportunity for the AI space to say, we can take in that data, we can process it, we can understand it. And maybe it starts out as an early warning, and then over time it gets more and more to give doctors and the medical establishment dramatically more leverage.
Alexandr Wang (30:07):
There is a huge question around what the impacts on labor are going to be, and I think it's one of the first technologies that puts a lot of pressure on knowledge labor. I think what that means is, if you are in one of these sectors, how do you go from being the computer, the person doing a lot of the computation, so to speak, to being a user of computers? How do you utilize the AI systems to be dramatically more effective and orchestrate a number of AI systems behind the scenes?
Daniel Levine (30:39):
Yeah, I mean for what it's worth, and I totally agree, it's a super interesting thing. As you know, you've had the displeasure of knowing me for seven years now. I'm just very optimistic generally, and my optimism is it's so hard to predict what will happen. I mean, I think I'll go on a tangent then come back, which is, it's incredible to me. I think five years ago, you would've heard a lot about the decline of the American technology sector and things like that. And there were a lot of people who would say, I thought we would get this by the 2020s, and we don't have it yet. But it goes to show how hard it is to predict things. I probably agree with you. I think AI optimistically could be a complete and utter game changer for the world, and I think it'll affect things in such different ways.
(31:17): And it all came about because an unlikely-ish group of people from places like Stripe, and an investor, got together with scientists from Google and said, we're going to work on this OpenAI thing. It even started out as kind of a nonprofit, and it's become more of a combination of nonprofit and company, but it's a very unlikely beginning. I think it's hard to say in advance, we're going to draw this up. And then it was going all right, and they were producing some amazing stuff, but all of a sudden they launched a consumer-facing chat product, and it was not only successful, it's the most successful consumer subscription product ever in that short a time. I can't fathom anyone could have predicted it. And so I think that's one of the great things about, in this case I'll get a little bit nationalistic, one of the great things about America: we've built a culture, particularly in Silicon Valley, where we don't have the hubris to dictate how it will work or what will make it happen.
(32:08): We kind of give freedom to let a bunch of flowers bloom. So similarly with labor, one of my favorite things about humanity is that I can never predict how people will spend their money. One of my favorite anecdotes, which you've probably heard, I don't have many of them so I repeat them over and over again, is that when Thomas Edison invented the phonograph, the core use case he envisioned was recording office memos. The first recording of sound, and he thought: office memos. And the reason was that, to him, the best music was in a symphony hall; those buildings had been constructed for their acoustics, it was high end, and any other music wasn't worth listening to. That was Thomas Edison. That was his view. The music industry didn't exist the way it does now, and not that much more than a hundred years later, Taylor Swift is keeping the global economy afloat. So you can never tell. I'm very eager to see what will happen. I tend to be optimistic about it. I think people will be freed up to pursue creative ideas and think interesting things, and I think AI will think interesting things and come up with creative ideas as well. And I think it'll just be an awesome time to live. What are your Halloween party plans?
Closing thoughts, algo-raving and Halloween plans
Alexandr Wang (33:22):
I think the best way to have a party, my partner and I have been really all about this recently, is this new thing called algo-raving.
Daniel Levine (33:32):
Okay, yeah. Tell us all about algo-raving.
Alexandr Wang (33:34):
There are coding languages that allow you to generatively create music. In a similar way to how you might write sheet music, you can basically control a lot of the waveform using code. And so there's this thing called live coding, where, in the course of playing their set or performing, somebody's terminal will be projected up on a screen and they will live code: they will write in this generative language and create new music on the fly by editing the code in their editor. And usually they also have a shader up, so their terminal is transparent and behind it they have a visualization that's also controlled by generative code. So they'll alternate between editing the music code and editing the shader code. I think for a coding audience, it's the best way to experience music.
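Live-coding environments like TidalCycles and Sonic Pi are the usual tools here. As a rough illustration of the underlying idea, that a waveform can be driven entirely by code rather than recorded audio, here is a minimal Python sketch (the helper name, the toy melody, and the output filename are my own, not anything from the conversation) that synthesizes a few sine-wave notes and writes them to a WAV file:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def sine_note(freq_hz, duration_s, amplitude=0.4):
    """Return 16-bit PCM samples for a sine tone at freq_hz."""
    n_samples = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n_samples):
        t = i / SAMPLE_RATE
        # Linear fade-out so notes don't click at their boundaries.
        envelope = 1.0 - (i / n_samples)
        value = amplitude * envelope * math.sin(2 * math.pi * freq_hz * t)
        samples.append(int(value * 32767))
    return samples

# A tiny "score" expressed as data: (frequency in Hz, duration in seconds).
melody = [(261.63, 0.25), (329.63, 0.25), (392.00, 0.25), (523.25, 0.5)]

pcm = []
for freq, dur in melody:
    pcm.extend(sine_note(freq, dur))

# Write the samples out as a mono 16-bit WAV file.
with wave.open("melody.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack("<%dh" % len(pcm), *pcm))
```

In a live-coding set, the performer edits the equivalent of the `melody` data and the synthesis functions while the loop is running, so the change is heard immediately; real systems add scheduling, pattern languages, and effects on top of this basic sample-generation idea.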
Daniel Levine (34:42):
I think it's very cool. I look forward, 20 years from now, to the next Taylor Swift era being algo-raving. And then last question on this thread: what's your costume going to be for Halloween? I have fond memories, as I often remind you, of October 2016, when you were a carrot. You were an exceptional carrot: very orange, very green, a high quality carrot. What's the plan, or what will you have been in 2023 when people watch this?
Alexandr Wang (35:07):
I pushed my partner a lot this year to do a couple's costume, so my partner will be an acorn. That was the fixed point. There was a lot of insisting. And so therefore, I will be a squirrel.
Daniel Levine (35:28):
You could have gone with a changing-color leaf, a different aspect of the tree. What does it say about you that you went with the thing that eats the acorn and stows it away for winter?
Alexandr Wang (35:41):
Tree was considered. Leaf did not make the consideration set. There's just a supply chain constraint around what is the universe of costumes that get preemptively made. By the way, this really did cause me to look into and understand the costume industry and the Halloween costume economic opportunity, and it's wild. In many cases, the fastest way to get a costume online is through Halloweencostumes.com.
They have next day delivery where Amazon doesn't in most cases.
Daniel Levine (36:22):
I had no idea.
Alexandr Wang (36:23):
And you think about this problem behind the scenes. I run a very operational company, so we think a lot about the operations of it, and it must be crazy. For most of the year you probably have very mediocre demand, and then in the month of October, particularly the last week of October, demand blows through the roof and you have to fulfill all these orders. And your selling prop, your value prop, is that you can deliver extremely quickly. I can't imagine what it looks like, but the staff and real estate footprint for this company must just skyrocket in October, and then they shutter all of it.
Daniel Levine (37:06):
Everyone always hears about how many people Amazon's hiring for the holidays, but what must it be like at Halloweencostumes.com?
Alexandr Wang (37:11):
I would love to see the P&L.
Daniel Levine (37:13):
You would love the P&L, the October hit. This is the story behind Spirit Halloween, right? The stores that pop up everywhere: the rest of the year they're abandoned storefronts or nothing, and then all of a sudden they're there. Okay, so let's sum up the episode. If you're a young founder out there: maybe not AI. Halloween costumes.
Alexandr Wang (37:34):
Halloween, I think there's a lot of opportunity. I think in any case where you have spiky demand, there's opportunity to take advantage of that. Very cool.
Daniel Levine (37:42):
Well, Alex, thank you so much for coming on Spotlight On, and I'm sure I'll see you around soon.
Alexandr Wang (37:48):
Yeah, thanks Dan.