Organized by: Keigensha
Marc Andreessen | Co-founder of the venture capital firm Andreessen Horowitz
Reason: I’m always skeptical of people who claim that this time is different, whether it’s in technology or cultural trends. So, with artificial intelligence (AI), is this time really different?
Andreessen: AI has been a core dream of computer science, dating back to the 1940s. There have been five or six AI booms in history, and each time people believed AI was finally about to realize that dream. But every boom ended in an AI winter, and the dream went unrealized. Now we are in the middle of another AI boom.
Things are really different now. We have clear tests that measure human-like intelligence capabilities. Computers are actually starting to outperform humans on these tests. These tests are not just about, “Can you do math faster?” They’re more about interacting with the real world, like, “Can you process reality in a better way?”
In 2012, computers surpassed humans at identifying objects in images for the first time, which was a major breakthrough. It's what made self-driving cars possible. What is the essence of a self-driving car? It has to process a huge number of images and judge, "Is that a child or a plastic bag? Should I brake or keep going?" Tesla's self-driving isn't perfect yet, but it has performed well. Waymo, which we invested in, is already in operation.
About five years ago, we started seeing breakthroughs in what’s called natural language processing, where computers started getting really good at understanding written English. They’re also getting good at speech synthesis, which is actually a very challenging problem. Recently, there was a major breakthrough with ChatGPT.
ChatGPT is just one example of the broader phenomenon of large language models (LLMs), which have astounded not only people outside the tech industry but many within it as well.
Reason: For those of us who don't understand the internals, ChatGPT does seem like a magical trick. As Arthur C. Clarke's third law says: "Any sufficiently advanced technology is indistinguishable from magic." Sometimes it is indeed amazing. What do you think of ChatGPT?
Andreessen: Well, it's both a trick and a breakthrough. It gets to the deep questions: What is intelligence? What is consciousness? What does it mean to be human? Ultimately, all of these big questions are not just about, "What can machines do?" They're about, "What do we want to achieve?"
An LLM can basically be thought of as a very advanced form of autocomplete. Autocomplete is a familiar computer feature. On an iPhone, when you start typing a word, it completes the rest of the word for you. Gmail can now autocomplete entire sentences: you type part of one, like "I'm sorry, I can't make it to your event," and it suggests the rest. An LLM can be thought of as autocomplete across paragraphs, across 20 pages, and in the future, across entire books.
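The autocomplete analogy can be made concrete with a toy sketch. This is purely illustrative, not how LLMs actually work: real models use neural networks trained on enormous corpora, but the interface is the same, given the text so far, predict a likely next word, over and over. The tiny corpus and the `autocomplete` helper below are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "training data": a handful of sentences, lowercase, space-separated.
corpus = (
    "i am sorry i can not make it to your event "
    "i am sorry i can not attend "
    "i am happy to make it"
).split()

# Count how often each word follows each other word (a bigram model).
next_word_counts = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_word_counts[prev][cur] += 1

def autocomplete(prompt, n_words=5):
    """Greedily append the most frequent next word, up to n_words times."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = next_word_counts.get(words[-1])
        if not candidates:  # no continuation seen in training data
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("i can"))
```

Notice that the model has no idea what "sorry" means; it only knows which words tend to follow which. Scaled up from word pairs to whole contexts, and from counting to learned neural representations, that is the sense in which an LLM is "autocomplete."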
When you're ready to write your next book, you type the first sentence, and LLM will suggest the rest of the book. Will you follow its suggestions? Probably not. But it will give you some suggestions, chapter suggestions, topic suggestions, example suggestions, even wording suggestions. With ChatGPT, you can already do that. You can type: "This is my first draft, and here are the five paragraphs I just wrote. How can I rewrite it better? How can I write it more concisely? How can I make it easier for young people to understand?" And it will automatically complete it in all kinds of interesting ways. Then it's up to you to decide what to do with it.
Is this a trick or a breakthrough? The answer is both. Yann LeCun, a legend in the field of artificial intelligence who works at Meta, thinks this is not a breakthrough but more of a trick. He compares it to a puppy: it will autocomplete the text you want to see, but it doesn't actually understand anything being said. It has no idea what a human is, and it doesn't understand the laws of physics. It produces what are called hallucinations: when there is no accurate completion available, it still wants to make you happy, so it completes with a "hallucination," weaving together names, dates, and historical events that never happened.
Reason: You mentioned the word "hallucination," but another concept that comes to mind is imposter syndrome. I'm not sure if it's humans or AI that might have this syndrome, but sometimes we all just say what we think other people want to hear, right?
Andreessen: This gets to the heart of the matter: What do people actually do? And that's what makes a lot of people uncomfortable: What is human consciousness? How do we form our thoughts? I don't know about you, but in my own life I find that a lot of people say what they think you want to hear, every day.
These autocompletes are everywhere in life. How many people are making points that they actually think, that they actually believe, and how many are making points that they basically think other people expect them to make? We see this in politics—you are an exception, of course—where most people hold the same views on almost every conceivable issue. We know these people haven't discussed all of these issues in depth from first principles. We know it's social reinforcement at work. Is this actually more powerful than a machine trying to do something similar? I think it is. I think we're going to find that we're a lot more like ChatGPT than we think.
Alan Turing invented what's called the Turing test. He basically said, "Suppose we develop a program that we think has artificial intelligence. Suppose we develop a program that we think is as smart as a person. How can we be sure that it's really smart?" So you have a human subject, and they're communicating with another person and a computer in a chat room. Both the human and the computer try to convince the subject that they're a real person and the other is a computer. If the computer can convince you that it's a person, then it's considered artificial intelligence.
One obvious problem with the Turing test is that people can be easily fooled. Does that mean the computer is good at fooling you? Or does it just reveal underlying weaknesses in what we believe to be profound human nature?
There is no single metric for intelligence. Both humans and computers are better or worse at some things. But computers have become very good at the things they are good at.
If you try Midjourney or DALL-E, they can create more beautiful art than the vast majority of human artists. Two years ago, did we expect computers to be able to make beautiful art? No, we didn't. Do they do it now? Yes. So what does this mean for human artists? If only a handful of human artists can create art this beautiful, maybe humans aren't very good at making art after all.
Reason: Human nature is often tied to the culture we live in. Should we care whether AI comes from Silicon Valley or somewhere else?
Andreessen: I think we should care. One of the topics we talk about here is the future of warfare. You can see it with self-driving cars. If you can have a self-driving car, that means you can have a self-flying aircraft, which means you can have a self-guided submarine, which means you can have intelligent drones. In Ukraine we have seen what are called "loitering munitions," which are basically suicide drones: they hover in the air until they find a target, then they either drop a grenade or become the bomb themselves.
I recently saw the new version of Top Gun, and one of the things that was mentioned in the movie is that it costs millions of dollars to train an F-16 or F-18 fighter pilot, plus the pilots themselves are incredibly valuable. We put these people in metal cans and fly them through the air at extremely high Mach numbers. The maneuvers that the aircraft can perform are limited by the physiological tolerance of the pilot. By the way, the aircraft that keeps the pilot alive is very large and expensive, with many systems to accommodate the human pilot.
A supersonic AI drone has none of those limitations. It costs a fraction of the cost. It doesn't even have to be the shape we imagine today. It can take any aerodynamic shape that doesn't have to accommodate a human pilot. It can fly faster, it can be more maneuverable, it can do all sorts of things that a human pilot can't handle. It can make decisions faster. It can process more information per second than any human. You won't just have one of these drones, you'll have 10, 100, 1,000, 10,000, or even 100,000 of them at once. The nation with the most advanced AI will have the most powerful defenses.
Reason: Will our AI be influenced by American values? Does the type of AI we choose have a cultural component? Should we be concerned about these issues?
Andreessen: Look at the debate over social media. There's been a lot of argument about the values encoded in social media, about the censorship of content, and about which ideologies are allowed to spread.
In China, there is the so-called "Great Firewall," which has been a subject of debate. If you are a Chinese citizen, it limits what content you can see. And cross-cultural issues arise. TikTok, a Chinese platform operating in the United States, has many American users, especially American children. Many people speculate about whether TikTok's algorithm is deliberately steering American children toward destructive behavior, and whether that amounts to some kind of hostile act.
In short, everything that was debated in the social media era will be magnified a millionfold in AI, where these issues become even more interesting and important. People can only create a limited amount of content, but AI will be applied to everything.
Reason: Does what you just said mean that we need to conduct prudent supervision in advance? Or is this situation impossible to regulate?
Andreessen: I wonder what Reason magazine would say about the government?
Reason: Ha! Well, even though some people are skeptical of government, they still think, “Maybe it’s time to put up defenses.” For example, they might want to restrict how states can use AI.
Andreessen: I would counter that with your own magazine's argument: "The road to hell is paved with good intentions." It's like, "Wow, wouldn't it be great if we could regulate very carefully, thoughtfully, rationally, reasonably, and effectively this time?"
"Maybe this time we can make rent control work, if we're a little smarter about it." Obviously, your own argument is that it never actually works out that way, for all the reasons you keep writing about.
So there is a theoretical argument for something like this. But what we get is not abstract, theoretical regulation; it's practical, real-world regulation. And what does that look like? Regulatory capture, corruption, barriers to new entrants, politicization, and skewed incentives.
Reason: You've talked about how innovative technology startups can quickly become part of the existing establishment, and this has implications not only for their relationship with the state but also for broader business practices. This topic has received a lot of attention recently with the Twitter Files revelations about the voluntary ways companies work together, but there may also be a looming threat in working with government agencies. It seems to me that we're going to face more of this. The blurring of the line between public and private is our inevitable fate. What do you think? Is this a threat to innovation, or is it likely to foster its growth?
Andreessen: The textbook view of the U.S. economy is that it's based on free market competition. Companies compete to solve problems. Different toothpaste companies try to sell you different kinds of toothpaste, and it's a competitive market. Occasionally there are externalities that require government intervention, and then you get some oddities like "too big to fail" banks, but those are the exceptions.
I've been working in startups for 30 years, and in my experience, the opposite is true. James Burnham was right. We moved decades ago from the original model of capitalism, which he called bourgeois capitalism, to a different model, which he called managerial capitalism. The actual correct model of how the American economy works is basically large corporations form oligopolies, cartels, and monopolies, and then, collectively, they corrupt and control regulation and government processes. They end up controlling the regulatory agencies.
So most sectors of the economy are really conspiracies between large corporations and regulators. These conspiracies are designed to perpetuate the monopolies and prevent new competition. This, in my opinion, completely explains the education system (both K-12 and college), the healthcare system, the housing crisis, the financial crisis and bailouts, and the Twitter Files.
Reason: Are there any industries that are less affected by the market phenomena you just described?
Andreessen: The question really comes down to, does real competition exist? The idea of capitalism is basically a way of bringing evolution into the economic sphere: natural selection, survival of the fittest, and the idea that superior products should stand out in the marketplace. Markets should be open to competition, and new companies should be able to come up with better products and displace existing market leaders because their products are superior and more popular with customers.
So, does real competition exist? Do consumers really have enough choice among the existing alternatives? Can you really bring new products to market, or will you be locked out by existing regulatory barriers?
The banking industry is a great example. During the 2008 financial crisis, one of the key issues was “we need to bail out these banks because they are ‘too big to fail.’” So Dodd-Frank was created. However, the result of this legislation (which I call the Big Bank Protection Act) is that the “too big to fail” banks are now even bigger than they were in the past, and the number of new banks being formed in the United States has plummeted.
The cynical answer is that in the less important sectors, this doesn't happen. Anybody can come out with a new toy. Anybody can open a restaurant. Those are discretionary consumer categories that people enjoy, and so on. But compare them to the healthcare system, the education system, the housing system, or the legal system.
If you want freedom, stay out of the serious businesses.
If a sector has no bearing on the power structure of society, then fine, compete away. But if it has a major impact on government and the big policy issues surrounding it, then of course open competition won't be allowed.
This is obvious. Why are all these universities so similar? Why are their ideologies so consistent? Why is there no market for ideas at the university level? The question then becomes why aren't there more universities? There aren't more universities because you have to get accreditation. And the accreditation agencies are run by the existing universities.
Why are health care costs so high? One major reason is that they are basically paid for by insurance, both private and public. Private insurance prices don't differ much from public insurance prices, because Medicare is the dominant buyer.
So how are health care prices determined? A division within the Department of Health and Human Services operates something like the Soviet Union's central pricing commission, but for medical products and services. Once a year, a group of doctors gets together in a conference room, at some Hyatt Regency in Chicago, and sets the prices. The Soviet Union had a central pricing bureau, and it didn't work. We don't have one for the entire economy, but we do have one for the entire health care system, and it fails for the same reasons the Soviet system failed. We copied the Soviet system and expect better results.
Reason: About 10 years ago, you compared Bitcoin to the Internet. How accurate do you think that prediction is now?
Andreessen: I still agree with the views in that article. But one correction: we thought Bitcoin itself would develop into a broadly applicable platform, the way the Internet kept evolving and gave birth to many other applications. That's not what happened. Bitcoin itself has basically stagnated, while many alternative projects have emerged, the largest of which is Ethereum. So if I were rewriting that article today, I might talk about Ethereum instead of Bitcoin, or just about cryptocurrencies in general.
Other than that, all the same concepts still apply. The ideas I mentioned in that article basically cover crypto, Web3, and blockchain - what I call the other half of the Internet. When we first built the Internet as we know it today, we knew we wanted to have all the functions of the Internet, including conducting business, transactions, and establishing trust. However, in the 90s, we didn't know how to use the Internet to achieve these goals. With the breakthrough of blockchain technology, we now have a way to achieve all of this.
We have the technological foundation to make this happen: a web of trust built on top of the internet. The internet itself is an untrusted network where anyone can impersonate anyone else. Web3 creates trust layers on top of that. In these trust layers, you can represent not only money but many other things, such as ownership claims, home titles, car titles, insurance contracts, loans, digital asset claims, and unique digital art. You can also have a general concept of internet contracts, where you can sign legally binding contracts online. You can even use internet-native escrow services to enable e-commerce transactions, in which buyer and seller rely on a trusted intermediary that is native to the internet.
You can build all the functionality you need for a complete, global, internet-native economy on top of the trustless internet. It’s a grand idea with a lot of potential. We are in the process of making it happen, and many things have already worked, and some haven’t, but I believe they will eventually work.
Reason: What industries do you think are worth investing in right now?
Andreessen: The words "research" and "development" are often used together, but they are actually two different things. Research mainly means funding smart people to explore deep problems in technology and science; they may not know what products could be built from that work, or even whether something is feasible.
We focus on development. When we invest in a company for product development, the basic research should already have been done. There can't be unsolved basic research problems, because then you, as a startup, don't even know whether you can build a viable product. The product also needs to be close enough to commercialization that you can actually bring it to market in about five years.
This formula worked very well in the computer industry. During and after World War II, the government funded 50 years of research into information science and computer science. That translated into the computer industry, the software industry, and the Internet. The same formula has worked equally well in biotechnology.
I think these two areas are the main ones where basic research has produced real results. Should basic research get more funding? Almost certainly yes. However, basic research is in the middle of a serious crisis right now, known as the replication crisis. It turns out that many findings presented as basic research don't hold up, and some may even be fraudulent. So one of the many problems with modern universities is that much of the research they produce appears to be fake. Would you recommend throwing more money into a system that produces fake results? No. But do we need basic research to get new products out the other end? Of course.
On the development side, I'm probably more optimistic. I think we generally don't lack money. Basically, all good entrepreneurs can get funding.
The main problem here is not funding. The problem is how competition and markets work. In what areas of economic activity can startups actually exist? For example, can you really have an education startup? Can you really have a healthcare startup? Can you really have a housing startup? Can you really have a financial services startup? Can you create a new online bank that operates in a different way? For those areas where we want to see a lot of progress, the bottleneck is not whether we can fund them; the actual bottleneck is whether these companies will be allowed to exist.
I think sometimes there are certain areas where the conventional wisdom is that you can’t create a startup, but in fact you can. I’m talking about the space industry, a certain subset of the education space to a certain extent, and crypto.
SpaceX is perhaps the best example. This is a government-dominated market with incredibly strict regulation. I can't even remember the last time someone tried to build a new launch pad. You need to deploy a lot of satellites, which involves a lot of regulatory issues. Then comes the complexity. Elon Musk wanted rockets to be reusable, and therefore wanted them to land autonomously, which was considered impossible. Past rockets were essentially disposable; his can be used again and again because they land themselves. SpaceX climbed that wall of doubt, and Musk and his team succeeded through sheer perseverance.
One of the big things we talk about in our business is that building a company like this is a very difficult journey. That's the deal these entrepreneurs sign up for, and the risk is much greater than starting a new software company. It demands a higher standard of competence, and the stakes are higher.
There will be more failures among companies like this, because they won't be allowed to succeed; they'll be held back in some way. You also need a certain type of founder who is willing to take this on. That founder looks a lot like Elon Musk, Travis Kalanick, or Adam Neumann. In the old days, he looked like Henry Ford. It takes someone like Attila the Hun, Alexander the Great, or Genghis Khan: someone extremely intelligent, determined, aggressive, and fearless, who can absorb every kind of blow and is willing to endure malice, hatred, abuse, and security threats. We need more of them. I hope we can find a way to cultivate these people.
Reason: Why are some people so angry about billionaire entrepreneurs? For example, a US senator tweeted that billionaires should not exist.
Andreessen: I think it goes back to what Nietzsche called "ressentiment," a toxic mixture of envy and bitterness. It's a cornerstone of modern culture, of Marxism, and of progressivism. We resent people who do better than we do.
Reason: This also ties into Christianity, right?
Andreessen: Exactly, Christianity. The last will be first and the first will be last. It is easier for a camel to go through the eye of a needle than for a rich man to enter the Kingdom of God. Christianity is sometimes described as the last religion, the last possible religion on earth, because it appeals to victims. The nature of life is that there are always more victims than winners, so the victims are always the majority. A religion that captures all the victims, or all the people who think of themselves as victims, captures what is usually the majority at the bottom of society. In social science, this is sometimes called the "crabs in a bucket" phenomenon: when one crab starts to climb out, the others drag it back down.
This is a problem in education, too: when a kid starts to excel, the other kids bully him until he no longer has an advantage. In Scandinavian culture, there's a term for this, "tall poppy syndrome," meaning the poppy that grows tallest gets cut down. Resentment is a poison, but it gives people a sense of satisfaction because it lets us off the hook: "If they're more successful than me, they must be worse than me. Obviously, they're immoral. They must be criminals. They must be making the world a worse place." This mindset is deeply ingrained.
I would say the best entrepreneurs we work with are not influenced by these notions at all. They think the whole concept is ridiculous. Why spend time paying attention to what other people are doing or what other people think of you?