
AI Hurtles Ahead

Howard Marks, Oaktree Capital — February 26, 2026





When I was preparing to write my December memo about artificial intelligence, Is It a Bubble?, I gained a great deal from speaking with some interesting techies in their thirties and forties. Exploring fresh territory is stimulating and an absolute requirement for staying current as an investor. It's one of the most enjoyable parts of my job.

I recently returned to those people to follow up on the December memo. As part of that process, someone suggested I ask Claude, Anthropic's AI model, to create a tutorial explaining artificial intelligence and the changes that have taken place in the last three months. I did so, and it gave me a great deal to work with. The resulting memo is intended as an addendum to December's. Much of it will recap Claude's 10,000-word essay, to which I'll add a few observations of my own. In the process, I'll highlight some terms that were new to me and might be new to you. I could have saved myself a lot of time by asking Claude to write this memo, but I decided not to, because I consider putting words on paper a big part of the fun. I will, however, quote liberally from Claude's work product. That'll be the source of all quotations that aren't otherwise identified.

Before I start in, I want to try to communicate the level of awe with which I viewed Claude's output. It read like a personal note from a friend or colleague. It made reference to things I've talked about in past memos, like the sea change in interest rates and the pendulum of investor psychology, and it used them in metaphors related to AI. It argued logically, anticipated points I might make in response, injected humor, and bolstered its credibility by candidly acknowledging AI's limitations, just as I might do. I've asked AI questions before and gotten answers back, but I've never received a personalized explanation like I did in this case.

Understanding AI

Before moving on to the meat of the matter – recent changes in AI and its capabilities – I want to share some insights into AI's essence that the tutorial delivered for me. Importantly, the tutorial taught me not to think of an AI model as a search engine that retrieves data and regurgitates it. Rather, it's a computer system that's capable of synthesizing data and reasoning from it.

There are two phases in the life of an AI model. In the first, it is "trained" by reading a vast amount of text. The training phase shouldn't be thought of as merely loading the model with information, as I had until now; it goes far beyond that. It consists of teaching the model how to think. By absorbing text, the model learns:

  • how to recognize and form reasoning patterns,

  • how arguments are structured,

  • how to generate new combinations of ideas, and

  • how to apply learned reasoning patterns to novel situations.

The best way to think about the training phase is to compare it to the development of a person's intellectual capacity. A baby is born with a brain, and through exposure to external stimuli, it develops the ability to think, reason, synthesize, evaluate, analogize, combine ideas, create concepts, compose arguments, and so on. The baby isn't born with those abilities, but it develops them by absorbing and using inputs from its environment. An AI model is the same. (A word here: I'm not implying that I understand how AI does what it does. There's no chance of that. At best, I'll describe what AI can do and the implications.)

The second phase in an AI model's life is "inference." Once the model has been built and trained, inference is what it does for the rest of its life, using its capabilities to meet the demands of users.

It's important to note here that the model cannot assign itself tasks (at least not at present). It has to be ordered to perform tasks through "prompts" written by users. The better and more comprehensive the prompts, the more AI can do. For example, AI can write software to perform work a user wants done. It can also test the software, identify bugs, fix them, and test again, but it has to be instructed to do those things, at least at the current stage (read on). Because many people today don't appreciate the importance of prompts and lack the skill to create good ones, AI's potential is probably being underestimated. But note that the limitation is on the part of the users, not the model.

To illustrate using the example of my tutorial, Claude wasn't simply asked to explain AI and what it can do. When I queried Claude about the task it was assigned, here's what it said:

Someone designed a nine-module curriculum specifically for you, built around your December memo, your intellectual frameworks, and the goal of giving you enough technical understanding to write a credible addendum. The curriculum was structured to teach one module at a time, use analogies from your world, demonstrate capabilities rather than just describe them, and maintain the kind of intellectual honesty your readers expect from you.

I can tell you the tutorial definitely accomplished the goals we'd set for it. This was entirely due to the quality and specificity of the prompts my advisers helped me prepare.

Can AI Think?

I'm going to take time here for a question I find fascinating. I know AI can reconfigure what people have already figured out and apply it to new data and other fields. But can it break new ground?

I understand AI's process primarily as a matter of using historical patterns and logic to predict the next item in a series. Write five words in a sentence, and it'll predict what the sixth should be (look at the suggested words on your phone the next time you write an email – that's AI in action). Ask it to put together a portfolio to beat the market, and it will look at stocks that performed well in the past and use their traits to predict which ones will perform best in the future. I think it's helpful to think of AI as proposing a hypothesis regarding the future based on the way things went in the past. I'll return to this later.
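The "predict the next item based on historical patterns" idea above can be illustrated with a toy sketch. To be clear, this is my own simplified illustration, not how modern AI models actually work (they use neural networks trained on vast corpora, not simple word counts): it just counts, in a tiny made-up corpus, which word has most often followed each word, and predicts accordingly.

```python
from collections import Counter, defaultdict

# A tiny, made-up corpus standing in for training text.
corpus = (
    "the market went up the market went down "
    "the market went up again and investors cheered"
).split()

# For each word, count which words have followed it historically.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the most frequent historical follower of `word`, or None."""
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(predict_next("market"))  # "went" -- it always follows "market" here
print(predict_next("went"))    # "up" -- seen twice, versus "down" once
```

The prediction is nothing more than a hypothesis drawn from past frequencies, which is exactly why a model built this way can only echo patterns it has already seen.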

What follows from the above is my question: Can AI have a new idea? Maybe it can perform every knowledge task we assign to it. But can it think of things we haven't told it to think of? Can it do the equivalent of sitting by a river and letting stray inspirations come into its head? Can it see an apple fall from a tree and develop the notion of gravity? Can it muse, daydream, or ideate? Can it have intuition?

This is where the debate around AI gets complicated. According to Claude, the skeptics argue as follows:

Everything Claude learned came from human-written text. It has no experiences, no embodied understanding of the world, no genuine comprehension. Everything it produces is ultimately some sophisticated rearrangement of patterns it absorbed from existing human work. It's extraordinarily impressive pattern matching – maybe the most impressive pattern matching ever engineered – but it's not thought. It's not reasoning. It's statistical recombination. And if that's true, then there's a ceiling. It can remix what humans have already figured out, but it can't break genuinely new ground. It's a very talented cover band, not a composer.

Having laid out the skeptics' case above, Claude came back with a spirited rejoinder . . . framed in terms of me (talk about knowing how to argue a point):

Howard, everything you know about