AI May Be Clever—But It’s Still Not Human

One man dared to cool the AI fever. Joseph Plazo unsettled a roomful of future market leaders with a message few in Silicon Valley are willing to confront: artificial intelligence can do many things, but it still can't understand consequence.

**MANILA —** On a humid Thursday morning in the wood-paneled halls of the Asian Institute of Management, Plazo opted for clarity over hype. His audience—a curated gathering from NUS, Kyoto, HKUST—came expecting an ode to artificial intelligence in finance.

Instead, they received a lesson in humility.

“AI is like your smartest intern,” he said, half-joking. “But you still don’t hand the intern the vault keys.”

Laughter followed. And then reflection. Because he wasn’t joking.

### Plazo’s Paradox: Building AI—and Questioning It

Plazo isn’t an outsider to this world—he’s part of the architecture. His firm, Plazo Sullivan Roche Capital, deploys some of the most effective trading AIs globally. But that insider status makes his critique all the more potent.

“The problem isn’t the tech,” he said. “It’s our longing that it will save us from the weight of responsibility.”

Plazo offered real-world case studies of AIs that, on paper, flagged perfect trades, only to be undone by something no algorithm could foresee: a central bank's abrupt pivot.

Context, he argued, remains the province of people.

### The Future Pushed Back. Plazo Didn’t Blink.

One Kyoto student asked whether LLMs could model global mood.

Plazo didn’t hesitate.

“AI can detect outrage in a tweetstorm,” he said. “But it can’t smell fear in a leader’s voice.”

A notable hush followed.

Another student asked if AI might simulate conviction.

“Conviction,” Plazo replied, “isn’t data. It’s the bruises of being wrong—and surviving. It’s knowing when *not* to act.”

You can’t upload that.

### This Wasn’t About Code—It Was About Character

Many students—confident in their tools—admitted to viewing AI as a workaround: a way to evade risk and bypass emotion. Plazo interrupted that notion.

“You can outsource your trading logic. But never your ethics.”

It struck a chord.

Because whether they wore suits or sandals, most in that room shared one goal: success. But Plazo asked a deeper question—*at what cost?*

### Tools Are Not Truth: What AI Can (and Can’t) Do

Plazo was not anti-AI. He enumerated its strengths:

- Filtering massive noise
- Identifying technical patterns at scale
- Stress-testing portfolios in seconds

But he also listed its limits—starkly.

It can’t detect sarcasm. It can’t weigh political nuance. And it doesn’t care that your retirement plan may hang in the balance.

“If the algorithm fails,” he asked, “will you take responsibility? Or just blame the machine?”

The room was quiet. That quiet held meaning.

### This Isn’t Just Finance—It’s Philosophy

What emerged wasn’t a rejection of AI, but a reminder of its place.

Plazo described tools he’s building that consider misinformation, psychological factors—even geopolitical instability. But his parting truth was unambiguous:

“No machine can tell you when *not* to act. That’s a human burden.”

### Wisdom Can’t Be Coded

As the crowd dispersed—some thoughtful, some rattled—one phrase echoed in the corridors:

“AI doesn’t know your values. So don’t let it make your decisions.”

In an age obsessed with speed and prediction, Plazo offered something radical:

Accountability.

Because in the end, investing isn’t about beating the market.

It’s about remembering *why* you entered the arena in the first place.
