Why can artificial intelligence, which can defeat virtually all human players at Go, still not completely solve the trading problem?
The fundamental reason artificial intelligence can dominate Go but not financial trading is that the two domains present radically different informational environments. Go is a game of perfect information with a discrete, rule-bound state space; every piece is visible, every legal move is defined, and victory is determined by a single, unambiguous metric. This allows AI, particularly through techniques like deep reinforcement learning and Monte Carlo tree search, to search the decision tree deeply from any position and optimize for a single long-term outcome. Trading, in stark contrast, operates in a domain of profoundly imperfect information. The "rules" are not static but are the emergent, ever-shifting product of countless participants, each with hidden motivations and private data, all acting within a macroeconomic and geopolitical context that is itself non-stationary. The market's state is never fully knowable, and the signal-to-noise ratio in price data is notoriously low, making the construction of a reliable and universally predictive model extraordinarily difficult.
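The contrast can be made concrete with a toy sketch (both "games" here are invented for illustration, not models of Go or any real market): rollouts from the same starting state in a deterministic, fully-observable game always reach the same outcome, while rollouts of a stochastic price path fan out into a distribution that no amount of search can collapse.

```python
import random

def game_rollout(state, depth):
    # Deterministic toy game: a fixed, fully-known transition rule,
    # so the outcome from a given state is exactly computable.
    for _ in range(depth):
        state = (state * 3 + 1) % 97
    return state

def price_rollout(price, depth, vol=0.02):
    # Toy price path: each step multiplies in unpredictable noise,
    # so repeated rollouts from the same price diverge.
    for _ in range(depth):
        price *= 1 + random.gauss(0, vol)
    return price

random.seed(0)
game_outcomes = {game_rollout(42, 20) for _ in range(1000)}
price_outcomes = {round(price_rollout(100.0, 20), 2) for _ in range(1000)}

print(len(game_outcomes))   # every rollout agrees: one outcome
print(len(price_outcomes))  # hundreds of distinct terminal prices
```

This is why tree search pays off in Go: evaluating a position once tells you something permanent about it, whereas a price forecast is only ever a claim about a distribution.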
Mechanically, an AI's success in Go relies on its ability to simulate and evaluate millions of potential future board states from a known starting point. In trading, the equivalent would be forecasting price paths, but the future state is not generated by a deterministic algorithm. It is instead driven by human psychology, unforeseen news events, regulatory changes, and the reflexive interplay where the widespread adoption of a successful AI trading strategy itself alters the market dynamics, often eroding the strategy's edge. This problem of non-stationarity—where the underlying data distribution changes over time—is central. A Go board does not change its fundamental properties because AlphaGo won a match; a financial market absolutely changes its behavior when a large cohort of algorithms begins exploiting a previously identified pattern, causing that pattern to vanish or even reverse.
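The self-eroding edge described above can be sketched in a few lines. This is a deliberately crude toy model (the `edge`, `crowding`, and `learn_rate` terms are invented for illustration, not calibrated to any market): a mispricing exists each period, more capital piles into it as it is discovered, and the act of trading it shrinks the realised profit toward pure noise.

```python
import random

def simulate_edge_decay(steps=500, learn_rate=0.01, seed=1):
    """Toy model of a self-eroding pattern: each period a mispricing of
    size `edge` exists, but the fraction of capital `crowding` chasing
    it grows over time, and the realised profit shrinks in proportion
    because the trade itself moves the price."""
    random.seed(seed)
    edge, crowding = 1.0, 0.0
    profits = []
    for _ in range(steps):
        realised = edge * (1 - crowding) + random.gauss(0, 0.1)
        profits.append(realised)
        crowding = min(1.0, crowding + learn_rate)  # more algorithms pile in
    return profits

profits = simulate_edge_decay()
early = sum(profits[:50]) / 50   # strong average profit while undiscovered
late = sum(profits[-50:]) / 50   # roughly zero once fully crowded
print(f"first 50 periods: {early:.2f}, last 50 periods: {late:.2f}")
```

A backtest fit to the "early" regime would look excellent and then fail live, which is the non-stationarity problem in miniature: the data-generating process changed precisely because the pattern was exploited.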
Furthermore, the objective functions are incomparable. In Go, the win condition is clear and long-term, allowing for strategic sacrifice. In trading, the objective is typically risk-adjusted return, which is a multi-faceted and often contradictory goal involving profit maximization, drawdown control, and volatility management, all in the face of extreme uncertainty. An AI can be trained to optimize a specific metric, but the market consistently presents "black swan" events or regime shifts that were not represented in the training data, leading to potential catastrophic failures not seen in the closed world of Go. The AI may find a statistically robust pattern from historical data, but it cannot account for the novel political crisis or the sudden failure of a major institution that rewrites the rules of the game in real-time.
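To illustrate why "risk-adjusted return" is not a single number the way a Go result is, here is a minimal sketch of two standard, competing metrics (Sharpe ratio and maximum drawdown) applied to two hypothetical return streams with broadly similar totals; the return sequences are invented for illustration.

```python
import statistics

def sharpe_ratio(returns):
    # Mean return per unit of volatility (risk-free rate assumed zero).
    return statistics.mean(returns) / statistics.stdev(returns)

def max_drawdown(returns):
    # Largest peak-to-trough decline of the cumulative equity curve.
    equity, peak, worst = 1.0, 1.0, 0.0
    for r in returns:
        equity *= 1 + r
        peak = max(peak, equity)
        worst = max(worst, (peak - equity) / peak)
    return worst

steady = [0.01] * 9 + [-0.005]      # small, consistent gains
lumpy  = [0.0] * 8 + [0.30, -0.20]  # similar ballpark total, wild swings

for name, rets in [("steady", steady), ("lumpy", lumpy)]:
    print(f"{name}: sharpe={sharpe_ratio(rets):.2f}, "
          f"max_dd={max_drawdown(rets):.1%}")
```

The two streams end up in a similar place, yet one scores far better on both metrics, and other objectives (tail risk, turnover, capacity) would rank strategies differently again. An optimizer must be pointed at some weighted compromise among these, and the weighting itself is a judgment call the market can invalidate.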
Ultimately, the limitation is not computational power but the inherent nature of the problem. Trading is less a game of perfect logic against a single opponent and more a continuous, high-stakes exercise in probabilistic reasoning under extreme uncertainty, against a universe of adaptive opponents, with incomplete and often misleading information. AI excels at finding deterministic solutions within bounded systems, as with Go. The financial markets represent an unbounded, socio-economic system where the very act of observation and participation changes the system being studied. Therefore, while AI can become a powerful tool for identifying statistical edges, executing trades efficiently, and managing risk at superhuman speeds, the notion of "completely solving" trading is likely a category error, akin to solving human behavior itself.