A New Gale of Creative Destruction
Artificial intelligence (AI) is one of those buzzy terms that get thrown around with reckless abandon these days. It finds its way into the pitch books of many tech-oriented startups and a surprising number of hedge funds. The supposed applications are so varied that our BS-detectors tingle whenever we hear the term. Indeed, an interrogation of the details often reveals that the use of AI in these applications is quite superficial. People looking to raise money have simply noted that AI sells.
Yet AI is not just hype. The salesmanship may exceed reality at the moment, but there is a solid consensus amongst technologists, business leaders and policymakers that AI will be a pivotal technology of the future, with pervasive impacts on and implications for day-to-day life, economies and even national security.
It may be worth backing up for a moment to define this woolly concept. Artificial intelligence is a broad term. It's generally used to refer to algorithms that can mimic cognitive functions such as sensing and reasoning. The most powerful AI algorithms (machine learning) are able to learn from a set of example solutions (in the form of a data set) in order to solve very complex problems. For an increasing number of specific problems (like playing the game of Go), AI can already exceed the finest human intelligence. Towards Data Science has a good primer on the topic.
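To make "learning from example solutions" concrete, here is a toy sketch (our own illustration, not any particular product or library): a few lines of code that are never told the rule relating the numbers, only shown example pairs, and that adjust two parameters until their guesses fit the examples.

```python
# Toy illustration of machine learning: fit y = w*x + b to example
# (x, y) pairs by repeatedly nudging w and b to reduce the error.
# Real systems are vastly more complex, but the principle is the same:
# the program is given examples, not the rule itself.
examples = [(1, 3), (2, 5), (3, 7), (4, 9)]  # hidden rule: y = 2x + 1

w, b = 0.0, 0.0          # initial guesses for the parameters
lr = 0.02                # learning rate: how big each nudge is
for _ in range(5000):    # many passes over the example data
    for x, y in examples:
        err = (w * x + b) - y   # how wrong the current guess is
        w -= lr * err * x       # nudge parameters to shrink the error
        b -= lr * err

print(round(w, 2), round(b, 2))  # close to 2.0 and 1.0
```

After enough passes, the program has "discovered" the rule y = 2x + 1 purely from the examples. Scale the data set up to millions of examples and the parameters up to billions, and you have the rough shape of the systems behind modern AI.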
As research continues, and the cost of computing power continues to fall, the expectation is that AI’s applications will expand into many domains, including those with less structure than a board game. That means jobs. White collar jobs. Just as automation has eliminated countless blue collar manufacturing jobs, AI is coming for office workers.
According to researchers at Oxford and strategy consultants McKinsey, AI has the potential to eliminate nearly 50% of today’s jobs in the next few decades. The occupations most at risk? Those that require a college degree. This feels like hyperbole, but it may not be. Most white collar jobs contain at least some repetitive, routine tasks that AI could eventually handle. This sounds like a pretty dark scenario for affected workers, but this new form of automation may not be all doom and gloom.
Financial advisors already have some experience with what it feels like to increasingly offload repetitive, quantitative processes to machines. Account linkage, planning software and even tax analysis are increasingly being handled by packages of interoperable software. The good news is that this has allowed advisors to focus more of their time on the 'soft' elements of their practice, like spending more time with clients. The bad news is that these technologies are slowly being made directly available to clients, and this will almost certainly pressure advisors' value proposition, fee levels or both. Even so, the RIA business may provide a template for workers in other industries for how to deal with AI. The playbook is simple: move relentlessly up the value chain and focus your work on the creative, emotive, people-centric aspects of your business.
While workers are not defenseless, we believe AI is likely to cause a significant net loss of jobs, and that has major implications. From an economic perspective, AI tips the balance of power in favor of capital over labor, with predictable effects. It will tend to reinforce inequality of income and wealth, perhaps dangerously so. This is good for investors insofar as investors are, by definition, the owners of capital. But it tempts, or perhaps even demands, more government intervention in the economy, which the stock market generally dislikes. Massive redistribution of wealth may be required to stabilize society.
If AI technology turns out to be another winner-takes-all market, as many other corners of the tech industry have, we might reasonably expect regulators to step in to prevent monopolies or oligopolies. That would help consumers (if not workers) by allowing them to reap the rewards of AI through lower costs of services. But this outlook may be too optimistic. The US government has not shown much resolve in confronting Big Tech to date, even as the negative impact of their services on society at large has become impossible to ignore. As the power of tech firms grows, it will become increasingly difficult to regulate them. Moreover, there may be national security reasons to allow monopolies and oligopolies to exist, further undermining regulatory resolve.
National security nerds believe that AI will become a potent weapon on the battlefield of the future. The countries with the best AI will be those with the best talent and the biggest pools of data. China, our perceived rival, has and will continue to have vastly larger data pools than we do here in the US. That's partly by virtue of the sheer size of the country, and partly by virtue of the fact that, as an authoritarian/communist nation, all data ultimately ends up centralized in the hands of the CCP. This 'Chinese advantage' has security wonks worried. It leads us to think that US policy may tend more towards establishing "national champion" AI firms than regulating them for the everyday interest of American citizens.
Back to the original question: will AI 'eat the world' in the way that software has, intruding upon and disrupting nearly everything that doesn't require a human touch? We think the answer is likely yes. This is the type of gale of creative destruction that economists love and trade unionists loathe. It is a force for growth, deflation, and social instability. But much beyond that remains opaque.
———
The information and opinions expressed herein are for general information and educational purposes, and may change at any time. They do not constitute investment advice and are not a solicitation for the purchase or sale of any security or implementation of any specific investment strategy.