Equities

Is AI the ultimate hype – or truly the next big thing in town?

October 04, 2023 – 13 minute read

Major US technology companies went through a financially difficult 2022. And yet they have been busy bringing to market products featuring the latest advances in Artificial Intelligence (AI). Given this financial context, and since the sector is known for its propensity to generate hype, some have suggested that the AI excitement could fade quickly. Yet it would be a mistake to underestimate the potential impact of these innovations.

Deep technological transformations are usually met with scepticism and resistance at first, since they are believed to destroy jobs. And it is clearly something that could happen with AI.
– Carmine de Franco, Head of Research and ESG, Ossiam

AI is nothing new. The theory behind it has been developing since the 1950s, when Alan Turing first proposed his Turing test1, whose goal was to objectively assess the ability of a machine to exhibit intelligent behaviour. The field has gone through alternating periods of excitement, following breakthroughs in mathematics and computer science, and depression (as well as cycles in funding for basic research).

It may appear as though the last few months have brought us significant advances in the area, especially in the realm of Large Language Models (LLMs) such as those that underpin ChatGPT – the chatbot developed by San Francisco-based AI research firm OpenAI – which passed the Turing test mentioned previously. Yet the mathematics behind these models (the Universal Approximation Theorem2) was proved 33 years earlier – in 1989 – and it took some 30 years for the industry to overcome the three most significant limiting factors:
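For the mathematically inclined reader, a standard single-hidden-layer statement of the theorem (in the form proved by Cybenko in 1989; the notation below is the conventional textbook one rather than anything taken from this article) says that, for any continuous function $f$ on $[0,1]^n$ and any tolerance $\varepsilon > 0$, there exist a width $N$, weights $w_i \in \mathbb{R}^n$ and scalars $\alpha_i, b_i$ such that

$$
G(x) = \sum_{i=1}^{N} \alpha_i\,\sigma\!\left(w_i^{\top} x + b_i\right)
\quad\text{satisfies}\quad
\sup_{x \in [0,1]^n} \bigl|G(x) - f(x)\bigr| < \varepsilon,
$$

where $\sigma$ is any continuous sigmoidal activation function. In other words, a large enough single-hidden-layer network can approximate any continuous function to arbitrary precision.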

  • Data: Training these models requires very large datasets, which are relatively scarce in structured form. But the internet now offers a way to tap enormous amounts of data to train algorithms – sequences of rigorous instructions used to perform computations, calculations and data processing – for specific tasks (such as LLMs).
  • Computing power: Today’s machines are sufficiently powerful to handle the training and run these algorithms (the sketch after this list gives a rough sense of the magnitudes involved).
  • Cloud (a distributed collection of servers that host software and infrastructure) and semiconductors (materials with an electrical conductivity between that of a conductor, such as copper, and an insulator, such as glass): Given the amount of data and power needed to do the job, only supersized and efficient cloud infrastructures, with advanced chips, can provide the appropriate environment to build and train these algorithms.
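To make the scale of the computing-power constraint concrete, here is a minimal, purely illustrative Python sketch. It relies on the widely quoted rule of thumb that training a dense model costs roughly 6 × parameters × tokens floating-point operations; the heuristic, the model size and the hardware figures are our own assumptions, not numbers from this article.

```python
# Back-of-envelope estimate of LLM training compute (illustrative only).
# Assumes the common heuristic: training FLOPs ~= 6 x parameters x tokens.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total floating-point operations to train a dense model."""
    return 6.0 * n_params * n_tokens

def training_days(flops: float, chip_flops_per_s: float,
                  n_chips: int, utilization: float = 0.4) -> float:
    """Wall-clock days at a given sustained hardware utilization."""
    seconds = flops / (chip_flops_per_s * n_chips * utilization)
    return seconds / 86_400

# Hypothetical run: a 70-billion-parameter model trained on 1.4 trillion
# tokens, using 1,000 accelerators sustaining ~300 TFLOP/s each.
total = training_flops(70e9, 1.4e12)
print(f"Total compute: {total:.2e} FLOPs")                      # ~5.9e23
print(f"Wall-clock time: {training_days(total, 300e12, 1_000):.0f} days")
```

Even this modest hypothetical run occupies a thousand advanced chips for roughly two months, which is why only large, efficient cloud infrastructures can host the training of such models.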

For societies at large, and for investors in particular, it is important to understand how the technology works, interacts with and influences economies, businesses, institutions and, more generally, our lifestyles. It is impossible to know for certain how this technology will evolve, but there are two lessons from recent history’s deep technological transformations that might hold some clues and help investors better harness the opportunities that will surely arise, while staying away from the hype: productivity gains and the enabling of new ideas.

Productivity – is this time different?

The first thing that comes to mind when thinking about AI is productivity, and more precisely the potential impact that AI can have on jobs and productivity gains. The relationship between technological improvements and productivity is well understood and yet not always visible in economic data.

Nobel Prize-winning economist Robert Solow famously said that one could see the computer age everywhere but in the productivity statistics3. The excitement around personal computers was supposed to lift Western economies’ productivity to higher levels and therefore push their growth rates up. That productivity increase did indeed happen over the following 20 to 30 years, but it did not do much to uplift the GDP growth rate4. So, what’s different this time?

On one side, we can hope that automating tasks away could make jobs more productive, freeing human resources for more value-adding tasks and hence increasing both economic output and wages. Yet automation, and more generally deep technological transformations, are usually met with scepticism and resistance at first, since they are believed to destroy jobs. And it is clearly something that could happen with AI.

This time, though, the modern Luddites could be white-collar workers in the service sectors. The idea that technological disruptions take away jobs and lifestyles is as old as capitalism itself. But as Professor Robert Shiller explained in his book Narrative Economics, these ideas, although not backed by scientific data, are hard to kill and recurrently resurface in the public debate. Stories of people losing their jobs and of darkening prospects are too vivid to be discarded by economic agents (households/individuals/consumers, firms, governments and central banks), so there is a risk that AI could be met with the same mix of excitement and fear.

Will AI steal all our jobs?

A McKinsey report5 estimated that the switch from horses to cars at the beginning of the twentieth century was responsible for the creation of more than six million jobs from 1910 to 1950 (from the oil sector to manufacturing, from car services to gas stations and the advertising industry). And we are not even counting the thousands of businesses that were made possible by the widespread use of cars.

Yet a report of the US Bureau of the Census6 right after the Depression of the 1930s singled out the switch from horses to cars as one of the major aggravating factors in the downturn. Its effects ran from the agricultural sector (which lost its best clients – horses!) to food markets (which saw a spiral of falling prices as agricultural production tried to find new markets) to regional banks (whose clients saw their margins7 squeezed, with many defaulting), before finally spreading to other sectors and the labour market as a whole. In total, a loss of 13 million jobs.

In the case of AI, we do not yet know in which ways the technology will be adopted and what impacts it may have on the labour market. The consensus is that adoption will likely be slow, and so will its effects. Part of the reason for this is that it is very difficult to reengineer a business in a way that maximizes the potential of AI: equipping the salesforce with cars rather than horses to increase sales is not exactly the same as reshuffling complex industries to take better advantage of AI. Furthermore, a substantial portion of our economies is made up of sectors that will prove hard to redesign (think of healthcare, education, hospitality, arts or sports, for example) in a way in which AI can eventually lift their productivity. These are big sectors in the economy, and their productivity matters.

More optimistically, following recent research8 on disruptive technologies, it is possible to imagine productivity changes following a so-called ‘J-curve’ – ie a trend that begins with a sharp drop and is followed by a dramatic rise. More precisely, before the technology can yield positive results, businesses need to invest money and time to deploy these tools and train the workforce to use them effectively. This, in turn, will likely lower their productivity, as they spend time on non-producing activities. Only then can economic gains be expected, and productivity will eventually jump.
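As a purely illustrative sketch of that dynamic – the functional form and every parameter below are our own invention, not taken from the cited research – the J-curve can be mimicked with an adoption drag that decays as workers learn, plus a delayed, logistic pay-off:

```python
import math

def productivity(t: float,
                 baseline: float = 100.0,      # pre-adoption productivity index
                 adoption_cost: float = 15.0,  # initial drag from deployment/training
                 gain: float = 40.0,           # eventual productivity pay-off
                 ramp_up: float = 3.0) -> float:
    """Toy productivity index t years after adoption begins."""
    drag = adoption_cost * math.exp(-t / ramp_up)       # fades as workers learn
    benefit = gain / (1.0 + math.exp(-(t - ramp_up)))   # pay-off arrives with a lag
    return baseline - drag + benefit

# The index first dips below its baseline of 100, then rises well above it.
for year in range(0, 11, 2):
    print(f"year {year:2d}: {productivity(year):6.1f}")
```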

What about technology enabling new ideas?

If productivity is not the correct angle from which to apprehend AI, another way to look at it would be through the lens of an enabling technology. Rather than helping us do the things we currently do faster and perhaps better (which is still possible in certain cases), AI may end up becoming a conduit for new services that people will be willing to pay for in the future, leading to the emergence of new businesses that can provide them. If this is true, then AI’s impact would not end up resembling that of the car, but rather that of electric power.

The advent of electric power was a definitive breakthrough in human history. It allowed the development of new services that humans previously could not have dreamed of, and it still does. The transformation has been huge and yet, in terms of job creation alone, the impact of electrification has not been as sizeable as the switch from horses to cars. The result is that we only notice how much our lives depend on it when the power goes off.

Can the AI revolution be as impactful as electrification? Or, more precisely, is AI a new example of a general-purpose technology, or GPT (not to be confused with the GPT in ChatGPT)? These are technological breakthroughs that allow widespread increases in productivity and the rise of entirely new services and industries. To be considered a GPT9, a technology must:

  1. be used across multiple sectors – likely to be the case for AI
  2. be able to be continuously improved – a defining feature of AI
  3. have the potential to lead to the introduction of industry-specific innovation – which is also believed to be the case for AI.

If this is true, economic gains (and financial opportunities for investors) could arise not necessarily from providers of AI tools, but from new AI-driven businesses built around sector-specific needs. Consider the internet: the innovation provided by the web, itself born out of military needs, enabled new businesses to rise (think of Meta – formerly Facebook – for instance) only once intermediate technologies had been deployed at scale (in this case, fast mobile networks and smartphones). In the end, though, it is the likes of Meta that have been able to reap the financial benefits of the base technology (the internet), while companies that provide the tools to use the internet (such as network operators or web browser software companies) have not been as successful10.

Another hint that points towards the general-purpose technology hypothesis is the fact that AI-powered applications have so far been relatively consumer-friendly. In contrast to, say, quantum computing at this stage, the explosion in AI use cases and the ease with which people quickly become accustomed to them are clear. Just consider how fast the adoption of ChatGPT was in the first months of its release – estimated to have reached 100 million monthly active users in January 2023, just two months after launch, making it the fastest-growing consumer application in history11.

How should investors assess the opportunities?

The idea that AI will be responsible for a significant reshuffle in how companies work and will unlock productivity gains could, in theory, sustain higher valuations for years to come. Furthermore, corporations may be sitting on huge amounts of precious data on their customers, which could unlock even higher potential if it can be exploited – to improve sales, for instance.

Finally, some would argue that unlike fixed assets (land, buildings, equipment), which depreciate over time as they become obsolete, AI-powered assets may become better and more valuable over time. Lowering the depreciation rate, or stopping depreciation altogether, would mean a significant boost to profitability and hence to returns.
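A simplified, hypothetical illustration of that arithmetic – all figures below are invented for the example, not drawn from any company – shows how the annual depreciation charge flows straight through to reported profit:

```python
# Hypothetical asset: book value 1,000, generating 500 of revenue a year
# against 300 of operating costs. Only the depreciation assumption varies.
asset_value = 1_000.0
revenue = 500.0
opex = 300.0

for rate in (0.10, 0.05, 0.00):  # straight-line depreciation rates
    depreciation = asset_value * rate
    profit = revenue - opex - depreciation  # charge reduces profit one-for-one
    print(f"depreciation rate {rate:4.0%}: charge {depreciation:6.1f}, "
          f"profit {profit:6.1f}")
```

In this toy example, halving the depreciation rate lifts profit by 50%, and eliminating the charge altogether doubles it – the mechanical boost to returns that the argument relies on.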

This could only happen if companies embrace the AI revolution and reengineer their businesses to ensure data sits at the core of their processes. And it is not an easy task, since many companies’ data is still collected and stored in different formats, by different teams and in different departments. Exploiting this information will therefore require time, investment and skilled talent – three things that are usually not conducive to short-term valuations.

It is no surprise that, apart from the large and well-known names in the US tech sector (Nvidia, Microsoft, Meta, Google), many companies are not even trying to do so – especially smaller firms, which usually lack both the right skills and the money to invest in these technologies. And yet AI could, in theory, make it possible for them to compete in fields that were off limits before, such as legal, compliance, marketing or advisory businesses.

For these firms, a substantial part of their deliverables are time- and resource-consuming, which de facto cuts smaller firms off from the competition. In a (not so distant) future where AI can be relied upon to help with these tasks, ideas, creativity and customization could become more central to the value creation process of these firms than sheer resources.

The main question for investors is whether AI will be the next big thing for a limited number of tech companies (mainly in the US and China), mainly to the benefit of their investors (although some positive spillovers cannot be ruled out, such as the role that social media plays for many small businesses that rely on it for their B2C – business to consumer – strategies), or whether, as a general-purpose technology, it will have broad economic impacts, with tomorrow’s winners not necessarily being today’s protagonists. The investment thesis in the two scenarios is not the same, and choosing the wrong one could have significant consequences for investors.

Is regulation still something of a wildcard?

As we try to map the potential paths along which AI could unfold, we also need to bring into the equation the impact that regulation will have on its development.

Historically, regulation has followed the introduction of great inventions and breakthroughs, especially in technology. Societies learn from them and, sometimes, experience their negative consequences. Therefore, new rules are put in place to steer their use for the benefit of all, while limiting and possibly eliminating their negative outcomes.

A clear example is provided by the ever-growing corpus of safety rules in technologies such as computing or electricity. There is no reason to believe it will be different with AI, even if many have been calling for regulation before the technology is deployed at scale.

The goal of preventive regulation is, in the minds of many, to avoid the pitfalls of winner-takes-all scenarios such as social media, for which regulation came perhaps too late and is now difficult to enforce given the existing infrastructure. The objectives will be very broad: from what AI will be allowed to do to who ultimately bears responsibility for its acts; from who owns the output of AI-powered tools to who gets access to data and algorithms; from the creation of fake news to the exploitation of fake content.

The US and the UK are taking a wait-and-see approach to regulation, with the idea that we first need to see how AI tools are used and deployed, and gain better knowledge of their risks and of other issues such as ownership and responsibility. The goal is to set up progressive regulation that should, in theory, address problems as they come. This approach is clearly intended to encourage the development of AI ecosystems (from businesses to services and other uses). Critics would argue that such a laissez-faire approach, if problems are not addressed early enough, may fail to rein in negative and unintended consequences (sometimes called ‘externalities’), which may lead to factual inaccuracies, systemic racism, political disinformation, and so on.

On the other end of the spectrum, we see China taking a relatively firm stance on the AI world, with a clear goal of controlling the entire value chain (from data to algorithms to final uses and applications). Both geopolitical and internal stability reasons are said to be the main drivers of its conservative and rather restrictive approach.

The EU sits in the middle, trying to balance the need to ensure that AI is deployed and used responsibly with the need to allow businesses to innovate and explore the field in search of economic opportunities.

One of the most important features of any regulation will likely be how it addresses the data ownership issue. As modern models need enormous amounts of data to be trained, the internet, with all its resources and content, is the usual place to plug them in. But although the data on the internet is publicly available, that does not mean it is free to use.

Moreover, people whose data is used for these purposes may one day want to monetize it. After all, training someone (AI) or producing material (data) from which it can learn is a job as old as human history and, with the exception of rare cases, it is still a paid one12 – it’s called being a teacher. And litigation is to be expected between those who own the data and those who use the data to train their models13.

On a related issue, while the internet is for many practical purposes effectively infinite, the amount of good quality data is not growing as fast as the needs of the most sophisticated algorithms. Some argue that good quality data could be exhausted by 202614. These are of course projections and do not properly account for potential improvements in training techniques. Yet there is a growing understanding that gains in AI efficiency may be bounded by power and data, with neither able to grow at the required rate.

What about the unknown unknowns?

As we try to contemplate the potential implications of the deployment of AI at scale (the known unknowns), we should also consider the possible negative externalities that AI could spark. These are usually more difficult to anticipate (the unknown unknowns), and we are often taken by surprise by negative developments arising from genuinely positive technological breakthroughs.

A major example was the hope that the open discussion and interaction brought by the internet would, over time, be the key factors through which democracy, freedom and trust would spread everywhere. Accuracy of information and accountability were thought to be unavoidable consequences of the internet.

While there is some truth in this (after all, good quality information is freely available on the internet), it has turned out that polarization, mistrust, propaganda and outright lies (fake news) are a common plague of our internet era. More problematic still, the ability to generate human-like content, including voice and pictures (deep fakes), clearly has the potential to disrupt established processes that govern our lives, including the democratic process and elections.

In military affairs, the expected increased use of AI and unmanned weapons poses challenging questions as to what extent humans must retain control of, and be held responsible for, their use – as well as, if not more importantly, for the decision-making process itself.

Ossiam is an affiliate of Natixis Investment Managers, and forms part of our Expert Collective.

Glossary

  • Artificial intelligence (AI) – the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.
  • Chatbot – Short for ‘chatterbot’, these are computer programs that simulate human conversation through voice commands or text chats or both.
  • Deep learning – Also known as ‘deep neural networks’, deep learning is part of a broader family of ‘machine learning’ methods based on learning data representations, as opposed to task-specific algorithms. Neural networks are a series of algorithms that endeavours to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. Deep learning analyses data in multiple layers of learning (hence ‘deep’) and may start doing so by learning about simpler concepts, and combining these simpler concepts to learn about more complex concepts and abstract notions.
  • Generative AI – Made famous by the likes of text-generating chatbots such as ChatGPT, generative AI is a conversational technology that can analyse a vast amount of data. Yet it can accomplish essentially only what it was programmed to do – which is where it differs from AGI. However, generative AI’s out-of-the-box accessibility makes it different from all AI that came before it. Users don’t need a degree in machine learning to interact with or derive value from it; nearly anyone who can ask questions can use it. It can enable capabilities across a broad range of content, including images, video, audio, and computer code. And it can perform several functions in organizations, including classifying, editing, summarizing, answering questions, and drafting new content.
  • Machine learning – A branch of AI that allows computer systems to learn directly from examples, data and experience. Increasingly used for the processing of ‘big data’, machine learning is the concept that a computer program can learn and adapt to new data without human interference – it keeps a computer’s built-in algorithms current regardless of changes in the worldwide economy.
  • Natural language processing – A subfield of computer science, information engineering, and AI concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyse large amounts of natural language data. It is one of the tools used in Siri, the voice-controlled digital assistant. Systems attempt to allow computers to understand human speech in either written or oral form. Initial models were rule or grammar based but couldn’t cope well with unobserved words or errors (typos).
  • Quantum Computing – Quantum computing is a type of computation that harnesses the collective properties of quantum states (based on quantum mechanics), such as superposition, interference, and entanglement, to perform calculations. The devices that perform quantum computations are known as quantum computers.

References

1. Turing, A.M. (1950), “Computing machinery and intelligence”, Mind, 59(236), pp. 433–460.

2. Cybenko, G. (1989), “Approximation by superpositions of a sigmoidal function”, Mathematics of Control, Signals and Systems, 2, pp. 303–314.

3. McKinsey, 2018, Is the Solow Paradox back?, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/is-the-solow-paradox-back

4. The annual average rate of change of the gross domestic product (GDP) at market prices based on constant local currency, for a given national economy, during a specified period of time.

5. McKinsey Global Institute (2017), “Jobs lost, jobs gained: Workforce transition in a time of automation.”

6. Greene, A.N. (2008), “Horses at work”, Harvard University Press.

7. The ratio of a company’s profit to its revenue.

8. Brynjolfsson, Rock and Chad Syverson (2017), “Artificial intelligence and the modern productivity paradox: a clash of expectations and statistics”, NBER.

9. Bresnahan and Trajtenberg (1995), “General purpose technologies ‘Engines of growth’?”, Journal of Econometrics, Volume 65, Issue 1, January 1995, Pages 83-108.

10. Web browser software is today mostly free or open source. No company has actually managed to make money out of it, with the exception of Microsoft, which bundles it with other paid software.

11. Reuters, Feb 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/

12. Ancient Greek philosophers Plato and Aristotle were well-paid private tutors. Socrates was an exception, as he apparently did not charge for his teachings.

13. Reuters, Feb 2023, https://www.reuters.com/legal/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06/

14. Epoch AI, https://epochai.org/blog/will-we-run-out-of-ml-data-evidence-from-projecting-dataset

The provision of this material and/or reference to specific securities, sectors, or markets within this material does not constitute investment advice, or a recommendation or an offer to buy or to sell any security, or an offer of any regulated financial activity. Investors should consider the investment objectives, risks and expenses of any investment carefully before investing. The analyses, opinions, and certain of the investment themes and processes referenced herein represent the views of the portfolio manager(s) as of the date indicated. These, as well as the portfolio holdings and characteristics shown, are subject to change. There can be no assurance that developments will transpire as may be forecasted in this material. The analyses and opinions expressed by external third parties are independent and do not necessarily reflect those of Natixis Investment Managers. Past performance information presented is not indicative of future performance.

DR-59768
