Spending is one of the most frequently discussed topics pertaining to the artificial intelligence investment thesis. As in, how much are AI adopters willing to commit to the goods and services purveyed by AI enablers?
Estimates vary wildly, but the conversation is relevant to investors, including those considering ETFs such as the Invesco QQQ Trust (QQQ) and the Invesco NASDAQ 100 ETF (QQQM). In fact, QQQ and QQQM are highly pertinent to the artificial intelligence expenditures conversation because the funds are chock-full of enablers and adopters.
Moreover, the QQQ/QQQM member firms that are credible AI enablers, such as Nvidia (NVDA), have the financial firepower to bolster their offerings, potentially making their services more attractive to adopters. Speaking of AI adopters, many of those firms residing in QQQ and QQQM also have the capital needed to make long-term AI investments.
Speaking of AI Investments…
Indeed, the size and scope of AI expenditures are increasing at an exponential rate, and that could be a positive for some QQQ/QQQM holdings.
“To be sure, the costs to build new models are soaring. OpenAI’s current GPT-4 model cost $300 million to train, according to estimates from Bernstein, with the next two models expected to cost several billion dollars and then more than $25 billion, respectively,” reported Tae Kim for Barron’s.
Microsoft (MSFT), the third-largest QQQ/QQQM holding, is a financial backer of OpenAI. There’s other concrete evidence of some of the ETFs’ components benefiting from AI spending.
“In early September, Elon Musk’s xAI deployed more than 100,000 Nvidia Hopper graphics processing units. That GPU bill alone comes to several billion dollars. Musk said his AI start-up plans to double its capacity to an effective 200,000 GPUs in the coming months,” according to Barron’s.
Count Facebook parent Meta Platforms (META) as another one of Nvidia's AI clients. Meta is one of the largest communication services holdings in the Invesco ETFs. The fourth iteration of its Llama model requires 100,000 Nvidia processors, more than five times the number used for the third version.
Also notable regarding AI spending: when one company unveils significant AI expenditures, competitors notice. In fact, such announcements can compel rivals to make AI commitments of their own.
“The wave of one-upmanship spending, where each model training generation grows by a factor of 10, could make current predictions about data centers look quaint. BofA Global Research estimates that data center capex could rise by 14% a year, from $215 billion in 2023 to $311 billion in 2026. Look for those forecasts to go higher,” reported Barron’s.
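For readers who want to sanity-check the compounding in that forecast, the quoted figures can be verified with simple arithmetic. Here is a minimal Python sketch, an illustrative check using only the numbers quoted above, not BofA's actual model:

```python
# Back-of-the-envelope check on the BofA data center capex projection
# quoted above (figures as reported; an illustration, not BofA's model).

base_2023 = 215     # data center capex in 2023, $ billions
growth_rate = 0.14  # the "14% a year" growth rate cited

# Compounding 14% annually over the three years from 2023 to 2026:
capex_2026 = base_2023 * (1 + growth_rate) ** 3
print(f"2026 capex at 14%/yr: ${capex_2026:.0f}B")  # ~$319B

# The quoted $311B endpoint implies a slightly lower compound rate:
implied_cagr = (311 / 215) ** (1 / 3) - 1
print(f"Implied annual growth for $311B: {implied_cagr:.1%}")  # ~13.1%
```

In other words, the $311 billion endpoint corresponds to roughly 13% compound annual growth, so the "14% a year" figure is approximately, though not exactly, consistent with it.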
For more news, information, and analysis, visit the ETF Education Channel.