data-backed · Slow Burn
Bloomberg

BlackRock Likes Chips, Hardware in Overweight AI Stance

Read the full article on Bloomberg.

What Happened

Wei Li, chief global investment strategist at BlackRock, discusses the outlook for artificial intelligence stocks, capital expenditure and earnings. "We're specifically overweight semis and hardware," Li tells Bloomberg Television. (Source: Bloomberg)

Our Take

Investment flows into semiconductors and hardware signal a major shift in the AI compute landscape. BlackRock's stance confirms that massive CapEx is being funneled into specific chips and hardware, which directly affects deployment cost and latency for complex workloads such as RAG. This physical layer sets the achievable performance ceiling for systems built on models like GPT-4 or Claude 3, making the investment a leading indicator of future scaling, not just a driver of current quarterly earnings.

Hardware allocation is not just about buying GPUs; it is about optimizing inference cost and latency across deployment platforms. When capital concentrates on specific hardware, developers are forced to match their model and serving framework to what that hardware does well. Building RAG pipelines requires weighing inference costs across models such as Claude 3 Haiku or GPT-4; ignoring hardware constraints wastes budget and degrades user experience. Assume hardware bottlenecks, not algorithmic brilliance, are the primary limit on system performance.

Teams running production agent workflows should re-allocate Q3 budget toward deployment infrastructure. Benchmark inference on purpose-built hardware rather than defaulting to generic cloud instances, because hardware choice directly dictates the achievable cost per token for complex agent logic.

What To Do

Benchmark candidate hardware before committing spend, because hardware choice directly dictates the achievable cost per token for running complex agent logic.
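A benchmark of this kind reduces to a simple calculation: convert an instance's hourly price and sustained throughput into a cost per million tokens, then compare options. The sketch below illustrates the arithmetic; all prices, throughput figures, and option names are hypothetical placeholders, not vendor quotes.

```python
# Hypothetical cost-per-token comparison across serving options.
# All hourly prices and tokens/sec figures are illustrative assumptions.

def cost_per_million_tokens(hourly_usd: float, tokens_per_second: float) -> float:
    """Convert instance price and sustained throughput into USD per 1M tokens."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_usd / tokens_per_hour * 1_000_000

# Placeholder deployment options with assumed numbers.
options = {
    "generic_cloud_gpu": {"hourly_usd": 4.00, "tokens_per_second": 900},
    "optimized_dedicated": {"hourly_usd": 6.50, "tokens_per_second": 2400},
}

for name, cfg in options.items():
    print(f"{name}: ${cost_per_million_tokens(**cfg):.2f} per 1M tokens")
```

With these assumed numbers, the pricier dedicated option wins on cost per token because its throughput more than offsets the higher hourly rate; real benchmarks should use measured throughput for your own model and batch sizes.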

Builder's Brief

Who

Teams running RAG in production; ML infrastructure engineers

What changes

Inference cost and latency must be optimized against benchmarks on the specific hardware you deploy to, not against generic cloud assumptions

When

Now, unfolding over the coming weeks

Watch for

Semiconductor supply chain bottlenecks and custom ASIC adoption

What Skeptics Say

The market is overhyping the immediate impact; chip demand remains subject to cyclical economic shifts. This trend is a long-term infrastructure buildout, not an immediate feature change.
