
High-performance GPUs have become central to the global AI ecosystem. They power advanced models, enable cloud-scale computing, and support a growing array of AI-driven applications across industries. For cloud providers, hyperscale data center operators, and AI-focused companies, GPUs are not merely hardware; they are major capital investments. The financial and operational decisions surrounding these assets carry significant implications for profitability, cash flow, and strategic planning.
A core point of contention in recent discussions has been GPU depreciation: how quickly these chips lose economic value and how that loss is reflected in company financials. Critics argue that hyperscalers and other large deployers of Nvidia chips may be understating depreciation by using schedules that span five or six years. This, they contend, could inflate reported profits and obscure the economic reality of hardware wear and obsolescence. Advocates of longer depreciation schedules counter that extended lifespans are justified by operational practices, including cascading workloads, maintenance programs, and the ongoing utility of older GPUs for less demanding tasks.
This debate has far-reaching consequences, influencing investor confidence, corporate strategy, market valuations, and even regulatory oversight. Understanding the nuances of GPU depreciation is crucial for anyone seeking insight into the economics of AI infrastructure.
The Basis of GPU Depreciation
Rapid Technological Evolution
The pace of GPU innovation is exceptionally fast: major new architectures emerge roughly every 18 to 24 months, offering improved performance, energy efficiency, and AI-specific features. For high-performance computing and AI model training, these improvements are often transformative, enabling faster model iteration and lower energy consumption per unit of work.
As newer GPUs enter the market, older generations can become less competitive. Even if a chip remains operational, it may no longer meet the requirements for cutting-edge workloads. This technological obsolescence raises questions about the appropriate financial treatment of GPUs in depreciation schedules. While accounting principles allow companies to estimate useful life, the economic reality may diverge from these estimates.
Accounting and Economic Useful Life
Depreciation is an accounting mechanism designed to allocate the cost of an asset over its useful life. Theoretically, it reflects the economic consumption of the asset’s value. In practice, depreciation schedules rely on estimates and judgment. When the actual economic utility of a GPU declines faster than its accounting life, financial statements may overstate profitability and understate the real cost of capital deployment.
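To make the mechanics concrete, here is a minimal sketch of straight-line depreciation for a single GPU. The purchase price, salvage value, and useful life are illustrative assumptions, not figures disclosed by any company.

```python
# Minimal sketch of straight-line depreciation for one GPU.
# All figures are illustrative assumptions, not disclosed company numbers.

def annual_straight_line(cost: float, salvage: float, life_years: int) -> float:
    """Spread (cost - salvage) evenly over the asset's useful life."""
    return (cost - salvage) / life_years

gpu_cost = 30_000.0   # assumed purchase price (USD)
salvage = 3_000.0     # assumed residual value at retirement
life = 5              # assumed accounting useful life (years)

print(f"Annual depreciation expense: ${annual_straight_line(gpu_cost, salvage, life):,.0f}")
# -> Annual depreciation expense: $5,400
```

If the chip's economic value in fact decays to salvage within three years, the real value consumed is roughly $9,000 per year, and the $5,400 accounting charge understates it by about 40 percent.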
The discrepancy between accounting useful life and economic useful life is particularly pronounced in AI infrastructure. Unlike traditional servers, GPUs face unique stressors: they operate at high utilization for extended periods, are subject to thermal cycling, and often perform tasks that push hardware to its limits. These factors can accelerate wear and reduce effective life, challenging extended depreciation assumptions.
Capital Intensity and Inventory Exposure
Deploying large-scale AI infrastructure is capital-intensive. For hyperscalers, a single data center may house tens of thousands of GPUs, with each unit costing thousands to tens of thousands of dollars. Inventory, both in warehouses and in operational deployment, represents a substantial portion of total capital assets. If the market value of older GPUs declines faster than anticipated, companies risk holding stranded assets. Inventory write-downs and accelerated depreciation could materially affect financial performance.
The capital-intensive nature of GPU deployment also magnifies the importance of accurate economic depreciation. If revenue generation does not keep pace with asset depreciation, return on investment can be negatively impacted, and the timing of asset refresh cycles becomes critical.
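To put rough numbers on this scale effect, the sketch below uses an assumed fleet size and unit price, not any company's actual figures, to translate one data center's GPU fleet into capital expenditure, annual depreciation, and the revenue each GPU must earn just to cover its own depreciation.

```python
# Illustrative fleet-level arithmetic; fleet size, unit cost, and useful
# life are assumptions, not any company's actual figures.

fleet_size = 20_000      # assumed GPUs in one data center
unit_cost = 30_000.0     # assumed cost per GPU (USD)
life_years = 5           # assumed useful life

total_capex = fleet_size * unit_cost
annual_depreciation = total_capex / life_years   # straight-line, zero salvage
breakeven_per_gpu = annual_depreciation / fleet_size

print(f"Total GPU capex:     ${total_capex / 1e9:.2f}B")         # $0.60B
print(f"Annual depreciation: ${annual_depreciation / 1e6:.0f}M")  # $120M
print(f"Per-GPU breakeven:   ${breakeven_per_gpu:,.0f}/year")     # $6,000/year
```

Under these assumptions, each GPU must generate $6,000 of revenue per year before any other cost is covered, and shortening the assumed life from five years to three raises that hurdle to $10,000.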
Evidence Supporting Depreciation Concerns
Several strands of evidence suggest that current GPU depreciation practices may be optimistic.
Investor Warnings
Prominent investors, including Michael Burry, have argued that hyperscalers may be extending GPU depreciation schedules beyond what is economically reasonable. Burry contends that GPUs lose value more rapidly than accounting schedules suggest, potentially resulting in understated depreciation by billions of dollars across multiple companies over several years. These warnings have fueled debate among investors and analysts, drawing attention to the potential financial risk hidden in infrastructure accounting.
CoreWeave and IPO Disclosures
CoreWeave, a company that leases GPU compute for AI workloads, explicitly highlighted depreciation risk in its IPO prospectus. The company assumed a six-year useful life for its technology equipment, including GPUs. Analysts have raised concerns that this may understate actual economic depreciation, especially given the rapid pace of GPU advancement. The prospectus illustrates how companies apply judgment in determining asset life, but also highlights the potential exposure if real-world usage diverges from accounting assumptions.
Independent Financial Modeling
Research from Cerno Capital examined the implications of varying useful-life assumptions. Extending depreciation schedules from three to six years roughly halves the annual straight-line charge on the same asset base, increasing apparent profitability. The analysis suggests that overstated asset life could mask the economic costs of GPU deployment, creating the appearance of healthier financial metrics than the underlying economics support.
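The arithmetic behind this effect is simple to reproduce. In the sketch below, the capex, revenue, and cost figures are assumptions chosen for illustration, not Cerno Capital's model inputs; the point is only how the useful-life estimate moves reported operating income.

```python
# Illustrative effect of useful-life assumptions on reported profit.
# Capex, revenue, and cost figures are assumptions, not Cerno's model inputs.

capex = 10e9             # assumed GPU capital expenditure (USD)
annual_revenue = 4e9     # assumed revenue from the fleet
other_costs = 1e9        # assumed power, staff, facilities, etc.

for life_years in (3, 6):
    depreciation = capex / life_years   # straight-line, zero salvage
    operating_income = annual_revenue - other_costs - depreciation
    print(f"{life_years}-year life: depreciation ${depreciation / 1e9:.2f}B, "
          f"operating income ${operating_income / 1e9:+.2f}B")
```

Under these assumptions, the identical underlying economics show a $0.33B reported loss on a three-year schedule and a $1.33B reported profit on a six-year schedule.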
Hardware Reliability Studies
Academic and industry studies on GPU reliability provide additional insight. Research on high-performance computing clusters has shown that GPUs experience error rates and failures that increase with usage and age. Maintaining system availability often requires overprovisioning, which adds cost and effectively shortens usable life. Other studies have documented permanent faults in GPU control units, highlighting the risk of gradual decline in hardware reliability even before complete failure. These findings support the argument that economic depreciation may outpace accounting depreciation.
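A rough way to see how declining reliability becomes a cost is to estimate the spare capacity needed to hold availability constant as the fleet ages. The unavailability fractions below are assumptions chosen for illustration; they are not taken from the cited studies.

```python
# Illustrative overprovisioning estimate. The fraction of the fleet that is
# failed or down for repair at any moment is assumed to rise with age;
# these rates are not taken from the cited reliability studies.

required_healthy = 10_000                        # GPUs that must be online
down_fraction_by_age = [0.02, 0.04, 0.08, 0.15]  # assumed, per year of age

for age, down in enumerate(down_fraction_by_age, start=1):
    # Provision enough extra units that the healthy remainder meets the target.
    provisioned = required_healthy / (1 - down)
    spares = provisioned - required_healthy
    print(f"Year {age}: ~{spares:,.0f} extra GPUs needed "
          f"({down:.0%} assumed unavailable)")
```

As the assumed unavailability climbs, the spare capacity required grows nonlinearly, from about 200 extra GPUs in year one to nearly 1,800 in year four, an overhead that effectively shortens the fleet's economic life.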
Arguments Supporting Longer Depreciation Schedules
Not all evidence supports the criticism of extended depreciation periods. Several factors suggest that longer asset lifespans may be reasonable.
Cascading Workloads
Older GPUs may be redeployed from high-intensity AI model training to less demanding tasks such as inference, simulation, or internal data processing. This cascading approach extends the effective economic life of the hardware, allowing companies to extract value beyond the initial deployment phase. Observations from data centers indicate that repurposed GPUs can continue to generate meaningful revenue, supporting longer depreciation schedules.
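One stylized way to express this argument is to sum the value a GPU generates across successive roles. The phase durations and revenue rates below are hypothetical, chosen only to illustrate the cascading logic.

```python
# Stylized cascading-value model. Phase durations and per-GPU revenue
# rates are hypothetical, chosen only to illustrate the argument.

phases = [
    # (role, years in role, assumed revenue per GPU-year in USD)
    ("frontier training",   2, 15_000.0),
    ("inference serving",   2,  7_000.0),
    ("internal batch jobs", 2,  3_000.0),
]

total_value = 0.0
for role, years, revenue_per_year in phases:
    total_value += years * revenue_per_year
    print(f"{role:<20} {years} yr at ${revenue_per_year:,.0f}/yr")

total_years = sum(years for _, years, _ in phases)
print(f"Lifetime value per GPU: ${total_value:,.0f} over {total_years} years")
```

If a GPU costing $30,000 can plausibly earn value across six years of cascaded roles, a six-year schedule looks defensible; if demand for the later phases evaporates, the same schedule overstates asset life.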
Maintenance Programs and Warranties
Maintenance programs, service contracts, and extended warranties mitigate the risk of premature obsolescence. By maintaining operational reliability and reducing downtime, these programs enable GPUs to remain productive for longer periods, aligning with extended depreciation assumptions.
Real-World Market Evidence
Secondary market transactions for older GPUs suggest that these assets retain value beyond the initial deployment cycle. Companies that sell or lease used GPUs can recover part of the capital cost, supporting the notion that economic utility persists. Contract renewals for older GPU models at competitive pricing further reinforce this argument.
Accounting Standards Flexibility
Depreciation schedules rely on judgment, and accounting standards permit companies to estimate useful life based on operational expectations. As long as the methodology is consistent and disclosed, extended depreciation periods are permissible. The contention that such schedules constitute manipulation is not universally accepted, especially when companies demonstrate that extended use is consistent with operational practices.
Risks and Uncertainties
Despite these defenses, uncertainties remain. Technological advancement may accelerate, shortening the effective economic life of older GPUs. Hardware reliability risks and component degradation may also reduce practical asset life. Secondary market volatility adds further unpredictability, as resale values could decline if demand softens. Additionally, companies relying on debt financing may face collateral risk if GPU values fall faster than anticipated. Regulatory and auditing scrutiny may also increase, particularly if asset valuations appear inconsistent with observed operational outcomes.
Implications for Stakeholders
Hyperscale cloud providers could face earnings volatility if depreciation assumptions do not align with economic reality. Adjustments to capital expenditure, refresh cycles, and asset management strategies may be required to manage risk. AI compute leasing companies are particularly sensitive to depreciation assumptions, as their unit economics depend on accurate assessment of hardware life. GPU manufacturers such as Nvidia benefit when older models remain valuable, but risks arise if market demand shifts. Investors must evaluate both accounting disclosures and operational metrics to understand hidden risks. Auditors and regulators may seek greater transparency in depreciation policies, particularly as AI infrastructure becomes a critical economic asset class.
Observing Trends and Future Considerations
Monitoring several indicators can help stakeholders assess GPU depreciation risk. These include capital expenditure patterns, reported useful life estimates, secondary market pricing, GPU reliability data, inventory write-downs, and adjustments to financial reporting. How companies adapt depreciation schedules in response to technological evolution, market conditions, and operational data will offer insights into the sustainability of AI infrastructure economics.
Conclusion
GPU depreciation represents a critical intersection of accounting, operational economics, and technological innovation. While some investors and analysts argue that depreciation schedules may be overly optimistic, evidence suggests that operational strategies such as workload cascading, maintenance programs, and secondary market transactions can extend asset life. Understanding these dynamics is essential for companies, investors, and regulators navigating the AI compute ecosystem. Ongoing observation of financial disclosures, market behavior, and technology trends will remain vital to accurately assess the risks and opportunities associated with high-performance GPU deployment.
Sources and References
Accounting for AI: Financial Accounting Issues and Capital Deployment in the Hyperscaler Landscape – Cerno Capital (cernocapital.com)
This big AI bubble argument is wrong – Business Insider (businessinsider.com)
Nvidia AI-Chip Depreciation Fears Are Overblown, Bernstein Says – Barron’s (barrons.com)
Eight odd things in CoreWeave’s IPO prospectus – Financial Times (ft.com)
CoreWeave tests investor risk appetite with $7.5bn in looming debt repayments – Financial Times (ft.com)
Michael Burry says Oracle and Meta are wildly overvalued – MarketWatch (marketwatch.com)
Characterizing GPU Resilience and Impact on AI/HPC Systems – arXiv (arxiv.org)
Understanding the Effects of Permanent Faults in GPU’s Parallelism Management and Control Units – arXiv (arxiv.org)
Meta’s AI Data Center ‘Depreciation’ Problem – Chip Stock Investor (chipstockinvestor.com)
The $800B AI Bet: Why Wall Street Prices GPUs at 14% Yet Deploys Billions – Medium / Truthbit AI (medium.com)
The Big Short: Is Michael Burry right about the AI trade? – Saxo (home.saxo)