The pace of change occurring in data centers right now is unlike anything else in real estate. AI demand, technical obsolescence, and a structural power shortage are all accelerating simultaneously, and the gap between investors who understand these dynamics and those who don’t is widening fast.
GreenGen’s From the Experts series highlights insights from GreenGen professionals on emerging trends, challenges, and opportunities shaping real estate performance and value.
By:

Amir Mortazavi, PhD
Director, Systems Engineering + AI
Amir Mortazavi serves as Director of Systems Engineering and Artificial Intelligence at GreenGen, where he leads the firm’s engineering practice across the assessment, design, and optimization of high-performance energy and infrastructure systems for clients worldwide.
Amir holds a Ph.D. in Mechanical Engineering from the University of Maryland, where he also serves as a research partner with a focus on data center performance and next-generation infrastructure. He is a certified Data Center Infrastructure Expert (DCIE) and Data Center Engineering Specialist (DCES) through the International Data Center Authority, and an active member of both ASME and ASHRAE.
With over two decades of hands-on engineering experience, Amir has led optimization initiatives for mission-critical data center facilities serving organizations including X (formerly Twitter), Sprint, and U.S. Customs and Border Protection.
1. AI + Data Centers Are Reshaping Every Asset Class
The bigger story isn’t specific to data centers; it’s how the sector’s explosive growth is transforming every asset class around it. Accelerated computing demand has become a real estate-wide story. The shift is structural, not cyclical, and investors who understand it early will have a significant advantage in asset selection and underwriting.
Office:
- AI is reshaping the workforce by automating routine tasks and amplifying the value of specialized expertise. The result is an organization increasingly structured around a smaller core of high-impact knowledge workers supported by AI systems rather than layers of traditional staff.
- The traditional Class A value proposition (great amenities, prime location) is giving way to buildings that offer high power density, flexible electrical and MEP systems, and compute-ready infrastructure.
- Tenants increasingly expect “compute-ready” workplaces as part of hybrid work infrastructure.
Manufacturing & Industrial:
- Facilities with sufficient access to power, water, and cooling are being repurposed as potential sites for data centers.
- The rise of real-time logistics, from dynamic routing to autonomous fulfillment, has made compute infrastructure a primary driver of site selection, alongside traditional factors like transportation access and labor markets.
- GPU-driven automation increases demand for high-bandwidth connectivity, particularly within data centers and across latency-sensitive segments of the digital infrastructure stack.
- The reshoring of U.S. manufacturing, combined with the push toward robotic automation, is creating a new category of industrial power demand. Facilities that can support onsite AI computing infrastructure for operations will command a premium.
Medical:
- AI-driven diagnostics and real-time analytics require low-latency, locally deployed inference, compute, and storage infrastructure.
- Hospitals are increasingly deploying on-site micro data hubs, blurring the boundary between clinical and digital infrastructure real estate.
- New compliance and redundancy requirements are impacting facility design and total cost of ownership.
The throughline across all three sectors is the same: power, density, and connectivity are now table-stakes features, not differentiators. Buildings that lack them will face accelerating obsolescence, a challenge that brings us directly to the second major trend.
2. The Data Center Obsolescence Wave Is Already Here
Perhaps the most underappreciated risk in the current market is the speed at which existing data center infrastructure is becoming technically obsolete. This isn’t gradual depreciation; it’s a step-change driven by a specific force: the NVIDIA Vera Rubin and Grace Blackwell architectures, which are redefining what “minimum spec” means for the next generation of compute facilities.
Existing brownfield shells, many of which were considered state-of-the-art just five years ago, now require major reinvestment to meet the thermal load, density, and water-cooling demands of modern GPU clusters. Air-cooled designs, once the default, are rapidly failing to keep pace. Water-cooling and hybrid liquid solutions are no longer premium upgrades; they’re becoming the baseline requirements.
The regulatory environment is compounding this pressure. In Northern Virginia, a dominant data center market in the U.S., regulators and utilities are tightening capacity utilization oversight in ways that directly impact financing structures, permitting timelines, and underwriting assumptions. Virginia is effectively becoming the national test bed for grid management policy, and what gets decided there will set precedent for emerging markets across the country.
For investors, power scarcity is actively increasing cost of capital, extending lead times, and creating new layers of valuation uncertainty. Assets that appeared sound on acquisition may require significant capital infusion to remain competitive, or face a rapid loss of tenant demand. This dynamic flows directly into the third and most pressing trend we’re tracking.
3. The Power & Permitting Bottleneck Is Systemic
If there is one issue keeping data center investors up at night heading into the back half of this decade, it is the deepening shortage of power and permitting pathways needed to support next-generation facilities. This constraint is fundamentally reshaping how deals are underwritten, priced, and exited.
On the power side, utilities simply cannot build transmission infrastructure fast enough to support AI-era loads. Interconnection queues of four to eight or more years are now standard in key U.S. markets, meaning a developer who secures a site today may not have reliable power access until well into the next decade. Capital investors are responding by elevating power procurement strategy from a diligence checkbox to a first-order investment thesis question. Who controls the power? Is it contracted, captive, or speculative? The answers increasingly determine whether a deal gets done.
Permitting is following a similar trajectory. Local communities are pushing back on new data center development over concerns about noise, water consumption, viewshed impacts, and strain on local grids. What was once a manageable entitlement process has become a genuine gating factor, with environmental review timelines now influencing IRR assumptions more directly than construction schedules. This is a meaningful shift for any investor modeling development-stage returns.
The result is an emerging bifurcation in the market. Assets with guaranteed, contracted power (particularly those backed by captive renewable generation, on-site power generation, or microgrid infrastructure) are commanding significant premiums. Location, which has historically been the primary driver of data center valuations, is increasingly secondary to power certainty. Operators who understood this early are now sitting on a structural competitive advantage that will be very difficult for late movers to replicate.
For capital allocators, the combined effect of these constraints is a risk profile that looks meaningfully different from prior data center investments: higher upfront capex, longer development timelines, greater valuation unpredictability, less certainty in exit strategies, and deeper dependence on regulatory outcomes that sit outside any individual investor’s control. This is becoming the defining risk calculus for anyone entering the space between now and 2030.
Data Center Q&A
You asked, Amir answered. The questions below came from across our network of investors, operators, advisors, and engineers thinking through the same challenges. Have a question of your own? Submit it here and we’ll get back to you.
Q1: Are hyperscalers crowding out private capital, or does the new power shortage create opportunities for collaborative financing structures?
A: Hyperscalers are not crowding out private capital; rather, their scale is fundamentally reshaping the market by making power availability the primary constraint. This shift is creating new opportunities for collaborative financing structures that involve utilities, private capital, and hyperscalers, particularly in areas such as power infrastructure, generation, and grid integration.
As a result, the traditional approach to data center planning and development must evolve. Decision-making should no longer follow a linear process where key stakeholders are engaged late in the project lifecycle. Instead, it should adopt a more integrated, multi-stakeholder model, where utilities, infrastructure providers, capital partners, and operators are involved early in the planning phase. This enables more effective alignment of power strategy, site selection, financing, and system design, ultimately reducing risk, accelerating timelines, and improving overall project viability.
Q2: If we’re underwriting a data center acquisition today, what technical red flags should we look for that could signal near-term obsolescence?
A: The two most critical factors are available power capacity and the underlying data center architecture. Power has become the primary constraint, as rack power densities have increased by more than an order of magnitude compared to legacy data centers.
At the same time, data center architecture is influenced by a wide range of factors, including cooling strategy, electrical distribution design, physical layout, and scalability, which collectively determine whether a facility can support next-generation workloads.
As a result, a comprehensive technical evaluation is required to assess upgrade feasibility. In many cases, legacy facilities are so constrained, especially in terms of power scalability and cooling adaptability, that the cost and complexity of retrofitting can exceed that of demolishing and redeveloping the site as a new, purpose-built data center.
Q3: How should we model stranded power risk in markets like Phoenix, Northern Virginia, and Silicon Valley?
A: It requires a market-specific financial and risk analysis that accounts for project scale, development timeline, and local grid constraints.
In general, we strongly recommend avoiding markets with insufficient or highly constrained power capacity, as the financial impact of project delays, interconnection uncertainty, or intermittent power availability can be substantial. These risks can materially affect project returns through:
- Delayed revenue realization
- Increased capital costs (e.g., temporary power solutions or infrastructure upgrades)
- Reduced operational reliability
As a result, power availability should be treated as a primary underwriting variable, not just a technical constraint, when evaluating data center investments.
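To give a rough sense of how this kind of delay risk can be folded into underwriting, the sketch below discounts deferred income and bridge-power spend for each year a site waits on interconnection. The cash-flow figures, delay range, and discount rate are illustrative placeholders (not GreenGen assumptions), and treating deferred-year income as forgone is a deliberately conservative simplification.

```python
# Hypothetical sketch: sensitivity of present value to delayed energization.
# All figures are illustrative placeholders, not underwriting guidance.

def stranded_power_pv_loss(annual_noi, delay_years, discount_rate,
                           temp_power_cost_per_year=0.0):
    """Estimate the present-value cost of waiting on grid interconnection.

    annual_noi: net operating income the asset would earn once energized ($/yr)
    delay_years: years of delay before the utility can deliver full power
    discount_rate: investor discount rate (e.g., 0.10 for 10%)
    temp_power_cost_per_year: bridge costs (temporary generation, demand charges)
    """
    loss = 0.0
    for year in range(1, int(delay_years) + 1):
        # Conservative simplification: each delayed year's NOI is treated as
        # forgone rather than merely deferred, plus any bridge-power spend.
        loss += (annual_noi + temp_power_cost_per_year) / (1 + discount_rate) ** year
    return loss

if __name__ == "__main__":
    # Example: a facility expected to produce $40M/yr NOI, 10% discount rate.
    for delay in (2, 4, 6):
        hit = stranded_power_pv_loss(annual_noi=40e6, delay_years=delay,
                                     discount_rate=0.10,
                                     temp_power_cost_per_year=5e6)
        print(f"{delay}-year interconnection delay -> ~${hit/1e6:.0f}M PV impact")
```

Even with placeholder numbers, the exercise shows why interconnection timing can dominate the return profile: each additional year of delay compounds directly against the discount rate.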
Q4: Are secondary markets (e.g., Ohio, Tennessee, Utah, Nebraska) actually safer long-term bets because they have more scalable power?
A: For hyperscale AI data centers, secondary markets can indeed represent lower-risk options from a power availability and scalability standpoint. However, several additional factors must be carefully evaluated, including:
- Workforce availability to support construction, operations, and maintenance
- Network latency and connectivity, particularly for latency-sensitive applications
- Market demand and absorption, which ultimately determine long-term asset utilization
While power availability is a critical driver, it should not be considered in isolation. We strongly recommend conducting a comprehensive market demand analysis alongside power and infrastructure assessments to ensure overall project viability and to align the development strategy with both technical requirements and commercial realities.
Q5: I am worried about pending Montgomery County, MD data center proposals. How does your team view the environmental impact on the Potomac and quality of life for the communities being asked to allow the use? Mike Rubin spent a lot of time, energy, and money to help create the Ag Reserve, and it is a precious use of land.
A: From our perspective, the environmental impact, particularly on the Potomac watershed and surrounding ecosystems, is highly dependent on how these facilities are designed and operated. The two primary concerns are:
- Water usage and discharge, especially for traditional evaporative cooling systems
- Power demand and associated emissions, which can indirectly affect regional environmental quality
Uncontrolled growth with conventional designs can place meaningful strain on both water resources and local utility infrastructure, which understandably raises concerns about long-term impacts on the Ag Reserve and overall quality of life.
That said, the next generation of data center design is moving in a different direction. Many operators are transitioning toward:
- Air-cooled or hybrid cooling systems to significantly reduce or eliminate water consumption
- Liquid cooling (direct-to-chip / immersion) for high-density AI workloads, which improves efficiency and reduces overall energy intensity
- Integration with on-site or low-carbon power strategies where feasible
From a community standpoint, the key is how projects are executed, not just whether they are built. Well-designed facilities can:
- Minimize water usage
- Reduce noise and visual impact
- Operate with significantly lower carbon intensity
Q6: We have an existing portfolio of data centers. Considering the next generation of cooling technologies, should we switch our technology?
A: This is exactly the right time to evaluate whether a technology shift makes sense. In many cases, transitioning to newer cooling architectures can:
- Improve energy efficiency
- Reduce operational costs
- Align better with evolving regulatory and community expectations
- Position assets competitively for next-generation compute demand (AI / high-density loads)
We typically recommend a forward-looking assessment that evaluates:
- Current infrastructure vs. future compute density requirements (5–10 year horizon)
- Cooling strategy alternatives (air vs liquid vs hybrid)
- Water and power constraints
- Community and regulatory risks
Happy to discuss this further or take a look at your portfolio to identify where a transition would create the most value.
Q7: How should institutional investors think about power procurement? Are PPAs, on-site generation, or utility partnerships the better long-term strategy?
A: It depends on several factors, including the site characteristics, cost of power, and project timeline. The power strategy is one of the most critical considerations when siting a data center and requires thorough engineering and financial analysis.
We strongly recommend early coordination with the local utility. In many cases, there is an opportunity to leverage on-site infrastructure (such as onsite power generation, energy storage systems, and emergency/backup generation) to support grid stability. By participating in demand response or grid-support programs, operators can often negotiate more favorable utility rates or interconnection terms.
Q8: We own office and industrial assets near transmission corridors. How do we evaluate whether they could support edge or modular data center deployments?
A: We recommend performing a data center feasibility assessment or an edge application study to evaluate the site’s capability to support next-generation workloads.
Clients are often surprised to find that even adjacent transmission infrastructure may not have sufficient available capacity to support the power requirements of modern AI-driven data centers, which can exceed tens to hundreds of megawatts depending on scale and density.
In contrast, edge deployments typically require significantly lower power; however, depending on the application and the size of the host facility (e.g., office buildings), they may still necessitate substantial electrical infrastructure upgrades, including service capacity increases, transformer upgrades, and distribution modifications.
A structured evaluation helps align site capability, power availability, and application requirements, ensuring that both centralized and edge strategies are technically and economically viable.
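As a simple first-pass screen (not a substitute for the feasibility assessment described above), the sketch below checks whether a host building’s spare electrical capacity could absorb a proposed edge deployment once cooling overhead is included. The capacity figures, PUE value, and safety margin are illustrative assumptions.

```python
# Hypothetical first-pass screen for hosting an edge deployment in an
# existing office/industrial building. Values are illustrative assumptions.

def edge_capacity_screen(service_capacity_kw, existing_peak_kw,
                         proposed_it_load_kw, assumed_pue=1.4,
                         safety_margin=0.10):
    """Return (feasible, spare_kw, required_kw) for a proposed edge node.

    service_capacity_kw: rated capacity of the building's electrical service
    existing_peak_kw: current peak demand of the host building
    proposed_it_load_kw: IT (critical) load of the edge deployment
    assumed_pue: total facility power / IT power for the edge node
    safety_margin: fraction of service capacity held in reserve
    """
    spare_kw = service_capacity_kw * (1 - safety_margin) - existing_peak_kw
    required_kw = proposed_it_load_kw * assumed_pue
    return required_kw <= spare_kw, spare_kw, required_kw

if __name__ == "__main__":
    ok, spare, need = edge_capacity_screen(service_capacity_kw=2000,
                                           existing_peak_kw=1200,
                                           proposed_it_load_kw=400)
    verdict = "may fit without a service upgrade" if ok else "likely needs a service upgrade"
    print(f"Spare ~{spare:.0f} kW vs required ~{need:.0f} kW -> {verdict}")
```

A screen like this only flags obvious mismatches; transformer capacity, distribution design, and utility interconnection terms still require the detailed study described above.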
Q9: What is the realistic lifespan of a data center built in 2020–2022 under today’s AI-driven density assumptions?
A: It depends on the data center type, location, and underlying infrastructure. However, under current AI-driven demand, obsolescence has accelerated, and we generally estimate a competitive lifespan of approximately 10 years for data centers built in the 2020–2022 timeframe.
Depending on the original design and construction, some facilities may be retrofitted or upgraded to meet the requirements of next-generation, high-density data centers. That said, the primary factors governing long-term competitiveness are power availability and achievable power density. Facilities that cannot scale power capacity or support higher rack densities will face increasing limitations, regardless of the condition of the building or mechanical systems.
Q10: How will the shift toward liquid cooling impact operating costs, water rights, and local permitting risk?
A: Depending on the design, next-generation liquid-cooled data centers can consume significantly less water than traditional air-cooled facilities, particularly those relying on evaporative cooling systems. By reducing or eliminating the need for cooling towers, liquid cooling can substantially lower water consumption and discharge requirements.
In addition, liquid cooling improves thermal efficiency by enabling more direct heat transfer at the chip or rack level, which reduces the amount of energy required for cooling relative to the IT load. As a result, the cooling power overhead (i.e., cooling energy per unit of compute power) is typically lower compared to conventional air-cooled systems.
Overall, liquid-cooled data centers offer both improved energy efficiency and reduced water usage, making them a more sustainable solution, especially for high-density AI and HPC applications.
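To make the “cooling power overhead” point concrete, the sketch below compares annual cooling energy for a fixed IT load under two assumed cooling-power ratios (kW of cooling per kW of IT). The ratios and load are illustrative placeholders; actual overheads vary widely with climate, design, and utilization.

```python
# Illustrative comparison of cooling energy overhead for air-cooled vs
# direct-liquid-cooled facilities. Ratios are assumed placeholders, not
# measured values.

HOURS_PER_YEAR = 8760

def annual_cooling_energy_mwh(it_load_kw, cooling_kw_per_it_kw):
    """Cooling energy per year for a constant IT load and cooling power ratio."""
    return it_load_kw * cooling_kw_per_it_kw * HOURS_PER_YEAR / 1000.0

if __name__ == "__main__":
    it_load_kw = 10_000  # hypothetical 10 MW of IT load
    scenarios = {
        "Conventional air cooling (assumed 0.35 kW per IT kW)": 0.35,
        "Direct liquid cooling (assumed 0.15 kW per IT kW)": 0.15,
    }
    for name, ratio in scenarios.items():
        mwh = annual_cooling_energy_mwh(it_load_kw, ratio)
        print(f"{name}: ~{mwh:,.0f} MWh/yr of cooling energy")
```

Under these assumed ratios, the lower cooling overhead of liquid cooling translates into thousands of megawatt-hours per year at hyperscale IT loads, which is where the operating-cost and emissions benefits come from.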
Q11: How do we evaluate whether a site will support the cooling requirements of next-gen GPU clusters (200kW+ per rack)?
A: To determine whether a site can support next-generation GPU clusters at 200 kW+ per rack, we first evaluate whether the existing facility can be upgraded to a liquid-cooling architecture and whether the building structure can support the higher rack and equipment loads. This includes reviewing floor loading, slab capacity, rack anchorage, routing space for supply/return piping, CDU placement, leak-detection strategy, and the ability to isolate and maintain the liquid-cooling loop. High-density AI deployments generally require direct liquid cooling or similar architectures rather than traditional air-only systems.
Assuming sufficient electrical capacity is available, the next step is to confirm that the facility can reject the heat at the building level. In practice, that means verifying whether the central plant can be upgraded or replaced to support the required coolant temperatures, flow rates, redundancy, and heat-rejection capacity through chillers, dry coolers, cooling towers, or hybrid systems. If the site is power-viable and can be upgraded structurally and hydraulically for liquid cooling, then the cooling demand can generally be accommodated through selection of an appropriate facility cooling infrastructure.
If those conditions are not met (particularly if the structure cannot support the rack loads, there is no practical path for liquid piping and CDU deployment, or the plant cannot be economically upgraded), then the site is unlikely to be a good candidate for high-density GPU deployment and may be better suited to lower-density workloads or full redevelopment.
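As a back-of-envelope check on what a 200 kW rack implies hydraulically, the sketch below applies the sensible-heat relation Q = ṁ·cp·ΔT to estimate the coolant flow a direct-to-chip loop would need to carry. The fluid properties and 10 °C temperature rise are assumptions for illustration; actual flow and ΔT targets come from the CDU and chip vendor specifications.

```python
# Back-of-envelope coolant flow estimate for a high-density rack using
# Q = m_dot * cp * dT. Assumes a water-like coolant and a 10 C temperature rise.

CP_WATER = 4.186    # kJ/(kg*K), specific heat of water
RHO_WATER = 997.0   # kg/m^3, density near room temperature

def required_flow(rack_heat_kw, delta_t_c=10.0):
    """Return (kg/s, L/min, US GPM) of coolant needed to absorb rack_heat_kw."""
    mass_flow = rack_heat_kw / (CP_WATER * delta_t_c)    # kg/s
    liters_per_min = mass_flow / RHO_WATER * 1000 * 60   # L/min
    gpm = liters_per_min / 3.785                         # US gallons per minute
    return mass_flow, liters_per_min, gpm

if __name__ == "__main__":
    for rack_kw in (100, 200, 300):
        kg_s, lpm, gpm = required_flow(rack_kw)
        print(f"{rack_kw} kW rack @ 10 C rise: ~{kg_s:.1f} kg/s "
              f"(~{lpm:.0f} L/min, ~{gpm:.0f} GPM) of coolant")
```

Numbers of this magnitude, on the order of tens of gallons per minute per rack, are why piping routes, CDU placement, and central-plant heat-rejection capacity dominate the feasibility questions discussed above.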