Data Center Demand Forecasting and Capacity Planning: How Utilities Modernize the Grid for High-Load Growth
Data center energy demand is accelerating, and utilities are being pushed to upgrade demand forecasting and capacity planning across transmission and distribution networks. U.S. data centers used about 176 terawatt-hours (TWh) of electricity in 2023, up from 58 TWh in 2014, and federal analysis projects that usage could double or triple to as much as 580 TWh by 2028 as AI and cloud computing expand (Shehabi et al., 2024; Offutt & Zhu, 2025). To keep pace while maintaining reliability and protecting ratepayers, utilities should modernize forecasting models and planning workflows with better grid data, operational technology, and flexible interconnection approaches that reflect how large loads can enter the queue and scale quickly (Electric Power Research Institute [EPRI], 2024).
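To put these figures in perspective, here is a minimal sketch of the implied compound annual growth rates, assuming only the published 2014, 2023, and 2028 values cited above (the 2028 figure is the upper end of the projected range):

```python
# Rough growth-rate check using the figures cited above (values in TWh).
# Illustrative only; the 2028 value is the upper end of the projected range.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

usage_2014, usage_2023, usage_2028 = 58.0, 176.0, 580.0

historical = cagr(usage_2014, usage_2023, 2023 - 2014)   # ~13% per year
projected = cagr(usage_2023, usage_2028, 2028 - 2023)    # ~27% per year

print(f"2014-2023 CAGR: {historical:.1%}")
print(f"2023-2028 CAGR (upper projection): {projected:.1%}")
```

In other words, even the historical growth rate of roughly 13 percent per year would be eclipsed if the upper projection materializes, which is why forecasting assumptions matter so much in the planning discussion that follows.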

As AI and cloud computing drive unprecedented load growth, utilities must act decisively by upgrading tools, processes, and infrastructure planning to match data center timelines.
Instead of steady, predictable increases in demand, utilities are now managing concentrated, high-capacity loads that can outpace traditional planning cycles. Many are under pressure to deliver power at the pace new data centers require while maintaining grid reliability and resilience.
Meanwhile, many utilities are advancing sustainability and decarbonization goals, adding new layers of complexity to already stretched systems. Achieving both will require modernized operations built on data, collaboration, and digital tools that improve grid visibility and responsiveness.
The interconnection challenge for utilities: Data center grid capacity and queue delays
Across the United States, utilities are handling record volumes of interconnection requests from data center developers, many exceeding available capacity. In some regions, proposed data centers would require more power than the utility’s existing customer base consumes in total (EPRI, 2024).
Before exploring solutions, it helps to understand the scale and complexity of what utilities are being asked to connect. Data centers vary widely in size, redundancy requirements, and grid connection type, as the typical voltage levels and redundancy tiers outlined below illustrate.

- Typical interconnection voltage levels in the United States are 13.8 kV, 34.5 kV, and 69 kV and above.
- N: Basic capacity to support the load
- N+1: One additional unit for backup (e.g., one extra generator)
- 2N: Full duplication of all critical components
- 2N+1: Full duplication plus an extra unit for added fault tolerance
- Note: These values are approximate and presented for planning context only. Actual designs, interconnection types, and data center size ranges vary by region, utility policy, service area, site design, and technology mix; the categories simply illustrate how grid connection and redundancy needs scale with facility size.
Smaller enterprise or colocation sites—typically under 20 megawatts—connect at the distribution level through pad-mounted transformers and local substations. By contrast, hyperscale and AI-driven facilities above 20 megawatts interconnect directly at the transmission level, often requiring dedicated substations and 2N or higher redundancy to maintain uptime.
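For planning context, here is a minimal sketch of how the redundancy tiers above translate into installed backup-equipment counts for a hypothetical facility; the unit rating and facility load are illustrative assumptions, not survey data:

```python
# Illustrative mapping from redundancy tier to installed generator/UPS units.
# Unit rating and facility load are hypothetical planning assumptions.
import math

def units_required(load_mw: float, unit_mw: float, tier: str) -> int:
    """Return installed unit count for a given redundancy tier."""
    n = math.ceil(load_mw / unit_mw)  # N: units needed just to carry the load
    return {
        "N": n,
        "N+1": n + 1,        # one spare unit
        "2N": 2 * n,         # full duplication
        "2N+1": 2 * n + 1,   # full duplication plus one spare
    }[tier]

# Example: a 60 MW facility served by 3 MW units.
for tier in ("N", "N+1", "2N", "2N+1"):
    print(tier, units_required(60, 3, tier), "units")
```

The jump from N+1 to 2N roughly doubles the installed equipment, which is one reason hyperscale designs drive both larger interconnection requests and longer equipment lead times.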
These design and redundancy expectations place extraordinary strain on transmission and distribution (T&D) planning, as each connection must meet both reliability and resiliency standards while accommodating unprecedented load growth.
Legacy approval and engineering workflows may struggle to keep pace with hyperscale data center build schedules. As interconnection requests increase in volume and complexity, many utilities are finding that manual reviews and static grid models are not always sufficient for today’s timelines. Beyond sheer volume, the nature of data center loads adds further complexity.
Because these facilities rely heavily on power-electronic equipment, their demand can ramp rapidly, fluctuate with compute cycles, and create harmonics that challenge traditional grid models.
Various utilities are adopting digital tools such as dynamic modeling and digital twins to simulate grid behavior, test scenarios, and locate viable connection points far faster than legacy methods allow.
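As one illustration of how such screening might work, here is a minimal sketch that ranks candidate connection points by available thermal headroom. The substation names, ratings, and loadings are hypothetical, and a real interconnection study would rely on full power-flow and dynamic models rather than a simple headroom check:

```python
# Hypothetical screening of candidate points of interconnection by headroom.
# A production study would use detailed power-flow and dynamic models instead.
from dataclasses import dataclass

@dataclass
class Substation:
    name: str
    firm_capacity_mw: float   # rating usable for new load
    peak_load_mw: float       # existing coincident peak loading

    @property
    def headroom_mw(self) -> float:
        return self.firm_capacity_mw - self.peak_load_mw

def screen_candidates(subs: list[Substation], request_mw: float) -> list[Substation]:
    """Return substations that can host the request, best headroom first."""
    viable = [s for s in subs if s.headroom_mw >= request_mw]
    return sorted(viable, key=lambda s: s.headroom_mw, reverse=True)

candidates = [
    Substation("North 69 kV", firm_capacity_mw=300, peak_load_mw=210),
    Substation("River 138 kV", firm_capacity_mw=450, peak_load_mw=260),
    Substation("East 34.5 kV", firm_capacity_mw=80, peak_load_mw=65),
]

for s in screen_candidates(candidates, request_mw=75):
    print(f"{s.name}: {s.headroom_mw:.0f} MW headroom")
```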
Utilities are also contending with challenges around data interoperability, shifting regulatory requirements, and persistent supply chain constraints. These factors can delay projects and raise costs when multiple agencies and developers are involved. Streamlining data exchange and aligning technical standards among stakeholders will be critical to keeping interconnection projects on pace.
Data center load flexibility: A new opportunity in planning
Not all data center loads are created equal. While hyperscale and AI-driven sites often demand full capacity from day one, others can scale gradually or adjust consumption dynamically. Emerging “flexible interconnection” models allow utilities to connect new data centers more quickly by accounting for flexibility in how and when they draw power (Narzikul, 2025).
Factors influencing flexibility include:
- Load ramp-up profiles
- Ability to shift or shed demand
- On-site generation and storage
- Redundancy strategies (e.g., N+1 configurations that support partial operation during constraints)
Utilities that model and plan for this flexibility can unlock capacity faster, reduce upgrade needs, and align interconnection timing with real-world load behavior.
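A minimal sketch of the idea behind flexible interconnection is estimating how many hours per year a new load would need to curtail if connected ahead of network upgrades. The hourly load shape, feeder limit, and requested load below are hypothetical assumptions for illustration only:

```python
# Hypothetical estimate of curtailment exposure under a flexible interconnection.
# Hourly profiles, the feeder limit, and the requested load are illustrative.
import random

HOURS = 8760
random.seed(0)

feeder_limit_mw = 120.0
# Existing load skewed toward off-peak values (60-105 MW).
existing_load = [60 + 45 * random.random() ** 2 for _ in range(HOURS)]
data_center_mw = 30.0                      # requested flat data center load

curtailed_hours = 0
curtailed_energy_mwh = 0.0
for base in existing_load:
    overload = base + data_center_mw - feeder_limit_mw
    if overload > 0:
        curtailed_hours += 1
        curtailed_energy_mwh += min(overload, data_center_mw)

print(f"Hours requiring curtailment: {curtailed_hours} ({curtailed_hours / HOURS:.1%})")
print(f"Curtailed energy: {curtailed_energy_mwh:,.0f} MWh "
      f"of {data_center_mw * HOURS:,.0f} MWh requested")
```

If the curtailment exposure is small and concentrated in a few peak hours, a flexible connection paired with on-site storage or workload shifting may be far faster to energize than waiting for a full upgrade.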
Operational technology as the backbone of modern utility operations
Rising data center demand is making operational technology (OT) the backbone of utility operations for managing interconnections and maintaining grid performance. Systems such as supervisory control and data acquisition (SCADA), advanced distribution management systems (ADMS), and distributed energy resource management systems (DERMS) provide continuous visibility and control across distribution networks. These capabilities help operators monitor grid conditions, anticipate stress points, and adjust as new high-load facilities come online.
Automation now plays a central role. Functions such as automated switching, fault isolation, and load balancing help sustain reliability even when data center demand fluctuates or expands unexpectedly. Automation also helps utilities detect and correct imbalances early, reducing outage risk and keeping power quality consistent for all customers.
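To make the automation concept concrete, here is a minimal sketch of the kind of rule an automation scheme might apply: a threshold check that proposes transferring load to an adjacent feeder. The ratings, threshold, and tie-switch logic are hypothetical simplifications of what real ADMS and FLISR platforms do with full network models and switching studies:

```python
# Hypothetical threshold-based load-transfer check, simplified from what
# real ADMS/FLISR schemes do with full network models and switching studies.

def propose_transfer(feeder_load_mw: float, feeder_rating_mw: float,
                     adjacent_headroom_mw: float,
                     threshold: float = 0.9) -> str:
    """Recommend an action when a feeder approaches its rating."""
    loading = feeder_load_mw / feeder_rating_mw
    if loading < threshold:
        return "no action"
    relief_needed = feeder_load_mw - threshold * feeder_rating_mw
    if adjacent_headroom_mw >= relief_needed:
        return f"transfer {relief_needed:.1f} MW via tie switch"
    return "alarm: insufficient adjacent headroom, operator review required"

print(propose_transfer(feeder_load_mw=46, feeder_rating_mw=50, adjacent_headroom_mw=6))
```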
Integrating distributed energy resources (DERs)—including on-site generation—provides added flexibility, especially in areas where multiple data centers compete for limited transmission capacity. When paired with responsive OT systems, these assets can ease congestion and strengthen grid resilience during periods of peak demand.
Upgrading legacy OT systems remains a critical step. Modern, interoperable architectures that support real-time data exchange give operators a live view of grid performance and the flexibility to adjust as conditions and power requirements evolve across large-scale digital infrastructure.
Using data to forecast and manage load growth
Rapid data center expansion is reshaping how utilities forecast demand and plan capacity. Traditional models built on historical load patterns and gradual growth can break down in markets where data center requests arrive as step changes. In a 2024 EPRI utility survey, 60% of responding utilities reported data center interconnection requests of 500 MW or larger, and 48% reported requests of 1,000 MW or larger, highlighting just how large individual requests can be (EPRI, 2024). To stay ahead, various utilities are using AI-driven forecasting tools to predict how new data centers will affect grid performance and long-term resource planning.
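A minimal sketch of the core idea is layering discrete, queue-driven step additions on top of a trend-based baseline forecast. The baseline growth rate, queue entries, and realization rate below are hypothetical assumptions, not utility data:

```python
# Hypothetical peak-demand forecast: organic trend plus queued step loads.
# All inputs are illustrative assumptions.

baseline_peak_mw = 4_000.0
organic_growth = 0.015          # assumed 1.5% per year organic growth
realization_rate = 0.6          # assumed share of queued MW that materializes

# (in-service year, requested MW) for hypothetical queued data center projects
queue = [(2026, 300), (2027, 500), (2027, 250), (2028, 1_000)]

for year in range(2025, 2031):
    organic = baseline_peak_mw * (1 + organic_growth) ** (year - 2025)
    step = realization_rate * sum(mw for yr, mw in queue if yr <= year)
    print(f"{year}: {organic + step:,.0f} MW "
          f"(organic {organic:,.0f} + data center steps {step:,.0f})")
```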
IoT telemetry from substations, transformers, and field assets provides early insight into stress points near large data center clusters. These continuous data streams help operators spot localized constraints, prioritize upgrades, and adjust dispatch strategies before reliability issues develop.
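As a simple illustration of how telemetry streams might be screened for emerging stress points, here is a minimal sketch that flags transformers whose recent peak loadings approach their ratings. The asset names, ratings, readings, and 85% threshold are hypothetical:

```python
# Hypothetical screening of transformer telemetry for emerging stress points.
# Asset names, ratings, readings, and the 85% threshold are illustrative.
from statistics import mean

readings_mva = {                      # recent daily peak loadings per transformer
    "XFMR-101 (near campus A)": [41, 43, 44, 45, 46],
    "XFMR-204": [22, 21, 23, 22, 22],
}
ratings_mva = {"XFMR-101 (near campus A)": 50, "XFMR-204": 40}

def flag_stress(readings: dict, ratings: dict, limit: float = 0.85) -> list[str]:
    """Flag assets whose average recent peak exceeds `limit` of rating."""
    flagged = []
    for asset, values in readings.items():
        utilization = mean(values) / ratings[asset]
        if utilization >= limit:
            flagged.append(f"{asset}: {utilization:.0%} of rating")
    return flagged

print(flag_stress(readings_mva, ratings_mva))
```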
However, utilities still face challenges in obtaining timely and accurate information from data center customers. Many operators are unable or unwilling to share load and performance data, even under confidentiality agreements, limiting system planners’ ability to model real-world behavior and forecast growth accurately.
Bringing grid and facility data together through integrated analytics platforms strengthens both planning and day-to-day operations. When data moves freely between departments and systems, utilities can model different growth scenarios and prepare for multiple outcomes. This capability is increasingly important for forecasting the high-intensity, variable energy demands of AI workloads, which require models that evolve alongside new technologies and usage patterns.
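Building on the forecasting sketch above, scenario analysis can bound the range of outcomes by varying how much of the interconnection queue actually materializes. The figures and scenario assumptions below are hypothetical:

```python
# Hypothetical scenario envelope for 2030 peak demand, varying how much of
# the data center queue materializes. All figures are illustrative.

organic_peak_2030_mw = 4_300.0   # assumed trend-based peak without new data centers
queued_mw = 2_050.0              # assumed total MW in the interconnection queue

scenarios = {"low": 0.3, "mid": 0.6, "high": 0.9}   # assumed realization rates

for name, rate in scenarios.items():
    peak = organic_peak_2030_mw + rate * queued_mw
    print(f"{name:>4} scenario: {peak:,.0f} MW peak "
          f"({rate:.0%} of queued data center load materializes)")
```

Carrying a low, mid, and high trajectory through resource and T&D plans lets utilities stage investments that remain useful even if some queued projects never materialize.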
Strategic planning to align with data center growth
Expanding data center clusters are prompting utilities to rethink how they plan and coordinate large-scale investments, and to treat those clusters as a planning inflection point rather than routine load growth. By partnering early with developers, regulators, and local governments, utilities can better synchronize upgrade timelines with data center development and provide transparent cost allocations before projects accelerate.
Flexible resource planning that incorporates microgrids, DERs, and on-site generation helps reduce strain in areas with heavy data center activity. These strategies enable utilities to balance local reliability with broader grid stability while advancing renewable integration and decarbonization goals.
Addressing supply chain constraints and interoperability gaps remains essential for maintaining project schedules and cost predictability. Collaborative planning, modular design, and scalable technologies help utilities prepare for continued data center growth while maintaining the reliability and resilience that digital infrastructure depends on.
Securing the grid in an increasingly data center–connected environment
Cybersecurity is now a shared responsibility between utilities and their largest data center customers as networks become increasingly interconnected. High-capacity connections expand the number of devices on the network and create new entry points that must be monitored and secured. The stakes are high: a single breach could disrupt both digital infrastructure and the electric grid that supports it.
Utilities are reinforcing defenses with layered controls such as segmentation, authentication, and intrusion detection to isolate threats before they spread.
Ongoing monitoring, role-based access, and anomaly detection extend visibility across operational and connected device environments. Embedding cybersecurity in every stage of modernization—from design through maintenance—helps prevent disruptions and strengthen resilience across an increasingly data center–connected grid.
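As a small illustration of the asset-visibility piece of this, here is a minimal sketch that flags devices observed on an OT network segment but missing from the approved inventory. The device identifiers are hypothetical, and real deployments would rely on dedicated OT security and network-monitoring platforms:

```python
# Hypothetical check: flag devices seen on an OT segment that are not in the
# approved asset inventory. Device identifiers are illustrative; real
# deployments would use dedicated OT security and monitoring platforms.

approved_inventory = {"rtu-sub14", "relay-sub14-a", "relay-sub14-b", "gw-dc-meter-01"}
observed_on_segment = {"rtu-sub14", "relay-sub14-a", "gw-dc-meter-01", "unknown-host-7f"}

unexpected = observed_on_segment - approved_inventory   # possible rogue devices
missing = approved_inventory - observed_on_segment      # approved but silent

for device in sorted(unexpected):
    print(f"ALERT: unapproved device on OT segment: {device}")
for device in sorted(missing):
    print(f"NOTICE: approved device not reporting: {device}")
```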
The future grid demands adaptability
Rising data center demand highlights a new reality for utilities: flexibility and capacity are equally vital in shaping tomorrow’s grid. Integrating operations, data analytics, and cybersecurity will help utilities manage the next wave of large-scale digital infrastructure. Developing adaptive systems will keep them ahead of the curve as AI-driven data centers expand and evolve.
As utilities modernize the grid to support this surge, both regulatory and technical flexibility will be critical. Emerging flexible interconnection models—based on more dynamic load forecasting and shared operational data—can accelerate project timelines while reducing system strain.
At the same time, cost allocation and ratepayer protection will remain central considerations. Determining who bears the expense of infrastructure upgrades—developers, utilities, or ratepayers—will require transparent collaboration among regulators, policymakers, and industry leaders to balance grid resilience with equitable investment.
Making that shift takes more than new tools. It calls for a mindset grounded in collaboration, agility, and decisive action. The next phase of grid modernization will depend less on scale and more on utilities’ ability to anticipate change and navigate complexity across a rapidly transforming energy landscape.
This article was created with the assistance of generative AI tools and was edited by the Logic20/20 content team for clarity and accuracy.
References
Electric Power Research Institute. (2024, September 16). Utility experiences and trends regarding data centers: 2024 survey (Report No. 3002030643). https://www.epri.com/research/products/000000003002030643
Narzikul, G. (2025, September 30). Boosting grid resilience through DERMS: A guide for flexible interconnection deployment. Utility Analytics Institute. https://utilityanalytics.com/derms-flexible-interconnection-guide/
Offutt, M. C., & Zhu, L. (2025, August 26). Data centers and their energy consumption: Frequently asked questions (CRS Report No. R48646). Congressional Research Service. https://www.congress.gov/crs-product/R48646
Shehabi, A., Smith, S. J., Hubbard, A., Newkirk, A., Lei, N., Siddik, M. A., Holecek, B., Koomey, J. G., Masanet, E. R., & Sartor, D. A. (2024, December 19). 2024 United States data center energy usage report (Report No. LBNL-2001637). Lawrence Berkeley National Laboratory. https://escholarship.org/uc/item/32d6m0d1
About the Authors
Syed Ali is a Manager of Grid Operations with deep professional experience in the power and utilities sector. He has managed teams across multiple engineering disciplines and possesses broad technical knowledge of the grid system. Syed’s extensive design and implementation experience in the electric utility industry encompasses T&D substations, FLISR, VVO, ADMS-OMS, AMI, DERMS, and other areas.
Michael Emmanuel, Ph.D., has over 15 years of experience in utilities, with a focus on power system operations, planning, and DER integration across both distribution and transmission systems. Dr. Emmanuel’s areas of expertise include DER hosting capacity analysis, DERMS, ADMS, market operations, and production cost modeling. In addition, he has led the development of regulatory aspects of a transformational platform that enables true end-to-end monitoring, control, and orchestration of DERs across both transmission and distribution systems.

