While Artificial Intelligence (A.I.) is not new to utilities, new insights via Machine Learning are increasing the range and size of A.I.-related benefits
From self-healing grids to voice recognition applications, electric utilities have already been enjoying a positive return on investment when it comes to specific solutions relying upon Artificial Intelligence (A.I.) capabilities. But now our industry is entering a brave new data-driven world in which A.I. is becoming indispensable and ubiquitous. In this new world, Machine Learning (ML) is improving existing processes, addressing new challenges, and providing unexpected insights.
Let’s quickly look at two representative use cases for Machine Learning at electric utilities: optimization of demand response customer profiling, and analysis of real-time T&D O&M data to augment condition-based maintenance procedures and leverage outage-related data.
First, in the case of demand response, some utilities have gained unexpected insights from Machine Learning (ML) analysis. Consider how ML disproved the expectation that newer homes’ higher-efficiency appliances and better insulation would make them relatively poor candidates for demand response programs to shave summer peaks. When ML analysis produced this unexpected result, utility personnel reviewed the data underlying the seemingly incorrect conclusion and saw that the majority of newer homes in their service territory had been built in hotter areas, making them better DR candidates than typical older homes, despite the fact that most older homes had less efficient appliances and poorer insulation.
Second, some utilities are using ML to find failure patterns in large populations of T&D equipment. A population of a specific class of equipment, such as a certain size and type of transformer, will have a range of failure modes, with failures having complex dependencies on loading, temperature, and other operational data. Procedurally, ML analysis is applied to 75% of a specific data set, and the results are then used to test predictions against the remaining 25%. This helps engineers see where existing condition-based maintenance procedures are best applied and where they need fine-tuning, so that scarce O&M budgets can be better allocated. It also helps utilities go beyond preventive maintenance by providing predictive capabilities based on anomaly detection. As a result, ML can, in some cases, enable utilities to address specific potential outages before they occur.
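To make the 75/25 procedure concrete, here is a minimal, self-contained sketch. The synthetic transformer records, the field choices (load and temperature), and the trivial threshold "model" are all illustrative assumptions, not the actual analytics a utility would run:

```python
import random

# Hypothetical transformer records: (avg_load_pct, peak_temp_c, failed_within_year)
random.seed(42)
records = []
for _ in range(200):
    load = random.uniform(40, 100)
    temp = random.uniform(50, 110)
    # Synthetic ground truth: heavily loaded, hot-running units fail more often
    failed = (load > 85 and temp > 90) or random.random() < 0.05
    records.append((load, temp, failed))

# 75% of the data set trains the model; the remaining 25% tests its predictions
split = int(len(records) * 0.75)
train, test = records[:split], records[split:]

# Trivial "model": learn the average load and temperature of failed units
failed_train = [r for r in train if r[2]]
load_thresh = sum(r[0] for r in failed_train) / len(failed_train)
temp_thresh = sum(r[1] for r in failed_train) / len(failed_train)

def predict(load, temp):
    """Flag a unit as at-risk when it exceeds both learned thresholds."""
    return load > load_thresh and temp > temp_thresh

# Score the model on the 25% holdout it never saw during training
correct = sum(1 for load, temp, failed in test if predict(load, temp) == failed)
accuracy = correct / len(test)
print(f"holdout accuracy: {accuracy:.2f}")
```

In practice the model would be a far richer algorithm and the data real SCADA and maintenance history, but the split-train-then-validate loop is the same.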
Multiple Big Data drivers behind A.I. and ML
Instead of addressing a single flood of data, A.I. and ML are addressing multiple floods of data across various areas: increased utilization of sensors and internet-connected (IoT) devices, meter data, and local short- and long-interval weather prediction capabilities, to name a few.
The grid-optimization flood of data involves utilities leveraging more widespread monitoring of T&D assets. The service-related flood involves deepening customer engagement through analysis of meter data and home energy network data, and bringing customers greater value by integrating distributed energy resources and shaving peak demand through more granular demand response and dynamic pricing programs.
In these and other areas, Machine Learning specifically, and A.I. generally, both extract value from datasets, and make it easier to perform previously onerous advanced analytics tasks. Along the way, utilities have to stay ahead of privacy concerns by improving their ability to monitor networks to address cyber and physical vulnerabilities.
Higher customer expectations and bigger problems to solve
Consider how amazing many of us found it, a mere seven years ago, when IBM’s question-answering system, Watson, defeated the two greatest Jeopardy! champions by a significant margin. More recently, Google DeepMind’s 2017 accomplishments with AlphaZero moved machine learning forward significantly: AlphaZero taught itself chess, shogi, and Go, and then defeated the leading computer players in each game.
Customer expectations have been rising in parallel with the acceleration of A.I.-driven problem-solving capabilities. From voice recognition to auto-fill to GPS driving directions, consciously or subconsciously, our A.I.-related customer service expectations seem to be rising at a pace on par with Moore’s Law.
In this context, two lessons about A.I. came to the forefront in March 2018: many people now unquestioningly expect 100% perfection from A.I., and we seem to unquestioningly accept the idea that the solution to any A.I.-related problem should involve more A.I. rather than less.
The two lessons of March 2018 that will likely be remembered for a long time are:
- The first human fatality associated with operation of an autonomous vehicle, and;
- The revelation of a major data privacy breach involving 50 million Facebook users and Cambridge Analytica.
- Regarding Uber’s recent autonomous vehicle fatality, Michigan State Senator Jim Ananich is demanding that A.I. achieve 100% accident-free perfection, per a 3/20/2018 NY Times article.[1] The fact that this demand may initially seem reasonable should give us pause: if it is proven in the future that autonomous vehicles reduce fatalities compared with human-driven vehicles, would it be ethical to stick with human-driven vehicles afterward?
- In the 3/21/2018 NY Times interview, “Mark Zuckerberg’s Reckoning: ‘This Is a Major Trust Issue’” Facebook’s CEO stated “The new A.I. tools we built after the 2016 elections found, I think, more than 30,000 fake accounts that we believe were linked to Russian sources who were trying to do the same kind of tactics they did in the U.S. in the 2016 election. We were able to disable them and prevent that from happening on a large scale.” In the same interview Zuckerberg referenced a more recent application of A.I. to solve A.I.-driven problems: “In last year, in 2017 with the special election in Alabama, we deployed some new A.I. tools to identify fake accounts and false news, and we found a significant number of Macedonian accounts that were trying to spread false news, and were able to eliminate those.”
New A.I. historical insights and definitions from McKinsey
A newly released McKinsey & Company Executive Guide to A.I. defines Artificial Intelligence as “the ability of a machine to perform cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with the environment, problem solving, and even exercising creativity.”
McKinsey’s guide is designed to help business executives stay at the forefront of the accelerating artificial-intelligence race, and it elaborates on how most recent advances in A.I. have been achieved by applying machine learning to very large data sets. Machine-learning algorithms detect patterns and learn how to make predictions and recommendations by processing data and experiences, rather than by following explicit programming instructions.
The McKinsey Analytics team has designed the guide interactively, targeting executives who seek to make nimble, informed decisions about where and how to employ A.I. in their businesses.
The McKinsey Analytics team boils A.I.-related trends down to the business problems being solved, including the use of A.I. in robotics and autonomous vehicles, computer vision, language, virtual agents, and machine learning.
The guide identifies the drivers that propelled A.I. from hype to reality: the convergence of algorithmic advances, data proliferation, and tremendous increases in computing power and storage.
The guide’s introductory “Why A.I. Now?” section is sure to interest a wide audience: it features a well-designed interactive chart that lets readers navigate their own research pathways, skimming freely or delving more deeply into any of several layers of information about each historical milestone in A.I.
The associated timeline section organizes dozens of key sub-category or sub-era developments from 1991 to the present in the same interactive, drill-down fashion.
Source: An Executive’s Guide to A.I., McKinsey & Company
Beyond the introductory content areas, the depth and breadth of the guide’s current tool and trend information is impressive, exemplified by the multi-layered hierarchy of the interactive Machine Learning section. Similar depth is evident in the sections on Deep Learning, and both are rounded out with multiple business use cases.
Source: An Executive’s Guide to A.I., McKinsey & Company
Machine-learning algorithms also adapt in response to new data and experiences, improving their efficacy over time. The guide provides detailed coverage of three types of machine learning:
- Supervised learning
- Unsupervised learning
- Reinforcement learning
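As a toy illustration of the first two categories (the meter-reading values, the nearest-neighbor classifier, and the two-means clustering here are purely hypothetical teaching devices, not drawn from the guide):

```python
# Supervised learning: labeled examples train a predictor.
# Hypothetical daily meter readings (kWh) labeled by usage tier.
labeled = [(12.0, "low"), (14.5, "low"), (38.0, "high"), (41.2, "high")]

def classify(kwh):
    # 1-nearest-neighbor: return the label of the closest labeled example
    return min(labeled, key=lambda ex: abs(ex[0] - kwh))[1]

print(classify(13.1))  # near the "low" examples
print(classify(39.9))  # near the "high" examples

# Unsupervised learning: no labels -- the algorithm finds structure itself.
# Simple two-means clustering groups unlabeled readings into two clusters.
readings = [11.8, 13.2, 14.9, 37.5, 40.1, 42.3]
c1, c2 = min(readings), max(readings)            # initial cluster centers
for _ in range(10):                              # iterate assign-then-update
    g1 = [r for r in readings if abs(r - c1) <= abs(r - c2)]
    g2 = [r for r in readings if abs(r - c1) > abs(r - c2)]
    c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
print(sorted(g1), sorted(g2))
```

Reinforcement learning, the third category, instead learns from trial-and-error feedback (rewards) rather than from a fixed data set, so it does not reduce to a few lines as neatly.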
Deep learning is a type of machine learning that can process a wider range of data resources, requires less data preprocessing by humans, and can often produce more accurate results than traditional machine-learning approaches (although it requires a larger amount of data to do so).
In deep learning, interconnected layers of software-based calculators known as “neurons” form a neural network. The network can ingest vast amounts of input data and process them through multiple layers that learn increasingly complex features of the data at each layer.
The network can then make a determination about the data, learn if its determination is correct, and use what it has learned to make determinations about new data. For example, once it learns what an object looks like, it can recognize the object in a new image.
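A minimal sketch of the layered “neurons” described above, in pure Python with fixed illustrative weights (a real network learns its weights from data; training is omitted here):

```python
import math

def sigmoid(x):
    # Squashing activation: maps any weighted sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each "neuron" computes a weighted sum of its inputs plus a bias,
    # then passes the result through the activation function
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny two-layer network with hand-picked weights (illustrative only)
hidden_w = [[0.5, -0.6], [0.9, 0.2]]   # 2 hidden neurons, 2 inputs each
hidden_b = [0.1, -0.3]
out_w = [[1.2, -0.8]]                  # 1 output neuron over the 2 hidden values
out_b = [0.05]

x = [0.7, 0.2]                         # raw input features
h = layer(x, hidden_w, hidden_b)       # first layer extracts simple features
y = layer(h, out_w, out_b)             # next layer combines them into a score
print(h, y)
```

Deep networks simply stack many more such layers, each learning increasingly complex features of the data, exactly as the paragraph above describes.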
The deep learning part of the guide drills into two architectures, convolutional neural networks and recurrent neural networks, answering the same four questions in detail for each:
- What it is
- When to use it
- How it works
- Business use cases
Here is the link to McKinsey’s Executive Guide to A.I., authored by partners Michael Chui and Brian McCarthy.
[1] Ananich called the death of 49-year-old Elaine Herzberg a tragedy and said companies need to continue refining their systems. “I want that work to happen here, because we have a 100-year history of making the best cars on the planet,” he said. “It’s not perfect by any means, and we are just going to have to keep working until it is.” (“Can Self-Driving Cars Withstand First Fatality?” NY Times, March 20, 2018.)