
Community a major caveat to successful open-source model adoption

August 27, 2019

Open-source analytical models abound, but having a dedicated community is essential.

In More to open-source than you think, we described the ABCs of open source in the utility context. While "open-source" is used to describe developer tools, data, models, standards, frameworks, and platforms, in this blog we'll look at the open-source analytics most applicable to utilities.

There are plenty of places where models built using open-source tools[1] can be found, and utility analytics teams are taking advantage of them. For example, one gas utility is using machine learning (ML) and deep learning models from non-utility-specific open-source libraries to identify facilities with consumption patterns that require a field visit. In another case, a power authority plans to use the open-source machine learning platform TensorFlow and the deep learning library Keras to predict generation plant performance. PRECISE, a set of algorithms developed by NREL using open-source tools, is being tested at SMUD. The analytics determine ideal setpoints for PV inverters based on simulations that use predictions of distributed energy resources at specific locations on the grid.
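To make the TensorFlow/Keras use case concrete, here is a minimal sketch of the kind of regression model such a team might build. The feature names, data shapes, and network architecture are illustrative assumptions, not details of any utility's actual project.

```python
# Minimal sketch: a Keras regression model for generation plant
# performance. All feature names, shapes, and hyperparameters are
# illustrative assumptions, not details of any real deployment.
import numpy as np
from tensorflow import keras

# Hypothetical training data: rows of (ambient temp, fuel flow,
# load setpoint, humidity) -> observed plant performance metric.
X = np.random.rand(1000, 4).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1),  # predicted performance metric
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

# Predict performance for a new operating condition.
print(model.predict(X[:1]))
```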

Utility-specific models that developers can build on

There is a difference between using non-utility-specific open-source tools to build models for utility use and creating open-source, utility-specific computational models. There are open-source models for calculating load flow, simulating the impact of line failures, determining optimal power flow, and running large-scale power simulations, for example. There are also models created by the renewable energy industry that calculate irradiance and perform wind resource assessment. Many of these can be found in repositories such as GitHub.
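As a feel for how accessible these utility-specific models are, the sketch below runs an AC load flow with pandapower, one of the open-source power-flow packages hosted on GitHub. The two-bus network is a made-up minimal example, not a real feeder.

```python
# Minimal sketch: an AC load flow using the open-source pandapower
# package. The two-bus network below is a made-up example.
import pandapower as pp

net = pp.create_empty_network()

# Building blocks: two buses, a slack connection, a load, and a line.
b1 = pp.create_bus(net, vn_kv=20.0, name="Substation")
b2 = pp.create_bus(net, vn_kv=20.0, name="Feeder end")
pp.create_ext_grid(net, bus=b1)                    # slack bus
pp.create_load(net, bus=b2, p_mw=1.0, q_mvar=0.2)  # 1 MW load
pp.create_line(net, from_bus=b1, to_bus=b2,
               length_km=2.0, std_type="NAYY 4x50 SE")

pp.runpp(net)  # solve the AC power flow
print(net.res_bus[["vm_pu", "va_degree"]])
```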

National labs are contributing to utility-specific models as well. Pacific Northwest National Laboratory (PNNL) developed the Energy Operating Model as an open-source production cost analysis tool for power generation unit commitment. More recently, Los Alamos National Lab announced the Severe Contingency Solver for Electric Power Transmission, an open-source analytical model that predicts where extreme weather events are likely to impact the grid.

Getting to validity, flexibility and security

What's even better, though, especially for power systems simulation models, is to be able to ingest data easily and arrive at a close approximation of a grid model. One effort worth watching is PowSyBL, an LF Energy project currently being developed by TSOs in Europe to simulate the impact of possible line failures on other parts of the transmission grid. Open source was chosen for its flexibility. PowSyBL provides the user with "building blocks" (such as transformers, generators, loads, buses, nodes, etc.) to create a network model that closely approximates their own, and it accepts data in standard formats such as CIM or CGMES.
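For a sense of what this looks like in practice, here is a short sketch using pypowsybl, the Python bindings for PowSyBL. The use of the bundled IEEE 14-bus example network is an assumption for illustration; a TSO would load its own CIM/CGMES export instead.

```python
# Minimal sketch using pypowsybl, the Python bindings for PowSyBL.
# The bundled IEEE 14-bus example stands in for a real grid model;
# in practice a TSO would load its own CIM/CGMES export, e.g.
# network = pp.network.load("grid_model.zip").
import pypowsybl as pp

network = pp.network.create_ieee14()   # example network shipped with pypowsybl
results = pp.loadflow.run_ac(network)  # solve the AC load flow
print(results[0].status)

# Inspect some of the building blocks of the network model.
print(network.get_buses().head())
print(network.get_generators().head())
```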

A similar initiative in the US is the Open-Source Distributed System Platform (OpenDSP), which Duke and Avista are in the beginning stages of developing (see Utilities Collaborate on Open-Source Software).

There is, of course, trepidation when it comes to adopting open source because of concerns about security and validity. That is why it is important to have a dedicated community responsible for the open-source code, managed through well-defined governance. The community of developers is constantly working on improving the models. Examples of such communities are LF Energy, the Eclipse Foundation and AWEA's Wind Resource Working Group. Also of note is the National Rural Electric Cooperative Association's (NRECA) Open Modeling Framework (OMF), with users from 80 utilities, along with 137 other vendors, national labs and universities.

Takeaway: The importance of community

While repositories such as GitHub create a place to access open-source code, dedicated communities improve and vet that code. One method is the feature-branch workflow, in which a new feature is merged into the codebase only if the branch successfully compiles, passes unit tests, passes quality checks (code quality, code coverage, coding style, etc.), receives a minimum number of quality reviews, and is incorporated into the code by at least one member of the review team. Without multiple sets of eyes identifying bugs and delivering patches, there is a risk of errors.
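Those merge criteria boil down to a simple predicate. The sketch below is a hypothetical illustration of that gate, not code from any of the projects mentioned; the threshold and field names are assumptions.

```python
# Illustrative sketch of the feature-branch merge gate described
# above; a hypothetical model, not code from any project mentioned.
from dataclasses import dataclass

MIN_REVIEWS = 2  # assumed threshold; each community sets its own

@dataclass
class BranchStatus:
    compiles: bool
    unit_tests_pass: bool
    quality_checks_pass: bool     # code quality, coverage, style, etc.
    approving_reviews: int
    review_team_signed_off: bool  # at least one review-team member

def may_merge(branch: BranchStatus) -> bool:
    """Return True only if every gate in the workflow is satisfied."""
    return (branch.compiles
            and branch.unit_tests_pass
            and branch.quality_checks_pass
            and branch.approving_reviews >= MIN_REVIEWS
            and branch.review_team_signed_off)

print(may_merge(BranchStatus(True, True, True, 2, True)))   # True
print(may_merge(BranchStatus(True, False, True, 2, True)))  # False: tests fail
```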

A word to the wise, then, when approaching open-source analytics: choose your community wisely. Look for communities that take a long-term view of their mission and have the resources to be sustainable.

 

[1] Tools like Python and R make it easier to convert mathematical models into code and to pair them with other modeling efforts.