Hierarchical Distribution Grid Intelligence: Using Edge Compute, Communications, and IoT Technologies
IEEE Power & Energy Magazine, September/October 2023
James Stoupis, Rostan Rodrigues, Mohammad Razeghi-Jahromi, Amanuel Melese, and Joemoan I. Xavier
Summary
This article from IEEE Power & Energy Magazine (September/October 2023) discusses the application of edge computing to electrical power distribution grids, a growing trend that addresses the integration of distributed energy resources, fault handling, measurement data recovery, and cybersecurity. It also covers data fusion techniques crucial to smart grid infrastructure.
Full Transcript
DUE TO THE PROLIFERATION of Internet-of-Things (IoT)-based technologies in the last several years, digital computing hardware and software technologies have seen massive performance improvements. Additionally, these technologies provide lower costs for comparatively higher computation and storage, more compact hardware, and compatibility with a large selection of operating systems. Furthermore, modern communication protocols have increased the penetration of single-board computers in many consumer and industrial applications. This article presents the application of a state-of-the-art edge computing infrastructure to the electrical power distribution grid. Electrical power distribution is becoming increasingly complex with the large degree of integration of distributed energy resources (DERs). The distribution system also experiences many different undesired events, such as temporary and permanent faults, loss of measurement data, and cyberattacks. This article highlights a small-scale experimental validation of edge computing in power distribution automation that can be used for classifying different faults, detecting anomalies in the grid, recovering measurement data, and other advanced analytics techniques.

Introduction

With a large number of parallel data sources becoming readily available in a smart grid, data fusion techniques that combine multiple data sources lie at the heart of smart grid platform integration. Related to this concept, the authors developed intelligent applications that reside on edge computing device hardware in the distribution grid. The developed edge processor platform performs advisory control functions to assist in the larger goal of providing enhanced grid resilience and distributed intelligence, and it provides a faster response time to system anomalies.

Edge Computing and Communications Basics

Edge computing combines technologies related to data processing, communications, and analytics at the grid edge, which is defined as the distribution system between the substation and the end-use customer sites. Edge computing devices (ECDs) are connected to grid field devices, such as pole-top reclosers and switches, meters, line post sensors, and other field devices, via wired and wireless communications. Each ECD is also capable of communicating with its peers on the same distribution system and with other edge processors, providing communication redundancy and application coordination. Furthermore, each ECD is capable of communicating upstream to substation computers, utility control centers, and even the cloud; see Figure 1 for an example. Multiple communication media can be implemented using the ECDs, including Wi-Fi, cellular, and radio communication. Utility preference will determine which media are used for the field communications and for the upstream communications.

Figure 1. Edge computing device connections: downstream, upstream, and peer-to-peer wireless links between the ECD and field devices (pole-top recloser, meter, line post sensor, service transformer monitor, capacitor bank switch, solar inverter), other edge processors for redundancy and coordination, and the substation computer, utility control center, and cloud server over the wired utility network.
The deployment of 5G and future advanced cellular platforms will only enhance the functionality that can be deployed on ECDs.

The edge processor concept entails the merging of edge computing, ubiquitous communications, and advanced analytics. As shown in Figure 2, the architecture contains layers of communication devices and intelligent applications. At the outer layer, low-cost edge computing devices (LC-ECDs) gather data from the local field devices and communicate these data to the medium-cost edge computing devices (MC-ECDs) or to the high-cost edge computing devices (HC-ECDs). The MC-ECDs contain basic to slightly advanced applications and communicate and coordinate with each other and with the upstream HC-ECDs. The HC-ECDs contain advanced application and management functions and can communicate with the cloud if desired by system designers.

Figure 2. Proposed edge computing architecture for power distribution applications: RMS and streaming data sources feed the MC-ECDs, which coordinate with central HC-ECDs.

Edge Computing Framework

The proposed embedded framework for the edge processor concept is shown in Figure 3. The framework is designed to support two operating system variants (Windows/Linux) to accommodate different applications and commercialization options. Many communication libraries (industrial communication, web server, and proprietary protection and control software) are generally hosted in Windows operating systems, with separate development toward containerized solutions that are operating system independent. On the other hand, novel secure virtual private network (VPN) technologies (OpenVPN, WireGuard, and so on), mesh wireless libraries, and machine learning applications are easiest to evaluate and implement in Linux. Container engines such as Docker or QEMU can then be installed to support containerized grid applications.

Figure 3. Edge computing hardware and framework setup: management, communication, grid-application, and sensor containers (each with their bins/libs) running on Docker Engine/QEMU, with a multi-arch selector (arm/v6/v7/v8, aarch64, amd64, x86_64) over the host OS (ARM64/x86_64).

Some implementations of the ECDs or device network may have a hierarchical structure. A high-level block diagram of such an implementation is shown in Figure 4. The figure shows a multilayer hierarchical architecture with ECDs of different capability (HC-ECD, MC-ECD, LC-ECD). These device networks feature a central or main ECD with management capabilities for pushing applications to individual devices. A container registry, hosted either on the highest-level ECD or in the cloud, provides image repositories for the applications hosted on each computing device. These repositories contain various tagged versions of container images, and the lower-level ECDs can pull down the image they require by requesting it or by using configuration-specific tags.

Figure 4. Multilevel control and communications architecture (communication buses: AMQP, MQTT, WS, HTTPS).

A management bus in this architecture implements control functions and container image pulls over secure protocols. The transfer of applications to specific devices is as simple as providing a descriptive text file listing the image dependencies to pull, or pulling the actual image itself from the higher-level ECD. The management bus features a broker communicating over publish-subscribe message queuing protocols, such as message queuing telemetry transport (MQTT), advanced message queuing protocol (AMQP), and so on. Each application supports receiving message commands to start and stop application processes and to receive configuration-descriptive files coordinating grouping with other devices and zone configurations.
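As a rough illustration of how a grid application might subscribe to such start/stop commands on the management bus, the following Python sketch uses the paho-mqtt client. The broker address, topic layout, and JSON command schema are assumptions for illustration; the article does not specify its message format.

```python
# Minimal sketch of a management-bus listener on an ECD application.
# Broker address, topic names, and the JSON command format are assumptions.
import json
import subprocess

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"          # hypothetical broker on the supervisor ECD
CMD_TOPIC = "mgmt/ecd1/command"  # hypothetical per-device command topic

def on_connect(client, userdata, flags, rc):
    client.subscribe(CMD_TOPIC)

def on_message(client, userdata, msg):
    cmd = json.loads(msg.payload)
    if cmd.get("action") == "start":
        # start the named grid application container on this device
        subprocess.run(["docker", "start", cmd["app"]], check=False)
    elif cmd.get("action") == "stop":
        subprocess.run(["docker", "stop", cmd["app"]], check=False)
    elif cmd.get("action") == "configure":
        # persist a zone/grouping configuration pushed down as JSON
        with open(f"/etc/ecd/{cmd['app']}.json", "w") as f:
            json.dump(cmd["config"], f)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883)
client.loop_forever()
```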
This management interface also manages pushing updates to deployed applications, syncing databases at each individual device for distributed applications, and any updates to security or public key infrastructure (PKI) technologies for operational needs. The lower-level ECDs regularly check for updates to individual device configuration and zone configuration files, which are exchanged in a JavaScript Object Notation (JSON) or equivalent file format.

A separate communication bus provides the protocols used for communication between applications on each device, as needed for wider-area or distributed applications. The traffic on this communication bus could consist of sensor measurements, status messages, or other necessary data interchanges. This communication bus will also include Modbus, distributed network protocol 3 (DNP3), and other established protocols used for distribution automation. A few examples of base applications for distribution automation are machine learning, data collection and fusion, database management, and human-machine interaction or web-server-hosted graphical viewing applications.

Figure 5 shows a high-level process and the components for converting standalone grid applications (software algorithms) into containers and then deploying, managing, deleting, and updating these containers across many ECDs in the field.

Figure 5. Simplified process for building a containerized grid application on the edge computing platform: grid application (FDIR, volt-var, soft reclose, and so on), containerization (Docker, Kubernetes), container orchestration (DockerSwarm, Kubernetes, K3S), and user interface (Portainer GUI, kubectl CLI).

Several tools exist that can be used to accomplish the containerization and orchestration tasks. In this demonstration, Docker Engine was selected to containerize the application due to its simplicity of implementation. Multiple tools exist for orchestration, namely DockerSwarm, Kubernetes (also known as K8S), and K3S (a certified Kubernetes distribution for resource-constrained or remote locations). Among these, K3S is the simplest and least resource-consuming platform for container orchestration, which is desired for LC-ECDs and MC-ECDs; therefore, K3S was chosen as the container orchestration platform in this demonstration.
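The containerization step of Figure 5 can be scripted. Below is a minimal sketch using the Docker SDK for Python; the image name, registry, and build path are hypothetical, and the authors' own tooling may differ.

```python
# Sketch: containerize a grid application and push it to a registry,
# mirroring the Figure 5 flow. Image name, registry, and paths are
# hypothetical examples, not the article's actual artifacts.
import docker

client = docker.from_env()

# Build the application image from a directory containing a Dockerfile.
image, _logs = client.images.build(path="./fdir-app", tag="registry.local/fdir:1.0")

# Push the tagged image so lower-level ECDs can pull it on demand.
client.images.push("registry.local/fdir", tag="1.0")

# Quick local smoke test before handing the image to the orchestrator.
container = client.containers.run("registry.local/fdir:1.0", detach=True)
print(container.status)
```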
Experimental Prototype Design

A fault detection, isolation, and restoration (FDIR) application scheme, as shown in Figure 6, was considered for validating the edge processor concept. Depending on the specific needs of the application, the edge computers can directly communicate with intelligent electronic devices (IEDs) and sensors as well as with other ECDs in the hierarchy. The logical system in this demonstration consists of two substations with a typical five-recloser loop, including four protection recloser controllers, one grid-tie switch, multiple loads, voltage and current sensors, three ECDs, and two gateway devices. The gateway devices communicate with multiple ECDs for redundancy and backup. For example, recloser controller R2 can be controlled by both ECD1 and ECD3, recloser controller R3 can be controlled by ECD2 and ECD3, and so on.

Figure 6. Example FDIR scheme for validating the proposed edge processor-based container orchestration: Substations 1 and 2, reclosers R1-R4, grid tie GT1, wireless sensors 1 and 2, gateways 1 and 2, ECD1-ECD3, and three fault locations.

The simulation of the FDIR scheme under test was done in a JavaScript-based web server tool called Node-RED. The Node-RED software allows the creation of logical and communication nodes that can represent the different components in the FDIR scheme and perform the intended control operations. Another benefit of the Node-RED system is its graphical user interface (GUI) tools, which can be used for creating a very interactive simulation demonstration.

The FDIR algorithm is based on the state machine represented in Figure 7. The state machine runs inside ECD3, the supervisor ECD. Each logical state represents the state of the electrical system, and more logical states can be added in the future development of more complex applications.

Figure 7. Simplified state-machine algorithm for the FDIR scheme, with states Normal, Fault Identification (Fault 1/Fault 2/Fault 3), Fault Isolation (trip relays B1/R1/R2, trip/no-trip GT), and Power Restoration.
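A minimal Python sketch of such a supervisor state machine is shown below. The state names follow Figure 7; the transition triggers are paraphrased assumptions rather than the authors' Node-RED logic.

```python
# Simplified sketch of the Figure 7 FDIR state machine run by the
# supervisor ECD. Transition triggers are paraphrased from the figure,
# not taken from the authors' Node-RED implementation.
from enum import Enum, auto

class State(Enum):
    NORMAL = auto()
    FAULT_IDENTIFICATION = auto()
    FAULT_ISOLATION = auto()
    POWER_RESTORATION = auto()

class FDIRSupervisor:
    def __init__(self):
        self.state = State.NORMAL

    def step(self, fault_detected: bool, fault_isolated: bool, fault_cleared: bool):
        if self.state is State.NORMAL and fault_detected:
            self.state = State.FAULT_IDENTIFICATION      # classify Fault 1/2/3
        elif self.state is State.FAULT_IDENTIFICATION:
            # trip the appropriate relays (e.g., B1/R1/R2), then isolate
            self.state = State.FAULT_ISOLATION
        elif self.state is State.FAULT_ISOLATION and fault_isolated:
            self.state = State.POWER_RESTORATION         # close the grid tie if needed
        elif self.state is State.POWER_RESTORATION and fault_cleared:
            self.state = State.NORMAL                    # return to prefault topology
        return self.state
```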
Demonstration With Wireless Sensors and IED Devices

A demonstration of the FDIR scheme using wireless line sensors and commercial protection and control products is discussed here. The demonstration consists of three edge computing devices (HPE EL10), as shown in Figure 8. The hardware setup uses actual recloser controllers (RER620) to emulate recloser R1 and grid tie GT1 from Figure 6; the other relay devices are modeled by simplified software switches in Node-RED. Wireless sensors were fabricated using commercially available line sensors (voltage and current) together with digital signal processor-based sensor data acquisition and wireless modules to connect and share data with the respective ECDs. MQTT was used as the communication protocol to move data and commands between the ECD devices, whereas the Modbus TCP protocol was used to communicate with the RER620 IED. However, other protocols are also supported (e.g., DNP3, IEC 61850).

Figure 8. Edge processor hardware setup for validating the FDIR scheme: two wireless sensors on 110-V, 60-Hz loads, three EL10 ECDs, a Wi-Fi router and LAN switch, and two RER620 controllers serving as recloser 1 and the grid-tie relay.

In the setup of Figure 8, an actual electrical load is connected to both wireless sensors. The load voltage is standard 110 V, 60 Hz, and a few resistive loads were connected to draw sufficient current for the purpose of the demonstration. The trip threshold is set to 0.2 A in the supervisor ECD (ECD2), meaning that when the current through any load exceeds 0.2 A, the respective relay should trip. The wireless sensors measure both the voltage and the current of their load and send the measurement data to the respective ECD over the wireless local network. As soon as the load current crosses the threshold, the supervisor takes action and sends the trip command to the recloser protection relay. The relay opens, and its status is sent back to the supervisor. Later, when the load current goes below 0.2 A, indicating that the fault is cleared, the relay goes back to the closed position.
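A sketch of this supervisor logic in Python follows, subscribing to sensor readings over MQTT and issuing trip/reclose commands over Modbus TCP. The broker and relay addresses, topic names, payload format, and coil address are illustrative assumptions.

```python
# Sketch of the supervisor ECD's overcurrent check: MQTT in, Modbus TCP out.
# Addresses, topic names, the JSON payload, and the coil mapping are assumptions.
import json

import paho.mqtt.client as mqtt
from pymodbus.client import ModbusTcpClient

TRIP_THRESHOLD_A = 0.2                   # 0.2 A, as in the demonstration
relay = ModbusTcpClient("192.168.1.51")  # hypothetical RER620 address
relay.connect()

tripped = False

def on_message(client, userdata, msg):
    global tripped
    sample = json.loads(msg.payload)     # e.g., {"sensor": 1, "i_rms": 0.27}
    if sample["i_rms"] > TRIP_THRESHOLD_A and not tripped:
        relay.write_coil(0, True)        # send trip command (illustrative coil)
        tripped = True
    elif sample["i_rms"] < TRIP_THRESHOLD_A and tripped:
        relay.write_coil(0, False)       # reclose once the fault has cleared
        tripped = False

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.1.10", 1883)     # hypothetical broker address
client.subscribe("sensors/+/rms")
client.loop_forever()
```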
The GUI panel for one of the edge devices used in this demonstration is shown in Figure 9. Common to all GUI panels (across all ECDs) are a system diagram and fault status indicators. Apart from that, each edge GUI panel shows the settings and statuses relevant to its location in the system. The GUI accepts certain inputs from the user to simulate different types of faults and displays system status, parameters, and waveforms. For example, ECD1 shows the status of the Substation A breaker and Relays 1 and 2 and has GUI buttons for injecting a simulated fault at locations Fault 1 and Fault 2. The demonstration shows the capability to simulate faults at three different locations in the two-feeder substation topology. During normal operation, simulated faults can be introduced at different locations in the system via the GUI fault buttons. Depending on the fault type, the algorithm in the ECD decides to open or close certain recloser devices. The system maintains its state as long as the fault exists. Upon removal of the simulated fault, the system is restored to normal operation. The prefault load on the faulted circuits is used to check the load before restoration.

Figure 9. Graphical panel of an example edge device showing near real-time system parameters and information: simulated fault location controls, fault status, fault isolation and restoration indicators, and an updated system diagram.

This simple demonstration verifies many different aspects of the edge computing platform, such as connecting to and collecting data from wireless sensors, connectivity with existing protection relay devices, and distributed computing and relaxed real-time control through the edge computing architecture.

Demonstration With Software Containers and Orchestration

A final demonstration of FDIR using container orchestration is discussed here. The demonstration consists of six ECDs (3 × EL10 and 3 × Raspberry Pi 4), as shown in Figure 10. All the edge devices are connected over a wireless network that is also connected to the Internet via a router. The K3S master and agent tools are installed on the respective ECDs; together, the six ECDs form a K3S cluster. Once the master ECD detects all the agent nodes, the setup is ready to run the FDIR application. The container image for the FDIR application is developed on two different hardware platforms, i.e., ARM7 (Raspberry Pi 4) and x86 (EL10). Both container image versions are then uploaded to the DockerHub public repository for test purposes; each image is about 200 MB. The container orchestration tool, K3S, assures that the set performance metrics of the container cluster system are met at all times, covering, for example, starting and stopping containers, updating to the latest version, deleting containers, creating more instances of a container on the same edge device, deploying a specific container on a specific edge device in the network, and so on.

Figure 10. Hardware setup for a demonstration with six ECD devices and a containerized FDIR application: supervisor ECD3 plus ECD1 and ECD2, breakers B1/B2, reclosers R1-R4 with sensors S1-S4, and grid tie GT with S5 on a 192.168.1.x MQTT network.

In this demonstration, for simplicity, a command line interface is used to deploy and monitor the FDIR application. A configuration file is developed that defines the settings for each ECD, including the type of container image that should run on the target device(s), the port mapping for the user interface, the number of instances of each application, and so on.
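The article does not reproduce this configuration file, but an equivalent deployment manifest might resemble the sketch below, expressed here as a Python dictionary and applied through the official Kubernetes Python client (which also works against a K3S master). The image name, node label, and port are assumptions.

```python
# Sketch: an illustrative stand-in for the deployment configuration file,
# applied through the Kubernetes Python client against the K3S master.
# The image name, node selector label, and port are assumptions.
from kubernetes import client, config, utils

manifest = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "fdir-ecd1"},
    "spec": {
        "replicas": 1,                                   # instances of the application
        "selector": {"matchLabels": {"app": "fdir-ecd1"}},
        "template": {
            "metadata": {"labels": {"app": "fdir-ecd1"}},
            "spec": {
                # pin this container to a specific edge device in the network
                "nodeSelector": {"kubernetes.io/hostname": "ecd1"},
                "containers": [{
                    "name": "fdir",
                    # multi-arch image: each node pulls its ARM or x86 variant
                    "image": "dockerhub-user/fdir:latest",
                    "ports": [{"containerPort": 1880}],  # Node-RED UI port mapping
                }],
            },
        },
    },
}

config.load_kube_config()                  # kubeconfig from the K3S master
utils.create_from_dict(client.ApiClient(), manifest)
```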
Exam- algorithm discussed previously will execute a certain sequence ples of permanent failures include cable/conductor faults, Simulated Fault Location Control and Monitoring Sub A Wireless Line Sensor A Load A Fault Status Fault 1 S1 Fault 1 R1 Fault 2 R2 Wireless Sensor 1 Fault 1 R R R Substation 1 R1 R2 Gateway 1 ECD1 GT1 Grid Tie 1 ECD3 Gateway 2 Fault Restoration ECD2 Substation 2 R4 R3 R R R Wireless Sensor 2 Fault Isolation Updated System Diagram Switching Device in “Closed” Position Switching Device in “Open” Position figure 9. Graphical panel of an example Edge device showing near real-time system parameters/information. 44 ieee power & energy magazine september/october 2023 Authorized licensed use limited to: IEEE Staff. Downloaded on September 07,2023 at 17:42:30 UTC from IEEE Xplore. Restrictions apply. animal contacts, and equipment failures. Examples of tem- network parameter estimation with applications to short-circuit porary faults include vegetation management issues, light- detection, load flow management, dynamic rating, load model- ning strikes, and switching transients. ing, fault detection, and network parameter estimation. Synchrophasor Data Recovery Application Integration for State Estimation The data fusion and related applications were developed Phasor measurement unit (PMU) data are sampled at syn- and tested on a PC but had not been implemented on an chronized instants, and measurements of nearby PMUs are ECD. These applications were containerized like the FDIR correlated based on the power system topology. As a result, application. The containers made it easy to deploy the PMU data exhibit low-dimensional structure despite the applications across different hardware and operating sys- high-dimensionality of raw data, as demonstrated by Patel tems. These were tested on ECDs to test and compare et al. Specifically, the resulting matrix that contains mea- performance. The synchrophasor data recovery and false surements of nearby PMUs at multiple sampling instants is data injection applications were both originally developed approximately low rank. Therefore, reconstructing missing in Matlab but were later translated to Python to simplify PMU data can be formulated into a low-rank matrix com- deployment. The two event classification applications were pletion problem as demonstrated by Candes et al. Different developed in Python and had no need to be translated. Each matrix completion methods such as nuclear, Hankel, and of the applications and their dependencies were packaged total variation norm minimization have been introduced into a Docker image. A script was also written to build and and developed based on convex optimization technique to run the applications initially. Another script was written to recover both randomly and temporally missing data with run each of these applications after the initialization script applications to synchrophasor (PMU) data recovery for had stopped. These applications were then run on an MC- state estimation. ECD and a HC-ECD. The four applications were containerized and worked as False Data Injection, Anomaly Detection, containerized applications on the MC-ECD and HC-ECD. and Fault Location in Feeder Data Each of the applications had results that were the same as False data injection attacks can be created to be unobservable when tested on a PC but had different execution timing. The by the commonly applied residual-based bad data detection. 
False Data Injection, Anomaly Detection, and Fault Location in Feeder Data

False data injection attacks can be crafted to be unobservable by the commonly applied residual-based bad data detection. However, given the high sampling rate of PMUs, one would expect the PMU measurements to be highly correlated in time, because the power system is unlikely to have dramatic changes in such a short time period under normal operating conditions. Therefore, a detector that takes into consideration the temporal correlations of the PMU measurements may be able to detect such "unobservable" false data injection attacks. A prediction method was developed based on the data completion techniques, with applications to false data or anomaly detection in PMU data. A statistical inference method based on random matrix theory has also been developed for anomaly detection and fault location, based on the eigenvalue distribution and the eigenvectors of the sample covariance matrix following the Marcenko-Pastur law, as demonstrated by Qiu et al.
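A compact sketch of the random-matrix idea follows: compare the largest eigenvalue of the sample covariance of a normalized PMU data window against the Marcenko-Pastur upper edge. The window shape and the unit-variance normalization are illustrative assumptions, not the authors' detector.

```python
# Sketch: flag anomalies when eigenvalues of the sample covariance of a
# PMU data window escape the Marcenko-Pastur bulk. Window shape and the
# per-channel normalization are illustrative assumptions.
import numpy as np

def mp_anomaly_score(window):
    """window: N channels x T samples of roughly stationary PMU data."""
    N, T = window.shape
    X = window - window.mean(axis=1, keepdims=True)
    X = X / (X.std(axis=1, keepdims=True) + 1e-12)   # normalize each channel
    S = (X @ X.T) / T                                # sample covariance matrix
    eigvals = np.linalg.eigvalsh(S)
    c = N / T
    mp_upper = (1 + np.sqrt(c)) ** 2                 # Marcenko-Pastur upper edge
    return eigvals.max() / mp_upper                  # > 1 suggests an event

rng = np.random.default_rng(1)
normal = rng.standard_normal((20, 400))
faulted = normal.copy()
faulted[:, 300:] += 3.0                              # injected step anomaly
print(mp_anomaly_score(normal), mp_anomaly_score(faulted))
```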
Distribution Network Parameter Estimation

An estimation tool based on the recursive least squares technique was developed for distribution network parameter estimation, with applications to short-circuit detection, load flow management, dynamic rating, load modeling, and fault detection.
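A generic recursive least squares update, the estimator family named here, can be sketched as follows; the regressor layout (how voltage/current measurements map to line parameters) is an illustrative assumption.

```python
# Sketch: generic recursive least squares (RLS), the estimator family the
# parameter estimation tool is based on. The regressor layout (mapping V/I
# measurements to line R and X) is an illustrative assumption.
import numpy as np

class RLSEstimator:
    def __init__(self, n_params, lam=0.99, delta=1e3):
        self.theta = np.zeros(n_params)     # parameter estimate, e.g., [R, X]
        self.P = delta * np.eye(n_params)   # inverse correlation matrix
        self.lam = lam                      # forgetting factor

    def update(self, phi, y):
        """phi: regressor vector; y: measured output for this sample."""
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        err = y - phi @ self.theta                  # prediction error
        self.theta = self.theta + k * err
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Example: estimate R and X from noisy voltage-drop measurements.
rng = np.random.default_rng(2)
rls, true_params = RLSEstimator(2), np.array([0.4, 0.9])
for _ in range(500):
    phi = rng.standard_normal(2)                    # e.g., [I_real, I_imag]
    y = phi @ true_params + 0.01 * rng.standard_normal()
    est = rls.update(phi, y)
print(est)
```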
Application Integration for State Estimation

The data fusion and related applications were developed and tested on a PC but had not been implemented on an ECD. These applications were containerized like the FDIR application; the containers made it easy to deploy the applications across different hardware and operating systems, and they were run on ECDs to test and compare performance. The synchrophasor data recovery and false data injection applications were both originally developed in Matlab but were later translated to Python to simplify deployment; the two event classification applications were developed in Python and needed no translation. Each of the applications and its dependencies was packaged into a Docker image. A script was written to build and run the applications initially, and another script was written to run each of these applications after the initialization script had stopped. These applications were then run on an MC-ECD and an HC-ECD.

The four applications worked as containerized applications on both the MC-ECD and the HC-ECD. Each of the applications produced the same results as when tested on a PC but with different execution timing. The MC-ECD was between five and ten times slower than the HC-ECD and six to ten times slower than the PC; this slower timing was anticipated due to hardware constraints and is acceptable. The HC-ECD timing was comparable to the PC, in the range of 0.8-1.4 times the speed of the PC.

Hierarchical Grid Intelligence and Future Applications

While the edge computing concept presented in this article focuses on distributed intelligence, the concept of hierarchical grid intelligence will likely be the future of the grid's protection, automation, and control architecture. The grid ECDs will provide distributed intelligence and fast response to time-critical grid issues (e.g., fault detection, isolation, and restoration). Communicating pertinent data to devices at the substation level (e.g., substation computers, high-end ECDs) will provide another layer of protection, automation, and control (PAC), and potentially a sanity check that the ECD-based control decisions were correctly taken. Last, by interfacing and coordinating with the cloud and end-customer application-based IoT systems, a comprehensive picture of the effects of grid events can be obtained. This level of communications enables the link from the utility and other systems to consumer, industrial, and commercial sites. Advanced analytics is another application that can be applied at all levels of this framework to provide more hierarchical intelligence. Figure 11 shows this concept at a high level.

Figure 11. Proposed edge computing architecture for power distribution applications: edge sensing, analytics, and communications at industrial/commercial sites and communities; a substation computer/server hosting hierarchical PAC functions and analytics; and cloud-based analytics and communications, interconnecting PV, V2G, and energy storage subsystems over the utility grid and fiber optic communications.

At the grid edge, basic to slightly advanced applications, such as fault detection, isolation, and restoration, can be deployed. The type of ECD (LC-ECD, MC-ECD, or HC-ECD) will determine the level of advanced application that can be deployed. At the substation and cloud levels, where computation power and capabilities are greater, more advanced applications can be deployed. For example, machine learning-based applications could be deployed at all levels, but the complexity of the technique and the data models required will determine the deployment hardware platform. A simpler machine learning method could be deployed even on the LC-ECDs, whereas a more complex method (e.g., deep or reinforcement learning based) would need to be deployed on a substation computer or even in the cloud. This flexible deployment architecture provides the basis for a future vision of a hierarchical grid intelligence framework, allowing for a dynamic protection, control, and automation system that can accommodate many new subsystems on the grid (e.g., DERs, energy storage).

Considering the architecture proposed previously, future applications to be deployed range from site-specific (e.g., event analysis and detection) to broader system-level (e.g., DER monitoring and control) applications. Some general technologies that are being implemented in IoT-based solutions today, such as data fusion, machine learning/artificial intelligence, and 5G real-time communications, will also have a significant impact on future product offerings. Data fusion merges various data sources and types together to manipulate data and provide useful inputs to end applications. Machine learning techniques use supervised, unsupervised, and reinforcement learning methods to solve traditional and new distribution protection, automation, and control applications. The deployment of 5G and other advanced, real-time communication systems will enable a paradigm shift in distribution PAC functionality, allowing for the deployment of applications that could only be realized on paper 20 years ago.

"Virtualized" protection and control [also known as "centralized protection and control" (CPC)] is another new technology that can leverage the edge computing and advanced communication infrastructure to provide enhanced hierarchical grid intelligence. A virtual platform entails the implementation of the PAC functions on a server (i.e., in software) that resides in the substation. The advantage of this platform is mainly the flexibility to provide enhanced PAC applications, analytics, and cybersecurity methods on top of the existing virtualized PAC functions. For example, implementing PAC functions on a server instead of a substation computer allows more advanced machine learning-based applications to be implemented on the platform. Furthermore, if 5G and IEC 61850 communications are utilized with the virtual platform, then real-time sampling at the field device level can be achieved, providing a comprehensive view of the system in real time, which leads to the implementation of more advanced and comprehensive PAC functions that span the distribution grid.

Conclusions

Based on the potential of edge computing and the results from the experimental tests presented, it can be concluded that the proposed concept offers faster response to grid events than centralized or substation-based solutions, while coordination with supervisory control and data acquisition, distribution management system, and substation computers remains feasible. Edge computing is a new wave of technology that can provide significant benefits to the utility distribution grid. The main applications of interest include distribution automation, overall grid management, advanced analytics, DER monitoring and control, and asset management. The merging of edge computing technology, ubiquitous communication media and protocols, and advanced analytics will provide strong, distributed intelligence platforms, supporting the next generation of distribution automation and grid management products.

Acknowledgments

The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy. The views and opinions of the authors expressed herein do not necessarily state or reflect those of the U.S. Government or any agency thereof.

For Further Reading

M. Patel et al., "Real-time application of synchrophasors for improving reliability," NERC, Princeton, NJ, USA, NERC Rep., Oct. 2010. [Online]. Available: https://www.naspi.org/node/664

N. Dahal, R. L. King, and V. Madani, "Online dimension reduction of synchrophasor data," in Proc. IEEE PES T&D, Orlando, FL, USA, 2012, pp. 1-7, doi: 10.1109/TDC.2012.6281467.

S. Oymak, A. Jalali, M. Fazel, Y. C. Eldar, and B. Hassibi, "Simultaneously structured models with application to sparse and low-rank matrices," IEEE Trans. Inf. Theory, vol. 61, no. 5, pp. 2886-2908, May 2015, doi: 10.1109/TIT.2015.2401574.

E. J. Candes and B. Recht, "Exact matrix completion via convex optimization," Found. Comput. Math., vol. 9, no. 6, pp. 717-772, Dec. 2009, doi: 10.1007/s10208-009-9045-5.

R. C. Qiu and P. Antonik, Smart Grid Using Big Data Analytics: A Random Matrix Theory Approach. Hoboken, NJ, USA: Wiley, 2017.

Z. Ling, R. C. Qiu, X. He, and L. Chu, "A new approach of exploiting self-adjoint matrix polynomials of large random matrices for anomaly detection and fault location," IEEE Trans. Big Data, vol. 7, no. 3, pp. 548-558, Jul. 2021, doi: 10.1109/TBDATA.2019.2920350.

Biographies

James Stoupis is with ABB U.S. Research Center, Raleigh, NC 27606 USA.
Rostan Rodrigues is with ABB U.S. Research Center, Raleigh, NC 27606 USA.
Mohammad Razeghi-Jahromi is with ABB U.S. Research Center, Raleigh, NC 27606 USA.
Amanuel Melese is with ABB U.S. Research Center, Raleigh, NC 27606 USA.
Joemoan I. Xavier is with ABB Distribution Solutions, Cary, NC 27511 USA.

p&e