In the last article, I cited a made-up scenario where ARCH was used to analyze the power footprint of an ASIC. I played with that idea because of the growing number of IEEE papers pitching stochastic analysis. From improving the timing-error tolerance and latency of turbo decoders to the dynamic analysis of power systems under uncertain variability, many researchers currently see stochastic computing as a possible answer to the growing demands of the electronics industry. Do note, however, that I also called that made-up scenario a dubious proposition, and I hope to explain why in this article.

To start, let us look at a paper by Pan, X. titled "Stochastic Dynamic Analysis for Power Systems Under Uncertain Variability". It proposes using stochastic methods to analyze the impact of uncertain variability on power system dynamics. As a model for the power system, a set of Ito stochastic differential equations (a scary mouthful that is really just a differential equation with stochastic terms) is introduced, serving as the basis for a later-computed intra-region probability of system energy.
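For readers who haven't met an Ito SDE before, a minimal sketch may help. Below is a toy Ornstein-Uhlenbeck process (a classic mean-reverting Ito SDE, not the paper's actual model) simulated with the Euler-Maruyama scheme; this is plain Python/NumPy, and every parameter is made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Ito SDE: an Ornstein-Uhlenbeck process
#   dX = theta * (mu - X) dt + sigma dW
# simulated with the Euler-Maruyama scheme.
theta, mu, sigma = 2.0, 1.0, 0.3    # illustrative parameters, not from the paper
dt, n_steps = 0.01, 1000

x = np.empty(n_steps + 1)
x[0] = 0.0                          # start away from the mean
for t in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))   # Brownian increment over one step
    x[t + 1] = x[t] + theta * (mu - x[t]) * dt + sigma * dW

# The drift term pulls the path back toward mu, so late samples cluster around it.
print(round(x[-500:].mean(), 2))
```

The drift term is deterministic and the `dW` term is the stochastic part; that separation is the whole trick behind the SDE formulation.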

This intra-region probability of system energy sets the expected domain, expressed as a PDF over system energy and time on a mesh plot. To get an intuition for what this means, take the series of R commands below, which regress on some series data "tesla_lr":

```r
library(tidyverse)
library(stats)
library(readxl)
library(tseries)
library(forecast)
library(fGarch)
library(ggplot2)
library(vars)

tesla_garch_model <- garchFit(formula = ~ arma(0, 0) + garch(1, 1),
                              data = final_data_set_ex$tesla_lr, trace = FALSE)
tesla_forecast <- predict(tesla_garch_model, n.ahead = 63, trace = FALSE)

plot(x = final_data_set$Date, y = final_data_set$tesla_lr, type = "o")
lines(x = final_data_set_ex2$Date, y = tesla_forecast$meanForecast,
      col = "red", lwd = 5)
lines(x = final_data_set_ex2$Date,
      y = tesla_forecast$meanForecast + 2 * tesla_forecast$standardDeviation,
      col = "cornflowerblue", lty = "dashed", lwd = 5)
lines(x = final_data_set_ex2$Date,
      y = tesla_forecast$meanForecast - 2 * tesla_forecast$standardDeviation,
      col = "cornflowerblue", lty = "dashed", lwd = 5)
```

Output:

(plot omitted: the log returns of Tesla, courtesy of Yahoo! Finance, with the forecast mean and band overlaid. VAR was first tested to obtain the optimum parameters for a generalized ARCH, or GARCH, estimation; data like this may be needed to build portfolios.)
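To make the GARCH(1,1) mechanics concrete without the fGarch machinery, here is a library-free Python sketch that simulates the conditional-variance recursion with hypothetical parameters and checks how many simulated returns fall inside the ±2σ band — the analogue of the dashed interval in the R plot:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical GARCH(1,1) parameters (alpha + beta < 1 => covariance-stationary).
omega, alpha, beta = 0.05, 0.10, 0.85
n = 500

sigma2 = np.empty(n)   # conditional variance
r = np.empty(n)        # simulated returns
sigma2[0] = omega / (1.0 - alpha - beta)   # start at the unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.normal()

for t in range(1, n):
    # GARCH(1,1) recursion: sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.normal()

# The +/- 2*sqrt(sigma2_t) band is what the dashed lines trace in the R plot.
inside = np.mean(np.abs(r) <= 2.0 * np.sqrt(sigma2))
print(f"fraction of returns inside the 2-sigma band: {inside:.2f}")
```

Because the band widens after large shocks and tightens in calm stretches, it sketches out exactly the kind of time-varying "expected domain" discussed above.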

The tail part of 2018 was regressed with GARCH(1,1) - the determined optimal parameters that define the boundary, or "intra-region probability", of the time-series data. In the case of power dynamics, instead of time-series data we have the uncertain variability of power dynamics depicted below (and instead of a simple GARCH estimation, a more complex algorithm is derived in the paper):

These kinds of series data are usually analyzed to determine whether the signal involved is stationary, non-stationary with random drift, explosive, and so on. By eyeballing the curve alone, we can conclude that the "uncertain variability" is mean-reverting (confirmed in the paper) and has constant variance.
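A quick way to see the difference between two of these regimes without eyeballing, sketched in Python with made-up AR(1) coefficients: a stationary (mean-reverting) series keeps a bounded variance, while a random walk's variance grows with time.

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_paths(phi, n_paths=2000, n_steps=200):
    """Simulate many AR(1) paths x_t = phi * x_{t-1} + eps_t from x_0 = 0,
    returning the final value of each path."""
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x = phi * x + rng.normal(size=n_paths)
    return x

# Cross-sectional variance at the final step separates the regimes:
var_stationary = ar1_paths(0.7).var()   # settles near 1 / (1 - phi^2), about 1.96
var_walk = ar1_paths(1.0).var()         # grows roughly linearly with the step count
print(var_stationary < var_walk)        # → True
```

Formal tests such as the augmented Dickey-Fuller test make the same distinction rigorously, but the variance-growth picture is the intuition behind them.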

### The Problem

Now, I have limited exposure to Ito SDEs, so the literature may well contain contradictions of (or proofs for) my thoughts below, but I'm pretty sure a signal that exhibits no random walk and has no exogenous variable affecting its variance may be too ideal a model for a power system.

The paper in fact brings this up, but without much emphasis: "However, in a power system under stochastic disturbances, a small disturbance may have significant impacts" (Pan, X., IEEE Transactions on Power Systems, Vol. 33, No. 4, July 2018).

Yes, 2D series data looks simpler to deal with than discontinuous manifolds, tensors, state equations, and whatnot, but its analysis is just as complex and demanding as that of higher-dimensional process representations. Small disturbances, whose impact also depends on their duration, contribute to the deterministic attributes of the signal.

The key takeaway is that stochastic estimation demands sound knowledge of exogenous variables, something onerous to pin down when it comes to power systems. You may be under the impression that stable power systems are exempt from this, but we know power line filters are riddled with imperfections (such as high Q-factors and slow roll-offs) that give way to undesired current transients from a gamut of noise sources. This makes the signal explosive in nature, and stochastic processes have scanty leeway to sound the alarm because, you know... it's explosive.
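A deterministic toy calculation, with entirely hypothetical numbers, shows why an explosive signal leaves so little time to react: a tiny transient fed through a mildly explosive AR(1) pole crosses a fixed alarm threshold in only logarithmically many steps.

```python
import numpy as np

# A disturbance of size eps fed through an explosive AR(1), x_t = phi * x_{t-1},
# crosses a fixed alarm threshold after roughly log(threshold/eps)/log(phi) steps.
phi = 1.05          # hypothetical mildly explosive pole
eps = 1e-3          # a "small" transient, e.g. from an imperfect line filter
threshold = 1.0     # alarm level, in the same (arbitrary) units as eps

x, steps = eps, 0
while abs(x) < threshold:
    x *= phi
    steps += 1

print(steps)        # → 142
```

Three orders of magnitude of growth in 142 steps; a sharper pole or a larger transient shortens that window further, which is exactly the "scanty leeway" problem.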

I'm not completely closing the book on stochastic methods, though. I do believe stochastic computing has its niche in electronics engineering; we just have to make sure the primary necessities are met before delving into more complex applications.

Again, I am not a specialist in measure-theoretic stochastic calculus. Please do not take the above as doctrine.