Statistics in Circuit Design and Engineering

From the All About Circuits column by Robert Keim (see below)

For several decades, I taught statistics in Departments of Sociology and applied it in my Labs in research centers and institutes. For much of my academic career, I pioneered the use of computers in social science research, and I frequently hired electrical and computer engineering students to work in my Lab. In fact, I’d get a stream of both undergraduate and graduate students sent over by CS and CE faculty to try to get a job in “that guy Howell’s Lab.” The comment was usually based on the student receiving the CS/CE faculty advisor’s advice: “You’ve got what we’re teaching you down well. Go work for Dr. Howell if you can. He’s always doing weird stuff that you supposedly can’t do.” Plus, I paid well. OK, weird stuff being defined as what others say you can’t do was always taken as a badge of honor! Like Artisoft, who sold us LANtastic while saying we could not run their LAN software over a TCP/IP stack. We did. Later, Microsoft’s Windows for Workgroups and Novell pushed them out of the marketplace because Artisoft didn’t officially adopt that stack and couldn’t compete.

One thing that surprised the E.E. faculty was what we actually taught as fairly commonplace in the social sciences. In my graduate courses, I frequently had other professors ask to audit so they could get on top of the topic my course was emphasizing that semester, such as survey research methods, data management and computation, statistical methods (basic and advanced), structural equation models, or spatial analysis of social data. During the late 1990s, I was Coordinator of a five-year, $60M project in commercial remote sensing with NASA and a Department Head in the Agricultural Experiment Station. A couple of years earlier, I was sitting in a Department Heads’ meeting when the Head of the Department of Forestry, a golfing buddy in the Faculty Golf League, asked if he should replace the remote sensing faculty member who had just retired. My response? No, unless you want to be a good Department of Forestry. If so, hell yes! I helped him hire a top-flight GIS and remote sensing scientist who was being downsized from the USDA Forestry Lab located in leased space on the Mississippi State University campus. David was one of the very best in the nation at photogrammetry: identifying what’s on the ground based upon pictures taken from the sky.

I’ll get to the point here. David was conducting a workshop for the MS Space Commerce Initiative team I coordinated on using Landsat data for photogrammetry (land-use inference from land cover, in this case). As he began walking through the two fundamental statistical techniques for analyzing the eight bands of Landsat sensor data, an E.E. professor, Roger, noticed I wasn’t taking notes. Roger began to goad me with, “What’s the matter, Howell? The sociologist lost already?” David, the instructor, just smiled, as he and I had worked together during the proposal phase of the MSCI. I said nothing as David explained that phase one of the analysis was conducting a Principal Components Analysis (PCA) on the eight Landsat variables, extracting several principal components from the data. During the next phase of the analysis, K-Means clustering, Roger couldn’t help himself, repeating what was basically a mantra: “Can’t the social scientist keep up with the engineers? You’re not even taking notes!” David asked me if I cared to respond. I did, and said, politely, that when David got to material that I didn’t already teach in our second graduate statistics course, I’d take notes! Roger’s flag was quietly folded, and he later asked me how to handle missing data (a sensor dropped out, etc.) in multivariate analysis. I later learned a lot from Roger about his goniometer and using it to calibrate ground-based test images from test sensors against a “white” color standard. Some other engineers in weed science laughed at Roger’s application, but he’s a smart guy. He just didn’t have any idea of what is taught outside the College of Engineering curriculum! And that’s more typical than many realize.
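For readers who want to see what that two-phase workflow looks like in practice, here is a minimal sketch in Python with scikit-learn (my choice of tools for illustration, not what David used), with synthetic data standing in for the eight Landsat bands:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Synthetic stand-in for Landsat pixels: three "land cover" classes,
# each with its own mean spectrum across 8 bands, plus sensor noise.
means = rng.uniform(50, 200, size=(3, 8))
pixels = np.vstack([m + rng.normal(0, 10, size=(2000, 8)) for m in means])

# Phase one: standardize the bands and extract principal components.
scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(pixels))

# Phase two: K-Means clustering on the component scores groups pixels
# into candidate land-cover classes.
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(scores)
print(np.bincount(labels))  # roughly 2000 pixels per recovered class
```

PCA compresses the correlated bands into a few uncorrelated components, and K-Means then groups pixels with similar component scores into candidate land-cover classes. Exactly the sequence David walked us through.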

The moral of this little story? Engineering can quickly get siloed in terms of what is learned in the curriculum. Mathematics training focused on Fourier Transforms and the like doesn’t necessarily generalize to all numerical computation, and statistics is likely one area where it doesn’t. But many engineers, especially those trained in earlier decades, don’t always recognize that. Yet with the transition from the analog to the digital world, statistics are increasingly important for understanding the data arising not only from digital circuitry but from the digital test equipment necessary to design, test, and repair it. And this says nothing about the incredible data visualization methods and tools now available for such data (for an example, see the 3D Smith Chart implementation). Frequently, as with the team that created the 3D Smith Chart, cross-fertilization of ideas from outside engineering can yield breakthroughs that won’t come about from more siloed training. But this does not tend to happen in the silos we in academia and the engineering industry have created.

I was delighted when the All About Circuits email hit my inbox with a new column by Robert Keim regarding statistics in engineering. The first column is “Descriptive Statistics in Electrical Engineering” and will be followed by one on inferential statistics. This will make an impact, I’m sure.

Another example from Robert Keim’s column in All About Circuits

Beginning with the basics, he explains how the simple mean can assist in analyzing noise in a signal: “A mean is a straightforward way to reduce noise in a collection of measurements, because it approximates the value that would be observed if we eliminated the small positive and negative deviations caused by noise. We can also use the arithmetic mean to determine the DC offset of a waveform.” Now, this isn’t earth-shattering analysis, but he walks the uninitiated reader through how simple descriptions of signal data can be of great benefit. Keim’s future columns will continue this line of application, I hope.
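To make that concrete, here’s a small Python sketch (the sample rate and signal values are made up for illustration) showing how the arithmetic mean recovers the DC offset of a noisy waveform:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 10_000                        # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)      # 0.1 s of samples = 10 cycles at 100 Hz

# A 100 Hz sine riding on a 1.5 V DC offset, plus Gaussian noise.
signal = 1.5 + np.sin(2 * np.pi * 100 * t) + rng.normal(0, 0.2, t.size)

# Averaging over whole cycles drives the sine (and much of the noise)
# toward zero, leaving an estimate of the DC offset.
dc_offset = signal.mean()
print(f"Estimated DC offset: {dc_offset:.3f} V")  # ~1.5 V
```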

Amateur radio operators who have electronic workbenches and have read the test equipment texts by Joe Carr or Bob Witte are already aware that statistical tools are the foundation of measurement in electronic design and testing. Joe Carr’s Elements of Electronic Instrumentation and Measurement (3rd ed.) contains two opening chapters laying the foundation for descriptive statistics and their role in measurement. In some of Joe’s other texts, he discusses electronic measurement theory: the relation between what test gear measures and what the phenomenon actually is, the difference largely being measurement error. That’s the same as True Score Theory, which I taught using Lord and Novick’s (1968) classic text plus other materials.

True Score Theory
(https://en.wikipedia.org/wiki/Classical_test_theory)

Whether it’s digital signals in electronics or Rosenberg’s Self-Esteem Scale, it is the same measurement theory: it’s all about the error term, and about understanding measurement theory rather than just meter readings.
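The decomposition at the heart of True Score Theory is simply X = T + E: the observed score equals the true score plus error, with reliability defined as the share of observed variance that is true-score variance. A quick Python simulation (numbers chosen only for illustration) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Classical test theory: observed score X = true score T + error E.
true_score = rng.normal(50, 10, n)   # T: the phenomenon itself
error = rng.normal(0, 5, n)          # E: measurement error, mean zero
observed = true_score + error        # X: what the meter (or scale) reads

# Reliability = Var(T) / Var(X); error inflates the observed variance.
reliability = true_score.var() / observed.var()
print(f"Reliability: {reliability:.2f}")  # ~ 10**2 / (10**2 + 5**2) = 0.80
```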

Bob Witte K0NR’s Electronic Test Instruments: Analog and Digital Measurements (2nd ed.) spends part of Chapter 1 on measurement theory, invoking the statistical aspects of the fundamentals. Formerly at HP and then Agilent, Witte wrote 1st and 2nd editions that are terrific reads and teach much in a straightforward fashion. But understanding the material requires some statistical principles as a foundation, and he does a good job of weaving that into the narrative. I highly recommend Witte’s textbook; I have both editions in my library.

Much of what I’m writing about here is exemplified in the narratives on various websites and social media outlets regarding the exciting NanoVNA and its various offshoots. Arriving on the scene over a year ago, the $50-ish two-port Vector Network Analyzer has taken the amateur radio experimenter market by storm. It has propelled the rank-and-file ham to ask, “Is the NanoVNA better than my MFJ-269 antenna analyzer?” While the two instruments are fundamentally distinct in many ways, something for $50, or even double that, will catch many eyes. But the many discussions about the software tools for the NanoVNA, especially around the necessity of calibrating the NanoVNA and how that works, really hinge on a good understanding of measurement theory and a sound base of statistical knowledge.
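To give a flavor of why calibration is a measurement-theory problem, here is a Python sketch of the classic one-port, three-term error model that SOL (short-open-load) calibration addresses, assuming ideal standards; actual NanoVNA software is more elaborate, so treat this only as an illustration of the idea:

```python
import numpy as np

def solve_error_terms(gm_short, gm_open, gm_load):
    """Solve the one-port error model from measurements of ideal
    SOL standards (short = -1, open = +1, load = 0)."""
    e00 = gm_load                 # directivity: an ideal load reflects nothing
    a = gm_open - e00
    b = gm_short - e00
    e11 = (a + b) / (a - b)       # source match
    delta = a * (1 - e11)         # reflection tracking (e10 * e01)
    return e00, e11, delta

def correct(gm, e00, e11, delta):
    """Recover the actual reflection coefficient from a raw reading."""
    return (gm - e00) / (delta + e11 * (gm - e00))

# Quick check with made-up error terms: a device with Gamma = 0.3 + 0.1j.
e00, e11, delta = 0.05, 0.10 + 0.05j, 0.90
gamma = 0.3 + 0.1j
raw = e00 + delta * gamma / (1 - e11 * gamma)   # what the uncorrected VNA "sees"
terms = solve_error_terms(
    e00 + delta * -1 / (1 - e11 * -1),          # raw reading of the short
    e00 + delta * +1 / (1 - e11 * +1),          # raw reading of the open
    e00,                                        # raw reading of the load
)
print(correct(raw, *terms))                     # ~ (0.3 + 0.1j)
```

Measuring three known standards pins down the three error terms (directivity, source match, and reflection tracking), after which every raw reading can be corrected. It is exactly the observed-score-versus-true-score logic discussed above.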

The original NanoVNA (https://nanovna.com/)

So that’s why I’m delighted to see that All About Circuits is featuring a new regular column by its Director of Engineering concerning the use of statistics in electrical engineering. Don’t assume you already know it because you can do FFTs in your sleep (or did while taking a course). There’s a lot more awaiting you, and more on the way in the burgeoning digital world that is today’s electronics field. Now, let me open that box with the NanoVNA-H that arrived this week… I might also need to review some trigonometry.