“How can we analytically estimate the stability of a transfer function?”
We know that transfer functions carry some notion of stability. However, the functions can be quite complicated, so how can we analyze their stability analytically? Well, let's think about it. First, let's take our polynomial (the denominator of the transfer function) and write down its coefficients. Then, let's make a table with one more row than the polynomial's degree, and with a number of columns equal to half the number of coefficients, rounded up. Afterward, let's place the coefficients in the first two rows such that the highest-degree coefficient goes in the top left, the next one goes beneath it, the next one goes to the right of the first, the next one goes beneath that, and so on, alternating between the two rows. Now let's fill in each remaining entry: multiply the first-column entry of the row directly above by the entry two rows up and one column to the right, subtract the first-column entry two rows up multiplied by the entry one row up and one column to the right, and divide the result by the first-column entry of the row directly above. Let's repeat this pattern until the entire table is filled out. If the entries in the first column do not all share the same sign, then the system is unstable. This mathematical tool is known as the Routh–Hurwitz stability criterion and is used in designing all sorts of control systems.
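The table-building procedure above can be sketched in a few lines of Python. This is a minimal sketch that assumes no zero ever appears in the first column (a special case the full criterion handles separately); `routh_table` and `is_stable` are illustrative names.

```python
import numpy as np

def routh_table(coeffs):
    """Build the Routh array from highest-degree-first polynomial coefficients."""
    n = len(coeffs) - 1                 # degree of the polynomial
    cols = n // 2 + 1                   # half the coefficient count, rounded up
    table = np.zeros((n + 1, cols))
    table[0, :len(coeffs[0::2])] = coeffs[0::2]   # a_n, a_(n-2), ...
    table[1, :len(coeffs[1::2])] = coeffs[1::2]   # a_(n-1), a_(n-3), ...
    for i in range(2, n + 1):
        for j in range(cols - 1):
            # Cross-multiply the two rows above, then divide by the entry
            # in the first column of the row directly above.
            table[i, j] = (table[i - 1, 0] * table[i - 2, j + 1]
                           - table[i - 2, 0] * table[i - 1, j + 1]) / table[i - 1, 0]
    return table

def is_stable(coeffs):
    """Stable only if every entry in the first column shares the same sign."""
    first_col = routh_table(coeffs)[:, 0]
    return bool(np.all(first_col > 0) or np.all(first_col < 0))
```

As a sanity check, the polynomial (s+1)(s+2)(s+3) = s³ + 6s² + 11s + 6 has all of its roots in the left half-plane, so its first column comes out all positive.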
“How can we measure the stability of a control system?”
Control systems are necessary for the functioning of society. However, if a system proves to be unstable, it can cause serious harm to its operation. So how can we measure control system stability? Well, a transfer function is expressed in the Laplace domain; if any of its poles lie in the right half of the complex plane, then the corresponding exponential terms of the time response grow to infinity over time, causing a system malfunction. Control system stability analysis is applied across a diverse range of fields, from aerospace controls to robotics and even building energy management.
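This pole condition is easy to check numerically. Here is a minimal sketch using NumPy's polynomial root finder; `poles_stable` is an illustrative name, and the argument is the denominator of the transfer function.

```python
import numpy as np

def poles_stable(den_coeffs):
    """A transfer function is stable when every pole, i.e. every root of
    its denominator polynomial, has a strictly negative real part."""
    poles = np.roots(den_coeffs)
    return bool(np.all(poles.real < 0))
```

For example, `poles_stable([1, 3, 2])` checks s² + 3s + 2 = (s+1)(s+2), whose poles at -1 and -2 both sit safely in the left half-plane.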
Control systems can be very complicated in nature due to their reliance on feedback. However, if we want, we can make our systems much simpler by taking away that mechanism. This is known as open-loop control. An example of open-loop control is a movement mechanism that pushes an object toward a destination regardless of what is in the way. If we were to model this in a control diagram, the input would go straight to the output and never come back (hence the name open loop).
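The pushing example can be sketched in a few lines; the names and numbers here are illustrative, and the point is simply that the command never consults a sensor.

```python
def open_loop_move(position, command_velocity, dt, steps):
    """Apply a fixed velocity command with no feedback from the environment."""
    for _ in range(steps):
        # Open loop: nothing here measures the actual position, so an
        # obstacle or wheel slip would go completely unnoticed.
        position += command_velocity * dt
    return position
```

Calling `open_loop_move(0.0, 1.0, 0.1, 50)` predicts arriving at position 5.0 whether or not anything actually blocked the way.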
“What is the ideal form for control systems?”
There is a wide variety of control systems out there. So before we begin any sort of analysis, let's start with the simplest form, known as Linear Time-Invariant (LTI) systems. LTI systems have three properties.
Homogeneity: If an input signal is scaled by a constant, then the output will be scaled by the same constant.
Superposition: If two unique inputs are summed together, then the sum of their individual outputs will be produced.
Time Invariance: The system will perform the same way no matter what the time is.
Unfortunately, most control systems are not LTI systems, but LTI systems are still important to study due to their easy-to-solve structure.
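The first two properties can be checked numerically for a simple discrete LTI system, here modeled (as an assumed example) by convolution with a fixed impulse response:

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])        # assumed example impulse response

def lti_system(x):
    """A discrete LTI system: convolve the input with the impulse response."""
    return np.convolve(x, h)

x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([0.0, 1.0, -1.0])

# Homogeneity: scaling the input by 3 scales the output by 3.
assert np.allclose(lti_system(3 * x1), 3 * lti_system(x1))
# Superposition: the response to a sum equals the sum of the responses.
assert np.allclose(lti_system(x1 + x2), lti_system(x1) + lti_system(x2))
```

Time invariance holds here too: delaying the input simply delays the convolution output by the same number of samples.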
We often run into problems when using air conditioning. Sometimes we don't have enough refrigerant, or it might be too expensive to make more.
But there is a simple solution to this!
What if we were to make the ice needed for HVAC systems overnight? This way, we will not have to worry about it being needed while it is made, and we can take advantage of lower nighttime utility rates. This strategy is known as thermal energy storage and is used in cooling systems worldwide.
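Some back-of-the-envelope arithmetic shows why the night rates matter; all rates and loads below are hypothetical, not real tariff data.

```python
# Hypothetical utility rates and cooling load -- not real tariff data.
day_rate = 0.30          # dollars per kWh during the day
night_rate = 0.12        # dollars per kWh overnight
cooling_energy = 500.0   # kWh needed to freeze one day's worth of ice

# Shifting the same energy use from day to night saves the rate difference.
daily_savings = cooling_energy * (day_rate - night_rate)   # about 90 dollars
```

The cooling demand itself is unchanged; the savings come purely from when the electricity is purchased.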
“What is one model for a closed loop controller?” Imagine a robot moving from one spot to another. Operating under a closed-loop controller, it would work by sensing the target location, comparing it to the current location, and computing an error. However, what is one way we can implement this? Well, here is one idea: at every instant we are not at our setpoint (destination), let's measure how far away we are, treat that distance as an error value, and plot it over time. Then, let's take the proportional term (the current magnitude of the error), the integral term (the area under the error curve), and the derivative term (the error's current rate of change), and combine these three values into a single correction signal. This type of control is known as proportional-integral-derivative (PID) control and is implemented in control systems worldwide.
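The three terms described above can be combined in a short sketch; the gains and time step below are illustrative tuning values, not prescriptions.

```python
class PIDController:
    """Minimal PID sketch: output = kp*error + ki*integral + kd*derivative."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement                  # current error
        self.integral += error * dt                     # area under the error curve
        derivative = 0.0 if self.prev_error is None \
            else (error - self.prev_error) / dt         # current rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy usage: a "robot" whose position simply integrates the control signal.
pid = PIDController(kp=2.0, ki=0.5, kd=0.1)
position, dt = 0.0, 0.05
for _ in range(400):
    position += pid.update(1.0, position, dt) * dt      # drive toward setpoint 1.0
```

After the loop, the position has settled close to the setpoint of 1.0; tuning the three gains trades off speed, overshoot, and steady-state error.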
“How can we plot the gain and phase shift for a transfer function?”
A transfer function will change the magnitude and phase of a sinusoid in some way. So wouldn’t it be logical if we could plot this out on a graph? Well, let’s think about how we could do this ourselves. First, since we have to plot two different outputs (gain and phase shift), let’s make two separate graphs side by side. Then, let’s put the input (frequency) on the x axis and the output (gain or phase shift) on the y axis. Now, since our input variable will cover an extremely large range, let’s put the frequency axis on a logarithmic scale. For the gain, let’s take a frequency as an input, evaluate the magnitude of the transfer function there, and plot 20·log10 of that magnitude. Since this quantity is the logarithm of a ratio rather than a plain number, let’s give it the unit decibels. This type of plot is called a Bode plot and is used for analyzing control systems worldwide.
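A minimal sketch of the gain and phase computation (the plotting itself is left out); `bode_points` is an illustrative name, and the transfer function is given as numerator and denominator coefficient lists.

```python
import numpy as np

def bode_points(num, den, omegas):
    """Gain in dB and phase in degrees of H(s) = num/den along s = j*omega."""
    gains_db, phases_deg = [], []
    for w in omegas:
        H = np.polyval(num, 1j * w) / np.polyval(den, 1j * w)
        gains_db.append(20 * np.log10(abs(H)))      # decibels: 20*log10 of a ratio
        phases_deg.append(np.degrees(np.angle(H)))
    return gains_db, phases_deg
```

For the first-order low-pass filter H(s) = 1/(s+1), the corner frequency ω = 1 rad/s gives roughly -3 dB of gain and -45° of phase, the textbook sanity check.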
“What is the desired output for an objective function?” An objective function in a control system works by comparing a measured value to a setpoint, or ideal value. If there is a discrepancy between these two values, the system feeds back a corrective signal to rectify the error.
“How does a dynamic system respond to an external input?”
Dynamic systems are not isolated from the external world and are therefore subject to external inputs. The system's response to a brief, sharp input is known as its impulse response, and for a linear system it generally takes the form of exponentially decaying (or growing) sinusoids.
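For instance, a lightly damped second-order system struck by an impulse rings with a decaying sinusoid. Here is a crude Euler-integration sketch; all parameter values are illustrative.

```python
# x'' + 2*zeta*wn*x' + wn**2 * x = 0 after an impulse kicks the velocity.
wn, zeta, dt = 2.0, 0.1, 0.001     # natural frequency, damping ratio, step (assumed)
x, v = 0.0, 1.0                    # the impulse instantaneously sets velocity to 1
response = []
for _ in range(10_000):            # simulate 10 seconds
    a = -2 * zeta * wn * v - wn ** 2 * x   # acceleration from the current state
    x += v * dt
    v += a * dt
    response.append(x)
# response now traces a sinusoid whose envelope decays like exp(-zeta*wn*t):
# the hallmark of a stable system's impulse response.
```

Moving the damping ratio to zero or below would make the oscillation sustain or grow, which is exactly the instability the earlier sections warned about.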