Everything You Need to Know About Control Theory
Summary
TLDR: The video explains control theory, which is essential for designing autonomous systems. It covers open-loop and closed-loop control methods, the importance of mathematical models, and how these are applied to control different dynamical systems. It explores controllers such as PID and nonlinear controllers, as well as planning and state-estimation techniques. The video also highlights the need for analysis and simulation to ensure that systems meet the desired requirements.
Takeaways
- 🧠 Control theory is a mathematical framework that provides the tools to develop autonomous systems.
- 🚗 A dynamical system is something we want to control automatically, such as a building, a distillation column, or a car.
- 🔄 Systems can be affected by controlled inputs (U) and unwanted disturbances (D).
- 🔄 A feedback (or closed-loop) controller uses both the reference and the current system state to determine the appropriate control inputs.
- 🔄 Feedforward (or open-loop) controllers generate the control signal without needing to measure the actual state of the system.
- 🔄 Mathematical models are crucial to nearly every aspect of control theory, since they are used for controller design, state estimation, planning, and analysis.
- 🔍 System identification can be done from physical first principles or by fitting a model to measured data.
- 🔄 Feedback control is a self-correcting mechanism that can make a system more stable, but it can also change the system's dynamics and stability.
- 🛠 There is a wide variety of feedback controller types, including linear, nonlinear, robust, adaptive, optimal, predictive, and intelligent controllers.
- 📈 Planning is essential for defining the reference the controller should follow; it is where we determine what the system should do.
- 🔬 State estimation is needed to handle noisy measurements and ensure the controller has an accurate estimate of the system state.
Q & A
What is control theory and how does it relate to building autonomous systems?
-Control theory is a mathematical framework that provides the tools to develop autonomous systems, such as a self-driving car, a temperature-controlled building, or an efficient distillation column.
What are the two sources of inputs that affect a dynamical system in control theory?
-The two sources of inputs are the control inputs (U), which are intentionally used to affect the system, and the disturbances (D), which are unwanted forces that affect the system anyway.
What is an open-loop controller and how does it work?
-An open-loop controller, also known as a feedforward controller, takes the reference (R), which is what you want the system to do, and generates the control signal without ever needing to measure the actual state of the system.
Why are mathematical models needed in control theory?
-Mathematical models are needed to understand and predict system behavior, design controllers, estimate the system state, plan, and analyze the performance and stability of the system.
What is a feedback controller and how does it differ from an open-loop controller?
-A feedback, or closed-loop, controller uses both the reference and the current system state to determine the control inputs. This differs from an open-loop controller, which does not measure the current state of the system.
What are linear and nonlinear controllers and when are they used?
-Linear controllers, such as PID and full state feedback, are used when the overall behavior of the system is assumed to be linear. Nonlinear controllers, such as on/off controllers and sliding mode control, are used when the system is not linear.
What is planning in the context of control theory and why is it important?
-Planning is the process of determining a plan or path for the system to follow, taking into account obstacles, rules, and physical capabilities. It is important because the controller needs a plan in order to generate the commands that drive the system toward its goals.
What is state estimation and why is it crucial in feedback systems?
-State estimation is the process of determining the current system state from noisy measurements. It is crucial because feedback controllers need to know the system state in order to adjust the control inputs effectively.
How can a controlled system be analyzed and simulated to ensure it meets the requirements?
-Analysis tools such as Bode diagrams, Nyquist plots, and simulations in Matlab and Simulink can be used to verify stability and performance and to ensure the system behaves as intended.
What control techniques are mentioned in the script and how do they relate to different kinds of systems?
-Techniques mentioned include PID control, gain-scheduled control, fuzzy control, reinforcement learning, predictive control, and intelligent control. Which technique fits a given system depends on its characteristics, the amount of uncertainty present, and the specific control needs.
Outlines
🔍 Introduction to Control Theory
This paragraph introduces control theory as a mathematical framework that is essential for developing autonomous systems. It describes how control theory can be applied to different systems, such as a car, a building, or a distillation column. It explains that systems can be affected by controlled inputs (like the steering wheel or accelerator in a car) and by unwanted disturbances (like wind or imperfections in the road). It also raises the question of how an algorithm can determine the necessary control inputs without constantly knowing the current state of the system.
🔧 Open-Loop and Closed-Loop Control
Open-loop controllers (also known as feedforward controllers) are contrasted with closed-loop controllers (feedback control). Open-loop controllers take a reference and generate a control signal without measuring the actual state of the system, while closed-loop controllers use both the reference and the current system state to determine the control inputs. The advantages and disadvantages of each type of control are discussed, and the importance of understanding the system dynamics and the environment for implementing effective control is emphasized.
🛠 The Variety of Feedback Control Methods
This paragraph explores the range of feedback control algorithms, from linear controllers like PID and full state feedback to nonlinear controllers like on/off controllers and sliding mode controllers. Robust, adaptive, optimal, and predictive controllers are mentioned, as well as intelligent controllers such as data-driven controllers. It highlights the importance of choosing the right controller based on the system being controlled and the desired objectives.
📈 Planning, State Estimation, and System Analysis
Additional aspects of control theory are covered: planning, state estimation, and system analysis. Planning is crucial for defining the reference the controller should follow. State estimation is needed to feed back the system state despite the presence of noise in the measurements. Finally, system analysis is essential to ensure the designed system meets the stated requirements. The paragraph notes that mathematical models are fundamental to all of these aspects of control theory.
🔗 Resources and References for Learning More
The final paragraph offers additional resources for learning more about control theory, including links to other Matlab Tech Talk videos that cover specific topics mentioned in the video. Videos are available for topics such as feedforward control, closed-loop control, system identification, and more. A link is also provided to a journey at resourcium.org that organizes all of the resources mentioned in the video.
Mindmap
Keywords
💡Control Theory
💡Dynamical System
💡Control Inputs (U)
💡Disturbances (D)
💡System State (X)
💡Open Loop Controller
💡Feedback Control
💡Mathematical Model
💡State Estimation
💡Planning
Highlights
Control theory is essential for designing autonomous systems.
A dynamical system can be affected by control inputs and disturbances.
Automating a process involves determining control inputs without constant state knowledge.
Open loop controllers, or feed forward controllers, generate control signals without measuring the system state.
Feed forward controllers require a good understanding of system dynamics.
System identification is crucial for developing models used in control theory.
Feedback control, or closed loop control, uses the current state of the system to adjust control inputs.
Feedback control is a self-correcting mechanism but can be dangerous if not designed properly.
There are various types of feedback controllers, including linear, non-linear, robust, adaptive, optimal, and intelligent.
Planning is necessary to create a reference for the control system to follow.
State estimation is required to accurately estimate the system state from noisy measurements.
Analysis, simulation, and testing ensure the designed system meets requirements.
Mathematical models are used throughout control theory for design, estimation, planning, and analysis.
Matlab Tech talks cover various topics in control theory, providing in-depth knowledge.
Resource links and a journey at resourcium.org are provided for further learning.
Transcripts
an important question that has to be
answered when you're designing an
autonomous system is how do you get that
system to do what you want I mean how do
you get a car to drive on its own how do
you manage the temperature of a building
or how do you separate liquids into
their component parts efficiently with a
distillation column
and to answer those questions we need
control theory
control theory is a mathematical
framework that gives us the tools to
develop autonomous systems and in this
video I want to walk through everything
you need to know about control theory so
I hope you stick around for it I'm Brian
and welcome to a Matlab Tech talk
we can understand all of control theory
using a simple diagram and to begin
let's just start with a single dynamical
system
this system is the thing that we want to
automatically control like a building or
a distillation column or a car it can
really be anything but the important
thing is that the system can be affected
by external inputs and in general we can
think of the inputs as coming from two
different sources there are the control
inputs U that we intentionally use to
affect the system for a car these are
things like moving the steering wheel
and hitting the brake and pressing on
the accelerator pedal and then there are
unintentional inputs these are the
disturbances D and they are forces that
we don't want affecting the system but
they do anyway these are things like
wind and bumps in the road
now the inputs enter the system interact
with the internal Dynamics and then the
system State X changes over time
so for a car we move the steering wheel
and we press the pedals which turn the
wheels and revs the engine producing
forces and torques on that vehicle and
then combined with the forces and
torques from the disturbances the car
changes its speed position and direction
now if we want to automate this process
that is we want the car to drive without
a person determining the inputs where do
we go from here
and the first question is can an
algorithm determine the necessary
control inputs without constantly having
to know the current state of the system
or maybe a better way of putting it is
do you need to measure where the car is
and how fast it's going in order to
successfully drive the car with good
control inputs and the answer is
actually no we can control a system with
an open loop controller also known as a
feed forward controller
a feed forward controller takes in what
you want the system to do called the
reference R and it generates the control
signal without ever needing to measure
the actual state
in this way the signal from the
reference is fed forward through the
controller and then forward through the
system never looping back hence the name
feed forward
for example let's say that we want the
car to autonomously drive in a straight
line and at some arbitrary constant
speed if the car is controllable which
means that we have the ability to
actually affect the speed and direction
of the car then we could design a feed
forward controller that accomplishes
this
the reference drive straight means that
the steering wheel should be held at a
fixed zero degrees and drive at a
constant speed means that we depress the
accelerator pedal some non-zero amount
the car would then accelerate to a
constant speed and drive straight
exactly as we want
however let's say that we want the car
to reach a specific speed like 30 miles
an hour we can actually still do it with
a feed forward controller but now the
controller needs to know how much to
depress the accelerator pedal in order
to reach that specific speed and this
requires knowledge about the Dynamics of
the system and this knowledge can be
captured in the form of a mathematical
model
now developing a model can be done using
physics and first principles where the
mathematical equations are written out
based on your understanding of the
System Dynamics
or it can be done by using data and
fitting a model to that data with a
process called system identification
both of these modeling techniques are
important Concepts to understand because
as we'll get into models are required
for almost all aspects of control theory
now as an example of system
identification we could test the real
car and record the speed it reaches
given different pedal positions and then
we could just fit a mathematical model
to that data basically speed is some
function of the pedal position
now for the feed forward controller
itself we could just use the inverse of
that model to get pedal position as a
function of speed
so given a reference speed the feed
forward controller would be able to
calculate the necessary control input
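The system-identification and model-inversion steps described here can be sketched in a few lines of Python. The pedal/speed data points and the linear model are illustrative assumptions, not values from the video; a real car would need a richer model.

```python
import numpy as np

# Hypothetical test data: steady-state speed (mph) recorded at
# several accelerator pedal positions (fraction of full travel).
pedal = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
speed = np.array([6.1, 12.2, 17.8, 24.1, 29.9])

# System identification: fit a simple linear model, speed = a * pedal + b.
a, b = np.polyfit(pedal, speed, 1)

def feedforward(reference_speed):
    """Invert the fitted model: pedal position needed for a target speed."""
    return (reference_speed - b) / a

# Given a reference of 30 mph, compute the open-loop pedal command.
u = feedforward(30.0)
```

The key idea is the inversion: the model maps pedal to speed, so the feedforward controller applies the inverse map from desired speed to pedal. Any error in the fitted model shows up directly as error in the resulting speed, which is exactly the weakness discussed next.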
so feed forward controllers are a pretty
straightforward way to control a system
however as we can see it requires a
really good understanding of the System
Dynamics since you have to invert them
in the controller and any error in that
inversion process will result in error
in the system state
also even if you know your system really
well the environment the system is
operating in should have predictable
Behavior as well you know so that
there's not a lot of unknown
disturbances entering the system that
you're not accounting for in the
controller
of course it doesn't take much
imagination to see that feed forward
control breaks down for systems that
aren't robust to disturbances and
uncertainty I mean imagine wanting to
autonomously drive a car across the city
with feed forward control
theoretically you could map the city
well enough and know your car well
enough that you could essentially
pre-program in all of the steering wheel
and pedal commands and just let it go
and if you had perfect knowledge ahead
of time then the car would execute those
commands and then make its way across
the city unharmed
obviously though this is unrealistic I
mean not only are other cars and
pedestrians impossible to predict
perfectly but even the smallest errors
in the position and speed of your car
will build over time and eventually
deviate much too far from the intended
path
so this is where feedback control or
closed loop control comes to the rescue
in feedback control the controller uses
both the reference and the current state
of the system to determine the
appropriate control inputs that is the
output is fed back making a closed loop
hence the name
and in this way if the system state
starts to deviate from the reference
either because of disturbances or
because of errors in our understanding
of the system then the controller can
recognize those deviations those errors
and adjust the control inputs
accordingly so feedback control is a
self-correcting mechanism and I like to
think of feedback as a hack that we have
to employ due to our inability to
perfectly understand the system and its
environment we don't want to use
feedback control but we have to
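The self-correcting behavior can be seen in a minimal simulation. The first-order car model, drag term, and gain below are assumptions for illustration only: a proportional controller keeps the speed near the reference even with a constant disturbance such as a headwind.

```python
# Minimal closed-loop sketch (assumed plant and gain, not from the video):
# a proportional controller corrects speed errors caused by a disturbance.
def simulate(kp, disturbance, steps=500, dt=0.1):
    speed, reference = 0.0, 30.0
    for _ in range(steps):
        error = reference - speed               # feedback: compare state to reference
        u = kp * error                          # control input derived from the error
        accel = u + disturbance - 0.1 * speed   # toy car model with a simple drag term
        speed += accel * dt
    return speed

final = simulate(kp=2.0, disturbance=-1.0)
```

Despite the disturbance, the speed settles near 28 mph rather than drifting away, because the growing error keeps increasing the control input. Note the small steady-state offset from 30 mph that a proportional-only controller leaves behind; removing it is one motivation for adding integral action.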
all right so feedback control is
powerful but it's also a lot more
dangerous than feed forward control and
the reason for this is that feed forward
changes the way we operate a system but
feedback changes the Dynamics of the
system it changes its underlying
behavior and this is because with
feedback the controller changes the
system State as a function of the
current state and that relationship is
producing new Dynamics
and changing Dynamics means that we have
the ability to change the stability of
the system and on the plus side we can
take an unstable or marginally stable
system and make it more stable with
feedback control but on the negative
side we can also make a system less
stable and even unstable and this is why
a lot of control theory is focused on
designing and importantly analyzing
feedback controllers because if you do
it wrong you can cause more harm than
good and since feedback control exists
in many different types of systems the
control community over the years have
developed many different types of
feedback controllers
there are linear controllers like PID
and full State feedback that assume the
general behavior of the system being
controlled is linear in nature
and if that's not the case there are
non-linear controllers like on off
controllers and sliding mode controllers
and gain scheduling

now often thinking in terms of linear
versus non-linear isn't the best way to
choose a controller so we Define them in
other ways as well for example there are
robust controllers like mu synthesis and
active disturbance rejection control
which focus on meeting requirements even
in the face of uncertainty in the plant
and in the environment so we can
guarantee that they are robust to a
certain amount of uncertainty there are
adaptive controllers like extremum
seeking and model reference adaptive
control that adapt to changes in the
system over time there are optimal
controllers like lqr where a cost
function is created and then the
controller tries to balance performance
and effort by minimizing the total cost
there are predictive controllers like
model predictive control that use a
model of the system inside the
controller to simulate what the future
state will be and therefore what the
optimal control input should be in order
to have that future State match the
reference
there are intelligent controllers like
fuzzy controllers or reinforcement
learning that rely on data to learn the
best controller and there are many
others and the point here isn't to list
every control method I just wanted to
highlight the fact that feedback control
isn't just a single algorithm but it's a
family of algorithms and choosing which
controller to use and how to set it up
depends largely on what system you are
controlling and what you want it to do
so what do you want your system to do
what state do you want the system to be
in what is the reference that you want
it to follow
and this might seem like a simple
question if we're balancing an inverted
pendulum or designing a simple Cruise
controller for a car the reference for
the pendulum is vertical and for the car
it's the speed that the driver sets
however for many systems understanding
what it should do takes some effort and
this is where planning comes in
the control system can't follow a
reference if one doesn't exist and so
planning is a very important aspect of
Designing a control system
with a self-driving car for example
planning has to figure out a path to the
destination while avoiding obstacles and
it has to follow the rules of the road
plus it has to come up with a plan that
the car is physically able to follow you
know it doesn't accelerate too fast or
it doesn't turn too quickly and if there
are passengers then planning has to
account for their comfort and safety as
well and only after the plan has been
created can the controller then generate
the commands to follow it an example of
two different graph-based planning
methods are rapidly exploring random
trees rrt and a star
once again there are too many different
algorithms to name but the important
thing is that you understand that you
have to develop a plan that your
controller will then try to follow
all right so once you know what you want
the system to do and you have a feedback
controller to do it now you need to
actually execute this plan and as we
know for feedback controllers this
requires knowledge of the state of the
system that is after all what we are
feeding back and the problem is that we
don't actually know the state unless we
measure it and measuring it with a
sensor introduces noise so for our car
example we're not feeding back the true
speed of the car we're feeding back a
noisy measurement of the speed
and our controller is going to react to
that noise so in this way noise in a
feedback system actually affects the
true state of the system and so this is
one additional problem that we're going
to have to tackle with feedback control
a second problem is that of
observability in order to feedback the
state of the system we have to be able
to observe the state of the system and
this requires sensors in enough places
that every state that is fed back can be
observed
now it's important to note that we don't
have to measure every state directly we
just need to be able to observe every
state for example if our car only has a
speedometer we can still observe
acceleration by taking the derivative of
the speed
so there are two things here we need to
reduce measurement noise and we need to
manipulate the measurements in such a
way that allows us to accurately
estimate the state of the system
State estimation is therefore another
important area of control theory and for
this we can use algorithms like the
Kalman filter the particle filter or
even just run a simple running average
and choosing an algorithm depends on
which states you are directly measuring
and how much noise and what type of
noise is present in those measurements
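The simplest of the estimators just listed, a running average, can be sketched directly. The noise level, window size, and simulated measurements are illustrative assumptions, not values from the video.

```python
import random

# Sketch of the running-average idea: smooth noisy speed measurements
# before feeding them back to the controller.
random.seed(0)
true_speed = 30.0
# Simulated speedometer: the true speed plus Gaussian measurement noise.
measurements = [true_speed + random.gauss(0, 2.0) for _ in range(200)]

window = 20
estimates = []
for i in range(len(measurements)):
    recent = measurements[max(0, i - window + 1): i + 1]  # last `window` samples
    estimates.append(sum(recent) / len(recent))

final_estimate = estimates[-1]
```

Averaging trades responsiveness for noise rejection: the estimate is far less jittery than any raw measurement, but it lags behind rapid changes in the true state, which is one reason model-based estimators like the Kalman filter are often preferred.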
now the last major part of control
theory is responsible for ensuring the
system that we just designed works that
it meets the requirements that we set
for it and this comes down to analysis
simulation and test for this we can plot
data in different formats like with a
Bode diagram a Nichols chart or a
Nyquist diagram
we could check for stability and
performance margins we could simulate
the system using Matlab and simulink and
all of these tools can be used to ensure
that the system will function as
intended
and so this full diagram here I think
represents everything you need to know
about control theory you have to know
about different control methods both
feed forward and feedback depending on
the system you're controlling you have
to know about State estimation so that
you can take all of those noisy
measurements and be able to feed back an
estimate of System state
you have to know about planning so that
you can create the reference that you
want your controller to follow
you have to know how to analyze your
system to ensure that it's meeting
requirements and finally and possibly
most importantly you have to know about
building mathematical models of your
system because models are often used for
every part we just covered they are used
for controller design they're used for
State estimation they're used for
planning and they are used for analysis
all right I always leave links below to
other resources and references and this
video is no exception and there are a
bunch for this video since I mentioned
so many different topics and something I
think is nice is that we already have
Matlab Tech talks for almost every topic
I mentioned we have feed forward and PID
and gain scheduling and fuzzy logic and
Kalman filters and particle filters and
planning algorithms and system
identification and more so if there's an
area of control theory that you want to
learn more about I hope you check out
the links below
and to make it easier to browse through
all of them I put together a journey at
resourcium.org that organizes all of the
references in this video again link to
that is below as well
so this is where I'm going to leave this
video if you don't want to miss any
other future Tech talk videos don't
forget to subscribe to this Channel and
if you want to check out my channel
control system lectures I cover more
control theory topics there as well
thanks for watching and I'll see you
next time