Everything You Need to Know About Control Theory

MATLAB
27 Oct 2022 · 16:07

Summary

TLDR: The video explains control theory, which is essential for designing autonomous systems. It covers open-loop and closed-loop control methods, the importance of mathematical models, and how they are applied to dynamically control different systems. It explores controllers such as PID and nonlinear controllers, as well as planning and state-estimation techniques. The video also highlights the need for analysis and simulation to ensure that systems meet the desired requirements.

Takeaways

  • 🧠 Control theory is a mathematical framework that provides the tools to develop autonomous systems.
  • 🚗 A dynamical system is anything we want to control automatically, such as a building, a distillation column, or a car.
  • 🔄 Systems can be affected by control inputs (U) and by unwanted disturbances (D).
  • 🔄 A feedback (closed-loop) controller uses both the reference and the current state of the system to determine appropriate control inputs.
  • 🔄 Feed-forward (open-loop) controllers generate the control signal without needing to measure the actual state of the system.
  • 🔄 Mathematical models are crucial to almost every aspect of control theory: they are used for controller design, state estimation, planning, and analysis.
  • 🔍 System identification can be done from first principles (physics) or by fitting a model to measured data.
  • 🔄 Feedback control is a self-correcting mechanism that can make a system more stable, but it also changes the system's dynamics and can destabilize it.
  • 🛠 There are many kinds of feedback controllers: linear, nonlinear, robust, adaptive, optimal, predictive, and intelligent.
  • 📈 Planning is essential for defining the reference the controller must follow; it is where you determine what the system should do.
  • 🔬 State estimation is needed to handle noisy measurements and give the controller an accurate estimate of the system state.

Q & A

  • What is control theory, and how does it relate to building autonomous systems?

    -Control theory is a mathematical framework that provides the tools to develop autonomous systems, such as a self-driving car, a building with temperature control, or an efficient distillation column.

  • What are the two sources of inputs that affect a dynamical system in control theory?

    -The two sources of inputs are the control inputs (U), which are intentionally applied to affect the system, and the disturbances (D), which are unwanted forces acting on the system.

  • What is an open-loop controller, and how does it work?

    -An open-loop controller, also known as a feed-forward controller, takes the reference (R), which describes what you want the system to do, and generates the control signal without ever needing to measure the current state of the system.
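
    As a toy illustration of the open-loop idea, here is a minimal sketch in Python. The linear pedal-to-speed model and its gain are assumptions made up for illustration, not values from the video:

    ```python
    # Hypothetical identified model: steady-state speed = GAIN * pedal position.
    # GAIN is an invented constant standing in for a fitted car model.
    GAIN = 0.6  # m/s of steady-state speed per percent of pedal (assumed)

    def feed_forward_pedal(reference_speed):
        """Open-loop control: invert the assumed model to get the pedal
        command for a desired speed, without measuring the actual speed."""
        return reference_speed / GAIN

    # For a reference of ~30 mph (13.4 m/s) the controller outputs one fixed
    # pedal position and never looks at the car's real state again.
    pedal_command = feed_forward_pedal(13.4)
    ```

    Any error in the assumed model shows up directly as error in the achieved speed, which is exactly the weakness of open-loop control the video discusses.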

  • Why are mathematical models needed in control theory?

    -Mathematical models are needed to understand and predict system behavior, design controllers, estimate the system state, plan, and analyze the system's performance and stability.

  • What is a feedback controller, and how does it differ from an open-loop controller?

    -A feedback, or closed-loop, controller uses both the reference and the current state of the system to determine the control inputs. This differs from an open-loop controller, which does not measure the current state of the system.

  • What are linear and nonlinear controllers, and when is each used?

    -Linear controllers, such as PID and full-state feedback, are used when the overall behavior of the system is assumed to be linear. Nonlinear controllers, such as on/off controllers and sliding mode control, are used when the system is not linear.
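
    A minimal discrete PID sketch in Python, closing the loop on a toy first-order plant. The plant model and the gains are illustrative assumptions, not a system from the video:

    ```python
    class PID:
        """Minimal discrete PID controller sketch (gains are invented)."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, reference, measurement):
            error = reference - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Closed loop on an assumed first-order plant x' = -x + u (Euler steps).
    pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
    x = 0.0
    for _ in range(2000):                # simulate 20 seconds
        u = pid.update(1.0, x)           # the measured state is fed back
        x += (-x + u) * 0.01             # plant responds to the control input
    # the integral term drives the steady-state error toward zero
    ```

    The feedback loop here is self-correcting: the controller reacts to the difference between the reference and the measured state at every step.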

  • What is planning in the context of control theory, and why is it important?

    -Planning is the process of determining a plan or path for the system to follow, taking into account obstacles, rules, and physical capabilities. It is important because the controller needs a plan in order to generate the commands that drive the system toward its goals.
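
    As a toy example of graph-based planning, here is a small A* planner on an occupancy grid (A* is one of the methods the video names; the grid, the 4-connected moves, and the unit costs are made up for illustration):

    ```python
    import heapq

    def a_star(grid, start, goal):
        """Tiny A* planner on a 4-connected occupancy grid (1 = obstacle).
        Returns a list of cells from start to goal, or None if unreachable.
        Real planners would also respect vehicle dynamics and comfort."""
        rows, cols = len(grid), len(grid[0])
        h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
        open_set = [(h(start), start)]
        came_from = {}
        cost = {start: 0}
        while open_set:
            _, cur = heapq.heappop(open_set)
            if cur == goal:                      # reconstruct the path
                path = [cur]
                while cur in came_from:
                    cur = came_from[cur]
                    path.append(cur)
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cur[0] + dr, cur[1] + dc)
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == 0):
                    ng = cost[cur] + 1
                    if ng < cost.get(nxt, float("inf")):
                        cost[nxt] = ng
                        came_from[nxt] = cur
                        heapq.heappush(open_set, (ng + h(nxt), nxt))
        return None

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    path = a_star(grid, (0, 0), (2, 0))  # must route around the wall of 1s
    ```

    The returned path is the reference the controller then tries to follow.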

  • What is state estimation, and why is it crucial in feedback systems?

    -State estimation is the process of determining the current state of the system from noisy measurements. It is crucial because feedback controllers need to know the system state in order to adjust the control inputs effectively.
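
    A scalar Kalman-style filter sketch, estimating a (nearly) constant speed from noisy measurements. The Kalman filter is one of the algorithms the video lists; the noise variances and the "true" speed below are assumptions for illustration:

    ```python
    import random

    def kalman_1d(measurements, q=1e-4, r=0.5):
        """Scalar Kalman filter sketch for a (nearly) constant state.
        q and r are assumed process and measurement noise variances."""
        x, p = measurements[0], 1.0      # initial estimate and its variance
        estimates = []
        for z in measurements:
            p += q                       # predict: uncertainty grows slightly
            k = p / (p + r)              # Kalman gain: trust in the measurement
            x += k * (z - x)             # update estimate toward the measurement
            p *= (1 - k)                 # updated (reduced) uncertainty
            estimates.append(x)
        return estimates

    random.seed(0)
    true_speed = 13.4                    # assumed true state, e.g. ~30 mph
    noisy = [true_speed + random.gauss(0, 0.7) for _ in range(200)]
    est = kalman_1d(noisy)
    # est[-1] should sit much closer to 13.4 than any single raw measurement
    ```

    This is what gets fed back to the controller instead of the raw, noisy sensor reading.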

  • How can a controlled system be analyzed and simulated to ensure it meets requirements?

    -Analysis tools such as Bode diagrams, Nyquist plots, and simulations in MATLAB and Simulink can be used to verify stability and performance and to make sure the system works as intended.
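
    The point that feedback changes the dynamics, and can either stabilize or destabilize a system, can be seen in a toy scalar simulation (the plant pole and gains below are invented for illustration):

    ```python
    def closed_loop_pole(a, k):
        """Scalar discrete plant x[k+1] = a*x[k] + u[k] with state feedback
        u = -k*x: the closed-loop pole is (a - k), stable iff |a - k| < 1."""
        return a - k

    def simulate(a, k, x0=1.0, steps=50):
        """Run the closed loop and return the final state."""
        x = x0
        for _ in range(steps):
            x = (a - k) * x          # same as a*x + u with u = -k*x
        return x

    a = 1.1                          # open-loop pole > 1: unstable on its own
    good = simulate(a, k=0.5)        # pole 0.6: feedback stabilizes, state decays
    bad = simulate(a, k=3.0)         # pole -1.9: too much gain, feedback destabilizes
    ```

    Checking where the closed-loop poles land is the discrete-time analogue of the margin and stability analysis the answer describes.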

  • What control techniques are mentioned in the script, and how do they relate to different types of systems?

    -Techniques mentioned include PID control, gain scheduling, fuzzy control, reinforcement learning, predictive control, and intelligent control. Which technique fits which system depends on the system's characteristics, the amount of uncertainty present, and the specific control requirements.

Outlines

00:00

🔍 Introduction to Control Theory

This paragraph introduces control theory as a mathematical framework essential for developing autonomous systems. It describes how control theory can be applied to different systems, such as a car, a building, or a distillation column. Systems are affected both by control inputs (such as a car's steering wheel or accelerator) and by unwanted disturbances (such as wind or imperfections in the road). It also raises the question of whether an algorithm can determine the necessary control inputs without constantly knowing the current state of the system.

05:01

🔧 Open-Loop and Closed-Loop Control

Open-loop controllers (also known as feed-forward controllers) are contrasted with closed-loop (feedback) controllers. Open-loop controllers take a reference and generate a control signal without measuring the actual state of the system, while closed-loop controllers use both the reference and the current system state to determine the control inputs. The advantages and disadvantages of each type are discussed, with emphasis on how important it is to understand the system dynamics and the environment in order to implement effective control.

10:01

🛠 The Variety of Feedback Control Methods

This paragraph explores the range of feedback control algorithms, from linear controllers such as PID and full-state feedback to nonlinear controllers such as on/off controllers and sliding mode controllers. Robust, adaptive, optimal, and predictive controllers are mentioned, as well as intelligent, data-driven controllers. It stresses the importance of choosing the right controller based on the system being controlled and the desired objectives.

15:01

📈 Planning, State Estimation, and System Analysis

The remaining aspects of control theory are covered: planning, state estimation, and system analysis. Planning is crucial for defining the reference the controller must follow. State estimation is needed to feed back the system state despite noise in the measurements. Finally, system analysis is essential to ensure that the designed system meets its requirements. Mathematical models are fundamental to all of these aspects of control theory.

🔗 Resources and References for Further Learning

The last paragraph offers additional resources for learning more about control theory, including links to other MATLAB Tech Talk videos that cover specific topics mentioned in the video, such as feedback control, closed-loop control, system identification, and more. A link is also provided to a journey at resourcium.org that organizes all of the references mentioned in the video.

Keywords

💡Control Theory

Control theory is a mathematical framework used to develop autonomous systems. The video mentions that it provides the tools to dynamically control systems such as self-driving cars, buildings with temperature systems, or distillation columns. It is used to design and analyze how a system should react to different inputs in order to reach a desired goal.

💡Dynamical System

A dynamical system is anything we want to control automatically, such as a building, a distillation column, or a car. It is affected by external inputs, which may be intentional (control inputs) or unintentional (disturbances). The video discusses how these dynamical systems can be controlled and modeled to predict and adjust their behavior.

💡Control Inputs (U)

Control inputs are the intentional actions taken to affect the system, such as turning the steering wheel or pressing the accelerator in a car. The video explains that these inputs are fundamental to control theory, since they are what the controller manipulates to reach the desired goal.

💡Disturbances (D)

Disturbances are unwanted forces that affect the system, such as wind or bumps in the road for a car. The video highlights the importance of handling these disturbances so that the system performs as planned despite unpredictable external conditions.

💡System State (X)

The system state refers to the current condition of the system, such as a car's speed and direction. The video discusses how control theory is used to manage and adjust the system state over time in response to inputs and disturbances.

💡Open Loop Controller

An open-loop controller, also known as a feed-forward controller, generates control signals based only on the desired reference, without needing to measure the current state of the system. The video illustrates it with a car holding a constant speed without measuring its actual speed.

💡Feedback Control

Feedback, or closed-loop, control is a method in which the controller uses both the reference and the current state of the system to determine the control inputs. The video presents it as the solution for systems that are not robust to disturbances, where a self-correcting controller is needed to respond to deviations from the desired state.

💡Mathematical Model

A mathematical model is a mathematical representation of a system used to predict its behavior and design controllers. The video highlights the importance of models in control theory, since they are essential for system identification, controller design, and state estimation.
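
As a sketch of fitting a model to data (the system identification the video describes), here is a least-squares line fit of a pedal-to-speed data set, inverted so a feed-forward controller could use it. Every number below is invented for illustration:

```python
# Assumed test data: (pedal position %, measured steady-state speed m/s).
data = [(10, 6.2), (20, 12.1), (30, 18.3), (40, 23.9)]

# Ordinary least-squares fit of speed = a * pedal + b.
n = len(data)
sx = sum(p for p, _ in data)
sy = sum(s for _, s in data)
sxx = sum(p * p for p, _ in data)
sxy = sum(p * s for p, s in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # fitted slope
b = (sy - a * sx) / n                            # fitted intercept

def pedal_for(speed):
    """Invert the fitted model: pedal command for a target speed."""
    return (speed - b) / a
```

Once the model is fitted, its inverse is exactly the kind of feed-forward controller described elsewhere in the video: given a reference speed, it computes the pedal command directly.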

💡State Estimation

State estimation is the process of estimating the current state of the system from measurements that may be noisy. The video mentions it as an important area of control theory, since it is needed to provide accurate feedback to the controller despite sensor noise.

💡Planning

Planning is the process of determining a sequence of actions to reach a specific goal. The video stresses the importance of planning in control theory, since it is needed to create a reference for the controller to follow, as in the case of a self-driving car that must plan a path to its destination while avoiding obstacles.

Highlights

Control theory is essential for designing autonomous systems.

A dynamical system can be affected by control inputs and disturbances.

Automating a process involves determining control inputs without constant state knowledge.

Open loop controllers, or feed forward controllers, generate control signals without measuring the system state.

Feed forward controllers require a good understanding of system dynamics.

System identification is crucial for developing models used in control theory.

Feedback control, or closed loop control, uses the current state of the system to adjust control inputs.

Feedback control is a self-correcting mechanism but can be dangerous if not designed properly.

There are various types of feedback controllers, including linear, non-linear, robust, adaptive, optimal, and intelligent.

Planning is necessary to create a reference for the control system to follow.

State estimation is required to accurately estimate the system state from noisy measurements.

Analysis, simulation, and testing ensure the designed system meets requirements.

Mathematical models are used throughout control theory for design, estimation, planning, and analysis.

MATLAB Tech Talks cover various topics in control theory, providing in-depth knowledge.

Resource links and a journey at resourcium.org are provided for further learning.

Transcripts

An important question that has to be answered when you're designing an autonomous system is: how do you get that system to do what you want? I mean, how do you get a car to drive on its own, how do you manage the temperature of a building, or how do you separate liquids into their component parts efficiently with a distillation column? To answer those questions we need control theory. Control theory is a mathematical framework that gives us the tools to develop autonomous systems, and in this video I want to walk through everything you need to know about control theory, so I hope you stick around for it. I'm Brian, and welcome to a MATLAB Tech Talk.

We can understand all of control theory using a simple diagram, and to begin let's just start with a single dynamical system. This system is the thing that we want to automatically control, like a building or a distillation column or a car. It can really be anything, but the important thing is that the system can be affected by external inputs, and in general we can think of the inputs as coming from two different sources. There are the control inputs U that we intentionally use to affect the system; for a car these are things like moving the steering wheel, hitting the brake, and pressing on the accelerator pedal. And then there are unintentional inputs. These are the disturbances D, and they are forces that we don't want affecting the system, but they do anyway; these are things like wind and bumps in the road.

Now, the inputs enter the system, interact with the internal dynamics, and then the system state X changes over time. So for a car, we move the steering wheel and we press the pedals, which turn the wheels and rev the engine, producing forces and torques on that vehicle, and then, combined with the forces and torques from the disturbances, the car changes its speed, position, and direction. Now, if we want to automate this process, that is, we want the car to drive without a person determining the inputs, where do we go from here?

The first question is: can an algorithm determine the necessary control inputs without constantly having to know the current state of the system? Or maybe a better way of putting it is: do you need to measure where the car is and how fast it's going in order to successfully drive the car with good control inputs? And the answer is actually no. We can control a system with an open loop controller, also known as a feed forward controller. A feed forward controller takes in what you want the system to do, called the reference R, and it generates the control signal without ever needing to measure the actual state. In this way, the signal from the reference is fed forward through the controller and then forward through the system, never looping back, hence the name feed forward.

For example, let's say that we want the car to autonomously drive in a straight line and at some arbitrary constant speed. If the car is controllable, which means that we have the ability to actually affect the speed and direction of the car, then we could design a feed forward controller that accomplishes this. The reference "drive straight" means that the steering wheel should be held at a fixed zero degrees, and "drive at a constant speed" means that we depress the accelerator pedal some non-zero amount. The car would then accelerate to a constant speed and drive straight, exactly as we want.

However, let's say that we want the car to reach a specific speed, like 30 miles an hour. We can actually still do it with a feed forward controller, but now the controller needs to know how much to depress the accelerator pedal in order to reach that specific speed, and this requires knowledge about the dynamics of the system. This knowledge can be captured in the form of a mathematical model. Now, developing a model can be done using physics and first principles, where the mathematical equations are written out based on your understanding of the system dynamics, or it can be done by using data and fitting a model to that data with a process called system identification. Both of these modeling techniques are important concepts to understand because, as we'll get into, models are required for almost all aspects of control theory.

Now, as an example of system identification, we could test the real car and record the speed it reaches given different pedal positions, and then we could just fit a mathematical model to that data; basically, speed is some function of the pedal position. For the feed forward controller itself, we could just use the inverse of that model to get pedal position as a function of speed. So given a reference speed, the feed forward controller would be able to calculate the necessary control input.

So feed forward controllers are a pretty straightforward way to control a system. However, as we can see, it requires a really good understanding of the system dynamics, since you have to invert them in the controller, and any error in that inversion process will result in error in the system state. Also, even if you know your system really well, the environment the system is operating in should have predictable behavior as well, you know, so that there's not a lot of unknown disturbances entering the system that you're not accounting for in the controller.

Of course, it doesn't take much imagination to see that feed forward control breaks down for systems that aren't robust to disturbances and uncertainty. I mean, imagine wanting to autonomously drive a car across the city with feed forward control. Theoretically, you could map the city well enough and know your car well enough that you could essentially pre-program in all of the steering wheel and pedal commands and just let it go, and if you had perfect knowledge ahead of time, then the car would execute those commands and make its way across the city unharmed. Obviously, though, this is unrealistic; not only are other cars and pedestrians impossible to predict perfectly, but even the smallest errors in the position and speed of your car will build over time and eventually deviate much too far from the intended path.

So this is where feedback control, or closed loop control, comes to the rescue. In feedback control, the controller uses both the reference and the current state of the system to determine the appropriate control inputs; that is, the output is fed back, making a closed loop, hence the name. In this way, if the system state starts to deviate from the reference, either because of disturbances or because of errors in our understanding of the system, then the controller can recognize those deviations, those errors, and adjust the control inputs accordingly. So feedback control is a self-correcting mechanism, and I like to think of feedback as a hack that we have to employ due to our inability to perfectly understand the system and its environment. We don't want to use feedback control, but we have to.

All right, so feedback control is powerful, but it's also a lot more dangerous than feed forward control, and the reason for this is that feed forward changes the way we operate a system, but feedback changes the dynamics of the system; it changes its underlying behavior. This is because, with feedback, the controller changes the system state as a function of the current state, and that relationship is producing new dynamics. And changing dynamics means that we have the ability to change the stability of the system. On the plus side, we can take an unstable or marginally stable system and make it more stable with feedback control, but on the negative side, we can also make a system less stable, and even unstable, and this is why a lot of control theory is focused on designing and, importantly, analyzing feedback controllers, because if you do it wrong, you can cause more harm than good.

And since feedback control exists in many different types of systems, the control community over the years has developed many different types of feedback controllers. There are linear controllers, like PID and full state feedback, that assume the general behavior of the system being controlled is linear in nature, and if that's not the case, there are nonlinear controllers, like on-off controllers, sliding mode controllers, and gain scheduling. Now, often thinking in terms of linear versus nonlinear isn't the best way to choose a controller, so we define them in other ways as well. For example, there are robust controllers, like mu synthesis and active disturbance rejection control, which focus on meeting requirements even in the face of uncertainty in the plant and in the environment, so we can guarantee that they are robust to a certain amount of uncertainty. There are adaptive controllers, like extremum seeking and model reference adaptive control, that adapt to changes in the system over time. There are optimal controllers, like LQR, where a cost function is created and then the controller tries to balance performance and effort by minimizing the total cost. There are predictive controllers, like model predictive control, that use a model of the system inside the controller to simulate what the future state will be, and therefore what the optimal control input should be in order to have that future state match the reference. There are intelligent controllers, like fuzzy controllers or reinforcement learning, that rely on data to learn the best controller. And there are many others. The point here isn't to list every control method; I just wanted to highlight the fact that feedback control isn't just a single algorithm, it's a family of algorithms, and choosing which controller to use and how to set it up depends largely on what system you are controlling and what you want it to do.

So, what do you want your system to do? What state do you want the system to be in? What is the reference that you want it to follow? This might seem like a simple question if we're balancing an inverted pendulum or designing a simple cruise controller for a car; the reference for the pendulum is vertical, and for the car it's the speed that the driver sets. However, for many systems, understanding what it should do takes some effort, and this is where planning comes in. The control system can't follow a reference if one doesn't exist, and so planning is a very important aspect of designing a control system. With a self-driving car, for example, planning has to figure out a path to the destination while avoiding obstacles, and it has to follow the rules of the road. Plus, it has to come up with a plan that the car is physically able to follow, you know, it doesn't accelerate too fast or turn too quickly, and if there are passengers, then planning has to account for their comfort and safety as well. Only after the plan has been created can the controller then generate the commands to follow it. An example of two different graph-based planning methods are rapidly exploring random trees (RRT) and A*. Once again, there are too many different algorithms to name, but the important thing is that you understand that you have to develop a plan that your controller will then try to follow.

All right, so once you know what you want the system to do and you have a feedback controller to do it, now you need to actually execute this plan, and as we know, for feedback controllers this requires knowledge of the state of the system; that is, after all, what we are feeding back. And the problem is that we don't actually know the state unless we measure it, and measuring it with a sensor introduces noise. So for our car example, we're not feeding back the true speed of the car, we're feeding back a noisy measurement of the speed, and our controller is going to react to that noise. So in this way, noise in a feedback system actually affects the true state of the system, and this is one additional problem that we're going to have to tackle with feedback control.

A second problem is that of observability. In order to feed back the state of the system, we have to be able to observe the state of the system, and this requires sensors in enough places that every state that is fed back can be observed. Now, it's important to note that we don't have to measure every state directly, we just need to be able to observe every state. For example, if our car only has a speedometer, we can still observe acceleration by taking the derivative of the speed. So there are two things here: we need to reduce measurement noise, and we need to manipulate the measurements in such a way that allows us to accurately estimate the state of the system. State estimation is therefore another important area of control theory, and for this we can use algorithms like the Kalman filter, the particle filter, or even just a simple running average. Choosing an algorithm depends on which states you are directly measuring, and how much noise and what type of noise is present in those measurements.

Now, the last major part of control theory is responsible for ensuring the system that we just designed works, that it meets the requirements that we set for it, and this comes down to analysis, simulation, and test. For this, we can plot data in different formats, like with a Bode diagram, a Nichols chart, or a Nyquist diagram. We could check for stability and performance margins. We could simulate the system using MATLAB and Simulink. All of these tools can be used to ensure that the system will function as intended.

And so this full diagram here, I think, represents everything you need to know about control theory. You have to know about different control methods, both feed forward and feedback, depending on the system you're controlling. You have to know about state estimation, so that you can take all of those noisy measurements and be able to feed back an estimate of system state. You have to know about planning, so that you can create the reference that you want your controller to follow. You have to know how to analyze your system to ensure that it's meeting requirements. And finally, and possibly most importantly, you have to know about building mathematical models of your system, because models are often used for every part we just covered: they are used for controller design, they're used for state estimation, they're used for planning, and they are used for analysis.

All right, I always leave links below to other resources and references, and this video is no exception; there are a bunch for this video, since I mentioned so many different topics. Something I think is nice is that we already have MATLAB Tech Talks for almost every topic I mentioned: we have feed forward and PID and gain scheduling and fuzzy logic and Kalman filters and particle filters and planning algorithms and system identification and more. So if there's an area of control theory that you want to learn more about, I hope you check out the links below. And to make it easier to browse through all of them, I put together a journey at resourcium.org that organizes all of the references in this video; again, the link to that is below as well.

So this is where I'm going to leave this video. If you don't want to miss any other future Tech Talk videos, don't forget to subscribe to this channel, and if you want to check out my channel, Control System Lectures, I cover more control theory topics there as well. Thanks for watching, and I'll see you next time.

Related Tags
Control Theory, Autonomous Systems, Mathematical Models, State Control, Planning, Execution, Control Methods, Simulation, Systems Analysis, Disturbance Control