5.00 credits
30.0 h + 22.5 h
Q1
Language
English
Prerequisites
This course assumes familiarity with the basic notions of dynamical systems (level of LEPL1106: Signals and Systems, and LINMA1510: Linear Control) and with calculus and linear algebra (level of LEPL1101: Algebra, and LEPL1102: Calculus I). LINMA2470: Stochastic Modelling is highly recommended.
Main themes
- Foundations of probability and optimal control
- Finite-state systems and MDPs
- State-space models: LTI, hybrid, and nonlinear
- Optimal control in the face of model uncertainty
- Reinforcement learning
Learning outcomes
At the end of this learning unit, the student is able to:

Contribution of the course to the program objectives: AA1.1, AA1.2, AA1.3, AA2.2, AA5.5, AA6.3.

At completion of this course, the student will be able to:
- Understand the concept of optimizing a stochastic process or system;
- Reformulate practical problems as mathematical decision/design problems for stochastic systems;
- Use the foundational tools of stochastic optimal control and reinforcement learning to solve decision/design problems for stochastic systems;
- Apply algorithmic tools for the exact or approximate solution of stochastic optimal control problems, and understand their strengths, limitations, and scope of applicability;
- Apply the concepts of exploration vs. exploitation and regret minimization;
- Provide an exact or approximate solution to stochastic optimal control problems, with applications in diverse fields such as financial mathematics, robotics, …

Transversal learning outcomes:
- Handling unforeseen technical issues that appear when optimizing a real-world system;
- Making reasonable hypotheses for a given problem, and evaluating them a posteriori;
- Taking part in a technical class given in English.
Faculty or entity