The Department of Mathematical Engineering organizes a series of seminars. The seminars are usually held on Tuesdays from 2:00pm to 3:00pm in the Euler lecture room, Building EULER, av. Georges Lemaître 4-6, Louvain-la-Neuve (Parking 13). Be mindful that exceptions may occur; see the talk announcements.
If you wish to receive the seminar announcements by email, please send an email to Etienne Huens.
Master students can take this seminar for credit in either of the two semesters; see LINMA2120 for more information.
Academic year 2020-2021: The Covid-19 restrictions require us to hold all seminars online until November 19th, 2020, via Microsoft Teams on the channel of the course.
Seminars to come
|18/5/2021 (14h) [Location: MS Teams]|
Master Students' presentations (1/2)
Title: A quick overview on different opinion dynamics models and the revisited agreement algorithm for reaching consensus
Speaker: Albert Van Zeebroeck
Abstract: Opinion dynamics uses mathematical and physical models, together with agent-based computational modeling tools, to investigate how opinions spread in a collection of human beings. In this 20-minute presentation we will walk through a number of discrete and continuous opinion dynamics models, such as the classical French-Harary-DeGroot model or Abelson's model. We will then have a look at the revisited agreement algorithm for reaching consensus described by Professors Vincent Blondel and Julien Hendrickx in 2005. In this algorithm, a number of agents exchange their values (or opinions) asynchronously and form weighted averages with the values possessed by their neighbors.
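As a minimal illustration of the kind of model the talk surveys, here is the synchronous DeGroot update (an assumption on our part: the talk also covers asynchronous and continuous variants), in which each agent repeatedly replaces its opinion with a weighted average of its neighbors' opinions, x(t+1) = W x(t) with W row-stochastic:

```python
import numpy as np

# Row-stochastic influence matrix on a 3-agent path graph: each row sums to 1.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])

x = np.array([0.0, 0.5, 1.0])  # initial opinions
for _ in range(200):           # repeated weighted averaging
    x = W @ x                  # x(t+1) = W x(t)

# All opinions converge to a common consensus value, the average of the
# initial opinions weighted by the left eigenvector of W for eigenvalue 1.
print(x)
```

Because the underlying graph is connected and W is aperiodic, the iterates converge geometrically to consensus; here the limit is 0.5, the stationary-distribution-weighted average of the initial opinions.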
Title: Robust optimization – methodology and applications
Speaker: Nizar Bousselmi
Abstract: Optimization problems with uncertain data often arise in practice. Classical resolution of such problems can lead to very erroneous results, so it is crucial to have a methodology to solve them robustly. In this presentation, we look at the well-developed theory of robust optimization in the framework of linear, conic quadratic and semidefinite programming, and we attempt to illustrate the importance of robust optimization in practical applications.
Title: The mathematics of infectious diseases
Speaker: Guillaume Roisin
Abstract: In this talk we will estimate values of the basic reproduction number R0 for various diseases. We will start by presenting simple SIR epidemic and endemic models and then extend the theory to the MSEIR model. We will need to introduce demographic models with different age structures to obtain new expressions for R0 for the MSEIR and SEIR models with continuous age or age groups.
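For readers unfamiliar with the SIR model mentioned above, a minimal sketch (our illustration, not from the talk): the fractions of susceptible, infectious, and recovered individuals evolve as dS/dt = -βSI, dI/dt = βSI - γI, dR/dt = γI, and the basic reproduction number is R0 = β/γ; an outbreak initially grows only if R0·S(0) > 1.

```python
# Forward-Euler integration of the basic SIR model (illustrative rates).
beta, gamma = 0.4, 0.2       # transmission and recovery rates, so R0 = 2
S, I, R = 0.99, 0.01, 0.0    # population fractions; S + I + R = 1
dt, T = 0.1, 300.0

for _ in range(int(T / dt)):
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

# With R0 = 2 the epidemic peaks and dies out, leaving a large recovered
# fraction but never infecting the whole population.
print(S, I, R)
```

Note how the epidemic ends with S > 0: infections stop once S drops below γ/β = 1/R0, not because everyone has been infected.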
|11/5/2021 (14h) [Location: MS Teams]|
Yassine Kamri, Florence Blondiaux, Rémi Delogne, and Olivier Leblanc (UCLouvain)
Welcome Seminar for new PhD students (#3)
Title: Convex interpolation, performance estimation problems, and automatic design and analysis of first-order methods for convex optimization
Speaker: Yassine Kamri
Abstract: For many large-scale convex optimization problems with low accuracy requirements on the solution, first-order methods are relevant due to their cheap computational cost and convergence properties. This is for example the case in machine learning, signal processing, imaging and control, where first-order methods have proved to be very useful. However, designing first-order methods with optimal worst-case guarantees can be complex without standard assumptions like smoothness or exactness of the first-order information. One aspect of my research will be to develop a framework for automatic design and analysis of optimal first-order methods for more general large-scale settings involving, for instance, noise, constraints, or non-smoothness. I will rely on and extend the recent work on performance estimation problems (PEPs), a framework that casts the worst-case analysis of first-order methods as an optimization problem itself. Convex interpolation is one of the main ideas behind the tractable formulation of PEPs, so I will also present some preliminary results on convex interpolation.
Title: Essential tremor, a cerebellar disorder?
Speaker: Florence Blondiaux
Abstract: Essential tremor is a common disorder affecting up to 4% of the population above 40 years old. It usually manifests as hand shaking, mainly during movement. To date, little is known about the origin of this disorder, although a role of the cerebellum is suspected. The current treatments for this tremor and the diagnostic methods are based on indirect countermeasures and would benefit from improvement. Combining systems theory and an experimental approach, we aim to better understand the role of the cerebellum in the generation of the tremor by testing patients in a series of tasks known to depend on this brain structure. We suspect that the oscillations observed in the biological system could be due to an erroneous estimation of the feedback delays inherent to the conduction of the neural sensory signals through the nerves. This step is hypothesized to be performed by the cerebellum, which is why we study the abilities of essential tremor patients in these tasks, to better characterize the functional impact of ET on cerebellar-dependent control mechanisms.
Title: Optical random features and asymmetric random features in kernel methods
Speaker: Rémi Delogne
Abstract: Within machine learning, kernel methods provide a clever way to “kernelise”, or distort, the feature space to allow linear algorithms to successfully classify instances that would normally require non-linear algorithms. The standard way to apply this kernelisation is the “kernel trick”, which uses a similarity function (the kernel function) on pairs of instances in the original feature space to exactly replicate the inner product of the instances in the kernelised feature space. However, this trick does not scale well to large datasets due to its algorithmic complexity. A way to work around this limitation is to approximate the kernel function with random projections, which map the features to a randomised lower-dimensional feature space, allowing for computational complexity independent of the number of instances in the dataset. Performing the random mapping can be achieved with physical projections of an instance, transformed into a light beam, through a piece of scattering material. Our first purpose is to study what kernel such an optical projection is capable of approximating, and with what statistical guarantees. This approach assumes we perform the inner product between two instances transformed using the same mapping. However, that is not always going to be the case in practice: some instances could be quantised while others are not. Our second objective is to work on such asymmetric random features, exploring the framework and giving statistical guarantees on their use.
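To make the random-projection idea concrete, here is a sketch using classical random Fourier features (a digital stand-in for the optical projections discussed in the talk, not the authors' setup): a random map z is built so that z(x)·z(y) approximates the Gaussian kernel k(x, y) = exp(-‖x - y‖²/2), at a cost independent of the dataset size.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 20000                       # input dimension, number of random features

# Random Fourier features for the Gaussian kernel: frequencies drawn from
# N(0, I) (the kernel's spectral density) and uniform random phases.
W = rng.standard_normal((D, d))
b = rng.uniform(0.0, 2.0 * np.pi, D)

def z(x):
    """Randomised feature map: z(x).T @ z(y) ~ exp(-||x - y||^2 / 2)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
approx = z(x) @ z(y)
print(exact, approx)  # the two values agree up to O(1/sqrt(D)) error
```

The optical scheme in the abstract replaces the matrix multiplication `W @ x` by physical light scattering; the open question raised in the talk is which kernel that physical map approximates and with what guarantees.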
Title: Interferometric Lensless Endoscopy: Rank-one Projections of Image Frequencies with Speckle Illuminations
Speaker: Olivier Leblanc
Abstract: Lensless endoscopy (LE) with multicore fibers (MCF) enables fluorescent imaging of biological samples at cellular scale. The first approaches to imaging a 2-D biological sample consisted in raster scanning (RS) it with a focused beam illuminated from the MCF distal end, the light re-emitted by the fluorescent sample being collected and recorded pixel by pixel for each focused spot location. In a recent scheme called speckle imaging, inspired by compressive sensing theory, the sample is illuminated with random speckles, achieved by randomly configuring a spatial light modulator (SLM); this mimics a compressive imaging system with random waveforms and allows for faster reconstruction. In this talk, I will show that by exploiting the physics of light propagation under a common far-field approximation, the corresponding imaging process is tantamount to collecting multiple rank-one projections (ROPs) of a Hermitian interferometric matrix, a matrix encoding a subsampling of the Fourier transform of the sample image. Specifically, each ROP of this matrix is achieved with the complex vector shaping the incident wavefront via the SLM, and, for specific MCF core arrangements, the interferometric matrix collects as many image frequencies as the square of the core number.
|4/5/2021 (14h) [Location: MS Teams]|
Marco Claudio Campi (University of Brescia, Italy)
Data-driven decision making and the scenario approach
Abstract: Decisions are based on past experience but they are applied to future cases. In this talk, we shall present the "scenario approach", a broad methodology to make data-driven decisions, and present a theory to rigorously judge the probability that a decision based on experience will under-perform in new situations. The generality of this approach makes it useful across a variety of fields, including control, finance and medical applications.
Biography: Marco Claudio Campi is Professor of Inductive Methods and Systems Theory at the University of Brescia, Italy. He has served in various capacities on the editorial boards of leading journals including Automatica, Systems and Control Letters and the European Journal of Control. Since 2016, he has been a Distinguished Lecturer of the IEEE Control Systems Society. He has delivered plenary and semi-plenary addresses at major conferences including CDC, SYSID, ECC and MTNS. In 2008, he received the IEEE Control Systems Society George S. Axelby Outstanding Paper Award. Marco Campi is a Fellow of IEEE and a Fellow of IFAC.
|27/4/2021 (14h) [Location: MS Teams]|
Hazan Daglayan Sevim, Le Hoang Trieu Vy, Julien Calbert, and Sébastien Mattenet (UCLouvain)
Welcome Seminar for new PhD students (#2)
Title: Matrix Completion Methods for Exoplanet Detection
Speaker: Hazan Daglayan Sevim
Abstract: Exoplanet detection by direct imaging is difficult because the planet is faint compared with the light of its star. The presence of quasi-static speckles in the images, which obscure the astrophysical signatures, also makes detecting the planet difficult. Angular Differential Imaging, which takes advantage of the Earth's rotation to take several images in a night and differentiate between rotating planets and quasi-static speckles, is one of the most commonly recommended methods for removing these speckles. A low-rank matrix approximation of the images can capture the speckle noise while ignoring moving planets. As a result, the difference between the original image and the low-rank matrix approximation can help to detect the planets. However, in this method the signal of the planet can also be absorbed into the model. We aim to propose a new technique that uses matrix completion. We start by assuming the presence of a planet at a specific location and calculating its path through the data cube. Then we take this path out of the cube and use matrix completion to fill in the blanks and obtain a low-rank approximation. We also propose a maximum-likelihood estimator to compute the intensity of the planet while comparing the model to the original data along the path.
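A hedged sketch of the baseline low-rank speckle-subtraction idea the abstract builds on (a generic SVD/PCA version on synthetic data, not the authors' matrix-completion method): stack the frames of the data cube as rows, model the quasi-static speckles with the top singular components, and subtract them so that only noise, and any moving planet signal, remains in the residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_pix = 40, 500

# Synthetic data cube: a quasi-static speckle pattern repeated in every
# frame, plus small per-frame noise (no planet injected in this sketch).
speckle_pattern = rng.standard_normal(n_pix)
cube = np.outer(np.ones(n_frames), speckle_pattern)
cube += 0.01 * rng.standard_normal((n_frames, n_pix))

# Rank-1 truncated SVD captures the static speckles across frames.
U, s, Vt = np.linalg.svd(cube, full_matrices=False)
rank = 1
speckle_model = (U[:, :rank] * s[:rank]) @ Vt[:rank]

# The residual is at the noise level; a planet moving frame to frame
# would survive here instead of being absorbed into the model.
residual = cube - speckle_model
print(np.abs(residual).max())
```

The pitfall the talk addresses is visible in this construction: if the planet's path were included when fitting the low-rank model, part of its signal would leak into `speckle_model`, which is what motivates removing the hypothesized path before completing the matrix.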
Title: Discrete Exterior Calculus framework and proximal alternating minimization for solving discrete Mumford-Shah and Ambrosio-Tortorelli models
Speaker: Le Hoang Trieu Vy (UCLouvain and ENS Lyon)
Abstract: This work is dedicated to joint image restoration and contour detection. We consider both the discrete Mumford-Shah and the Ambrosio-Tortorelli models, which can be formulated as a minimization involving the sum of a data fidelity term, a term enforcing smoothness everywhere in the image except at the location of the contours, and a third term that penalizes the length of the contours, being the discrete counterpart of the 1D Hausdorff measure. Particular attention is paid to the derivation of the coupling and penalization terms, which rely on the derivative operator and are expressed by means of the Discrete Exterior Calculus framework. In this work, we derive proximal alternating minimization schemes with convergence guarantees (PALM-AT and SL-PAM) to estimate a critical point of our energy. We compare their behaviors when the discrete counterpart of the 1D Hausdorff measure is modeled either by an l1-norm or by the Ambrosio-Tortorelli penalization. The closed-form expressions of the involved proximity operators are provided. A thorough numerical study is conducted to evaluate the performance of both numerical schemes, as well as comparisons between the l1-norm and Ambrosio-Tortorelli penalizations.
Title: Alternating Simulation on Hierarchical Abstractions
Speaker: Julien Calbert
Abstract: Abstraction techniques provide formal guarantees for generic optimal control problems on nonlinear and hybrid systems. Computing an abstraction solving the problem over the whole state-space is computationally demanding in high-dimensional spaces. We circumvent this curse of dimensionality by introducing a hierarchical abstraction approach for solving an optimal control problem for nonlinear systems with three nested partitions. These nested partitions allow the construction of auxiliary systems that characterize simulation relations, which are suitably exploited to provide upper and lower bounds for a branch and bound algorithm to yield an optimal solution for the control problem.
Title: Categorification of path-complete Lyapunov functions
Speaker: Sébastien Mattenet
Abstract: In this talk I will consider proving the stability of a variety of different systems with the same general algorithm. In mathematics, one of the several types of proof at your disposal is the proof by monovariant, where you introduce some value on the state of a system (like energy in the case of a physical system) and show that it only goes one way, concluding something about the evolution of the system without having to solve it exactly. One important property we may ask of a given system is so-called Global Asymptotic Stability, roughly saying that there is one point of equilibrium and every starting point eventually tends toward it. Luckily, such a property is equivalent to the existence of a Lyapunov function, which is a particular example of a monovariant. Given a particular system, several algorithms have been proposed to find a Lyapunov function, each with its limitations and narrow scope of application; moreover, those algorithms are often only valid for one class of systems. The goal of my research is to use the tools of category theory to extend the scope of application of existing tactics, in particular to extend them to greater classes of systems.
|20/4/2021 (14h) [Location: MS Teams]|
Panagiotis Andrianesis (Boston University, MA, USA)
Computation of Convex Hull Prices in Electricity Markets with Non-Convexities
Abstract: The presence of non-convexities in electricity markets has been an active research area for about two decades. The problem of guaranteeing that no market participant incurs losses in the day-ahead market, inevitable under current marginal cost pricing, is addressed in current practice through make-whole payments, a.k.a. uplift. Alternative pricing rules have been studied to deal with this problem. Among them, Convex Hull (CH) prices, associated with minimum uplift, have attracted significant attention. Several US Independent System Operators (ISOs) have considered CH prices but resorted to approximations, mainly because determining exact CH prices is computationally challenging, while providing little intuition about the price formation rationale. In this paper, we approach the CH price estimation problem by relying on Dantzig-Wolfe decomposition and Column Generation, as a tractable, highly parallelizable, and exact method (yielding exact, not approximate, CH prices) with guaranteed finite convergence. Moreover, the approach provides intuition on the underlying price formation rationale. A test bed of stylized examples provides an exposition of the intuition in the CH price formation. In addition, a realistic ISO dataset is used to support scalability and validate the proof of concept.
This is joint work with D. Bertsimas (MIT), M. Caramanis (Boston University), and W. Hogan (Harvard University), under a recent ARPA-E PERFORM grant.
Biography: Panagiotis Andrianesis was recently appointed as a Research Associate Professor of Systems Engineering at Boston University. He is a graduate of the Hellenic Army Academy, also holding a B.Sc. degree in economics from the National and Kapodistrian University of Athens, and a Diploma degree in electrical and computer engineering from the National Technical University of Athens, Greece. He received his M.Sc. degree in production management and his Ph.D. degree in the area of design and analysis of electricity market mechanisms from the University of Thessaly, Greece. He was a Postdoctoral Associate at Boston University, affiliated with the Division of Systems Engineering and the Information and Data Science Research Group, and is currently a Senior Fellow of the Institute for Sustainable Energy. He has also been a Consultant and Research Associate of ECCO International Inc. His research interests include power system economics, electricity markets, operations research, optimization, and applied mathematics. Dr. Andrianesis is a Member of IEEE, the Power and Energy Society and the Control Systems Society, and of INFORMS. He is the recipient of the 2010 IEEE APS Pre-Doctoral Research Award.
|13/4/2021 (14h) [Location: MS Teams]|
Wei Ren and Alexander Stollenwerk (UCLouvain)
Welcome Seminar for new postdocs (#2)
Title: Event-Triggered Tracking Control of Networked Control Systems
Speaker: Wei Ren
Abstract: In this talk, I will show how to deal with the tracking control problem of networked control systems under both multiple networks and event-triggered mechanisms. Multiple networks connect the plant and the reference system with decentralized controllers to guarantee their information transmission, whereas event-triggered mechanisms reduce the information transmission over these networks. All networks are independent and asynchronous and have local event-triggered mechanisms, which are based on local measurements and determine whether the local measurements need to be transmitted. I will implement an emulation-based approach to develop a novel hybrid model for tracking control of networked control systems. Then, sufficient conditions are derived and decentralized event-triggered mechanisms are designed to ensure the tracking performance. This is joint work with Dimos V. Dimarogonas (KTH) and Raphael Jungers (UCLouvain).
Title: Uniform Signal Recovery from Non-Linear Observations
Speaker: Alexander Stollenwerk
Abstract: In this talk, I will consider the problem of estimating high-dimensional signal vectors from few non-linear measurements. The focus will be on uniform recovery guarantees, that is, results which show that recovery succeeds for an entire class of signal vectors using a fixed measurement ensemble. In the linear case, classical compressed sensing theory shows that efficient recovery algorithms exist, which are able to robustly estimate signals from very few measurements. The standing assumption in these types of results is that the signal vectors exhibit some form of low complexity (e.g. sparsity), which the algorithm may exploit. In contrast to the well-understood linear case, and despite significant results in various special cases, a general understanding of uniform recovery from non-linear observations is still missing. Taking a step in that direction, I will present a unified approach to this problem under the assumption of i.i.d. sub-Gaussian measurement vectors. This is joint work with Martin Genzel from Utrecht University.
|30/3/2021 (14h) [Location: MS Teams]|
Damien Lepage and Alexis Davin (N-SIDE)
Advanced analytics for Energy and Life Sciences applications
N-SIDE is a spin-off from UCLouvain that combines advanced analytics with business know-how to deliver optimization solutions to actors of the Energy and Life Sciences industries. Damien Lepage and Alexis Davin (both graduates of EPL - UCLouvain) will present how they use Applied Mathematics in their daily jobs to solve the challenges of those actors. In the Energy sector, we will present how mixed-integer programming can reduce investment costs in electrical grids at the European level. We will also give an example of how Machine Learning can be used for time-series forecasting of interconnector load. In the Life Sciences sector, we will talk about Monte Carlo simulations and optimization techniques to reduce waste and lower risk in clinical trials.
|23/03/2021 (14h) [Location: MS Teams]|
Ayush Bhandari (Imperial College London, UK)
Computational Sensing via Signal Folding: Variations on a Theme
Digital data capture is the backbone of all modern-day systems, and the “Digital Revolution” has been aptly termed the Third Industrial Revolution. Underpinning the digital representation is the Shannon-Nyquist sampling theorem and more recent developments such as compressive sensing approaches. The fact that there is a physical limit to which sensors can measure amplitudes poses a fundamental bottleneck when it comes to leveraging the performance guaranteed by recovery algorithms. In practice, whenever a physical signal exceeds the maximum recordable range, the sensor saturates, resulting in permanent information loss. Examples include (a) dosimeter saturation during the Chernobyl reactor accident, reporting radiation levels far lower than the true value, and (b) loss of visual cues in self-driving cars coming out of a tunnel (due to sudden exposure to light). To reconcile this gap between theory and practice, we introduce the Unlimited Sensing Framework (USF), which is based on a co-design of hardware and algorithms. On the hardware front, our work is based on a radically different analog-to-digital converter (ADC) design, which allows the ADCs to produce modulo or folded samples. On the algorithms front, we develop new, mathematically guaranteed recovery strategies.
In the first part of this talk, we prove a sampling theorem akin to the Shannon-Nyquist criterion followed by certain variations on the theme. We show that, remarkably, despite the non-linearity in the sensing pipeline, the sampling rate only depends on the signal’s bandwidth. Our theory is complemented with a stable recovery algorithm. We then discuss sampling of sparse and parametric signals. Beyond the theoretical results, we will also present a hardware demo that shows our approach in action. Moving further, we reinterpret the unlimited sensing framework as a generalized linear model that motivates a new class of inverse problems. We conclude this talk by presenting a research overview in the context of single-shot high-dynamic-range (HDR) imaging, sensor array processing and HDR computed tomography based on the modulo Radon transform.
|16/3/2021 (14h) [Location: MS Teams]|
Laurent Jacques (UCLouvain)
Keep the phase! Signal recovery in phase-only compressive sensing
In this seminar, we show how a sparse signal can be estimated from the phase of complex random measurements, in a "phase-only compressive sensing" (PO-CS) scenario. With high probability and up to a global unknown amplitude, we can perfectly recover such a signal if the sensing matrix is a complex Gaussian random matrix and the number of measurements is large compared to the signal sparsity. Our approach consists in recasting the (non-linear) PO-CS scheme as a linear compressive sensing model. We build it from a signal normalization constraint and a phase-consistency constraint. Practically, we achieve stable and robust signal direction estimation from the basis pursuit denoising program. Numerically, robust signal direction estimation is reached at about twice the number of measurements needed for signal recovery in compressive sensing.
This is a joint work with T. Feuillen (UCLouvain).
|9/3/2021 (14h) [Location: MS Teams]|
Hamza Fawzi (University of Cambridge, UK)
Polyhedral approximations of the positive semidefinite cone
LP solvers are more efficient than SDP ones and can scale to much larger problem sizes. A natural question is: how well can we approximate SDPs using LPs? In this talk I will discuss a hardness result showing that any "reasonable" approximation of the nxn positive semidefinite cone, must have extension complexity that is at least superpolynomial in n. The proof uses tools from Fourier analysis which I will present in the talk. Based on arXiv:1811.09649.
|2/3/2021 (14h) [Location: MS Teams]|
Daniele Catanzaro (UCLouvain)
The Balanced Minimum Evolution Problem: Introduction, Combinatorics, and Recent Advances
Balanced Minimum Evolution (BME) is one of the most important criteria to estimate phylogenies from molecular data (such as DNA, RNA, codon sequences, or whole genomes). This criterion can be stated in terms of a combinatorial optimization problem, called the Balanced Minimum Evolution Problem (BMEP), which is characterized by a highly nonlinear objective function and constraints and which is proven to be APX-hard. The optimal solution to the BMEP is highly desirable in practice due to its properties of statistical consistency. Unfortunately, the exact methods currently described in the literature struggle to solve even relatively small instances of the problem. In this talk we introduce the audience to the BMEP. We present some fundamental characteristics of the BMEP polytope and the connections between the BMEP and information entropy, as well as a massively parallel branch-and-bound algorithm which redefines the current state of the art for the problem.
|23/02/2021 (14h) [Location: MS Teams]|
Henk van Waarde (University of Groningen, Netherlands)
Data-driven control à la Finsler and Yakubovich
Recently, there has been a renewed interest in the problem of obtaining control laws directly from measured data. This talk focuses on the design of state feedback controllers using finite, noise-corrupted trajectories of linear systems. The goal is to construct control laws that achieve guaranteed stability and performance for all systems explaining these data. As our main results, we present necessary and sufficient conditions for the existence of such controllers, as well as control design methods in terms of tractable linear matrix inequalities. The main technical ingredients that underlie our approach are generalizations of Finsler's lemma and Yakubovich's S-lemma. These results are of interest by themselves, and will be discussed in detail during the talk.
|16/2/2021 (14h) [Location: MS Teams]|
Jean-Charles Delvenne (UCLouvain)
Markov Chains in Physics: Thermodynamic Uncertainty Relations and the Paradox of the Diode
Many microscopic phenomena can be adequately modelled with Markov chains. We will review briefly how the chief concepts of statistical physics, such as entropy production, heat, and energy, are expressed as Markov-chain-theoretic quantities. Within this formalism, we study two specific problems of recent interest, depending on time and interaction with the audience: 1) First, we formulate and prove a general Thermodynamic Uncertainty Relation, i.e., a relation showing that an observable of interest (e.g. total work, heat, or displacement over the cycle of an engine) must have a high variance unless a lot of extra power is spent: 'precision is costly'. A motivating example is kinesin, a protein which transports material across the cell. 2) Brillouin's paradox, or the Paradox of the Diode, states that a diode could apparently rectify the positive and negative fluctuations of thermal noise into an always-positive voltage source, thus extracting useful energy from noise. To solve this paradox, one must accept that the behaviour of a device cannot be defined independently of its environment: the Principle of Modularity is violated at microscopic scale. With this model in mind, we can derive novel models of MOS transistors and CMOS gates subject to thermal noise.
|9/2/2021 (14h) [Location: MS Teams]|
Nicolas Boumal (EPFL, Switzerland)
Finding stationary points on the bounded-rank variety: a geometric hurdle and a smooth workaround
The set of matrices of a certain size and rank is a smooth manifold. Unfortunately, it is not closed: this is uncomfortable for optimization. The closure of that manifold, namely, the set of matrices with bounded rank, is an algebraic variety but it is not smooth. That also is uncomfortable for optimization. Case in point, the norm of the (projected) gradient of the cost function can go to zero along a sequence even if the limit point of the sequence is not stationary. This can trick algorithms. I will characterize the geometry of this phenomenon. Then, I will discuss how lifting the problem to a smooth manifold makes it possible to converge to stationary points with certainty under mild conditions.
Joint work with Eitan Levin (CalTech) and Joe Kileel (UT Austin).
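As a concrete instance of the lifting idea mentioned in the abstract (our notation; the standard rank factorization, which may differ from the parametrization used in the talk), the non-smooth variety of bounded-rank matrices is the image of a smooth, unconstrained parametrization:

```latex
% The bounded-rank variety as the image of the factorization map:
\[
  \mathcal{M}_{\le r}
  = \{ X \in \mathbb{R}^{m \times n} : \operatorname{rank}(X) \le r \}
  = \{ L R^{\top} : L \in \mathbb{R}^{m \times r},\ R \in \mathbb{R}^{n \times r} \}.
\]
% Optimizing over the variety is replaced by a smooth problem in the factors:
\[
  \min_{X \in \mathcal{M}_{\le r}} f(X)
  \quad \longleftrightarrow \quad
  \min_{L,\, R} \ g(L, R) := f(L R^{\top}),
\]
% at the price of non-unique factorizations: for any invertible Q,
% g(LQ, R Q^{-\top}) = g(L, R).
```

The lifted problem is smooth everywhere, so standard optimization methods apply; the subtlety, which the talk addresses, is relating stationary points of g back to stationary points of f on the variety.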
|8/12/2020 (14h) [Location: MS Teams]|
Rainer von Sachs (UCLouvain)
Intrinsic Wavelet Smoothing of Curves of Hermitian Positive Definite Matrices
In multivariate time series analysis, non-degenerate autocovariance and spectral density matrices are necessarily Hermitian and positive definite, and it is important to preserve these properties in any estimation procedure. Our main contribution is the development of intrinsic wavelet transforms and nonparametric wavelet regression for curves in the non-Euclidean space of Hermitian positive definite matrices. The primary focus is on the construction of intrinsic average-interpolation wavelet transforms in the space equipped with a natural invariant Riemannian metric. In addition, we derive the wavelet coefficient decay and linear wavelet thresholding convergence rates of intrinsically smooth curves of Hermitian positive definite matrices. The intrinsic wavelet transforms are computationally fast, and nonlinear wavelet thresholding captures localised features, such as cusps or kinks, in the matrix-valued curves. In the context of nonparametric spectral estimation, the intrinsic (linear or nonlinear) wavelet spectral estimator satisfies the important property that it is equivariant under a change of basis of the time series, in contrast to most existing approaches. The finite-sample performance of the intrinsic wavelet spectral estimator based on nonlinear tree-structured trace thresholding is demonstrated by some simulated and real data examples.
All details of this work can be found in: Chau, J. and von Sachs, R., Intrinsic wavelet smoothing of curves of Hermitian positive definite matrices, Journal of the American Statistical Association, 2020, to appear (available at https://arxiv.org/pdf/1701.03314.pdf).
|01/12/2020 (14h) [Location: MS Teams]|
Daniel Molzahn (Georgia Institute of Technology, USA)
Applications of Convex Restriction Techniques to Electric Power System Optimization Problems
Electric power systems are critical infrastructure that underlie almost all aspects of modern society. With rapidly increasing quantities of renewable generation and the continuing expansion of electricity markets, electric power systems are undergoing significant changes. New algorithms for optimizing the design and operation of electric power systems are needed in order to enable these transformational changes. Power system optimization problems are constrained by the so-called “power flow” equations which relate the voltages and power injections. Due to the nonlinearity of these equations, power system optimization problems are nonconvex, resulting in a variety of algorithmic and theoretical challenges. Over the last decade, the research community has developed many new techniques for addressing the nonlinearities associated with the power flow equations. These techniques can be categorized as approximations, convex relaxations (outer approximations of the feasible space), and convex restrictions (inner approximations of the feasible space). After overviewing each of these categories of techniques, this presentation details recent progress in the development of convex restrictions of power system optimization problems. This presentation then describes two algorithms that use convex restrictions to 1) compute paths between operating points such that all points on the paths are feasible with respect to both the power flow equations and the operational constraints, and 2) solve robust optimization problems that ensure constraint satisfaction despite uncertain perturbations from renewable generators.
This is joint work with Dongchan Lee, Line Roald, and Kostya Turitsyn.
|24/11/2020 (14h) [Location: MS Teams]|
Alexandre Heeren (UCLouvain)
Can we use network analysis and tools from graph theory to radically transform clinical psychology?
For decades, science has attempted to understand the world by boiling concepts down to their simplest components in a quest for reductionism, but there is now a general scientific move to map the world's complexities. Clinical psychology has recently started to embrace this perspective, as well. To do so, researchers have begun to conceptualize symptoms of mental disorders as network systems using computational tools from graph theory and network analysis. In this talk, Alexandre Heeren will introduce the theoretical conjectures, methodological approaches, and levels of evidence of this "network takeover" currently transforming clinical psychology. He will also discuss the theoretical and methodological challenges and the potential translational value of this radically new approach to clinical psychology.
|17/11/2020 (14h) [Location: MS Teams]|
Julian Tachella (University of Edinburgh, UK)
Large system limit of convolutional neural networks for image denoising
Convolutional Neural Networks (CNNs) are now a well-established tool for solving computer vision and imaging problems, and modern CNNs often obtain state-of-the-art performance. Perhaps surprisingly, it has recently been shown that, despite being highly overparameterized, such networks can be trained with a single corrupted image and still perform as well as fully trained networks - a phenomenon encapsulated in the deep image prior (DIP). Here we attempt to explain what might be going on in terms of recent advances in Neural Tangent Kernel (NTK) theory, which characterizes the large system limit of neural networks. We identify strong links between CNN architectures and well-known signal processing techniques such as non-local means, showing that the function a CNN associates with a given image can be obtained in closed form without the need to train the network. Although our analysis shows that the NTK still does not fully explain the DIP phenomenon, we argue it suggests that the inductive bias of CNNs is better characterized by images with non-local self-similar structure.
|10/11/2020 (14h) [Location: MS Teams]|
Valentin Leplat (UMons)
Algorithms for Nonnegative Matrix Factorization
The low-rank approximation of a matrix is a key problem in data analysis, and is equivalent to linear dimensionality reduction (LDR). LDR techniques such as principal component analysis are powerful tools for the analysis of high-dimensional data. In this talk, we consider a particular low-rank approximation model, namely nonnegative matrix factorization (NMF). NMF is the problem of approximating a given nonnegative matrix, V, with the product of two nonnegative matrices, W and H. The nonnegativity constraints on W and H allow us to extract easily interpretable and meaningful information from the input data. However, they also make the problem much more difficult to solve (NP-hard). This talk focuses on the algorithmic aspects of computing an NMF of an input matrix. First, we introduce NMF models and algorithms. Second, we introduce a general framework to design algorithms (namely multiplicative updates) for NMF problems minimizing the β-divergence between V and WH, with disjoint equality constraints and penalty terms in the objective function. Finally, we introduce a novel framework using tools from conic optimization to compute an NMF which is exact, that is, with V = WH. This talk is based on joint work with Nicolas Gillis (UMons), Jérôme Idier (Ecole Centrale de Nantes), Yurii Nesterov (UCLouvain) and François Glineur (UCLouvain).
Keywords: nonnegative matrix factorization (NMF), β-divergences, disjoint constraints, exact NMF, conic programming, successive approximations.
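As a toy illustration of the multiplicative-update idea mentioned in the abstract, here is a minimal sketch for the simplest case (the Frobenius norm, i.e. β = 2, with no additional constraints); this is the classical Lee-Seung scheme, not the algorithms presented in the talk:

```python
import numpy as np

def nmf_multiplicative(V, r, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||V - WH||_F^2 with W, H >= 0."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)  # ratio of nonnegative terms: H stays >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)  # same for W
    return W, H

# Factor a random nonnegative matrix and measure the relative fit.
V = np.random.default_rng(1).random((20, 30))
W, H = nmf_multiplicative(V, r=5)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because each update multiplies the current iterate by a ratio of nonnegative quantities, the nonnegativity constraints are maintained automatically, which is what makes multiplicative updates a natural fit for NMF.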
|03/11/2020 (14h) [Location: MS Teams]|
Manuel Mazo Jr. (TU Delft)
The quest to schedule event-based controllers. An approach via formal abstractions.
Modern control systems are often implemented as networked systems, in which the bandwidth of a communication medium is shared among a number of applications. Furthermore, the act of communicating often also has an energy impact on the system infrastructure, e.g. in the case of wireless control systems. In the last decade, the study of controller implementation strategies aiming at reducing the communication footprint, to save bandwidth (and energy), has thus attracted great attention. Arguably, one of the most promising implementation paradigms in the literature is that of event-triggered control (ETC), in which communications are triggered by the state of the plant itself, making the system communicate only when necessary to retain stability of the closed loop. ETC implementations, while drastically reducing the amount of required bandwidth, result in aperiodic transmissions whose times of occurrence are highly unpredictable, leading to difficulties in scheduling such systems. These difficulties usually imply that the freed bandwidth may not be reusable by other (real-time) applications, and that energy savings may be completely lost, thus rendering ETC schemes inefficient. In this talk I describe an approach to the ETC scheduling problem through the construction of timed-automata models of the ETC systems' traffic. I start by introducing ETC and describing, from a control-theoretic perspective, the difficulty of scheduling such systems. Next, I introduce the notion of model abstraction we employ to make the scheduling problem solvable. Two approaches to the construction of these abstractions are described, together with how the resulting models can be employed to solve scheduling problems. The talk concludes by going briefly over our most recent advances, the difficulties, and possible opportunities for future research in this field.
|27/10/2020 (14:00) [Location: MS Teams]|
Isabella Loaiza, Anne Hoffmann, and Sébastien Colla (UCLouvain)
Welcome Seminar for new PhD students (#1)
Title: Controlling COVID-19: Labor structure more important than lockdown policy
Speaker: Isabella Loaiza
Abstract: The government response to the COVID-19 pandemic in Colombia has been characterised by the closure of international borders and strict lockdown measures. Although a nationwide lockdown was dictated, its implementation has been administered by municipal governments, so lockdown measures vary across municipalities. Different measures allow residents to go out for essential shopping based on the last digit of their national ID card, their gender, or a mix of both. This natural experiment creates an opportunity to gain insight into the effects of different lockdown policies on urban mobility. We investigate the extent to which different policies have affected residents' mobility and compare pre- and post-lockdown disruption of commute patterns. While we find that residents of municipalities with more severe policies had, on average, greater reductions in trip frequency during lockdown, the effect of policy severity is only about half that of city size, with residents of larger cities reducing mobility more. We also find that policy severity had no bearing on the extent to which residents reduced their daily travel distance during lockdown, nor on the likelihood that residents abandoned their established commuting patterns. However, we do find that municipalities with higher GDP, larger populations, and higher labor formality have experienced disproportionately greater reductions in mobility and greater disruptions of commuting patterns. Interestingly, this difference in mobility reduction across cities of varying size and wealth is more pronounced than the difference within cities based on neighborhood socioeconomic stratum. To validate our results, which are derived from CDR data, we use Google mobility data and find a strong correlation with our original results.
Title: Multisensory Integration in Dynamic Environments
Speaker: Anne Hoffmann
Abstract: In my project I am investigating how the brain integrates information from different sensory modalities in a dynamic environment using a combination of reaching experiments and computational modeling. Human motor proficiency results from an intricate interaction between sensory and motor systems in the brain. The nervous system plans and controls movements using vision, proprioception, and tactile sensors. Being able to combine sensory information about the body and the environment in real-time is essential for executing fast, adaptable, and accurate movements in the presence of uncertainty and external disturbances. Today, most of our knowledge about multisensory integration is based on experiments in which the sensory input is constant over time. These static cue-combination tasks have shown that humans optimally integrate information from different sensory modalities to form a coherent percept of their body and the environment. However, when we move, the position of our body is constantly changing and sensory organs provide the brain with disparate and asynchronous information. How may the human brain resolve these computational challenges? To address this important question, I am conducting a series of psychophysical experiments on healthy adults in which they perform arm movements while encountering various visual and force perturbations. My empirical results will be interpreted in the context of Optimal Feedback Control theory. Developing a computational explanation of dynamic multisensory integration will have important implications for our fundamental understanding of the sensorimotor system, as well as for clinical applications. Indeed, most of our everyday behavior requires coordinated motor actions and deficits in multisensory integration may lead to biased or unreliable perception of body and environment, which ultimately impairs motor performance.
Title: Automatic Performance Estimation for Decentralized Optimization
Speaker: Sébastien Colla
Abstract: Decentralized optimization consists in having interconnected computers collaboratively optimize a function f by iteratively updating estimates of the optimal solution. It is particularly relevant in machine learning applications where f depends on datasets too large to be stored on a single computer, or in sensor network, multi-robot and micro-grid applications. We take the view that progress in this promising field is hindered by the high complexity of analyzing these decentralized algorithms, due to the interplay between the effects of optimization and of the interconnection network. Most performance bounds are indeed complex and conservative, making it difficult to compare algorithms and understand the reasons limiting their performance. I will create a mathematical formulation that allows one to automatically compute the exact performance of any decentralized optimization algorithm and to identify the bottleneck instances, providing insight into what limits the performance. This will allow for rapid exploration of ideas and iterative improvement, and will help develop a better understanding of decentralized optimization algorithms. I intend to achieve this by extending the Performance Estimation (PEP) approach to the decentralized case. PEP relies on formulating the evaluation of an algorithm's worst-case performance as an optimization problem itself, whose variables are the objective functions and initial conditions. Such problems are very complex, but have been shown to be solvable exactly. One of my main challenges will be to embed descriptions of classes of interconnection networks in the PEP formulations.
|20/10/2020 (14:00) [Location: MS Teams]|
Evgeniia Vorontsova and Masoud Ahookhosh (UCLouvain)
Welcome Seminar for new postdocs (#1)
Title: Oracle Complexity Separation in Convex Optimization
Speaker: Evgeniia Vorontsova
Abstract: We consider empirical risk minimization problems in machine learning. Such problems often consist of several blocks that can be treated with different types of oracles, e.g., full gradients, stochastic gradients or coordinate derivatives. The optimal oracle complexity is known and achievable separately for each type of oracle. We propose a generic framework to combine several optimal algorithms for different types of oracles in order to achieve the separate optimal oracle complexity for every block. As a particular example, we demonstrate that, in the case of a combination of full and stochastic gradient oracles, our approach leads to the optimal number of oracle calls separately for the full gradient part and the stochastic part.
Title: Accelerated first-order methods for large-scale convex optimization: nearly optimal complexity under strong convexity
Speaker: Masoud Ahookhosh
Abstract: In this study, four accelerated (sub)gradient algorithms (ASGA) are introduced and analyzed for solving several classes of convex optimization problems with a large number of variables. The objective function is the sum of two convex functions in which the first one is weakly smooth (either nonsmooth with the bounded variation of subgradients or smooth with Hölder continuous gradients). Specifically, two novel estimation sequences are presented to majorize the objective function, and accordingly two iterative schemes are developed with respect to these upper estimations. In both cases, the first scheme requires the smoothness parameter and a Hölder constant, while the second scheme is fully parameter-free (except for the strong convexity parameter which we set zero if it is not available) at the price of applying a finitely terminated backtracking line search. Notably, the proposed algorithms attain the optimal complexity for smooth problems with Lipschitz continuous gradients, nonsmooth problems with the bounded variation of subgradients, and weakly smooth problems with Hölder continuous gradients. In addition, for strongly convex problems, they are optimal for smooth problems while nearly optimal for nonsmooth and weakly smooth problems. Finally, numerical experiments for applications in sparse optimization and machine learning are reported, which support our theoretical foundations.
|13/10/2020 (14h) [Location: Euler building (room A.002) + MS Teams]|
Loïc Van Hoorebeeck (UCLouvain)
Nonconvex and nonsmooth separable objectives: what if we use piecewise linearization?
In this presentation, we detail an algorithm that aims at the global solution of a structured nonsmooth and nonconvex problem. The algorithm consists in solving a sequence of surrogate optimization problems where the original objective function is replaced by a carefully constructed piecewise underapproximation. The method is applied to the economic dispatch problem, an important task in power systems which consists in the optimal allocation of power among committed units. In particular, the consideration of the valve-point effect in gas power plants makes this problem a challenging nonsmooth and nonconvex optimization problem, which fits the framework developed here.
|06/10/2020 (14:00) [Location: MS Teams]|
Alexander Shestopaloff (School of Mathematical Sciences, Queen Mary University of London, UK)
Scaling Monte Carlo inference for State-Space Models
The iterated conditional Sequential Monte Carlo (cSMC) method is a particle MCMC method commonly used for state inference in non-linear, non-Gaussian state space models. Standard implementations of iterated cSMC provide an efficient way to sample state sequences in low-dimensional state space models. However, efficiently scaling iterated cSMC methods to perform well in models with a high-dimensional state remains a challenge. One reason for this is the use of a global proposal, without reference to the current state sequence. In high dimensions, such a proposal will typically not be well-matched to the posterior and impede efficient sampling. I will describe a technique based on the embedded HMM (Hidden Markov Model) framework to construct efficient proposals in high dimensions that are local relative to the current state sequence. A second obstacle to scalability of iterated cSMC is not using the entire observed sequence to construct the proposal. Typical implementations of iterated cSMC use a proposal at time t that relies only on data up to time t, even when data after time t is available. In high dimensions and in the presence of informative data, such proposals can become inefficient and slow down sampling. I will introduce a principled approach to incorporating all data in the cSMC proposal at time t. By considering several examples, I will demonstrate that both strategies improve the performance of iterated cSMC for state sequence sampling in high-dimensional state space models.
|29/09/2020 (14h) [Location: Euler building (room A.002) + MS Teams]|
Benoît Pairet (UCLouvain)
MAYO: a pipeline for planetary systems imaging
Direct imaging of planetary systems is a challenging task, mainly because of the tiny angular separation between the faint circumstellar signal and the overwhelmingly bright star. Even with state-of-the-art instruments, this task heavily relies on post-processing. Indeed, the acquired images are plagued with bright quasi-static spots, called speckles, that hide the astrophysical signal, made of exoplanets and circumstellar disks. The latter are extended structures, which are inherently difficult to image, and methods tailored for exoplanet imaging highly distort the morphology of disks. We thus propose a processing pipeline based on prior knowledge about disks, exoplanets and speckles that produces faithful images of disks, both in terms of morphology and intensity distribution. In addition, our pipeline is designed to separate exoplanets from disks.
|22/09/2020 (14h) [Location: Euler building (room A.002) + MS Teams]|
John Lee (UCLouvain)
Mathematics in medical physics for radiation oncology
Radiation oncology treats cancer by irradiating patients locally with ionizing beams, like X-rays, protons or even heavier particles. Treatment planning can be seen as a ballistic problem whose solution is such that beams reach a certain dose on a target volume to control the tumor, while they do not exceed some threshold dose in organs at risk to limit adverse effects. In other words, treatment planning can be translated into a multi-objective optimization problem or a constrained optimization problem, the former allowing better "tweakability" to satisfy complex clinical objectives. Current research in radiation oncology, including in the UCLouvain/MIRO group, aims at modeling treatment uncertainties with multiple error scenarios in minimax problems, to make treatments more robust against them. Other lines of investigation go towards faster, more user-friendly multi-objective interaction with Pareto frontier pre-calculation. Yet another direction aims at improving accuracy in objective function evaluation by replacing fast "analytical" dose computation with Monte Carlo simulations. Depending on simulation speed, hybrid optimization schemes can be devised, interleaving slow, accurate checkpoints of the objective function with faster, cheaper surrogates. Further perspectives include clinical decision support systems involving artificial intelligence. This entails automating the last few manual steps in treatment planning, simulation, and evaluation with respect to clinical endpoints. Such automation consists for instance in predicting the optimal planned dose with image-to-image deep convolutional neural networks like U-net. In these long workflows, though, the challenge is to evaluate how uncertain the prediction is and how this uncertainty propagates to the endpoints the physicians consider when making their treatment decision.
Most treatment workflows feed on medical images, whose quality is key, and lately reconstruction methods rely more and more on inverse problem solving to overcome the pitfalls of classical algebraic reconstruction.
|15/09/2020 (14h) [Location: Euler building (room A.002) + MS Teams]|
Christophe Kervazo (UMons)
Blind source separation of linear-quadratic near-separable mixtures
In blind source separation (BSS), the data sets are assumed to stem from the mixing of elementary signals, called sources. Although BSS methods assuming linear mixing models have enjoyed a wide range of successes, in this presentation we depart from this setting and focus on the more advanced linear-quadratic (LQ) mixing models, which also include additional order-two products of the sources. Specifically, we extend the near-separable nonnegative matrix factorization (NMF) framework to LQ mixtures, enabling the design of an efficient two-step unmixing algorithm coined SNPALQ + PP. The algorithm is shown to be robust to noise under the discussed conditions. SNPALQ + PP furthermore exhibits good separation quality in realistic hyperspectral unmixing numerical experiments.
|09/06/2020 (14:00) [Location: MS Teams]|
Yurii Nesterov (UCLouvain)
Online prediction of COVID-19 dynamics: Belgian case study
In this talk, we present a new axiomatic model of epidemic development, consistent with the very special features of COVID-19. This is a discrete-time switching linear model for predicting the dynamics of the total number of infected persons and the concentration of asymptomatic virus holders in the population. A small number of its parameters can be tuned using the available real-time dynamic data on virus propagation. Therefore, this model also provides us with online predictions of the future. As an application example, we present the conclusions of this model for the COVID-19 epidemic in Belgium in 2020. For several months, our predictions were exact, typically within an accuracy of 0.5%.
|26/05/2020 (14:00) [Location: MS Teams]|
Sébastien Colla (UCLouvain)
A 20-minute talk given by a student (2/2)
Title: Derivative-Free Optimization: Directional direct-search methods
Abstract: Derivative-free optimization (DFO) is the study of "zero-order methods" that use no derivatives to find (or approach) the optimum of a problem. Nowadays there are many optimization problems for which the derivative computation is very expensive, noisy, or even impossible. Without the use of derivatives we cannot expect the same performance as with classical first-order methods, but it is still possible to have methods that are guaranteed to converge to a stationary point. This is the case for directional direct-search methods, which poll multiple points in well-chosen directions at each iteration and choose as the next iterate the one that (best) decreases the objective. A basic example is the coordinate search method. The seminar will focus on this family of direct-search derivative-free methods and their convergence results.
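A minimal sketch of the coordinate search method mentioned in the abstract (purely illustrative: the poll set is the 2n signed coordinate directions, a poll that finds simple decrease is accepted, and an unsuccessful poll halves the step size):

```python
import numpy as np

def coordinate_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Directional direct search polling the +/- coordinate directions."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    n = len(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):  # poll set D = {+e_i, -e_i}
            y = x + step * d
            fy = f(y)
            if fy < fx:                # simple decrease: accept and keep the step
                x, fx, improved = y, fy, True
                break
        if not improved:
            step *= 0.5                # unsuccessful poll: shrink the step
    return x, fx

# Nonsmooth test objective, minimized at (1, -2) with value 0.
f = lambda x: (x[0] - 1) ** 2 + abs(x[1] + 2)
x, fx = coordinate_search(f, [0.0, 0.0])
```

Note that no gradient is ever evaluated: only function values at the polled points drive the iteration, which is exactly what makes the method applicable when derivatives are expensive, noisy, or unavailable.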
|19/05/2020 (13:30) [Location: MS Teams]|
Mattéo Couplet and Julien Calbert (UCLouvain)
Two 20-minute talks given by the students (1/2)
Please note the unusual time of this event (starting at 13:30).
Speaker: Mattéo Couplet
Title: Random Projections, Compressive Sensing and Compressive Classification
Abstract: About a decade ago, groundbreaking papers revealed the possibility of faithfully recovering high-dimensional signals from some incomplete information about them. They gave rise to the field of compressive sensing, which is now in a mature state and whose foundations rely on an elegant mathematical theory. This talk will start with an overview of the field. Then, the compressive classification task will be addressed, for which it can be shown that even fewer compressive measurements are required than for signal reconstruction, using ideas from high-dimensional convex geometry.
Speaker: Julien Calbert
Title: Proximal methods
Abstract: In this talk, we will discuss a particular class of optimization algorithms: proximal algorithms. These algorithms are well suited to solving convex, non-smooth, constrained and large-scale optimization problems. They are based on the successive solution of a convex subproblem defined by the proximal operator. These methods are of great interest because they work under general conditions, they can be fast when a closed form can be derived for the proximal operator, and under certain conditions they can be implemented in a distributed way. In addition, proximal methods can be seen as an additional level of abstraction over classical optimization methods. They form a theoretical framework for simultaneously demonstrating the convergence of several optimization methods such as the gradient method or the projected gradient method. Finally, we will see several applications in the field of signal processing.
|12/05/2020 (14:00) [Location: MS Teams]|
Philippe Lefèvre (UCLouvain)
Prediction and uncertainty in the control of visual tracking
Vision plays a key role in behaviour. In humans, visual acuity is non-homogeneous across the visual field and is characterized by a foveal zone of very high resolution. Therefore, for human observers, the quality of visual inputs is heavily influenced by their ability to detect and fixate parts of the visual scene that are of particular interest. When everything in the environment is stationary, this is performed with saccades, fast orienting eye movements that allow the visual scene to be scanned. However, in a dynamic environment, objects can move with respect to the observer, requiring moving objects to be tracked with smooth pursuit eye movements. Saccades and smooth pursuit eye movements are two different modes of oculomotor control. However, behavioural and neurophysiological data have demonstrated that both types of eye movements work in synergy for visual tracking. This suggests that saccades and pursuit are two outcomes of a single sensorimotor process that aims at orienting the visual axis. More recently, it has been hypothesized that oculomotor behaviors integrate sensory (visual) and prior information to overcome sensorimotor delays and noise. After much debate about this process, reliability-based integration has recently been proposed, and several models of smooth pursuit now include recurrent Bayesian integration or Kalman filtering. In this talk, I will present some recent experimental and modeling work demonstrating the key role played by prediction and uncertainty in the synergy between saccades and smooth pursuit in the control of visual tracking.
|05/05/2020 (14:00) [Location: MS Teams]|
Tim Marrinan (COLORAMAP Group, UMons)
Identifiability and Detection of Multiset Correlation Structure
Relationships between random vectors with deterministic but unknown linear mixing are commonly assessed via canonical correlations. Detecting this correlation structure allows us to infer unseen associations between the underlying variables. For example, this type of analysis has been used to identify conditions that lead to a rise in sea surface temperature and to localize regions of brain activity used for visual recognition tasks. For pairs of random vectors, general conditions have been discovered under which the correlation structure is identifiable, but for collections of more than two random vectors such general conditions remain elusive. In this lecture we describe conditions under which a subset of multiset correlation structures can be identified, and demonstrate how the correlation structures can be detected from the augmented coherence matrix of the data.
|28/04/2020 (14:00) [Location: MS Teams]|
Julien Hendrickx (UCLouvain)
Challenges in Open Multi-Agent Systems Subject to Arrivals and Departures
Scalability and robustness to agent losses are often cited as advantages of multi-agent systems, but almost all theoretical results apply to systems with a fixed composition. We consider open systems, which agents continuously leave and join during the process considered. We discuss the general challenges of analyzing and designing algorithms for such systems. Challenges in the analysis come from the fact that each arrival or departure implies a discontinuity and a change in the dimension of the system state. Moreover, these repeated events forbid any asymptotic convergence. On the design side, arrivals perturb the system and may also modify the algorithm's objective. Departures result in information loss, or conversely, in a persistent influence of information that is no longer relevant. Moreover, correction mechanisms designed to cope with a small number of arrivals or departures may fail, or even have a counterproductive and destabilizing effect, when these events keep taking place. We focus in particular on averaging, decentralized estimation, computation of the maximum value, and decentralized optimization. We also present some fundamental performance limitations in open systems.
|21/04/2020 (14:00) [Location: MS Teams]|
Christophe De Vleeschouwer (UCLouvain)
Deep Learning in Computer Vision
Deep learning and more specifically Convolutional Neural Networks (CNNs) are now predominant in computer vision. In this seminar, we survey a variety of image processing problems and present how they are formulated in terms of CNN optimization, using image-wise or pixel-wise predictions. We then introduce some of the practical issues encountered when training and deploying neural networks in real life, which reveals that a major shortcoming of neural networks lies in the poor understanding we have of their inner processes and generalization capabilities. Since a solid mathematical framework is still lacking to explain neural network performance, we summarize a few recent experimental results revealing the tight connection between network pruning and generalization. Those results suggest looking for an explanation in the way neurons split data into diverse and representative binary classes.
|03/03/2020 (15:00) [Location: Euler building (room A.002)] |
Special event: visit of intoPIX
The students from the Seminar in Applied Mathematics (LINMA2120) organize a visit to intoPIX, an LLN-based world-class company specialized in image processing, video compression, and security technologies.
Registration is free but mandatory (the number of places is limited; the registration form has been sent by email).
The schedule is as follows:
15:00: meeting at the entrance of the Euler building for group departure (carpool)
15:30: start of the visit
16:30 (approx.): end of the visit and drink
|03/03/2020 (14:00) [Location: Euler building (room A.002)]|
Alexandre Mauroy (UNamur, Belgium)
Koopman operator-based identification of nonlinear ODEs and PDEs
The Koopman operator provides a linear description of nonlinear systems. In this talk, we will exploit this description in the context of nonlinear identification and parameter estimation. Since the generator of the semigroup of Koopman operators is directly connected to the underlying dynamics, we will exploit the key idea that identifying the linear generator is equivalent to identifying the nonlinear dynamics. Using this systematic approach, we will report on two dual linear techniques. These techniques will be illustrated with several examples and complemented with convergence properties derived from the theory of strongly continuous linear semigroups. Finally, the proposed identification framework will be extended to nonlinear PDEs by considering Koopman operator semigroups acting on a space of nonlinear functionals.
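A toy sketch of the underlying idea that a well-chosen dictionary of observables makes nonlinear dynamics linear. This is a discrete-time, extended-DMD-style least-squares fit on a hypothetical system whose dictionary {x1, x2, x1²} is exactly Koopman-invariant; it is purely illustrative and not the speaker's generator-based identification method:

```python
import numpy as np

# Hypothetical nonlinear map: x1+ = a*x1, x2+ = b*x2 + c*x1^2.
a, b, c = 0.9, 0.5, 1.0
def step(x):
    return np.array([a * x[0], b * x[1] + c * x[0] ** 2])

def psi(x):
    """Dictionary of observables in which the dynamics become linear."""
    return np.array([x[0], x[1], x[0] ** 2])

# Sample states, lift them, and lift their one-step successors.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
Psi = np.array([psi(x) for x in X])
PsiNext = np.array([psi(step(x)) for x in X])

# Least-squares fit of the finite-dimensional Koopman matrix K: Psi @ K ≈ PsiNext.
K, *_ = np.linalg.lstsq(Psi, PsiNext, rcond=None)
```

Because the dictionary is invariant under the dynamics here, the fit recovers the Koopman matrix exactly, and its entries directly encode the parameters a, b, c of the nonlinear system; identifying the linear (generator or transfer) operator is thus equivalent to identifying the nonlinear dynamics.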
|25/02/2020 (14:00) [Location: Euler building (room A.002)]|
Alexander Stollenwerk (TU Berlin, Germany)
Binarizing Johnson-Lindenstrauss transforms
In many problems from data science, the encountered data is massive and encoded by high-dimensional vectors. For applications it is therefore appealing to first reduce its dimensionality before performing computations. If enough geometric information between data vectors is preserved, then the problem can be solved approximately using the embedded data. In this talk, we will focus on the problem of encoding a set of high-dimensional vectors into a small number of bits, while approximately preserving their pairwise Euclidean distances. By binarizing fast Johnson-Lindenstrauss transforms, we will see that this can be achieved efficiently, even for infinite data sets of low-complexity. The talk is based on joint work with Sjoerd Dirksen.
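As a toy illustration of one-bit embeddings: keeping only the signs of a dense Gaussian projection yields a bit string per vector, and the expected fraction of differing bits between two embeddings equals the angle between the vectors divided by π. The talk's results concern binarized *fast* Johnson-Lindenstrauss transforms and Euclidean distances, so this is only a simplified sketch of the general phenomenon:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 200, 4000                   # ambient dimension, number of bits
A = rng.standard_normal((m, d))    # dense Gaussian map (a fast JL transform would replace this)

def one_bit_embed(x):
    """Keep only the signs of the random projection: m bits per vector."""
    return np.sign(A @ x)

x = rng.standard_normal(d)
y = rng.standard_normal(d)

# Fraction of differing bits vs. the true angle between x and y.
hamming = np.mean(one_bit_embed(x) != one_bit_embed(y))
angle = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))
# In expectation, hamming equals angle / pi, so the Hamming distance
# between the bit strings approximately encodes the geometry of the data.
```

With m = 4000 bits the empirical Hamming fraction concentrates tightly around angle/π, which is why a small number of bits per vector suffices to approximately preserve pairwise geometry.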
|19/02/2020 (14:00) [Location: Euler building (room A.207)]|
Miel Sharf (Technion, Israel)
Model-Free Practical Cooperative Control Using Passivity and Network Optimization
Please note the unusual location and day of this talk.
Networked control systems have become an important field of research in recent times, as multi-agent and large-scale networks have become common and the system-of-systems design philosophy has become state-of-the-art. The ever-growing complexity of systems makes the process of obtaining a reliable mathematical model arduous and time-consuming, especially for networked systems, which include many different agents corresponding to many different models. Coincidentally, technological advancements have introduced more efficient ways to gather, store, and analyze data. This motivates the idea of data-driven controller design, in which data are used to solve control problems, either by building an approximate model of the system or by learning a control policy directly.
In this talk, we'll consider a problem in which the relative output between the (nonlinear) agents must converge ε-close to prescribed values (e.g., consensus or formation control). Our approach relies on a connection between passivity theory and network optimization, as well as high-gain control. We use network optimization not only to show that there exists a gain attaining the desired control goal, but also to prescribe a data-driven method of estimating it. We further present another data-driven solution using iterative sampling schemes. We will also discuss how to verify the required passivity assumptions and illustrate the developed methods in a case study.
The talk is based on joint work with Anne Romer, Daniel Zelazo and Frank Allgöwer.
|18/02/2020 (14:00) [Location: Euler building (room A.002)]|
Adrien Taylor (INRIA, France)
Computer-aided worst-case analyses and design of first-order optimization methods
The goal of this presentation is to provide a high-level overview of recent computer-assisted approaches for analyzing and designing first-order optimization methods (mostly for convex optimization). In other words, we want to illustrate how to use computer algebra software (symbolic computations) and/or numerical tools (semidefinite programming solvers) for analyzing optimization schemes. The presentation will be example-based, as the main ingredients necessary for understanding the methodologies are already present in basic examples such as the vanilla gradient method. To convince the audience, we will provide further examples, including novel analyses of the Douglas-Rachford splitting and a variant of the celebrated conjugate gradient method.
|11/02/2020 (14:00) [Location: Euler building (room A.002)]|
Sasa V. Rakovic (Beijing Institute of Technology)
Minkowski, Lyapunov, and Bellman: Inequalities and Equations for Stability and Optimal Control
The algebraic Lyapunov and Bellman equations and inequalities are cornerstone objects in linear systems theory. These equations and inequalities are concerned with convex quadratic functions verifying stability, in the case of the Lyapunov equation, and providing optimality, in the case of the Bellman equation. Rather peculiarly, prior to my work in the area, very little was known about Lyapunov and Bellman equations and inequalities within the space of Minkowski functions of nonempty convex compact sets containing the origin in their interior. Key results of my related research on these fundamental problems have provided a characterization of solutions to both the Lyapunov and Bellman equations within the space of Minkowski functions, referred to as the Minkowski Lyapunov and Minkowski Bellman equations. The talk outlines the key results underpinning these two fundamental equations and related inequalities, and draws parallels to classical results on the algebraic Lyapunov and Bellman equations and inequalities.
|04/02/2020 (14:00) [Location: Euler building (room A.002)]|
Antoine Legat, Ousmane Diao, and Julien Dewez (UCLouvain)
Three 15-minute talks
Speaker: Antoine Legat
Title: Local dynamics identification via a graph-theoretical approach
Abstract: In this talk we propose a novel approach to network identification: the recovery of the local dynamics from the global input-output behavior and the network structure. First, we express identifiability conditions on the rank of transfer matrices, which can be computed by routing vertex disjoint paths in the network. We then address the unstudied case of partial measurement and excitation, and derive a linearization which yields promising conditions for the identifiability of the whole network, backed up by our implementation.
Speaker: Ousmane Diao
Title: Mathematical modeling of malaria transmission taking into account the influence of current prevention and treatment
Abstract: Malaria is a parasitic infection transmitted by the bite of a mosquito (the female Anopheles) and is very deadly for humans. Senegal's National Malaria Control Program provides many strategies to reduce morbidity and mortality: insecticide-treated bed-nets (ITNs), vaccines, artemisinin-based combination therapy (ACT), indoor residual spraying (IRS), etc. Mathematical modelling may play an important role in operationalizing and optimizing these control strategies. We therefore want to develop a compartmental mathematical model of malaria transmission to show the effects of prevention and treatment.
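As a hedged illustration of what such a compartmental model might look like, here is the classical Ross-Macdonald system coupling the infected fractions of humans and mosquitoes; all parameter values are illustrative placeholders, not calibrated to Senegal or to the speaker's model:

```python
# Ross-Macdonald sketch: ih, iv are the infected fractions of humans and
# mosquitoes (vectors).  All parameter values are illustrative placeholders.
a = 0.3   # mosquito biting rate (lowered by ITNs)
b = 0.5   # mosquito-to-human transmission probability per bite
c = 0.5   # human-to-mosquito transmission probability per bite
m = 2.0   # mosquitoes per human
r = 0.01  # human recovery rate (raised by treatment such as ACT)
g = 0.1   # mosquito death rate (raised by IRS)

ih, iv, dt = 0.01, 0.0, 0.1
for _ in range(20000):  # forward Euler integration to (near) steady state
    dih = a * b * m * iv * (1 - ih) - r * ih
    div = a * c * ih * (1 - iv) - g * iv
    ih, iv = ih + dt * dih, iv + dt * div
# With these values R0 = a*a*b*c*m/(r*g) > 1, so an endemic equilibrium emerges
```

Prevention and treatment enter through the parameters, so the effect of each intervention can be read off by re-running the simulation with the corresponding rate changed.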
Speaker: Julien Dewez
Title: Lower bounds for the nonnegative rank
Abstract: Nonnegative matrix factorization (NMF) consists in finding two nonnegative matrices whose product is equal to/approximates a nonnegative data matrix. It has many interesting applications in data analysis when the observed data is nonnegative by nature, e.g., in text mining, in hyperspectral unmixing, and many more. Computing the minimum dimension of such a factorization, called the nonnegative rank, is NP-hard in general. However, it also has interesting applications, e.g., in combinatorial optimization, in statistics, etc. In this talk, we introduce new lower bounds on the nonnegative rank based on properties of some geometric representations of the matrix.
|17/12/2019 (14:00) [Location: Euler building (room A.002)]|
Florentin Goyens (Alan Turing Institute and University of Oxford, UK)
Nonlinear matrix recovery
In this talk we investigate the recovery of a partially observed high-rank matrix whose columns obey a nonlinear structure. The structures we cover include points grouped as clusters or belonging to an algebraic variety, such as a union of subspaces. Using a mapping to a space of features, we reduce the problem to a constrained non-convex optimization formulation, which we solve using Riemannian optimization methods. We also show how the same tools allow us to align point clouds that belong to the same algebraic variety (point set registration).
|10/12/2019 (14:00) [Location: Euler building (room A.002)]|
Pascal Barbara (ENS Lyon, France)
How scale-free texture segmentation turns out to be a strongly convex optimization problem?
Texture segmentation still constitutes an ongoing challenge, especially when processing large-size images. The aim of this work is twofold. First, we provide a variational model for simultaneously extracting and regularizing local texture features, such as local regularity and local variance. For this purpose, a scale-free wavelet-based model, penalised by a Total Variation regularizer, is embedded into a convex optimisation framework. Second, we investigate convergence acceleration strategies, relying on strong-convexity of the objective function, in order to deal with computational cost induced by the minimization. Finally, we illustrate the developed procedures on real-world images of multiphasic flows.
|03/12/2019 (14:00) [Location: Euler building (room A.002)]|
Paschalis Tsiaflakis (Nokia Bell Labs, Antwerp)
Optimization for ultrabroadband internet access
Broadband internet access is an essential part of modern society. Offering universal, cheap, and ultrabroadband internet connectivity is a top priority within the European Digital Agenda. The Nokia Bell Labs Fixed Networks team located in Antwerp is recognized for pioneering and contributing to breakthroughs in wireline broadband internet access technologies. These technologies enable data communication at gigabit speeds between residential customers and the fiber backbone, and are characterized by optimization-intensive physical-layer designs. This seminar will explain a number of optimization problems that are considered in practice to innovate communication technologies. The main aim of this seminar is to show that the application of optimization theory is very relevant for industry.
|26/11/2019 (14:00) [Location: Euler building (room A.002)]|
Rolando Mosquera and Antoine Falaize (LaSIE, Université de la Rochelle, France)
Some geometric and machine learning methods for model order reduction
Geometric methods have proved to be effective for data mining (simulations, measurements) in the context of nonlinear model order reduction problems in mechanics (e.g., nonlinear methods for dimensionality reduction such as Local Principal Component Analysis, LPCA, or Locally Linear Embedding, LLE). In addition, deep learning methods based on so-called neural network (NN) architectures have proved to be effective for a class of problems involving data classification, detection, and compression.
We first present a set of geometric tools for the interpolation of reduced bases obtained by proper orthogonal decomposition (POD) of simulation results. More precisely, by exploiting the intrinsic geometry of the subspaces spanned by the POD bases, we construct different interpolators on the Grassmann manifold.
Then, we use neural network architectures to extract geometric information about the data manifold. More precisely, a metric on the manifold can be obtained from the coordinate representations learned by the network. Conversely, classical neural network architectures can be informed by the geometry of the data to build optimizers that respect the local geometry. These approaches are illustrated on model order reduction problems in mechanics.
|19/11/2019 (14:00) [Location: Euler building (room A.002)]|
Tran Nhan Tam Quyen (Institute for Numerical and Applied Mathematics, University of Goettingen, Germany)
Parameter identification in elliptic PDEs from boundary data: finite element method with total variation regularization
In this presentation we address the problem of identifying the conductivity and the source term in elliptic PDEs from several sets of boundary data. The finite element method with a total variation regularization technique is applied to tackle the ill-posed identification problem. We analyze the stability of the proposed approach and the convergence of the finite element regularization approximations to the sought parameters, which confirms that parameters distributed inside the physical domain can be reconstructed from a finite number of observations on the boundary.
|12/11/2019 (14:00) [Location: Euler building (room A.002)]|
Necmiye Ozay (Univ. Michigan, USA)
Control synthesis for large collections of dynamical systems with counting constraints
Can we control a swarm of systems and give guarantees on their collective behavior? In this talk I will discuss an instance of this problem: given a large almost homogeneous collection of dynamical systems and a novel class of safety constraints, called counting constraints, how to synthesize a controller that guarantees the satisfaction of these constraints. Counting constraints impose restrictions on the number of systems that are in a particular mode or in a given region of the state-space over time. I will present an approach for synthesizing correct-by-construction controllers to enforce such constraints. Our approach exploits the structure of the problem, the permutation invariance of dynamics due to homogeneity and the permutation invariance of counting constraints, to achieve massive scalability. I will discuss several potential applications of this approach and illustrate it on the problem of coordinating a large collection of thermostatically controlled loads while ensuring a bound on the number of loads that are extracting power from the electricity grid at any given time.
|05/11/2019 (14:00) [Location: Euler building (room A.002)]|
Guillaume Van Dessel and Brieuc Pinon (UCLouvain)
Two 20-minute talks
Speaker: Guillaume Van Dessel
Title: Convex optimization using tunable first-order inexact oracles (TFOIO)
Abstract: In this talk we introduce and motivate the concept of a tunable first-order inexact oracle (TFOIO) in convex optimization. We first discuss convergence results for smooth convex optimization methods such as (P)GD and FGD, taking into account inexactness both in the information used (a tunable first-order inexact oracle built upon surrogates for the objective function and its (sub)gradient) and in the accuracy of the update steps. We then show how and when it is possible to derive, based on a cost (either computational or economical) associated with oracle calls, optimal schedules for the above methods. Worst-case cost gains will be presented for a chosen set of experiments. Finally, time permitting, practical examples using a personal toolbox will be demonstrated live to confirm the empirical behavior of the cited methods under inexact information.
Speaker: Brieuc Pinon
Title: New developments in Machine Learning: a Kolmogorov Complexity approach
Abstract: We first identify some intrinsic limitations of the learning algorithms currently used in Machine Learning: they lack the automation and expressiveness needed to efficiently exploit certain structures in complex tasks. A solution can be found within the Kolmogorov complexity framework: a "perfect" learner that exploits all computable patterns in the data exists; however, it is non-computable. We point to potential practical solutions to this problem and to the theoretical questions that we investigate. An understanding of introductory notions in Machine Learning and Theoretical Computer Science is advised.
|29/10/2019 (14:00) [Location: Euler building (room A.002)]|
Gianmarco Ducci (UCLouvain (IMMC))
Gait parametrization and limit cycle stability of flapping bird flight
The biomechanics of birds is of crucial interest for modern engineering applications. Indeed, flapping birds display the ability to maintain a stable flight in a continuously perturbed environment while executing complex maneuvers. We developed a method for identifying trimmed flight conditions for flapping wings. These particular conditions correspond to limit cycles in the state-space domain with the same period as the flapping input. Our method is based on a multiple-shooting algorithm that can simultaneously identify unknown limit cycles of the longitudinal equations of the bird's motion and assess their stability relying on Floquet theory. This framework further employs a lifting-line aerodynamic model developed in the fluid mechanics group at UCLouvain. Consequently, sensitivity analyses of gait configurations as a function of the wing kinematic parameters can be conducted. Results suggest that birds must continuously rely on a feedback control scheme to achieve steady-state flapping flight, and possible solutions to achieve such flight stabilization are discussed.
|15/10/2019 (14:00) [Location: Euler building (room A.002)]|
Mengbin Ye (University of Groningen, Netherlands)
Modelling and Analysis of Opinion Dynamics on Social Networks
Social network analysis is a rich and exciting area of interdisciplinary research that has been tackled by many different scientific communities. Opinion dynamics is a popular topic which uses mathematical models to describe how opinions change as individuals interact over a network. Two recent developments are presented from the perspective of a systems and control engineer, giving some insight into how individual-level processes combine with network-level interactions to determine the evolution, including limiting values, of opinions in the network. In the first part, a novel model is proposed to describe how an individual's private and expressed opinions (which are not the same in general) evolve under pressure to conform to the group norm. We establish sufficient conditions for a discrepancy to arise between an individual's private and expressed opinions on general networks. We then use the model to explore Asch's conformity experiments and the phenomenon of pluralistic ignorance. In the second part, we consider a model that captures a group of individuals simultaneously discussing logically interdependent topics. We show that when heterogeneity exists in the way individuals view the logical interdependencies, disagreement can arise because of the logical interdependence structure, even though all individuals are trying to reach a consensus of opinions.
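For background, the classical DeGroot-style averaging that such opinion models build on can be sketched in a few lines (the influence matrix is an arbitrary example; the private/expressed dynamics of the talk add structure this sketch omits):

```python
import numpy as np

# DeGroot averaging: x(k+1) = W x(k) with a row-stochastic influence matrix W.
W = np.array([[0.50, 0.30, 0.20],
              [0.25, 0.50, 0.25],
              [0.20, 0.30, 0.50]])
x = np.array([0.0, 0.5, 1.0])  # initial opinions
for _ in range(200):
    x = W @ x                  # each agent averages its neighbors' opinions
# x converges to a consensus: a convex combination of the initial opinions
```

In the models discussed in the talk, each agent effectively runs two coupled versions of such an update, one for the private opinion and one for the expressed opinion, which is what allows a persistent discrepancy between the two.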
|08/10/2019 (14:00) [Location: Euler building (room A.002)]|
Nikita Doikov and Anton Rodomanov (UCLouvain)
Two 20-minute talks
Speaker: Nikita Doikov
Title: Proximal Method with Contractions for Smooth Convex Optimization
Abstract: In this talk we study a construction of proximal accelerated methods for smooth convex optimization. At every step, the method minimizes a sum of the contracted objective and a regularizer in the form of a Bregman divergence, using an auxiliary subroutine. We present a complexity analysis for the general scheme. In the case when the Tensor Method is used as the internal method, we show an accelerated rate of convergence, with an additional logarithmic factor as the cost of solving the subproblem.
Speaker: Anton Rodomanov
Title: Greedy Quasi-Newton Method with Explicit Superlinear Convergence
Abstract: We propose a new quasi-Newton method for unconstrained minimization of smooth functions. Our method is based on the famous BFGS scheme but it uses a greedily selected coordinate vector for updating the Hessian approximation instead of the previous search direction. We prove that the proposed method has local superlinear convergence and establish a precise bound for its rate. To our knowledge, this result is the first explicit non-asymptotic rate of superlinear convergence for quasi-Newton methods.
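On a quadratic with known Hessian A, the greedily selected coordinate update can be sketched as follows (a toy illustration of the update rule under assumed conditions, not the authors' full method or analysis):

```python
import numpy as np

def bfgs_update(G, A, u):
    """Standard BFGS update of the Hessian approximation G along direction u,
    written for a quadratic objective with exact Hessian A."""
    Gu, Au = G @ u, A @ u
    return G - np.outer(Gu, Gu) / (u @ Gu) + np.outer(Au, Au) / (u @ Au)

def greedy_coordinate(G, A):
    """Coordinate vector maximizing e_i' G e_i / e_i' A e_i (the greedy rule)."""
    e = np.zeros(G.shape[0])
    e[np.argmax(np.diag(G) / np.diag(A))] = 1.0
    return e

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)   # well-conditioned test Hessian
G = np.trace(A) * np.eye(5)   # initial approximation dominating A
for _ in range(1000):
    G = bfgs_update(G, A, greedy_coordinate(G, A))
# G is driven toward A, eventually at a superlinear rate
```

The contrast with classical BFGS is only in the choice of `u`: the previous search direction is replaced by a greedily chosen coordinate vector, which is what makes the explicit rate analysis possible.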
|01/10/2019 (14:00) [Location: Euler building (room A.002)]|
Yu Guan and Bin Gao (UCLouvain)
Two 20-minute talks
Speaker: Yu Guan
Title: Alternating minimization algorithm for graph regularized tensor completion
Abstract: In this talk, we consider the low-rank tensor completion problem, which aims to exactly recover a low-rank tensor from an incomplete observation. It plays an important role in many applications such as signal processing, computer vision, machine learning, and neuroscience. A widely used convex relaxation of the low-rank tensor completion problem is to minimize the sum of the nuclear norms of its unfolding matrices. We build our low-rank completion model on the CANDECOMP/PARAFAC (CP) decomposition approach. In order to improve the recovery quality, graph information, such as correlations between the column/row entities from the side matrices, is exploited. We propose efficient alternating minimization algorithms combined with the conjugate gradient method or the alternating direction method of multipliers (ADMM) to deal with the subproblems. Furthermore, based on the Kurdyka-Łojasiewicz property, we show that the sequence generated by the alternating minimization globally converges to a critical point of the objective function, since our model is a Kurdyka-Łojasiewicz function. Moreover, the complexity and convergence rate of the proposed algorithms are also derived. In addition, the regularizer terms in the model can be reformulated as a generalized weighted nuclear norm; as a result, statistical consistency is guaranteed for our completion problem. Extensive numerical experiments on synthetic and real data indicate that the proposed algorithms are effective.
Speaker: Bin Gao
Title: Optimization Problems with Orthogonality Constraints -- from Feasible to Infeasible
Abstract: Constructing a parallel approach for solving optimization problems with orthogonality constraints is usually regarded as an extremely difficult mission, due to the low scalability of the orthogonalization procedure. However, the demand is particularly high in some application domains, such as materials computation. In this talk, we introduce several efficient algorithms, including feasible and infeasible methods. These methods have much lower computational complexity and can also benefit from parallel computing, since they consist largely of BLAS3 operations. In the infeasible algorithm, based on a modified augmented Lagrangian method, the orthogonalization procedure is invoked only once, as the last step. Consequently, the main parts of the proposed algorithms can be parallelized naturally. We establish global subsequence convergence results for our proposed algorithms. Worst-case complexity and local convergence rates are also studied under some mild assumptions. Numerical experiments, including tests in a parallel environment, illustrate that our new algorithms attain good performance and high scalability in solving discretized Kohn-Sham total energy minimization problems.
|24/09/2019 (14:00) [Location: Euler building (room A.002)]|
Abdou Sene (LANI - Laboratoire d'Analyse Numérique et Informatique [Sénégal])
Global Stabilization of the Navier-Stokes Equations Around an Unstable Steady State
We present a mixed (Dirichlet-Neumann) boundary feedback controller for stabilizing the Navier-Stokes equations around a prescribed steady state in a bounded domain. The Neumann part of the boundary controller is designed to be zero when the inflow vanishes and to have the magnitude of the kinetic energy. An appropriate choice of the controllers allows us to prove exponential decrease of the perturbation in L2, without blowup. In addition, we prove, on the one hand, that the exponential convergence towards zero holds in H1 and, on the other hand, that the weak solution is unique when the computational domain is two-dimensional. The procedure we adopt is mainly based on the Galerkin discretization method and the use of compactness results. Indeed, the stabilization is achieved first for the finite-dimensional system; then we pass to the limit by exploiting compactness theory.
|17/09/2019 (14:00) [Location: Euler building (room A.002)]|
Nelly Pustelnik (ENS Lyon, France)
Discrete Mumford-Shah model: from image restoration to graph analysis
The Mumford-Shah model is a standard model in image segmentation, and many approximations of it have been proposed. The major interest of this functional is its ability to perform image restoration and contour detection jointly. In this work, we propose a general formulation of the discrete counterpart of the Mumford-Shah functional. We derive a new proximal alternating minimization scheme to deal with the non-convex objective function, with proven convergence and numerical robustness to the initialization. The good behavior of the proposed strategy is evaluated and compared to state-of-the-art approaches in image restoration, and extended to graph analysis.
|06/09/2019 (14:00) [Location: Euler building (room A.002)]|
Martin Gueuning (UNamur-UCLouvain, Belgium)
Spreading and diffusion on temporal networks
Network theory provides a framework to model interacting systems from many different fields. The recent increase in the availability of empirical data has made it possible to take into account more realistic characteristics of networks. In particular, the details of the timing of interactions have highlighted the non-Markovian nature of the agents in many systems. In this work, we investigate stochastic processes on temporal networks. First, we study the impact of non-Markovian activities on random walks. We show that memory in the trajectory of the random walker naturally emerges due to bursty behaviours and the presence of short cycles. Second, we investigate spreading strategies on temporal networks. In particular, we show that the temporal sequence of a cascade of retweets may provide insight into the way it spread on the network, and how to exploit it to provide a list of users to target simultaneously in order to maximize the final share of a message.
|27/08/2019 (15:00) [Location: Euler building (room A.002)]|
Debdipta Goswami (University of Maryland)
Koopman Based Control: Bilinearization, Controllability and Optimal Control of Control-Affine Nonlinear Systems
Note the unusual time of this talk (15:00).
Nonlinear systems are ubiquitous in real world applications, but the control design for them is not an easy task. Hence, methods are sought to transform a control-affine nonlinear system into linear or bilinear forms to alleviate the problem of nonlinear controllability and control design. While there are linearization techniques like Carleman linearization for embedding a finite-dimensional nonlinear system into an infinite-dimensional space, they depend on the analytic property of the vector fields and work only on polynomial space. The Koopman-based approach described here utilizes the Koopman Canonical Transform (KCT) to transform the dynamics and ensures bilinearity from the projection of the Koopman operator associated with the control vector fields on the eigenspace of the drift Koopman operator. The sufficient conditions for exact bilinearization are derived. Even if the conditions are not fully met, the approximate bilinearization can also be posed as an optimization problem. The resulting bilinear system is then subjected to controllability analysis using the Myhill semigroup method and Lie algebraic structures. Using the same Myhill semigroup structure, we also seek to prove the existence of an energy-only optimal control.