Research Work

\( \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator{\sbjto}{subject\;to} \newcommand{\R}{\mathbb{R}} \newcommand{\Nz}{\mathbb{N}} \newcommand{\N}{\mathbb{N}^\ast} \newcommand{\C}{\mathbb{C}} \newcommand{\puncturedC}{\C^\ast} \newcommand{\Z}{\mathbb{Z}} \newcommand{\Lp}[1]{\mathrm{L}^{#1}} \newcommand{\Lz}{\mathrm{L}_{0}} \newcommand{\ra}{\rightarrow} \newcommand{\sysDim}{d} \newcommand{\state}{x} \newcommand{\control}{u} \newcommand{\sysDyn}{f} \newcommand{\conDim}{m} \newcommand{\horizon}{T} \newcommand{\admStates}{\mathcal{S}} \newcommand{\admControls}{\mathcal{U}} \)

Constrained Optimal Control Problems

One of my primary research directions has been the development of computationally tractable conditions for solving continuous-time optimal control problems.

Consider a continuous-time dynamical system described by

\begin{align} \label{e:system} \tag{1} \dot{x}(t) = f \bigl(x(t), u(t), t \bigr), \end{align}

where \(\state(t) \in \R^{\sysDim}\) and \(\control(t) \in \R^{\conDim}\) denote the state of the system and the control input, respectively, at time \(t\). A typical finite-horizon optimal control problem on the system \(\eqref{e:system}\) can be stated as follows:

\begin{align} & \minimize && C (\state, \control)\\ & \sbjto && \begin{cases} \text{dynamics \eqref{e:system}},\\ \control (t) \in \admControls,\\ \tag{2} \label{e:OCP} \state (0) \in \admStates_{0}, \quad \state (\horizon) \in \admStates_{\horizon}, \end{cases} \end{align}

where \(\state : [0, \horizon] \ra \R^{\sysDim}\) and \(\control : [0, \horizon] \ra \R^{\conDim}\) represent the state and control trajectories, \(\admControls \subset \R^{\conDim}\) denotes the set of admissible control values, and \(\admStates_{0}, \admStates_{\horizon} \subset \R^{\sysDim}\) denote the sets containing the desired start and end states of the system, respectively. The real-valued map \(C\), defined on the space of state and control trajectories, represents the objective function.

The Pontryagin maximum principle (PMP) is one of the standard tools for solving constrained optimal control problems such as \(\eqref{e:OCP}\). The PMP provides a set of necessary conditions for optimality in the form of a system of ordinary differential equations with two-point boundary conditions.
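For instance, assuming (purely for illustration) that the objective has the integral form \(C(\state, \control) = \int_{0}^{\horizon} \ell\bigl(\state(t), \control(t), t\bigr)\, dt\), the PMP conditions can be sketched as follows. Define the Hamiltonian

\[ H(\state, \control, p, t) := \bigl\langle p, \sysDyn(\state, \control, t) \bigr\rangle - \ell(\state, \control, t). \]

Then an optimal state-control pair \((\state^{\ast}, \control^{\ast})\) admits an adjoint trajectory \(p : [0, \horizon] \ra \R^{\sysDim}\) satisfying

\[ \dot{\state}^{\ast}(t) = \frac{\partial H}{\partial p}, \qquad \dot{p}(t) = -\frac{\partial H}{\partial \state}, \qquad \control^{\ast}(t) \in \operatorname*{arg\,max}_{v \in \admControls} H\bigl(\state^{\ast}(t), v, p(t), t\bigr), \]

together with boundary (transversality) conditions at \(t = 0\) and \(t = \horizon\) determined by \(\admStates_{0}\) and \(\admStates_{\horizon}\). The first two equations, coupled through the third, constitute the two-point boundary value problem mentioned above.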

Optimal control with frequency constraints

The constraints \(\control (t) \in \admControls\), called pointwise constraints in time, on the control inputs in \(\eqref{e:OCP}\) specify that the control value lies in a pre-specified range at each time. These constraints model the limitations of the actuators. Apart from magnitude limitations, actuators may also be limited in the rates at which their outputs can change. For example, control moment gyroscopes employed for orientation manoeuvres of satellites cannot, due to their inertia, reproduce control commands beyond a certain range of frequencies. Such limitations can be modelled by introducing global constraints on the control trajectories. We have addressed the incorporation of frequency constraints on the control and state trajectories in discrete-time optimal control problems, both for systems with nonsmooth dynamics and for systems evolving on matrix Lie groups. We are currently working on extending these results to continuous-time optimal control problems.
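As an illustration (the precise formulation varies with the setting), a band-limitation constraint on the control can be encoded by requiring the frequency content of \(\control\) to vanish beyond a cutoff frequency \(\Omega\):

\[ \int_{0}^{\horizon} \control(t)\, e^{-i \omega t}\, dt = 0 \quad \text{for all } |\omega| > \Omega. \]

Unlike \(\control(t) \in \admControls\), this condition couples the control values across the entire horizon, which is what makes it a global constraint on the trajectory rather than a pointwise one.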

Approximating optimal controls by discretization

Observe that in \(\eqref{e:OCP}\), the state trajectories are constrained only at the endpoints. The presence of constraints along the state trajectory makes the resulting two-point boundary value problem computationally intractable, owing to the infinite-dimensional objects appearing in the necessary conditions. For discrete-time systems, however, even in the presence of constraints along the state trajectories, the corresponding necessary conditions involve algebraic recursions with two-point boundary conditions, which are computationally benign compared to their continuous-time counterparts.

Since one can obtain discrete-time models of continuous-time systems that preserve the underlying configuration space and certain invariant properties of the system, an approximate discrete-time optimal control problem can be formulated by discretizing the system and transforming the constraints suitably. We are thus motivated to investigate the possibility of approximating the solutions of \(\eqref{e:OCP}\) by solving sufficiently close discrete-time formulations whose solutions can be obtained relatively easily.
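As a concrete illustration (a minimal sketch, not our actual implementation), for a linear system \(\dot{\state} = A\state + B\control\) the discretization under piecewise constant (zero-order-hold) controls is exact and can be computed via a block matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, h):
    """Exact discretization of x' = A x + B u under controls held
    constant on intervals of length h (zero-order hold):
        x[k+1] = Ad x[k] + Bd u[k]."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n], M[:n, n:] = A, B
    Md = expm(M * h)          # exp of the block matrix [[A, B], [0, 0]]
    return Md[:n, :n], Md[:n, n:]

# Example: double integrator with step size h = 0.1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Ad, Bd = zoh_discretize(A, B, 0.1)
print(Ad)  # [[1.  0.1], [0.  1. ]]
print(Bd)  # [[0.005], [0.1]]
```

Because the map is exact on sampling instants, no truncation error is introduced by the discretization itself; the approximation error in the optimal control problem comes only from restricting the controls to be piecewise constant.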

Under the following assumptions:

  1. the underlying system \(\eqref{e:system}\) is linear;

  2. \(\admStates_{0}, \admStates_{\horizon}\) in \(\eqref{e:OCP}\) are singleton sets (i.e., we consider a point-to-point state transfer problem);

  3. the control trajectories \(\control: [0, \horizon] \ra \R^{\conDim} \) are restricted to a closed subset of \(\Lp{2}([0, \horizon], \R^{\conDim})\);

  4. a piecewise constant control exists in the feasible set of the original optimal control problem \(\eqref{e:OCP}\),

we have proved that the solutions of the approximate discrete-time optimal control problems converge weakly (in \(\Lp{2}\)) to the solution of \(\eqref{e:OCP}\) as the discretization step size tends to zero.

At this stage, we have started investigating sufficient conditions under which the assumption of feasibility via piecewise constant controls holds. In other words, when is a state transfer possible using piecewise constant controls?
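One elementary sufficient check, sketched below for a linear system (the double integrator and sampling period are purely illustrative): a point-to-point transfer under piecewise constant controls is possible whenever the zero-order-hold discretized pair \((A_d, B_d)\) is controllable, which can be tested via the rank of the Kalman controllability matrix.

```python
import numpy as np
from scipy.linalg import expm

# Illustrative double integrator, sampled with period h
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
h = 0.1

# Zero-order-hold discretization via the block matrix exponential
n, m = A.shape[0], B.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n], M[:n, n:] = A, B
Md = expm(M * h)
Ad, Bd = Md[:n, :n], Md[:n, n:]

# Kalman controllability matrix [Bd, Ad Bd, ..., Ad^{n-1} Bd]
K = np.hstack([np.linalg.matrix_power(Ad, k) @ Bd for k in range(n)])
rank = np.linalg.matrix_rank(K)
print(rank)  # 2: full rank, so any state transfer is feasible
```

Full rank of \(K\) means every target state is reachable in at most \(n\) sampling steps; for continuous-time pairs that are controllable, this property is preserved for all but a measure-zero set of pathological sampling periods.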

Maximum hands-off control

A general way to minimize resource consumption is to design controls that achieve the desired objectives while requiring minimum attention. Such controls are especially important in networked and embedded systems, and more generally in situations where a central processor is shared by multiple controllers. By designing maximum hands-off controls, which minimize the amount of time the control value is non-zero (i.e., controls that minimize the \(\Lz\) pseudonorm), we can maximize the availability of the communication channels and the processors. In contrast with discrete-time systems, where the sparsity of the controls can take only a finite range of values, the existence of a true minimizer of the \(\Lz\) cost for a generic objective, such as a particular state transfer, in continuous-time systems is an open question. We have been investigating the existence of \(\Lz\)-optimal controls.
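Concretely (with notation chosen here for illustration), the \(\Lz\) cost measures the length of the support of the control:

\[ \|\control\|_{\Lz} := \lambda \bigl( \{ t \in [0, \horizon] : \control(t) \neq 0 \} \bigr), \]

where \(\lambda\) denotes the Lebesgue measure. The maximum hands-off problem then minimizes \(\|\control\|_{\Lz}\) subject to the dynamics and the state-transfer constraints. Since \(\|\cdot\|_{\Lz}\) is nonconvex and discontinuous, the standard direct-method arguments for existence of minimizers do not apply, which is precisely why the existence question is delicate.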

Averaging in Control Theory

Under the influence of fast oscillating control inputs, control systems tend to behave as if they were driven by an "averaged" control signal. This averaging behaviour can be exploited to design control signals in applications where predicting the response of the system to generic controls is complicated.

In particular, we show the following: consider a class of control affine systems described by

\[\tag{3} \label{e:CFS} \dot{\xi}(t) = f \bigl(\xi(t)\bigr) + g \bigl(\xi(t)\bigr) \mu(t),\]

where \(\xi(t) \in \R^{d}\) denotes the state and \(\mu(t) \in \R^{m}\) the control input. Given a sequence of bounded control signals \((\mu_{n})_{n \in \N}\), let \((\xi_{n})_{n \in \N}\) be the corresponding solutions to \(\eqref{e:CFS}\) starting from a common initial state, say \(\bar{\xi}\).

Suppose \(\mu_{n} \xrightarrow[]{\text{weak-}*\; \Lp{\infty}} \mu\), and let \(\xi\) be the solution to \(\eqref{e:CFS}\) corresponding to the control input \(\mu\) starting from the same initial state \(\bar{\xi}\). If the sequence of solutions \((\xi_{n})_{n \in \N}\) is uniformly bounded on an interval \([0, \horizon]\), then it converges uniformly on \([0, \horizon]\) to the limiting solution \(\xi\).
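The phenomenon can be observed numerically on a toy scalar example (assumed purely for illustration): for \(\dot{\xi} = -\xi + u(t)\) with \(u_{n}(t) = \sin(nt)\), the controls converge weak-* in \(\Lp{\infty}\) to \(u = 0\), so the trajectories should converge uniformly to the natural evolution \(\xi(t) = e^{-t}\bar{\xi}\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def trajectory(n, T=1.0):
    """Final state of xi' = -xi + sin(n t), xi(0) = 1, at time T."""
    sol = solve_ivp(lambda t, xi: -xi + np.sin(n * t),
                    (0.0, T), [1.0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# Deviation from the natural (uncontrolled) evolution e^{-1}
for n in (1, 10, 100):
    print(n, abs(trajectory(n) - np.exp(-1.0)))  # shrinks as n grows
```

Even though \(\sin(nt)\) does not converge pointwise, its fast oscillations cancel in the integral defining the solution, which is exactly the weak-* convergence mechanism behind the result above.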

Applications to NMR

In general, the control signals employed in NMR experiments are highly oscillatory in order to exploit the resonance effects of the system. Due to the nature of the system, its actual response to such a control is not known, but it can be approximated using the response to the averaged control. Since fast oscillating controls typically converge (in the weak-* sense) to zero, the limiting response is the natural, uncontrolled evolution of the system, which can then be used to approximate the response to the desired control.

Extension to PDE based control systems

We are working on extending these averaging results to control systems whose dynamics are governed by partial differential equations (PDEs); such systems are modelled as evolving on infinite-dimensional spaces.