## McCormick's Recursive Convex Relaxation of Factorable Functions

In the "branch and bound" family of global optimization algorithms, one needs to find a convex underestimation (a.k.a. convex relaxation) of the objective function. While a tight convex underestimation is hard to obtain, a not-so-tight one can be computed recursively, thanks to an elegant result by Garth P. McCormick.

### Factorable Functions

Finding a convex underestimator for a complicated function may seem hard, but we may attempt to break it apart into familiar components and underestimate it part by part (think of divide and conquer). Notice that most real-valued functions can be factored into variables, $+$, $\times$, and familiar functions like $\exp(), \log(), \sin()$, etc. We call these *factorable functions*.

Here, a function $\phi$ is a "familiar function" if, when its domain is restricted to a closed interval, we can compute its convex envelope as well as its global maximum and minimum.

Formally, let $X := (x_1, ..., x_m)$ be the variables of our optimization problem, where each $x_i$ has a lower bound $x_i^l$ and an upper bound $x_i^u$ (these bounds are used for interval computations, to be explained later). We say a function $F: X \to \mathbb R$ is a factorable function if it has one of the following forms:

- $F = F_1 + F_2$, where $F_1, F_2$ are factorable.
- $F = F_1 \cdot F_2$, where $F_1, F_2$ are factorable.
- $F = \phi(F_1)$, where $F_1$ is factorable and $\phi: \mathbb R \to \mathbb R$ is a function we are familiar with.

Factorable functions can equivalently be defined using intermediate expressions: we may say that $F(X)$ is factorable if there exist $v_1, ..., v_n$ such that $v_i = x_i$ for $1 \le i \le m$, $F(X) = v_n$, and for every $m < k \le n$ one of the following holds:

- $v_k = v_i + v_j$ for some $i, j < k$
- $v_k = v_i v_j$ for some $i, j < k$
- $v_k = \phi(v_i)$ for some $i < k$ and familiar function $\phi$

This formulation allows us to recursively compute properties (lower bounds, convex underestimators, etc.) via dynamic programming, since each intermediate expression refers only to expressions preceding it.
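Such a sequence of intermediate expressions can be represented programmatically as a "tape" of elementary steps. The following is a minimal sketch; the encoding and helper names are illustrative, not taken from any particular solver (real implementations typically use expression DAGs):

```python
import math

# A factorable function encoded as a list of elementary steps (a "tape").
# Each step is ('var', i), ('add', i, j), ('mul', i, j), or ('fun', phi, i),
# where i, j are 0-based indices of previously computed entries.

def evaluate(tape, x):
    """Evaluate the factorable function encoded by `tape` at the point x."""
    v = []
    for step in tape:
        if step[0] == 'var':          # v_k = x_i
            v.append(x[step[1]])
        elif step[0] == 'add':        # v_k = v_i + v_j
            v.append(v[step[1]] + v[step[2]])
        elif step[0] == 'mul':        # v_k = v_i * v_j
            v.append(v[step[1]] * v[step[2]])
        else:                         # ('fun', phi, i): v_k = phi(v_i)
            v.append(step[1](v[step[2]]))
    return v[-1]   # by convention, the last entry is F(X)

# Example: F(x1, x2) = x1*x2 + exp(x1)
F_tape = [
    ('var', 0),            # v1 = x1
    ('var', 1),            # v2 = x2
    ('mul', 0, 1),         # v3 = v1 * v2
    ('fun', math.exp, 0),  # v4 = exp(v1)
    ('add', 2, 3),         # v5 = v3 + v4  =  F
]
```

The same traversal order can then be reused to propagate bounds and estimators step by step, which is exactly the dynamic program described above.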

#### -> Worked Example

For example, the complicated function $F(x_1, x_2, x_3) = \sin(x_1) + x_2 x_3 \log(1/(x_2 x_3))$ can be factored as follows: $$v_1 = x_1, v_2 = x_2, v_3 = x_3$$ $$v_4 = \sin(v_1)$$ $$v_5 = v_2 \cdot v_3$$ $$v_6 = 1/v_5$$ $$v_7 = \log(v_6)$$ $$v_8 = v_5 \cdot v_7$$ $$v_9 = v_4 + v_8$$

and $F(x_1, x_2, x_3) = v_9$.

### Recursive Underestimation

Let $v$ be some intermediate expression (which can be viewed as a function over $X$). We need a way to compute a convex underestimator for $v$ in each of its three possible forms (addition, multiplication, familiar function). For conciseness, we only discuss the construction of convex underestimators; concave overestimators of some expressions are also used in the computation, but they are constructed in a manner symmetric to their convex counterparts.

We first clarify notation: for a real-valued function $v$ over the variables $X = (x_1, ..., x_m)$, we write $\underline v, \overline v$ for some valid lower and upper bound of $v(X)$ over all possible inputs, and $v_\cup, v^\cap$ for some valid convex underestimator and concave overestimator of $v(X)$ over the same domain. Note that none of $\underline{v}, \overline{v}, v_\cup, v^\cap$ is necessarily tight.

#### -> Addition

The addition case is easy: if $v = u + w$, then we simply define $$v_\cup := u_\cup + w_\cup$$.

#### -> Multiplication

This is a bit trickier: say $v = u w$. We cannot simply multiply the convex underestimators of $u, w$, because the product of convex functions is not necessarily convex (e.g. the product of $f(x) = x$ and $g(x) = -x$); however, notice that $$u \ge \underline u$$ Multiplying both sides by the non-negative quantity $w - \underline w$, we have $$u (w - \underline w) \ge \underline u (w - \underline w)$$ $$uw \ge u \underline w + \underline u w - \underline u \underline w$$ Now things become much easier: notice that $\underline u, \underline w$ are both numbers, not functions, so the right-hand side is an underestimator of $v$ that contains no product of functions! We then find a convex underestimator of it piece by piece, and define $$v_\cup := \min(u_\cup \underline w, u^\cap \underline w) + \min(\underline u w_\cup, \underline u w^\cap) - \underline u \underline w$$.

In the above expression, each $\min(\cdot)$ handles the case where $\underline u$ and/or $\underline w$ is negative. Convexity is preserved even though a minimum of convex functions is not convex in general: since $\underline u, \underline w$ are fixed numbers, the sign of each constant determines which branch of the $\min(\cdot)$ is selected, and the same branch wins at every point.

A similar construction works with the upper bounds of $u, w$ (starting from $\overline u - u \ge 0$ and $\overline w - w \ge 0$); $v_\cup$ can then be set to the pointwise maximum of the two convex underestimators, which yields a tighter underestimation.
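The multiplication rule can be sketched as a pointwise computation; the function name and argument convention here are illustrative (`cv`/`cc` denote the values at one point of the convex underestimator and concave overestimator, `lo`/`hi` the interval bounds):

```python
def mccormick_mul_under(u_cv, u_cc, u_lo, u_hi, w_cv, w_cc, w_lo, w_hi):
    """Value at one point of a convex underestimator of v = u*w, given the
    values at that point of convex under- (cv) and concave over- (cc)
    estimators of u and w, plus interval bounds on u and w."""
    # From u >= u_lo, w >= w_lo:   u*w >= w_lo*u + u_lo*w - u_lo*w_lo;
    # each min() selects the valid convex piece based on the constant's sign.
    a = min(w_lo * u_cv, w_lo * u_cc) + min(u_lo * w_cv, u_lo * w_cc) - u_lo * w_lo
    # From u <= u_hi, w <= w_hi:   u*w >= w_hi*u + u_hi*w - u_hi*w_hi
    b = min(w_hi * u_cv, w_hi * u_cc) + min(u_hi * w_cv, u_hi * w_cc) - u_hi * w_hi
    return max(a, b)   # pointwise max of two convex underestimators

# Example: v = x*x on [-1, 1], where x_cv = x_cc = x (identity is affine).
# At x = 0.5 this gives max(-2*0.5 - 1, 2*0.5 - 1) = 0.0  <=  0.25 = x*x.
```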

#### -> Function Application

This is also tricky: let $v = \phi \circ w$. Let $c_\phi(\cdot)$ be the convex envelope of $\phi(\cdot)$ over $[\underline w, \overline w]$, and let $z := \text{argmin}_{[\underline w, \overline w]} c_\phi(w)$; in other words, $z$ is a value of $w$ that *minimizes* $c_\phi(w)$ (any minimizer works). Now we have
$$v_\cup := c_\phi \circ h$$
where $h$ is the function defined by the pointwise median of $w_\cup, w^\cap, z$. You might wonder why in the world this function (a) underestimates $v$ and (b) is convex. The following are proofs:

For underestimation, note that $h$ always lies between $z$ and $w$: since $w_\cup \le w \le w^\cap$, the median of $w_\cup, w^\cap, z$ is never farther from $z$ than $w$ is. For the convex function $c_\phi$, whose global minimum over $[\underline w, \overline w]$ is at $z$, any point between $z$ and $w$ has value at most $\max(c_\phi(z), c_\phi(w)) = c_\phi(w)$. Therefore we have $$c_\phi(h) \le c_\phi(w) \le \phi(w) =: v $$.

For convexity of $c_\phi \circ h$, the key observation is that the median can be rewritten away: since $c_\phi$ attains its minimum at $z$, it is nonincreasing on $[\underline w, z]$ and nondecreasing on $[z, \overline w]$, and a case-by-case check (on whether $z$ lies below, inside, or above $[w_\cup, w^\cap]$) shows $$c_\phi(\text{mid}(w_\cup, w^\cap, z)) = \max\big(c_\phi(\max(w_\cup, z)),\; c_\phi(\min(w^\cap, z))\big).$$ Now $\max(w_\cup, z)$ is convex and takes values in $[z, \overline w]$, where $c_\phi$ is convex and nondecreasing, so the first term is convex; symmetrically, $\min(w^\cap, z)$ is concave and takes values in $[\underline w, z]$, where $c_\phi$ is convex and nonincreasing, so the second term is also convex. The pointwise maximum of two convex functions is convex, hence so is $c_\phi \circ h$. This argument uses only the convexity of $w_\cup$ and the concavity of $w^\cap$, so it applies verbatim when $X$ is multi-dimensional.
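The composition rule itself is short; the following is a minimal sketch with hypothetical helper names (`c_phi` stands for the precomputed convex envelope of $\phi$ over $[\underline w, \overline w]$):

```python
def mid(a, b, c):
    """Pointwise median of three numbers."""
    return sorted((a, b, c))[1]

def mccormick_compose_under(c_phi, z, w_cv, w_cc):
    """Value at one point of a convex underestimator of v = phi(w(X)).
    c_phi: convex envelope of phi on [w_lo, w_hi];
    z:     a minimizer of c_phi on [w_lo, w_hi];
    w_cv, w_cc: values at this point of the convex underestimator and
                concave overestimator of w."""
    return c_phi(mid(w_cv, w_cc, z))

# Example: v = phi(w) with phi(t) = t**2 (already convex, so its envelope
# is itself) and w = x on [-1, 1] (so w_cv = w_cc = x, and z = 0).
# The rule then recovers v_cup(x) = mid(x, x, 0)**2 = x**2 exactly.
```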

### Interval Computations

Now we see that valid convex underestimations can indeed be recursively constructed via one of these three cases; but the "multiplication" part requires upper and lower bounds of expressions. These can be computed via interval computations: writing $IV(v) := [\underline v, \overline v]$ for the interval formed by the lower and upper bounds of expression $v$, the following are valid recursive computations of the bounds of each expression: $$IV(u + w) := IV(u) +_I IV(w)$$ $$IV(u \cdot w) := IV(u) \cdot_I IV(w)$$ $$IV(\phi(u)) := \phi_I(IV(u))$$ where $+_I, \cdot_I$ are interval addition / multiplication, and $\phi_I(iv)$ is the image of the set of real numbers in the interval $iv$ under $\phi$.
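These interval rules can be sketched as follows (illustrative helpers, not a real interval-arithmetic library; intervals are plain `(lo, hi)` tuples):

```python
def iv_add(a, b):
    """Interval addition: IV(u + w) = IV(u) +_I IV(w)."""
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    """Interval multiplication: min/max over all endpoint products."""
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def iv_fun(phi_min, phi_max, a):
    """Interval image of a familiar function, given routines that return
    its global min (phi_min) and max (phi_max) over a closed interval —
    exactly what the definition of a "familiar function" assumes."""
    return (phi_min(a), phi_max(a))

# Example: with x in [-1, 2] and y in [3, 4],
# IV(x + y) = (2, 6) and IV(x * y) = (-4, 8).
```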

### Tightness

Unfortunately, the convex underestimation produced by McCormick's method is not tight in general. For example, given $-1 \le x \le 1$, the convex relaxation of $x \cdot x$ would be $$\max(2x-1, -2x-1)$$ (that is, if one makes use of both $\underline x$ and $\overline x$ to produce the tighter pointwise-max underestimator, as mentioned at the end of the "Multiplication" subsection; if only $\underline x$ is used, then $-2x-1$, an even looser bound, is obtained instead).

Either way, such an underestimation is not tight, since $x \cdot x = x^2$ is itself convex and is therefore its own convex envelope.
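A quick numeric check of this gap, using the pointwise-max form of the relaxation (helper name is illustrative):

```python
def relax_x_squared(x):
    """McCormick convex relaxation of x*x on [-1, 1] (pointwise-max form)."""
    return max(2 * x - 1, -2 * x - 1)   # equals 2*abs(x) - 1

# The relaxation underestimates x**2 everywhere on [-1, 1]; the gap
# x**2 - (2*abs(x) - 1) = (abs(x) - 1)**2 is largest at x = 0, where the
# relaxation gives -1 while the true convex envelope of x**2 gives 0.
gap = max(x * x - relax_x_squared(x) for x in (i / 100 - 1 for i in range(201)))
```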

In general, however, obtaining the convex envelope of an arbitrary factorable function is NP-hard: one could solve a Mixed Integer Linear Program (MILP) to optimality by obtaining the convex hull of its feasible region and then solving the corresponding LP in polynomial time. With tight convex underestimations out of the question, these non-tight relaxations are close to the next best thing, and are thus extensively used in modern MINLP solvers.