# TDA206 - Discrete Optimisation

Lecture notes - Lecture 3

These notes were not written in class, as I was away on vacation during this lecture. They are instead based on last year's lecture notes for the second week and will mainly paraphrase those notes, which is why there may be some overlap with last lecture's notes.

## 2 Linear Programming (cont.)

Linear programming is a special case of convex optimization and was one of the first optimization problems to be carefully studied and solved, as it is tightly bound to linear algebra. Many of the advances in other areas of optimization theory were inspired by linear programming. In the same way that affine functions and half-spaces are the building blocks of convex functions and convex sets, linear programming is the foundation of all convex optimization problems.

Linear programming is interesting in this course because it is used heavily in the theory of discrete optimization. We can formulate many interesting problems as Integer Linear Programs (ILPs), which are similar to LPs but have a discrete variable domain instead of a continuous one. However, before we dive into ILPs we will focus a bit more on LPs and possibly recap a bit from last lecture's notes.

### 2.1 Definition 1

An optimization problem is linear if:

1. The variable domain is $\mathbb{R}^n$ for some $n \in \mathbb{N}$.
2. The cost function $f_0$ and the constraint functions $f_1, \dots, f_m$ and $h_1, \dots, h_k$ are all affine functions.

It is popular to write an LP as

$$
\begin{aligned}
\text{maximize}\quad & c_1 x_1 + \dots + c_n x_n \\
\text{subject to}\quad & a_{i1} x_1 + \dots + a_{in} x_n \le b_i, \quad i = 1, \dots, m \\
& d_{j1} x_1 + \dots + d_{jn} x_n = e_j, \quad j = 1, \dots, k
\end{aligned}
$$

### 2.2 Matrix definition

This could easily be written using matrices: collect the cost coefficients in $c \in \mathbb{R}^n$, the inequality constraints in $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, and the equality constraints in $D \in \mathbb{R}^{k \times n}$ and $e \in \mathbb{R}^k$.

Now the optimization problem can be written as

$$
\begin{aligned}
\text{maximize}\quad & c^T x \\
\text{subject to}\quad & Ax \le b \\
& Dx = e
\end{aligned}
$$
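As a sketch, the matrix form turns checking feasibility and evaluating the objective into a couple of array operations. The LP data below is made up purely for illustration:

```python
import numpy as np

# Hypothetical toy LP (all numbers invented for illustration):
#   maximize  c^T x   subject to  A x <= b,  D x = e
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 2.0])
D = np.array([[0.0, 1.0]])   # one equality constraint: x_2 = 2
e = np.array([2.0])

x = np.array([2.0, 2.0])     # a candidate point

objective = c @ x            # value of the cost function at x
feasible = bool(np.all(A @ x <= b) and np.allclose(D @ x, e))
print(objective, feasible)   # 10.0 True
```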

### 2.3 Alternative forms of expressing LPs

Often we don't need all the matrices A, b, c, D and e to express an LP. There are two common forms to express an LP: standard form (or canonical form) and the augmented form.

Standard form:

$$
\begin{aligned}
\text{maximize}\quad & c^T x \\
\text{subject to}\quad & Ax \le b \\
& x \ge 0
\end{aligned}
$$

Augmented form:

$$
\begin{aligned}
\text{maximize}\quad & c^T x \\
\text{subject to}\quad & Ax + s = b \\
& x, s \ge 0
\end{aligned}
$$

To turn a general form of LP into one of these forms, the following modifications are straightforward:

• Minimization of $c^T x$ can be turned into maximization of $-c^T x$.
• Lower bounds $a_i^T x \ge b_i$ can be turned into $-a_i^T x \le -b_i$.
• Equality constraints $a_i^T x = b_i$ can be turned into the two inequality constraints $a_i^T x \le b_i$ and $a_i^T x \ge b_i$.
• Inequalities can be turned into equalities by introducing a new slack variable $s_i \ge 0$ (hence the term "augmented" form), and replacing the constraint $a_i^T x \le b_i$ by $a_i^T x + s_i = b_i$.
• Unconstrained variables $x_j$ can be replaced by $x_j^+ - x_j^-$ where $x_j^+, x_j^- \ge 0$.
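The slack-variable trick from the list above can be sketched numerically (the constraint data is made up for illustration):

```python
import numpy as np

# Turn A x <= b into the augmented equalities [A I][x; s] = b with s >= 0.
# The LP data below is a made-up example.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 2.0])

x = np.array([1.0, 2.0])           # a point satisfying A x <= b
s = b - A @ x                      # slack variables absorb the leftover room
A_aug = np.hstack([A, np.eye(2)])  # augmented constraint matrix [A | I]
z = np.concatenate([x, s])         # augmented variable vector [x; s]

assert np.all(s >= 0)              # x feasible  <=>  slacks non-negative
assert np.allclose(A_aug @ z, b)   # the equalities hold exactly
print(s)                           # [1. 1.]
```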

### 2.4 Primal and dual LPs

Given a primal problem P

$$
\begin{aligned}
\text{maximize}\quad & c^T x \\
\text{subject to}\quad & Ax \le b \\
& x \ge 0
\end{aligned}
$$

the dual problem D is defined as

$$
\begin{aligned}
\text{minimize}\quad & b^T y \\
\text{subject to}\quad & A^T y \ge c \\
& y \ge 0
\end{aligned}
$$
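Mechanically, the dual's data is obtained from the primal's by transposing the constraint matrix and swapping the roles of the cost vector and the right-hand side. A minimal sketch with made-up numbers:

```python
import numpy as np

# P: max c^T x  s.t. A x <= b, x >= 0   --->   D: min b^T y  s.t. A^T y >= c, y >= 0
# (all numbers are invented for illustration)
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])   # m = 2 constraints, n = 2 variables
b = np.array([4.0, 2.0])     # becomes the dual objective coefficients
c = np.array([3.0, 2.0])     # becomes the dual right-hand side

A_dual = A.T                 # dual constraint matrix
# One dual variable per primal constraint, one dual constraint per primal variable:
assert A_dual.shape == (len(c), len(b))
```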

### 2.5 Weak duality in LP

By minimizing the objective function of D ($b^T y$) we can obtain an upper bound for the maximal solution of P.

Theorem (Weak duality in LP).
If there exists a feasible $x$ in P and a feasible $y$ in D, then $c^T x \le b^T y$.

Proof. If we left-multiply the constraint in P ($Ax \le b$) with a non-negative vector $y^T$ we get the inequality $y^T A x \le y^T b$. Doing the same thing for the constraint in D ($A^T y \ge c$) with another non-negative vector $x^T$ gives us $x^T A^T y \ge x^T c$, and transposing that leaves us with $c^T x \le y^T A x$. Assuming that the feasible regions of P and D are not empty, we can pick a feasible $x$ and a feasible $y$ to obtain the inequality $c^T x \le y^T A x \le y^T b = b^T y$. $\square$

The difference between the two objectives is called the duality gap. For linear programs there is an even stronger theorem, which we are not going to prove.
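The chain of inequalities in the proof can be checked numerically on a small example (all numbers below are made up for illustration, not taken from the lecture):

```python
import numpy as np

# P: max c^T x  s.t. A x <= b, x >= 0
# D: min b^T y  s.t. A^T y >= c, y >= 0
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 2.0])
c = np.array([3.0, 2.0])

x = np.array([1.0, 1.0])   # feasible in P: A x = [2, 1] <= b, x >= 0
y = np.array([3.0, 0.0])   # feasible in D: A^T y = [3, 3] >= c, y >= 0

assert np.all(A @ x <= b) and np.all(x >= 0)        # x feasible in P
assert np.all(A.T @ y >= c) and np.all(y >= 0)      # y feasible in D
assert c @ x <= y @ (A @ x) <= b @ y                # c^T x <= y^T A x <= b^T y
print(c @ x, b @ y)                                 # 5.0 12.0, gap of 7
```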

### 2.6 Strong duality of LP

Theorem (Strong duality of LP).
If P has a feasible optimal solution $x^*$, then D has a feasible optimal solution $y^*$, and $c^T x^* = b^T y^*$. If P is unbounded, then D is infeasible. If D is unbounded, then P is infeasible. If either one is infeasible, the other is infeasible or unbounded.

In other words, the duality gap in LP is zero, except when both problems are infeasible, in which case it is infinite. Strong duality tells us that we can solve a maximization problem by solving its dual and vice versa. We can quickly decide if a problem is unbounded. Also, in LP, the dual of the dual is the primal.
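Strong duality can be illustrated by brute force on a tiny example: when an LP optimum exists, it is attained at a vertex of the feasible region, so for a 2-variable toy problem we can enumerate all vertices of both P and D and compare the optima. The helper `best_vertex` and all data below are invented for illustration:

```python
import itertools
import numpy as np

def best_vertex(G, h, obj, sense):
    """Enumerate vertices of {x in R^2 : G x <= h} and return the best
    objective value. Brute force -- only sensible for tiny teaching examples."""
    best = None
    for i, j in itertools.combinations(range(len(G)), 2):
        M = G[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                        # parallel constraints, no vertex
        v = np.linalg.solve(M, h[[i, j]])   # intersection of two boundaries
        if np.all(G @ v <= h + 1e-9):       # keep only feasible vertices
            val = obj @ v
            if best is None or (val > best if sense == "max" else val < best):
                best = val
    return best

# P: max 3x1 + 2x2  s.t.  x1 + x2 <= 4, x1 <= 2, x >= 0   (toy data)
G_p = np.array([[1.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
h_p = np.array([4.0, 2.0, 0.0, 0.0])
p_opt = best_vertex(G_p, h_p, np.array([3.0, 2.0]), "max")

# D: min 4y1 + 2y2  s.t.  y1 + y2 >= 3, y1 >= 2, y >= 0   (written as -A^T y <= -c)
G_d = np.array([[-1.0, -1.0], [-1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
h_d = np.array([-3.0, -2.0, 0.0, 0.0])
d_opt = best_vertex(G_d, h_d, np.array([4.0, 2.0]), "min")

print(p_opt, d_opt)   # both 10.0: the duality gap is zero
```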

### 2.7 Complementary slackness

There is another way to determine whether a solution to a primal is optimal, using a property called complementary slackness. Consider the first equality in the strong duality theorem, $c^T x^* = b^T y^*$. This can be written as follows:

$$(c - A^T y^*)^T x^* + (y^*)^T (Ax^* - b) = 0$$

Since $y^*$ is feasible, we know that $A^T y^* \ge c$ and therefore $(c - A^T y^*)_j \le 0$ for all $j$. On the other hand, we know that $(Ax^* - b)_i \le 0$ for all $i$ since $x^*$ is also feasible. Since $x^* \ge 0$ and $y^* \ge 0$, the sum is zero while none of its terms can be positive, so every term must be zero, because there are no positive terms to cancel out any negative ones. This means that in order to satisfy the equation above, for every $j$, at least one of $x^*_j$ and $(c - A^T y^*)_j$ must be 0. $(c - A^T y^*)_j = 0$ means that the dual constraint $(A^T y)_j \ge c_j$ is satisfied with equality.

### 2.8 Active constraints, constraint with slack and complementary slackness

Definition. An inequality constraint $a_i^T x \le b_i$ is said to be binding or active if for some given $x$ it is satisfied with equality, i.e. $a_i^T x = b_i$. It is said to have slack if it is not binding.

An analogous argument can be made for $y^*_i$ and $(Ax^* - b)_i$. This yields the following important result:

Theorem (Complementary slackness).
If optimal solutions exist for the primal and dual LPs P and D, then the following holds:

• If the i-th primal constraint has slack, then the i-th dual variable is zero.
• If the j-th dual constraint has slack, then the j-th primal variable is zero.
• If the i-th dual variable is positive, then the i-th primal constraint is binding.
• If the j-th primal variable is positive, then the j-th dual constraint is binding.
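The four statements can be verified numerically on a small optimal pair. The data below is made up; that `x_star` and `y_star` really are optimal follows from strong duality, since their objective values coincide:

```python
import numpy as np

# P: max c^T x  s.t. A x <= b, x >= 0   (toy data, invented for illustration)
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([4.0, 10.0])
c = np.array([3.0, 2.0])

x_star = np.array([4.0, 0.0])   # optimal for P
y_star = np.array([3.0, 0.0])   # optimal for the dual D
assert np.isclose(c @ x_star, b @ y_star)   # strong duality: both equal 12

primal_slack = b - A @ x_star    # [0, 6]: primal constraint 2 has slack
dual_slack = A.T @ y_star - c    # [0, 1]: dual constraint 2 has slack

# Complementary slackness: in each product, at least one factor is zero.
assert np.allclose(primal_slack * y_star, 0)   # slack constraint => y_i = 0
assert np.allclose(dual_slack * x_star, 0)     # slack constraint => x_j = 0
```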

The following table might be useful for visualizing the correspondence of variables in the primal-dual relation:

#### 2.8.1 Complementary slackness matrix

| Primal P (maximize $c^T x$) | Dual D (minimize $b^T y$) |
|---|---|
| $i$-th constraint $a_i^T x \le b_i$ | $i$-th variable $y_i \ge 0$ |
| $j$-th variable $x_j \ge 0$ | $j$-th constraint $(A^T y)_j \ge c_j$ |

Created: 2019-01-22 Tue 19:54
