Operations Research

Introduction to Operations Research

During the Second World War, British leaders asked scientists and engineers to analyze several military problems, such as the deployment of radar and the management of convoy, bombing, and mining operations. The application of mathematics and the scientific method to military operations was called operations research. Operations research, or management science, is a scientific approach to decision making that seeks to determine how best to design and operate a system, usually under conditions requiring the allocation of scarce resources.

History of operations research

Throughout history it is common to find collaboration between scientists and army officers with the same objective: determining the optimal decision in battle. In fact, many experts consider that operations research began in the third century B.C.

In 1503 Leonardo da Vinci took part, as an engineer, in the war against Pisa, since he knew techniques for carrying out bombardments and for constructing ships, armored vehicles, cannons, catapults, and other war machines.

Thomas Edison also made use of operational research, contributing to antisubmarine warfare with ideas such as shields against torpedoes for ships.

During the Berlin blockade, the United States had to decide whether to supply the city by escorted convoys (which could give rise to new confrontations) or by air; the latter was chosen, and the Luftbrücke (airlift) began on 25 June 1948. This was another of the problems on which the SCOOP group worked. In December of that same year the airlift could carry 4,500 tons daily, and after operations research studies the supply operation was optimized to reach 8,000-9,000 tons daily by March 1949. This figure was the same as would have been transported by land, so the Soviets decided to suspend the blockade on 12 May 1949.

After the Second World War, the allocation of the United States' (USA) resources (energy, armaments, and all kinds of supplies) proved well suited to optimization models, solved by means of linear programming.

At the same time that the doctrine of operations research was being developed, computing techniques and computers were also developing; thanks to them, the time needed to solve problems decreased greatly.

The first result of these techniques came in 1952, when a SEAC computer at the National Bureau of Standards was used to obtain a problem's solution. The solution time was so encouraging that the computer was immediately used for all kinds of military problems, such as determining the optimal height at which planes should fly to locate enemy submarines and managing monetary funds for logistics and armament, including determining the depth at which charges should be set to reach enemy submarines so as to cause the largest number of casualties, which translated into a five-fold increase in the Air Force's efficacy.

During the 1950s and 1960s, interest in and development of operational research grew, owing to its applications in commerce and industry. Take, for example, the problem of calculating the optimal plan for transporting construction materials to building sites in the city of Moscow, with 10 origin points and 230 destinations. To solve it, a Strela computer was used in June 1958; the computation took 10 days, and the solution yielded an 11% reduction in expenses relative to the original costs.

Previously, such problems had been treated in a discipline known as business research or business analysis, which did not have methods as effective as those developed during the Second World War (for example, the simplex method). Non-military applications of operations research are as numerous as one can imagine, with problems such as cattle nutrition, distribution of cultivated fields in agriculture, goods transport, facility location, personnel assignment, networks, queueing problems, graphs, etc.

Definition of Operations Research

Operations research is the application of scientific methods by interdisciplinary teams to problems involving the control of organized (man-machine) systems, so as to provide solutions which best serve the purposes of the organization as a whole. As operations research is a large subject, many definitions of operations research have been suggested from time to time. Some of the widely accepted definitions are:

 According to Morse and Kimball:

Operations research is a scientific method of providing executive departments with a quantitative basis for decisions regarding the operation under their control.

According to E.L. Arnoff and M.J. Netzorg:

Operations research is the systematic, method-oriented study of the basic structure, characteristics, functions, and relationships of an organization, undertaken to provide the executive with a sound, scientific, and quantitative basis for decision making.

Scope of Operations Research:

Whenever there is a problem of optimization, there is scope for the application of operations research. It has wide scope in the social, economic, and industrial problems of today, such as:

  1. Industrial Management
  2. Defense Operations
  3. Developing and Development Economics
  4. Agricultural Sector
  5. Business and Society

Operations research is a problem-solving and decision-making science. The areas of management where OR techniques have been successfully applied include:

  1. Production and Facility Planning
  2. Procurement
  3. Marketing
  4. Finance
  5. Personnel

Complexity of operations:

In a big industry the number of factors influencing a decision has increased. For example, consider a factory schedule, which has to take into account:

  1. Customer demand
  2. Requirement of raw materials
  3. Equipment capacity and possibility of equipment failure
  4. Restrictions on manufacturing process

How Operations Research Works:

Operations Research, like all scientific research, is based on scientific methodology, which proceeds along the following lines:

  1. Formulating the problem
  2. Constructing a model to represent the system under study
  3. Deriving a solution from the model
  4. Testing the model and the solution derived from it
  5. Establishing controls over the solutions
  6. Putting the solution to work, i.e. Implementation.

 Models in Operations Research:

A model, as used in operations research, is defined as an idealized representation of a real-life situation; it represents one or a few aspects of reality. Diverse items such as a map, a multiple-activity chart, an autobiography, a PERT (Program Evaluation and Review Technique) network, a break-even equation, or a balance sheet are all models, because each of them represents a few aspects of a real-life situation. A map, for instance, represents physical boundaries but normally ignores the height of the various places above sea level.
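For instance, the break-even equation mentioned above is among the simplest symbolic models. A sketch (the function name and figures are illustrative, not from the text):

```python
def breakeven_units(fixed_cost, unit_price, unit_variable_cost):
    """Break-even model: the sales volume q at which revenue covers cost.

    Solves  unit_price * q = fixed_cost + unit_variable_cost * q  for q.
    """
    return fixed_cost / (unit_price - unit_variable_cost)

# Illustrative figures: 1000 fixed cost, price 15, variable cost 5 per unit.
print(breakeven_units(1000, 15, 5))  # 100.0 units
```

Like the map, the model keeps only the aspect of reality it needs (costs and price) and ignores everything else.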

 Symbolic or Mathematical Models:

Symbolic models employ a set of mathematical symbols (letters, numbers, etc.) to represent the decision variables of the system under study. These variables are related by mathematical equations which describe the properties of the system. A solution of the model is then obtained by applying well-developed mathematical techniques. Symbolic models are used wherever possible, not only because they are easier to manipulate but also because they yield more accurate results. Most of this lecture, therefore, is devoted to the formulation and solution of these mathematical models.

Limitations of Operations Research:

  1. Mathematical models are applicable to only specific categories of problems.
  2. Mathematical models, which are essential to OR, do not take into account qualitative or emotional factors, which are quite real. Influencing factors which cannot be quantified find no place in mathematical models.
  3. Being a new field, generally there is a resistance from the employees to the new proposal.
  4. Management which has to implement the advised proposals may itself offer a lot of resistance due to conventional thinking.
  5. At the implementation stage, the decision cannot be governed by quantitative considerations alone.

Objectives of Operations Research:

Industrial growth has brought with it the need for division of the management function within an organization. Thus every organization contains a number of functional units or departments, each performing a part of the whole job and, for its successful working, developing its own policies and objectives.

These objectives, though in the best interest of the individual departments, may not be in the best interest of the organization as a whole. In fact, the objectives of individual departments may be inconsistent and may clash with each other. In this context, reference may be made to the attitudes of the various departments of a business organization towards its inventory policy.

The application of operations research (OR) methods helps in making decisions in such complex situations. It combines the knowledge of various disciplines such as mathematics, statistics, economics, psychology, and engineering, and the combined effort of all these disciplines helps in analyzing the problem in finer detail.

Two business realities a manager has to face are change and uncertainty. Market demand fluctuates, raw materials and spares become scarce, production equipment fails, or a change in government policy may affect the company's resources drastically or impose restrictions on current manufacturing processes. In such situations past experience can be a guide, though only a rough one. Knowledge of past data and future trends can help the manager assess the risk and make his decisions accordingly. OR can help him here. The statistician will analyze the past data and extrapolate it into the near future, the accountant will estimate the cost associated with any decision, while the engineer will assess the effect of changes in technology, quality of material, and availability of new types of machines.

To summarize, the objective of OR is to provide a scientific basis to the managers of an organization for solving problems involving the interaction of the components of the system, by employing a systems approach carried out by a team of scientists drawn from different disciplines, to find a solution which is in the interest of the organization as a whole.

  Formulating the problem:

It is essential that the problem at hand be clearly defined. It is almost impossible to get the right answer from a wrong problem. The problem may be set out explicitly by the consumer or the sponsoring organization, or may have to be formulated by the OR team. In OR it is not uncommon to start the study with a tentative formulation of the problem.

This may be reformulated over and over again during the study. Efforts should, however, be made to formulate the problem fairly elaborately before starting the research. The problem-formulation phase is generally lengthy, requiring considerable time and effort, but it is quite justified since, unlike other research, OR is a search into the operations of a man-machine organization and must take into consideration the economics of the operations. In formulating a problem for OR study, an analysis must be made of its four major components:

  1. The environment
  2. The decision maker
  3. The objectives
  4. Alternative courses of action and constraints.

Of the four components, the environment is the most comprehensive, since it embraces and provides a setting for the other three. In general, the environment is the framework within which a system of organized activity is directed to attain the prescribed objectives or goals.

Simplex Method:

General Form of Linear Programming Problems: The general form of an LPP is given by

            Maximize (Minimize)  z = c1x1 + c2x2 + c3x3 + ………. + cnxn

            Subject to, a11x1 + a12x2 + a13x3 +  ……………. + a1nxn ( ≤, =, ≥) b1

                            a21x1 + a22x2 + a23x3 + ……………… + a2nxn (≤, =, ≥) b2



                        am1x1 + am2x2 + am3x3 + ……………….. + amnxn (≤, =, ≥) bm

                                    x1, x2, x3 ………………… xn ≥ 0

The constants cj ( j = 1, 2, 3 ….. n) are called cost coefficients; the constants bi ( i = 1, 2, 3 ….. m) in the constraints are called stipulations; and the constants aij (i = 1, 2, 3 …… m and j = 1, 2, 3 ….. n) are called structural coefficients.

Summation Form of Linear Programming Problems: The above formulation can be put in the following compact form by using summation notation:

                        Maximize ( or Minimize)  Z = Σ cjxj   (sum over j = 1, …, n)

Subject to,            Σ aijxj (≤, =, ≥) bi,   i = 1, 2, 3, ………, m

                        And xj ≥ 0, j = 1, 2, 3 ………….. n

 Matrix Form of LPP: The above formulation can be put in the matrix form

                                    Maximize ( or Minimize) Z = cx

                                                Subject to, Ax (≤, =, ≥)b

                                                            And x ≥ 0


where A is called the coefficient matrix, x the decision vector, b the requirement vector, and c the cost (price) vector of the LPP.

Canonical Form of LPP: The canonical form of LPP is defined as

                                    Maximize Z = Σ cjxj   (sum over j = 1, …, n)

                                    Subject to  Σ aijxj ≤ bi,   i = 1, 2, 3, ………., m

                                                and xj ≥ 0, j = 1, 2, 3 ……….. n

The Characteristics of this form are

a)      Objective function is of maximization form

b)      All constraints are of the (≤) type, and

c)      All decision variables are non-negative.

  Example: Convert the LPP to Canonical form:

            Minimize Z  = 60x1 + 40x2

            Subject to,       30x1 + 10x2 ≥ 240

                                    10x1 + 10x2 ≤ 160

                                    20x1 + 60x2 = 480

                                                x1, x2 ≥ 0

Solution: The above LPP can be converted to the Canonical form as:

                                    Maximize G = -60x1 – 40x2

                                    Subject to – 30x1 – 10x2 ≤ -240

                                                    10x1 + 10x2 ≤ 160

                                                    20x1 + 60x2 ≤ 480

                                                    -20x1 -60x2 ≤ -480

                                                       x1, x2 ≥ 0
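Since the problem has only two variables, the optimum can be checked by brute force: an optimum of a bounded feasible LP lies at a vertex, i.e. at an intersection of two constraint boundaries. The sketch below (plain Python, all names mine) enumerates those intersections for the original minimization problem:

```python
from itertools import combinations

# Constraints of the original problem, each written as a1*x1 + a2*x2 (rel) b.
cons = [
    ((30, 10), ">=", 240),
    ((10, 10), "<=", 160),
    ((20, 60), "==", 480),
    ((1, 0), ">=", 0),   # x1 >= 0
    ((0, 1), ">=", 0),   # x2 >= 0
]

def feasible(x1, x2, tol=1e-9):
    """Check every constraint at the point (x1, x2)."""
    for (a1, a2), rel, b in cons:
        v = a1 * x1 + a2 * x2
        if rel == "<=" and v > b + tol:
            return False
        if rel == ">=" and v < b - tol:
            return False
        if rel == "==" and abs(v - b) > tol:
            return False
    return True

def intersect(p, q):
    """Intersection point of the boundary lines of two constraints, if unique."""
    (a11, a12), _, b1 = p
    (a21, a22), _, b2 = q
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        return None   # parallel boundaries
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

best = None
for p, q in combinations(cons, 2):
    pt = intersect(p, q)
    if pt is not None and feasible(*pt):
        z = 60 * pt[0] + 40 * pt[1]
        if best is None or z < best[0]:
            best = (z, pt)

print(best)  # minimal cost and the point attaining it
```

The canonical form above, being an equivalent restatement, has the same optimum: x1 = x2 = 6 with minimum cost z = 600 (hence maximum G = -600).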


Convert the following LPP to standard form:

                                                            Maximize        Z = 3x1 + 2x2

                                                            Subject to        2x1 + x2 ≤ 2

                                                                                    3x1 + 4x2 ≥ 12

                                                                                    x1, x2 ≥ 0

Solution: Introducing the slack and surplus variables, the standard form can be written as:

                        Maximize Z = 3x1 + 2x2

                        Subject to, 2x1 + x2 + s1 = 2

                                    3x1 + 4x2 – s2 = 12

                                    x1, x2, s1, s2 ≥ 0
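The mechanical part of this conversion is easy to automate. A small sketch (the function name is mine) that appends one slack (+1) or surplus (-1) column per constraint:

```python
def to_standard(A, rels, b):
    """Turn the rows A x (rel) b into equalities by appending extra columns.

    rels[i] is "<=" (add a slack, coefficient +1) or ">=" (subtract a
    surplus, coefficient -1). Returns the augmented coefficient matrix;
    the right-hand side b is unchanged.
    """
    m = len(A)
    aug = []
    for i, row in enumerate(A):
        extra = [0] * m
        extra[i] = 1 if rels[i] == "<=" else -1
        aug.append(row + extra)
    return aug

# The example above: 2x1 + x2 <= 2 gains slack s1; 3x1 + 4x2 >= 12 loses s2.
print(to_standard([[2, 1], [3, 4]], ["<=", ">="], [2, 12]))
```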

Standard LPP in Matrix Form

Problem: Express the following LPP in standard matrix form:

                        Maximize Z = 2x1 + 3x2

                        Subject to, x1 + x2 ≥ 5

                                        x1 + 2x2 = 7

                                        5x1 – 2x2 ≤ 9

                                       x1, x2 ≥ 0

First we consider the LPP in standard form:

Maximize z = 2x1 + 3x2 + 0x3 – Mx4 – Mx5 + 0x6

Subject to,

                        x1 + x2 – x3 + x4 + 0 + 0 = 5

                        x1 + 2x2 + 0 + 0 + x5 + 0 = 7

                        5x1 – 2x2 + 0 + 0 + 0 + x6 = 9

                        x1, x2, x3, x4, x5, x6 ≥ 0

where x3 is the surplus variable, x6 is the slack variable, and x4 and x5 are the artificial variables.

Working rule for solving a linear programming problem by the one-phase simplex method:

The matrix form of the standard LPP is:

Maximize (or Minimize) Z = CX

Subject to, (A, I) X = b

and X ≥ 0


For the problem above,

Maximize z = (2, 3, 0, -M, -M, 0) X

Thus we can say, X = (x1, x2, x3, x4, x5, x6)ᵀ,  C = (2, 3, 0, -M, -M, 0),  b = (5, 7, 9)ᵀ and

A = | 1    1   -1    1    0    0 |
    | 1    2    0    0    1    0 |
    | 5   -2    0    0    0    1 |

Step-1: Verify whether the objective function of the given problem is to be maximized or minimized. If it is to be minimized, transform it into a maximization problem by using Max z = – Min (–z).

Step-2: Verify whether all bi ≥ 0. If any bi is negative, multiply the corresponding constraint by –1 to make all bi ≥ 0.

Step-3: Transform all the constraints into equations by introducing slack and surplus variables.

Step-4: Add artificial variables if needed. The given L.P.P. then reduces to the standard form Ax = b.

Step-5: Construct the simplex table.

Step-6: Compute the net evaluations zj – cj (or Ej – cj) by using the relation Ej – cj = cB yj – cj, where yj is the j-th column of A.

Step-7: If every Ej – cj ≥ 0, then the initial basic feasible solution is an optimum basic feasible solution.

(i) If Ej – cj = 0 for at least one non-basic column vector and at least one yij is positive, then by entering that vector into the basis we get an alternative optimal basic feasible solution.

(ii) If Ej – cj = 0 for at least one non-basic column vector and all yij ≤ 0, then we get an alternative non-basic optimal solution.

(iii) If Ej – cj > 0 for all non-basic column vectors, then the problem has a unique solution.

Step-8: If at least one Ej – cj is negative, proceed to the next step. If there is more than one negative value of Ej – cj, select the most negative of them; let it be Ep – cp for some j = p.

(i) If all yip ≤ 0, then the solution is unbounded.

(ii) If at least one yip is positive, then the corresponding vector yp enters the basis.

(iii) If two or more columns tie for the most negative Ej – cj, select any one of them.

(iv) If a decision variable and a slack variable tie, the decision variable is preferred.

Step-9: Compute the ratios xBi / yip for all yip > 0 and select the minimum of them. Let the minimum of these ratios be xBk / ykp; then the column vector yk will leave the basis yB. The element ykp is known as the leading element (or pivot element, or key element).

Step-10: Convert the leading element to unity by dividing its row by the leading element itself, and reduce all other elements in its column to zero by using the relations

            ŷkj = ykj / ykp   and   ŷij = yij – yip ŷkj   (i ≠ k).


Step-11: Now go to Step-6 and repeat until either an optimum solution is obtained or there is an indication of an unbounded solution.
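The steps above can be sketched as a compact tableau implementation. This is a minimal one-phase version that assumes a maximization problem with all constraints of the ≤ type and b ≥ 0 (so the slack basis is an initial basic feasible solution and no artificial variables are needed); the function name and the sample data are mine:

```python
def simplex_max(c, A, b):
    """One-phase simplex sketch: maximize c.x subject to A x <= b, x >= 0.

    Assumes every b[i] >= 0, so the slack variables form an initial
    basic feasible solution.
    """
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; bottom row holds reduced costs (-c) and z.
    T = [A[i][:] + [1 if j == i else 0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-cj for cj in c] + [0] * m + [0])
    basis = [n + i for i in range(m)]
    while True:
        # Entering column: most negative reduced cost (Step 8).
        p = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][p] >= -1e-9:
            break  # all Ej - cj >= 0: optimal (Step 7)
        # Leaving row: minimum ratio test (Step 9).
        ratios = [(T[i][-1] / T[i][p], i) for i in range(m) if T[i][p] > 1e-9]
        if not ratios:
            raise ValueError("unbounded problem")
        _, k = min(ratios)
        basis[k] = p
        # Pivot: make column p a unit column (Step 10).
        piv = T[k][p]
        T[k] = [v / piv for v in T[k]]
        for i in range(m + 1):
            if i != k and T[i][p] != 0:
                f = T[i][p]
                T[i] = [T[i][j] - f * T[k][j] for j in range(n + m + 1)]
    x = [0.0] * n
    for row, var in enumerate(basis):
        if var < n:
            x[var] = T[row][-1]
    return T[-1][-1], x

# Hypothetical example: maximize 3x1 + 2x2 s.t. x1 + x2 <= 4, x1 + 3x2 <= 6.
z, x = simplex_max([3, 2], [[1, 1], [1, 3]], [4, 6])
print(z, x)
```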

 The Canonical form of L.P. Problem:

The general L.P.P. can always be put in the following form, called the canonical form:

            Maximize z = c1x1 + c2x2 + ………. + cnxn

            Subject to  ai1x1 + ai2x2 + ………. + ainxn ≤ bi,   i = 1, 2, ….., m

                        and xj ≥ 0,   j = 1, 2, ….., n

The characteristics of this form are

            (a) All decision variables are non-negative,

            (b) All constraints are of the ≤ type, and

            (c) The objective function is of the maximization type.

Any L.P.P. can be put in the canonical form by the use of the following elementary transformations:

(1) The minimization of a function is equivalent to the maximization of its negative. For example, the linear objective function

            Minimize z = c1x1 + c2x2 + ………. + cnxn

is equivalent to

            Maximize z′ = – c1x1 – c2x2 – ………. – cnxn,   with z′ = – z.

Therefore for all L.P.P. the objective function can be expressed in the maximization form.

(2) An inequality in one direction can be changed to an inequality in the opposite direction by multiplying both sides of the inequality by (–1).

(3) An equation may be replaced by two inequalities in opposite directions.

(4) If a decision variable is unrestricted, it is expressed as the difference between two non-negative variables.
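Transformations (2) and (3) are mechanical enough to automate. A small sketch (the function name is mine) that rewrites any constraint using only ≤-type rows:

```python
def to_leq(coeffs, rel, rhs):
    """Rewrite coeffs.x (rel) rhs using only <= rows.

    ">=" is flipped by multiplying through by -1 (rule 2);
    "==" becomes a <= / >= pair, with the >= half then flipped (rule 3).
    """
    neg = [-a for a in coeffs]
    if rel == "<=":
        return [(coeffs, rhs)]
    if rel == ">=":
        return [(neg, -rhs)]
    return [(coeffs, rhs), (neg, -rhs)]  # "=="

# The equality 20x1 + 60x2 = 480 from the earlier example becomes two rows:
print(to_leq([20, 60], "==", 480))
```

The output matches the pair of rows produced by hand in the canonical-form example above.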

The Standard form of L.P.P

The characteristics of the standard form are

(1) All the constraints are expressed as equations, except the non-negativity constraints, which remain inequalities

(2) The R.H.S of each constraints equation is non-negative

(3) All the decision variables are non-negative

(4) The objective function is of the maximization or minimization type

The inequality constraints are changed to equality constraints by adding or subtracting non-negative variables, known as slack variables (when added) and surplus variables (when subtracted).

Express the following L.P.P into standard form



Solution: Introducing slack and surplus variables, the problem in standard form can be expressed as



(2) Express the following L.P.P in the standard form

(1) If there are m equality constraints and m + n is the number of variables, a start toward the optimal solution is made by putting n unknowns (out of the m + n unknowns) equal to zero and then solving the m equations in the remaining m unknowns, provided that the solution exists and is unique.

The n zero variables are called non-basic variables and the remaining m variables are called basic variables, which form a basic solution. If the solution yields all non-negative basic variables, it is called a basic feasible solution; otherwise it is infeasible. This step reduces the number of alternatives for the optimal solution from infinite to a finite number, whose maximum limit is (m + n)! / (m! n!).

The resulting number of alternative solutions is still too large to be computationally feasible and is reduced by the second condition.

(2) We know that in a linear programming problem all the variables must be non-negative. Since the basic solutions selected by condition (1) above are not necessarily non-negative, the number of alternatives can be further reduced by eliminating all infeasible basic solutions (solutions having variables less than zero).

In the simplex method this is achieved by starting with a basic solution which is non-negative. A condition called the feasibility condition is then provided which ensures that the next basic solution, selected from all the possible basic solutions, is always feasible. This solution is called a basic feasible solution. If all the basic variables are greater than zero, the solution is called non-degenerate; if some of them are zero, the solution is called degenerate. A new basic feasible solution can be obtained from a previous one by setting one of the m basic variables equal to zero and replacing it with a new non-basic variable. The basic variable set equal to zero is called the "leaving variable", while the new one is called the "entering variable".

(3) The entering variable is selected so that it improves the value of the objective function, making the new solution better than the previous one. This is achieved by the use of another condition, called the optimality condition, which selects the entering variable that produces the largest per-unit gain in the objective function. This procedure is repeated successively until no further improvement in the value of the objective function is possible. The final solution is then called an optimal basic feasible solution, or simply an optimal solution.

This is the solution which satisfies the objective function equation, the constraints, as well as the non-negativity conditions. This is, of course, true only if the objective function has a finite value.

The foregoing discussion shows that the simplex method is principally a screening process, since it eliminates solutions that are not promising candidates for the optimal solution.

It will not be out of place to further elaborate a few important terms with reference to equations of the general linear programming problem.

 Introduction to the Simplex Method

Simplex Method: An algebraic, iterative method to solve linear programming problems.

The simplex method identifies an initial basic solution (corner point) and then systematically moves to an adjacent basic solution which improves the value of the objective function. Eventually, this new basic solution will be the optimal solution.

The simplex method requires that the problem be expressed as a standard LP problem (see the section on the standard form). This implies that all the constraints are expressed as equations by adding slack or surplus variables.

The method uses Gaussian elimination (the sweep-out method) to solve the linear simultaneous equations generated in the process.


 Solution: A solution is any specification of values for the decision variables.

Feasible solution: A feasible Solution is a solution for which all the constraints are satisfied

Feasible Region: The feasible region is the set of all feasible solutions.

Optimal Solution: An optimal Solution is a feasible solution that has the most favorable value of the objective function.

Non-Basic Variables: If there are m equality constraints and n variables (n > m), we start toward the optimal solution by putting n – m of the unknowns equal to zero. These n – m variables are called non-basic variables.

Basic Solution: A basic solution is a solution for the m basic variables obtained when each non-basic variable is set to zero.

The maximum number of possible basic solutions for m equations and n unknowns is nCm = n! / (m! (n – m)!).

Basic Feasible Solution: A basic feasible solution is a basic solution for which all the basic variables are greater than or equal to zero.

Non-degenerate Basic Feasible Solution: If all the basic variables are greater than zero, the solution is called non-degenerate.

 Degenerate Solution: If one or more basic variables are zero, the solution is called degenerate.
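The definitions above can be made concrete by enumerating every basic solution of a small system. The 2-equation, 3-unknown system below is my own illustration: each choice of m = 2 basic variables yields one candidate basic solution, of which only some are feasible.

```python
from itertools import combinations

# Illustrative system (not from the text):
#   x1 + 2x2 +  x3 = 4
#  2x1 +  x2 + 5x3 = 5
A = [[1, 2, 1], [2, 1, 5]]
b = [4, 5]
n, m = 3, 2

def solve2(i, j):
    """Solve the 2x2 system in variables i and j (the others set to zero)."""
    a, c = A[0][i], A[0][j]
    d, e = A[1][i], A[1][j]
    det = a * e - c * d
    if abs(det) < 1e-12:
        return None
    return ((b[0] * e - b[1] * c) / det, (a * b[1] - d * b[0]) / det)

feasible = []
for i, j in combinations(range(n), m):   # at most nCm = 3 basic solutions
    sol = solve2(i, j)
    if sol is None:
        continue
    x = [0.0] * n
    x[i], x[j] = sol
    if min(x) >= 0:                      # basic AND feasible
        feasible.append(x)

print(feasible)  # the basic feasible solutions of the system
```

Here every basic feasible solution has both basic variables strictly positive, so each is non-degenerate; a zero basic variable would make it degenerate.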

Slack variables:

Consider an inequality of the form

            ai1x1 + ai2x2 + ………. + ainxn ≤ bi  (bi ≥ 0);  i = 1, 2, ………, m

This constraint is equivalent to the equality constraint,

            ai1x1 + ai2x2 + ………. + ainxn + yi = bi;  i = 1, 2, ………, m,

where yi ≥ 0 is called the slack variable.

Surplus variables: In a linear programming problem, if a constraint is of the (≥) type, a non-negative variable subtracted from the left-hand side of the inequality turns it into an equation.

 Mathematically: Consider an L.P.P.

            Max or Min Z = c1x1 + c2x2 + ………. + cnxn

            Subject to  ai1x1 + ai2x2 + ………. + ainxn ≥ bi;  i = 1, 2, 3, ……, m

If we write  ai1x1 + ai2x2 + ………. + ainxn – si = bi  with si ≥ 0, then the non-negative variables si are called surplus variables.

Introduction to Linear Programming & Basic Theorems

Problem : Find all the basic feasible solutions of the equations

Solution: The given system of equations can be presented in the following matrix form


Given the order of A, we can take any of the following submatrices as our basic matrix B

The variables not associated with the first submatrix are the non-basic variables. The basic solution to the given system is obtained by setting them to zero and solving the resulting system.

Also, in each of the remaining three basic feasible solutions at least one of the basic variables is zero.

Problem: Show, using matrix-vector notation, that the following system of linear equations has degenerate solutions.

Dual Simplex Method

Definition: Primal and dual problems in L.P. The L.P. model we develop for a situation is referred to as the primal problem.

 The dual problem in L.P. is defined directly and systematically from the original (primal) L.P. model. The two problems are so closely related that the optimal solution of one problem automatically yields the optimal solution to the other.

Rules of correspondence between primal and dual:

(a) If the primal contains n variables and m constraints, the dual will contain m variables and n constraints.

(b) The maximization problem in the primal becomes the minimization problem in the dual, and vice-versa.

(c) The maximization problem has '≤' type constraints while the minimization problem has '≥' type constraints.

(d) The constants c1, c2 ……………, cn in the objective function of the primal appear in the constraints of the dual.

(e) The variables in both problems are non-negative.
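For the canonical case these rules reduce to transposing the data: the dual of "maximize c·x subject to Ax ≤ b, x ≥ 0" is "minimize b·y subject to Aᵀy ≥ c, y ≥ 0". A minimal sketch (the function name and the numbers are mine):

```python
def dual_of_canonical(c, A, b):
    """Dual of: max c.x s.t. A x <= b, x >= 0,
    which is:   min b.y s.t. A^T y >= c, y >= 0.

    Returns the dual's (objective coefficients, constraint matrix, RHS).
    """
    At = [[A[i][j] for i in range(len(A))] for j in range(len(c))]
    return b, At, c

# Hypothetical primal: max 3x1 + 2x2  s.t.  x1 + 2x2 <= 4,  3x1 + 4x2 <= 6.
# Its dual:            min 4y1 + 6y2  s.t.  y1 + 3y2 >= 3,  2y1 + 4y2 >= 2.
print(dual_of_canonical([3, 2], [[1, 2], [3, 4]], [4, 6]))
```

Note how rule (a) appears here: the primal's 2 constraints become the dual's 2 variables, and its 2 variables become the dual's 2 constraints.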

 That is, the following correspondence rules may be applied:

Primal                                              Dual

(a) Objective function: Min zP                      (a) Objective function: Max zD

(b) Requirement vector                              (b) Price vector

(c) Coefficient matrix                              (c) Transpose of the coefficient matrix

(d) Constraints with ≥ sign                         (d) Constraints with ≤ sign

(e) i-th inequality                                 (e) wi ≥ 0

(f) i-th equality                                   (f) wi unrestricted in sign

(g) xj ≥ 0                                          (g) j-th inequality

(h) xj unrestricted                                 (h) j-th constraint equation

(i) Finite optimal solution                         (i) Finite optimal solution with equal value of the objective function

(j) Unbounded solution                              (j) No feasible solution or unbounded solution

General Proof: As the k-th constraint of the primal is an equation, the primal in standard form can be written as

            Maximize, z = c1x1 + c2x2 + ………………………. + cnxn

            Subject to        a11x1 + a12x2 + …………. + a1n xn ≤ b1

                       a21x1 + a22x2 + ………. + a2nxn ≤ b2


                        ak1x1 + ak2x2 + …………… + aknxn ≤ bk

                        – ak1x1 – ak2x2 …………….. – aknxn ≤ – bk

                        ak+1, 1 x1 + ak+1, 2 x2 + ………. + ak+1, n. xn ≤ bk+1


                        am1 x1 + am2 x2 + ……………….. + amn xn ≤ bm

                        x1, x2, ……………, xn ≥ 0.

The dual of the above problem can be written as:

Minimize W = b1y1 + b2y2 + … + bk(yk′ – yk″) + bk+1yk+1 + … + bmym

Subject to

a11y1 + a21y2 + … + ak1(yk′ – yk″) + ak+1,1yk+1 + …. + am1ym ≥ c1

a12y1 + a22y2 + …. + ak2(yk′ – yk″) + ak+1,2yk+1 + …. + am2ym ≥ c2


a1ny1 + a2ny2 + … + akn(yk′ – yk″) + ak+1,nyk+1 + …. + amnym ≥ cn

Putting yk′ – yk″ = yk, the dual problem can be written as:

Minimize, W = b1y1 + b2y2 + ………. + bkyk + ….. + bmym

Subject to

            a11y1 + a21y2 + …… + ak1yk + …… + am1ym ≥ c1

            a12y1 + a22y2 + ……. + ak2yk + ……. + am2ym ≥ c2


            a1ny1 + a2ny2 + …… + aknyk + ……….. + amnym ≥ cn

and y1, y2, ………. yk-1, yk+1, …….. ym ≥ 0, but yk is unrestricted in sign, as yk is expressed as the difference of the two non-negative variables yk′ and yk″.

            Hence the theorem is proved.

Example: Construct the dual of the problem

Maximize z = 3x1 + 17x2 + 19x3

Subject to         x1 – x2 + x3 ≥ 3

                        – 3x1 + 2x3 ≤ 1

                        2x1 + x2 – 5x3 = 1

                        x1, x2, x3 ≥ 0


As the problem is one of maximization, all constraints must be expressed in "≤" form.

(a) The first inequality, being of "≥" type, is multiplied by (–1):

                                    i.e. –x1 + x2 – x3 ≤ -3

(b) The third constraint, of equality form, is replaced by a pair of inequalities:

                                                2x1 + x2 – 5x3 ≤ 1

                                       and   2x1 + x2 – 5x3 ≥ 1

                                                i.e. 2x1 + x2 – 5x3 ≤ 1

                                       and  -2x1 – x2 + 5x3 ≤ – 1

Thus, the given problem in symmetric form may be expressed as

Maximize        z = 3x1 + 17x2 + 19x3

            Subject to        – x1 + x2 – x3 ≤ -3

                                    -3x1 + 2x3 ≤ 1

                                    2x1 + x2 – 5x3 ≤ 1

                                    – 2x1 – x2 + 5x3 ≤ -1

                                    x1,  x2,  x3 ≥ 0

Let y1, y2, y3′ and y3″ be the associated non-negative dual variables. Then the dual is

Minimize         W = -3y1 + y2 + y3′ – y3″

Subject to        -y1 – 3y2 + 2y3′ – 2y3″ ≥ 3

                        y1                     + y3′ – y3″ ≥ 17

                        -y1 + 2y2 – 5y3′ + 5y3″ ≥ 19

Substituting y3 = y3′ – y3″, where y3 is unrestricted in sign, the dual becomes

Minimize W = – 3y1 + y2 + y3

Subject to         – y1 – 3y2 + 2y3 ≥ 3

                           y1                  + y3 ≥ 17

                           – y1 + 2y2 – 5y3 ≥ 19

                           y1, y2 ≥ 0, y3 is unrestricted in sign.

Theorem: The dual of the dual is the primal.

Proof: Let the primal problem in canonical form be

Maximize        z = c1x1 + c2x2 + ………  + cnxn

Subject to,       a11x1 + a12x2 + ………… + a1nxn ≤ b1

                        a21x1 + a22x2 + ……….. + a2nxn ≤ b2



                        am1x1 + am2x2 + ……………… + amnxn ≤ bm

                        where x1, x2 …………….. xn ≥ 0

Then the dual of this problem is

Minimize         w = b1y1 + b2y2 + ……….. + bmym

Subject to,       a11y1 + a21y2 + ………….. + am1ym ≥ c1

                        a12y1 + a22y2 + …………. + am2ym ≥ c2


                        a1ny1 + a2ny2 + ………… + amnym ≥ cn

            y1 ≥ 0, y2 ≥ 0 …………….. ym ≥ 0

Writing this dual as a maximization problem (maximizing –w, with each ≥ constraint multiplied by –1 to become ≤ type) and forming its dual by the same rules recovers the original primal problem. Hence the dual of the dual is the primal.

Theorem: State and prove Weak Duality theorem in linear programming.

 Statement: Any feasible solution to the primal problem (P) has a value z less than or equal to the value W of any feasible solution to the dual problem (D),

                                    i.e     z ≤ W.

Proof: The general linear programming problem (primal) involving n decision variables and m constraints in canonical form is given by the relations

(Primal) maximize    z = c1x1 + c2x2 + …….. + cnxn

Subject to                    a11x1 + a12x2 + ……….. + a1nxn ≤ b1

                                    a21x1 + a22x2 + ……. + a2nxn ≤ b2

                                    ——————————————                              ………. (1)


                                    am1x1 + am2x2 + …………. + amn xn ≤ bm

            where             x1, x2, ……… xn ≥ 0

            (Dual) Minimize   W = b1y1 + b2y2 + ……….. bmym

            Subject to        a11y1 + a21y2 + ………. + am1ym ≥ c1

                                    a12y1 + a22y2 + …………. am2ym ≥ c2

                                    ———————————————-                ………… (2)


                                    a1ny1 + a2ny2 + …………… amnym ≥ cn

            Where      y1, y2 ……. ym ≥ 0

Multiplying the 1st inequality of (1) by y1, the 2nd inequality of (1) by y2, etc., and adding them all, we get,

 (a11x1y1 + a12x2y1 + ………… a1nxny1) + (a21x1y2 + a22x2y2 + ……….. + a2nxny2) + …….. + (am1x1ym + am2 x2ym + ………. + amnxnym) ≤ b1y1 + b2y2 +

……….. bmym ;      ………… (3)

Similarly, multiplying the 1st inequality of (2) by x1, the 2nd inequality by x2, etc., and adding them all, we get,

(a11x1y1 +a21x1y2 + ……… am1x1ym) + (a12x2y1 + a22x2y2 + ………. am2x2ym) + …… (a1nxny1 + a2nxny2 + …….. + amnxnym) ≥ c1x1 + c2x2 + ………. cnxn ; …………. (4)

Now the sums on the left-hand side of inequalities (3) and (4) are equal. Hence,

                        c1x1 + c2x2 + ….. + cnxn ≤ b1y1 + b2y2 + …….. + bmym

i.e.     z ≤ W, which proves the theorem.
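The inequality z ≤ W can be checked empirically: sample random non-negative points, keep only those feasible for the primal and the dual respectively, and compare objective values. A minimal sketch (the problem data is illustrative, not from the text; numpy is assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative primal data: max cx  s.t.  Ax <= b, x >= 0
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([3.0, 4.0])

for _ in range(1000):
    x = rng.uniform(0, 2, size=2)   # candidate primal point
    y = rng.uniform(0, 5, size=2)   # candidate dual point
    if np.all(A @ x <= b) and np.all(A.T @ y >= c):   # both feasible
        assert c @ x <= b @ y + 1e-12                 # weak duality: z <= W
print("weak duality held for every sampled feasible pair")
```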

Theorem: If the k-th constraint of the primal is an equation, the corresponding dual variable is unrestricted in sign.

Proof: As the k-th of the m constraints of the primal is an equation, the primal in the standard form can be written as

Maximize                      z = c1x1 + c2x2 + …….. + cnxn

Subject to                    a11x1 + a12x2 + ……….. + a1nxn ≤ b1

                                    a21x1 + a22x2 + ……. + a2nxn ≤ b2


                                    ak1x1 + ak2x2 + ……………. + aknxn ≤ bk

                                    – ak1 x1 – ak2x2 ……….. –aknxn ≤ – bk

                                    ak+1, 1 x1 + ak+1, 2 x2 + …….. + ak+1, n. xn ≤ bk+1


                                    am1 x1 + am2 x2 + ……………. + amn xn ≤ bm

                                    x1, x2, ……………………. , xn ≥ 0.

The dual of the above problem can be written as.

Minimize W = b1y1 + b2y2 + .. + bk(yk’ – yk”) + bk+1 yk+1 …. + bmym

Subject to

a11y1 + a21y2 + .. + ak1(yk’ – yk”) + ak+1, 1yk+1 + …. + am1 ym ≥ c1.

a12y1 + a22y2 + ….. + ak2(yk’ – yk”) + ak+1, 2yk+1 + …. + am2ym ≥ c2


a1ny1 + a2ny2 + …. + akn (yk’ – yk”) + ak+1, nyk+1 + …. + amn ym ≥ cn

Putting yk’ – yk” = yk, the dual problem can be written as,

Minimize, W = b1y1 + b2y2 + ……….. + bkyk + …………. bmym

Subject to

            a11y1 + a21y2 + ……. + ak1yk + ……… + am1ym ≥ c1

            a12y1 + a22y2 + …….. + ak2yk + ……… + am2ym ≥ c2


a1n y1 + a2ny2 + ……….. + akn yk + ……… + amnym ≥ cn

and y1, y2, ……… yk-1, yk+1, …….. ym ≥ 0, but yk is unrestricted in sign as yk is expressed as the difference of two non-negative variables, yk’ and yk”.

            Hence the theorem is proved.

Theorem: State and prove Complementary slackness theorem.

Statement: Let x° and y° be feasible for the primal problem

(P) Maximize       z = cx

Subject to             Ax ≤ b

                             x ≥ 0

            and the dual problem (D) Minimize    W = yb

                                                     Subject to    yA ≥ c

                                                                            y ≥ 0    respectively.

Then x° and y° are optimal to their respective problems, if and

only if (y°A – c) x° + y° (b – Ax°) = 0

Proof: Let the column vector u = (u1, u2,  ………… um) T represent the slack vector for the primal and the row vector

            v = ( v1, v2, ………. , vn) be the surplus vector for the dual problem. Since x° and y° are feasible solutions,

            we have Ax° + u° = b;                        x°, u° ≥ 0 …………….. (i)  [since Ax ≤ b]

            and       y°A – v° = c;                         y°, v° ≥ 0 …………… (ii)  [since yA ≥ c]

            (u° and v° represent the values of the slack and surplus variables u and v corresponding to the feasible solutions x° and y°.)

            Multiplying equation (i) by y° and (ii) by x°, we get,

                        y° Ax° + y°  u°  = y° b ……………. (iii)

            and       y° Ax° – v°x°    = cx° …………….. (iv)

            Subtracting (iv) from (iii), we obtain

            y°u° + v°x° = y°b – cx° …………… (v)

To prove this theorem, we have to show that x° and y° are optimal solutions to the primal and dual problems

            iff v°x° + y°u° = 0 ………… (vi)

Part- 1. We will assume that x° and y° are optimal solutions.

Then by the main duality theorem, cx° = y°b.

            Then equation (v) gives         y°u° + v°x° = y°b – y°b

                                         i.e.            y°u° + v°x°  = 0

Part- 2. Let us suppose that equation (vi) is true.

Then equation (v) implies that y°b – cx° = 0

                                         i.e.            y°b = cx°

Since x° and y° are feasible solutions of the primal and dual problems respectively, by the optimality criterion theorem, x° and y° are the optimal solutions of the respective problems.

            Now, (iii)  y°b – y°Ax° = y°u° …………. (vii)

            (iv)  y°Ax° – cx° = v°x° ………… (viii)

            Adding (vii) and (viii), we get

            (y° A – c) x° + y° (b – Ax°) = v° x° + y° u°

             ⇒ (y° A – c) x° + y° (b – Ax°) = 0    [by (vi)]

            Hence the theorem is proved.
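The complementary slackness condition can be verified numerically on a small LP by solving the primal and the dual separately and evaluating (y°A − c)x° + y°(b − Ax°) at the optima. A minimal sketch (the LP data below is illustrative, not from the text; scipy is assumed available):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative primal: max 3x1 + 5x2
# s.t. x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0
c = np.array([3.0, 5.0])
A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [3.0, 2.0]])
b = np.array([4.0, 12.0, 18.0])

# linprog minimizes, so negate c for the primal
p = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2, method="highs")
# Dual: min by  s.t.  A^T y >= c, y >= 0   ->  -A^T y <= -c
d = linprog(b, A_ub=-A.T, b_ub=-c, bounds=[(0, None)] * 3, method="highs")

x0, y0 = p.x, d.x
# Complementary slackness expression from the theorem
cs = (y0 @ A - c) @ x0 + y0 @ (b - A @ x0)
assert abs(cs) < 1e-8
assert abs(-p.fun - d.fun) < 1e-8   # equal optimal values (main duality)
print("CS expression at optimum:", cs)
```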

 Construct the dual of the following primal problem:

                                        Max z  = 5x1 +12x2 + 4x3

                                     S/t,         x1 + 2x2 + x3 ≤ 10

                                                  2x1 – x2 + 3 x3 = 8

                                                            x1, x2,  x3 ≥ 0.


Primal L.P.P in equation form:

                                                Max Z = 5x1 + 12 x2 + 4 x3

                                            S/t,              x1 + 2x2 + x3 + x4 = 10

                                                           2 x1 – x2 + 3x3 + 0.x4 = 8

                                                                 x1,    x2,    x3,     x4 ≥ 0

 Dual Problem:

                                         Min W     = 10y1 + 8y2

                                      S/t,                y1 +2y2 ≥5

                                                           2y1 – y2 ≥ 12

                                                            y1 + 3y2 ≥ 4

                               y1  ≥ 0,       y2 is unrestricted in sign.
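This primal–dual pair can be cross-checked with an LP solver: the two optimal values coincide, and the dual variable y2 attached to the equality constraint does come out negative, which is only possible because it is unrestricted in sign. A sketch assuming scipy is available:

```python
from scipy.optimize import linprog

# Primal: max 5x1 + 12x2 + 4x3  (linprog minimizes, so negate c)
p = linprog(c=[-5, -12, -4],
            A_ub=[[1, 2, 1]], b_ub=[10],       # x1 + 2x2 + x3 <= 10
            A_eq=[[2, -1, 3]], b_eq=[8],       # 2x1 - x2 + 3x3 = 8
            bounds=[(0, None)] * 3, method="highs")

# Dual: min 10y1 + 8y2, y1 >= 0, y2 unrestricted
d = linprog(c=[10, 8],
            A_ub=[[-1, -2], [-2, 1], [-1, -3]],  # >= constraints negated
            b_ub=[-5, -12, -4],
            bounds=[(0, None), (None, None)], method="highs")

assert abs(-p.fun - d.fun) < 1e-8   # equal optimal values
assert d.x[1] < 0                   # y2 is negative -- allowed, y2 is free
print("z* =", -p.fun, " y =", d.x)
```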

 Problem: Construct the dual of the primal problem

                          Max  Z = 2x1 + x2 + x3

                             S/t,        x1 + x2 + x3 ≥ 6

                                       3x1 – 2x2 + 3x3 = 3

                                     – 4x1 + 3x2 – 6x3 = 1

                                           x1,    x2,         x3 ≥ 0

Primal L.P.P in equation form:

Max Z = 2x1 + x2 + x3 + 0.x4

             S/t,        x1 + x2 + x3 – x4 = 6

                         3x1 – 2x2 + 3x3 = 3

                       – 4x1 + 3x2 – 6x3 = 1

                        x1,        x2,    x3,     x4  ≥ 0

 Dual Problem:                                         Min W = 6y1 + 3y2 + y3

              S/t,         y1 + 3y2 – 4y3 ≥ 2

                            y1 – 2y2 + 3y3  ≥ 1

                            y1 + 3y2 – 6y3 ≥ 1

            For x4:     – y1 + 0.y2 + 0.y3 ≥ 0

                            i.e.   – y1 ≥ 0

            Putting w1 = – y1, we have w1 ≥ 0.

∴ Dual problem becomes,

          Min W = – 6w1 + 3y2 + y3

            S/t,         – w1 + 3y2 – 4y3 ≥ 2

                          – w1 – 2y2 + 3y3 ≥ 1

                          – w1 + 3y2 – 6y3 ≥ 1

                                      w1 ≥ 0

            y2,       y3 are unrestricted in sign

Problem: Solve by dual simplex method the following problem.

                        Min Z = 2x1 + 2x2 + 4x3

                         S/t,        2x1 + 3x2 + 5x3 ≥ 2

                                      3x1 + x2 + 7x3 ≤ 3

                                       x1 + 4x2 + 6x3 ≤ 5

                                     x1,     x2,    x3    ≥ 0.

Solution: The given minimization problem is converted into a maximization problem by writing

                        Max G   = – 2x1 – 2x2 – 4x3

                                 S/t,   – 2x1 – 3x2 – 5x3 ≤ – 2

                                          3x1 + x2 + 7x3 ≤ 3

                                        x1 + 4x2 + 6x3 ≤ 5

                                             x1,    x2,    x3 ≥ 0

Introducing the slack variables S1, S2, S3, we get

                 Max G = – 2x1 – 2x2 – 4x3 +0.S1 +0S2 +0.S3

                    S/t,       – 2x1 – 3x2 – 5x3 + S1 = – 2

                                 3x1 + x2 + 7x3 + 0.S1 + S2 = 3

                                 x1 + 4x2 + 6x3 + 0.S1 + 0.S2 + S3 = 5

                                 x1,      x2,    x3,      S1,      S2,     S3  ≥ 0

Now express the above information in the simplex tableau form:

                     Cj             – 2            – 2            – 4           0           0             0
   CB             Basic           x1            x2             x3          S1          S2           S3            b
   0                S1              -2            (-3)            -5           1            0             0           -2   ← key row
   0                S2               3              1               7           0            1             0            3
   0                S3               1              4               6           0            0             1            5
                     Ej               0              0               0           0            0             0            0
                Cj-Ej              -2             -2             -4           0            0             0

                                                   ↑ key column

The initial basic solution is  (x1, x2, x3, S1, S2, S3 )

                                                      = ( 0, 0, 0, -2, 3, 5 )

                                                which is infeasible since S1 is negative.

As all Cj – Ej are either negative or zero and b1 is negative, the solution is optimal but infeasible.

As b1 = -2, the first row is the key row.

Find the ratios of the elements of the Cj – Ej row to the elements of the key row    [ Cj – Ej ÷ key row ]: these are 1, 2/3 and 4/5.

Since 2/3 is the smallest ratio, the x2 column is the key column and -3 is the key element.

Pivoting on the key element -3 (S1 leaves the basis and x2 enters), all Cj – Ej in the new tableau are negative or zero and all bi are positive, so the solution given by that tableau is optimal. The optimal solution is

                                    x1  =  0,   x2  =  2/3,    x3  =  0

             ∴ Max G = – 4/3

         =>  Min Z    =  4/3.    Ans.
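The dual simplex result can be cross-checked with an off-the-shelf LP solver (scipy is assumed available); the ≥ constraint is negated to fit linprog's ≤ form:

```python
from scipy.optimize import linprog

# Min Z = 2x1 + 2x2 + 4x3
# s.t. 2x1 + 3x2 + 5x3 >= 2   ->  -2x1 - 3x2 - 5x3 <= -2
#      3x1 +  x2 + 7x3 <= 3
#       x1 + 4x2 + 6x3 <= 5
res = linprog(c=[2, 2, 4],
              A_ub=[[-2, -3, -5], [3, 1, 7], [1, 4, 6]],
              b_ub=[-2, 3, 5],
              bounds=[(0, None)] * 3, method="highs")

assert abs(res.fun - 4/3) < 1e-8    # Min Z = 4/3
assert abs(res.x[1] - 2/3) < 1e-8   # x2 = 2/3, x1 = x3 = 0
print("x =", res.x, " Min Z =", res.fun)
```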

4.11.2 Problem: Write down the dual of the primal (P) and solve by using the complementary slackness theorem.

(P) Maximize, z = x1 + 2x2 + 3x3 + 4x4

Subject to        x1 + 2x2 + 2x3 + 3x4 ≤ 20

                           2x1 + x2 + 3x3 + 2x4 ≤ 20

                           x1,   x2,  x3,  x4  ≥ 0

Solution:  The dual of the primal (P) is given as follows

(D) Minimize,              w = 20y1 + 20y2

Subject to                    y1 + 2y2 ≥ 1

                                     2y1 + y2 ≥ 2

                                     2y1 + 3y2 ≥ 3

                                     3y1 + 2y2 ≥ 4

                                     y1, y2 ≥ 0

Now, we want to solve the dual problem (D) by using graphical method.

            y1 + 2y2 ≥ 1: boundary line through (1, 0) and (0, 1/2) ………. (i)

            2y1 + y2 ≥ 2: boundary line through (1, 0) and (0, 2) ……..…. (ii)

            2y1 + 3y2 ≥ 3: boundary line through (3/2, 0) and (0, 1) ………. (iii)

            3y1 + 2y2 ≥ 4: boundary line through (4/3, 0) and (0, 2) ……….. (iv)

The feasible region is shaded in the figure; its corner points are A(3/2, 0), B(1.2, 0.2) and C(0, 2).

At A(3/2, 0),  W = 20 × (3/2) + 20 × 0 = 30

At B(1.2, 0.2), W = 20 × 1.2 + 20 × 0.2 = 24 + 4 = 28

At C(0, 2),   W = 40

Thus the optimal solution of the dual is attained at B(1.2, 0.2)

with Wmin = 28.
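The graphical solution of (D) can be confirmed with an LP solver (scipy is assumed available); each ≥ constraint is negated to fit linprog's ≤ form:

```python
from scipy.optimize import linprog

# Dual (D): min 20y1 + 20y2
# s.t. y1 + 2y2 >= 1, 2y1 + y2 >= 2, 2y1 + 3y2 >= 3, 3y1 + 2y2 >= 4, y >= 0
res = linprog(c=[20, 20],
              A_ub=[[-1, -2], [-2, -1], [-2, -3], [-3, -2]],
              b_ub=[-1, -2, -3, -4],
              bounds=[(0, None)] * 2, method="highs")

assert abs(res.fun - 28) < 1e-8                        # Wmin = 28
assert abs(res.x[0] - 1.2) < 1e-8 and abs(res.x[1] - 0.2) < 1e-8
print("y =", res.x, " Wmin =", res.fun)                # optimum at B(1.2, 0.2)
```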

Now, introducing slack and surplus variables, the primal problem (P) and the dual problem (D) become

(P) Maximize,              z = x1 + 2x2 + 3x3 + 4x4

Subject to                    x1 + 2x2 + 2x3 + 3x4 + u1 = 20

                                    2x1 + x2 + 3x3 + 2x4 + u2 = 20

                                     x1, x2, x3, x4, u1, u2 ≥ 0

(D) Minimize, w = 20y1 + 20y2

            Subject to        y1 + 2y2 – v1 = 1

                                     2y1 + y2 – v2 = 2

                                     2y1 + 3y2 – v3 = 3

                                     3y1 + 2y2 – v4 = 4

                                     y1, y2, v1, v2, v3, v4 ≥ 0