Solving Optimization Problems

Solving Nonlinear Constrained Optimization Problems with Matlab

Hello everyone and welcome.

In this video, I’m going to show you how to solve nonlinear constrained optimization problems with Matlab.

This solution method is based on Matlab's optimization solver "fmincon". To make the result more robust, I wrap the solver in a simple multi-start scheme: the solver is run several times (say, 10 times), each time from a random initial solution, and the final answer is the best solution found across all runs. This significantly improves the chance of finding the global optimum.
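The complete implementation is given in main_code.m below; the core of the multi-start loop can be sketched as follows (a minimal sketch, assuming the bound vectors lb and ub and the wrapper function fminconsolver defined later in this post):

n = 10;                                        % number of random starts
best_fval = inf;                               % best objective value found so far
for j = 1:n
    x0 = lb + (ub - lb).*rand(size(lb));       % random starting point inside the box bounds
    [x, fval] = fminconsolver(x0, lb, ub);     % one local fmincon run from that start
    if fval < best_fval                        % keep the best solution across all runs
        best_fval = fval;
        best_x = x;
    end
end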

Here is the detail of the benchmark function. It is the constrained Mishra's Bird function:

minimize  f(x, y) = sin(y)*exp((1 - cos(x))^2) + cos(x)*exp((1 - sin(y))^2) + (x - y)^2
subject to  (x + 5)^2 + (y + 5)^2 <= 25,  with  -10 <= x <= 0  and  -6.5 <= y <= 0.

Let’s see how it works.

For more videos like this, check my YouTube channel here.

Matlab code

objective_function.m

function Output = objective_function(Input)
% objective function of the benchmark problem
x = Input(1);
y = Input(2);
Output = sin(y)*exp((1-cos(x))^2) + cos(x)*exp((1-sin(y))^2) + (x-y)^2;
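A quick sanity check of the objective can be done from the Command Window; the test point below is an arbitrary point inside the bounds, chosen only for illustration:

objective_function([-5 -5])   % evaluate the objective at an arbitrary test point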

nonlinear_constraint.m

function [C, Ceq] = nonlinear_constraint(Input)
x = Input(1);
y = Input(2);
C = (x+5)^2 + (y+5)^2 - 25;  % inequality constraint, interpreted by fmincon as C(x) <= 0
Ceq = [];                    % no equality constraint
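fmincon treats every entry of C as an inequality C(x) <= 0 and every entry of Ceq as an equality Ceq(x) = 0. If the problem had more than one inequality, they could simply be stacked into a vector. The second constraint below is a hypothetical example added only to illustrate the format; it is not part of the original problem:

function [C, Ceq] = nonlinear_constraint(Input)
x = Input(1);
y = Input(2);
C = [(x+5)^2 + (y+5)^2 - 25;   % original constraint: (x+5)^2 + (y+5)^2 <= 25
     x + y + 5];               % hypothetical extra constraint: x + y <= -5
Ceq = [];                      % no equality constraint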

fminconsolver.m

function [x,fval,exitflag,output,lambda,grad,hessian] = fminconsolver(x0,lb,ub)
options = optimoptions('fmincon');
options = optimoptions(options,'Display', 'off');                                        % suppress iteration output in the Command Window
options = optimoptions(options,'PlotFcn', { @optimplotfval @optimplotconstrviolation }); % plot objective value and constraint violation per iteration
[x,fval,exitflag,output,lambda,grad,hessian] = ...
    fmincon(@objective_function,x0,[],[],[],[],lb,ub,@nonlinear_constraint,options);
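A single run of the wrapper can be tested directly before adding the multi-start loop; the starting point below is an arbitrary choice inside the bounds, used only for illustration:

lb = [-10 -6.5];
ub = [0 0];
x0 = [-5 -5];                          % arbitrary starting point inside the bounds
[x, fval] = fminconsolver(x0, lb, ub)  % one local solve; x and fval are printed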

main_code.m

clc
clear all
close all
%--------------------------------------------------------------------------
% customization: lower bound lb = [1xn], and upper bound ub = [1xn]
lb = [-10 -6.5];
ub = [0 0];
%--------------------------------------------------------------------------
n = 10;              % number of random starts
[~, y] = size(lb);   % y = number of decision variables
S = zeros(n, y);     % solution found in each run
R = zeros(n, 1);     % objective value found in each run
x0 = zeros(1, y);    % preallocate the initial solution vector
% multi-start optimization solver
for j = 1:n
    for i = 1:y
        x0(i) = lb(i) + (ub(i) - lb(i))*rand;  % generate a random initial solution inside the bounds
    end
    [x,fval,exitflag,output,lambda,grad,hessian] = fminconsolver(x0,lb,ub);
    S(j,:) = x;      % store the solution of this run
    R(j,1) = fval;   % store the objective value of this run
end
% report the best solution found across all runs
x1 = find(R == min(R));
objective_function_value = R(min(x1),1)
optimal_solution = S(min(x1),:)
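An equivalent and slightly more compact way to pick the best run is to take the index of the minimum directly; this is a small alternative sketch, not part of the original code:

[objective_function_value, best_run] = min(R)  % smallest objective value and the run that produced it
optimal_solution = S(best_run, :)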

P.S.: If you find this post useful, share it so you can find it again later and so it can help other people as well.

Dr.Panda

Comments

• How do you solve an optimisation problem where one of the constraints is a logarithmic inequality?
