28 Apr 2013 17:12

### Termination condition on the gradient?

```Hello,

I'm using NLopt to solve a convex inequality bounded optimization
problem in hundreds of dimensions using LBFGS. However, NLopt often
crashes with

RuntimeError: nlopt failure

when I request ftol of ~10^-15, but if I reduce the precision to, say,
~10^-12, the optimization succeeds.

I noticed, however, that when this happens, the gradients are ~10^-4,
and this made me think that the optimizer doesn't like it when the
gradients are computed much more precisely than the objective function.

So I tried to improve on the precision of the objective function, and
now most optimizations succeed with ftol of ~10^-15, whereas the
gradients got down to ~10^-10, but still, some fail and there is little
I can do to improve the precision of the objective function any further.

Is it maybe possible to put a termination condition on the gradients so
that the optimization stops when they are <= X, instead of using the
function value? I couldn't find anything like that in the manual.

Are there any other ways I can benefit, in NLopt, from the fact that my
problem is convex (although the objective function itself is *very*
difficult to compute) and that I have analytical expressions for the gradients?

Thanks!
```

24 Apr 2013 15:49

### Solving a mixed integer optimization problem using NLopt ?

Hi,

I have an optimization problem which I consider to be a mixed-integer, multiobjective one:

Assume that I have a system S = S(x1,x2,...,xm,r1,r2,...,rp).
With each set of R, I can determine the vector X = g(R) and then compute the value of f(X):
f(X) ∈ {0,1}; f(X) = 0 if any xi violates certain constraints, and f(X) = 1 if all xi satisfy those constraints. Here, X = (x1,x2,...,xm).
Note that some of the constraints cannot be expressed as a (continuous) function of R. That is why I express them as a logical function.

My goals are to find minimum and maximum values of the decision variables R = (r1,r2,...,rp)

(minimize + maximize) R

subject to:  { f(X) = 1 }

---------------------------------------------------------------------------------------------------------------------------
Can the above problem be solved using NLopt package?

Thanks,
Quan
23 Apr 2013 11:27

### Is there a nlopt pdf manual?

Hi guys

Is there a 200-400-page NLopt PDF manual?

Best
Sasha
23 Apr 2013 00:35

### I can't run the tutorial example

Good afternoon nlopt users.

How are you?

I installed NLopt and tried to run the tutorial example in C. Here is my file:
----------------------------------------------------------------------------------
#include <math.h>
#include <nlopt.h>

double myfunc(unsigned n, const double *x, double *grad, void *my_func_data)
{
    if (grad) {
        grad[0] = 0.0;
        grad[1] = 0.5 / sqrt(x[1]);
    }
    return sqrt(x[1]);
}
typedef struct {
    double a, b;
} my_constraint_data;

double myconstraint(unsigned n, const double *x, double *grad, void *data)
{
    my_constraint_data *d = (my_constraint_data *) data;
    double a = d->a, b = d->b;
    if (grad) {
        grad[0] = 3 * a * (a*x[0] + b) * (a*x[0] + b);
        grad[1] = -1.0;
    }
    return ((a*x[0] + b) * (a*x[0] + b) * (a*x[0] + b) - x[1]);
}

double lb[2] = { -HUGE_VAL, 0 }; /* lower bounds */
nlopt_opt opt;

opt = nlopt_create(NLOPT_LD_MMA, 2); /* algorithm and dimensionality */
nlopt_set_lower_bounds(opt, lb);
nlopt_set_min_objective(opt, myfunc, NULL);
my_constraint_data data[2] = { {2,0}, {-1,1} };
nlopt_add_inequality_constraint(opt, myconstraint, &data[0], 1e-8);
nlopt_add_inequality_constraint(opt, myconstraint, &data[1], 1e-8);
nlopt_set_xtol_rel(opt, 1e-4);
double x[2] = { 1.234, 5.678 };  /* some initial guess */
double minf; /* the minimum objective value, upon return */

if (nlopt_optimize(opt, x, &minf) < 0) {
    printf("nlopt failed!\n");
}
else {
    printf("found minimum at f(%g,%g) = %0.10g\n", x[0], x[1], minf);
}
nlopt_destroy(opt);
------------------------------------------------------------------------
And here is the output of
gcc tutorial.c -o tutorial -lnlopt -lm
called from the same directory where tutorial.c is
-------------------------------------------------------------------------
tutorial.c:30:1: warning: data definition has no type or storage class [enabled by default]
tutorial.c:30:1: error: conflicting types for ‘opt’
tutorial.c:28:11: note: previous declaration of ‘opt’ was here
tutorial.c:30:5: warning: initialization makes integer from pointer without a cast [enabled by default]
tutorial.c:30:1: error: initializer element is not constant
tutorial.c:31:1: warning: data definition has no type or storage class [enabled by default]
tutorial.c:31:1: warning: parameter names (without types) in function declaration [enabled by default]
tutorial.c:32:38: error: expected ‘)’ before ‘(’ token
tutorial.c:34:52: error: expected ‘)’ before ‘&’ token
tutorial.c:35:52: error: expected ‘)’ before ‘&’ token
tutorial.c:36:25: error: expected ‘)’ before numeric constant
tutorial.c:40:1: error: expected identifier or ‘(’ before ‘if’
tutorial.c:43:1: error: expected identifier or ‘(’ before ‘else’
tutorial.c:46:1: warning: data definition has no type or storage class [enabled by default]
tutorial.c:46:1: warning: parameter names (without types) in function declaration [enabled by default]
tutorial.c:46:1: error: conflicting types for ‘nlopt_destroy’
In file included from tutorial.c:2:0:
/usr/local/include/nlopt.h:194:20: note: previous declaration of ‘nlopt_destroy’ was here
--------------------------------------------------------------------

I would greatly appreciate your help

Best
Sasha

17 Apr 2013 16:33

### optimization with derivatives-only

Hi,

First of all many thanks for the very nice library you've created.
I wrote a small program to calculate a steepest-descent path/saddle point (nudged elastic band) on an energy landscape defined by a set of multidimensional Gaussian distributions, something like this:
http://www.petveturas.com/img/pyneb_test_asym.png
Initially I implemented this with steepest descent + step-size update or a velocity Verlet integrator, but I wanted to rewrite it with NLopt because of the large choice of algorithms. This is the code:
petveturas.com/prog/neb-cpp/nlopt_neb.cpp
petveturas.com/prog/neb-cpp/Makefile

and the input
http://petveturas.com/prog/neb-cpp/gaussians
http://petveturas.com/prog/neb-cpp/pos.dat

However, I think I have a problem: the forces in this method are non-conservative, so I cannot define a Hamiltonian. So I only have the gradients, not the function value, at each step of the optimization. This exact problem has been considered before in the literature, e.g.
http://scitation.aip.org/journals/doc/JCPSA6-ft/vol_119/iss_24/12708_1.html

For now I return the norm of the gradients as the function value, which works more or less with the derivative-based optimizers.

But most gradient-dependent optimizers get stuck at some point, and some methods (BFGS/Newton) abort with:
terminate called after throwing an instance of 'std::runtime_error'
what():  nlopt failure

It seems to me the quasi-Newton methods (which should be the most effective) typically use a line search based on the function value to determine the step size, and (therefore?) get stuck at some point.
In some cases this ends in the following error
terminate called after throwing an instance of 'std::runtime_error'
what():  nlopt failure

So I hope somebody can help me answer whether NLopt is the right library for such a problem.
- Which algorithm, if any, could I possibly use for this? Do all algorithms assume that grad = d(objective)/d(x)?
- if none, would it be difficult to modify the existing code to

Hopefully you have more experience with this type of optimization problem and can point me in the right direction.

Many thanks.

Best,
Jaap
12 Apr 2013 13:54

### Problems with BOBYQA

Dear all,

I'm using NLopt to solve a non-linear optimization problem without constraints. I was testing the algorithm BOBYQA, which is causing some trouble. Depending on the initial guess, I get the following error:

terminate called after throwing an instance of 'std::invalid_argument'
what():  nlopt invalid argument

The same code is working using the other derivative-free algorithms, like NEWUOA.

Thanks for any help in advance,
Klaus
9 Apr 2013 16:23

### Compute the total number of function evaluations (opt.maxeval)?

Hi,

I wonder how I can compute the total number of function evaluations (N) in the algorithm, so that I will know when the algorithm stops (opt.maxeval < N).

For example, in the following optimization problem

minimize f(x1,x2,...,xp)
subject to { Xmin <= X <= Xmax }

I use an algorithm, e.g. NLOPT_LD_LBFGS. The input for each iteration is [ f(X), df(X) ]. I use the central-difference approximation to compute the gradient vector df(X).

So I count the number of function evaluations like this:
- In each iteration of the algorithm: n = 1 + 2*p  (1 evaluation for f(X), and 2*p evaluations for df(X))
- The total number of function evaluations with M iterations: N = M * (1+2*p)

and the process will stop when  { opt.maxeval < M * (1+2*p) }

Is that right ?

Thanks

18 Mar 2013 20:17

### Problems with Matlab plugin on Mac OSX 10.7

```Hi,

I'm having trouble installing NLOpt-2.3 on my Mac running OSX 10.7.5, specifically the Matlab plugins.

I run the configure file with this command:

./configure --enable-shared MATLAB=/Applications/MATLAB_R2012a.app/matlab MEX=/Applications/MATLAB_R2012a.app/bin/mex

And a critical line of the output reads:

checking for extension of compiled mex files... configure: WARNING:
/Applications/MATLAB_R2012a.app/bin/mex failed to compile a simple file; won't compile Matlab plugin

Is there someone who might be able to help me work through this?

One test that I did was to find the lines in the configure.ac file which create the test file:

cat > conftest.c <<EOF
#include <mex.h>
void mexFunction(int nlhs, mxArray *plhs[[]],
int nrhs, const mxArray *prhs[[]]) { }
EOF

To test this, I created the conftest.c file manually with these contents, and ran
/Applications/MATLAB_R2012a.app/bin/mex on that file.  This FAILED to create a mex file.  Next, I
changed the text of conftest.c to

#include <mex.h>
void mexFunction(int nlhs, mxArray *plhs[],
int nrhs, const mxArray *prhs[]) { }

(note [[]] replaced by []) and ran /Applications/MATLAB_R2012a.app/bin/mex once again.  This SUCCEEDS
in creating a mex file.

So the next thing I did was to change the configure.ac file to this effect, and run autoconf to create a new
configure script.  I ran this again... to no avail.  I get the same error message as before.

```
6 Mar 2013 00:41

### run-time error: cannot allocate memory in static TLS block

```Hi,

As soon as I instantiate my optimization problem via

nlopt::opt opt(nlopt::LD_MMA, 2);

my program crashes and outputs:
"cannot allocate memory in static TLS block"

The instantiation takes place in a thread which is different than the
main one.
Any hint on that?

Thanks,
Clemens

```
24 Feb 2013 20:52

### NLopt Installation Errors

```Hi,

I'm new to the NLopt libraries and am having the following issues.

I have Ubuntu 12.04 with MATLAB 2012 and have reverted
to the gcc and g++ 4.4 version compilers as per the
output of the mex command in matlab.

I have followed the installation instructions as best I can.
My configure command:

sudo ./configure --enable-shared MEX=/home/connie/MATLAB/R2012b/bin/mex
--with-matlab=~/home/connie/MATLAB/R2012b
MEX_INSTALL_DIR=~/nlopt-2.3/octave/dir

and then make and make install.

The only concerning output is:
checking for mex... /home/connie/MATLAB/R2012b/bin/mex
checking for extension of compiled mex files... mexa64
checking for matlab... no

where matlab is not recognized.  The only place that I can
find the nlopt_optimize.m is in the /octave folder, and
all it contains is text.  Thus,
when I try to run the nlopt_optimize() in that folder,
it says I am trying to use a script as a function.
I also cannot mex nlopt_optimize.c without having
library linking errors, but as configure should do this for me,
I am not surprised.

the dummy file also gives errors.

All of this is still in the octave folder.
I feel like there should be a matlab
folder since I am using matlab,
but the configure process is not finding it.

I have also, after cleaning up the make and make install,
tried versions of configure && make and make && make install,
with significant errors for each one.

```
21 Feb 2013 11:48

### Global Optimization

Hi

I am new to NLOPT.

My objective function has 5 variables, and I want to minimize it. I am able to reach a local optimum using NLOPT_LD_SLSQP and
NLOPT_LN_COBYLA.

I want to arrive at the global minimum for the function (using any of the algorithms).

Can you please help me with how to specify the local and global optimization algorithms, including the syntax?

It's urgent.

Pratik
--
