# Thread: this self-contained program

1. Originally Posted by a.mlw.walker
So after some studying, I think that for my problem I actually want surface1.c as the basis to manipulate for my algorithm. I'm not sure, though; that's just my impression.
From the header of surface1.c
Contents: Example for fitting data y_i(t_i) by a function f(t;p), where each t_i is a vector of dimension k=2.
If your dataset is a vector (tx, tz), what are the y values?
And if you really have a 2D-vector, your callback function (f) needs three parameters (see the model function in surface1.c)

Originally Posted by a.mlw.walker
Please can someone have a look and see if they can simplify the algorithm, or at least help me to simplify it so that I can "fit this equation"?
I don't understand what your algorithm should produce. Does your algorithm generate different functions and you want to find the best?

Curve fitting is about finding a function which fits a set of data points best. The library uses the Levenberg-Marquardt algorithm for that. You have to provide a dataset and a general function (with some modifiable parameters). Then the library finds the parameter combination which fits best.

So I don't see where your algorithm fits here (especially the variable "Fitted_Curve" suggests that you try to calculate the curve yourself).

Bye, Andreas

2. ## Hi

Hi.
Thanks for responding. I have attached the equation I am trying to fit. Now that you mention it, I wonder whether I am only supposed to supply the equation within the curly braces, with the rest of the algorithm doing the sum of the values.
The reason I thought surface may be closer to my problem is because '1D' (curve1) didn't seem right; I can only visualise a graph in 2D (x and y), so I thought that made more sense. Also, it was passing more variables/parameters to the function, so I thought it would make it more obvious how to change the current code.
The values I posted before:
Code:
```double tx[5] = {1, 2, 3, 4, 5};
double tz[5] = {1.2, 2.693, 4.325, 6.131, 8.125};```
are k and tk respectively in the equation. I am trying to fit for a, b and c0 (p[0], p[1] and p[2]).
Is that clearer?

3. Now I think we are coming closer to the solution.

Your image of the equation looks like the formula for the least square method (finding the minimum of the sum of the squared residuals).
Then x is your k, y is your tk, beta is (a,b,c0) and finally your model function is the part inside the braces after the first minus sign.

Are you now able to write the callback function? You should use curve1.c as an example and adjust it to your actual values.

Bye, Andreas

4. So what I'm not sure about is how much of the equation I am supposed to put in the callback function, i.e. do I only put the bit inside the curly braces, or do I need to put everything?
That is, do I put
y - f(x, beta)
or do I put
SUM((y - f(x, beta))^2)
?

5. So I have had a go; it compiles, but I just want to check whether you think I have done the right thing.
I have adjusted f to this:
Code:
```double f( double t, const double *p )
{
    double invtrig = 0;
    invtrig = arcsinh(sinh(p[2])*exp(p[0]*(t)*2*M_PI));
    return (1/(p[0]*p[1]))*(p[2]-invtrig);
}```
which calculates just the model function. It does not include the tk (y) value, as I assume the algorithm deals with that.
So I am just returning the value of the model function to the Levenberg-Marquardt algorithm.
In main, I have set:
Code:
```double t[5] = {1, 2, 3, 4, 5};
double y[5] = {1.2, 2.693, 4.325, 6.131, 8.125};```
Is this right?

EDIT: It compiles, and it also thinks it converged; however, the parameters it calculates are way out from the expected values for the test data.
Does the LM algorithm know that it is meant to be a least-squares fit?
I.e. based on the general LM equation for least squares, I am only feeding it f(x, beta); I am kind of assuming it knows to subtract that from the other data set, square it and sum the values...

6. Code:
`invtrig = arcsinh(sinh(p[2])*exp(p[0]*(t)*2*M_PI));`
Where is arcsinh() defined? The standard function (C99) from math.h is called asinh().

EDIT: It compiles, and it also thinks it converged; however, the parameters it calculates are way out from the expected values for the test data.
I get the following result:
Code:
```...
Results:
status after 68 function evaluations:
success (the relative error in the sum of squares is at most tol)
obtained parameters:
par[0] =      -1.0458
par[1] =      4.07939
par[2] =     -149.809
obtained norm:
0.730656
fitting data as follows:
t[ 0]=           1 y=         1.2 fit=     1.54018 residue=   -0.340182
t[ 1]=           2 y=       2.693 fit=     3.08036 residue=   -0.387364
t[ 2]=           3 y=       4.325 fit=     4.62055 residue=   -0.295545
t[ 3]=           4 y=       6.131 fit=     6.16073 residue=  -0.0297273
t[ 4]=           5 y=       8.125 fit=     7.70091 residue=    0.424091```
Bye, Andreas

7. Yeah, I think I get the same (I'm at the other computer). The results I get in Matlab, and the results I am supposed to get, are
a: 0.02
b: 0.08
c0: -0.39

I'm not sure if I've adjusted the function correctly, and hence why I'm not getting the right results.

8. Originally Posted by a.mlw.walker
Yeah, I think I get the same (I'm at the other computer). The results I get in Matlab, and the results I am supposed to get, are
a: 0.02
b: 0.08
c0: -0.39
So what model function did you use in Matlab?

Bye, Andreas

9. This is the Matlab function. If you don't know Matlab, the dot applies the operation elementwise, a bit like a "for" loop over the matrix...

Code:
```a=params(1)
b=params(2)
c=params(3)

% c = c'
Fitted_Curve = (1/(a*b))*(c-asinh(sinh(c)*exp(a*Input*2*pi)));
Error_Vector=Actual_Output-Fitted_Curve;
sse=sum(Error_Vector.^2);```
Basically the same thing. The point is, it just depends on what form the method using it expects?

10. Originally Posted by a.mlw.walker
Fitted_Curve = (1/(a*b))*(c-asinh(sinh(c)*exp(a*Input*2*pi)));
The weird thing is, if I use the parameters you've got and calculate the values for your inputs, then I get the following result:

Code:
```#include <stdio.h>
#include <math.h>

int main(void)
{
int i;
double result;
const double a = 0.02, b = 0.08, c = -0.39;
for (i = 1; i <= 5; i++)
{
result = (1 / (a * b)) * (c - asinh(sinh(c) * exp(a * i * 2 * 3.1415)));
printf("%d - %g\n", i, result);
}
return 0;
}```
Code:
```\$ gcc -o bar bar.c -lm
\$ ./bar
1 - 30.7834
2 - 64.9449
3 - 102.672
4 - 144.103
5 - 189.316```
So they are far away from your test data.

I think it would really help if you post the exact assignment you are doing.

Bye, Andreas

11. Do you have Matlab?
If so, I have attached the necessary Matlab files to reproduce the Matlab results; if you don't, I have also pasted the output of Matlab.
I think the problem is in the way the function has been defined, but I'm not sure. In Matlab the function is defined as the entire least-squares equation, not just the right-hand side of the Levenberg equation.
Matlab function:
Code:
```function sse=myfit(params,Input,Actual_Output)

a=params(1)
b=params(2)
c=params(3)

% c = c'
Fitted_Curve = (1/(a*b))*(c-asinh(sinh(c)*exp(a*Input*2*pi)));
Error_Vector=Actual_Output-Fitted_Curve;
sse=sum(Error_Vector.^2);```
The function above is then passed to Matlab's Levenberg Marquardt algorithm here:
Code:
```Data = [1.2 2.693 4.325 6.131 8.125]%paper data
t = [1 2 3 4 5];
%%
Starting=[1, 1, -1];
options=optimset('Display','iter');
Estimates=fminsearch(@myfit,Starting,options,t,Data)```
Estimates being the vector of the parameters returned, which come out as
Estimates =

0.0250 1.8944 -0.3690
The Matlab code then plots the function with those parameters so the reader can see the quality of the fit to the original data (attached).

Matlab code:

Code:
```Data = [1.2 2.693 4.325 6.131 8.125]%paper data
t = [1 2 3 4 5];
%%
Starting=[1, 1, -1];
options=optimset('Display','iter');
Estimates=fminsearch(@myfit,Starting,options,t,Data)
%%
a = Estimates(1)
b = Estimates(2)
c = Estimates(3)
tt = 0:0.001:5;

% To check the fit
% subplot(2,2,1)
plot(t,Data,'*')
hold on
plot(tt,(1/(a*b))*(c-asinh(sinh(c)*exp(a*tt*2*pi))),'r');
xlabel('goodness of fit graph');```
So I am confused as to why this algorithm (the C one) produces such wildly different results...

Edit: the second graph uses the results that you got; it's the one with the red "step change".

12. So, here are probably my final thoughts after some testing and researching:

Originally Posted by a.mlw.walker
The function above is then passed to Matlab's Levenberg Marquardt algorithm here:
You use Matlab's fminsearch function, which doesn't use the Levenberg-Marquardt algorithm but the Nelder-Mead simplex algorithm.

This could explain the differences.
It also seems that the Levenberg-Marquardt algorithm is more sensitive to the initial starting parameters.
If, for example, I start with the initial parameters 0.1 / 1 / -0.1, I get as the end result
Code:
```a = 0.0396
b = 2.756
c = -0.537```
which produces a plot similar to the one from your values:

As you can see, in the range [0:5] the plots are very similar.

Bye, Andreas

13. Still quite interesting, because you can request in Matlab for fminsearch to use Levenberg-Marquardt; I should have pointed that out:
Code:
```options = optimset('MaxFunEvals', 10000, 'MaxIter', 10000);
options = optimset(options, 'LevenbergMarquardt', 'on');
Estimates=fminsearch(@myfit,Starting,options,t,Data);```
However, I think you are right. It's to do with the sensitivity of the algorithm. I tried with different data, and it got exactly the same parameters as Matlab did.
Thanks for your help. I think I can take it from here.
