# roots_and_optimization

Homework 3

All work for this homework can be found in the file titled `HW3_Work`.

# Question 1

i. Compare the number of iterations that each method needed to reach an accuracy of 0.00001%.

       Solver       Initial_Guess        ea        Iterations
    ____________    _____________    __________    __________

    'bisect'        [      2]        7.1975e-06    29        
    'falsepos'      [      2]        9.5395e-06    19        
    'newtraph'      '-1 to 1'        1.2794e-12     6        
    'mod_secant'    '-1 to 1'        8.2745e-12     6    
    

ii. Compare the convergence of the 4 methods.

From this data it is apparent that both the Newton-Raphson and Modified Secant methods converged the fastest, each in 6 iterations. They were followed by the False Position method and then the Bisection method. In terms of error, the Newton-Raphson method had the smallest final error, more than six orders of magnitude smaller than that of the Bisection method, which had the largest. This matches the expected behavior: bisection converges linearly, halving the bracket each step, while Newton-Raphson converges quadratically near a simple root. From our data it is apparent that the Newton-Raphson and Modified Secant methods are the best for approximating the roots of a parabolic function like ours.

Plot the approximate error vs the number of iterations:

Plot of convergence for four numerical solvers.

From this plot it can also be seen that the Bisection method took longer than the other methods to reduce the initial error. Once the Bisection method began converging, it converged faster for a while, but then tapered off after about 12 iterations and took 17 more iterations to reduce its error to the allowable limit. It can also be seen that the False Position method initially converged about as fast as the Newton-Raphson and Modified Secant methods, but then took an additional 13 iterations to converge.

iii. To make the table, I used the following code in MATLAB:

       T = table;                  % start with an empty table
       T.Solver = {'bisect', 'falsepos', 'newtraph', 'mod_secant'}';  % solver names
       T.Initial_Guess = ig';      % initial guesses / brackets
       T.ea = ea';                 % final approximate relative errors (%)
       T.Iterations = t_iter';     % iteration counts
       T                           % display the table

Basically, I just made an empty table and then filled it with the transposed vectors of data that I collected with my code.

To create the error plot, I used MATLAB's plot function along with the setdefaults file. The MATLAB code can be found below:

       % One error series per solver: bisect (green), falsepos (dashed),
       % newtraph (dotted cyan), and mod_secant (circle markers)
       plot(1:iter1, e_b, 'g', 1:iter2, e_f, '--', 1:iter3, e_n, 'c:', 1:iter4, e_m, 'o')

This code takes the vectors of errors I collected by running each function through a for loop and recording the error at each iteration. I then plotted each error series against its iteration count. Lastly, I gave each series a unique color and line type.
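For reference, here is a minimal sketch of that error-collection loop for one of the solvers (the bisect case). The names `func`, `xl`, `xu`, and `iter1` are placeholders for the homework's actual function, bracket, and iteration count, and I am assuming the Chapra-style signature `[root, fx, ea, iter] = bisect(func, xl, xu, es, maxit)`:

       % Record the approximate error after each iteration count c by capping
       % maxit at c and keeping only the returned error (the third output).
       % func, xl, xu, and iter1 stand in for the homework's actual values.
       for c = 1:iter1
           [~, ~, e_b(c)] = bisect(func, xl, xu, 0.00001, c);
       end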

# Question 2

Divergent

    Iteration     Root     x_i    Approx_Error
    _________    ______    ___    ____________

    1            2.2857    2        12.5      
    2            2.5276    2      9.5703      
    3            2.7422    2      7.8262      
    4            2.9375    2      6.6491      
    5            3.1182    2      5.7943      

Convergent

    Iteration       Root        x_i    Approx_Error
    _________    ___________    ___    ____________

    1              -0.017391    0.2          1250  
    2             1.0527e-05    0.2    1.6531e+05  
    3            -2.3329e-15    0.2    4.5122e+11  
    4                      0    0.2    4.5122e+11  
    5                      0    0.2    4.5122e+11  

This problem in particular gave me some serious trouble. When using the newtraph function I could not get the error to decrease, even though the function did find the convergent value of zero. In hindsight the exploding error in the table makes sense: the approximate error is relative, with the newest iterate in its denominator, and here the iterates approach exactly zero. Below is my code:

    func = @(x) x*exp(-x^2);                      % original function
    d_func = @(x) exp(-x^2) - 2*(x^2)*exp(-x^2);  % derivative of the function

    for c = 1:5   % loop to find the values for the table
        [root2(c), e_fn2(c), iter] = newtraph(func, d_func, 0.2, 0.00001, c);
    end

My derivative is definitely correct, but I have noticed that the newtraph function does not behave well when there is a root at exactly zero. For example, I ran the function with y = x^3 and y' = 3*x^2:

    >> [root, error, iterations] = newtraph(y, y2, 0.2, 0.00001, 150)

    root =

       7.7151e-28

    error =

       50.0000

    iterations =

       150

Even after 150 iterations the error is still 50%. The function definitely converges to zero, as can be seen from the root being extremely close to zero. In fact, the constant 50% makes sense: for y = x^3 the Newton-Raphson update is x - x^3/(3*x^2) = (2/3)*x, so each iterate is two-thirds of the previous one and the approximate relative error |x_new - x_old|/|x_new| is exactly 50% on every step; a relative error can never shrink when the true root is exactly zero. Below I plotted the error against the number of iterations, which shows convergence after 610 iterations. The error stays constant for about 600 iterations and then finally shows some activity; this is consistent with x^3 underflowing to zero in double precision once x is small enough, at which point the Newton step becomes zero and the error drops.

![Plot of convergence for y = x^3.](https://github.uconn.edu/github-enterprise-assets/0000/1383/0000/0327/6711edaa-f3cc-11e6-8a40-9230125ec3ef.png)
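To sanity-check that arithmetic, here is a standalone sketch (my own snippet, not the newtraph function itself) that applies the same update and error formula to y = x^3:

    % Newton-Raphson on y = x^3 from x = 0.2: the update simplifies to
    % x - x^3/(3*x^2) = (2/3)*x, so the approximate relative error
    % |x_new - x_old| / |x_new| should print as 50% on every iteration.
    x = 0.2;
    for k = 1:5
        xold = x;
        x = x - x^3/(3*x^2);           % Newton-Raphson step
        ea = abs((x - xold)/x)*100;    % approximate relative error in percent
        fprintf('iteration %d: x = %.6g, ea = %.4g%%\n', k, x, ea)
    end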

My guess is that y = x*exp(-x^2) will not converge even for a very large number of iterations; I tried it with 100,000 iterations with no luck. A likely explanation is the zero root itself: once an iterate lands on exactly zero, a relative error can no longer be computed, so the stored error never drops below the stopping tolerance.
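Many textbook implementations guard the error update so that it is skipped when the new iterate is exactly zero; I have not verified that our newtraph does this, so the following is a hypothetical sketch of that pattern, not the actual source:

    % Hypothetical core loop (an assumption about newtraph's internals):
    % because ea is only updated when xr is nonzero, an iterate that lands
    % on exactly zero leaves ea frozen, and the tolerance is never met.
    xr = 0.2;  ea = 100;  iter = 0;
    while true
        xrold = xr;
        xr = xr - func(xr)/d_func(xr);      % Newton-Raphson step
        iter = iter + 1;
        if xr ~= 0
            ea = abs((xr - xrold)/xr)*100;  % skipped once xr == 0
        end
        if ea <= 0.00001 || iter >= 100000, break, end
    end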


Homework 4

All work for this homework can be found in the file titled `HW4_Work`.

### Part (a)

The `collar_potential_energy` function:

    % Define constants
    m = 0.5;    % mass in kg
    g = 9.81;   % acceleration due to gravity in m/s^2
    K = 30;     % spring constant in N/m

    % Potential energy as a function of collar position x_C and rod angle theta
    collar_potential_energy = @(x_C, theta) m*g*x_C*sin(theta) + 0.5*K*(0.5 - sqrt(0.5^2 + (0.5 - x_C)^2))^2;
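As a quick sanity check of this function (my own addition, not part of the assignment): at theta = 0 the gravity term vanishes because sin(0) = 0, and the spring term is zero exactly when x_C = 0.5 m, so the function should return zero there.

    % Expect 0: sin(0) kills the gravity term, and at x_C = 0.5 the
    % sqrt term equals 0.5, making the spring deflection zero.
    collar_potential_energy(0.5, 0)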

### Part (b)

Code

    theta = 0;   % rod angle in radians
    [x1, fx1, ea1, iter1] = goldmin(collar_potential_energy, -10, 10, 0.0001, 100, theta);

When theta = 0 radians, x_C = 0.5 meters and PE = 2.9821e-27 J (zero to within round-off, matching the sanity check above).

### Part (c)

Code

    theta = 0.9*pi/180;   % rod angle of 0.9 degrees, converted to radians
    [x2, fx2, ea2, iter2] = goldmin(collar_potential_energy, -10, 10, 0.0001, 100, theta);

When theta = 0.9 degrees, x_C = 0.39 meters, PE = 0.0322 J

### Part (d)

Code

    thetaVec = linspace(0, pi, 400);   % sweep of rod angles from 0 to pi radians
    x2 = zeros(size(thetaVec));        % preallocate minimizing positions
    fx2 = zeros(size(thetaVec));       % preallocate the minimum PE at each angle

    for i = 1:length(thetaVec)
        theta = thetaVec(i);
        [x2(i), fx2(i), ea2, iter2] = goldmin(collar_potential_energy, -10, 10, 0.0001, 100, theta);
    end

    % Plot the minimizing position x2 (not the PE values in fx2), since the
    % axis labels and figure describe the collar's steady-state position.
    plot(thetaVec, x2);
    title('Steady-State Position of the Collar vs. Rod Angle');
    xlabel('Theta (radians)');
    ylabel('x_C (meters)');

![Steady-state position of collar on rod at angle theta](https://github.uconn.edu/github-enterprise-assets/0000/1383/0000/0334/1c7a4956-f93d-11e6-90d6-4b24ab848d46.png)
