Hello all. I'm taking differential equations, and our professor has given us the task of using Euler's Method to approximate the value of y in a differential equation. She has also given us the option to do it in a programming language of our choice, since it's just a lot of repetition otherwise. Since I'm also taking intro to C, I figured I may as well do it in C.

I'm almost there.

We are supposed to be able to approximate the value of y at x = 0.1, 0.2, 0.3, 0.4, and 0.5, with step sizes h = 0.1, 0.05, and 0.025. So far, my program gives the correct answer for all five x values when h = 0.1. When h = 0.05, it's still correct at x = 0.1, 0.2, and 0.3, but at x = 0.4 or 0.5 it goes nuts. I think I have a problem with variable types, but I'm not sure where. I tried changing all the variables to doubles, but that didn't help. Any suggestions? Thanks in advance.

Code:
#include <stdio.h>
#include <stdlib.h>  /* for system() */
#include <math.h>

int main(void)
{
    float x;        /* current x value */
    float y;        /* current approximation of y */
    float h;        /* step size */
    float targetx;  /* x value to solve for */

    puts("This program will solve the differential equation y' = y - x \nusing Euler's Method with y(0)=1/2 \n\n");
    puts("Please enter the desired constant step size. (h-value)\n\n");
    scanf("%f", &h);  /* read step size */
    puts("\n\nNow enter the desired x-value to solve for y.\n\n");
    scanf("%f", &targetx);

    y = 0.5f;
    x = 0.0f;

    puts("\n\nX                 Y");
    while (x != targetx)  /* loop until x reaches targetx */
    {
        printf("\n\n%f          %f", x, y);
        y = y + (y - x) * h;  /* Euler step: y += f(x, y) * h, with f(x, y) = y - x */
        x = x + h;
    }
    printf("\n\n%f          %f\n", x, y);
    printf("\nThe value of y at the given x is %f.\n\n", y);

    system("pause");
    return 0;
}