This chapter describes functions for solving ordinary differential equation (ODE) initial value problems. The library provides a variety of low-level methods, such as Runge-Kutta and Bulirsch-Stoer routines, and higher-level components for adaptive step-size control. The components can be combined by the user to achieve the desired solution, with full access to any intermediate steps.
These functions are declared in the header file `gsl_odeiv.h'.
The routines solve the general n-dimensional first-order system,
dy_i(t)/dt = f_i(t, y_1(t), ..., y_n(t))
for i = 1, \dots, n. The stepping functions rely on the vector
of derivatives f_i and the Jacobian matrix,
J_{ij} = df_i(t,y(t)) / dy_j.
A system of equations is defined using the gsl_odeiv_system datatype. Its first two fields are user-supplied function pointers,

int (* function) (double t, const double y[], double dydt[], void * params)
int (* jacobian) (double t, const double y[], double * dfdy, double dfdt[], void * params);

The function should store the derivative components f_i(t,y) in the array dydt. The jacobian should store the explicit time-derivatives df_i/dt in the array dfdt and the Jacobian matrix J_{ij} in the array dfdy as a row-ordered matrix,

J(i,j) = dfdy[i * dimension + j]

where dimension is the dimension of the system.
Some of the simpler solver algorithms do not make use of the Jacobian
matrix, so it is not always strictly necessary to provide it (this
element of the struct can be replaced by a null pointer). However, it is
useful to provide the Jacobian so that the solver algorithms can be
interchanged -- the best algorithms make use of it.
The remaining fields hold the size of the system and its parameters,

size_t dimension;
void * params;

where dimension is the number of components in the system of equations and params is a pointer to the arbitrary parameters passed through to the function and jacobian routines.
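A minimal sketch of filling in this struct is shown below, assuming a one-dimensional decay equation y' = -k y (the equation, the name decay_func and the value of k are purely illustrative); since an explicit stepper will be used, the jacobian element is simply set to a null pointer.

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_odeiv.h>

/* dy/dt = -k * y, with params pointing to the decay constant k */
static int
decay_func (double t, const double y[], double dydt[], void *params)
{
  double k = *(double *)params;
  dydt[0] = -k * y[0];
  return GSL_SUCCESS;
}

int
main (void)
{
  double k = 0.5;

  /* no jacobian is supplied: explicit steppers such as rk4 do not need it */
  gsl_odeiv_system sys = {decay_func, NULL, 1, &k};

  printf ("system dimension = %u\n", (unsigned int) sys.dimension);
  return 0;
}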
The lowest level components are the stepping functions which advance a solution from time t to t+h for a fixed step-size h and estimate the resulting local error.
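A stepping object must first be allocated for a chosen algorithm and system dimension; the short sketch below (the choice of the rk4 type and of dimension 2 is purely illustrative) allocates one, queries its order and frees it again. The printf example that follows assumes such a stepper s has been created.

#include <stdio.h>
#include <gsl/gsl_odeiv.h>

int
main (void)
{
  /* allocate a stepper of the rk4 type for a 2-dimensional system */
  gsl_odeiv_step * s = gsl_odeiv_step_alloc (gsl_odeiv_step_rk4, 2);

  /* the order of the method, used by the step-size control below */
  printf ("order of method is %u\n", gsl_odeiv_step_order (s));

  gsl_odeiv_step_free (s);
  return 0;
}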
printf("step method is '%s'\n", gsl_odeiv_step_name (s));
would print something like step method is 'rk4'
.
A number of stepping algorithms are available, including the classical fourth-order Runge-Kutta method rk4, the higher-order rk8pd method used in the example below, and the implicit Bulirsch-Stoer method bsimp.
The control function examines the proposed change to the solution and its error estimate produced by a stepping function and attempts to determine the optimal step-size for a user-specified level of error.
The step-size adjustment procedure used by the standard control method begins by computing the desired error level D_i for each component,
D_i = eps_abs + eps_rel * (a_y |y_i| + a_dydt h |y'_i|)
and comparing it with the observed error E_i = |yerr_i|. If the observed error E exceeds the desired error level D by more than 10% for any component then the method reduces the step-size by an appropriate factor,

h_new = h_old * S * (D/E)^(1/q)

where q is the consistency order of the method (e.g. q=4 for a 4(5) embedded Runge-Kutta scheme) and S is a safety factor of 0.9. The ratio D/E is taken for the worst offending component, i.e. the minimum of the ratios D_i/E_i.
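For example, with q = 4 and S = 0.9, an observed error twice the desired level (D/E = 1/2) gives h_new = 0.9 * (1/2)^(1/4) * h_old, i.e. the step-size is reduced to roughly 0.76 h_old.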
If, for that worst-case component, the observed error E is less than 50% of the desired error level D, the algorithm takes the opportunity to increase the step-size to bring the error in line with the desired level,

h_new = h_old * S * (D/E)^(1/(q+1))
This encompasses all the standard error scaling methods.
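As a sketch of how these scaling parameters are supplied in practice, the standard control constructor gsl_odeiv_control_standard_new takes eps_abs, eps_rel, a_y and a_dydt directly as its arguments (the tolerance values used here are illustrative only):

#include <gsl/gsl_odeiv.h>

int
main (void)
{
  /* desired error:  D_i = eps_abs + eps_rel * (a_y |y_i| + a_dydt h |y'_i|) */
  gsl_odeiv_control * c =
    gsl_odeiv_control_standard_new (1e-6,   /* eps_abs */
                                    1e-3,   /* eps_rel */
                                    1.0,    /* a_y     */
                                    0.0);   /* a_dydt  */

  /* ... pass c to the evolution function, then free it ... */
  gsl_odeiv_control_free (c);
  return 0;
}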
The control object also provides a step-size adjustment function, which modifies h using the current values of y, the error estimate and the derivatives. If the error in the y-values is found to be too large then the step-size h is reduced and the function returns GSL_ODEIV_HADJ_DEC. If the error is sufficiently small then h may be increased and GSL_ODEIV_HADJ_INC is returned. The function returns GSL_ODEIV_HADJ_NIL if the step-size is unchanged. The goal of the function is to estimate the largest step-size which satisfies the user-specified accuracy requirements for the current point.
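A sketch of how this adjustment is typically invoked is shown below, using the library's gsl_odeiv_control_hadjust entry point; the trivial one-dimensional decay system, the tolerances and the initial step-size are illustrative only.

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_odeiv.h>

/* dy/dt = -y, a trivial test system (illustrative) */
static int
decay (double t, const double y[], double dydt[], void *params)
{
  dydt[0] = -y[0];
  return GSL_SUCCESS;
}

int
main (void)
{
  gsl_odeiv_step * s = gsl_odeiv_step_alloc (gsl_odeiv_step_rk4, 1);
  gsl_odeiv_control * c = gsl_odeiv_control_y_new (1e-6, 0.0);
  gsl_odeiv_system sys = {decay, NULL, 1, NULL};

  double t = 0.0, h = 0.5;
  double y[1] = { 1.0 }, yerr[1], dydt_in[1], dydt_out[1];

  /* take one step, then ask the controller about the step-size */
  GSL_ODEIV_FN_EVAL (&sys, t, y, dydt_in);
  gsl_odeiv_step_apply (s, t, h, y, yerr, dydt_in, dydt_out, &sys);

  {
    int status = gsl_odeiv_control_hadjust (c, s, y, yerr, dydt_out, &h);

    if (status == GSL_ODEIV_HADJ_DEC)
      printf ("step-size decreased to h = %g\n", h);
    else if (status == GSL_ODEIV_HADJ_INC)
      printf ("step-size increased to h = %g\n", h);
    else  /* GSL_ODEIV_HADJ_NIL */
      printf ("step-size unchanged, h = %g\n", h);
  }

  gsl_odeiv_control_free (c);
  gsl_odeiv_step_free (s);
  return 0;
}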
printf("control method is '%s'\n", gsl_odeiv_control_name (c));
would print something like control method is 'standard'
The highest level of the system is the evolution function which combines the results of a stepping function and control function to reliably advance the solution forward over an interval (t_0, t_1). If the control function signals that the step-size should be decreased the evolution function backs out of the current step and tries the proposed smaller step-size. This process is continued until an acceptable step-size is found.
The following program solves the second-order nonlinear Van der Pol oscillator equation,
x"(t) + \mu x'(t) (x(t)^2 - 1) + x(t) = 0
This can be converted into a first-order system suitable for use with the library by introducing a separate variable for the velocity, y = x'(t),

x' = y
y' = -x + \mu y (1-x^2)
The program begins by defining functions for these derivatives and their Jacobian,
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_odeiv.h>

int
func (double t, const double y[], double f[], void *params)
{
  double mu = *(double *)params;
  f[0] = y[1];
  f[1] = -y[0] - mu*y[1]*(y[0]*y[0] - 1);
  return GSL_SUCCESS;
}

int
jac (double t, const double y[], double *dfdy, double dfdt[], void *params)
{
  double mu = *(double *)params;
  gsl_matrix_view dfdy_mat = gsl_matrix_view_array (dfdy, 2, 2);
  gsl_matrix * m = &dfdy_mat.matrix;
  gsl_matrix_set (m, 0, 0, 0.0);
  gsl_matrix_set (m, 0, 1, 1.0);
  gsl_matrix_set (m, 1, 0, -2.0*mu*y[0]*y[1] - 1.0);
  gsl_matrix_set (m, 1, 1, -mu*(y[0]*y[0] - 1.0));
  dfdt[0] = 0.0;
  dfdt[1] = 0.0;
  return GSL_SUCCESS;
}

int
main (void)
{
  const gsl_odeiv_step_type * T = gsl_odeiv_step_rk8pd;

  gsl_odeiv_step * s = gsl_odeiv_step_alloc (T, 2);
  gsl_odeiv_control * c = gsl_odeiv_control_y_new (1e-6, 0.0);
  gsl_odeiv_evolve * e = gsl_odeiv_evolve_alloc (2);

  double mu = 10;
  gsl_odeiv_system sys = {func, jac, 2, &mu};

  double t = 0.0, t1 = 100.0;
  double h = 1e-6;
  double y[2] = { 1.0, 0.0 };

  while (t < t1)
    {
      int status = gsl_odeiv_evolve_apply (e, c, s, &sys,
                                           &t, t1, &h, y);

      if (status != GSL_SUCCESS)
        break;

      printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
    }

  gsl_odeiv_evolve_free (e);
  gsl_odeiv_control_free (c);
  gsl_odeiv_step_free (s);
  return 0;
}
The main loop of the program evolves the solution from (y, y') = (1, 0) at t=0 to t=100. The step-size h is automatically adjusted by the controller to maintain an absolute accuracy of 10^{-6} in the function values y.
To obtain the values at regular intervals, rather than the variable spacings chosen by the control function, the main loop can be modified to advance the solution from one point to the next. For example, the following main loop prints the solution at the fixed points t = 0, 1, 2, \dots, 100,
for (i = 1; i <= 100; i++)
  {
    double ti = i * t1 / 100.0;

    while (t < ti)
      {
        gsl_odeiv_evolve_apply (e, c, s, &sys,
                                &t, ti, &h, y);
      }

    printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
  }
It is also possible to work with a non-adaptive integrator, using only
the stepping function itself. The following program uses the rk4
fourth-order Runge-Kutta stepping function with a fixed step-size of
0.01,
int
main (void)
{
  const gsl_odeiv_step_type * T = gsl_odeiv_step_rk4;

  gsl_odeiv_step * s = gsl_odeiv_step_alloc (T, 2);

  double mu = 10;
  gsl_odeiv_system sys = {func, jac, 2, &mu};

  double t = 0.0, t1 = 100.0;
  double h = 1e-2;
  double y[2] = { 1.0, 0.0 }, y_err[2];
  double dydt_in[2], dydt_out[2];

  /* initialise dydt_in from system parameters */
  GSL_ODEIV_FN_EVAL (&sys, t, y, dydt_in);

  while (t < t1)
    {
      int status = gsl_odeiv_step_apply (s, t, h,
                                         y, y_err,
                                         dydt_in,
                                         dydt_out,
                                         &sys);

      if (status != GSL_SUCCESS)
        break;

      dydt_in[0] = dydt_out[0];
      dydt_in[1] = dydt_out[1];

      t += h;

      printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
    }

  gsl_odeiv_step_free (s);
  return 0;
}
The derivatives must be initialised for the starting point t=0 before the first step is taken. Subsequent steps use the output derivatives dydt_out as inputs to the next step by copying their values into dydt_in.
Many of the basic Runge-Kutta formulas can be found in the Handbook of Mathematical Functions,
The implicit Bulirsch-Stoer algorithm bsimp
is described in the
following paper,