| Type: | Package |
| Title: | GSL Multi-Start Nonlinear Least-Squares Fitting |
| Version: | 1.4.2 |
| Date: | 2025-09-27 |
| Description: | An R interface to weighted nonlinear least-squares optimization with the GNU Scientific Library (GSL), see M. Galassi et al. (2009, ISBN:0954612078). The available trust region methods include the Levenberg-Marquardt algorithm with and without geodesic acceleration, the Steihaug-Toint conjugate gradient algorithm for large systems and several variants of Powell's dogleg algorithm. Multi-start optimization based on quasi-random samples is implemented using a modified version of the algorithm in Hickernell and Yuan (1997, OR Transactions). Robust nonlinear regression can be performed using various robust loss functions, in which case the optimization problem is solved by iterative reweighted least squares (IRLS). Bindings are provided to tune a number of parameters affecting the low-level aspects of the trust region algorithms. The interface mimics R's nls() function and returns model objects inheriting from the same class. |
| BugReports: | https://github.com/JorisChau/gslnls/issues |
| URL: | https://github.com/JorisChau/gslnls |
| Depends: | R (≥ 3.5) |
| Imports: | stats, Matrix |
| Encoding: | UTF-8 |
| Language: | en-US |
| License: | LGPL-3 |
| SystemRequirements: | GSL (>= 2.3) |
| Biarch: | true |
| RoxygenNote: | 7.3.2 |
| NeedsCompilation: | yes |
| Packaged: | 2025-09-28 07:47:47 UTC; jchau |
| Author: | Joris Chau [aut, cre] |
| Maintainer: | Joris Chau <joris.chau@openanalytics.eu> |
| Repository: | CRAN |
| Date/Publication: | 2025-09-28 12:30:02 UTC |
Anova tables
Description
Returns the analysis of variance (or deviance) tables for two or
more fitted "gsl_nls" objects.
Usage
## S3 method for class 'gsl_nls'
anova(object, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
... |
Additional objects inheriting from class "gsl_nls". |
Value
A data.frame object of class "anova" similar to anova representing the
analysis-of-variance table of the fitted model objects when printed.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + 1 + rnorm(n, sd = 0.1)
)
## model
obj1 <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
obj2 <- gsl_nls(fn = y ~ A * exp(-lam * x) + b, data = xy,
start = c(A = 1, lam = 1, b = 0))
anova(obj1, obj2)
Extract model coefficients
Description
Returns the fitted model coefficients from a "gsl_nls" object.
coefficients can also be used as an alias.
Usage
## S3 method for class 'gsl_nls'
coef(object, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
Named numeric vector of fitted coefficients similar to coef
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
coef(obj)
Confidence interval for model parameters
Description
Returns asymptotic or profile likelihood confidence intervals for the parameters in a
fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
confint(object, parm, level = 0.95, method = c("asymptotic", "profile"), ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
parm |
A character vector of parameter names for which to evaluate confidence intervals, defaults to all parameters. |
level |
A numeric scalar between 0 and 1 giving the level of the parameter confidence intervals. |
method |
Method to be used, either "asymptotic" (default) for asymptotic confidence intervals or "profile" for profile likelihood confidence intervals. |
... |
At present no optional arguments are used. |
Details
Method "asymptotic" assumes (approximate) normality of the errors in the model and calculates
standard asymptotic confidence intervals based on the quantiles of a t-distribution. Method "profile"
calculates profile likelihood confidence intervals using the confint.nls method
in the MASS package and for this reason is only available for "gsl_nls" objects that
are also of class "nls".
Value
A matrix with columns giving the lower and upper confidence limits for each parameter.
See Also
confint, confint.nls in package MASS.
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
## asymptotic ci's
confint(obj)
## Not run:
## profile ci's (requires MASS)
confint(obj, method = "profile")
## End(Not run)
Confidence intervals for derived parameters
Description
confintd is a generic function to compute confidence intervals for continuous functions
of the parameters in a fitted model. The function invokes particular methods which depend on the
class of the first argument.
Usage
confintd(object, expr, level = 0.95, ...)
Arguments
object |
A fitted model object. |
expr |
An expression or character vector that can be transformed to an expression, evaluating to one or more (continuous) functions of the model parameters. |
level |
A numeric scalar between 0 and 1 giving the level of the derived parameter confidence intervals. |
... |
Additional argument(s) for methods |
Value
A matrix with columns giving the fitted values and lower and upper confidence limits for each derived parameter. The row names list the individual derived parameter expressions.
See Also
Confidence intervals for derived parameters
Description
Returns fitted values and confidence intervals for continuous functions of parameters
from a fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
confintd(object, expr, level = 0.95, dtype = "symbolic", ...)
Arguments
object |
A fitted model object. |
expr |
An expression or character vector that can be transformed to an expression, evaluating to one or more (continuous) functions of the model parameters. |
level |
A numeric scalar between 0 and 1 giving the level of the derived parameter confidence intervals. |
dtype |
A character string equal to "symbolic" (default) for symbolic differentiation of expr, or "numeric" for numeric differentiation. |
... |
Additional argument(s) for methods |
Details
This method assumes (approximate) normality of the errors in the model and confidence intervals are
calculated using the delta method, i.e. a first-order Taylor approximation of the (continuous)
function of the parameters. If dtype = "symbolic" (the default), expr is differentiated
with respect to the parameters using symbolic differentiation with deriv. As such,
each expression in expr must contain only operators that are known to deriv.
If dtype = "numeric", expr is differentiated using numeric differentiation with
numericDeriv, which should be used if expr cannot be derived symbolically
with deriv.
Value
A matrix with columns giving the fitted values and lower and upper confidence limits for each derived parameter. The row names list the individual derived parameter expressions.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
## delta method ci's
confintd(obj, expr = c("log(lam)", "A / lam"))
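## numeric differentiation via numericDeriv (a sketch; useful when the
## expression cannot be differentiated symbolically with deriv)
confintd(obj, expr = c("log(lam)", "A / lam"), dtype = "numeric")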
Calculate Cook's distance
Description
Returns Cook's distance values from a fitted "gsl_nls" object based on the estimated
variance-covariance matrix of the model parameters.
Usage
## S3 method for class 'gsl_nls'
cooks.distance(model, ...)
Arguments
model |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
Numeric vector of Cook's distance values similar to cooks.distance.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
cooks.distance(obj)
Model deviance
Description
Returns the deviance of a fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
deviance(object, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
Numeric deviance value similar to deviance
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
deviance(obj)
Residual degrees-of-freedom
Description
Returns the residual degrees-of-freedom from a fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
df.residual(object, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
Integer residual degrees-of-freedom similar to df.residual.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
df.residual(obj)
Extract model fitted values
Description
Returns the fitted responses from a "gsl_nls" object. fitted.values
can also be used as an alias.
Usage
## S3 method for class 'gsl_nls'
fitted(object, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
Numeric vector of fitted responses similar to fitted.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
fitted(obj)
Extract model formula
Description
Returns the model formula from a fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
formula(x, ...)
Arguments
x |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
If the object inherits from class "nls" returns the fitted model as a formula similar
to formula. Otherwise returns the fitted model as a function.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
formula(obj)
GSL Nonlinear Least Squares fitting
Description
Determine the nonlinear least-squares estimates of the parameters of a
nonlinear model using the gsl_multifit_nlinear module present in
the GNU Scientific Library (GSL).
Usage
gsl_nls(fn, ...)
## S3 method for class 'formula'
gsl_nls(
fn,
data = parent.frame(),
start,
algorithm = c("lm", "lmaccel", "dogleg", "ddogleg", "subspace2D"),
loss = c("default", "huber", "barron", "bisquare", "welsh", "optimal", "hampel", "ggw",
"lqq"),
control = gsl_nls_control(),
lower,
upper,
jac = NULL,
fvv = NULL,
trace = FALSE,
subset,
weights,
na.action,
model = FALSE,
...
)
## S3 method for class 'function'
gsl_nls(
fn,
y,
start,
algorithm = c("lm", "lmaccel", "dogleg", "ddogleg", "subspace2D"),
loss = c("default", "huber", "barron", "bisquare", "welsh", "optimal", "hampel", "ggw",
"lqq"),
control = gsl_nls_control(),
lower,
upper,
jac = NULL,
fvv = NULL,
trace = FALSE,
weights,
...
)
Arguments
fn |
a nonlinear model defined either as a two-sided formula including variables and parameters, or as a function returning a numeric vector, with first argument the vector of parameters to be estimated. See the individual method descriptions below. |
data |
an optional data frame in which to evaluate the variables in fn if fn is defined as a formula. |
y |
numeric response vector if fn is defined as a function, equal in length to the numeric vector returned by evaluation of fn. |
start |
a vector, list or matrix of initial parameter values or parameter ranges. If a named list or matrix of parameter ranges is given, or if any starting values are missing (NA), the multi-start algorithm is used; see the sections 'Multi-start algorithm' and 'Missing starting values' below. |
algorithm |
character string specifying the algorithm to use. The following choices are supported:
"lm" (default), the Levenberg-Marquardt algorithm; "lmaccel", the Levenberg-Marquardt algorithm with geodesic acceleration; "dogleg", Powell's dogleg algorithm; "ddogleg", the double dogleg algorithm; "subspace2D", the 2D generalized dogleg (subspace) algorithm. |
loss |
character string specifying the loss function to optimize. The following choices are supported:
"default" (squared loss), "huber", "barron", "bisquare", "welsh", "optimal", "hampel", "ggw" and "lqq"; see gsl_nls_loss for the definitions of the individual loss functions.
If a character string, the default tuning parameters as specified by gsl_nls_loss are used. |
control |
an optional list of control parameters to tune the least squares iterations and multistart algorithm.
See gsl_nls_control for the available control parameters and their default values. |
lower |
a named list or named numeric vector of parameter lower bounds, or an unnamed numeric scalar to be replicated for all parameters. If missing (default), the parameters are unconstrained from below. |
upper |
a named list or named numeric vector of parameter upper bounds, or an unnamed numeric scalar to be replicated for all parameters. If missing (default), the parameters are unconstrained from above. |
jac |
either NULL (default) or a function returning the n by p dimensional Jacobian matrix of the nonlinear model, with first argument the vector of parameters to be estimated. If fn is a formula, jac can also be set to TRUE, in which case the Jacobian is derived symbolically from the model formula (see the examples). |
fvv |
either NULL (default) or a function returning an n-dimensional numeric vector of second directional derivatives of the nonlinear model, with first argument the vector of parameters and second argument the velocity vector. Only used when geodesic acceleration is enabled (algorithm = "lmaccel"). If fn is a formula, fvv can also be set to TRUE, in which case the second directional derivative is derived symbolically (see the examples). |
trace |
logical value indicating if a trace of the iteration progress should be printed.
Default is FALSE. |
subset |
an optional vector specifying a subset of observations to be used in the fitting process.
This argument is only used if fn is defined as a formula. |
weights |
an optional numeric vector of (fixed) weights, equal in length to the number of observations (residuals). When present, the objective function is weighted least squares. |
na.action |
a function which indicates what should happen when the data contain NAs. This argument is only used if fn is defined as a formula. |
model |
a logical value. If TRUE, the model frame is included as a component of the returned object. Defaults to FALSE. This argument is only used if fn is defined as a formula. |
... |
additional arguments passed to the calls of fn, jac and fvv if defined as functions. |
Value
If fn is a formula returns a list object of class nls.
If fn is a function returns a list object of class gsl_nls.
See the individual method descriptions for the structures of the returned lists and the generic functions
applicable to objects of both classes.
Methods (by class)
- gsl_nls(formula): If fn is a formula, the returned list object is of classes gsl_nls and nls. Therefore, all generic functions applicable to objects of class nls, such as anova, coef, confint, deviance, df.residual, fitted, formula, logLik, nobs, predict, print, profile, residuals, summary, vcov, hatvalues, cooks.distance and weights, are also applicable to the returned list object. In addition, a method confintd is available for inference of derived parameters.
- gsl_nls(function): If fn is a function, the first argument must be the vector of parameters and the function should return a numeric vector containing the nonlinear model evaluations at the provided parameter and predictor or covariate vectors. In addition, the argument y needs to contain the numeric vector of observed responses, equal in length to the numeric vector returned by fn. The returned list object is (only) of class gsl_nls. Although the returned object is not of class nls, the following generic functions remain applicable for an object of class gsl_nls: anova, coef, confint, deviance, df.residual, fitted, formula, logLik, nobs, predict, print, residuals, summary, vcov, hatvalues, cooks.distance and weights. In addition, a method confintd is available for inference of derived parameters.
Multi-start algorithm
If start is a list or matrix of parameter ranges, or contains any missing values, a modified version of the multi-start algorithm described in
Hickernell and Yuan (1997) is applied. Note that the start parameter ranges are only used to bound the domain for the
starting values, i.e. the resulting parameter estimates are not constrained to lie within these bounds, use lower and/or upper for
this purpose instead. Quasi-random starting values are sampled in the unit hypercube from a Sobol sequence if p < 41 or from a Halton sequence (up to p = 1229) otherwise.
The initial starting values are scaled to the specified parameter ranges using an inverse (scaled) logistic function favoring starting values near the center of the
(scaled) domain. The inexpensive and expensive local searches (see Algorithm 2.1 of Hickernell and Yuan (1997)) both use
the trust region method specified by algorithm, differing only in the number of search iterations, mstart_p versus
mstart_maxiter, where mstart_p is typically much smaller than mstart_maxiter. When a new stationary point is detected,
the scaling step from the unit hypercube to the starting value domain is updated using the diagonal of the trust region
method's estimated scaling matrix D, which improves optimization performance
especially when the parameters live on very different scales. The multi-start algorithm terminates when NSP (number of stationary points)
is larger than or equal to mstart_minsp and NWSP (number of worse stationary points) is larger than mstart_r + sqrt(mstart_r) * NSP,
or when the maximum number of major iterations mstart_maxstart is reached. After termination of the multi-start algorithm, a full
single-start optimization is executed starting from the best multi-start solution.
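As a brief sketch of how these termination criteria can be tuned (reusing the exponential model and data x, y from the examples below; the control values are illustrative choices, not recommendations):
## multi-start with fixed ranges and a stricter termination criterion
gsl_nls(
  fn = y ~ A * exp(-lam * x) + b,                              ## model formula
  data = data.frame(x = x, y = y),                             ## model fit data
  start = list(A = c(0, 100), lam = c(0, 10), b = c(-10, 10)), ## starting ranges
  control = gsl_nls_control(mstart_n = 60, mstart_minsp = 2)   ## larger sample, >= 2 stationary points
)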
Missing starting values
If start contains missing (or infinite) values, the multi-start algorithm is executed without fixed parameter ranges for the missing parameters.
The ranges for the missing parameters are initialized to the unit interval and dynamically increased or decreased in each major iteration
of the multi-start algorithm. The decision to increase or decrease a parameter range is driven by the minimum and maximum parameter values
obtained by the first mstart_q inexpensive local searches ordered by their squared loss, which typically provide a decent indication of the
order of magnitude of the parameter range in which to search for the optimal solution. Note that this procedure is not expected to always
return a global minimum of the nonlinear least-squares objective. Especially when the objective function contains many local optima,
the algorithm may be unable to select parameter ranges that include the global minimizing solution. In this case, it may help to increase
the values of mstart_n, mstart_r or mstart_minsp to avoid early termination of the algorithm at the cost of
increased computational effort.
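For illustration, a minimal sketch of widening the dynamic-range search via these control parameters (again reusing the exponential model data x, y from the examples below; the values are arbitrary):
## missing starting values with a wider multi-start search
gsl_nls(
  fn = y ~ A * exp(-lam * x) + b,       ## model formula
  data = data.frame(x = x, y = y),      ## model fit data
  start = c(A = NA, lam = NA, b = NA),  ## dynamic starting ranges
  control = gsl_nls_control(mstart_n = 100, mstart_r = 8, mstart_minsp = 3)
)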
References
M. Galassi et al., GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078.
Hickernell, F.J. and Yuan, Y. (1997) “A simple multistart algorithm for global optimization”, OR Transactions, Vol. 1 (2).
See Also
https://www.gnu.org/software/gsl/doc/html/nls.html
https://CRAN.R-project.org/package=robustbase/vignettes/psi_functions.pdf
Examples
# Example 1: exponential model
# (https://www.gnu.org/software/gsl/doc/html/nls.html#exponential-fitting-example)
## data
set.seed(1)
n <- 25
x <- (seq_len(n) - 1) * 3 / (n - 1)
f <- function(A, lam, b, x) A * exp(-lam * x) + b
y <- f(A = 5, lam = 1.5, b = 1, x) + rnorm(n, sd = 0.25)
## model fit
ex1_fit <- gsl_nls(
fn = y ~ A * exp(-lam * x) + b, ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(A = 0, lam = 0, b = 0) ## starting values
)
summary(ex1_fit) ## model summary
predict(ex1_fit, interval = "prediction") ## prediction intervals
## multi-start
gsl_nls(
fn = y ~ A * exp(-lam * x) + b, ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = list(A = c(0, 100), lam = c(0, 10), b = c(-10, 10)) ## fixed starting ranges
)
## missing starting values
gsl_nls(
fn = y ~ A * exp(-lam * x) + b, ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(A = NA, lam = NA, b = NA) ## dynamic starting ranges
)
## robust regression
gsl_nls(
fn = y ~ A * exp(-lam * x) + b, ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(A = 0, lam = 0, b = 0), ## starting values
loss = "barron" ## L1-L2 loss
)
## analytic Jacobian 1
gsl_nls(
fn = y ~ A * exp(-lam * x) + b, ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(A = 0, lam = 0, b = 0), ## starting values
jac = function(par) with(as.list(par), ## jacobian
cbind(A = exp(-lam * x), lam = -A * x * exp(-lam * x), b = 1)
)
)
## analytic Jacobian 2
gsl_nls(
fn = y ~ A * exp(-lam * x) + b, ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(A = 0, lam = 0, b = 0), ## starting values
jac = TRUE ## automatic derivation
)
## self-starting model
gsl_nls(
fn = y ~ SSasymp(x, Asym, R0, lrc), ## model formula
data = data.frame(x = x, y = y) ## model fit data
)
# Example 2: Gaussian function
# (https://www.gnu.org/software/gsl/doc/html/nls.html#geodesic-acceleration-example-2)
## data
set.seed(1)
n <- 100
x <- seq_len(n) / n
f <- function(a, b, c, x) a * exp(-(x - b)^2 / (2 * c^2))
y <- f(a = 5, b = 0.4, c = 0.15, x) * rnorm(n, mean = 1, sd = 0.1)
## Levenberg-Marquardt (default)
gsl_nls(
fn = y ~ a * exp(-(x - b)^2 / (2 * c^2)), ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(a = 1, b = 0, c = 1), ## starting values
trace = TRUE ## verbose output
)
## Levenberg-Marquardt w/ geodesic acceleration 1
gsl_nls(
fn = y ~ a * exp(-(x - b)^2 / (2 * c^2)), ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(a = 1, b = 0, c = 1), ## starting values
algorithm = "lmaccel", ## algorithm
trace = TRUE ## verbose output
)
## Levenberg-Marquardt w/ geodesic acceleration 2
## second directional derivative
fvv <- function(par, v, x) {
with(as.list(par), {
zi <- (x - b) / c
ei <- exp(-zi^2 / 2)
2 * v[["a"]] * v[["b"]] * zi / c * ei + 2 * v[["a"]] * v[["c"]] * zi^2 / c * ei -
v[["b"]]^2 * a / c^2 * (1 - zi^2) * ei -
2 * v[["b"]] * v[["c"]] * a / c^2 * zi * (2 - zi^2) * ei -
v[["c"]]^2 * a / c^2 * zi^2 * (3 - zi^2) * ei
})
}
## analytic fvv 1
gsl_nls(
fn = y ~ a * exp(-(x - b)^2 / (2 * c^2)), ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(a = 1, b = 0, c = 1), ## starting values
algorithm = "lmaccel", ## algorithm
trace = TRUE, ## verbose output
fvv = fvv, ## analytic fvv
x = x ## argument passed to fvv
)
## analytic fvv 2
gsl_nls(
fn = y ~ a * exp(-(x - b)^2 / (2 * c^2)), ## model formula
data = data.frame(x = x, y = y), ## model fit data
start = c(a = 1, b = 0, c = 1), ## starting values
algorithm = "lmaccel", ## algorithm
trace = TRUE, ## verbose output
fvv = TRUE ## automatic derivation
)
# Example 3: Branin function
# (https://www.gnu.org/software/gsl/doc/html/nls.html#comparing-trs-methods-example)
## Branin model function
branin <- function(x) {
a <- c(-5.1 / (4 * pi^2), 5 / pi, -6, 10, 1 / (8 * pi))
f1 <- x[2] + a[1] * x[1]^2 + a[2] * x[1] + a[3]
f2 <- sqrt(a[4] * (1 + (1 - a[5]) * cos(x[1])))
c(f1, f2)
}
## Dogleg minimization w/ model as function
gsl_nls(
fn = branin, ## model function
y = c(0, 0), ## response vector
start = c(x1 = 6, x2 = 14.5), ## starting values
algorithm = "dogleg" ## algorithm
)
# Available example problems
nls_test_list()
## BOD regression
## (https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml)
(boxbod <- nls_test_problem(name = "BoxBOD"))
with(boxbod,
gsl_nls(
fn = fn,
data = data,
start = list(b1 = NA, b2 = NA)
)
)
## Rosenbrock function
(rosenbrock <- nls_test_problem(name = "Rosenbrock"))
with(rosenbrock,
gsl_nls(
fn = fn,
y = y,
start = c(x1 = NA, x2 = NA),
jac = jac
)
)
Tunable Nonlinear Least Squares iteration parameters
Description
Allow the user to tune the characteristics of the gsl_nls and gsl_nls_large
nonlinear least squares algorithms.
Usage
gsl_nls_control(
maxiter = 100,
scale = "more",
solver = "qr",
fdtype = "forward",
factor_up = 2,
factor_down = 3,
avmax = 0.75,
h_df = sqrt(.Machine$double.eps),
h_fvv = 0.02,
xtol = sqrt(.Machine$double.eps),
ftol = sqrt(.Machine$double.eps),
gtol = sqrt(.Machine$double.eps),
mstart_n = 30,
mstart_p = 5,
mstart_q = mstart_n%/%10,
mstart_r = 4,
mstart_s = 2,
mstart_tol = 0.25,
mstart_maxiter = 10,
mstart_maxstart = 250,
mstart_minsp = 1,
irls_maxiter = 50,
irls_xtol = .Machine$double.eps^0.25,
...
)
Arguments
maxiter |
positive integer, termination occurs when the number of iterations reaches maxiter, defaults to 100. |
scale |
character, scaling method or damping strategy determining the diagonal scaling matrix D. The following options are supported:
"more" (default), scaling as suggested by Moré, in which D is updated from the column norms of the Jacobian; "levenberg", Levenberg damping with D equal to the identity matrix; "marquardt", scaling as suggested by Marquardt. |
solver |
character, method used to solve the linear least squares system resulting as a subproblem in each iteration.
For large-scale problems fitted with gsl_nls_large, the Cholesky solver is always selected and this parameter is not used. For problems fitted with gsl_nls, the following choices are supported:
"qr" (default), solution via QR decomposition of the Jacobian; "cholesky", solution of the normal equations via Cholesky decomposition; "mcholesky", solution of the normal equations via modified Cholesky decomposition; "svd", solution via singular value decomposition of the Jacobian. |
fdtype |
character, method used to numerically approximate the Jacobian and/or second-order derivatives
when geodesic acceleration is used. Either "forward" (default) for forward finite differences, or "center" for centered finite differences. |
factor_up |
numeric factor by which to increase the trust region radius when a search step is accepted. Too large values may destabilize the search, too small values slow down the search, defaults to 2. |
factor_down |
numeric factor by which to decrease the trust region radius when a search step is rejected. Too large values may destabilize the search, too small values slow down the search, defaults to 3. |
avmax |
numeric value, the ratio of the acceleration term to the velocity term when using geodesic acceleration to
solve the nonlinear least squares problem. Any steps with a ratio larger than avmax are rejected, defaults to 0.75. |
h_df |
numeric value, the step size for approximating the Jacobian matrix with finite differences, defaults to sqrt(.Machine$double.eps). |
h_fvv |
numeric value, the step size for approximating the second directional derivative when geodesic acceleration
is used to solve the nonlinear least squares problem, defaults to 0.02. This is only used if no analytic second
directional derivative ( |
xtol |
numeric value, termination occurs when the relative change in parameters between iterations is <= xtol, defaults to sqrt(.Machine$double.eps). |
ftol |
numeric value, termination occurs when the relative change in sum of squared residuals between iterations is <= ftol, defaults to sqrt(.Machine$double.eps). |
gtol |
numeric value, termination occurs when the relative size of the gradient of the sum of squared residuals is <= gtol, defaults to sqrt(.Machine$double.eps). |
mstart_n |
positive integer, number of quasi-random points drawn in each major iteration, parameter N in Hickernell and Yuan (1997), defaults to 30. |
mstart_p |
positive integer, number of iterations of inexpensive local search to concentrate the sample, parameter p in Hickernell and Yuan (1997), defaults to 5. |
mstart_q |
positive integer, number of points retained in the concentrated sample, parameter q in Hickernell and Yuan (1997), defaults to mstart_n %/% 10. |
mstart_r |
positive integer, scaling factor of the number of stationary points determining when the multi-start algorithm terminates, parameter r in Hickernell and Yuan (1997), defaults to 4. |
mstart_s |
positive integer, minimum number of iterations a point needs to be retained before starting an efficient local search, parameter s in Hickernell and Yuan (1997), defaults to 2. |
mstart_tol |
numeric value, multiplicative tolerance (1 + mstart_tol) used as a criterion to start an efficient local search (Algorithm B, Hickernell and Yuan (1997)), defaults to 0.25. |
mstart_maxiter |
positive integer, maximum number of iterations in the efficient local search algorithm (Algorithm B, Hickernell and Yuan (1997)), defaults to 10. |
mstart_maxstart |
positive integer, maximum number of major iterations (Algorithm 2.1, Hickernell and Yuan (1997)) before the multi-start algorithm terminates, defaults to 250. |
mstart_minsp |
positive integer, minimum number of detected stationary points before the multi-start algorithm terminates, defaults to 1. |
irls_maxiter |
positive integer, maximum number of IRLS iterations, defaults to 50. Only used in case of a non-default loss function (loss != "default"). |
irls_xtol |
numeric value, termination of the IRLS procedure occurs when the relative change in parameters between IRLS iterations is <= irls_xtol, defaults to .Machine$double.eps^0.25. Only used in case of a non-default loss function (loss != "default"). |
... |
any additional arguments (currently not used). |
Value
A list with exactly twenty-three components:
maxiter
scale
solver
fdtype
factor_up
factor_down
avmax
h_df
h_fvv
xtol
ftol
gtol
mstart_n
mstart_p
mstart_q
mstart_r
mstart_s
mstart_tol
mstart_maxiter
mstart_maxstart
mstart_minsp
irls_maxiter
irls_xtol
with meanings as explained under 'Arguments'.
Note
ftol is disabled in some versions of the GSL library.
References
M. Galassi et al., GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078.
Hickernell, F.J. and Yuan, Y. (1997) “A simple multistart algorithm for global optimization”, OR Transactions, Vol. 1 (2).
See Also
https://www.gnu.org/software/gsl/doc/html/nls.html#tunable-parameters
Examples
## default tuning parameters
gsl_nls_control()
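## overriding selected tuning parameters (illustrative values only);
## parameters that are not listed keep their default values
gsl_nls_control(maxiter = 500, scale = "levenberg", xtol = 1e-10)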
GSL Large-scale Nonlinear Least Squares fitting
Description
Determine the nonlinear least-squares estimates of the parameters of a large
nonlinear model system using the gsl_multilarge_nlinear module present in
the GNU Scientific Library (GSL).
Usage
gsl_nls_large(fn, ...)
## S3 method for class 'formula'
gsl_nls_large(
fn,
data = parent.frame(),
start,
algorithm = c("lm", "lmaccel", "dogleg", "ddogleg", "subspace2D", "cgst"),
control = gsl_nls_control(),
jac,
fvv,
trace = FALSE,
subset,
weights,
na.action,
model = FALSE,
...
)
## S3 method for class 'function'
gsl_nls_large(
fn,
y,
start,
algorithm = c("lm", "lmaccel", "dogleg", "ddogleg", "subspace2D", "cgst"),
control = gsl_nls_control(),
jac,
fvv,
trace = FALSE,
weights,
...
)
Arguments
fn |
a nonlinear model defined either as a two-sided formula including variables and parameters, or as a function returning a numeric vector, with first argument the vector of parameters to be estimated. See the individual method descriptions below. |
data |
an optional data frame in which to evaluate the variables in fn if fn is defined as a formula. |
y |
numeric response vector if fn is defined as a function, equal in length to the numeric vector returned by evaluation of fn. |
start |
a named list or named numeric vector of starting estimates. |
algorithm |
character string specifying the algorithm to use. The following choices are supported:
"lm" (default), the Levenberg-Marquardt algorithm; "lmaccel", the Levenberg-Marquardt algorithm with geodesic acceleration; "dogleg", Powell's dogleg algorithm; "ddogleg", the double dogleg algorithm; "subspace2D", the 2D generalized dogleg (subspace) algorithm; "cgst", the Steihaug-Toint conjugate gradient algorithm for large systems. |
control |
an optional list of control parameters to tune the least squares iterations and multistart algorithm.
See gsl_nls_control for the available control parameters and their default values. |
jac |
a function returning the n by p dimensional Jacobian matrix of the nonlinear model, with first argument the vector of parameters to be estimated; the returned matrix can be a dense numeric matrix or a sparse matrix from the Matrix package (see the examples). If fn is a formula, jac can also be set to TRUE, in which case the Jacobian is derived symbolically from the model formula. |
fvv |
a function returning an n-dimensional numeric vector of second directional derivatives of the nonlinear model, with first argument the vector of parameters and second argument the velocity vector. Only used when geodesic acceleration is enabled (algorithm = "lmaccel"). If fn is a formula, fvv can also be set to TRUE, in which case the second directional derivative is derived symbolically. |
trace |
logical value indicating if a trace of the iteration progress should be printed.
Default is FALSE. |
subset |
an optional vector specifying a subset of observations to be used in the fitting process.
This argument is only used if fn is defined as a formula. |
weights |
an optional numeric vector of (fixed) weights. When present, the objective function is weighted least squares. |
na.action |
a function which indicates what should happen when the data contain NAs. This argument is only used if fn is defined as a formula. |
model |
a logical value. If TRUE, the model frame is included as a component of the returned object. Defaults to FALSE. This argument is only used if fn is defined as a formula. |
... |
additional arguments passed to the calls of fn, jac and fvv if defined as functions. |
Value
If fn is a formula returns a list object of class nls.
If fn is a function returns a list object of class gsl_nls.
See the individual method descriptions for the structures of the returned lists and the generic functions
applicable to objects of both classes.
Methods (by class)
- gsl_nls_large(formula): If fn is a formula, the returned list object is of classes gsl_nls and nls. Therefore, all generic functions applicable to objects of class nls, such as anova, coef, confint, deviance, df.residual, fitted, formula, logLik, nobs, predict, print, profile, residuals, summary, vcov, hatvalues, cooks.distance and weights, are also applicable to the returned list object. In addition, a method confintd is available for inference of derived parameters.
- gsl_nls_large(function): If fn is a function, the first argument must be the vector of parameters and the function should return a numeric vector containing the nonlinear model evaluations at the provided parameter and predictor or covariate vectors. In addition, the argument y needs to contain the numeric vector of observed responses, equal in length to the numeric vector returned by fn. The returned list object is (only) of class gsl_nls. Although the returned object is not of class nls, the following generic functions remain applicable for an object of class gsl_nls: anova, coef, confint, deviance, df.residual, fitted, formula, logLik, nobs, predict, print, residuals, summary, vcov, hatvalues, cooks.distance and weights. In addition, a method confintd is available for inference of derived parameters.
References
M. Galassi et al., GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078.
See Also
https://www.gnu.org/software/gsl/doc/html/nls.html
Examples
# Large NLS example
# (https://www.gnu.org/software/gsl/doc/html/nls.html#large-nonlinear-least-squares-example)
## number of parameters
p <- 250
## model function
f <- function(theta) {
c(sqrt(1e-5) * (theta - 1), sum(theta^2) - 0.25)
}
## jacobian function
jac <- function(theta) {
rbind(diag(sqrt(1e-5), nrow = length(theta)), 2 * t(theta))
}
## dense Levenberg-Marquardt
gsl_nls_large(
fn = f, ## model
y = rep(0, p + 1), ## (dummy) responses
start = 1:p, ## start values
algorithm = "lm", ## algorithm
jac = jac, ## jacobian
control = list(maxiter = 250)
)
## dense Steihaug-Toint conjugate gradient
gsl_nls_large(
fn = f, ## model
y = rep(0, p + 1), ## (dummy) responses
start = 1:p, ## start values
jac = jac, ## jacobian
algorithm = "cgst" ## algorithm
)
## sparse Jacobian function
jacsp <- function(theta) {
rbind(Matrix::Diagonal(x = sqrt(1e-5), n = length(theta)), 2 * t(theta))
}
## sparse Levenberg-Marquardt
gsl_nls_large(
fn = f, ## model
y = rep(0, p + 1), ## (dummy) responses
start = 1:p, ## start values
algorithm = "lm", ## algorithm
jac = jacsp, ## sparse jacobian
control = list(maxiter = 250)
)
## sparse Steihaug-Toint conjugate gradient
gsl_nls_large(
fn = f, ## model
y = rep(0, p + 1), ## (dummy) responses
start = 1:p, ## start values
jac = jacsp, ## sparse jacobian
algorithm = "cgst" ## algorithm
)
Robust loss functions with tunable parameters
Description
Allow the user to tune the coefficient(s) of the robust loss functions
supported by gsl_nls. For all choices other than rho = "default", the MM-estimation
problem is optimized by means of iterative reweighted least squares (IRLS).
Usage
gsl_nls_loss(
rho = c("default", "huber", "barron", "bisquare", "welsh", "optimal", "hampel", "ggw",
"lqq"),
cc = NULL
)
Arguments
rho |
character loss function, one of "default", "huber", "barron", "bisquare", "welsh", "optimal", "hampel", "ggw" or "lqq"; see the individual loss function details below. |
cc |
named or unnamed numeric vector of tuning parameters. The length of this argument depends on the selected rho function; if NULL (default), the default tuning parameters of the selected loss function are used. |
Value
A list with two components:
rho
cc
with meanings as explained under ‘Arguments’.
Loss function details
"default"
Default squared loss, no iterative reweighted least squares (IRLS) is required in this case.
\rho(x) = x^2
"huber"
Huber loss function with scaling constant k, defaulting to k = 1.345 for 95% efficiency of the regression estimator.
\rho(x, k) = \left\{ \begin{array}{ll} \frac{1}{2} x^2 &\quad \text{ if } |x| \leq k \\[2pt] k(|x| - \frac{k}{2}) &\quad \text{ if } |x| > k \end{array} \right.
"barron"
Barron's smooth family of loss functions with robustness parameter \alpha \leq 2 (default \alpha = 1) and scaling constant k (default k = 1.345). Special cases include: (scaled) squared loss for \alpha = 2; L1-L2 loss for \alpha = 1; Cauchy loss for \alpha = 0; Geman-McClure loss for \alpha = -2; and Welsch/Leclerc loss for \alpha = -\infty. See Barron (2019) for additional details.
\rho(x, \alpha, k) = \left\{ \begin{array}{ll} \frac{1}{2} (x / k)^2 &\quad \text{ if } \alpha = 2 \\[2pt] \log\left(\frac{1}{2} (x / k)^2 + 1 \right) &\quad \text{ if } \alpha = 0 \\[2pt] 1 - \exp\left( -\frac{1}{2} (x / k)^2 \right) &\quad \text{ if } \alpha = -\infty \\[2pt] \frac{|\alpha - 2|}{\alpha} \left( \left( \frac{(x / k)^2}{|\alpha-2|} + 1 \right)^{\alpha / 2} - 1 \right) &\quad \text{ otherwise } \end{array} \right.
"bisquare"
Tukey's biweight/bisquare loss function with scaling constant k, defaulting to k = 4.685 for 95% efficiency of the regression estimator (k = 1.548 gives a breakdown point of 0.5 of the S-estimator).
\rho(x, k) = \left\{ \begin{array}{ll} 1 - (1 - (x / k)^2)^3 &\quad \text{ if } |x| \leq k \\[2pt] 1 &\quad \text{ if } |x| > k \end{array} \right.
"welsh"
Welsh/Leclerc loss function with scaling constant k, defaulting to k = 2.11 for 95% efficiency of the regression estimator (k = 0.577 gives a breakdown point of 0.5 of the S-estimator). This is equivalent to the Barron loss function with robustness parameter \alpha = -\infty.
\rho(x, k) = 1 - \exp\left( -\frac{1}{2} (x / k)^2 \right)
"optimal"
Optimal loss function as in Section 5.9.1 of Maronna et al. (2006) with scaling constant k, defaulting to k = 1.060 for 95% efficiency of the regression estimator (k = 0.405 gives a breakdown point of 0.5 of the S-estimator).
\rho(x, k) = \int_0^x \text{sign}(u) \left( - \dfrac{\phi'(|u|) + k}{\phi(|u|)} \right)^+ du
with \phi the standard normal density.
"hampel"
Hampel loss function with a single scaling constant k, setting a = 1.5k, b = 3.5k and r = 8k. k = 0.902 by default, resulting in 95% efficiency of the regression estimator (k = 0.212 gives a breakdown point of 0.5 of the S-estimator).
\rho(x, a, b, r) = \left\{ \begin{array}{ll} \frac{1}{2} x^2 / C &\quad \text{ if } |x| \leq a \\[2pt] \left( \frac{1}{2}a^2 + a(|x|-a)\right) / C &\quad \text{ if } a < |x| \leq b \\[2pt] \frac{a}{2}\left( 2b - a + (|x| - b) \left(1 + \frac{r - |x|}{r-b}\right) \right) / C &\quad \text{ if } b < |x| \leq r \\[2pt] 1 &\quad \text{ if } r < |x| \end{array} \right.
with C = \rho(\infty) = \frac{a}{2} (b - a + r).
"ggw"
Generalized Gauss-Weight loss function, a generalization of the Welsh/Leclerc loss with tuning constants a, b, c, defaulting to a = 1.387, b = 1.5, c = 1.063 for 95% efficiency of the regression estimator (a = 0.204, b = 1.5, c = 0.296 gives a breakdown point of 0.5 of the S-estimator).
\rho(x, a, b, c) = \int_0^x \psi(u, a, b, c) du
with
\psi(x, a, b, c) = \left\{ \begin{array}{ll} x &\quad \text{ if } |x| \leq c \\[2pt] \exp\left( -\frac{1}{2} \frac{(|x| - c)^b}{a} \right) x &\quad \text{ if } |x| > c \end{array} \right.
"lqq"
Linear Quadratic Quadratic loss function with tuning constants b, c, s, defaulting to b = 1.473, c = 0.982, s = 1.5 for 95% efficiency of the regression estimator (b = 0.402, c = 0.268, s = 1.5 gives a breakdown point of 0.5 of the S-estimator).
\rho(x, b, c, s) = \int_0^x \psi(u, b, c, s) du
with
\psi(x, b, c, s) = \left\{ \begin{array}{ll} x &\quad \text{ if } |x| \leq c \\[2pt] \text{sign}(x) \left( |x| - \frac{s}{2b} (|x| - c)^2 \right) &\quad \text{ if } c < |x| \leq b + c \\[2pt] \text{sign}(x) \left( c + b - \frac{bs}{2} + \frac{s-1}{a} \left( \frac{1}{2} \tilde{x}^2 - a\tilde{x}\right) \right) &\quad \text{ if } b + c < |x| \leq a + b + c \\[2pt] 0 &\quad \text{ otherwise } \end{array} \right.
where \tilde{x} = |x| - b - c and a = (2c + 2b - bs) / (s - 1).
References
J.T. Barron (2019). A general and adaptive robust loss function. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4331-4339).
M. Galassi et al., GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078.
R.A. Maronna et al., Robust Statistics: Theory and Methods, ISBN 0470010924.
See Also
https://CRAN.R-project.org/package=robustbase
Examples
## Huber loss with default scale k
gsl_nls_loss(rho = "huber")
## L1-L2 loss (alpha = 1)
gsl_nls_loss(rho = "barron", cc = c(1, 1.345))
## Cauchy loss (alpha = 0)
gsl_nls_loss(rho = "barron", cc = c(k = 1.345, alpha = 0))
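## passing a tuned loss function directly to gsl_nls() (a sketch, assuming
## the exponential model data x, y from the gsl_nls() examples and that
## the loss argument also accepts the list returned by gsl_nls_loss())
gsl_nls(
  fn = y ~ A * exp(-lam * x) + b,
  data = data.frame(x = x, y = y),
  start = c(A = 0, lam = 0, b = 0),
  loss = gsl_nls_loss(rho = "huber", cc = 2) ## custom scaling constant
)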
Calculate leverage values
Description
Returns leverage values (hat values) from a fitted "gsl_nls" object based on the estimated
variance-covariance matrix of the model parameters.
Usage
## S3 method for class 'gsl_nls'
hatvalues(model, ...)
Arguments
model |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
Numeric vector of leverage values similar to hatvalues.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
hatvalues(obj)
Extract model log-likelihood
Description
Returns the model log-likelihood of a fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
logLik(object, REML = FALSE, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
REML |
logical value; included for compatibility reasons only, should not be used. |
... |
At present no optional arguments are used. |
Value
Numeric object of class "logLik" identical to logLik.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
logLik(obj)
Available NLS test problems
Description
Returns an overview of 59 NLS test problems, originating primarily from the NIST Statistical Reference Datasets (StRD) archive, Bates and Watts (1988), and Moré, Garbow and Hillstrom (1981).
Usage
nls_test_list(fields = c("name", "class", "p", "n", "check"))
Arguments
fields |
optional character vector to return a subset of columns in the data.frame. |
Value
a data.frame with high-level information about the available test problems. The following columns are returned by default:
- "name": Name of the test problem, for use in nls_test_problem.
- "class": Either "formula" if the model is defined as a formula, or "function" if defined as a function.
- "p": Default number of parameters in the test problem.
- "n": Default number of residuals in the test problem.
- "check": One of the following three options: (1) "p, n fixed" if the listed values for p and n are the only ones possible; (2) "p <= n free" if the values for p and n are not fixed, but p must be smaller than or equal to n; (3) "p == n free" if the values for p and n are not fixed, but p must be equal to n.
References
D.M. Bates and Watts, D.G. (1988). Nonlinear Regression Analysis and Its Applications, Wiley, ISBN: 0471816434.
J.J. Moré, Garbow, B.S. and Hillstrom, K.E. (1981). Testing unconstrained optimization software, ACM Transactions on Mathematical Software, 7(1), 17-41.
See Also
https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml
https://people.math.sc.edu/Burkardt/f_src/test_nls/test_nls.html
Examples
## available test problems
nls_test_list()
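## the returned data.frame can be filtered like any other, e.g. to show
## only the test problems defined as functions (a usage sketch)
subset(nls_test_list(), class == "function")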
Retrieve an NLS test problem
Description
Fetches the model definition and model data required to solve a single NLS test problem with gsl_nls
(or nls if the model is defined as a formula). Use nls_test_list to
list the names of the available NLS test problems.
Usage
nls_test_problem(name, p = NA, n = NA)
Arguments
name |
Name of the NLS test problem, as returned in the "name" column of nls_test_list. |
p |
The number of parameters in the NLS test problem, constrained by the "check" column of nls_test_list; only used if the number of parameters is not fixed. |
n |
The number of residuals in the NLS test problem, constrained by the "check" column of nls_test_list; only used if the number of residuals is not fixed. |
Value
If the model is defined as a formula, a list of class "nls_test_formula" with elements:
- data: a data.frame with n rows containing the data (predictor and response values) used in the regression problem.
- fn: a formula defining the test problem model.
- start: a named vector of length p with suggested starting values for the parameters.
- target: a named vector of length p with the certified target values for the parameters corresponding to the best-available solutions.
If the model is defined as a function, a list of class "nls_test_function" with elements:
- fn: a function defining the test problem model. fn takes a vector of parameters of length p as its first argument and returns a numeric vector of length n.
- y: a numeric vector of length n containing the response values.
- start: a named numeric vector of length p with suggested starting values for the parameters.
- jac: a function defining the analytic Jacobian matrix of the model fn. jac takes a vector of parameters of length p as its first argument and returns an n by p dimensional matrix.
- target: a named numeric vector of length p with the certified target values for the parameters, or a vector of NA's if no target solution is available.
Note
For several problems the optimal least-squares objective of the target solution can be obtained at multiple different parameter locations.
References
D.M. Bates and Watts, D.G. (1988). Nonlinear Regression Analysis and Its Applications, Wiley, ISBN: 0471816434.
J.J. Moré, Garbow, B.S. and Hillstrom, K.E. (1981). Testing unconstrained optimization software, ACM Transactions on Mathematical Software, 7(1), 17-41.
See Also
https://www.itl.nist.gov/div898/strd/nls/nls_main.shtml
https://people.math.sc.edu/Burkardt/f_src/test_nls/test_nls.html
Examples
## example regression problem
ratkowsky2 <- nls_test_problem(name = "Ratkowsky2")
with(ratkowsky2,
gsl_nls(
fn = fn,
data = data,
start = start
)
)
## example optimization problem
rosenbrock <- nls_test_problem(name = "Rosenbrock")
with(rosenbrock,
gsl_nls(
fn = fn,
y = y,
start = start,
jac = jac
)
)
Extract the number of observations
Description
Returns the number of observations from a "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
nobs(object, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
Integer number of observations similar to nobs
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
nobs(obj)
Calculate model predicted values
Description
Returns predicted values for the expected response from a fitted "gsl_nls" object.
Asymptotic confidence or prediction (tolerance) intervals at a given level can be evaluated
by specifying the appropriate interval argument.
Usage
## S3 method for class 'gsl_nls'
predict(
object,
newdata,
scale = NULL,
interval = c("none", "confidence", "prediction"),
level = 0.95,
...
)
Arguments
object |
An object inheriting from class "gsl_nls". |
newdata |
A named list or data.frame in which to look for variables with which to predict. If newdata is missing, the predictions are evaluated at the data used to fit the model. |
scale |
A numeric scalar or vector. If it is set, it is used as the residual standard deviation (or vector of residual standard deviations) in the computation of the standard errors, otherwise this information is extracted from the model fit. |
interval |
A character string indicating if confidence or prediction (tolerance) intervals at the specified level should be returned. |
level |
A numeric scalar between 0 and 1 giving the confidence level for the intervals (if any) to be calculated. |
... |
At present no optional arguments are used. |
Value
If interval = "none" (default), a vector of predictions for the mean response. Otherwise,
a matrix with columns fit, lwr and upr. The first column (fit) contains
predictions for the mean response. The other two columns contain lower (lwr) and upper (upr)
confidence or prediction bounds at the specified level.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
predict(obj)
predict(obj, newdata = data.frame(x = 1:(2 * n) / n))
predict(obj, interval = "confidence")
predict(obj, interval = "prediction", level = 0.99)
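## confidence intervals with a user-supplied residual standard deviation
## (the value 0.1 is illustrative only)
predict(obj, interval = "confidence", scale = 0.1)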
Extract model residuals
Description
Returns the model residuals from a fitted "gsl_nls" object.
resid can also be used as an alias.
Usage
## S3 method for class 'gsl_nls'
residuals(object, type = c("response", "pearson"), ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
type |
character; if "response" (default), the raw residuals on the response scale are returned; if "pearson", the standardized (Pearson) residuals are returned. |
... |
At present no optional arguments are used. |
Value
Numeric vector of model residuals similar to residuals.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
residuals(obj)
Residual standard deviation
Description
Returns the estimated (unweighted) residual standard deviation of a fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
sigma(object, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
Numeric residual standard deviation value similar to sigma
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
sigma(obj)
Model summary
Description
Returns the model summary for a fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
summary(object, correlation = FALSE, symbolic.cor = FALSE, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
correlation |
logical; if TRUE, the correlation matrix of the estimated parameters is returned and printed. |
symbolic.cor |
logical; if TRUE, print the correlations in a symbolic form rather than as numbers. |
... |
At present no optional arguments are used. |
Value
List object of class "summary.gsl_nls" similar to summary.nls
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
summary(obj)
Calculate variance-covariance matrix
Description
Returns the estimated variance-covariance matrix of the model parameters
from a fitted "gsl_nls" object.
Usage
## S3 method for class 'gsl_nls'
vcov(object, ...)
Arguments
object |
An object inheriting from class "gsl_nls". |
... |
At present no optional arguments are used. |
Value
A matrix containing the estimated covariances between the parameter estimates similar
to vcov with row and column names corresponding to the parameter names given by coef.gsl_nls.
See Also
Examples
## data
set.seed(1)
n <- 25
xy <- data.frame(
x = (1:n) / n,
y = 2.5 * exp(-1.5 * (1:n) / n) + rnorm(n, sd = 0.1)
)
## model
obj <- gsl_nls(fn = y ~ A * exp(-lam * x), data = xy, start = c(A = 1, lam = 1))
vcov(obj)
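## approximate parameter standard errors from the diagonal (a usage sketch)
sqrt(diag(vcov(obj)))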