When fitting a nonlinear regression model in R with nls(), the first step is to select an appropriate regression model for the observed data; the second step is to find reasonable starting values for the model parameters in order to initialize the nonlinear least-squares (NLS) algorithm.
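To make both steps concrete, here is a minimal sketch fitting an exponential decay model with nls(); the model, simulated data and starting values are purely illustrative:

```r
## minimal sketch: model, simulated data and start values are illustrative only
set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 2.5 * exp(-0.7 * x) + rnorm(50, sd = 0.1)  # noisy observations

## nls() is initialized with rough starting values for A and lambda
fit <- nls(y ~ A * exp(-lambda * x), start = list(A = 1, lambda = 1))
coef(fit)
```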
The previous post showcases the rrapply() function in the minimal rrapply-package as a revised and extended version of base rapply() in the context of nested list recursion in R.
The nested list below shows a small extract from the Mathematics Genealogy Project highlighting the advisor/student genealogy of several famous mathematicians. The mathematicians' given names are present in the …
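To give a flavor of the interface, the sketch below melts a toy advisor/student list (not the actual genealogy data from the post) into a data.frame with rrapply()'s how = "melt" option:

```r
## minimal sketch with a toy advisor/student list, not the post's data
library(rrapply)

students <- list(
  Gauss = list(Dedekind = "Richard", Riemann = "Bernhard"),
  Euler = list(Lagrange = "Joseph-Louis")
)

## unwind the nested list to a data.frame with one row per leaf node
rrapply(students, how = "melt")
```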
This post showcases several multi-scale Turing patterns generated in R with Rcpp(Armadillo). The generating process, inspired by McCabe (2010), consists of multi-scale convolutions with respect to short-range activator kernels and long-range inhibitor kernels, computed efficiently in the Fourier domain using RcppArmadillo. Starting from an almost homogeneous state, the algorithm generates regular 2D Turing patterns with smoothly varying behavior across multiple scales that are quite fascinating to look at.
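The key computational ingredient is the convolution theorem: convolving the grid with a kernel reduces to element-wise multiplication in the Fourier domain. The base-R sketch below illustrates a single activator/inhibitor step on a toroidal grid; it is a stand-in for the RcppArmadillo implementation and all kernel parameters are arbitrary:

```r
## one activator/inhibitor convolution step via fft(); a base-R
## stand-in for the RcppArmadillo version, parameters arbitrary
n <- 64
state <- matrix(runif(n * n, -0.01, 0.01), n, n)  # near-homogeneous start

ix <- pmin(0:(n - 1), n - (0:(n - 1)))   # wrapped coordinates on the torus
r  <- sqrt(outer(ix^2, ix^2, "+"))       # toroidal distance to the origin

## short-range activator and long-range inhibitor (Gaussian) kernels
activator <- exp(-r^2 / (2 * 2^2)); activator <- activator / sum(activator)
inhibitor <- exp(-r^2 / (2 * 6^2)); inhibitor <- inhibitor / sum(inhibitor)

## circular convolution by multiplication in the Fourier domain
conv2 <- function(x, k) Re(fft(fft(x) * fft(k), inverse = TRUE)) / length(x)
variation <- conv2(state, activator) - conv2(state, inhibitor)
```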
Automatic differentiation (AD) refers to the automatic/algorithmic calculation of derivatives of a function defined as a computer program by repeated application of the chain rule. Automatic differentiation plays an important role in many statistical computing problems, such as gradient-based optimization of large-scale models, where gradient calculation by means of numeric differentiation (i.e. finite differencing) can be inaccurate or computationally expensive.
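As a toy illustration of the idea (not how any production AD library is implemented), forward-mode AD can be mimicked with dual numbers that carry a value together with its derivative:

```r
## toy forward-mode AD: dual numbers propagate derivatives by the chain rule
dual <- function(value, deriv = 0) list(value = value, deriv = deriv)

d_mul <- function(a, b)           # product rule
  dual(a$value * b$value, a$deriv * b$value + a$value * b$deriv)
d_sin <- function(a)              # chain rule for sin()
  dual(sin(a$value), cos(a$value) * a$deriv)

## derivative of f(x) = x * sin(x) at x = 2, seeded with dx/dx = 1
x  <- dual(2, 1)
fx <- d_mul(x, d_sin(x))
fx$deriv                          # equals sin(2) + 2 * cos(2)
```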
The new gslnls-package provides R bindings to nonlinear least-squares optimization with the GNU Scientific Library (GSL) using the trust region methods implemented by the gsl_multifit_nlinear module. The gsl_multifit_nlinear module was added in GSL version 2.2.
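The gsl_nls() interface closely mirrors nls(); the sketch below (with an illustrative exponential model and simulated data) shows how a Levenberg-Marquardt fit would typically look, assuming the "lm" algorithm choice:

```r
## minimal sketch: model and data are illustrative only
library(gslnls)

set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 2.5 * exp(-0.7 * x) + rnorm(50, sd = 0.1)

fit <- gsl_nls(
  fn = y ~ A * exp(-lambda * x),    # model formula, as in nls()
  data = data.frame(x = x, y = y),
  start = list(A = 1, lambda = 1),  # starting parameter values
  algorithm = "lm"                  # Levenberg-Marquardt trust region method
)
```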
As a model setup, we consider noisy observations \(y_1,\ldots, y_n \in \mathbb{R}\) obtained from a standard nonlinear regression model of the form:
\[ y_i \ = \ f(\boldsymbol{x}_i, \boldsymbol{\theta}) + \epsilon_i, \quad i = 1,\ldots, n \] where \(f: \mathbb{R}^k \times \mathbb{R}^p \to \mathbb{R}\) is a known nonlinear function of the independent variables \(\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n \in \mathbb{R}^k\) and the unknown parameter vector \(\boldsymbol{\theta} \in \mathbb{R}^p\) that we aim to estimate, and \(\epsilon_1, \ldots, \epsilon_n\) are independent and identically distributed errors.
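The NLS estimate of \(\boldsymbol{\theta}\) then minimizes the residual sum of squares (the standard least-squares criterion):

\[ \hat{\boldsymbol{\theta}} \ = \ \arg\min_{\boldsymbol{\theta} \in \mathbb{R}^p} \sum_{i=1}^n \left( y_i - f(\boldsymbol{x}_i, \boldsymbol{\theta}) \right)^2 \]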
The aim of this post is to provide a working approach to perform piecewise constant or step function regression in Stan. To set up the regression problem, consider noisy observations \(y_1, \ldots, y_n \in \mathbb{R}\) sampled from a standard signal plus i.i.d. noise model.
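Such data are straightforward to simulate in R; the breakpoints, levels and noise scale below are chosen purely for illustration:

```r
## simulate noisy observations from a piecewise constant signal;
## breakpoints, levels and noise scale are illustrative only
set.seed(1)
n  <- 100
x  <- seq(0, 1, length.out = n)
mu <- ifelse(x < 0.35, 0, ifelse(x < 0.7, 2, -1))  # step function signal
y  <- mu + rnorm(n, sd = 0.5)                      # i.i.d. Gaussian noise
plot(x, y); lines(x, mu, col = "red", lwd = 2)
```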
Selection bias occurs when sampled data or subjects in a study have been selected in a way that is not representative of the population of interest. As a consequence, conclusions drawn from the analyzed sample may be difficult to generalize: the observed effects could be biased towards the sample and need not extend to the population we intended to analyze.
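A toy simulation makes the effect tangible; the selection mechanism below (inclusion probability increasing with the outcome) is made up for the example:

```r
## toy example: selecting on the outcome biases the sample mean upwards
set.seed(1)
population <- rnorm(1e5)               # target population, mean 0

## hypothetical selection mechanism favoring large outcomes
keep <- runif(1e5) < plogis(2 * population)
sample_sel <- population[keep]

mean(population)   # ~0, the quantity we intend to estimate
mean(sample_sel)   # noticeably larger: selection bias
```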
The previous post demonstrates the use of pre-compiled Stan models in interactive R Shiny applications to avoid unnecessary Stan model (re-)compilation on application start-up. In this short follow-up post we go a step further and tackle the issue of tracking the progress of the Stan model sampling itself in a Shiny application.
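One general pattern for this (a hedged sketch, not necessarily the approach taken in the post) is to run the long computation in a background R process that writes its progress to a file, which the Shiny session polls:

```r
## general pattern sketch: a background process (here a dummy loop
## standing in for Stan sampling) reports progress through a file
library(shiny)
library(callr)

progress_file <- tempfile(fileext = ".txt")
writeLines("0", progress_file)

ui <- fluidPage(
  actionButton("run", "Start sampling"),
  textOutput("progress")
)

server <- function(input, output, session) {
  observeEvent(input$run, {
    ## keep a reference so the background process is not garbage collected
    session$userData$proc <- r_bg(function(file) {
      for (i in 1:10) {
        Sys.sleep(1)                           # stand-in for sampling work
        writeLines(as.character(10 * i), file)
      }
    }, args = list(file = progress_file))
  })

  ## re-read the progress file every second
  pct <- reactiveFileReader(1000, session, progress_file, readLines)
  output$progress <- renderText(paste0(pct(), "% completed"))
}

shinyApp(ui, server)
```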