sgdpd

Stochastic gradient descent for density power divergence minimization.

Overview

This repository offers a user-friendly R package implementing an optimizer that minimizes the density power divergence proposed by Basu et al. (1998):

$$L_{\alpha}(\theta) = -\frac{1}{\alpha} \frac{1}{n} \sum_{i=1}^{n} f_{\theta}(z_i)^{\alpha} + \frac{1}{1+\alpha}\int f_{\theta}(z)^{1+\alpha} dz.$$
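For intuition, this objective can be evaluated directly in R for a univariate model: the first term is an empirical average over the sample, and the integral can be approximated numerically. The helper below (dpd_loss, using integrate() for the integral term) is an illustrative sketch, not part of the package:

# Illustrative evaluation of L_alpha(theta) for a univariate density f(z, theta);
# dpd_loss is a hypothetical helper, not the package's internals.
dpd_loss <- function(f, Z, theta, alpha) {
  empirical <- -mean(f(Z, theta)^alpha) / alpha            # -(1/alpha)(1/n) sum_i f(z_i)^alpha
  integral  <- integrate(function(z) f(z, theta)^(1 + alpha),
                         lower = -Inf, upper = Inf)$value  # int f(z)^{1+alpha} dz
  empirical + integral / (1 + alpha)
}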

This optimizer requires minimal user effort to obtain the optimal parameter when estimating general parametric models. To cite this package, please cite the following manuscript:

@article{okuno2024DPD,
    year      = {2024},
    volume    = {},
    number    = {},
    pages     = {},
    author    = {Akifumi Okuno},
    title     = {Minimizing robust density power-based divergences for general parametric density models},
    journal   = {Annals of the Institute of Statistical Mathematics},
    note      = {To appear.}
}

Quickstart

Install

Please execute the following command in R to install our sgdpd package. Note that this installs a Windows binary (type="win.binary").

install.packages("https://okuno.net/R-packages/sgdpd_1.0.0.zip", repos=NULL, type="win.binary")

Example

To showcase the capabilities of our optimizer, we perform a univariate skew-normal density estimation. First, we define the skew-normal density function, parameterized by theta, using the dsnorm function from the fGarch package:

library(fGarch)  # provides the skew-normal density dsnorm
f <- function(z, theta) dsnorm(x=z, mean=theta[1], sd=theta[2], xi=theta[3])
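A quick sanity check of the density (the parameter values here are illustrative):

f(0, c(0, 1, 1.5))  # skew-normal density at z = 0 with mean 0, sd 1, xi 1.5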

Using the $n \times d$ design matrix Z (here $d = 1$), along with a specified learning rate lr, an initial parameter theta0, and the exponent parameter exponent (the $\alpha$ in the divergence above), we can efficiently compute the optimal parameter as follows:

sgdpd(f=f, Z=Z, lr=0.1, theta0=c(0,1,1), exponent=0.2)

No further operation is needed! Please also see our user manual for more details.
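For completeness, here is a minimal end-to-end sketch. It assumes fGarch's rsnorm sampler for simulating data, and it assumes that sgdpd returns the estimated parameter vector (see the user manual for the exact return value):

library(fGarch)  # provides dsnorm (density) and rsnorm (sampler)
library(sgdpd)

set.seed(123)

# simulate n = 1000 observations from a skew-normal; sgdpd expects an n x d matrix
n <- 1000
Z <- matrix(rsnorm(n, mean = 0, sd = 1, xi = 1.5), ncol = 1)

# parametric model: theta = (mean, sd, xi)
f <- function(z, theta) dsnorm(x = z, mean = theta[1], sd = theta[2], xi = theta[3])

# robust estimation by minimizing the DPD with exponent alpha = 0.2
theta_hat <- sgdpd(f = f, Z = Z, lr = 0.1, theta0 = c(0, 1, 1), exponent = 0.2)
theta_hat  # assumed here to be the estimated (mean, sd, xi); see the user manual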

Contact info.
