On this page I keep a running list of the computational and programming projects I work on from time to time.

When it comes to coding, my interests include the estimation of nonlinear moment condition models, approximate Bayesian inference, large-scale optimization problems, high-performance computing in econometrics and finance, and big data applications to time series econometrics. Below you will find a list of packages I have recently written. For details on each package, visit my GitHub and/or the package page. For code related to published papers, visit the publication section.

Julia Packages

Notice that some of the Julia packages are “registered”, meaning that you can install them from Julia by Pkg.add-ing them. Others are at an early stage and are not yet registered; to install these, use Pkg.clone with the repository URL.
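For example, a registered package can be added directly, while an unregistered one is cloned from its repository (the URL below is illustrative; the actual addresses are on my GitHub page):

## Registered package: install from the package registry
Pkg.add("CovarianceMatrices")

## Unregistered package: clone directly from its repository
## (illustrative URL; see my GitHub page for the actual address)
Pkg.clone("https://github.com/gragusa/Genoud.jl.git")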

CovarianceMatrices.jl


CovarianceMatrices is a package for estimating variance-covariance matrices in situations where the standard assumption of independence is violated. It provides heteroskedasticity-consistent (HC), heteroskedasticity- and autocorrelation-consistent (HAC), and cluster-robust (CRVE) estimators of the variance matrix. An interface to GLM.jl is provided, so these estimators integrate easily into the standard regression analysis flow; they are also easy to incorporate into new inferential procedures or applications.

using CovarianceMatrices, DataFrames, GLM

## Simulate an AR(1) process and estimate it by OLS
srand(1)
y = zeros(Float64, 100)
rho = 0.8
y[1] = randn()
for j = 2:100
  y[j] = rho * y[j-1] + randn()
end

data = DataFrame(y = y[2:100], yl = y[1:99])
## OLS as a GLM with Gaussian family and identity link
AR1  = fit(GeneralizedLinearModel, y~yl, data, Normal())

## HAC variance matrix: truncated kernel with optimal bandwidth
vcov(AR1, TruncatedKernel())
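
The same fitted model works with the other estimator families. As a sketch (the constructor name HC1 follows the package's HC0–HC5 naming; check the README for the exact API), a heteroskedasticity-consistent variance and the implied standard errors:

## Heteroskedasticity-consistent (HC1) variance matrix for the same fit
vcov(AR1, HC1())

## Standard errors from the diagonal of the robust variance matrix
sqrt.(diag(vcov(AR1, HC1())))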

Divergences.jl


Divergences is a Julia package that makes it easy to evaluate divergence measures between two vectors. The package also computes the gradient and the diagonal of the Hessian of several divergences. A related reference:

  • Komunjer, I.; Ragusa, G. “Existence and characterization of conditional density projections.” Econometric Theory 2016, 32, 947–987.

using Divergences

## Two random vectors, normalized to lie on the simplex
p = rand(20)
q = rand(20)
scale!(p, 1/sum(p))
scale!(q, 1/sum(q))

## Cressie-Read divergence with parameter -0.5
evaluate(CressieRead(-0.5), p, q)
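
The gradient and the Hessian diagonal mentioned above are computed from the same divergence object. The calls below are a sketch of that interface (I am assuming the exported names gradient and hessian; check the package documentation for the exact spelling):

## Gradient of the divergence evaluated at (p, q)
gradient(CressieRead(-0.5), p, q)

## Diagonal of the Hessian of the divergence at (p, q)
hessian(CressieRead(-0.5), p, q)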

Genoud.jl

GENetic Optimization Using Derivatives.

using Genoud
using Calculus

## Test function f8 from Yao, Liu, and Lin (1999):
## f(x1, x2) = -x1*sin(sqrt(|x1|)) - x2*sin(sqrt(|x2|))
function f8(xx)
    x, y = xx
    -x*sin(sqrt(abs(x))) - y*sin(sqrt(abs(y)))
end

## In-place gradient, computed numerically with Calculus.jl
function gr!(x, stor)
    stor[:] = Calculus.gradient(f8, x)
end

## Rectangular search domain for the two coordinates
dom = Genoud.Domain([-500. 500.;
                     -500. 500.])
out = Genoud.genoud(f8, [1.0, -1.0],
                    sizepop = 5000,
                    sense = :Min,
                    domains = dom)

[Figure: surface plot of $$f(x_1, x_2) = -\sum_{i=1}^2 x_i \sin(\sqrt{|x_i|}).$$] This function is minimized at $x_1^* \approx 420.968$ and $x_2^* \approx 420.968$; at the minimum, $f(x_1^*, x_2^*) \approx -837.966$.

Source: Yao, Xin, Yong Liu, and Guangming Lin. "Evolutionary programming made faster." IEEE Transactions on Evolutionary Computation 3, no. 2 (1999): 82-102.

Results of Genoud Optimization Algorithm
 * Minimizer: [420.96874636091724,420.9687462145861]
 * Minimum: -8.379658e+02
 * Pick generation: 20
 * Convergence: true
   * |f(x) - f(x')| / |f(x)| < 1.0e-03: true
   * Number of Generations: 27

CsminWel.jl


This package provides an interface to Chris Sims’ csminwel optimization code. The code borrows from DSGE.jl, but it is adapted to be compatible with Optim.jl’s API. When the derivative of the minimand is not supplied, either finite-difference or forward automatic differentiation derivatives are used automatically.

From the original author:

> Uses a quasi-Newton method with BFGS update of the estimated inverse hessian. It is robust against certain pathologies common on likelihood functions. It attempts to be robust against “cliffs”, i.e. hyperplane discontinuities, though it is not really clear whether what it does in such cases succeeds reliably.

Unlike the solvers in Optim.jl, Csminwel returns an estimate of the inverse Hessian at the solution, which may be used for standard-error calculations and/or to scale a Monte Carlo sampler.

#=
Maximum likelihood estimation of a logistic model
(implemented by minimizing the negative log-likelihood)
=#
using CsminWel
using StatsFuns
## Generate fake data (true coefficient = 0)
srand(1)
x = [ones(200) randn(200,4)]
y = [rand() < 0.5 ? 1. : 0. for j in 1:200]

## Negative log-likelihood (the objective to minimize)
function loglik(beta)
    xb = x*beta
    sum(-y.*xb + log1pexp.(xb))
end

## Gradient of the negative log-likelihood
function dloglik(beta)
    xb = x*beta
    px = logistic.(xb)
    -x'*(y.-px)
end

## Optim expects an in-place (mutating) gradient function
function fg!(beta, stor)
    stor[:] = dloglik(beta)
end

## With analytical derivative
res1 = optimize(loglik, fg!, zeros(5), BFGS())
res2 = optimize(loglik, fg!, zeros(5), Csminwel())

## With finite-difference derivative
res3 = optimize(loglik, zeros(5), Csminwel())

## With forward AD derivative
res4 = optimize(loglik, zeros(5), Csminwel(), OptimizationOptions(autodiff=true))

## Standard errors of the estimates from the approximate inverse Hessian
stderr = sqrt.(diag(res2.invH))

about

Giuseppe Ragusa teaches in the Department of Economics and Business and in the Business School at Luiss University. His research is mostly about econometrics.
