On this page I keep a running list of the computational/programming projects I work on from time to time.
When it comes to coding, my interests range from the estimation of nonlinear moment condition models and approximate Bayesian inference to large optimization problems, high-performance computing in econometrics and finance, and big-data applications in time series econometrics. Below you will find a list of packages I have recently written. For details on each package visit my GitHub and/or the package page. For code related to published papers, visit the publication section.
Notice that some of the Julia packages are “registered”, meaning that you can install them from Julia by
Pkg.add-ing them. Others are at an early stage and are not yet registered; to install these, use Pkg.clone with the repository URL.
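In pre-1.0 Julia (the syntax used throughout this page), the two cases look as follows; the clone URL shown is purely illustrative — use the one on each package's GitHub page:

```julia
## Registered packages install by name from the central METADATA repository
Pkg.add("CovarianceMatrices")
Pkg.add("Divergences")

## Unregistered packages are cloned directly from their repository URL
## (illustrative URL -- substitute the actual repository address)
Pkg.clone("https://github.com/gragusa/SomePackage.jl.git")
```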
CovarianceMatrices is a package for estimating variance-covariance matrices in situations where the standard assumptions of independence are violated. It provides heteroskedasticity-consistent (HC), heteroskedasticity-and-autocorrelation-consistent (HAC), and cluster-robust (CRVE) estimators of the variance matrix. An interface to
GLM.jl is provided so that these estimators can be integrated easily into the standard regression-analysis workflow. It is also easy to incorporate them into new inferential procedures or applications.
```julia
using CovarianceMatrices, DataFrames, GLM

## Simulate an AR(1) process and estimate it by OLS
srand(1)
y = zeros(Float64, 100)
rho = 0.8
y[1] = randn()
for j = 2:100
    y[j] = rho * y[j-1] + randn()
end
data = DataFrame(y = y[2:100], yl = y[1:99])
AR1 = fit(GeneralizedLinearModel, y ~ yl, data, Normal())

## HAC covariance: truncated kernel with optimal bandwidth
vcov(AR1, TruncatedKernel())
```
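To see what such an estimator computes, here is the HC0 sandwich for OLS written out by hand. This is a minimal sketch in current Julia syntax (plain linear algebra, not the package's internals):

```julia
using LinearAlgebra

## HC0 sandwich estimator for OLS, written out by hand:
## V = (X'X)^{-1} X' diag(e.^2) X (X'X)^{-1}
function hc0(X, y)
    beta = X \ y              # OLS coefficients
    e = y - X * beta          # residuals
    bread = inv(X' * X)
    meat = X' * (e.^2 .* X)   # X' diag(e.^2) X via broadcasting
    bread * meat * bread
end
```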
Divergences is a Julia package that makes it easy to evaluate divergence measures between two vectors. The package allows calculating the gradient and the diagonal of the Hessian of several divergences.
- Komunjer, I.; Ragusa, G. “Existence and characterization of conditional density projections.” Econometric Theory 2016, 32, 947–987.
```julia
using Divergences

## Two random vectors normalized to sum to one
p = rand(20)
q = rand(20)
scale!(p, 1/sum(p))
scale!(q, 1/sum(q))

## Cressie-Read divergence with parameter -0.5
evaluate(CressieRead(-.5), p, q)
```
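For reference, one common parameterization of the Cressie-Read family (assumed here; the package's exact normalization may differ, so check its documentation) can be written out directly:

```julia
## Cressie-Read divergence in one common parameterization (an assumption;
## Divergences.jl's normalization may differ):
## CR_g(p, q) = sum(p .* ((p ./ q).^g .- 1)) / (g*(g+1)) + sum(q .- p) / (g+1)
function cressie_read(g, p, q)
    sum(p .* ((p ./ q).^g .- 1)) / (g * (g + 1)) +
        sum(q .- p) / (g + 1)
end
```

For normalized p and q the second term vanishes; the divergence is zero when p == q and positive otherwise.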
GENetic Optimization Using Derivatives.
```julia
using Genoud
using Calculus

## Schwefel-type test function
function f8(xx)
    x, y = xx
    -x*sin(√abs(x)) - y*sin(√abs(y))
end

## Gradient computed numerically with Calculus.jl
function gr!(x, stor)
    stor[:] = Calculus.gradient(f8, x)
end

dom = Genoud.Domain([-500. 500.; -500. 500.])
out = Genoud.genoud(f8, [1.0, -1.0], sizepop = 5000,
                    sense = :Min, domains = dom)
```
```
Results of Genoud Optimization Algorithm
 * Minimizer: [420.96874636091724,420.9687462145861]
 * Minimum: -8.379658e+02
 * Pick generation: 20
 * Convergence: true
 *   |f(x) - f(x')| / |f(x)| < 1.0e-03: true
 * Number of Generations: 27
```
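The "using derivatives" in the name refers to the hybrid strategy behind genoud-style algorithms: evolve a population with genetic operators, then polish the incumbent with a gradient step. A minimal sketch of that idea in current Julia syntax (illustrative only — not Genoud.jl's actual internals, which also use crossover and tournament selection):

```julia
## Hybrid "genetic + derivative" minimization: mutation, selection, then a
## gradient polish step each generation (illustrative sketch)
function hybrid_minimize(f, grad, dim; npop = 50, ngen = 200, sigma = 0.5, eta = 0.1)
    best = randn(dim)
    for _ in 1:ngen
        cand = [best .+ sigma .* randn(dim) for _ in 1:npop]  # mutation around incumbent
        push!(cand, best)                                     # keep the incumbent
        best = cand[argmin(map(f, cand))]                     # selection
        best = best .- eta .* grad(best)                      # derivative "polish" step
    end
    best
end

## Toy objective with minimum at [3, 3]
f_toy(x) = sum((x .- 3).^2)
grad_toy(x) = 2 .* (x .- 3)
hybrid_minimize(f_toy, grad_toy, 2)
```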
This package provides an interface to Chris Sims’
csminwel optimization code. The code borrows from DSGE.jl, but it has been adapted to be compatible with Optim.jl’s API. When the derivative of the minimand is not supplied, either finite-difference or forward automatic-differentiation derivatives are used automatically.
From the original author: > Uses a quasi-Newton method with BFGS update of the estimated inverse hessian. It is robust against certain pathologies common on likelihood functions. It attempts to be robust against “cliffs”, i.e. hyperplane discontinuities, though it is not really clear whether what it does in such cases succeeds reliably.
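The BFGS update of the estimated inverse Hessian that the quote refers to has a closed form: with step s = x₊ − x, gradient change y = g₊ − g, and ρ = 1/(yᵀs), the update is H₊ = (I − ρ s yᵀ) H (I − ρ y sᵀ) + ρ s sᵀ. A minimal sketch in current Julia syntax:

```julia
using LinearAlgebra

## BFGS update of an inverse-Hessian approximation H, given the last step
## s = x_new - x and the gradient change y = g_new - g
function bfgs_update(H, s, y)
    rho = 1 / dot(y, s)
    V = I - rho * s * y'
    V * H * V' + rho * s * s'   # preserves symmetry; satisfies H_new * y == s
end
```

The update is built so the new approximation satisfies the secant condition H₊y = s while staying symmetric.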
Differently from the solvers in Optim.jl,
Csminwel returns an estimate of the inverse of the Hessian at the solution, which may be used to calculate standard errors and/or to scale a Monte Carlo sampler.
```julia
#= Maximizing the loglikelihood of a logistic model =#
using CsminWel
using StatsFuns

## Generate fake data (true coefficients = 0)
srand(1)
x = [ones(200) randn(200,4)]
y = [rand() < 0.5 ? 1. : 0. for j in 1:200]

## Negative log-likelihood (to be minimized)
function loglik(beta)
    xb = x*beta
    sum(-y.*xb + log1pexp.(xb))
end

## Its analytical derivative
function dloglik(beta)
    xb = x*beta
    px = logistic.(xb)
    -x'*(y.-px)
end

## Optim's API expects a mutating gradient function
function fg!(beta, stor)
    stor[:] = dloglik(beta)
end

## With the analytical derivative
res1 = optimize(loglik, fg!, zeros(5), BFGS())
res2 = optimize(loglik, fg!, zeros(5), Csminwel())

## With finite-difference derivatives
res3 = optimize(loglik, zeros(5), Csminwel())

## With forward automatic-differentiation derivatives
res4 = optimize(loglik, zeros(5), Csminwel(), OptimizationOptions(autodiff = true))

## Standard errors from the approximate inverse Hessian
stderr = sqrt.(diag(res2.invH))
```
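Beyond standard errors, the returned inverse Hessian can scale the proposal of a random-walk Metropolis sampler. A minimal sketch in current Julia syntax, assuming a generic log-posterior function `logpost` (the 2.38²/d scaling is the usual rule of thumb, not something CsminWel prescribes):

```julia
using LinearAlgebra

## Random-walk Metropolis whose Gaussian proposal covariance is c * invH
## (`logpost` is any function returning the log posterior density)
function rwmh(logpost, x0, invH; niter = 1000, c = 2.38^2 / length(x0))
    L = cholesky(Symmetric(c * invH)).L   # Cholesky factor of proposal covariance
    x, lp = x0, logpost(x0)
    draws = zeros(niter, length(x0))
    for i in 1:niter
        xprop = x .+ L * randn(length(x)) # correlated Gaussian proposal
        lpprop = logpost(xprop)
        if log(rand()) < lpprop - lp      # Metropolis accept/reject
            x, lp = xprop, lpprop
        end
        draws[i, :] = x
    end
    draws
end
```

With `invH` taken from `res2.invH` above, the proposal is shaped like the curvature of the likelihood at its maximum.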