Wednesday, August 22, 2012

Final Post

Now that GSOC is over, I'd like to provide a wrap-up of what I accomplished this summer.

In general, I have four main features that are tested and whose results match their MATLAB/R counterparts. They are:

Inference on descriptive statistics
Inference on linear regression
Inference and estimation of the accelerated failure time model
Inference and estimation of regression with instrumental variables


Although these features compute correctly, only the descriptive branch is ready to be merged into master, as I am still generating documentation and working with the developers to determine exactly how to incorporate the new features.

Here are a few more specifics:

Descriptive

This allows us to compute confidence intervals of descriptive statistics via EL.
For example:

import statsmodels.api as sm
import numpy as np

data = np.random.standard_normal(20)
EL = sm.emplike.DescStat(data)  # EL inference for univariate data
EL.ci_mean()  # confidence interval for the mean
EL.ci_var()   # confidence interval for the variance
EL.ci_kurt()  # confidence interval for the kurtosis
EL.ci_skew()  # confidence interval for the skewness


The above will return the empirical likelihood confidence intervals for the mean, variance, skewness and kurtosis of data.

Regression

data = sm.datasets.stackloss.load()
model = sm.OLS(data.endog, sm.add_constant(data.exog, prepend=True))
fitted = model.fit()
fitted.ci_params(0)  # EL confidence interval for the first parameter

This returns the confidence interval via EL for the first regression parameter. This is useful because it accounts for heteroskedasticity. Furthermore, we can add an option stochastic_exog=0 to change the assumption on the X matrix from stochastic to fixed. The resulting confidence interval will be smaller.
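For instance, a minimal sketch, assuming the keyword is passed straight to ci_params (the exact placement of the option may change before merge):

fitted.ci_params(0, stochastic_exog=0)  # hypothetical: treat the X matrix as fixed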

AFT model

data = sm.datasets.heart.load()  # heart is not yet in master; I added it
aftmod = sm.emplike.emplikeAFT(data.endog, data.exog, data.censors)
aftres = aftmod.fit()
aftres.params  # parameter estimates of the AFT model

This will give the parameters of an accelerated failure time model. I also implemented hypothesis tests and confidence intervals, but the confidence intervals are very computationally intensive since they involve a quadruply nested optimization, with one of the optimizations being a modified EM algorithm. This model was by far the most difficult of my entire GSOC and took over three weeks to implement.
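For illustration, a hypothesis test might look like the minimal sketch below; the method name test_beta and its signature are assumptions here, not necessarily the final API.

llr, pval = aftres.test_beta([4], [1])  # hypothetical: test H0 that the second parameter equals 4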

IV model

The basic framework for this is:

model = sm.emplike.IVReg(endog, exog, instruments)
fittedIV = model.fit()
fittedIV.params

This will return the params of the model by solving for an optimal probability distribution such that Z'(y - XB) = 0, where Z is a matrix of instruments, y is the endogenous variable, X is a matrix of covariates and B is the vector of model parameters.
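To make the estimating equation concrete, here is a small self-contained numpy sketch (simulated data, purely illustrative) showing that in the just-identified case the classical IV estimator B = (Z'X)^(-1)Z'y zeroes the sample moment Z'(y - XB); the EL version reweights observations, which matters in the overidentified case.

import numpy as np

np.random.seed(0)
nobs = 500
z = np.random.standard_normal(nobs)                       # instrument
u = np.random.standard_normal(nobs)                       # structural error
x = 0.8 * z + 0.5 * u + np.random.standard_normal(nobs)   # endogenous regressor
y = 2.0 * x + u                                           # structural equation

Z = np.column_stack((np.ones(nobs), z))                   # instruments plus constant
X = np.column_stack((np.ones(nobs), x))                   # covariates plus constant

B = np.linalg.solve(np.dot(Z.T, X), np.dot(Z.T, y))       # just-identified IV estimate
B                                                         # close to [0., 2.]
np.dot(Z.T, y - np.dot(X, B))                             # sample moment Z'(y - XB), ~0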


Like I said, only the descriptive branch is up to snuff, but I plan on continuing work with EL in statsmodels, bringing the other branches up to the same standard and working with the maintainers to incorporate the new features optimally.


As I mentioned in my last post, if any users have questions on how to conduct EL inference in statsmodels or would like a specific new feature implemented, please email me or ping the statsmodels mailing list and I will do my best to accommodate.

Special thanks again to everybody that helped this summer.  Skipper and Josef were more helpful than I could have anticipated.


To see some of the code, check out pull request #326, which contains a lot of the meat of the project, including the setup for optimizing an empirical likelihood ratio. Pull requests 427 and 406 are also EL code but are continually being edited to be ready to merge.

Sunday, August 19, 2012

Winding Down

GSOC comes to a close tonight at midnight, and I could not have asked for a more rewarding summer job.

As far as my proposal goes, I have implemented EL descriptive statistics, linear regression, censored regression and instrumental variable regression. Although they are not all ready to be merged into statsmodels master, the computation is there and all that is left is the documentation and cleaning, which I will keep working on.

I plan on staying active in the statsmodels community and hope to continue contributing empirical likelihood models, as well as other models, indefinitely.

Also, if anybody would like to see other specific empirical likelihood models implemented in statsmodels, please post to the mailing list or email me directly at justin.grana@gmail.com.

A special thanks to Skipper and Josef as well as Ralf for their help this summer.  Your advice was invaluable. 

Wednesday, August 8, 2012

Slight change of plans

Just to make it public, I have decided not to implement model selection criteria but instead to implement instrumental variable regression.  More to come soon.

Tuesday, August 7, 2012

Estimating an AFT model

Estimation of the parametric accelerated failure time (AFT) model is no walk in the park. Much of the research points to two main methods: the synthetic data approach of Koul et al. (1981) and the weighted least squares (WLS) regression of Stute (1993, 1996). These are often preferred to other estimators due to their computational simplicity.

HOWEVER, in all cases the method of Koul performs worse than that of Stute. This is because an AFT model requires an estimated CDF for the survival probabilities. In their theoretical paper, Koul et al. do not use the Kaplan-Meier estimator because, as they note, "in our proof we need to take the log", and a Kaplan-Meier estimate can potentially lead to taking the log of 0. However, if we replace Koul's estimate of the survival probabilities with the Kaplan-Meier estimate, the bias in the estimates essentially goes away. Furthermore, by assuming that max(dependent_variable) is uncensored, we can avoid the divide-by-zero dilemma. To add to the ambiguity, other papers that cite Koul claim that Koul uses the Kaplan-Meier estimate when he actually doesn't.
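For concreteness, the Kaplan-Meier product-limit estimator behind the log-of-zero issue is S(t) = prod_{t_i <= t} (1 - d_i / n_i), where d_i is the number of failures at time t_i and n_i is the number still at risk. A bare-bones toy sketch (my own illustration, not code from any of the branches) shows why the estimate hits exactly 0 when the largest observation is an uncensored failure:

import numpy as np

def kaplan_meier(times, events):
    # times: observed times; events: 1 = failure observed, 0 = censored
    order = np.argsort(times)
    times, events = times[order], events[order]
    nobs = len(times)
    surv = np.empty(nobs)
    s = 1.0
    for i in range(nobs):
        if events[i]:                    # only failures step the curve down
            s *= 1.0 - 1.0 / (nobs - i)  # the 1 - d/n factor at this time
        surv[i] = s
    return times, surv

# If the largest time is an uncensored failure, the final at-risk count is 1,
# so the estimate ends at exactly 0 and taking its log is undefined.
t, s = kaplan_meier(np.array([2., 5., 3., 8.]), np.array([1, 0, 1, 1]))
s  # the last value is 0.0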

My issue is that if I cannot find a theoretical justification for using the Kaplan-Meier estimate in Koul's method, I cannot use it in my code. That said, is there any theoretical justification for adapting Koul's synthetic data method by replacing the survival probabilities with the Kaplan-Meier estimate? I have looked but haven't been able to find any.

Wednesday, July 11, 2012

Midterm Review

To start things off, GSOC has been a 10/10 for me.  I enjoy the work I am doing and the downsides are very minimal.

Status
Per my proposal, my GSOC project is right on target.  Some features I have so far are empirical likelihood hypothesis tests and confidence intervals for:

the mean
the variance
the skewness
the kurtosis
the correlation coefficient
multivariate means
regression parameters
regression parameters in a model forced through the origin

I also added some neat plots that I didn't expect to do.

Regression through the origin also includes empirical likelihood estimates, since this is the only case (so far) where the empirical likelihood estimates differ from the traditional ones; a sketch follows below. Tomorrow (Thursday) I will be adding ANOVA. I will also be adding an option to choose whether the regressors are fixed or stochastic. The only difference is that with fixed regressors, moment conditions on the independent variables are added.
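As a minimal sketch, usage looks roughly like this (the class name ELOriginRegress and the exact call are illustrative and may change before merge):

originmod = sm.emplike.ELOriginRegress(endog, exog)  # no constant term in the model
originres = originmod.fit()
originres.params  # EL estimates, which differ from the OLS-through-origin estimates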

I have not worked with Bartlett corrections yet because I cannot find a test case. However, I am not ruling them out.

Narrative
In general, the problems I faced with my GSOC were not the ones I expected, and the ones I expected were actually easier than I thought. For example, I thought sifting through the theory to understand the correct computations would be difficult. In practice, it was straightforward to pick out the necessary building blocks from the textbooks.

Without question, the problem I didn't foresee that has given me the most trouble so far is working with Git. However, after creating a test repository of my own and playing around for a couple of hours (completely destroying the test "project" at times), I finally got the hang of it. If I have any recommendation for future GSOCers on statsmodels, it is to do the same. Like I said above, a couple hours of tinkering could save days of frustration.

What I can use help on
Although I am catching on, I still feel there are many style elements that pass me by. Maybe my documentation is not up to snuff. Maybe the unit tests aren't exactly clear. Anyway, I have been following the "no news is good news" motto, but I have a feeling that there can't be all that much good news. There is always room for improvement.

Statsmodels Team
100% helpful and timely in their responses. Thanks to Skipper, Josef, Ralf and a couple of others who have been actively providing feedback and answers to my questions.

If I were to offer any suggestions in terms of feedback, it would be to make the recommendations as concrete as possible.  This is mostly because I am still getting the hang of the lingo.  I noticed it has become a bit easier to respond to comments as the summer has progressed.

Second Half Changes
I mentioned this to Skipper, but I would like to make a couple of changes to my plan. I still want to start Monday with censored regression. Then, instead of moving into model selection, I would like to implement EL IV regression. Finally, with my leftover time, I would like to create a general EL framework where the user enters an array of estimating equations, a parameter vector and a hypothesis, and the program returns the results of the hypothesis test; a sketch of the idea follows below. This is essentially the setup of (though it will not be close to the same code as) Bruce Hansen's MATLAB/Gauss package. Then, if I have time, I will do EL model selection criteria. Any comments on this idea would be greatly appreciated.
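To sketch the idea, the heart of such a framework is the inner EL optimization: given estimating equations g(x_i, theta), maximize sum(log(n * w_i)) over weights w_i subject to sum(w_i) = 1 and sum(w_i * g(x_i, theta)) = 0. By duality this collapses to a small convex problem in a Lagrange multiplier. Below is a rough, self-contained toy illustration of that inner step (written for this post, not the code in my branches):

import numpy as np
from scipy.optimize import fmin

def el_test_stat(g):
    # g: (nobs, k) array of estimating equations evaluated at the
    # hypothesized parameter value.  Returns -2*log(ELR), which is
    # asymptotically chi-squared with k degrees of freedom.
    nobs, k = g.shape

    def dual(lam):
        arg = 1.0 + np.dot(g, lam)
        if np.any(arg <= 0):       # stay inside the domain of the log
            return np.inf
        return -np.log(arg).sum()  # convex in lam; minimize over lam

    lam_star = fmin(dual, np.zeros(k), disp=False)
    return -2.0 * dual(lam_star)   # = -2 * sum(log(n * w_i))

# Example: test H0: mean = 0.2 on standard normal draws
data = np.random.standard_normal(50)
g = (data - 0.2).reshape(-1, 1)    # estimating equation x - mu
el_test_stat(g)                    # compare to a chi2(1) critical value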



Some Code
I sent two files to the statsmodels mailing list that walk through what I have implemented so far. Please pardon spelling, grammar and typos, as I wanted to get these files out as soon as possible. Both can be run in their entirety, but I suggest running them line by line. The code is full of in-line comments to guide the user through. To run the code, make sure to install statsmodels from my emplike_reg branch. I am assuming matplotlib is installed. If you would like these files, feel free to email me at justin.grana@gmail.com.

I'd be happy to answer any questions on or off list and would be interested to hear of any suggested improvements.

Other notes
I am using Emacs as my editor. Another benefit of GSOC is learning how to navigate this nifty and powerful program. Getting a taste of Lisp isn't a bad side effect.

Almost all of my test cases are against MATLAB. For censored regression I will most likely be using R.


Sunday, July 8, 2012

Git holes

(title thanks to Skipper).

This week's post is a bit different but still relevant to GSOC. Coming into GSOC I had no idea what Git, GitHub or even version control meant. Now, I can still say I don't know what they are, but I can say I have learned how (not) to use them.

I think the key to learning Git is creating a repository, trying out as many Git commands as possible and seeing what they do first hand. This was especially helpful for me because I was unfamiliar with the lingo and often found the documentation very difficult to grasp as someone new to Git.

From my minimal experience, anybody trying to learn Git should create their own repo and make multiple branches, push commits, pull from remote repositories, cherry-pick, add and delete files, and get familiar with which commands touch a remote repository and which stay local. Two hours of tinkering provided more insight for me than days of reading documentation.

Monday, July 2, 2012

Interesting problem

I've noticed an interesting problem with empirical likelihood: although Owen recognizes it as one of EL's shortfalls, I am now facing it first hand. One of the advantages of EL is that it places no distributional assumptions on the data or the parameters, so EL is well suited for problems with nonstandard distributions. However, EL inference can only be conducted if the hypothesized value lies in the convex hull of the data. Without going into too much detail, the more skewed the data, the less likely it is that a hypothesized parameter value lies in the convex hull. When the hypothesized parameter value is not in the convex hull, the optimization becomes infeasible. This leads to an interesting conclusion: the strength of EL is also one of its major downfalls.
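A concrete illustration: for the mean of univariate data the convex hull is simply [min(data), max(data)], so no reweighting of the sample can produce a hypothesized mean outside that range. A minimal sketch, assuming a test_mean method on DescStat (illustrative; the exact name may differ):

import numpy as np
import statsmodels.api as sm

data = np.random.standard_exponential(30)  # heavily skewed data
el = sm.emplike.DescStat(data)
el.test_mean(data.mean())      # feasible: the hypothesized mean is inside [min, max]
el.test_mean(data.max() + 1.)  # infeasible: no weights on the data can give this mean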

On another note, next week is midterm evaluations and I think I will be looking to alter my second half plans.  I would like to include EL IV regression.  More to come on that...