Sitemap

A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.

Pages

Posts

RTAA data released

less than 1 minute read

I have released the raw data that Joanne Gowa and I collected for our RTAA paper. The data, available on GitHub, includes U.S. import data by country/territory and commodity for the years 1934 to 1946. It also includes tariff rates for each commodity and an indicator for whether the commodity line was included under an RTAA. The data is available in Excel format as we entered it. We also have Stata and csv files of the reshaped import data available.

Dropbox command updated

less than 1 minute read

Version 2 of my dropbox command is out. This tool locates a user's Dropbox folder automatically and adds an option to search a secondary drive before moving to the primary one. Installation instructions are at https://github.com/arpie71/dropbox.

Datasets

Asian PTAs data

Soo Yeon Kim and I coded the provisions of Asian PTAs in order to establish a credibility measure. We used the data in our Journal of East Asian Studies and World Trade Review articles. The raw data is available here.

Interwar trade data

Joanne Gowa and I collected bilateral trade data for the years before 1950, mainly from trade yearbooks and statistical abstracts for different countries and territories. As a result, the trade data is almost always reported in the domestic currency. We then collected exchange rate data from a variety of sources to convert the trade data into US dollars. The data is available for download on GitHub.
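The conversion step described above can be sketched in a few lines of Python. The country names, trade values, and exchange rates below are invented placeholders, not figures from the actual dataset:

```python
# Toy sketch of converting trade values reported in domestic currency
# into US dollars using a year-specific exchange rate.
# All numbers here are hypothetical.

# (reporter, partner, year, value in domestic currency)
trade_local = [
    ("France", "Germany", 1936, 1_250_000_000),
    ("Brazil", "United Kingdom", 1938, 890_000_000),
]

# Hypothetical exchange rates: units of domestic currency per US dollar
usd_rate = {
    ("France", 1936): 21.4,
    ("Brazil", 1938): 17.6,
}

def to_usd(reporter, partner, year, value_local):
    """Convert one bilateral trade value from domestic currency to USD."""
    rate = usd_rate[(reporter, year)]
    return value_local / rate

for reporter, partner, year, value in trade_local:
    print(reporter, partner, year, round(to_usd(reporter, partner, year, value)))
```

The real work, of course, is in assembling a reliable exchange-rate series for every reporter-year, which is why the data were drawn from multiple sources.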

Central Bank Independence data

Cristina Bodea and I updated our central bank independence data, covering 144 countries for the years 1970 through 2020. Compared to Bodea and Hicks 2015, we added 66 countries and 12 more years of data. We are also releasing the scores for all of the components of the CBI index. The data is available for download on GitHub.

Commodity trade data

A few years ago, I started collecting commodity-level trade data for the period 1946 to 1961 for as many countries as possible, to fill the gap between the end of World War II and the beginning of COMTRADE's collection. Between one thing and another, the project stalled. I am uploading the data I did collect in the hope that someone might find a use for it. The data is available on GitHub.

Publications

Causal Mediation Analysis

Raymond Hicks & Dustin Tingley. 2011. "Causal Mediation Analysis." Stata Journal 11(4): 605-619. https://doi.org/10.1177/1536867X1201100407

Stata and Dropbox

Raymond Hicks. 2014. "Stata and Dropbox." Stata Journal 14(3): 693-696. https://doi.org/10.1177/1536867X1401400313

Methodological issues

Raymond Hicks. 2015. "Methodological issues." In Lisa L. Martin (ed.), The Oxford Handbook of the Political Economy of International Trade. Oxford: Oxford University Press, 77–98.

Central Bank Independence Before and After the Crisis

Jakob de Haan, Cristina Bodea, Sylvester C.W. Eijffinger, & Raymond Hicks. 2018. "Central Bank Independence Before and After the Crisis." Comparative Economic Studies 60: 183-202. https://doi.org/10.1057/s41294-017-0050-4

Trading With Frenemies: How Economic Diplomacy Affects Exports

Using a large collection of U.S. State Department cables from the 1970s that concern export promotion, we find strong evidence that promotion efforts had the largest effect when economic trade barriers were high and in countries that were politically dissimilar to the U.S.

Don Casler, Matthew Connelly, & Raymond Hicks. 2024. "Trading with Frenemies: How Economic Diplomacy Affects Exports." International Studies Quarterly 68(3): sqae098.

Software

Stata utilities

I have written several utility commands for Stata. While I was at Princeton helping political scientists merge different datasets, I got tired of trying to keep track of different country coding schemes. So I wrote ccode to translate between different coding schemes: IMF, World Bank, Correlates of War, Banks Cross-National Time Series, and country name. I wrote ctyfind to look up a country name based on one of the classification codes, or vice versa. For scholars who use Dropbox, I wrote dropbox, which looks for the Dropbox directory on a computer. Because different individuals keep Dropbox in different locations, the command was designed to ease collaboration on do-files.
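The idea behind ccode and ctyfind is essentially a crosswalk table between coding schemes. A minimal Python sketch of that idea follows; the three rows use real COW and IMF country codes, but the table is a toy stand-in for the commands' full country list:

```python
# Toy crosswalk between country coding schemes, illustrating what
# ccode/ctyfind do in Stata. Only three countries are included here.

CROSSWALK = [
    # (country name, COW code, IMF code, World Bank / ISO3 code)
    ("United States",  2,   111, "USA"),
    ("United Kingdom", 200, 112, "GBR"),
    ("Japan",          740, 158, "JPN"),
]

FIELDS = {"name": 0, "cow": 1, "imf": 2, "wb": 3}

def translate(value, source, target):
    """Look up `value` under the `source` scheme; return the `target` code."""
    s, t = FIELDS[source], FIELDS[target]
    for row in CROSSWALK:
        if row[s] == value:
            return row[t]
    return None  # no match in the crosswalk

print(translate(2, "cow", "imf"))      # 111
print(translate("GBR", "wb", "name"))  # United Kingdom
```

The Stata commands wrap exactly this kind of lookup in a merge-friendly interface, which is what makes combining datasets with different country identifiers painless.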

To install any of the packages, place the files in the c subdirectory of Stata's PLUS ado directory. To find that location, type sysdir in Stata and look for the PLUS entry. You might have to create the c directory.

Causal Mediation Analysis

Mediation for Stata estimates the role of particular causal mechanisms that mediate the relationship between treatment and outcome variables. The command calculates causal mediation effects and direct effects for models with continuous or binary dependent variables using the methods presented in Imai et al. 2010. It also performs sensitivity analyses for the mediation effects, which are necessary because the mediating variable is not randomly assigned. For continuous mediator and outcome variables, our package replaces earlier approaches such as the Baron-Kenny method and the Sobel test: it produces identical results but recasts them in a causal inference framework and adds sensitivity analyses for the key identification assumption. For models with binary mediators or outcomes, the package correctly calculates mediation effects, taking into account the use of non-linear models such as probit.

The package is available from SSC and can be installed in Stata by typing ssc install mediation.
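For the all-linear case mentioned above, the causal mediation effect reduces to the familiar product of coefficients. The sketch below simulates data and computes that quantity in Python; it is an illustration of the linear special case only, not the package's simulation-based machinery:

```python
# Linear mediation sketch: with continuous mediator and outcome, the
# average causal mediation effect (ACME) equals a*b, the product of the
# treatment->mediator and mediator->outcome coefficients (Baron-Kenny).
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
t = rng.integers(0, 2, n).astype(float)       # binary treatment
m = 0.5 * t + rng.normal(size=n)              # mediator model: true a = 0.5
y = 0.8 * m + 0.3 * t + rng.normal(size=n)    # outcome model: true b = 0.8, direct = 0.3

# Regress mediator on treatment to get a
X_m = np.column_stack([np.ones(n), t])
a = np.linalg.lstsq(X_m, m, rcond=None)[0][1]

# Regress outcome on treatment and mediator to get direct effect and b
X_y = np.column_stack([np.ones(n), t, m])
direct, b = np.linalg.lstsq(X_y, y, rcond=None)[0][1:]

acme = a * b  # average causal mediation effect in the linear case
print(round(acme, 2), round(direct, 2))
```

What medeff adds on top of this is the causal-inference framing for non-linear models and the quasi-Bayesian simulation of uncertainty, plus medsens for sensitivity to the sequential ignorability assumption.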

HLSTM

At History Lab, we used the Structural Topic Model package for R to run topic modeling on all our collections. I wrote some functions and created an R package to make it easier to run the analysis across all our corpora.

Note that these functions are largely wrappers for functions already in the STM package.

NER Pipeline

One of the big projects I worked on at History Lab was Named Entity Recognition and Linking on our millions of documents. I set up a pipeline to train a spaCy model in Python on the particular characteristics of our documents and then created a knowledge base to identify specific entities within the documents and link them to their Wikidata IDs. For entities with the same name, I wrote a script that distinguishes between them based on the other entities mentioned in the document.

The repository with the different scripts is available on GitHub.
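The disambiguation idea — pick the candidate whose known associates best overlap the other entities in the document — can be sketched as below. This is a toy with invented candidate IDs and associate sets, not the History Lab code:

```python
# Toy entity disambiguation: when one name maps to several knowledge-base
# candidates, score each candidate by how many of its known associated
# entities also appear in the document, and pick the best-scoring one.

# Hypothetical knowledge base: candidate IDs for one ambiguous name,
# each with a set of entities it commonly co-occurs with.
CANDIDATES = {
    "George Bush": {
        "candidate-41": {"CIA", "Ronald Reagan", "Gulf War"},
        "candidate-43": {"Iraq War", "Dick Cheney", "Texas"},
    }
}

def disambiguate(name, doc_entities):
    """Return the candidate ID whose associates overlap doc_entities most."""
    scores = {
        cand_id: len(associates & doc_entities)
        for cand_id, associates in CANDIDATES[name].items()
    }
    return max(scores, key=scores.get)

doc = {"Dick Cheney", "Iraq War", "Baghdad"}
print(disambiguate("George Bush", doc))  # candidate-43
```

In the real pipeline the candidates come from the knowledge base built over Wikidata, and the co-occurring entities are the other mentions spaCy extracts from the same document.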

Talks

Teaching


Working