Excel and duplicates in a dataset

In the past I posted items on duplicate data in datasets (from Compustat databases) and how to detect it using programs like Stata. For smaller datasets Microsoft Excel can also be used to investigate your data. At some point, when merging datasets from multiple sources, it may happen that you end up with duplicate data. Using Excel functions like a pivot table (draaitabel) and VLOOKUP (verticaal zoeken), duplicate data can be detected as follows:

1) First you need to tell Excel that your dataset is a designated table using the menu option: Insert > Table

2) When you choose this option you need to indicate the range and whether there are any column headers

3) The same Insert menu tab (invoegen) has the option to create a summary pivot table (samenvatten met draaitabel). Select this option in the top left corner:

4) Create the PivotTable in a new (or empty) sheet:

5) The empty Pivot Table will be shown as follows:

6) In this example I drag the fields I want to check for duplicates (records/observations) down to Rows (Rijen). Into the Values box I drag an arbitrary field (in this case indfmt). On the left side of the screen the result will be presented:

7) In this example I click the field gvkey and choose the option to change the field settings

8) At Subtotals & Filters mark the second option (None)

9) In the tab Format & Print (Indeling & Afdrukken) mark the option: item labels in table format (Itemlabels in tabelvorm weergeven). Also make sure to mark the option to repeat item labels (Itemlabels herhalen):

10) The result should look as follows:

11) Copy the list to a new sheet and use the CONCATENATE function (Tekst.samenvoegen) to create a combination of GVKEY and year:

12) Create a similar link list with the Concatenate function in the original Compustat datasheet

13) Now use the VLOOKUP function in the Compustat datasheet to look up the link in the sheet with the result of the PivotTable. Make sure that the 4th argument of the VLOOKUP function says FALSE (Onwaar), as follows:

14) Finally, using the filter option you can find out if there are duplicates by selecting everything for which the count is higher than 1. The tricky thing will be deciding what to do with the result.
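The pivot-and-VLOOKUP routine above can also be sketched in code. Below is a rough pandas equivalent; it is not part of the original Excel workflow, and the toy data and column names are assumptions based on a typical Compustat extract:

```python
import pandas as pd

# Hypothetical Compustat-style extract; values are invented for illustration.
df = pd.DataFrame({
    "gvkey": [1001, 1001, 1002, 1002, 1002],
    "fyear": [2010, 2010, 2010, 2011, 2011],
    "indfmt": ["INDL", "FS", "INDL", "INDL", "INDL"],
})

# Equivalent of the PivotTable count: occurrences per gvkey-year combination.
counts = df.groupby(["gvkey", "fyear"]).size().rename("count").reset_index()

# Equivalent of the VLOOKUP step: attach each row's count, then keep rows
# whose gvkey-year combination occurs more than once.
flagged = df.merge(counts, on=["gvkey", "fyear"])
duplicates = flagged[flagged["count"] > 1]
```

As in step 14, the hard part is not finding the rows with a count above 1 but deciding which of them to keep.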

N.B.: If a dataset has over 100,000 observations the process described above will take some time, as Microsoft Excel will require significant processing power from the computer. For larger datasets I recommend using Stata.


Using Stata to count segments

At the end of March I was asked how to use Compustat North America segments data to get aggregated counts of business segments or geographic segments. The number of business segments was to be used as an indicator of diversity: how many different types of activity a company includes in its operations. The number of geographic segments was to be used as an indicator of how geographically widespread these activities are for each company.

Specific important commands that are needed:

generate year=year(datadate) > using this command you get a year variable which can be used to count instances of segments. This is only needed if no usable year variable is already available (like fiscal year / fyear).

drop > using this command you delete all non-essential variables from the dataset

sort gvkey year > this command sorts the dataset first on gvkey (= Global Company Key, which uniquely identifies a company in any Compustat database) and then by year. Note that Stata's order command only rearranges the columns; sort is what arranges the observations, and it is required before the by prefix used below.

duplicates drop > this command deletes any possible duplicate annual data. This is important as the count only involves unique segments

by gvkey year: egen segmentcount = count(sid) > this command generates a new variable (segmentcount) and gives it the value of the count of the segment id codes (SID) for each company and individual year.

To later combine the business segments count dataset with the geographic count dataset, a unique ID (UID) is created so the datasets can be merged back into a single dataset.

Overall the script (.do file) I created does three things:
1) It creates a new dataset with business segment counts
2) It creates a dataset with geographic segment counts
3) It merges both newly created datasets into a single dataset
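For readers who want to check the counting logic outside Stata, the core commands can be approximated in pandas. This is a sketch on an invented toy extract, not the actual .do file:

```python
import pandas as pd

# Toy segments extract; gvkey, datadate, and sid follow Compustat naming,
# but the rows are made up (company 1001 has a duplicate segment record).
seg = pd.DataFrame({
    "gvkey": [1001, 1001, 1001, 1002],
    "datadate": pd.to_datetime(["2010-12-31"] * 4),
    "sid": [1, 2, 2, 1],
})

seg["year"] = seg["datadate"].dt.year                    # generate year=year(datadate)
seg = seg.drop_duplicates()                              # duplicates drop
seg = seg.sort_values(["gvkey", "year"])                 # sort gvkey year
# by gvkey year: egen segmentcount = count(sid)
seg["segmentcount"] = seg.groupby(["gvkey", "year"])["sid"].transform("count")
```

After removing the duplicate record, company 1001 gets a segment count of 2 and company 1002 a count of 1, mirroring what the egen count produces in Stata.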

Example script screenshot:

The example dataset with .do script file can be downloaded here.

Example result screenshot:

N.B.: In the .do file the location for all files is the U: drive. You may need to change the drive letter in the original script to (for instance) C: or H: to get it to run. Make sure both the script file and the Stata dataset are in the same location.


Total Q and Tobin’s Q

A new Compustat data source has become available to people who have access to Compustat databases through WRDS: Peters and Taylor Total Q. This new source provides data on firms’ “Total q” ratio and the replacement cost of firms’ intangible capital. Total q is an improved Tobin’s q proxy that includes intangible capital in the denominator, i.e., in the replacement cost of firms’ capital. Peters and Taylor estimate the replacement cost of firms’ intangible capital by accumulating past investments in R&D (Research and Development) and SG&A (Selling, General and Administrative Expenses). Background paper can be downloaded here.

Overview of the variables and names in the database:

datadate = Date
fyear = Fiscal Year
gvkey = GVKEY / Compustat unique company code
K_int = Firm’s intangible capital estimated replacement cost
K_int_Know = Firm’s knowledge capital replacement cost
K_int_offBS = Portion of K_int that doesn’t appear on firm’s balance sheet
K_int_Org = Firm’s organization capital replacement cost
q_tot = Total q

The database mainly offers data for companies included in the Compustat North America database. Most of these companies are American. Coverage runs from 1950 to the present.


Working with Compustat Execucomp tenure data

Not too long ago I had a question from someone who was having trouble working with data from Compustat Execucomp. He wanted the yearly tenure for a specific group of people holding the position of Chief Financial Officer. The research spanned the period 2009-2014 (post-crisis). The data that was downloaded looked something like this:

Step 1: Data cleaning
One of the first steps to take in this case is to make sure you have the right kind of data to work with. In this case columns H and I needed to be checked and cleaned. Column I shows the date when a person left the company as CFO. Here we see items like n/a where the date is unavailable, meaning that the person still works as CFO for the company. We first need to replace such values with 2014, the last year of our research, as we are looking for tenure within the time frame 2009-2014. Any years after 2014 can also be replaced with 2014. You can use the search and replace function in Excel to do this step by step. Afterwards you can use the Filter option in Excel to check for strange data or outliers. In principle you have to check both the start year and the left year columns to be sure there are no outliers (strange values).

Column H shows the year when a person joined the company. I am assuming that this was also the start year for each person as CFO at the company (I have not personally checked this). You can see in the screenshot that not every year is treated as a numerical value: Excel shows little green triangles in the cells where it thinks the data is text. To ensure that a year is treated as a numerical value you can add a new column and use a simple trick: divide the cell by 1. See screenshot column J for the original data and column K for the new years. In the top left corner you see the "formula" you can copy downwards for all years.
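The cleaning in this step can also be expressed in code. Below is a small pandas sketch of the same idea; the column names (joined, left) and values are made up for illustration, not taken from the original file:

```python
import pandas as pd

# Toy Execucomp-style columns stored as text, as Excel often receives them.
df = pd.DataFrame({
    "joined": ["2005", "2011", "2013"],
    "left":   ["n/a", "2016", "2012"],
})

# Replace "n/a" (person still in office) with the last research year, 2014,
# and coerce the text years to numbers (the divide-by-1 trick in Excel).
df["left"] = pd.to_numeric(df["left"].replace("n/a", 2014))
df["left"] = df["left"].clip(upper=2014)   # any year after 2014 becomes 2014
df["joined"] = pd.to_numeric(df["joined"])
```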

Step 2: Calculate tenure for each year
In the example for this blog I only calculated the tenure for the final years of the research time frame (2013 and 2014). You can figure out the formulas for the other years. First I calculated the CFO tenure for 2014. In this example I assume that if the start year matches the last year, someone has worked in this capacity for less than a full year, making the tenure less than 1 and thus zero. In this case I get the right number of tenure years by subtracting the start year from the last year (= research year 2014). See example:

Now for the tenure of the previous year (2013) the IF statement comes in handy. The full formula is: =IF(K3<=(L3-1);(P3-1);FALSE)

K3 = start year tenure within the research window (or before)
L3 = last year for the research window (2014)
P3 = tenure for the final year of the research window (= 2014)

The formula in essence does the following: if the start year matches the previous year (in this case 2013) or is smaller, then the tenure is that of 2014 minus 1. If not, the word FALSE is put there. This last condition prevents outliers from causing problems. Screenshot:

The same formula can also be used for earlier years. All you need to do is adjust the numbers in the formula. 2012 example: =IF(K3<=(L3-2);(P3-2);FALSE)
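The same tenure logic, sketched in pandas for illustration (the column names start and last are assumptions mirroring columns K and L in the sheet):

```python
import pandas as pd

# start = start year of the tenure (column K), last = end of window, 2014 (column L)
df = pd.DataFrame({"start": [2005, 2013, 2014], "last": [2014, 2014, 2014]})

# Tenure for 2014: last year minus start year (0 if they match)
df["tenure_2014"] = df["last"] - df["start"]

# =IF(K3<=(L3-1);(P3-1);FALSE) translated: the 2013 tenure is the 2014
# tenure minus 1, valid only if the CFO started in 2013 or earlier
# (invalid cases become NaN here, where Excel shows FALSE).
df["tenure_2013"] = (df["tenure_2014"] - 1).where(df["start"] <= df["last"] - 1)
```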

Step 3: Figure out the relevant years
This step is not strictly necessary, as the filter option in Excel is already available to make a selection by year, but you would then have to add the tenure year manually for each year after copying the relevant tenure data by year (to a new sheet).
The formulas in step two will provide a tenure of 0 (or more) as long as the end year for the tenure (within the time frame 2009-2014) is equal to (or higher than) the start year for the tenure (within the time frame or earlier). To know the tenure by year we create columns that show which tenure applies to which year. That allows us to use a filter in Excel to more easily get the relevant data where the tenure is more than zero. I created columns N and O to get the tenure years for 2014 and 2013. The formula I used for 2014 is: =IF((P4>0);RIGHT($N$1;4);FALSE)
where P4 = calculated number of tenure years for 2014. I have put the year as the last 4 characters of the column name in the first cell at the top of the column, making it possible to use $N$1.

For 2013 all you need to change is the cell P4 into Q4 and the header $N$1 into $O$1. You also need to put the year in the name of the variable at the top of the column. Subsequent years work the same way.

Step 4: Filter the data for the relevant years
As the final step you can now use the standard filter option to copy the relevant data by year to a new sheet.

I would then also remove irrelevant data for other years which do not apply to the specific year I have filtered for. The end result would look something like this:


Shortselling data – Supplemental

In the previous post I mentioned two databases that have data on short interest in shares. The Compustat part database offered data, but a search through the WRDS platform could crash because of an error. The error occurred when you selected items (at search step 3) like CIK codes, etc. This problem has now been fixed and the data can be downloaded as usual with all selected variables.

I have also had another look at the searches in Datastream for Short Interest data and I noticed the following: you need to be careful when selecting a download frequency. Usually the SID data is made available every few months or once a year, and the reporting frequency of the data has been changing over the last year. If you choose the Yearly frequency in Datastream you will not get data for every year. Only when you select the Monthly frequency do you see data appear for each year.


Shortselling & companies shares

The past few years shortselling (or "going short") has been in both the regular news channels as well as the financial news. This is caused by the bad reputation attached to shortselling, as it usually involves the expectation that a company could be in trouble. For more background information you can check out the Investopedia item on the subject and the scientific literature.

Many sources and databases have been created over the past 5/6 years that provide more data on shortselling. In a previous blog post I mentioned the registers that have been created by several nations as part of an EU directive by the ESMA (European Securities and Markets Authority). The national registers list some of the reported short positions.

There are two more sources (licensed databases) that have some data on shortselling with regard to American companies: Compustat North America and Datastream.

Datastream has limited data coverage on Short Interest: only from approximately 2010 onward. The datatype is SID. Data is only available (at the moment) for companies listed on the US markets NYSE, AMEX, ARCA, NASDAQ and OTC.
You have to be careful when using this variable: short interest positions are compiled by the exchanges twice each month, at mid-month and end-month. The data is published approximately 10-12 days after the compilation date and loaded onto Datastream when it is received. The stored value dates are the compilation dates. The reporting frequency is not consistently twice a month.
The SID variable is stored in thousands of shares, with three decimal places. The default display is unpadded and the values are not adjusted retrospectively for corporate actions. So if ratios between short interest and volume are being calculated, unadjusted volume (datatype UVO) should be used.
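Since SID is stored in thousands of shares, a short-interest ratio needs a units conversion before dividing by volume. A minimal sketch (the numbers are invented, not real Datastream values):

```python
# SID comes in thousands of shares; UVO is unadjusted volume in shares.
sid_thousands = 1234.567      # hypothetical SID value from Datastream
uvo_shares = 50_000_000       # hypothetical unadjusted volume (datatype UVO)

# Convert SID to shares first, then compute the short interest ratio.
short_interest_shares = sid_thousands * 1000
short_ratio = short_interest_shares / uvo_shares
```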
Example screenshot of a search:

Compustat North America has a specific part database called "Supplemental Short Interest File". This provides data on shorted stocks for companies listed on the New York Stock Exchange, American Stock Exchange, and NASDAQ. The data is provided to Capital IQ (Compustat) by FT Interactive and goes back to approximately 1997. The short interest data for ADRs is based on the ADR share. Three separate variables are available in the Compustat database:

  • SHORTINT — Shares Held Short as of Settlement Date
  • SHORTINTADJ — Shares Held Short as of Settlement Date – Adjusted
  • SPLITADJDATE — Month End Split Adjustment Date

Example screenshot:

N.B.: At the moment there appears to be an error in the WRDS platform: when I select any of the following variables at step 3 of the search screen of the Compustat part database, the search crashes and gives an error: Ticker Symbol, CUSIP, SIC Code, NAICS Code, CIK Number. The error message reads: "Error: Column cik could not be found in the table/view identified with the correlation name A." Just do not select these items and the search will run. You can use the GVKEY codes to get the other variables from another Compustat database.


Stata & missing or duplicate data

When you work with large datasets or big data, it may happen that after working with the data for some time you need to take a good look at what has happened to it, especially if you work with combinations of datasets and/or work on it with several people. Another instance: when you have received a dataset from a researcher or organization and need to remove superfluous data that is not relevant to your own research.

1) Investigate the data
There are a few simple commands in Stata that provide a good overview:

  • desc or describe = this command provides a brief summary of the entire dataset
  • summ or summarize = another fine command that gives a quick overview of all the variables with information on: number of observations, the mean, standard deviation, and the lowest and highest values (min & max)
  • tab or tabulate = a good way to cross-reference several items and see whether there are any obvious outliers or patterns in the data

These and many more commands, or combinations of commands, allow you to inspect and assess the data.
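For comparison, roughly equivalent inspection steps exist in pandas. This is a sketch on a tiny made-up dataset, not part of the original Stata workflow:

```python
import pandas as pd

# Tiny invented dataset for illustration
df = pd.DataFrame({"gvkey": [1001, 1002, 1002], "at": [100.0, 200.0, 300.0]})

df.info()                                  # roughly desc: variables and types
summary = df.describe()                    # roughly summ: count, mean, std, min, max
xtab = pd.crosstab(df["gvkey"], df["at"])  # roughly tab: cross-tabulate two variables
```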

2) Missing data

  • Using the summ command it was easy to see that some fields had no data. In this case it may be a good idea to delete them as they serve no purpose here. You can delete a variable/field by typing drop variable. For example: drop CIKNew. A range of variables next to each other can also be dropped with a single command. For this example: drop indfmt-conm. There are many more options to delete entire variables/fields from a dataset.
  • Another way to clean data can be applied if you require only those observations/records that (for crucial variables) do not have missing values/data. Deleting observations can be done using the missing value command: drop if mi(variable). For example: drop if mi(Totaldebt). The Stata result screen will show the result of this action: number of observations deleted.
  • Deleting missing values is, however, not always straightforward. Stata shows missing values as dots if you view a dataset with the browse command. In some datasets, however, missing values may (partially) be represented by another value in some observations. If this is the case it is a good idea to replace these values first to allow for easier editing/deletion. If in your dataset the number zero indicates the same thing as a missing value (in some records), you can use mvdecode to replace it with a dot (= how Stata usually represents missing values). The command would look like: mvdecode variable, mv(0=.). Afterwards you can remove all missing values the usual way with drop.
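The missing-data steps above can be mirrored in pandas. A sketch, reusing the variable names from the examples on invented data:

```python
import pandas as pd
import numpy as np

# Invented rows: CIKNew is entirely empty, Totaldebt uses 0 as a missing marker.
df = pd.DataFrame({
    "CIKNew": [None, None, None],
    "Totaldebt": [10.0, 0.0, np.nan],
    "conm": ["A CO", "B CO", "C CO"],
})

df = df.drop(columns=["CIKNew"])                      # drop CIKNew
df["Totaldebt"] = df["Totaldebt"].replace(0, np.nan)  # mvdecode Totaldebt, mv(0=.)
df = df.dropna(subset=["Totaldebt"])                  # drop if mi(Totaldebt)
```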

3) Removing duplicate data
When you are using multiple datasets and have combined them, you could have some duplicate observations. Using data from some specific databases may also give you unintentional duplicate data. In Compustat you run the risk of duplicates if, for instance, you only need data for industrial-type companies but, when doing the search in the Fundamentals Annual database, you forget to unmark the FS option in the screening options at Step 2 in WRDS. Some companies have more than one statement in Compustat for the same fiscal year, and you will get both FS and IND type/format statements.
The Stata command to remove duplicates should be chosen carefully. I usually combine a unique ID code with a specific event year or date. For instance: duplicates drop CIK year, force


  • duplicates drop removes duplicates
  • in this example duplicates are identified by the combination of the variable CIK (ID code = Central Index Key) with the variable year
  • duplicates will be removed without warning by including the last bit: , force

Personally I think removing duplicates without checking first may not always be the smart thing to do. If you are working with a large dataset it may be a good idea to first tag possible duplicates and have a look before removing them. The command to tag duplicates is: duplicates tag, gen(newvariable). This command checks the whole dataset, with all variables for all observations, for duplicates and stores the result as a number in the new variable newvariable.
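A comparable tag-then-drop routine can be sketched in pandas (invented toy data; the column names are illustrative):

```python
import pandas as pd

# Two rows share the same CIK-year combination but differ in another field.
df = pd.DataFrame({"CIK": [1, 1, 2], "year": [2010, 2010, 2010], "debt": [5, 7, 9]})

# Tag first: how many OTHER rows share each CIK-year (inspect before dropping)
df["dup"] = df.groupby(["CIK", "year"])["year"].transform("size") - 1

# Rough equivalent of duplicates drop CIK year, force: keep one row per CIK-year
deduped = df.drop_duplicates(subset=["CIK", "year"])
```

Note that, like the force option in Stata, drop_duplicates keeps the first row it encounters and silently discards the rest, which is exactly why tagging and inspecting first is worthwhile.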

Another reason for removing observations may have to do with the number of necessary observations per entity in a dataset. In some cases an analysis requires a minimum number of observations/records to be relevant. If there are too few observations you may again remove them; in this case it can be done using a count on the entity (for example a company identifier like ISIN, CIK, or GVKEY). You do this as follows:

  • Sort the dataset on the ID that will be counted. Example command: sort CIK
  • Now count the number of IDs in the dataset and store the result in a variable. Example command: by CIK: egen cnt = count(year). This counts how often each CIK ID occurs by counting the years and stores the count/number of years in the new variable cnt.
  • We can now remove observations of entities for which the count of years (stored in the variable cnt) is below the required minimum. Example command: drop if cnt<10. This means that we require a minimum of 10 observations for an entity.
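The three bullets above, sketched in pandas for comparison (toy data; not the original commands):

```python
import pandas as pd

# Company 1 has three yearly observations, company 2 only one; require at least 2.
df = pd.DataFrame({"CIK": [1, 1, 1, 2], "year": [2010, 2011, 2012, 2010]})

df = df.sort_values("CIK")                                # sort CIK
df["cnt"] = df.groupby("CIK")["year"].transform("count")  # by CIK: egen cnt = count(year)
df = df[df["cnt"] >= 2]                                   # drop if cnt < 2
```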

N.B.: A few final remarks on handling missing data concern the way you work with the data. When you are performing cleaning actions such as those described above, it is a good idea to first make a copy of your dataset, as there is no undo like in many other programs. You can also experiment a bit with a copy, and you should definitely save the actions you choose to finalize in a Do-file; when you continue from there, again start with a copy. To keep track of your versions of the dataset you can put a date in the name of each version. When you work with a lot of data over a long time it is also a good idea to save space and memory by compressing the dataset with the command: compress. Some variables will then be stored in smaller types to save space.