Stata & missing or duplicate data

When you work with large datasets or big data, you may at some point need to take a good look at what has happened to the data, especially if you combine several datasets and/or work on them with more people. The same applies when you have received a dataset from a researcher or organization and need to remove superfluous data that is not relevant to your own research.

1) Investigate the data
There are a few simple commands in Stata that provide a good overview:

  • desc or describe = provides a brief summary of the entire dataset
  • summ or summarize = gives a quick overview of all variables, with the number of observations, the mean, the standard deviation, and the lowest and highest values (min & max) for each
  • tab or tabulate = a good way to cross-tabulate variables and spot obvious outliers or patterns in the data

These and many more commands, or combinations of commands, allow you to inspect and assess the data; a short example session is shown below.
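
A minimal first-inspection session in a Do-file might look like this (the file name mydata.dta and the variables Totaldebt and year are hypothetical examples):

    use "mydata.dta", clear        // load the dataset
    describe                       // variable names, storage types, and labels
    summarize                      // obs, mean, sd, min, and max for every variable
    summarize Totaldebt, detail    // percentiles and more detail for one variable
    tabulate year                  // number of observations per year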

2) Missing data

  • Using the summ command it is easy to see whether some variables contain no data at all. In that case it may be a good idea to delete them, as they serve no purpose. You can delete a variable/field by typing drop variable. For example: drop CIKNew. A range of adjacent variables can also be dropped with a single command. For this example: drop indfmt-conm. There are many more options to delete entire variables/fields from a dataset.
  • Another way to clean data applies if you only need the observations/records that have no missing values for crucial variables. Deleting such observations can be done with the missing value function: drop if mi(variable). For example: drop if mi(Totaldebt). The Stata Results window will report the outcome of this action: the number of observations deleted.
  • Deleting missing values is, however, not always straightforward. Stata shows missing values as dots when you view a dataset with the browse command. In some datasets, however, missing values may (partially) be represented by another value in some observations. If this is the case it is a good idea to replace those values first to allow for easier editing/deletion. If in your dataset the number zero indicates the same thing as a missing value (in some records) you can use mvdecode to replace it with a dot (= how Stata usually represents missing values). The command would look like: mvdecode variable, mv(0=.). Afterwards you can then remove all missing values the usual way with drop; see the sketch after this list.
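
A minimal Do-file fragment combining these steps; the variable names (CIKNew, indfmt, conm, Totaldebt) follow the examples in this list, and the assumption that a zero in Totaldebt really means "missing" is purely illustrative:

    drop CIKNew                   // delete one superfluous variable
    drop indfmt-conm              // delete a range of adjacent variables
    mvdecode Totaldebt, mv(0=.)   // recode zeroes to Stata's missing value (.)
    drop if mi(Totaldebt)         // delete observations missing a crucial variable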

3) Removing duplicate data
When you are using multiple datasets and have combined them you may end up with duplicate observations. Using data from some specific databases can also give you unintentional duplicates. In Compustat, for instance, you run the risk of duplicates if you only need data for industrial-type companies but, when doing the search in the Fundamentals Annual database, you forget to unmark the option FS at the screening options at Step 2 in WRDS. Some companies have more than one statement in Compustat for the same fiscal year, and you will then get both FS and IND type/format statements.
The Stata command to remove duplicates should be chosen carefully. I usually combine a unique ID code with a specific event year or date. For instance: duplicates drop CIK year, force

Explanation:

  • duplicates drop removes duplicates
  • in this example duplicates are identified by the combination of the variable CIK (ID code = Central Index Key) with the variable year
  • the last bit, , force, makes Stata remove the duplicates without warning
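
A cautious version of this in a Do-file could first report the duplicates before dropping anything; duplicates report shows how many observations share each combination:

    duplicates report CIK year        // how many observations share a CIK-year pair?
    duplicates drop CIK year, force   // then keep one observation per pair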

Personally, I think removing duplicates without checking first may not always be the smart thing to do. If you are working with a large dataset it may be a good idea to first tag possible duplicates and have a look at them before removing them. The command to tag the duplicates is: duplicates tag, gen(newvariable). This command checks the whole dataset, with all variables for all observations, for duplicates and stores the number of surplus copies of each observation in the new variable newvariable (0 means the observation is unique).
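
A minimal sketch of this tag-first workflow, reusing the CIK/year combination from the example above (dup is just an illustrative variable name):

    duplicates tag CIK year, gen(dup)   // dup = number of surplus copies (0 = unique)
    browse if dup > 0                   // inspect the flagged observations first
    duplicates drop CIK year, force     // drop them once you are satisfied
    drop dup                            // remove the helper variable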

Another version of removing observations has to do with the required number of observations per entity in a dataset. In some cases an analysis requires a minimum number of observations/records per entity to be relevant. If there are too few observations you can again remove them; in this case it can be done using the count() function on the entity (for example a company identifier like ISIN, CIK, or GVKEY). You do this as follows:

  • Sort the dataset on the ID that will be counted. Example command: sort CIK
  • Now count the number of observations per ID and store that count in a variable. Example command: by CIK: egen cnt = count(year). This counts how often each CIK ID occurs (by counting its non-missing years) and stores the count in the new variable cnt.
  • We can now remove the observations of entities for which the count stored in cnt is below the required minimum. Example command: drop if cnt<10. This means that we need a minimum of 10 observations for an entity.
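
Put together, a minimal Do-file fragment (the threshold of 10 and the variable name cnt follow the example above):

    sort CIK                         // order the data by company identifier
    by CIK: egen cnt = count(year)   // count non-missing years per CIK
    drop if cnt < 10                 // require at least 10 observations per entity
    drop cnt                         // remove the helper variable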

N.B.: A few final remarks on handling missing data concern the way you work with the data. When you perform cleaning actions such as those described above, it is a good idea to first make a copy of your database, because there is no undo as in many other programs. You can also experiment a bit with a copy, and you should definitely save the actions that you choose to finalize in a Do-file; when you continue from there, again start with a copy. To keep track of the versions of your database you can put a date in the name of each version. When you work with much data over a long time it is also a good idea to save space and memory by compressing the database with the command: compress. Some variables will then be changed to a smaller storage type to save space.
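
A minimal sketch of such a versioned workflow; the file names, including the date stamps, are hypothetical:

    use "mydata_2024-05-01.dta", clear   // always start from a dated copy
    * ... the cleaning commands you chose to keep go here ...
    compress                             // demote variables to smaller storage types
    save "mydata_2024-05-08.dta"         // new date in the name = new version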
