Knee Deep in Tables


When working with large databases with LOTS of tables, it can be hard to find a specific, discrete data element or column. You know it’s in there somewhere, just not exactly where… Your only (sane) option is to turn to the SQL Server system tables to find what you need.


This SQL query will help you hunt down what you need…

SELECT A.NAME AS [Table Name]
 ,SCHEMA_NAME(A.schema_id) AS [Schema Name]
 ,B.NAME AS [Column Name]
FROM sys.tables AS A
INNER JOIN sys.columns AS B ON A.object_id = B.object_id
WHERE B.NAME LIKE '%Mds%' --<< Guess your column name here

Upgrading Mac OS X’s default Sqlite database

I usually avoid compiling my own components, opting instead for pre-compiled MacPorts packages. But as I’m space constrained, MacPorts isn’t ideal here: it pulls down all the GNU dependencies along with what I actually need.

Upgrading Sqlite3 was relatively painless. However, there are a couple of caveats…

The Sqlite amalgamation file compiles without any problems but will leave you with no ‘command history’, which is a PITA if you’re working with the Sqlite3 command line or console tool.

To get the command history working in Sqlite3, you need readline and ncurses libs. Therefore you need to be working with the ‘sqlite-autoconf*.tar.gz’ version of the source and do the usual

./configure; make; sudo make install

If you’ve been monkeying with your environment path variables, you might need to

sudo cp /usr/bin/sqlite3 /usr/bin/sqlite3.bk

and then

sudo ln -s /usr/local/bin/sqlite3 /usr/bin/sqlite3
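The symlink matters because the shell runs whichever sqlite3 it finds first on $PATH. Here’s a quick sketch of that lookup order using throw-away stub scripts in temp directories (the paths and version strings below are made up for illustration):

```shell
# Two stub 'sqlite3' scripts stand in for /usr/bin (OS-supplied) and
# /usr/local/bin (freshly built) -- versions here are invented
old=$(mktemp -d)
new=$(mktemp -d)
printf '#!/bin/sh\necho 3.7.5\n'  > "$old/sqlite3"   # pretend OS build
printf '#!/bin/sh\necho 3.7.15\n' > "$new/sqlite3"   # pretend new build
chmod +x "$old/sqlite3" "$new/sqlite3"

# With the 'new' dir first on PATH (like /usr/local/bin before /usr/bin),
# the new build is the one that runs
ver=$(PATH="$new:$old"; export PATH; sqlite3 --version)
echo "$ver"   # 3.7.15
```

If /usr/local/bin already comes before /usr/bin in your PATH, the symlink step isn’t strictly needed.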

Everything now seems to be working fine for me on OS X 10.7.5. I can recall previous commands with the up and down arrow keys, and I can use the math functions from extension-functions.c with

.load ''

Note: To compile the extension-functions.c package, download it into the sqlite-autoconf source directory, then

gcc -shared -I "." -o extension-functions.so extension-functions.c

The "." include path is where the compiler looks for the sqlite3ext.h file. You can then cp the resulting .so to wherever you need it.

Social Security Administration Death Master File Extract & Transform with Bash

The DMF from the SSA is a pretty typical ‘fixed-width’, ASCII dataset.

It’s an interesting data set, and the only one I know of that has its own Facebook Page (?!?)

This is a quick (and dirty) shell script to convert it into a more load-friendly CSV file.

This is the *nix version of an earlier post which did the same thing using PowerShell.

Use it on the decompressed raw file with –

$ ./ ssdm1 >> ssdm1.csv

Cut and paste this into a text file called

#!/bin/sh
# Substitute any offending semicolons and commas with white space
cat "$1" | tr ";" " " | tr "," " " |
# Insert new clean comma separators
awk '{
ACD=substr($0, 1, 1)
LName=substr($0, 11, 20)
Suffix=substr($0, 31, 4)
FName=substr($0, 35, 15)
MName=substr($0, 50, 15)
PCode=substr($0, 65, 1)
# Reformat SSNs to 11 character (xxx-xx-xxxx) format
SSNa=substr($0, 2, 3)
SSNb=substr($0, 5, 2)
SSNc=substr($0, 7, 4)
# Date of Death is stored MMDDYYYY
DODMonth=substr($0, 66, 2)
DODDay=substr($0, 68, 2)
DODYear=substr($0, 70, 4)
# Date of Birth is stored MMDDYYYY
DOBMonth=substr($0, 74, 2)
DOBDay=substr($0, 76, 2)
DOBYear=substr($0, 78, 4)
# Print it all back out, dates reordered to YYYY-MM-DD
printf ("%s,%s-%s-%s,%s,%s,%s,%s,%s,%s-%s-%s,%s-%s-%s\n", \
ACD, SSNa, SSNb, SSNc, LName, Suffix, FName, MName, PCode, \
DODYear, DODMonth, DODDay, DOBYear, DOBMonth, DOBDay )
# Strip the fixed-width space padding
}' | tr -d " "
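To sanity-check the field offsets without the real file, you can feed the same extraction logic a synthetic fixed-width record. Everything below (name, SSN, dates) is invented for illustration; only the column widths come from the layout above:

```shell
# A synthetic record padded out to the offsets above -- NOT real DMF data
rec=$(printf '%-1s%-9s%-20s%-4s%-15s%-15s%-1s%-8s%-8s' \
      'A' '123456789' 'DOE' 'JR' 'JOHN' 'Q' 'P' '01152010' '07041931')

# Apply the same substr extraction to a few fields:
# change code, re-hyphenated SSN, and last name
out=$(echo "$rec" | awk '{
  printf("%s,%s-%s-%s,%s\n", substr($0, 1, 1), substr($0, 2, 3),
         substr($0, 5, 2), substr($0, 7, 4), substr($0, 11, 20))
}' | tr -d " ")
echo "$out"   # A,123-45-6789,DOE
```

If the hyphenated SSN or the name comes out mangled, an offset in the script is off by one for your copy of the file.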