Evan A. Sultanik, Ph.D.

Evan's First Name @ Sultanik .com

Computer Security Researcher
Trail of Bits

Drexel University, College of Computing and Informatics
Department of Computer Science

Recent Content:

Visualizing Twitter

Journey to the Center of the Twitterverse

I’ve now been using Twitter for about six months. While Twitter’s minimalism is no doubt responsible for much of its success, I often pine for some additional social networking features. High up on that list is a simple way of representing my closest neighbors—perhaps through a visualization—without having to manually navigate individual users’ following/followers pages. A well designed representation could be useful in a number of ways:

  1. It could expose previously unknown mutual relationships (i.e., “Wow, I didn’t know X and Y knew each other!”);
  2. It could reveal mutual acquaintances whom one did not know were on Twitter; and
  3. Metrics on the social network could be aggregated (e.g., degrees of separation).
This afternoon I spent an hour or so hacking together a Python script, which I have dubbed TwitterGraph, to accomplish this. Here is an example of the ~100 people nearest to me in the network:

The code for TwitterGraph follows at the end of this post. It depends on the simplejson module and on ImageMagick, and it uses the Twitter API to construct the network graph. You don’t need a Twitter account for this to work; it doesn’t require authentication. Each IP is, however, limited to 100 API calls per hour unless it has been whitelisted; my script takes this into account. Each Twitter user requires three API calls to download his or her information, so one can load about 33 users per hour before reaching the rate limit. TwitterGraph saves its data, so successive runs will pick up where the previous one left off. Finally, TwitterGraph also ranks the users with the PageRank algorithm.

Usage: paste the code below into TwitterGraph.py and run the following:

$ chmod 755 ./TwitterGraph.py
$ ./TwitterGraph.py
You have 100 API calls remaining this hour; how many would you like to use now? 80
What is the twitter username for which you’d like to build a graph? ESultanik
Building the graph for ESultanik (output will be ESultanik.dot)...
$ dot -Tps ESultanik.dot -o ESultanik.ps && epstopdf ESultanik.ps && acroread ESultanik.pdf
$ dot -Tsvgz ESultanik.dot -o ESultanik.svgz

There are also (unnecessary) command-line options, the usage for which should be evident from the source code.


import simplejson
import urllib2
import urllib
import getopt, sys
import re
import os

class TwitterError(Exception):
    def message(self):
        return self.args[0]

def CheckForTwitterError(data):
    if 'error' in data:
        raise TwitterError(data['error'])

def fetch_url(url):
    opener = urllib2.build_opener()
    url_data = opener.open(url).read()
    return url_data

def remaining_api_hits():
    json = fetch_url("http://twitter.com/account/rate_limit_status.json")
    data = simplejson.loads(json)
    return data['remaining_hits']

def get_user_info(id):
    global is_username
    global calls
    json = None
    calls += 1
    if is_username:
        json = fetch_url("http://twitter.com/users/show.json?screen_name=" + str(id))
    else:
        json = fetch_url("http://twitter.com/users/show.json?user_id=" + str(id))
    data = simplejson.loads(json)
    return data

def get_friends(id):
    global calls
    calls += 1
    json = fetch_url("http://twitter.com/friends/ids.json?user_id=" + str(id))
    data = simplejson.loads(json)
    return data

def get_followers(id):
    global calls
    calls += 1
    json = fetch_url("http://twitter.com/followers/ids.json?user_id=" + str(id))
    data = simplejson.loads(json)
    return data

last_status_msg = ""
def update_status(message):
    global last_status_msg
    # overwrite the last message with spaces, then print the new one
    p = re.compile(r"[^\s]")
    sys.stdout.write("\r" + p.sub(' ', last_status_msg) + "\r" + message)
    sys.stdout.flush()
    last_status_msg = message

def clear_status():
    global last_status_msg
    last_status_msg = ""

def save_state():
    global history
    global user_info
    global friends
    global followers
    global queue
    global username
    data = simplejson.dumps([history, user_info, friends, followers, queue])
    bakfile = open(username + ".json", "w")
    bakfile.write(data)
    bakfile.close()

def build_adjacency():
    global friends
    idxes = {}
    idx = 0
    for user in friends:
        idxes[user] = idx
        idx += 1
    # start with an all-zero adjacency matrix
    adj = [[0.0] * len(friends) for _ in range(len(friends))]
    for user in friends:
        if len(friends[user]) <= 0:
            continue
        amount_to_give = 1.0 / len(friends[user])
        for f in friends[user]:
            if str(f) in idxes:
                adj[idxes[user]][idxes[str(f)]] = amount_to_give
    return [idxes, adj]

try:
    opts, args = getopt.getopt(sys.argv[1:], "hu:c:r", ["help", "user=", "calls=", "resume"])
except getopt.GetoptError, err:
    print err
    sys.exit(2)

max_calls = -1
username = ""
load_prev = None

for o, a in opts:
    if o in ("-h", "--help"):
        print "Usage: " + sys.argv[0] + " [-h] [-u username] [-c calls] [-r]"
        sys.exit(0)
    elif o in ("-u", "--user"):
        username = a
    elif o in ("-c", "--calls"):
        max_calls = int(a)
    elif o in ("-r", "--resume"):
        load_prev = True
    else:
        assert False, "unhandled option"

if max_calls != 0:
    # First, let's find out how many API calls we have left before we are rate limited:
    update_status("Contacting Twitter to see how many API calls are left on your account...")
    max_hits = remaining_api_hits()
    if max_calls < 0 or max_hits < max_calls:
        update_status("You have " + str(max_hits) + " API calls remaining this hour; how many would you like to use now? ")
        max_calls = int(raw_input())
        if max_calls > max_hits:
            max_calls = max_hits
if username == "":
    print "What is the twitter username for which you'd like to build a graph? ",
    username = re.compile(r"\n").sub("", raw_input())

update_status("Trying to open " + username + ".dot for output...")
dotfile = open(username + ".dot", "w")
print "Building the graph for " + username + " (output will be " + username + ".dot)..."

is_username = True
history = {}
queue = [username]
calls = 0
user_info = {}
friends = {}
followers = {}

# Let's see if there's any partial data...
if os.path.isfile(username + ".json"):
    print "It appears as if you have some partial data for this user."
    resume = ""
    if not load_prev:
        print "Do you want to start off from where you last finished? (y/n) ",
        resume = re.compile(r"\n").sub("", raw_input())
    if load_prev == True or resume in ("y", "Y", "yes", "Yes", "YES"):
        is_username = False
        bakfile = open(username + ".json", "r")
        [history, user_info, friends, followers, queue] = simplejson.loads(bakfile.read())
        bakfile.close()
        print str(len(friends)) + " friends!"
        print "Loaded " + str(len(history)) + " previous Twitterers!"
        print "The current queue size is " + str(len(queue)) + "."
    else:
        print "You are about to overwrite the partial data; are you sure? (y/n) ",
        resume = re.compile(r"\n").sub("", raw_input())
        if not (resume in ("y", "Y", "yes", "Yes", "YES")):
            sys.exit(0)

while len(queue) > 0 and calls + 3 <= max_calls:
    next_user = queue.pop(0)
    # Let's just double-check that we haven't already processed this user!
    if str(next_user) in friends:
        continue
    update_status(str(next_user) + "\t(? Followers,\t? Following)\tQueue Size: " + str(len(queue)))
    info = None
    if str(next_user) in user_info:
        info = user_info[str(next_user)]
    else:
        try:
            info = get_user_info(next_user)
        except urllib2.HTTPError:
            update_status("It appears as if user " + str(next_user) + "'s account has been suspended!")
            print ""
            continue
    uid = str(next_user)
    if is_username:
        uid = str(info['id'])
        is_username = False
    history[uid] = True
    user_info[uid] = info
    update_status(info['screen_name'] + "\t(? Followers,\t? Following)\tQueue Size: " + str(len(queue)))
    followers[uid] = get_followers(uid)
    for i in followers[uid]:
        if str(i) not in history:
            history[str(i)] = True
            queue.append(i)
    update_status(info['screen_name'] + "\t(" + str(len(followers[uid])) + " Followers,\t? Following)\tQueue Size: " + str(len(queue)))
    friends[uid] = get_friends(uid)
    for i in friends[uid]:
        if str(i) not in history:
            history[str(i)] = True
            queue.append(i)
    update_status(info['screen_name'] + "\t(" + str(len(followers[uid])) + " Followers,\t" + str(len(friends[uid])) + " Following)")

# Get some extra user info if we have any API calls remaining
# Find someone in the history for whom we haven't downloaded user info
for user in history:
    if calls >= max_calls:
        break
    if not user in user_info:
        try:
            user_info[user] = get_user_info(user)
        except urllib2.HTTPError:
            # This almost always means the user's account has been disabled!
            pass

if calls > 0:
    save_state()

# Now download any user profile pictures that we might be missing...
update_status("Downloading missing user profile pictures...")
if not os.path.isdir(username + ".images"):
    os.mkdir(username + ".images")
user_image_raw = {}
for u in friends:
    _, _, filetype = user_info[u]['profile_image_url'].rpartition(".")
    filename = username + ".images/" + str(u) + "." + filetype
    user_image_raw[u] = filename
    if not os.path.isfile(filename):
        # we need to download the file!
        update_status("Downloading missing user profile picture for " + user_info[u]['screen_name'] + "...")
        urllib.urlretrieve(user_info[u]['profile_image_url'], filename)
update_status("Profile pictures are up to date!")
print ""

# Now scale the profile pictures
update_status("Scaling profile pictures...")
user_image = {}
for u in friends:
    _, _, filetype = user_info[u]['profile_image_url'].rpartition(".")
    filename = username + ".images/" + str(u) + ".scaled." + filetype
    user_image[u] = filename
    if not os.path.isfile(filename):
        # we need to scale the image!
        update_status("Scaling profile picture for " + user_info[u]['screen_name'] + "...")
        os.system("convert -resize 48x48 " + user_image_raw[u] + " " + user_image[u])
update_status("Profile pictures are all scaled!")
print ""

update_status("Building the adjacency matrix...")
[idxes, adj] = build_adjacency()
print ""
update_status("Calculating the stationary distribution...")
iterations = 500
damping_factor = 0.25
st = [1.0]*len(friends)
last_percent = -1
for i in range(iterations):
    users = 0
    for u in friends:
        users += 1
        percent = round(float(i * len(friends) + users) / float(iterations * len(friends)) * 100.0, 1)
        if percent > last_percent:
            last_percent = percent
            update_status("Calculating the stationary distribution... " + str(percent) + "%")
        idx = idxes[str(u)]
        given_away = 0.0
        give_away = st[idx] * (1.0 - damping_factor)
        if give_away <= 0.0:
            continue
        for f in friends[u]:
            if str(f) not in friends:
                continue
            fidx = idxes[str(f)]
            ga = adj[idx][fidx] * give_away
            given_away += ga
            st[fidx] += ga
        st[idx] -= given_away
print ""
# Now calculate the ranks of the users
deco = [ (st[idxes[u]], i, u) for i, u in enumerate(friends.keys()) ]
deco.sort()
deco.reverse()
rank = {}
last_st = None
last_rank = 1
for s, _, u in deco:
    if last_st == None:
        rank[u] = 1
    elif s == last_st:
        rank[u] = last_rank
    else:
        rank[u] = last_rank + 1
    last_rank = rank[u]
    last_st = s
    print user_info[u]['screen_name'] + "\t" + str(rank[u])

update_status("Generating the .dot file...")

# Now generate the .dot file
dotfile.write("digraph twitter {\n")
dotfile.write("  /* A TwitterGraph automatically generated by Evan Sultanik's Python script! */\n")
dotfile.write("  /* http://www.sultanik.com/                                                 */\n")
for user in friends:
    dotfile.write("  n" + str(user) + " [label=<")
    dotfile.write("<TABLE BORDER=\"0\">")
    dotfile.write("<TR><TD><IMG SRC=\"" + user_image[user] + "\"/></TD></TR>")
    dotfile.write("<TR><TD>" + user_info[user]['name'] + "</TD></TR>")
    if not (user_info[user]['name'] == user_info[user]['screen_name']):
        dotfile.write("<TR><TD>(" + user_info[user]['screen_name'] + ")</TD></TR>")
    dotfile.write("<TR><TD>Rank: " + str(rank[user]) + "</TD></TR>")
    dotfile.write("</TABLE>>")
    if user_info[user]['screen_name'] == username:
        dotfile.write(" color=\"green\" shape=\"doubleoctagon\"")
    dotfile.write("];\n")
dotfile.write("\n")
for user in friends:
    for f in friends[user]:
        if str(f) in friends:
            dotfile.write("  n" + str(user) + " -> " + "n" + str(f) + ";\n")
dotfile.write("}\n")
dotfile.close()
print ""
clear_status()


In which Evan and Joe teach you how to make beautiful documents.

Earlier today, Joe Kopena and I once again presented our tag-team LaTeX talk. Not familiar with LaTeX? Why not read the Wikipedia article! It’s essentially a professional-grade system for beautifully typesetting documents/books. There are various books and Internet tutorials that do a fairly good job of introducing the basics, so, in our talk, Joe and I cover some more advanced topics and also general typesetting snags that novices often encounter. We always get requests for our slides after each of our talks, so I figured I’d post them online (which is the purpose of this blog entry).

Note that the entire presentation was created in LaTeX using Beamer. You may also want to read my notes on BibTeX, which will eventually become a part of our talk. You can read some of Joe’s notes on LaTeX on his personal wiki, here. Feel free to browse and/or post any of your general typesetting questions to this public mailing list.

On the Economics of Higher Education

In which I apply flimsy math and hand-waving to justify the time I’ve wasted in school.

There has been much “messaging on twitter” [sic] and “posting to blogs” [sic] of late regarding the economic benefit of pursuing a graduate degree in Computer Science. For example, there are claims, among other things, that a master’s degree will require 10 years to earn back the income lost during study. A Ph.D. will require a staggering 50 years. Most everything I’ve read cites this article based upon Dr. Norman Matloff’s testimony to the U.S. House Judiciary Committee Subcommittee on Immigration. Curiously, the article everyone seems to cite does not itself have a bibliography. It does, however, credit “a highly biased pro-industry National Research Council committee” for calculating these numbers. After five to ten minutes of “searching on Google” [sic], I was unable to find a report from the National Research Council corroborating such a claim. Can anyone point me to a link?

I do not dispute that these numbers may be correct; the purpose of this blog entry is to point out that, at least in the case of most with whom I’ve matriculated, it is flat out false.

Here is my (admittedly simple) mathematical model:

$n=\frac{t ( E[s_w] + c )}{E[s_a]-E[s_w]},$

where:
  • $t$ is the number of years spent in school;
  • $E[s_w]$ is the expected salary one would have earned if one did not attend school;
  • $c$ is the net monetary cost of attending school per year, such as tuition paid, books purchased, &c. This value should also take into account any income earned during a school year (e.g., one’s stipend) and in many cases will be a negative number;
  • $E[s_a]$ one’s expected salary after graduating school; and
  • $n$ is the number of years one would have to work after graduating to make up for lost income.

Note that this model does not take attrition into account.

As an example, let’s say John is a Ph.D. student who, through a research assistantship, receives tuition remission and a stipend of $20,000 a year. This is quite reasonable (and actually a bit conservative, according to this study). If John had not chosen to pursue a Ph.D., he would have been hired into a $65k entry-level position, which is slightly on the high end. Once he has graduated (in the quite average term of five years), he expects to receive a salary of $85k which, according to this survey, is on the low end. We also, however, have to account for taxes! From my own experience and from consulting virtually every graduate student I know, John will receive a refund for practically all of the money taxed from his income. Without going to school, John would be in the 25% tax bracket, with a normalized income of about $52k (taking the tiered bracketing system into account). After earning his Ph.D., John would have a normalized income of about $67k. Plugging these values into the model, we get:

$n=\frac{5 \times ( 52 + (-20) )}{67-52} \approx 11.$
Therefore, John will require about 11 years to recoup the income lost during school.
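The calculation above is easy to check in code; this sketch simply restates the model as a function, with John’s figures (in thousands of dollars) plugged in:

```python
def years_to_recoup(t, s_w, s_a, c):
    """Years of post-graduation work needed to recoup the income lost
    while spending t years in school.

    s_w: expected normalized (after-tax) salary without the degree
    s_a: expected normalized (after-tax) salary after graduating
    c:   net yearly cost of attending school (negative when a stipend
         exceeds tuition and other expenses)
    """
    return t * (s_w + c) / float(s_a - s_w)

# John: 5 years in school, $52k without the degree, $67k with it,
# and a $20k stipend (so c = -20):
print(years_to_recoup(5, 52.0, 67.0, -20.0))  # about 10.7, i.e., roughly 11 years
```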

I think I was relatively conservative with my income estimates, and that’s still a lot less time than 50 years! I plugged my own stats/estimates into the model and I project that I will need fewer than five years (and I don’t even make as much as some other students I know)! Furthermore, with a Ph.D., John theoretically has more potential for advancement/promotion. Once the 11 years are over, he will have much more earning potential than a degreeless John (assuming the market for Ph.D.s remains strong, which I don’t think is a huge assumption given the current lack of domestic technical/science Ph.D.s in the US).

Computer Science

An Introduction

People often ask me what I do or about what I am studying. Many have certain misconceptions and stereotypes that render the simple answer of “Computer Science” insufficient. For example, the vast majority of non-technical people with whom I’ve talked seem to think that learning new programming languages and writing programs are the primary areas of study for computer-related university majors. That’s like believing literature majors go to university to learn the intricacies of using pens and typewriters. In the ~7 years—and counting (gasp!)—in which I’ve been in higher education, I haven’t been taught a single programming language.

The following is an attempt on my part to answer these questions, in the hopes that I can hereafter simply refer people to this page instead of having to explain this for the thousandth time.

Hacking the Law

Thought Experiments Testing the Limits of the Law


First of all, I am neither a lawyer nor a trained ethicist. What follows is a list of thought experiments related to “hacking” (i.e., testing the limits of) the law. Unless otherwise noted, I have not done any research to confirm whether the questions posed herein are novel or have already been answered. Although the following contains some material related to computers, I have tried my best to write it in such a way as to be accessible to the widest audience.

Copyrighting a Number

Is it legal?

It is obviously legal to copyright an artistic work, like a digital photo. A digital photo, however, is really stored on a computer’s hard drive as a sequence of numbers, each representing the color of a dot in the picture. This sequence of numbers could be combined such that it amounts to a single, unique number. Would it be legal for one to give that number—which uniquely represents the copyrighted image—to a friend? The friend could then split that number back into its original sequence on a hard drive, thus reconstructing the original copyrighted picture. If copyrighting numbers is not legal, then I do not see why what I just described would not be legal.
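To make the encoding concrete, here is a minimal sketch (not tied to any real image format): interpret the picture’s bytes as the digits of a single base-256 number. The conversion is exactly reversible, which is what makes the thought experiment work:

```python
def picture_to_number(data):
    # interpret the byte sequence as the digits of one base-256 number
    n = 0
    for b in data:
        n = n * 256 + b
    return n

def number_to_picture(n, length):
    # invert the encoding, recovering the original byte sequence exactly
    out = []
    for _ in range(length):
        out.append(n % 256)
        n //= 256
    return bytes(reversed(out))

data = b"\x89PNG\r\n"  # a stand-in for a picture's bytes
n = picture_to_number(data)
assert number_to_picture(n, len(data)) == data      # round-trips exactly
assert number_to_picture(n + 1, len(data)) != data  # off by one: garbage
```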

The issue is actually a bit more complicated than it seems.

It is entirely possible that the method used to convert the digital picture to a single number could be slightly modified (e.g., by adding 1 to the resulting number). If the recipient of the number does not know that this was done then the resulting reconstructed picture will look like noise. If the recipient knows to subtract 1 from the number before reconstructing the picture, however, the picture will be exactly the same as the copyrighted picture.

To add even more complication, it is entirely possible that, by adding 1 to the number, the improperly decoded picture might in fact become a completely different copyrighted picture.


  1. Person X has a copyrighted picture, called picture A, that he/she legally owns.
  2. X converts the picture to a number, $n$.
  3. X sends the number $n+1$ to person Y.

Case 1:

  • Y, knowing to subtract 1 first, converts the number $n$ back to a picture, resulting in picture A.

Case 2:

  • Y converts the number $n+1$ directly to a picture, resulting in a completely different picture B.
  • Picture B turns out to be copyrighted by person Z.
  • Neither person X nor person Y has ever even seen picture B before.

At what point is copyright lost?

Related to copyrighting a number is the following.

When the picture is represented as a sequence of numbers (representing the colors of the individual dots in the picture), it is possible to increment each of the colors of the individual dots. For example, let’s say the dot in the upper left corner of picture A is currently black. We could iteratively increment the color of that dot so that it eventually becomes white (going through a sequence of lightening grays in the process). We could even increment all of the dots in the picture at the same time.

Now, let’s say picture A is a photo of the Mona Lisa, of which we do not own the copyright. Picture B is a photo of the Empire State Building that we took and of which we therefore own the copyright. Both of the pictures have the same dimensions; therefore, each dot in picture A has a corresponding dot in picture B.

Now, we iteratively increment the dots in A such that they all move toward the color of their corresponding dot in picture B. Let’s call the result of this picture C. At the beginning, C will look exactly like picture A. At the end, C will look exactly like picture B. In the middle of the process, C will look like a linear combination of A and B.
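The morph described above is just a per-dot linear blend. Here is a minimal sketch, treating a picture as a flat list of color values (an assumption purely for illustration):

```python
def morph(a, b, alpha):
    # blend each dot of picture a toward its corresponding dot in picture b;
    # alpha = 0.0 reproduces A exactly, alpha = 1.0 reproduces B exactly
    return [(1.0 - alpha) * ca + alpha * cb for ca, cb in zip(a, b)]

a = [0, 0, 0, 0]          # an all-black picture A
b = [255, 255, 255, 255]  # an all-white picture B
c = morph(a, b, 0.5)      # halfway through the morph: a uniform gray
```

Question 1 below, in these terms, asks: for which value of alpha does the copyright of `c` stop being A’s and start being B’s?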

Question 1

At what point during the “morph” from A to B will the “copyright” of picture C transition from that of picture A to picture B?

Question 2

Is there any point during the process that picture C might not be protected by either picture A or picture B’s copyrights?

Celebrating 200 Poetic Years

In which Rob and I embark on yet another crazy trip.

Rob Lass and I have shared many an adventure. We have embarked on a number of multi-day cycling trips. He accompanied me on a crazy U-Haul road trip to the Canadian border to retrieve a 1.5 tonne pallet of IBM servers I had acquired. We have masqueraded as lawyerly fat-cats at whiskey festivals. We both share an unnatural fascination with the life and works of Leslie Lamport. We were once collectively mooned and subsequently chided by Jello Biafra. Yet another time, we shared drinks in the hotel bar of a Holiday Inn in Monmouth, NJ, sitting next to Ron Jeremy. We have also shared a number of moments in close proximity to RMS (an activity which, incidentally, I recommend only in moderation).

I was not in the least surprised, then, when Rob approached me about going down to Baltimore for the bicentennial anniversary of Edgar Allan Poe’s birth, followed by a stakeout of Poe’s grave to catch the Poe Toaster. The intervening hours were to be filled at The Horse You Came In On Saloon, which was supposedly one of Poe’s favorite hangouts, and is said to be the last place he was seen before his death. I heartily endorsed this plan.

The first matter of business was to make our two hour road trip as pleasant as possible. This obviously entailed gratuitous electronics.

How We Roll

Upon our arrival at Westminster Hall (the location of the bicentennial ceremony), we first set out to examine Poe’s grave in what remained of the daylight.

Rob and Evan at Poe's Grave
Please ignore the two fops and focus your attention on the fence in the background: this is the one over which we suspect the toaster makes his entrance. The building behind the fence is the Law Library of the University of Maryland. The courtyard between the fence and the building is secured and only accessible from either the interior of the library or by scaling two consecutive fences in an adjacent alley (more on this below).

Charm City Cakes (of Ace of Cakes fame) created a cake for the event.

Charm City Poe Cake
The cake was raffled off to the guests, and I am sorry to report that neither of us won.

I’d also like to report that many Poe fans are certified weirdos. Some also have extreme dedication.

Extreme Dedication
In this particular case, however, to what the dedication is I am not sure (the ceremony overlapped with the Baltimore Ravens’ unsuccessful bid at the Super Bowl).

The celebration as a whole, however, was quite fun, including a number of very good performances. Rob and I also got to meet John Astin, which turned out to be somewhat of a letdown. But he’s ancient, so it’s okay.

The View from Inside Westminster Hall

Afterward we got a bite to eat and caught the tail end of said Ravens game at The Horse You Came In On.

The Horse You Came In On
I learned four things from this experience:
  1. Yuengling seems to be as popular in Baltimore as it is in Philly;
  2. in Baltimore, Yuengling is not pronounced “lager;”
  3. despite the fact that Baltimore lost to the Pittsburgh Steelers and my car has a PA license plate, no one mistook my car for that of a Steelers fan and flipped it over in a riot (as would undoubtedly have been the case if Baltimore were populated by Philadelphia sports fans); and
  4. the “frat” scene seems to descend on The Horse You Came In On immediately after the completion of sports games.

The gate closest to the monument.

We got back to the graveyard around 00:30 on the 19th to find a crowd of about 60 people. We really didn’t know what to expect; apparently neither did anyone else, as wild rumors started to fly. One rumor claimed that the toaster often made rounds to the fences surrounding the graveyard to say hi (and undoubtedly sign countless autographs and pose for pictures). Another rumor claimed that the toaster was none other than Poe House curator Jeff Jerome himself. This is all complicated by the fact that Poe actually has two graves (he was exhumed in the late 19th century to make way for his monument and re-buried in the back of the graveyard—a location not visible from the sidewalk/gates). The grave in the back is the one at which Rob and I were photographed above. Some people thought the toaster visited the monument (which is visible from the street), while others thought that he visited the grave in the back. There were therefore two groups of people each clustered around the gate closest to one of the graves. The “monument” group seemed to be a mix of the aforementioned weirdos with a healthy dose of hipsters. They spent their time reading poetry. The group at the other gate (closest to the back grave) was decidedly more hardcore; spirits flowed from many a hip flask.

The gate closest to the rear grave (where the toaster usually goes).

At this latter gate, Rob and I met up with a guy who had actually attended this thing before; in fact, he claimed to have attended every year since 1983. He and his son (a teenager) come every year to try and get a picture of the toaster, most likely to sell to a magazine (there is only one known photo of the toaster, from a 1990 issue of Life magazine, reproduced here). He said that the toaster almost always goes to the back grave. The toaster gets no cooperation from any authorities; neither the Westminster Burial Grounds nor the UMD Law Library provide him with any assistance. Jeff Jerome camps out in the church every year simply to confirm that the toaster is the same person as the year before (i.e., that there is not an impostor) and also to ensure the identity of the toaster remains secret (because if his identity were ever revealed the magic of the tradition might be lost). Jerome does not know who exactly the toaster is, however, and he does not want to know. Once the toaster arrives, does his toast, and makes his exit, Jerome goes into the graveyard, collects the bottle of liquor, flowers, and any notes the toaster may have left, puts them in the church, and leaves. It is Jerome’s exit that cues the hordes of weirdos, hipsters, alcoholics, and amateur journalists that the toaster has come and done his deed.

The alley next to the graveyard.

At around 01:30, the man’s teenage son came up to his father saying that he had been surveilling the alley adjacent to the graveyard that I mentioned above. Three guys had gone in, but he only saw two of them come out. Rob immediately walked down to the alley and I followed close behind. Rob got there first and apparently saw two guys on the other side of the two fences (one fence of which was about 10 feet tall). One fellow jumped over the brick wall to the graveyard. The other hid behind a small half wall, peeked his head out to look at Rob, and then sprinted over the wall to follow his companion. About five minutes later, camera flashes could be seen reflecting off of the walls of the law library, seeming to emanate from the area of the back grave. We assumed this was the Poe Toaster having pictures taken for his own record. We waited for another hour or so but nothing happened. It was cold, and the toaster had likely already come and gone, so we drove home.

All in all, it was an awesome adventure.

You can read Rob’s account of it here.

Walking to the Horizon

or, A Mathematical Argument for a Gastronomical Visit to Stockholm

I am subscribed to David Horvitz’s new project entitled IDEA SUBSCRIPTION in which he posts almost-daily simple instructions. Yesterday’s instructions read as follows:

I do not profess to have spent much time researching this in the past, but I had never heard of this approximation before. The approximation is so concise that I was curious as to its error. The approximation is obviously incorrect for very tall heights since it is unbounded:
$\lim_{h \rightarrow \infty} \sqrt{1.5 h} = \infty,$
viz., in actuality an enormously tall person (whose eyes were an almost infinite distance away from the surface of the Earth) would only be able to see a quarter of the Earth’s circumference in front of him!

I therefore spent the last 5 minutes formalizing a bound on the error of this approximation. The results, which follow, were quite surprising.
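Assuming the rule of thumb in question is “distance to the horizon in miles ≈ √(1.5 × eye height in feet)” and modeling the Earth as a sphere (the radius figure below is an assumption), the check is quick to run: the rule is essentially √(2Rh) with R ≈ 3960 miles, since 2 × 3960 / 5280 = 1.5, so its error comes entirely from the neglected h² term:

```python
import math

R = 3959.0  # an assumed mean radius of the Earth, in miles

def horizon_exact(h_feet):
    # straight-line distance (miles) to the horizon from eye height h (feet)
    h = h_feet / 5280.0  # feet -> miles
    return math.sqrt(h * (2.0 * R + h))

def horizon_approx(h_feet):
    # the rule of thumb: miles ~ sqrt(1.5 * height in feet)
    return math.sqrt(1.5 * h_feet)

for h in (6.0, 100.0, 29029.0):  # a person, a tower, Mount Everest
    print(h, horizon_exact(h), horizon_approx(h))
```

For eye heights anywhere near human scale the two agree to well under a percent; the approximation only breaks down when h stops being negligible next to the Earth’s diameter.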

Cycle Junkie Shirt

An esoteric shirt inspired by a slightly less esoteric conference.

In late July of 2006, Rob Lass and I decided to attend the HOPE conference in New York City. We were both living in Philadelphia at the time, which is conveniently a little over 100 miles (~160 km) from NYC. Earlier that year we had successfully piloted our bicycles from Philadelphia to Reston, Virginia, averaging over 100 miles each day. We therefore set out to ride up to New York in one day.

The HOPE conference is attended, in large part, by geeks, mostly of the computer variety. From our interactions with the then burgeoning bicycle subculture in Philadelphia, we had noticed a large overlap with the computer geek subculture. An idea was thus born: We were to design and print a t-shirt—the de facto uniform of bike- and computer-geeks alike—that would marry the two subcultures. We would then sell the t-shirts at HOPE to help fund our expedition.

Here is the design up with which I came:

There are three “cycles” referenced in the design:

  1. a bicycle (obviously);
  2. a CPU cycle; and
  3. a graph cycle.

The term “cycle junkie” was coined by Bill Gosper.

Although I am almost sold out of the first printing, if there is enough interest I might organize a second printing of the shirts. Contact me if you’re interested.

Sultanik’s Law of Wikipedia Authorship

Spoiler: Trolls always prevail.

$$\lim_{t \rightarrow \infty}P(a = \mbox{Expert} \vee a = \mbox{Troll}) = 1.0,$$

where $t$ is time and $a$ is the author of a new article on Wikipedia.

In other words, as time goes on it becomes more and more certain that the author of a new Wikipedia article will be either a very specialized expert or a troll.

Only π more hours to go…

In which I am trolled by a software utility.

This evening I finally got around to doing some forensic data recovery from a broken (i.e., horribly clicking) hard drive. I had backed up most of the data, but there are a couple of non-vital files that it would be nice to recover. That, and I've never done anything like this before, and it's quite fun. It's especially fun because the partition I'd like to recover was formatted in ReiserFS, for which no free and few commercial recovery tools exist.

The first step of data recovery is making an image of the faulty disk on a healthy hard drive. The disk image can then be repaired and diagnosed without having to worry about hardware failures (i.e., the dreaded clicking). The tool of choice for this is ddrescue. For those who are familiar with the *NIX command dd, ddrescue works similarly, except that it skips over bad sectors. Once all of the good sectors are copied, it goes back to the bad sectors and tries to read them again (in case the hardware malfunction is stochastic).

ddrescue prints out a handy list of statistics, including the average transfer rate. My rate is currently at 7120 kB/s (it's so slow because I am copying the image to my network file server over 100BaseT to a Pentium-III box running software raid). The hard drive I am recovering is 76.8 GB in size. I did some quick calculations to figure out how long I'd have to wait before this thing finishes.

$\frac{76.8\ \mbox{GB}}{7120\ \mbox{kB/s}} \approx 3.141\ \mbox{hours} \approx \pi.$
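The arithmetic works out if GB and kB are read as binary units (GiB and KiB respectively; an assumption, but the one that makes the joke land):

```python
import math

size_bytes = 76.8 * 2**30   # 76.8 GB, read as GiB
rate_bytes = 7120.0 * 1024  # 7120 kB/s, read as KiB/s
hours = size_bytes / rate_bytes / 3600.0
print(hours)  # suspiciously close to pi
```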