Evan A. Sultanik, Ph.D.

Evan's First Name @ Sultanik .com

Computer Security Researcher
Trail of Bits

Adjunct Professor
Drexel University College of Computing & Informatics
Department of Computer Science

Are no two snowflakes alike?

A mathematical argument for the negative.

I think the claim that "no two snowflakes are alike" is fairly common. The idea is that there are so many possible configurations of snow crystals that it is improbable that any two flakes sitting next to each other would have the same configuration. I wanted to know: Is there any truth to this claim?

Kenneth Libbrecht, a physics professor at Caltech, thinks so, and makes the following argument:

Now when you look at a complex snow crystal, you can often pick out a hundred separate features if you look closely.

He goes on to explain that there are $10^{158}$ different configurations of those features. That's a 1 followed by 158 zeros, which is about $10^{70}$ times larger than the total number of atoms in the universe. Dr. Libbrecht concludes

Thus the number of ways to make a complex snow crystal is absolutely huge. And thus it's unlikely that any two complex snow crystals, out of all those made over the entire history of the planet, have ever looked completely alike.
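To see why Libbrecht's numbers lead to that conclusion, here is a rough back-of-the-envelope version of his reasoning (a sketch only, with illustrative numbers that are my own assumptions, not his). If $n$ complex snow crystals are drawn uniformly at random from $d \approx 10^{158}$ equally likely configurations, the standard birthday-problem bound gives

$P(\text{at least one match}) \approx 1 - e^{-n^2/(2d)} \leq \frac{n^2}{2d}.$

Even an absurdly generous $n = 10^{34}$ crystals over the history of the planet would put the right-hand side on the order of $10^{-90}$, so if there is room for skepticism, it lies in the assumptions rather than in the arithmetic.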

Being the skeptic that I am, I decided to rigorously investigate the true probability of two snowflakes having possessed the same configuration over the entire history of the Earth. Read on to find out.

Revisiting the Ballmer Peak

Might the Ballmer Peak be an actual phenomenon?

Last year I made a rather esoteric joke about a supposed phenomenon called the "Ballmer Peak" that was popularized by a web comic. The idea is that alcohol, at low doses, can actually increase the productivity of a computer programmer. The original claim was obviously in jest since, among other reasons, Randall Munroe (the web comic's author) claimed that this peak occurs at exactly 0.1337% blood alcohol content. This got me thinking: Could there be any truth to this claim? Read on to find out; the results may surprise you.

Sir Robert Burnett

An investigation into the life of the patron saint of alcoholic graduate students.

Political scientist Ed Burmila—sole remaining contributor to one of my favorite weblogs on the Internets, Gin and Tacos—just asked his readership,

What brought you here initially? Was I suggested by one of your friends? Did you arrive from a link on a different site – especially Crooks & Liars? Random internet search? Internet search specifically for gin and/or tacos? Saw a sticker on someone's car? Wrote three words in the search bar, hit ctrl-Enter, and hoped for the best?

I have been reading Gin and Tacos for almost its entire, decade-long existence. I didn't mean for that to sound so hipsterish, but there's no other way to put it. I remember first stumbling on the site after entering "Robert Burnett" into my search bar, back in the days when G&T.com had more in common with its name than simply being awesome. I, like Ed, was a poor graduate student at the time, and I too had discovered the siren call of Sir Robert Burnett's London Dry Gin. (Perhaps I inherited this penchant from my advisor.) I'm not sure about G&T.com's Robert Burnett fan fiction, though.

I was, however, intrigued by the historical sleuthing of Ed et al. in trying to track down the truth behind the real Robert Burnett. Unfortunately, here is all they were able to conclude:

• Robert Burnett Jr. and Sir Robert Burnett were active in politics; however, neither was mayor of London.
• The Burnett family was very active in military recruitment.
• Most importantly, the Burnett family dealt in liquors.
• Finally, Sir Robert Burnett had a pretty damn nice estate.

Unfortunately, none of my research resulted in a specific reference to gin. This is primarily due to the fact that the only source available to me was the Times of London; although there might have been advertisements for Burnett's Gin in the Times, they did not come through in the search. Someone with more experience in alcohol-oriented history could possibly do better.

I was no expert in history; however, like most Ph.D. students, I was a world-renowned expert in procrastination. I therefore took on the task. Read on to see my (now six-year-old) results.

Seven Degrees of Separation

In the year 2651 we will have to create the "Seven Degrees of Separation Game"!

Is it true that everyone on earth is separated by at most six degrees? There's plenty of empirical evidence to support this claim already, so I am going to take a different, more theoretical approach.

New website!

…for the fifth—and by no means last—time.

So, I’ve gotten rid of MediaWiki and switched back to Drupal (now in version 7). The old version of this site is still available here. I’ll be posting another blog entry in the coming days explaining why I’ve chosen to abandon MediaWiki. I’ve already migrated most of the content from my old website, but there is still some to go. The address for my old RSS feed still works; however, Google Reader seems to be ordering the entries non-chronologically, and I am not sure why.

Annotation of multi-page PDFs using open-source tools.

I'm currently teaching a class and receive all of the homework submissions digitally (in the form of PDFs). Printing out the submissions seems like a waste, so I devised a workflow for efficiently grading the assignments digitally by annotating the PDFs. My method relies solely on free, open-source tools.

Here's the general process:

1. Import the PDF into GIMP. GIMP will automatically create one layer of the image for each page of the PDF.
2. Add an additional transparent layer on top of each page layer.
3. Use the transparent layer to make grading annotations on the underlying page. One can progress through the pages simply by hiding the upper layers.
4. Save the image as an XCF (i.e., GIMP format) file.
5. Use my xcflayers2pngs script (see below) to export the layers of the XCF file into independent PNG image files.
6. Use the composite function of ImageMagick to overlay the annotation layers with the underlying page layers, converting to an output PDF.

Below is the code for my xcflayers2pngs script. It is a Scheme Script-Fu script embedded in a Bash script that exports each layer of the XCF to a PNG file.

#!/bin/bash
{
cat <<EOF
(define (get-all-layers image)
(let* (
(all-layers (gimp-image-get-layers image))
(i (car all-layers))
(bottom-to-top '())
)
(set! all-layers (cadr all-layers))
(while (> i 0)
(set! bottom-to-top (append bottom-to-top (cons (aref all-layers (- i 1)) '())))
(set! i (- i 1))
)
(reverse bottom-to-top)
)
)

(define (format-number base-string n min-length)
(let* (
(s (string-append base-string (number->string n)))
)
(if (< (string-length s) min-length)
(format-number (string-append base-string "0") n min-length)
s)))

(define (get-full-name outfile i)
(string-append outfile (format-number "" i 4) ".png")
)

(define (save-layers image layers outfile layer)
(let* (
(name (get-full-name outfile layer))
)
(file-png-save RUN-NONINTERACTIVE image (car layers) name name 0 9 1 1 1 1 1)
(if (> (length layers) 1)
(save-layers image (cdr layers) outfile (+ layer 1)))
))

(define (convert-xcf-to-png filename outfile)
(let* (
(image (car (gimp-file-load RUN-NONINTERACTIVE filename filename)))
(layers (get-all-layers image))
)
(save-layers image layers outfile 0)
(gimp-image-delete image)
)
)

(gimp-message-set-handler 1) ; Send all of the messages to STDOUT
EOF

echo "(convert-xcf-to-png \"$1\" \"$2\")"

echo "(gimp-quit 0)"
} | gimp -i -b -


Running xcflayers2pngs file.xcf output will create output0000.png, output0001.png, output0002.png, ..., one for each layer of file.xcf. Each even-numbered PNG file will correspond to an annotation layer, while each odd-numbered PNG file will correspond to a page of the submission. We can then composite each annotation layer onto its associated page using the following ImageMagick trickery:

convert output???[13579].png null: output???[02468].png -layers composite output.pdf
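For example, a three-page submission exported this way yields six PNGs that pair up as follows (an illustrative listing, assuming each annotation layer sits directly above its page and the layers are exported in that order):

output0000.png   # annotations for page 1
output0001.png   # page 1
output0002.png   # annotations for page 2
output0003.png   # page 2
output0004.png   # annotations for page 3
output0005.png   # page 3

The output???[13579].png glob therefore selects the pages, and output???[02468].png selects the annotation overlays that get composited onto them.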

We can automate this process by creating a Makefile rule to convert a graded assignment in the form of an XCF into a PDF:

%.pdf : %.xcf
	rm -f $**.png
	./xcflayers2pngs $< $*
	convert $*???[13579].png null: $*???[02468].png -layers composite $@
	rm -f $**.png

Gender Representation on the Internet

In which I discover that male names appear much more often than female names on the Internet.

There is a lot that has happened since last August. I successfully defended my Ph.D., for one. I could give a report on our post-defense trip to Spain. I could talk about some interesting work I'm now doing. Instead, I'm going to devote this blog entry to gender inequity.

This all started in November of last year in response to one of Dave's blog posts. Long story short, he was blogging about a girl he had met; in an effort to conceal her identity (lest she discover the blog entry about herself), he replaced her name with its MD5 hash. Curious, I decided to brute-force the hash to retrieve her actual name. This was very simple in Perl:

#!/usr/bin/perl -w
use Digest::MD5 qw(md5_hex);

my $s = $ARGV[0] or die("Usage: crackname MD5SUM\n\n");

system("wget http://www.census.gov/genealogy/names/dist.female.first") unless(-e 'dist.female.first');

open(NAMES, 'dist.female.first') or die("Error opening dist.female.first for reading!\n");
while(<NAMES>) {
if($_ =~ m/^\s*(\w+)/) {
my $name = lc($1);
if(md5_hex(ucfirst($name)) eq $s || md5_hex($name) eq $s ||
md5_hex(ucfirst($name) . "\n") eq $s || md5_hex($name . "\n") eq $s) {
print ucfirst($name) . "\n";
exit(0);
}
}
}
close(NAMES);
exit(1);

Note that I am using a file called dist.female.first, which is freely available from the US Census Bureau. This file contains the most common female first names in the United States, sorted by popularity, according to the most recent census. This script was able to crack Dave's MD5 hash in about 10 milliseconds.

This got me thinking: For what else could this census data be used? My first idea was also inspired by Dave. You see, he was writing a novel at the time. Wouldn't it be great if I could create a tool to automatically generate plausible character names for stories?

#!/usr/bin/perl -w
use Cwd 'abs_path';
use File::Basename;

my ($scriptfile, $scriptdir) = fileparse(abs_path($0));
my $prob;
$prob = $ARGV[0] or $prob = rand();
system("cd $scriptdir ; wget http://www.census.gov/genealogy/names/dist.all.last") unless(-e $scriptdir . 'dist.all.last');
system("cd $scriptdir ; wget http://www.census.gov/genealogy/names/dist.male.first") unless(-e $scriptdir . 'dist.male.first');
system("cd $scriptdir ; wget http://www.census.gov/genealogy/names/dist.female.first") unless(-e $scriptdir . 'dist.female.first');
sub get_rand {
my($filename,$percent) = @_;

open(NAMES, $filename) or die("Error opening $filename for reading!\n");
$percent *= 100.0;
my $nameval = -1;
my @names;
my $lastname;
while(<NAMES>) {
if($_ =~ m/^\s*(\w+)\s+([^\s]+)\s+([^\s]+)/) {
$lastname = ucfirst(lc($1));
if($3 >= $percent) {
last if($nameval >= $percent && $3 > $nameval);
$nameval = $3;
push(@names, $lastname);
}
}
}
close(NAMES);
return $lastname if($#names < 0);
return $names[int(rand($#names + 1))];
}

sub random_name {
my ($male, $p) = @_;
my $firstnameprob;
my $lastnameprob;
do {
$firstnameprob = rand($p);
$lastnameprob = $p - $firstnameprob;
} while($lastnameprob > 1.0);
return &get_rand($male ? 'dist.male.first' : 'dist.female.first', $firstnameprob) . " " . &get_rand('dist.all.last', $lastnameprob);
}
sub flushall {
my $old_fh = select(STDERR);
$| = 1;
select(STDOUT);
$| = 1;
select($old_fh);
}
print STDERR "Male: ";
&flushall();
print &random_name(1, $prob) . "\t";
&flushall();
print STDERR "\nFemale: ";
&flushall();
print &random_name(0, $prob) . "\n";


This script does just that. Given a real number between 0 and 1 representing the scarcity of the name, it randomly generates a name according to the distribution of names in the United States census. Values closer to zero produce more common names, and values closer to one produce rarer names. The parameter can be thought of as the scarcity percentile of the name: a value of $x$ means that the name is less common than a fraction $x$ of the other names. Note, though, that I'm not actually calculating the joint probability distribution between first and last names (for efficiency reasons), so the value you input doesn't necessarily correlate to the probability that a given first/last name combination occurs in the US population.

$ ./randomname 0.0000001
Male: James Smith
Female: Mary Smith

$ ./randomname 0.5
Male: Robert Shepard
Female: Shannon Jones
$ ./randomname 0.99999
Male: Kendall Narvaiz
Female: Roxanne Lambetr

The "Male" and "Female" portions are actually printed to STDERR. This allows you to use this in scripting without having to parse the output:

$ ./randomname 0.75 2>/dev/null
Gerald Castillo Christine Aaron


But I didn't stop there. Here's the punchline of this Shandy-esque recounting:
Inspired by Randall Munroe style Google result frequency charts, I became interested in seeing how the frequency of names in the US correlates to the frequency of names on the Internet. I therefore quickly patched my script to retrieve Google search query result counts using the Google Search API. I generated 60 random names (half male, half female) for increasing scarcity values (in increments of 0.01). The results are pretty surprising:

Note that the $y$-axis is on a logarithmic scale.

As expected, the number of Google search results is exponentially correlated to the scarcity of the name. What is unexpected is the disparity between representation of male names on the Internet versus female names on the Internet. On average, a male name of a certain scarcity will have over 6.6 times more Google results than a female name of equal scarcity!
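For the curious, the data collection looked roughly like the sketch below. The google_result_count helper is a hypothetical placeholder for the actual search-API call (which I have not reproduced here), and the loop bounds are illustrative rather than an exact replay of the 60-name run described above.

#!/usr/bin/perl -w
# Rough sketch of the collection loop; google_result_count() is a
# hypothetical stand-in for "return the estimated number of Google
# results for the given query."
use strict;

sub google_result_count {
    my ($query) = @_;
    # ...call whatever search API you have access to and return its count...
    return 0;
}

for (my $scarcity = 0.01; $scarcity <= 1.0; $scarcity += 0.01) {
    # randomname prints "male<TAB>female" to STDOUT; the labels go to STDERR.
    my ($male, $female) = split(/\t/, `./randomname $scarcity 2>/dev/null`);
    next unless defined($female);
    chomp($female);
    printf("%.2f\t%d\t%d\n", $scarcity,
           google_result_count("\"$male\""),
           google_result_count("\"$female\""));
}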

鯖の味噌煮 (Saba no Misoni)

Software

• Fillets from one large mackerel cut in 4cm pieces.
• 2.5cm ginger cut in matchsticks
• 2 tbsp. mirin
• 1 tbsp. sake
• 1.5 tbsp. sugar
• 1 tbsp. soy sauce
• 1 cup dashi stock (or water)
• 5 tbsp. red (“aka”) miso paste
• 2 scallions cut in 2cm pieces, whites and greens separated.

Algorithm

1. Combine dashi, soy, mirin, sake, 3 tbsp. miso, and sugar in a sauce pan and bring to boil.
2. Add ginger, scallion whites, and mackerel, cover with a tinfoil otoshi buta (drop lid), and simmer on medium heat for 10 minutes, basting every few minutes.
3. Stir in remaining miso and cook for an additional 5 minutes.
4. Remove from heat, add scallion greens, replace otoshi buta, and cool to room temperature in pan.

Азербайджанский Соус (Azerbaijani Sauce)

A recipe for my wife's favorite stew.

Although they are not Azerbaijani, my wife and her family lived in Baku from the mid-1980s through the first half of the Nagorno-Karabakh war (my father-in-law, who was a colonel in the Soviet Army, was stationed there). During that time, my mother-in-law developed an appreciation for, and the ability to cook, Azerbaijani cuisine. One dish that became a favorite of my wife's is called "Azerbaijani sauce." It is more like a stew than a sauce, at least as my mother-in-law interprets it. The main ingredients include eggplant, peppers, tomatoes, cilantro, and a whole chicken.

Because my mother-in-law's version is so delicious, I wanted to learn how to cook it myself. Having mastered the art of cooking through observation, my mother-in-law, like most old-world home cooks, has little use for weights and measures; her recipes are handed down in units of "a pinch of this" and "a splash of that." This makes reproducing the dish in a foreign kitchen rather difficult. One problem with searching for a formal, written recipe is that the word "sauce" collides with many other culinary terms in French (e.g., "sous-chef" and "sous-vide"), which fills Google searches with false positives. For all of these reasons, I decided to pay close attention to my mother-in-law's method. I think I have converged on a fairly stable recipe (with some modifications of my own), which I have written out below. In doing so, I have tried to highlight the changes I made to the original recipe.

Biggest

A solution for digital hoarding.

I have a problem. I admit it. I have a problem deleting files. In the “good times” (viz., when I have gigabytes to spare on my hard drive), I simply don’t bother deleting temporary files. That video I encoded/compressed to MPEG? Sure, I’ll keep the raw original! Why not? Just in case I ever need to re-encode it at a higher bitrate, you see.

Inevitably, I run low on disk space months later, at which point I’ve forgotten where all of those pesky large files are living.

Enter my script, which I simply call biggest. This script will conveniently print the $n$ biggest files that are rooted at a given directory. Here’s an example:

\$ biggest 10
. [92MB]
|- art [15MB]
|  |- .svn [7MB]
|  |  `- text-base [7MB]
|  |     |- heat.png.svn-base [2MB]
|  |     `- SWATipaq.png.svn-base [2MB]
|  |
|  |- heat.png [2MB]
|  `- SWATipaq.png [2MB]
|
|- os [7MB]
|  `- os.pdf [3MB]
|
|- .svn [9MB]
|  `- text-base [9MB]
|     `- proposalpresentation.pdf.svn-base [8MB]
|
|- eas28@palm [14MB]
|- ESultanikPhDProposalPresentation.tar.gz [12MB]
|- APLTalk.pdf [9MB]
|- proposalpresentation.pdf [8MB]
`- proposalhandouts.pdf [7MB]

It is available on GitHub, here:
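The core idea is simple enough to sketch in a few lines of Perl. The following is only an illustration (it merely lists the $n$ largest files under the current directory); the actual script on GitHub also aggregates directory sizes and draws the tree shown above.

#!/usr/bin/perl -w
# Illustrative sketch only: print the n largest files under the current
# directory. The real biggest script also sums directory sizes and
# renders the tree view shown above.
use strict;
use File::Find;

my $n = shift || 10;   # how many files to report
my @files;             # [size-in-bytes, path] pairs

find(sub {
    return unless -f $_;
    push(@files, [ -s _, $File::Find::name ]);
}, '.');

@files = sort { $b->[0] <=> $a->[0] } @files;
my $last = $n - 1 > $#files ? $#files : $n - 1;
foreach my $f (@files[0 .. $last]) {
    printf("%6.0fMB  %s\n", $f->[0] / (1024 * 1024), $f->[1]);
}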