# Evan A. Sultanik, Ph.D.

Evan's First Name @ Sultanik .com

Computer Security Researcher
Trail of Bits

Drexel University College of Computing and Information Technology
Department of Computer Science

## Social Signals

### or, the basis for an article that was nominated for best paper and subsequently rejected.

Looking back on my previous job at The Johns Hopkins University, I am struck by and grateful for the breadth of different topics I was able to research: distributed phased array radar resource scheduling, Linux kernel integrity monitoring, TLS, hypervisors, probabilistic databases, streaming algorithms, and even some DNA sequencing, just to name a few.

Toward the end of my—to abuse an overloaded term—tenure at JHU/APL, I became involved in their social media analysis work. For example, I had some success creating novel algorithms to rapidly geolocate the context of social media posts based solely upon their textual content. But one of the most interesting things I encountered along these lines was a phenomenon that we came to call Social Signals.

I wrote a paper about this, which was simultaneously nominated for best paper and yet rejected from a prestigious Computer Science conference. Read on for the full story.

## Success in OS X

### 10 easy steps (and lots of unnecessary prose) on how to set up a new PowerBook in 48 hours or more.

Five years ago I wrote about my travails solving some networking issues on my Linux laptop. I equated the experience to a classic XKCD comic in which a simple software fix escalates to a life-and-death situation. The same thing happened to me again, this time on Mac OS X. Read on to find out.

## Hashing Pointers

### or: How I Learned to Stop Worrying and Love C++11

For as long as I’ve understood object-oriented programming, I’ve had an ambivalent relationship with C++. On the one hand, it promised the low-level control and performance of C; on the other, it had many pitfalls that did not exist in higher-level managed languages like Java. I would often find myself unsure exactly what the compiler would do under the hood. For example, when passing objects around by value, would the compiler be “smart” enough to rip out the guts of an object instead of making an expensive copy? Granted, much of my discomfort could have been remedied by a better understanding of the language specification. But why should I have to read an ~800-page specification just to understand what the compiler is allowed to do? The template engine and the STL are incredibly powerful, but they can make code just as verbose as Java, and verbosity is one of the primary criticisms leveled at Java. Therefore, I found myself gravitating toward more “purely” object-oriented languages like Java when a project fit that type of abstraction, and falling back to C when I needed absolute control and speed.

A couple of years ago, around the time compilers started having full support for C++11, I started a project that was tightly coupled to the LLVM codebase, which is written in C++. I therefore slowly started to learn about C++11’s features, and I now completely agree with Stroustrup: it’s best to think of C++11 as a completely new language. Features like move semantics give the programmer complete control over when the compiler is able to move the guts of objects. The new auto type-deduction keyword gets rid of a significant amount of verbosity and makes working with complex templates much easier. Coupled with the new decltype keyword, refactoring the types of object member variables becomes a breeze. STL threads now make porting concurrent code much easier. That’s not to mention syntactic sugar like ranged for statements, constructor inheritance, and casting keywords. And C++ finally has lambdas! C++11 seems to be a bigger leap forward from C++03 than even Java 1.5 (with its addition of generics) was from its predecessor.
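As a tiny, self-contained sketch of a few of those features (a toy example of my own, not tied to any particular project), here are auto, a ranged for statement, a lambda, and return-by-move in action:

```cpp
#include <cstddef>
#include <string>
#include <vector>

/* Build and return a vector by value; move semantics (or copy elision)
 * hand its guts to the caller instead of making a deep copy. */
std::vector<std::string> make_words() {
    return {"move", "auto", "lambda"};
}

/* Count the words longer than n characters. */
size_t count_longer_than(const std::vector<std::string>& words, size_t n) {
    auto is_long = [n](const std::string& w) { return w.size() > n; }; /* a lambda */
    size_t count = 0;
    for (const auto& w : words) { /* ranged for; auto deduces std::string */
        if (is_long(w)) {
            ++count;
        }
    }
    return count;
}
```

None of this was expressible so tersely in C++03: the lambda alone would have required a hand-written functor class.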

As an example, I recently needed an unordered hash map in which the keys were all pointers:

```cpp
std::unordered_map<char*,bool> foo;
```

I wanted the keys to be hashed based upon the memory addresses of the character pointers, not the actual strings. This is similar to Java’s concept of an IdentityHashMap. Unfortunately, the STL does not have a built-in hash function for pointers. So I created one thusly:
```cpp
/* for SIZE_MAX and UINTPTR_MAX: */
#include <cstdint>

namespace hashutils {
    /* hash any pointer */
    template<typename T>
    struct PointerHash {
        inline size_t operator()(const T* pointer) const {
            auto addr = reinterpret_cast<uintptr_t>(pointer);
#if SIZE_MAX < UINTPTR_MAX
            /* size_t is not large enough to hold the pointer’s memory address */
            addr %= SIZE_MAX; /* truncate the address so it fits in a size_t */
#endif
            return addr;
        }
    };
}
```

Note that I am using auto here to reduce verbosity, since it is evident that addr is a uintptr_t from the right-hand side of the assignment. The hashutils::PointerHash object allows me to do this:

```cpp
std::unordered_map<char*,bool,hashutils::PointerHash<char>> foo;
```

The neat part is that C++11 has a new using keyword that essentially lets me generically alias that definition:

```cpp
template<typename K,typename V>
using unordered_pointer_map = std::unordered_map<K,V,hashutils::PointerHash<typename std::remove_pointer<K>::type>>;

unordered_pointer_map<char*,bool> foo;
```

Note the use of std::remove_pointer, a great new STL template that gets the pointed-to type of a pointer type.

In another instance, I wanted to have a hash map where the keys were pointers, but the hash was based off of the dereferenced version of the keys. This can be useful, e.g., if you need to hash a bunch of objects that are stored on the heap, or whose memory is managed outside of the current scope. This, too, was easy to implement:

```cpp
namespace hashutils {
    template<typename T>
    inline size_t hash(const T& v) {
        return std::hash<T>()(v);
    }

    /* hash based off of a pointer dereference */
    template<typename T>
    struct PointerDereferenceHash {
        inline size_t operator()(const T& pointer) const {
            return hash(*pointer);
        }
    };

    /* equality based off of a pointer dereference */
    template<typename T>
    struct PointerDereferenceEqualTo {
        inline bool operator()(const T& lhs, const T& rhs) const {
            return *lhs == *rhs;
        }
    };

    template<typename K,typename V>
    using unordered_pointer_dereference_map = std::unordered_map<K,V,PointerDereferenceHash<K>,PointerDereferenceEqualTo<K>>;
}
```

Note that, through the magic of the C++ template engine, this code supports keys that are pure pointers as well as C++11’s new smart pointers.
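For instance, here is a small usage sketch (it restates the definitions above so that it compiles on its own) in which two distinct shared_ptrs to equal strings collide as keys, exactly as intended:

```cpp
#include <memory>
#include <string>
#include <unordered_map>

namespace hashutils {
    template<typename T>
    inline size_t hash(const T& v) {
        return std::hash<T>()(v);
    }

    /* hash based off of a pointer dereference */
    template<typename T>
    struct PointerDereferenceHash {
        inline size_t operator()(const T& pointer) const {
            return hash(*pointer);
        }
    };

    /* equality based off of a pointer dereference */
    template<typename T>
    struct PointerDereferenceEqualTo {
        inline bool operator()(const T& lhs, const T& rhs) const {
            return *lhs == *rhs;
        }
    };

    template<typename K,typename V>
    using unordered_pointer_dereference_map = std::unordered_map<K,V,PointerDereferenceHash<K>,PointerDereferenceEqualTo<K>>;
}

/* Two distinct heap allocations holding equal strings hash and compare
 * as the same key, so the second insert overwrites the first: */
bool dereference_map_works() {
    hashutils::unordered_pointer_dereference_map<std::shared_ptr<std::string>,int> m;
    auto a = std::make_shared<std::string>("key");
    auto b = std::make_shared<std::string>("key"); /* different address, same contents */
    m[a] = 1;
    m[b] = 2; /* overwrites the entry keyed by a */
    return m.size() == 1 && m[a] == 2;
}
```

With the default std::hash over the shared_ptr itself, the two entries would have remained distinct, because the two allocations have different addresses.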

As another example of the aforementioned auto keyword and ranged for statements, this is how easy it is in C++11 to hash an entire collection (e.g., a std::vector or std::set):

```cpp
namespace hashutils {
    class HashCombiner {
    private:
        size_t h;
    public:
        HashCombiner() : h(0) {}

        template <class T>
        inline HashCombiner& operator<<(const T& obj) {
            h ^= hash(obj) + 0x9e3779b9 + (h << 6) + (h >> 2);
            return *this;
        }

        operator size_t() const { return h; }
    };

    /* hash any container */
    template<typename T>
    struct ContainerHash {
        size_t operator()(const T& v) const {
            HashCombiner h;
            for(const auto& e : v) {
                h << e;
            }
            return h;
        }
    };
}
```

Then, to make all sets hashable (and thereby valid to be used as keys in a map), simply add this:

```cpp
namespace std {
    template<typename... T>
    struct hash<set<T...>> : hashutils::ContainerHash<set<T...>> {};
}
```
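As a usage sketch (again restating the definitions so it stands alone, and assuming only the standard library), a std::set<int> can now key an unordered_map, and the order in which the set’s elements were listed is irrelevant:

```cpp
#include <set>
#include <string>
#include <unordered_map>

namespace hashutils {
    template<typename T>
    inline size_t hash(const T& v) {
        return std::hash<T>()(v);
    }

    class HashCombiner {
    private:
        size_t h;
    public:
        HashCombiner() : h(0) {}

        template <class T>
        inline HashCombiner& operator<<(const T& obj) {
            h ^= hash(obj) + 0x9e3779b9 + (h << 6) + (h >> 2);
            return *this;
        }

        operator size_t() const { return h; }
    };

    /* hash any container */
    template<typename T>
    struct ContainerHash {
        size_t operator()(const T& v) const {
            HashCombiner h;
            for(const auto& e : v) {
                h << e;
            }
            return h;
        }
    };
}

namespace std {
    template<typename... T>
    struct hash<set<T...>> : hashutils::ContainerHash<set<T...>> {};
}

/* Sets can now serve as keys in an unordered_map: */
bool set_keys_work() {
    std::unordered_map<std::set<int>, std::string> m;
    m[{1, 2, 3}] = "first";
    m[{3, 2, 1}] = "second"; /* the same set, so this overwrites "first" */
    return m.size() == 1 && m[{1, 2, 3}] == "second";
}
```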


I realize that this post is a collection of rather mundane code snippets that are nowhere near a comprehensive representation of the new language features. Nevertheless, I hope that they will give you as much hope and excitement as they have given me, and perhaps inspire you to (re)visit this “new” language called C++11.

## Killing Programs Softly

### A quick script to gently kill intermittently unresponsive programs on OS X.

Seven months ago I asked the following question on StackExchange:

> Sometimes, when I have many applications open doing many memory- and IO-intensive things, my computer inevitably starts to thrash a bit. While waiting for things to settle down, I often decide to close some applications that don’t really need to be open. The trouble is that many of the applications (especially ones that have background/idle processes) tend to be intermittently unresponsive until the thrashing subsides, so it either takes a very long time for them to get focus to send ⌘Q, or when I go to close them by clicking on their icon in the dock I am only presented with the option to force quit. I know that those applications aren’t permanently unresponsive, so I’d prefer to send them a gentle TERM signal and have them quit gracefully when they are able. I usually end up killing them by using pkill from the terminal; however, that’s not always feasible, especially if the terminal is also hosed.
>
> What is the easiest way to gently send the signal to kill a process if/when that process is intermittently unresponsive? (In a situation in which access to the terminal and/or starting a new application is not convenient.)

The question didn’t get much fanfare on StackExchange, and the only answer so far has been to use AppleScript to essentially send ⌘Q to the application programmatically. I’ll bet it’s basically equivalent to using pkill from the terminal to send a SIGTERM to the process, but it might work in the event that my terminal emulator app is also unresponsive. Anyhow, it was the best solution I had, so I matured the idea by making a friendly standalone AppleScript that enumerates all of the currently running processes and prompts the user for which to gently kill. Here it is:

```applescript
local procnames
tell application "System Events"
	set procnames to (get the name of every process whose background only is false and name is not "GentleKill")
end tell
(choose from list procnames with prompt "Which applications would you like to gently kill?" with multiple selections allowed)
if result is not false then
	set tokill to result
	set text item delimiters to {", "}
	display dialog "Are you sure you want to gently kill " & tokill & "?"
	repeat with prog in tokill
		tell application prog to quit
	end repeat
end if
```


## 粤式蒸鱼 (Steamed Fish with Hot Oil)

### In which I adulterate a classic Cantonese dish.

Steamed fish that is finished with a drizzle of hot oil is a classic Cantonese dish. The way I like to make it includes a few Japanese ingredients, and my method is a bit unorthodox.

### Hardware

• 12 inch frying pan with lid
• A small raised rack that will fit inside the pan that can provide a centimeter or so of clearance above the bottom of the pan; the removable rack that came with my toaster oven works perfectly
• A very small bowl
• A small saucepan (it can be tiny)
• A platter for serving that is large enough for the fish and deep enough to hold some sauce

### Software

• 2 tbsp. soy sauce
• 1 tbsp. mirin
• 1 tbsp. shaoxing cooking wine, or you can substitute an additional tbsp. of mirin
• 1 tbsp. sake
• 1/4 cup of either water, katsuo dashi stock, or water plus half of an instant dashi stock packet
• 1 fillet of a large flaky fish. Approximately 1 pound. I often use rockfish or wild striped bass. If you cannot find a large fillet, a smaller whole fish can be used.
• 1/4 cup coarsely chopped cilantro
• 2 in. knob of ginger, peeled and julienned
• 2 scallions, both whites and greens, julienned (or thinly sliced at an oblique angle, similar to the thicker Chinese “horse ear” cut)
• hot chili pepper, sliced thin at an oblique angle (optional)
• 1/8 tsp. five spice powder
• 1/4 tsp. toasted sesame oil
• 2 tbsp. vegetable oil

### Algorithm

• Mix the soy sauce, mirin, shaoxing wine, sake, and water/dashi in the frying pan
• Place the fish into the pan and marinate for 15 minutes, flipping it once halfway through
• Meanwhile, combine the ginger, cilantro, scallion, optional chili pepper, five spice and toasted sesame oil in a small bowl and mix
• Remove the fish from the marinade, place the rack into the pan, and the fish onto the rack
• Bring to a boil
• Once boiling, cover the pan tightly, reduce the heat to simmer, and steam for 8 minutes
• Meanwhile, start heating the vegetable oil in the small saucepan
• When the fish is done steaming, remove it and the rack from the pan, placing the fish on the platter
• Increase heat to high and let the sauce reduce to the consistency of Grade A (runny) maple syrup
• Pour the sauce over the fish and put the herb mixture on top of the fish
• When the vegetable oil is very hot, spoon it over top of the herbs

## Lenticrypt: a Provably Plausibly Deniable Cryptosystem

### or, This Picture of Cats is Also a Picture of Dogs

Back in 2009, I wrote about a thought experiment on how to subvert copyright law via plausible deniability. A couple years ago I expanded on that thought experiment by proposing a seedling idea on how to accomplish it via cryptography. Since then, I’ve slowly been developing that seedling into a functioning proof-of-concept, which has culminated in the creation of the Lenticrypt project:

Lenticrypt can generate a single ciphertext file such that different plaintexts are generated depending on which key is used for decryption:

```
$ python lenticrypt.py -e key1 plaintext1 -e key2 plaintext2 -o output.enc
$ python lenticrypt.py -d key1 output.enc | diff - plaintext1 -s
Files - and plaintext1 are identical

$ python lenticrypt.py -d key2 output.enc | diff - plaintext2 -s
Files - and plaintext2 are identical
```


Unlike alternative plausibly deniable cryptosystems like the recently discontinued TrueCrypt—whose ciphertext size grows in proportion to the number of plaintexts (i.e., hidden volumes) it encrypts—Lenticrypt’s ciphertext size is proportional to the size of the largest plaintext it encrypts. This is because Lenticrypt shares bytes in the ciphertext between each of the plaintexts it encrypts; they are not stored in separate regions of the ciphertext. Therefore, there is no straightforward way to estimate the number of plaintexts that are “hidden” inside a single ciphertext.

In fact, Lenticrypt has the theoretical property that, under reasonable assumptions, there is always a near-100% probability that there exists a key in the public domain that will decrypt a given ciphertext to any desired plaintext, even if that key is not known. Therefore, even if an incriminating plaintext is revealed, the author of the ciphertext can plausibly deny having created it, because there is a non-zero probability that the plaintext was legitimately decrypted by random chance.
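The intuition is easiest to see with a one-time pad, which exhibits the same property in its purest form. The following toy sketch is my own illustration, not Lenticrypt’s actual cryptosystem: for any ciphertext and any same-length target plaintext, a “decrypting” key can always be constructed after the fact.

```cpp
#include <cstddef>
#include <cstdint>
#include <string>
#include <vector>

/* XOR a byte string against a key stream (XOR decryption is its own inverse). */
std::string xor_bytes(const std::string& data, const std::vector<uint8_t>& key) {
    std::string out = data;
    for (size_t i = 0; i < out.size(); ++i) {
        out[i] = static_cast<char>(static_cast<uint8_t>(out[i]) ^ key[i]);
    }
    return out;
}

/* For ANY ciphertext and ANY same-length target plaintext, derive the key
 * that "decrypts" the former into the latter. */
std::vector<uint8_t> key_for(const std::string& ciphertext, const std::string& target) {
    std::vector<uint8_t> key(ciphertext.size());
    for (size_t i = 0; i < key.size(); ++i) {
        key[i] = static_cast<uint8_t>(ciphertext[i]) ^ static_cast<uint8_t>(target[i]);
    }
    return key;
}
```

Since such a key always exists, possession of the ciphertext alone proves nothing about which plaintext its author intended.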

More technical details on the cryptosystem as well as additional use-cases are described in Issue 0x04 of The International Journal of PoC||GTFO.

Note that Issue 0x04 of PoC||GTFO is a polyglot: among other things, you can also treat the .pdf file as if it were a .zip. If you extract it, there are some neat Lenticrypt-related Easter eggs inside the feelies.