A Curious Connection Between Cubing and Cryptography

This blog connects two of my favourite pastimes. It shows that solving the world’s most popular puzzle, the Rubik’s cube, has a perhaps surprising relationship to the science of cracking secret codes, cryptology. To make it concrete, my cryptology focus will be on how the Polish cryptographers broke the Enigma cipher in World War II. I will show parallels between how hard it is to “brute force” all possible combinations in both cases, versus the clever techniques that give us shortcuts to finding the solutions fast.

Both topics can involve advanced mathematics in combinatorics and group theory, but I will keep it somewhat light so as to be accessible to a broad audience. I hope this comes off as a fun read, with a sprinkling of mathematics and history.

Remark: cryptology and cryptography are sometimes considered interchangeable, but cryptology explicitly includes cracking ciphers.

The Rubik’s Cube

I’m going to assume that everybody knows what the Rubik’s cube is. The number of possible combinations is 43,252,003,274,489,856,000 (approx 43 quintillion). Nobody has ever tried them all: if they tried, they would die long before they finished. But it helps to understand where this number comes from.

The cube has 6 middle pieces each with one face, 8 corners each with three faces, and 12 edges each with two faces. You can physically pull out the corners and edges and you are left with only the 6 middle pieces that are fixed:

The pieces of the cube.

Now imagine we wanted to achieve the ultimate mixup by putting the pieces back together in a random order. This would give us a truly random mixup, and we can use this idea to calculate the total number of possible mixups and hence the total number of combinations mentioned above… almost! There is a small problem with this logic, which we will get to soon, but let’s try the approach anyway.

First consider putting the edges back in place. Take an edge, such as the blue/red edge, and put it in some place on the cube. How many places can it go? There are 12 spots for edges, and it can go in any one of those spots. But we can also insert it two ways: either with the blue face facing upward (or forward/downward/backward, depending on where you are putting it) or the red face facing that way. So that’s 12×2 possibilities for the first edge.

An edge can be inserted into some position two different ways.

What about the next edge? There are only 11 spots left and it can go in any one of them. Likewise, there are 2 ways to put it in, so that’s 11×2. For the third edge, there are 10 spots and 2 ways to put it in. You get the idea. In total there are

(12×2)×(11×2)×(10×2)×(9×2)×(8×2)×(7×2)×(6×2)×(5×2)×(4×2)×(3×2)×(2×2)×(1×2)

combinations. This can be written more compactly by grouping terms and using the factorial operator: 12! × 2^12.

Now we can do the same thing with the corner pieces. We grab the first corner and notice eight possible positions to put it in. Depending upon which face is up (or forward/down/backward), there are three different ways to insert it. That means 8×3 possible ways for the first corner. Likewise, there are 7×3 ways for the second corner. You can do the rest; the total number of possibilities for the corners is:

(8×3)×(7×3)×(6×3)×(5×3)×(4×3)×(3×3)×(2×3)×(1×3)

The short way to write this is 8! × 3^8.

A corner can be inserted into some position three different ways.

So the total number of ways of putting it back together is the product of these values: 12! × 2^12 × 8! × 3^8. Almost there, but as I said before, this is not the real total number of mixups! The reason is that turning the cube in the ways it allows cannot reach all of these reassemblies. It turns out that the last corner you insert cannot go in all three possible ways: only one of them agrees with a mixup reachable by turning, so we need to divide the above value by 3. Similarly, the orientation of the last edge is forced (divide by 2), and you cannot swap just two pieces while leaving everything else alone (divide by 2 again). In other words, we have to divide the quantity above by 3 × 2 × 2 = 12, and hence the total number of combinations is:

(12! × 2^12 × 8! × 3^8) / 12 = 43,252,003,274,489,856,000

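For anyone who wants to check the arithmetic, a few lines of Python reproduce the figure (just a sanity check, nothing more):

    import math

    edges   = math.factorial(12) * 2**12   # 12! edge placements times 2^12 orientations
    corners = math.factorial(8) * 3**8     # 8! corner placements times 3^8 orientations
    print(edges * corners // 12)           # 43252003274489856000
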
With that background, we can now talk about how we can solve it faster without going through all these combinations!

Solving the Cube Faster With Shortcuts

Before we discuss the mathematics, let’s throw out a few fun facts. The first is that any mixup can be solved in at most 20 moves (where a half-turn is counted as a single move). This is known as God’s Number, and it was proved by Tom Rokicki et al., using computing power donated by Google (see also https://www.cube20.org/ and the scientific publication). Nowadays there are online solvers such as https://rubiks-cube-solver.com/ that can show you a solution of at most 20 moves for the mixup that you have.

Unfortunately, these online solvers are built for computers: their methods are too complex for humans to memorise. Instead, the speed solvers of today use methods like CFOP or Roux, and average somewhere in the mid 40s to low 50s in move count. The current official WCA (World Cube Association) world record for average of 5 solves is 5.09 seconds by Tymon Kolasiński from Poland; however, there are a handful of cubers who could beat the record on a really good day, so we could be seeing sub-5 second averages in the near future. Tymon uses the CFOP method.

For our purposes, we just want to explain concepts and connect them to breaking the Enigma cipher in World War II. The easiest way to do that is to look at a simpler solution method known as corners first.

Corners first is a very old way of solving the cube: few people use it today, and most of us who do are old farts. The first WCA world record of 22.95 seconds was set using the corners first method nearly 40 years ago by my hero Minh Thai. 22.95 seconds is quite slow by modern standards, but it’s a bit of an unfair comparison because Minh had to use the original Rubik’s brand cube, which is very hard to turn. Modern cubers are free to use any brand they want, and these cubes turn far more smoothly than what Minh Thai used. Having said that, I don’t think anyone would disagree that modern cubers have better methods than the classic corners first and are superior solvers.

The idea of corners first is, as the name suggests, to first get the corners in place, and then solve the edges. The main point is that corners and edges can be solved independently: once the corners are in place, one can move edges around without disturbing the corners. The mathematical consequence is that if we work on the two parts independently, the maximum number of tries we need is not the number of possibilities for corners multiplied by the number of possibilities for edges, but instead the sum of the two: 8! × 3^8 / 3 + 12! × 2^12 / 4. In other words, if we do one part first and then the other, the work factors add, but if we work on both at the same time without taking advantage of the independence, they multiply (see the comparison below).

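To see how dramatic the difference is, here is a quick Python comparison of the “solve both at once” product versus the “corners first, then edges” sum (a rough illustration of the work factors, not a model of any actual solving method):

    import math

    corners = math.factorial(8) * 3**8 // 3      # reachable corner arrangements
    edges   = math.factorial(12) * 2**12 // 4    # reachable edge arrangements

    print(corners * edges)    # both at once: 43,252,003,274,489,856,000
    print(corners + edges)    # one part, then the other: 490,585,818,240
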
This already is a huge speed improvement, but it’s still an awfully big number. We learn other tricks to make the number of moves smaller. Getting corners in place can take at most 11 moves, but for me it probably takes around 20. I then get the edges for the top and bottom layers fairly easily, and last I solve the middle layer independently of the top and bottom layers. The more work you can make independent of other work, the faster it becomes. This is the key takeaway from this section, and we will see that the same concept was used to break the Enigma cipher.

For those who want to learn more about speed cubing today, I highly recommend the Speedcubers documentary on Netflix. I promise, my hero Feliks Zemdegs is every bit as cool as he seems in this documentary.

The Enigma Cipher

While the Americans, British, and likely other countries have their own success stories about how breaking codes made a big difference to the outcome of the war, I am particularly drawn to the way Marian Rejewski and the Polish cryptographers used the theory of permutations (group theory) to find mathematical weaknesses in the German Enigma cipher. This knowledge was shared with Britain and France shortly before Germany invaded Poland in 1939. The information from the Polish cryptographers was the main building block that Alan Turing and the British cryptographers needed to make further advances in breaking the Enigma cipher.

Polish cryptographer Marian Rejewski from https://en.wikipedia.org/wiki/File:Marian_Rejewski.jpg.

First we need to understand the Enigma cipher before we draw parallels with the Rubik’s cube. Below is the Wikipedia photo of an Enigma machine. I will give a simplified explanation to keep the blog from being too long.

Enigma machine: photo by Alessandro Nassiri.

The following 3 minute YouTube video shows how the cipher works:

In the back, there are some rotors with numbers on them. Think of them as mapping letters to other letters via electric circuits. The rotors work together: if one maps the letter A to F, the next maps F to X, and the third maps X to K, then put together A will map to K. However, when a letter is encrypted the rotors also rotate, which changes the mapping. Example: instead of A being mapped to F, B will take that spot and A gets mapped to another letter. The right-most rotor rotates every time a letter is encrypted, the middle rotor rotates once each time the right-most completes a full 26 steps, and the left-most rotor rotates once each time the middle completes a full 26 steps. This means that the three rotors combined can provide 26×26×26 = 17,576 combinations to encrypt messages.

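To make the letter mapping and the stepping concrete, here is a toy Python model of just the behaviour described above; it ignores the reflector, ring settings, and the real machine’s double-stepping, so it is only an illustration of the idea:

    import random
    import string

    ALPHA = string.ascii_uppercase
    rotors = [list(ALPHA) for _ in range(3)]
    for r in rotors:
        random.shuffle(r)              # each rotor is a fixed scramble of the alphabet
    offsets = [0, 0, 0]                # left, middle, right rotor positions

    def encrypt_letter(c):
        for i in (2, 1, 0):            # pass through the right, middle, then left rotor
            c = rotors[i][(ALPHA.index(c) + offsets[i]) % 26]
        # step like an odometer: right rotor every letter, carrying into the middle and left
        offsets[2] = (offsets[2] + 1) % 26
        if offsets[2] == 0:
            offsets[1] = (offsets[1] + 1) % 26
            if offsets[1] == 0:
                offsets[0] = (offsets[0] + 1) % 26
        return c

    print("".join(encrypt_letter(c) for c in "AAAAA"))   # the same letter usually encrypts differently each time
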
That by itself is not a big enough number to prevent brute-force decryption, so Enigma includes the plugboard that is seen at the front. This involves 10 plugs, each of which swaps two letters around (remark: Rejewski talks of 6 plugs being used, but other accounts say 10: perhaps Germany upped the number of plugs later in the war). I’ll skip the mathematical explanation, but that results in a total of 26! / (6! 10! 2^10) = 150,738,274,937,250 plugboard combinations.

This means that the total number of possible combinations for the plugboard and the three rotors was:

26^3 × 26! / (6! 10! 2^10) = 2,649,375,920,297,106,000

This is over 2 quintillion combinations, whereas the Rubik’s cube has over 43 quintillion combinations! It helps to compare these values visually, side-by-side:

             2,649,375,920,297,106,000 : Enigma Combinations
                        versus
            43,252,003,274,489,856,000 : Rubik's Cube Combinations

These numbers are on similar orders of magnitude already, but there were additional factors involved with Enigma that made the number of combinations even higher. For example, there were at least 5 different rotors available, from which 3 were selected and placed in any order (the figure above does not even consider the order of the rotors). This by itself results in a multiplier of at least 5 × 4 × 3 = 60, bringing the total number of combinations to almost 159 quintillion. Okay, I’ll write it out:

       158,962,555,217,826,360,000 : Enigma Combinations (5 rotors to
                                                          choose 3 from)
                    versus
        43,252,003,274,489,856,000 : Rubik's Cube Combinations

Remark: other sources may give different figures: it depends upon the number of plugs and rotors used, and these factors were not constant throughout the war.

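As with the cube, the Enigma figures above are easy to reproduce. A short Python check of the arithmetic (just the counting shown above, nothing about the actual cipher):

    import math

    rotor_positions = 26**3                               # 17,576
    plugboard = math.factorial(26) // (math.factorial(6) *
                math.factorial(10) * 2**10)               # 150,738,274,937,250

    print(rotor_positions * plugboard)        # 2,649,375,920,297,106,000
    print(60 * rotor_positions * plugboard)   # 158,962,555,217,826,360,000 with 3 of 5 rotors in any order
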
Now remember, in the Rubik’s cube case, we saw that being able to work on corners independently of edges immediately squashed the total number of combinations down: it turns the multiplication into a sum. With that background, guess what the Polish cryptographers discovered…. Time’s up. They discovered a trick that allowed them to solve the rotor positions independently of solving the plugboard! As a consequence, the multiplication becomes a sum in terms of the number of combinations they actually had to try. Similar to the Rubik’s cube, once they knew the rotor positions, they had other shortcuts for figuring out the plugboard that made that part quite easy.

How the Polish cryptographers did this involves a lesson in group theory: we will not go that deep. But as an overview, they needed about 80 messages encrypted with the same daily encryption key to derive a “fingerprint” of the rotor positions. Rejewski called this fingerprint the “characteristic.” By constructing a huge catalogue of all possible characteristics for all the rotor positions, they could quickly look up a small set of possible rotor positions for the daily key. It required a little extra effort to find the exact one. Building the huge catalogue was done with a machine that they built called the cyclometer. More info can be found in this pdf or Simon Singh’s The Code Book.

The cracking of the Enigma cipher played a very important role for the Allies in World War II. Prior to these discoveries, the Enigma cipher was thought to be unbreakable due to the large number of combinations for the key. In fact, it was mathematically flawed, as the Polish cryptographers demonstrated. Some people believe cracking Enigma was largely due to operator error. While it is true that there were a number of operator errors during the war, Enigma is not secure by modern standards regardless: it is vulnerable to known-plaintext attacks.

Summary

We could think of solving the Rubik’s cube as similar to breaking cryptographic codes: both involve finding shortcuts among the large number of possible solutions in order to find the answer quickly. Although my specific focus here was on similarities between solving the cube and how the Enigma cipher was broken, this same style of thinking is often used to break other ciphers. For the mathematical/programmer reader, I give other examples in my blog So, You Want to Learn to Break Ciphers.

There are people who believe that solving the Rubik’s cube is a useless skill. I strongly disagree with that: it teaches rigour, creativity, and complex problem solving. My focus in this blog was on its value in my own specialty, cryptography, but the skill of breaking down a complex problem into small manageable steps will benefit people in many different ways throughout their lives.

How I Avoided Management for 25 Years

A recent post that showed up in reddit’s /r/programming, What Happens To Developers Who Never Go Into Management?, got my mind buzzing on all the ways I had to re-invent myself to avoid going into management in the security industry. Being in my 50s, I do technical level work with people who are often half my age, and I am often older than my manager and sometimes my manager’s manager and even higher levels. I have continued to dodge the bullet of going into management, but only in the past couple years have I considered that maybe I should give management a try.

This is a blog about what to expect if you want to avoid management. It’s largely about my personal experiences. For the last decade I have been in AppSec, but prior to that I was involved with embedded security and cryptography. If there is one takeaway from this blog, it is that you have to keep on re-inventing yourself while learning new technologies and industry directions. Maybe it’s obvious when you see it written out, but it may not be so obvious when you are early in your career.

What Happens as you Become Older and Avoid Management?

The short answer is that the technologies and expertise that we have mastered often become less and less relevant, as new technologies come to take their place and new skills are needed. Many skills we learned at University become obsolete. The new, young developers learn the latest technologies and skills, and we have to find the time to learn those ourselves in order to remain relevant.

But it’s not all doom and gloom for the old fart. We have had many years of experience to see how the industry changes. We understand that change will continue to happen, and if we position ourselves right, we can help direct the change. We can become thought leaders (even without being a manager), which requires not only insights, but excellent communication skills. We’ve learned the value of team work, and understand the necessity for practical solutions that meet the needs of various teams within an organisation. We also tend to have a sense of when something is just not working the way it should, which gives us the opportunity to take the lead for change. The catch is that none of this happens all by itself: we need to grab the bull by the horns to make those changes happen.

Personal Example of the Above Points

Here’s a breakdown of my own career and how I had to constantly re-invent myself:

25 years ago: After grabbing a Masters degree in Computer Science and a second one in Mathematics, I started my career in industry. I was an expert at C programming, algorithms, parallel programming and making code fast. I knew C programming pitfalls (such as buffer overflow) and how to exploit them. I was good at various assembly languages and cryptography. I had the niche skill of being able to break cryptosystems. I also knew embedded programming. I had the skills that many Silicon Valley companies wanted, yet the compensation was not much more than beans. I got to do what I loved to do all day at my job and was rarely interrupted. I was happy. Back then we did the waterfall development methodology, which seemed right at the time.

We developers who worked back then had the very frustrating experience of working on early versions of Microsoft Windows, which were awful beyond belief. Cygwin was a partial escape from this horror. Eventually Linux became more common, but back then we had to set up our computers to dual boot, and it was painful to switch between the two.

15 years ago: Object oriented programming became the cool thing. My beloved C language was being pushed out by C++, which I learned to read but hated. Embedded programming jobs were hard to come by where I was living, but I found one. The catch was that I spent the whole 4 years at that job doing security code review. Unfortunately there was no concept of “shift left” back then, and I was stuck at the tail end of a broken attempt at bringing security into a product. I was still able to use my cryptographic expertise for some cool stuff. I started to learn Python and Java.

A nice development was the availability of Virtual Machines that let us run Linux within Windows. I still preferred Linux, but Windows XP was getting close to acceptable from a usability point-of-view (not from a security viewpoint).

10 years ago: I had trouble finding jobs in embedded security where I lived, and I could not find a programming position that would pay the bills, largely due to my increasingly obsolete languages of expertise. I decided to switch to web security. I started learning about OWASP, shifting left, the Secure Development Life Cycle (SDLC), iOS programming, the test pyramid, very basic web penetration testing, OAuth 2, and the basics of AWS and cloud computing. I continued to learn more Java and Python but never became fluent. I started to learn about SAST tooling. I also gained a strong appreciation for the Agile development methodology.

An interesting point here is that I warned about web security supply chain attacks before it was the cool thing to talk about (remark: vulnerable 3rd party libraries were just making the OWASP Top 10 when I made that StackOverflow post, but that was for accidental vulnerabilities — there was hardly any mention of intentional backdoors being put in 3rd party software back then). This is because in embedded security, we are very paranoid about the 3rd party software that we use. It was a bit of a culture shock to see that web security had little concern for it at the time, but it certainly has been hitting the headlines frequently in recent years.

5 years ago: I went into consulting. The cool phrase of the day was DevSecOps. I got lots of hands on experience with security tooling, mainly SAST and DAST, and how to automate the testing. I became better at penetration testing. I also learned many new languages, frameworks, and technologies: JavaScript, C#, .Net Core, Android, Nodejs, Docker, Kubernetes, CVSS, etc…. I got fairly strong knowledge of the OWASP Top 10 and how to prevent these problems. I also learned about Microservices.

There were some really big surprises to me at this time. For example, JavaScript was a language to be taken seriously, something I never believed before. I learned to program in it, both front end and back end. I became okay at it, except the async await stuff. I learned about the Document Object Model and jQuery. Another surprise to me was that Microsoft, in my view, was turning into a respectable company (yep, I said it!). I had previously considered them to be evil due to suffering through the pains of their monopoly. But Microsoft made some major changes that were required for them to survive in a cloud computing world against giants like Google and Amazon. Suddenly I no longer hated them. They were actually very mature in the SDLC. They had good documentation, and their C# and .Net Core seemed very cool, especially with the efforts to bake in security by default. The more I compared C# to Java, the less I liked Java.

Recent years / now:

My appreciation for communication skills started during my consulting time, but I continued to develop it beyond then. I was seeing so-called “experts” and “researchers” getting publicity, and too many times they were getting credit for things that were not particularly innovative.

My blog was created to communicate my thoughts on security, and to make certain obscure but useful research results better known (example: Fighting Bots with the Client-Puzzle Protocol). I spent a lot of time reading research and staying abreast of industry developments. I became increasingly focused on user-friendly security, which others are beginning to grab onto, but I believe I was there among the earliest (in fact, this was something I had been talking about for approximately 10 years). I was able to bring some of these ideas to real products for great change at a previous employer. Also, with some colleagues, I started the AppSec Australia meetup to help increase the sharing of AppSec knowledge in Australia.

I’ve always believed in security education, but the more I improve my communication skills, the more impact I see it having. At my previous job, we had a lot of success with security education. There was not much whining coming from the AppSec group at that company: I truly believe we were doing things very effectively for such a small team.

Then Versus Now

What skills did I have 25 years ago that I still use today? Cryptography, shell scripting, and not much more. I still use vim as my primary editor, but I am trying to push myself to VS Code. I had to re-invent myself several times to stay relevant. I love learning, so that’s okay with me. But it can be frustrating to learn a new language that someone half your age has mastered right out of University! I’ll get good at that async await stuff eventually!

Truthfully, I had many areas of expertise back then that were matched by very few people, yet I’m not sure I can say that about anything now. What I can say is that history has brought me breadth and depth of skills, a vision for how things change, and the confidence to step up and take the lead for change.

One thing to keep in mind if you want to avoid management is that University does not “train” you; instead it “educates” you. Training IMHO means learning a skill, whereas educating involves learning a skill and learning to think and solve problems. If you did your education right, you can adapt to change. The big problem we have as we get older is having less time to do so. This is due to having families and weakening bodies that prevent us from doing the long, crazy hours that the younger people can. It is my belief that experience is the key to using our time more wisely.

If you copied any of these popular StackOverflow encryption code snippets, then you coded it wrong

Security code review is a task that I do on a daily basis, and have been doing for the last thirteen and a half years. In this time, I have reviewed several hundred code bases, and have come across cryptographic code many times. More often than not, there have been security issues in the cryptography code that I have reviewed. Very often I have traced these bogus code snippets to StackOverflow answers that got highly upvoted. In this blog, I point out those bad snippets and explain why they are wrong. I also give advice on how to do it right.

I’m not doing this to shame those who have made mistakes: instead, I want to do my part to help fix the problem. As an AppSec specialist, I get really tired of having the same discussions over and over. I try real hard to make it easy for people to do the right thing: I point them to code that is safe to use, such as Luke Park’s Secure Compatible Encryption Examples. Despite this, there are the occasional teams who just continue to resist, even before the code has made it to production, which is the best time to fix it. This makes everybody’s lives more difficult: it wastes my time to have to explain to them why their code is wrong, and it forces the teams to do a lot more work later, because once the bad cryptography is in production, they need a migration plan to fix it.

I do believe there is hope for fewer cryptography security problems in the future. Many cryptographic libraries now have improved APIs, a change largely driven by Dan Bernstein’s NaCl. Also, StackOverflow is no longer pinning the accepted answer as the top answer. This gives people the opportunity to upvote better answers than what was originally accepted. You can take that as a cue. Let’s be nice: upvoting the good is better than downvoting the bad.

Now let’s get to the nitty-gritty.

Example 1: AES-128 CBC Mode in Java

The link is here. At the time of writing, this answer has 248 upvotes: It is both the most popular answer and the selected answer. To begin to understand why it is wrong, look at the types of the key and the initVector: they are strings. They should be byte arrays.

Keys and initialization vectors (IVs) should be byte arrays, not strings.

The string for the key is a common problem I see. If you are using a string, then you have a password. Passwords are not cryptographic keys, but they can be converted to cryptographic keys with a password based key derivation function. What should not be done is exactly what was done here: copying the password string (which was incorrectly labelled a ‘key’) into a SecretKeySpec structure.

The string for the IV is a less common mistake, but it is still wrong. Before I explain the fix, let’s look at the calling function:

Hard-coded password (wrongly labelled as a ‘key’) and hard-coded IV.

The problems are:

  • Hard-coded password (mislabeled as a key).
  • Hard-coded IV that is type String.

I know some people are going to argue that this is just a proof-of-concept and any sane individual would know how to do the right thing for the password and IV. My response is that these people obviously do not review code for a living because I have seen code like this in reputable places that you would never expect to make mistakes like this. It’s a real problem and it is very common.

To illustrate the difference between a key and a password: a 128-bit key should have 128 bits of entropy, so breaking it requires 2^128 effort. The solution here has chosen a password from the set of upper case letters, lower case letters, and digits. If they had chosen a completely random password from this set, then that’s 62^16 possibilities, which is on the order of 2^95. That means a cracking effort of at most 2^95, but actually it gets worse, because people do not choose passwords randomly. Way too often I see passwords used for encryption that are very predictable, followed by something like ‘123!’. It is sad that people have been conditioned to believe that following deprecated password security guidance (“password composition rules“) somehow makes things secure. It doesn’t: don’t do it.

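The gap is easy to quantify with a couple of lines of Python (a back-of-the-envelope check, nothing more):

    import math

    password_space = 62**16            # random 16-character password over A-Z, a-z, 0-9
    key_space = 2**128                 # a real 128-bit AES key
    print(math.log2(password_space))   # about 95.3 bits
    print(math.log2(key_space))        # 128.0 bits
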
This is important: do not use the same IV twice — it breaks the security properties. This is explained in my blog Top 10 Developer Crypto Mistakes. I get into way too many arguments about this. Especially when someone responds with “It’s okay — we will put the IV in the vault,” my frustration level sky-rockets. No, it’s not okay: IVs are not intended to be secret, and even if they are hidden, it does not fix the security problem. Quit rolling your own crypto! Unfortunately so many do not understand that they are rolling their own crypto.

There’s one more problem with this code: it is using unauthenticated encryption. In my Top 10 Developer Crypto Mistakes blog, I listed #7 as “Assuming encryption provides message integrity.” I now accept the philosophical advice used in the design of NaCl: developers should not need to know the difference; instead, authenticated encryption should always be used, which covers both bases. In this case, AES in GCM mode solves the problem, whereas CBC mode does not.

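To show roughly what the fixes look like, here is a minimal Python sketch (the original answer is Java, but the ideas carry over; this assumes the third-party cryptography package): a randomly generated key, a fresh random nonce for every encryption, and authenticated encryption with AES-GCM.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)    # a real random key, not a password string
    nonce = os.urandom(12)                       # fresh nonce (IV) for every single encryption
    aesgcm = AESGCM(key)

    ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
    # store or send the nonce alongside the ciphertext; it does not need to be secret
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)   # raises if the data was tampered with
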
A much better answer in the same thread is here by user Patrick Favre. Please review it yourself and if you accept it as a good answer like I did, then upvote it.

Example 2: AES-256 in CBC mode in C#

The code is here.

At the time of writing, this is the second most popular answer with 117 upvotes. It was not the selected answer, but it is still very popular.

The first problem in the code is obvious, the others may not be. See screenshot:

A hard-coded password (wrongly labelled as a key) but what’s wrong with the IV? Read below.

So yes, another case of a hard-coded password, which is mislabeled as a key. But the good news is that they used a password based key derivation function (Rfc2898DeriveBytes, which is PBKDF2 using HMAC_SHA1) to turn it into a real key! That’s kind of good, but since they are not specifying an iteration count (you need to scroll to the right on the StackOverflow page to see this), the default value of 1000 is used, which is very low (i.e. not very secure) by modern standards.

Now if we scroll to the right, we see what they fed into Rfc2898DeriveBytes for the salt:

Somebody’s name has been used for a hard-coded salt.

A hard-coded salt! Salts do not need to be secret, but in many cases you should not use the same salt twice. Regardless, it’s best not to have that salt hard-coded.

True story: I saw this exact snippet in a code review for a client. I remember looking at the salt and thinking “that looks very ASCII to me.” So I wrote a program in C to print it out as a string, and the result was: “Ivan Medvedev.” I then Googled the name and came to this snippet on Stack Overflow! I just wondered who this poor Ivan Medvedev is who has his name permanently engraved in insecure code!

So let’s get to the last problem: what’s wrong with the IV? Answer: the IV is also an output of the key derivation, so if the password and salt are constant inputs, then you always get the same IV. Hence you are using the same IV more than once, which breaks the security of any block cipher in CBC mode.

If the author had simply copied the Microsoft Rfc2898DeriveBytes example, they would have been much closer to a good answer. But there is still the unauthenticated encryption problem that comes with CBC mode, and Microsoft’s example also uses too few iterations.

The somewhat good news is that the selected answer on StackOverflow is better and has many more upvotes. However, it still has problems: the iteration count for Rfc2898DeriveBytes( ) is too small and it uses unauthenticated encryption (CBC mode).

What is a good iteration count is not set in stone. NIST says “the iteration count SHOULD be as large as verification server performance will allow, typically at least 10,000 iterations.” OWASP says at least 720,000 iterations. That’s quite a gap between the two: you might want to read what Thomas Pornin says. Regardless, anything less than 10,000 is definitely too low by modern standards.

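If a password really must be the starting point, the derivation step is simple to do properly. A hedged Python sketch (the iteration count is only an example; follow current NIST/OWASP guidance for the real value):

    import os, hashlib

    password = b"correct horse battery staple"   # whatever the user supplies
    salt = os.urandom(16)                        # random, stored with the ciphertext, never hard-coded
    iterations = 600_000                         # illustrative only
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations, dklen=32)
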
Example 3: Triple DES in Java

The code is here.

This question was asked in 2008, so you might excuse some of the problems. At this time, NIST was recommending AES but still allowing Triple DES in some cases. Nowadays triple DES is completely deprecated because the block size is too small, but I still see it in code sometimes.

Let’s look closer at the top answer that has 68 upvotes at the time of writing:

MD5 should never be used, especially the way it is used here. Also, a hard-coded password and a zero IV are problems.

Again, a hard-coded password, but this one is transformed into a key using MD5, which has been deprecated since 1996. An additional problem is that MD5 is not a password based key derivation function, so it has no business being used as one, as we see here. PBKDF2 had been a standard since 2000 and should have been used.

This code uses a zero IV, which is a very common problem that I have seen in many code bases. It is also unauthenticated encryption, but that may be excusable because the concept of authenticated encryption was not well known at this time.

Please never use any code like this.

Example 4: AES in Java

The code is here. This one has 15 upvotes at the time of writing.

No mode of operation was specified.

You may notice that he does not specify an IV. That’s probably because he hasn’t specified a mode of operation, and for most cryptographic providers, that means getting a default ECB mode of operation. You should never use ECB: Remember the encrypted penguin picture?

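The penguin problem is easy to demonstrate. A small Python sketch (again assuming the third-party cryptography package) shows that ECB maps identical plaintext blocks to identical ciphertext blocks, which is exactly what makes the penguin visible:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

    data = b"16 byte block!!!" * 2               # two identical plaintext blocks
    ct = encryptor.update(data) + encryptor.finalize()
    print(ct[:16] == ct[16:])                    # True: the repetition shows straight through
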
A better answer in this thread has only 11 upvotes. It has CBC mode and an IV chosen properly. It’s not authenticated encryption, but otherwise it is quite a reasonable answer.

Example 5: Please don’t ever do this!

The answer is here. I’m not going to give a screenshot: this one is just too awful.

The question was how to encrypt a string in Java. The selected answer, which has 23 upvotes at the time of writing, is an attempt at implementing a one-time pad. A warning has been posted at the top of the answer saying that this should never be used. That is 100% correct: a problem with one-time pads is that they tend to be n-time pads in practice, where n > 1. Using the pad more than once makes it very easy to crack the encryption (see the illustration below).

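Why reuse is fatal is easy to see: XORing two ciphertexts that share a pad cancels the pad out and leaves the XOR of the two plaintexts, which is very easy to analyse. A tiny Python illustration:

    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    pad = os.urandom(16)
    c1 = xor(b"attack at dawn!!", pad)
    c2 = xor(b"retreat at dusk!", pad)

    # the pad cancels out, leaving plaintext1 XOR plaintext2 for the attacker to analyse
    print(xor(c1, c2) == xor(b"attack at dawn!!", b"retreat at dusk!"))   # True
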
This is a good example of where StackOverflow’s change to show the highest voted answer first is paying off. The current highest voted answer has 187 upvotes at the time of writing. It gives a nice overall education about how to do encryption properly, and it is definitely worth a read. The only niggle I have is the SecureRandom( ) confusion, which the author acknowledges is not right. The risk here is really very small, but the right way to use SecureRandom( ) is very simple.

Am I cherry picking?

You may say that I am pointing out old code, but honestly these code snippets are good representations of the problems I see very often in my daily duties. I rarely see crypto done right.

You can help in improving the state of crypto by upvoting the better answers. This is preferable to downvoting the bad examples highlighted above: let’s solve this problem in the friendly way rather than the mean way. StackOverflow has made the change to allow us to do that.

Why are there so many bad cryptography implementations out there?

The reason there are so many bad cryptography examples comes down to history. Historically, there has been a large disconnect between the cryptographic community and the development community. When freely available cryptographic libraries started appearing in the 1990s, their APIs assumed that developers would know how to use them safely. This was of course a wrong assumption, and combined with the complexity of use, developers spent a lot of effort just trying to make things work. Once they had working code, they were generous enough to share it with others — not realising that they were tip-toeing through a minefield.

The future looks better for cryptographic implementations, but the first step to getting there is to raise awareness of the good answers and to be very clear about which answers are not valid. This blog is one of several ways that I am trying to be a small voice helping to make that change.

Addendum

I would like to acknowledge reddit user cym13 for his valuable feedback on this blog, which helped improve the end result.

I would also like to thank the author of cryptohack.org for pointing out a mistake in an earlier version of this blog: for Example 3, I had said authenticated encryption had not been around back then. I was wrong. So I updated the blog to say it was not well known back then.

I would also like to thank Ivan Medvedev for replying to this blog.

There are a number of crypto vigilantes on StackOverflow that frequently leave comments about security problems and how to fix them, such as Maarten Bodewes and ArtjomB. They are experts on the subject.

One desired result was achieved within the first day of posting this blog: in Example 4 above, the better answer has been upvoted enough to surpass the bad answer. Thanks heaps to the community readers who helped make that happen. There has also been a large number of upvotes in the better answer for Example 1, but it still has a long way to go to pass the other answer. Regardless, we have made progress!

Why We Shouldn’t Commit Secrets into Source Code Repositories

Committing secrets into source code repositories is one of the most frequent problems I see in application security code review, and has been so for at least 5 years. I’m speaking as one who has reviewed numerous code repositories for a variety of different companies. It is a problem that never seems to go away.

Like other common security problems, education and tooling can help improve the situation. This blog plays on the education side. A list of examples is provided where secrets in source code repositories were exploited, or could have been exploited, for serious damage. Related information is also included, such as how these secrets tend to be found and reports about the frequency of this development sin happening.

This problem is not yet in the OWASP Top 10, but maybe some day it will make it if things continue the way they are going.

Attackers love finding secrets in source code because it enables lateral movement. That is, compromise of one system leading to compromise of another. It can also sometimes lead to privilege escalation, particularly when a secret allows one to have higher privileges than what the developers are supposed to have.

Companies that suffered for committing secrets into source code repos

We start out with three juicy examples where attackers were shown to have abused secrets committed into source code. They are Uber, Stack Overflow, and Ashley Madison. In all three cases, the repositories were private, which did not stop the attackers.

In the Uber incident, details were given by Uber CISO John Flynn in his US Senate testimony. The attacker somehow gained access to a private Uber GitHub repository — how the intruder got access has not been published. Within the repo was a commit of AWS S3 credentials, and that S3 bucket contained private data of approximately 57 million Uber users. Uber was so concerned about the seriousness of the breach that they made the poor decision to try to hide it and pay off the attackers in exchange for a good-will promise to delete the stolen data. Uber later acknowledged that this was a big mistake. More about the Uber attack can be found in the following articles: 3 lessons learned from Uber’s 57-million user hack, Uber Hack: Software Code Repository/VCS Leaked Credential Usage Detection, Uber Paid Hackers to Delete Stolen Data on 57 Million People.

Information about the Stackoverflow breach was provided in a detailed Stackoverflow blog in January 2021. Starting at the end of April and continuing through much of May 2019, an attacker had gained moderator and developer level access across all of the sites in the Stack Exchange Network, and then exfiltrated the source code and also personally identifiable information of 184 users of the Stack Exchange Network. The attacker took advantage of secrets in source code and other bad practices for protecting secrets a number of times for both lateral movement and privilege escalation. The blog writes:

“This incident brought to light shortcomings in how we’d structured access to some of our systems and how we managed secrets, both at build time and in source code.”

“Bad secret hygiene—we had secrets sprinkled in source control, in plain text in build systems and available through settings screens in the application.”

Stackoverflow blog

Their advice to others includes:

“Guard secrets better. TeamCity has a way to protect secrets, but we found we weren’t using it consistently. Educate engineers that “secrets aren’t just passwords.” Protect SSH keys and database connection strings too. When in doubt, protect it. If you must store secrets in a Git repo, protect them with git-crypt or Blackbox .”

Stackoverflow blog

The third example is Ashley Madison, which was hacked in July of 2015 by a group calling themselves “The Impact Team.” Ashley Madison is an online dating service targeted towards married people who want to cheat on their spouses. In this attack, The Impact Team leaked private details of 30 million users as a punishment for their infidelity, and also leaked the website source code, which included hardcoded database passwords, AWS S3 credentials, secret tokens for other applications, and private TLS keys. It has not been proven that the source code led to the theft of the other data, but it appears highly likely based upon what was in it. More information here: Credentials stored in Ashley Madison’s source code might have helped attackers, Credentials in the Ashley Madison Sources.

Sometimes ethical hackers find the problems first, sometimes they don’t

In the awesome report No need to hack when it’s leaking by Jelle Ursem & DataBreaches.net, nine health care related companies are identified that committed secrets to source code, which led to the leakage of personally identifiable information and private health information of “150,000 – 200,000 patients, and possibly many more” — see screenshot below. While some of these incidents appear to have been first discovered by the ethical hackers, they do note that one company (Texas Physician House Calls) had been hacked in the past and had malware on their live servers. There may be some cases where we will never know whether the ethical hackers were there first.

Excerpt from “No need to hack when it’s leaking” report

Another example comes from SolarWinds, who supplies network monitoring software to the US government and a number of Fortune 500 companies. A supply chain attack in 2020 led to a foreign actor intruding into thousands of organisations, including the US federal government. While it is not known whether this was used in the attack, an ethical researcher found that a SolarWinds developer had committed ftp credentials to a public repository on GitHub in 2018. More info here: SolarWinds: Intern leaked passwords on GitHub.

Moreover, Microsoft was a victim of the SolarWinds breach, and reported that the intruders had accessed source code for 3 products. In getting access to these source code files, the intruders were searching for secrets in the source code:

“The search terms used by the actor indicate the expected focus on attempting to find secrets. Our development policy prohibits secrets in code and we run automated tools to verify compliance. Because of the detected activity, we immediately initiated a verification process for current and historical branches of the repositories. We have confirmed that the repositories complied and did not contain any live, production credentials.”

Microsoft Blog

Given that they were searching for credentials in source code when they got Microsoft source, it is hard to believe that they did not use the freely available ftp server credentials in some way when they were targeting SolarWinds.

In another example, the United Nations had a .git directory exposed on a production website. An ethical hacker group reports finding: “multiple sets of hardcoded credentials which allowed them to take control of a MySQL data and an internal survey management platform.” They later obtained access to another private GitHub repository that contained secrets for 7 databases belonging to the UNEP.

A similar event happened in 2018, when an ethical hacker got access to the source code for ebay Japan. The person reports:

“I found out, that http://www.ebay.co.jp runs a version control repository on their production site which was not properly secured. This gave me access to the .git folder in the webroot directory which further allowed me to download the entire source code of http://www.ebay.co.jp including database passwords and much more.”

Last, it’s worth having a read of how Komodo Research performs red team exercises. In their research, they looked into some Fortune 500 companies to see how easy it is to exploit secrets committed to source code for such big organisations. Using only a few hours of effort, they had critical data of 10 of them, which included

“Enterprise admin creds, Domain admin creds, many more ‘regular’ AD creds, multiple database credentials (there is something about these connection strings that just keep popping up in repositories), SAP creds, mainframe passwords, SMTP, FTP, SSH – you name it, we had it.”

Komodo Research blog

To emphasize, your private repos are not safe to hold secrets!

Many of the examples above were from private repositories, but others were from public. Sometimes an attacker is able to find his way to your code, other times mistakes give your code away for free. For example, when Github gave valid session cookies to wrong users, or the Nissan source code leak through repo misconfiguration, or the Mercedes source code leak that was due to an error letting anybody register a fake email address belonging to that company. For more motivation, see the section on Misuse of GitHub in the No need to hack when it’s leaking report. Bottom line: never assume that only good guys are reading your code!

Of course for public repos, the situation is worse: bots are scraping code repository websites like GitHub all the time in search of private data. In 2014, a developer named Rich described his $500 screwup — accidentally committing AWS access keys to GitHub. Related: Don’t upload your important passwords to GitHub, Dev put AWS keys on Github. Then BAD THINGS happened, Slack bot token leakage exposing business critical information.

The State of Secret Sprawl on GitHub

The reader may observe that GitHub is often (not always) the holder of source code repositories that have credentials taken from them. This is a consequence of its size: GitHub has more than 50 million developers and more than 60 million repositories hosted on it. GitHub has responded by building tools to help developers, such as GitHub secret scanning. They have also published a report about secrets committed to GitHub.

I find two things surprising in this report: (1) developers may expose corporate secrets through their own personal public repositories, and (2) more often than not, secrets are committed by accident:

“Secrets present in all these repositories can be either personal or corporate and this is where the risk lies for organizations as some of their corporate secrets are exposed publicly through their current or former developers’ personal repositories”

GitHub Secret Sprawl report

“A user that first writes his code with credentials in the code so that it is easier to write/debug, he then forgets to remove it from all his files after his work is done. He then commits and pushes his changes. When he understands that he made a mistake, either he does a deletion commit or a push force so that the secrets do not appear in his current version. Most of the time, he forgets that git and the internet are not forgiving: Secrets can be accessed in the git history even if they aren’t in the current version of code anymore, and public data hosted on GitHub can be duplicated and cloned into multiple different locations.”

GitHub Secret Sprawl report

Final Remarks

As I said at the beginning, I believe that education and tooling are important for improving this frequent and very serious coding problem. This blog addresses only the education side by providing many examples of the devastating consequences of committing secrets to source code repositories. It does not address tooling to prevent it (such as pre-commit hooks) or what developers should do instead of committing secrets to source code repositories (such as injecting secrets during the production build using environmental variables or enterprise secret management solutions) — both of these are substantial topics that can be covered in other blogs.

No, Java is not a Secure Programming Language

If you ask Google, you will be brought to a fantasy land of fairies, unicorns, and Java being the quintessential example of a secure programming language. Whoever is writing these web pages clearly does not live in the same world as me — an Application Security Specialist (there is no acronym for that title, BTW) who spends his every day with developers helping them uplift their secure coding practices.

This blog is intended to correct one of the most perpetuated security myths by showing why Java lags far behind modern competing languages in security design. The problems are twofold:

  • Many Java security bugs are due to insecure defaults. As a consequence, developers need to have advanced development knowledge just to write simple code that cannot be easily exploited.
  • Java has really poor documentation: it is not hard to make things work, but it is often very unclear how to do things the ‘right way.’

To illustrate this, we go through prominent examples from the OWASP Top 10 and compare Java to the .Net framework. The problems with Java security are not restricted to web development, but this should give a pretty good indication of the overall state. We also provide a few other examples outside of the OWASP Top 10.

Note to Java developers: This is not a “Let’s bash Java because it’s fun” blog. While no doubt the Java developers will not like the title, it is hoped that they get something useful out of it: not just a display of the problems, but indications on how to work around them, and hence have less insecure software.

Java and the OWASP Top 10

We start out with 4 prominent examples of Java security failures, with comparisons to the .Net framework. These are very common problems that often have very serious consequences:

  • XXE (XML External Entity), which typically results in letting the attacker have access to any file on your server (example). By default, every XML parser in Java has external entities enabled. This is a feature that is very rarely legitimately needed, yet it is on by default. If you load or parse XML documents, you had better follow this wonderful OWASP guide. The .Net framework also had problems with default values in old versions, but they fixed it in newer versions.
  • Deserialization vulnerabilities, which are wonderfully described in detail here. If you’re hit by this in Java, you’re toast — an attacker has a shell on your system. Unfortunately, Java does not offer a fix — it’s up to you to figure out how to protect yourself. Deserialization vulnerabilities exist in other languages too, but if we want to focus on .Net, it is secure by default. How to avoid this in Java? It’s not so easy. The best approach is not to deserialize untrusted input, but that may require a huge redesign for legacy applications. One idea that should work is to bundle the serialized data with an HMAC tag and verify the tag before deserialization (a sketch of this idea appears below the list).
  • Cross site scripting, which allows an attacker to run JavaScript of his choice on other users of your website. If you write JSP code, then you cannot output untrusted data the default way. Instead you need to use c:out or fn:escapeXml, or else you’re in trouble. Compare that to the Razor engine in .Net Core, which takes a secure by default approach and has very visible warnings about not using the default approach.
  • Sensitive data exposure, and my issue here is specifically about cryptography. As a former cryptographic researcher, I hold this one dear to my heart. Where to start here? Well, let’s look at the warning in bold at the end of the JCA introduction (see screenshot below). Yep, Java is not here to help you, go get your PhD in crypto and you’re on your own (and don’t copy the insecure examples they have in their documentation!). Think you can write secure crypto in Java? I’ll bet good money that you’re wrong. Have a look at my blog Top 10 Developer Crypto Mistakes (see also reddit /r/programming and /r/netsec comments). The Java developer crypto problem is well documented in academia as well, including in mobile applications.
    This is in stark contrast to .Net, where they have put an enormous amount of effort into making their APIs really well documented and generally less clumsy. It is safe to copy-and-paste code from .Net documentation and they always provide examples to make your life easier. Java is the exact opposite.
    Unfortunately, this is too large of a topic to provide a right answer for Java developers, but a good start is from Luke Park’s Secure Compatible Encryption Examples. For other crypto topics, follow the advice of Maarten Bodewes on StackOverflow — he knows it better than anyone I have seen. And of course, there is always the libsodium option, which is idiot-proof by design.
    Fortunately, most of the implementation flaws are not exploited in practice simply because cryptanalysis is a niche skill.
Warning from the JCA documentation page

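As promised in the deserialization item above, here is a minimal sketch of the HMAC-before-deserialize idea, written in Python with pickle standing in for Java serialization purely for illustration (the key value is hypothetical; in real code it would live in a secret store, not in source):

    import hashlib
    import hmac
    import pickle

    SECRET = b"server-side secret key"   # hypothetical; never hard-code this in practice

    def safe_loads(blob: bytes, tag: bytes):
        expected = hmac.new(SECRET, blob, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("untrusted serialized data")   # refuse to deserialize
        return pickle.loads(blob)    # only reached for data we produced and tagged ourselves

    data = pickle.dumps({"user": "alice"})
    tag = hmac.new(SECRET, data, hashlib.sha256).digest()
    print(safe_loads(data, tag))
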
Other Examples of Problems With Java

I’ve spent so many years puzzled by Java documentation trying to understand what is really happening under the hood. It’s not easy.

A great example is SecureRandom( ), which is what it says it is. This is one case where it is secure by default, but the problem is that there are so many ways of screwing it up and I have seen it happen too often. Since Java documentation does not tell you the right way to use it, we are left to read websites going into deep dive details about how to use it properly and how not to, such as this and this (another one I just came across is here, which I have yet to review in detail, but it should give you an idea of the complexity). Please, developers, do not make it too complicated, and also avoid fiddling with APIs such as setSeed( ) because you’re only making things worse.

Another common problem with Java is logging, which always seems to be vulnerable to carriage return/line feed injection attacks. To my knowledge, you need to provide your own protection for this — Java does not have one out of the box. This short guide should be helpful.

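The usual workaround is to neutralise CR/LF characters in untrusted values before they reach the logger. A minimal illustration of the idea in Python (the function name is just for the example):

    def for_log(value: str) -> str:
        # stop attacker-controlled input from forging extra log lines
        return value.replace("\r", "\\r").replace("\n", "\\n")

    user_input = "admin\r\nINFO: fake log entry"
    print("login attempt: " + for_log(user_input))   # stays on one line
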
The state of Java security is most pronounced when we compare with .Net, which is continuously making it easier for developers to write safe code without being security experts. Microsoft is even putting in protections like CSRF tokens by default, which is wonderful. In contrast, Java never seems to evolve from a security perspective: same language, same defaults, same poor documentation. Good luck, you’re on your own with Java!

But the Web Says Java is Secure!

The examples above are indisputable: Java has problems. Despite this, many websites tout the good properties of Java while ignoring all the real-world security problems. It’s quite like this short video clip, where we need to rename “CiSO” to “Java”:

Am I just a Microsoft Fanboy?

My programmer days were in the 1990s. Believe it or not, back then I ranked in the top 10 producers of obscenities directed at Bill Gates and the Microsoft monopoly. Actually, I just made that up, but if there was such a list, I think I would have been on it.

Back in the 1990s, Java was a breath of fresh air: a break away from the Microsoft monopoly, and a development language that was not prone to buffer overflow vulnerabilities like C was. It had cryptography as part of the language, freeing people from paying excessively high costs to vendors like RSA. It was wonderful for its time.

The problem with Java from a security perspective is that it has never evolved: the same problems exist from version to version and they never get fixed. The documentation never gets better. Java will always be Java, whereas its competition has chosen not to suffer the same fate.

Fighting Bots with the Client-Puzzle Protocol

In 1999, Ari Juels and John Brainard came up with an elegant protection against denial of service attacks, known as the client-puzzle protocol. Their idea was patented (US patent 7197639), which might have inhibited its uptake. However that patent expired in early 2020, so it is now free for anybody to use. And it should be used.

The client-puzzle protocol is not widely known or implemented, but we do note that Akamai picked up the concept for their bot manager very soon after the patent expired. Akamai adds advanced intelligence to it (remark: it appears that Cloudflare may be doing the same, see also Kasada), but the basic client-puzzle protocol is easy to implement and can be used by anybody without spending a dime.

This blog:

  • Explains the basic idea (slightly simplified) behind the Juels-Brainard client-puzzle protocol with the focus on web applications,
  • Links to proof-of-concept source code,
  • Links to a demo of the source code hosted on Heroku,
  • Explains why it is preferable over rate limiting when protecting against malicious bots,
  • Suggests implementation enhancements to get the most benefit out of it,
  • Provides references to related work.

Client-Puzzle Protocol

The objective in the client-puzzle protocol is to slow down bots so that they become near the speed of humans. Slowing them down impedes a number of different attacks, such as web scraping, brute forcing, and certain types of denial-of-service.

To accomplish this, the proof-of-work concept is used. Now many people may be thinking about Bitcoin, but the proof-of-work concept existed in cryptographic literature more than a decade before Bitcoin was invented. In fact, Ari Juels was one of the pioneers of the concept.

The client-puzzle protocol serves a cryptographic puzzle to clients that must be solved before their request is served. The puzzle may take a fraction of a second to solve — which has little impact on legitimate humans, but slows down bots significantly. The verification of the puzzle solution is very fast, so the protocol imposes negligible impact on the server. Also, the protocol is entirely stateless.

To be concrete, the puzzle typically involves finding part of the pre-image of a cryptographic hash function. See the diagram below.

In this construct, two levels of hashing are used. First, the client request data (query parameters/request body) along with a server-side secret and timestamp are hashed. This produces hash h1, which is hashed to produce hash h2. The puzzle consists of h2 and most of the bits of h1.

Cryptographic hash functions are designed to be pre-image resistant, so if you are given h2, then it would be very difficult/time consuming to find h1. On the other hand, if you are given h2 along with most of the bits of h1, then you could brute force the remaining bits provided that the number of remaining bits is not too large. You would simply try each possibility for the remaining bits, hash the candidate pre-image, and see if the hash matches h2. If k bits are missing, then this requires up to 2^k trials. It is easily done for small k, but requires some computational effort.
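As a rough sketch of what the client’s brute force could look like (my own Java illustration; the real proof of concept described later uses JavaScript on the client, and the choice that the unknown bits are the trailing k bits of h1 is an assumption of this sketch):

import java.security.MessageDigest;
import java.util.Arrays;

public class PuzzleSolver {
    // Brute force the k unknown trailing bits of h1, given h2 = SHA-256(h1).
    // Convention assumed here: the server blanks the trailing k bits of h1,
    // least-significant bit first within each byte.
    public static byte[] solve(byte[] h1WithBlankedBits, byte[] h2, int k) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] candidate = h1WithBlankedBits.clone();
        for (long guess = 0; guess < (1L << k); guess++) {
            long g = guess;
            int bitsLeft = k;
            int idx = candidate.length - 1;
            while (bitsLeft > 0) {                       // write the guess into the blanked bits
                int take = Math.min(8, bitsLeft);
                int mask = (1 << take) - 1;
                candidate[idx] = (byte) ((candidate[idx] & ~mask) | (int) (g & mask));
                g >>>= take;
                bitsLeft -= take;
                idx--;
            }
            if (Arrays.equals(sha256.digest(candidate), h2)) {
                return candidate;                        // found h1: this is the puzzle solution
            }
        }
        return null;                                     // no solution: the puzzle was malformed
    }
}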

In the client-puzzle protocol, h2 and most of the bits of h1 are given to the client. The client must brute force the remaining bits. When the client succeeds, it sends back the puzzle solution (h1), the timestamp that the server provided, and the request data.

Now consider how the server verifies the result. This is the most elegant part because the server does not need to remember h2. Instead, it just recomputes h1 by hashing its secret along with the client provided request data and timestamp. If the hash matches what the client provided, then the puzzle is solved!

Clients cannot forge fake puzzles as long as they do not know the server secret. Attempts at providing fake puzzles are easily caught and rejected using the computation just described. This computation is quick and easy for the server: a single hash computation with no database usage. Similarly, clients cannot lie about the timestamp because the hash computation will not check out.

What’s the purpose of the timestamp? You can set an expiry time by which the puzzle must be solved. This gives the option of denying the request if the solution took too long, which is especially useful if the response for the request may change over time. For example, consider prices of items on an ecommerce website. Without the timestamp, the attacker can provide the same puzzle solution every week without recomputing it to get weekly price updates. With the timestamp, it forces him to recompute every time.
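Putting the pieces together, here is a rough sketch of the server’s puzzle generation and stateless verification. This is my own Java illustration, not the code of the proof of concept discussed below, and details such as the field encoding and which bits get blanked out are simplifying assumptions that simply need to match on both sides:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PuzzleServer {
    private final byte[] secret;     // long random secret, shared by all backend servers
    private final int k;             // puzzle strength: the client does up to 2^k hash trials
    private final long timeLimitMs;  // how long a solution remains acceptable

    public PuzzleServer(byte[] secret, int k, long timeLimitMs) {
        this.secret = secret;
        this.k = k;
        this.timeLimitMs = timeLimitMs;
    }

    // h1 = SHA-256(secret || timestamp || request data)
    private byte[] computeH1(String requestData, long timestamp) throws Exception {
        MessageDigest d = MessageDigest.getInstance("SHA-256");
        d.update(secret);
        d.update(Long.toString(timestamp).getBytes(StandardCharsets.UTF_8));
        d.update(requestData.getBytes(StandardCharsets.UTF_8));
        return d.digest();
    }

    // The puzzle sent to the client (together with the timestamp):
    // h2 = SHA-256(h1), plus h1 with its trailing k bits blanked out.
    public byte[][] createPuzzle(String requestData, long timestamp) throws Exception {
        byte[] h1 = computeH1(requestData, timestamp);
        byte[] h2 = MessageDigest.getInstance("SHA-256").digest(h1);
        byte[] partial = h1.clone();
        for (int bit = 0; bit < k; bit++) {
            partial[partial.length - 1 - bit / 8] &= ~(1 << (bit % 8));  // blank a trailing bit
        }
        return new byte[][] { partial, h2 };
    }

    // Stateless verification: recompute h1 from the secret plus the client-supplied
    // request data and timestamp. No per-puzzle state or database lookup is needed.
    public boolean checkSolution(String requestData, long timestamp, byte[] claimedH1) throws Exception {
        if (System.currentTimeMillis() - timestamp > timeLimitMs) {
            return false;  // the puzzle has expired
        }
        return MessageDigest.isEqual(computeH1(requestData, timestamp), claimedH1);
    }
}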

Generally, the use of this protocol would look like this for a web application:

When Juels and Brainard wrote about this protocol in 1999, they described protecting against threats such as TCP SYN flooding, email bombs, and so on. The internet has flourished since then, and today it is more clear that there are many applications of this technology not only for preventing denial of service attacks, but more generally for impeding bot activity such as the way Akamai is using it.

Source Code and Demo

I am not a professional programmer and I am just learning Node.js, but despite that it was little effort for me to build a proof of concept. You can see the source code on GitHub. This PoC takes user input for a search query and returns a random gif from Giphy related to the search query. The server side uses a Giphy API key. By using the client-puzzle protocol, clients need to do a computation every time a request is made through the server to Giphy, which limits the number of requests going to Giphy through the server.

The main functions on the server side are compute_puzzle( ), which computes the puzzle from the original request, and check_puzzle_solution( ), which verifies the solution. The client side has code to brute force the puzzle solution.

You can see the demo here. The puzzle strength is set at 2^17 and the expiry time is 10 seconds. Solving usually takes less than a second on most of my devices, with the exception of my old iPad 2, where it can take a few seconds. Note that every request is using my Giphy development API key, which is rate limited by Giphy. Although I do not know what the rate limit settings are, I’m counting on the client-puzzle protocol to (hopefully) keep my anonymously accessible demo application below the bar.

If you’re going to use my GitHub code, here are a few remarks you should be aware of:

  • The code generates the secret from crypto.randomBytes( ) upon startup, which is fine if you have only a single backend server. If you’re using multiple servers, the secret should be shared among them to handle the case where the server that verifies the puzzle solution is not the same one that created the puzzle.
  • The whole security of the system depends upon the strength of the secret, so don’t use something silly like P@55w0rd123. The secret needs to be long and not brute-forceable.
  • The program is most entertaining with a Giphy API key, which you can obtain quickly and for free by following the guidance here. Once you have the key, set it as the environment variable giphy_api_key.
  • You can also set the puzzle strength (environment variable puzzle_strength, 16 by default, which means 2^16 effort) and expiry time (environment variable time_limit, 5000 milliseconds by default).
  • Ultimately the client-side JavaScript code should be optimised. This is because the JavaScript code is what legitimate users will be using whereas a good hacker would use custom code to solve the puzzle as fast as possible. The more he can make his code faster than the JavaScript code, the more beneficial it is to him (more details in last section below). To optimise the legitimate client code, the focus should be on making sha256 as fast as possible.

The limits of rate limiting

Rate limiting is very important, but there are certain attack scenarios where rate limiting does not provide sufficient protection.

If you have an API that is accessible anonymously, the typical approach for rate limiting is to limit by IP address. However, nowadays hackers easily get around that restriction by rotating their IP address using tools such as Fireprox. Hackers can also spoof geographical locations, and easily blend in to look like legitimate users. As a consequence, it is hard to distinguish between the bad guy and legitimate users — you either let the bad guy’s requests in for free or you risk dropping legitimate users’ requests.

When battling malicious clients that are accessing anonymously accessible content, rate limiting is most effective if a decision can be made with certainty on whether the client is malicious. This certainty will not exist against a clever adversary.

The client-puzzle protocol handles the uncertainty much better by requiring every client to solve a puzzle. Legitimate users (people) do not make numerous requests per second, so they will see little impact. Malicious bots on the other hand will see a huge impact because they are forced to do computations for each of their numerous requests, and that builds up. With the client-puzzle protocol, making requests is no longer free.

Of course, there could be legitimate bots that need to make many requests per second. Those can be identified in any number of ways (API key, known IP address, etc…) and whitelisted to allow them through. Everybody else must do the computation per request.

Implementation enhancements

Before talking about enhancements, we must talk about limits. The client-puzzle protocol does not stop bots, it only slows them down. A malicious entity can still get requests through, but suddenly it has to “pay” per request, where the payment is by computation time. If the entity has substantial computing power, it can erase much of the benefit that the protocol offers.

In the original publication, Juels and Brainard suggested only using the protocol when the server is under attack and tuning the parameters according to the severity of attack. They also had in mind attacks such as TCP SYN flooding, where a server can only handle so many connections at once. Our focus is more at the application layer, so we will discuss application-specific enhancements. Of course tuning parameters according to severity is still a valid protection.

The following implementation enhancements can also be considered:

  • Similar to Akamai/Cloudflare/Kasada, we might have some intelligence about our adversary, allowing us to adjust the puzzle strength according to the likelihood of the request being malicious. For example, maybe we know that the attacker is using a particular API key or user agent header — we can offer tougher puzzles to those requests than for other requests (a small sketch combining this idea with the next one follows this list).
  • If we are trying to defend from an attacker brute forcing specific targets, then we can cache failed efforts and increase puzzle strength for those specific targets upon each failure. These increases in strength should be temporary: long enough to frustrate the enemy, not too long to lock out the good guys.
  • We can always whitelist known good clients to allow them through with little effort, while requiring higher puzzle strength for the unknown.
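As a small illustration of the first two points above (my own sketch; the scoring, caps, and thresholds are made up for illustration):

public class PuzzleStrengthPolicy {
    private static final int BASE_BITS = 16;   // default effort: 2^16 hash trials
    private static final int MAX_BITS = 22;    // cap so legitimate slow devices are not locked out

    // riskScore in [0.0, 1.0] from whatever intelligence is available
    // (suspicious user agent, flagged API key, ...); recentFailures counts
    // recent failed attempts against the specific target being brute forced.
    public static int puzzleBits(double riskScore, int recentFailures) {
        int bits = BASE_BITS;
        bits += (int) Math.round(4 * riskScore);   // tougher puzzles for riskier requests
        bits += Math.min(recentFailures, 4);       // temporary escalation per failure
        return Math.min(bits, MAX_BITS);
    }
}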

Remark: Although it will reduce impact, the client-puzzle protocol is not by itself strong enough to prevent credential stuffing attacks because the attacker has a significant probability of success per request. For a more appropriate defence, see our OT2FA blog. The client-puzzle protocol does help impede other forms of brute force where the likelihood of success per request is smaller.

Related work

One of the shortcomings of the client puzzle protocol is that the attacker might have significantly more computational resources than legitimate users. This is known as the resource disparity problem. Guided Tour Puzzle Protocols were designed to address this disparity. I have not yet researched their practicality or intellectual property considerations.

In private communication, Ari Juels suggested that client puzzles could potentially be used for mining a cryptocurrency such as Monero (having an ASIC-resistant proof-of-work scheme), “meaning that users are effectively paying for services.” After some number crunching, he said it would unfortunately not yield much currency because the clients on the whole aren’t doing a huge amount of work. Ari also pointed out how this was discussed in a related paper from 1999 with Markus Jakobsson where they applied the idea to a cryptocurrency protocol known as MicroMint.

Understanding Certificate Pinning

Certificate pinning (“cert pinning” for short) is a technique used for mobile applications to add an extra layer of protection to secure communications. Some people additionally use the technique to prevent people from reverse engineering APIs via intercepting proxies; however, this latter objective is hard to achieve against a determined hacker.

Certificate pinning offers very high security, but it does come with some downsides that need to be considered by the business. This blog explains the security and business considerations for certificate pinning, and shows the trade-offs that can be made according to the need of the organisation implementing it.

The takeaways from this blog are:

  • Understanding why we do cert pinning
  • Understanding the public key infrastructure (PKI)
  • Making an analogy with browser security
  • Certificate pinning typically comes at the cost of forced mobile app upgrades, which can be a particularly painful user experience for short-lived certificates such as those provided by Let’s Encrypt
  • Apps that are cert pinned need to roll out new versions in coordination with operations teams that rotate certificates because upgrading an app requires time (including AppStore review) — otherwise there will be downtime
  • The option of Certificate Authority Pinning is less strict than full Certificate Pinning. It requires less maintenance while offering slightly less but still strong security

What is Cert Pinning and Why Would I Use it?

In order to understand certificate pinning, one must understand PKI. An excellent blog on this is provided by Technophile: SSL/TLS for dummies part 3 – Understanding Certificate Authority — embarrassingly, their certificate is expired at the time of writing, so just read below.

The short summary is that secure communication over the internet is done via TLS (often people wrongly say “SSL”, which was a predecessor of TLS that was deprecated due to security weaknesses). The security of TLS depends upon a small set of organisations called certificate authorities (CAs), whose role is to verify the identity of websites and then issue them certificates, which allow them to use TLS security. To make this all work, operating systems and potentially other software providers need to be supplied with a small set of certificates from the certificate authorities — these are the “root level” certificates that everything else in TLS depends upon. When you visit a website via TLS, the certificate provided by that website needs its signature checked against one of the root level CAs to verify its validity — this asserts that you are visiting the website that you think you are visiting and that communication to it is secure, assuming that no CA has been breached, that the OS/software vendor has not been breached, that you do not have a bogus CA installed, and that the website you are visiting has adequately protected its private cryptographic keys. If any of those assumptions are wrong, all bets are off.

As mentioned, there are many ways that TLS can break, but the most severe would be if a certificate authority is breached since it affects everybody. Such security incidents have happened, the most prominent example being DigiNotar.  Short of a full CA breach, there are other examples where PKI security shortcomings have affected a large number of people, such as the Lenovo Superfish incident.

It must be emphasised that if any single CA is breached, then all of TLS breaks. It doesn’t matter what CA you used for your website, it only matters what CA your attacker uses. I’ve seen major organisations that are supposed to be security savvy having policies of only getting certificates from a subset of CAs that they trust the most — they entirely miss the point that the CAs they trust don’t matter: the design of PKI means we all depend upon every single one of them not being breached.  At the end of this blog, we will show there is one scenario that can make such policies meaningful; however, I have yet to see it implemented in practice.

The fact that every CA needs to be completely secure for PKI to work is one reason that some people do not trust PKI. However, this is where certificate pinning comes to the rescue for mobile apps: the main reason to use it is to make up for the shortcoming of PKI. The concept of certificate pinning is to make the mobile app only trust the exact certificate that is used on the website that it is communicating with, and thus no longer depend upon the security of CAs. One way to implement this (other, more practical ways will be mentioned later) is to embed the exact certificate (or a cryptographic hash of it) in the mobile app source files during development and write code that verifies this certificate matches what the website sends during the initial stages of the TLS communication. If there is not an exact match, then reject the communications and do not proceed. This provides very strong security, but it does come with some potential gotchas that we will explain later.

There is a second reason why some people use certificate pinning, which is to prevent people from reverse engineering your mobile app via an intercepting proxy. Without certificate pinning, anybody can view the communications between their mobile app and the server using well known techniques (see here and here).   This is not breaking TLS — other people cannot inspect your communications, only you can inspect your own communications by installing an intercepting proxy certificate on your mobile device. By adding a new certificate that the OS trusts and for which you know the private key and nobody else does, you have made it so you can snoop on the TLS communications but nobody else can. This does not directly work if the certificate is pinned because the application stops the interception — it does not trust the newly installed certificate, but instead only trusts the exact certificate that is pinned.

However, this defence has limited efficacy because it is possible to bypass certificate pinning using well known techniques (see here and here), and because another approach to understanding how APIs work is the old-fashioned reverse engineering of the binary. As a consequence, to prevent adversaries from understanding your API, you also need obfuscation and jailbreak/root detection. Such defences will stop many hackers, but a skilled and determined hacker with a lot of free time can ultimately succeed.

Concluding, it makes sense to do certificate pinning to prevent potential PKI insecurities, but cert pinning does not stop people from understanding your APIs — it merely slows down the good hackers. For the rest of this blog, our primary focus is on the use of cert pinning to deal with potential PKI problems. Cert pinning to slow down reverse engineering is outside the scope of this blog.

If a CA becomes breached, the manufacturer should push a security update

OS vendors / mobile device manufacturers have a strong interest in ensuring security: people trusting in their products is important to their business success. As a consequence, they usually push security updates to their customers when a CA becomes no longer trusted. For example, Apple released an update when DigiNotar was compromised. Similar updates were done by browser manufacturers as well as described here.

So this gives us some assurance that even without certificate pinning, we can still have decent security.  However there are some catches:

  1. Users don’t always install their security updates promptly, so there is a gap when exploitation is possible.
  2. This only works for known compromises — whereas certificate pinning will protect against the unknown compromises.
  3. The situation for Android is less assuring because the security updates come from Google via the mobile device vendor, and the vendors have not always been so prompt in pushing security updates to their users.

Related to (3), Google does provide a solution for app developers to protect clients against discovered TLS vulnerabilities without relying on security updates from the vendor, by using the APIs installIfNeeded() or installIfNeededAsync().  It’s not clear to me from the documentation whether this fixes only TLS implementation bugs, or if it also updates root certificates on the device.
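For reference, those methods live in Google Play services’ ProviderInstaller, and the synchronous call looks roughly like this (a minimal sketch with simplified error handling; it should be run off the main thread, or installIfNeededAsync can be used instead):

import android.content.Context;
import com.google.android.gms.common.GooglePlayServicesNotAvailableException;
import com.google.android.gms.common.GooglePlayServicesRepairableException;
import com.google.android.gms.security.ProviderInstaller;

public class TlsProviderUpdater {
    // Ask Google Play services to update the device's security provider
    // before the app opens any TLS connections.
    public static void update(Context context) {
        try {
            ProviderInstaller.installIfNeeded(context);
        } catch (GooglePlayServicesRepairableException e) {
            // Play services is present but out of date or disabled: prompt the user to fix it.
        } catch (GooglePlayServicesNotAvailableException e) {
            // No Play services on this device: warn the user or fall back gracefully.
        }
    }
}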

So, while there is some fallback, there is definitely room for improving security with a technique such as certificate pinning. However there is a practical downside to certificate pinning that one needs to be aware of — the app breaks when the certificate changes. We will discuss that more later.

Comparison to Browser Based Security

It’s interesting that we mandate such a high standard for mobile security, but what about the corresponding web browser security? Our ability to have similar controls is somewhat restricted, so one might feel that we are holding the mobile app to a holier standard than the browser.

In 2015, the security community tried to compensate for the discrepancy between mobile security guidance and the web browser security capabilities by introducing a similar concept to the web browser. A new http header was added that allowed websites to instruct browsers to pin their certificates, so the browser would not accept any other certificate that it was presented from that particular website (for the duration of the validity specified in the header). This was known as HTTP Public Key Pinning (HPKP).

It didn’t take long before the security community discovered problems with this proposal. Within 2 years, one of the authors of HPKP announced the intent to deprecate it due to a number of problems that were not previously anticipated. The reasons for deprecation include the difficulty in choosing a set of pins that is guaranteed to work, the risk of locking users out if an error was made in pinning, and the risk of an attacker abusing HPKP if he could obtain a misissued certificate. A real example of locking out users is here. With the deprecation, a replacement was recommended: the Expect-CT header, which is formally documented here.

Expect-CT is not as strong as certificate pinning, but is safer to use (still, the authors suggest using it with caution). In a nutshell, it is used to instruct browsers that the website must have a published certificate, in compliance with Google’s Certificate Transparency effort. With some configuration, the browser can be instructed to block connections if the site does not comply. Thus if a malicious actor secretly got an unpublished, misissued certificate for a website, he would not be able to abuse it against that website. In other words, for the malicious actor to abuse a misissued certificate, it needs to be one that is published! But hopefully those abuses can be caught by certificate revocation (or maybe not).

Expect-CT is not supported by a number of browsers (including Firefox) at the time of writing this blog. It is a big step towards fixing PKI shortcomings, but falls slightly behind the security one gets from mobile certificate pinning.

Implementation and Caveats

The company Infinium has written excellent blogs for developers on how to implement certificate pinning. These guides can be found here:

A key downside of certificate pinning, which needs emphasis, is that changing certificates implies that you need to release a new version of the app (with an exception, to be discussed below) and force upgrade your users to the latest version. This is because the app only works with the exact certificate that was pinned — communication breaks if any other certificate is used. If you are using short-lived certificates, such as those from Let’s Encrypt, then this requires frequent maintenance and is an unpleasant user experience.

Another point is that the rotation of the certificate will need to be coordinated with the release of the updated app to avoid downtime. This means that the development team will need sufficient advance notice to update the app with the new certificate, test it and review it, and get it through App Store review, or else users may be locked out for some period of time. This is a very real concern: I’ve seen it happen, and the business was not happy.

The exception mentioned above is that if only the public keys (as opposed to the full certificate) are pinned then one can rotate a certificate without releasing a new app provided that the same public keys are used in the new certificate. That is, you change the certificate when it expires but use the same cryptographic keys as before. This results in a more friendly user experience, but it requires those that manage certificates (typically an operations team) to follow the requirement from the development team. Of course, if any key changes, a new app needs to be released and a forced upgrade needs to happen.
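For example, with OkHttp on Android the pin is a base64-encoded SHA-256 hash of the public key info (SPKI) rather than of the whole certificate, so a renewal that keeps the same key does not break the pin. A minimal sketch (the hostname and pin value are placeholders, and this is just one of several ways to implement pinning):

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

public class PinnedClient {
    public static OkHttpClient build() {
        // The pin is the base64-encoded SHA-256 hash of the server's public key info (SPKI).
        // "api.example.com" and the pin value below are placeholders.
        CertificatePinner pinner = new CertificatePinner.Builder()
                .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
                .build();
        return new OkHttpClient.Builder()
                .certificatePinner(pinner)
                .build();
    }
}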

There is another option that offers slightly less security than full certificate pinning but is much more likely to avoid the problem of needing to release a new app and force upgrade the users…

Alternative Option: Certificate Authority Pinning

With the goal of eliminating the need to upgrade the app when a new certificate is released but still wanting better-than-TLS security, developers can pin the certificate authority public key only. In other words, the app will now only accept certificates that are signed by the specific CA (or CAs) that you allow and no others, in addition to all the normal certificate checks that are required for security.

Recall our discussion above about the problems with PKI: if any CA is breached, then everybody becomes vulnerable. However, by pinning the certificate of the CA or CAs that we trust, we no longer have that risk. Instead, the exact CAs that we trust need to be compromised for us to be vulnerable.

This is not as strong as full certificate pinning because if an intermediate key within your chain of trust becomes compromised, then you are still vulnerable. However, it is better than TLS-only because now you do not depend upon all CAs being secure: instead you only depend upon the CA or CAs that you are using being secure.

With this approach, the app only needs to be updated if you change CAs or if the CA changes its public key. If your company has a policy on which CAs they will restrict to, then you can pin the public keys of those exact CAs and quite likely you will not need to worry about updating the app or forced upgrades for a long, long time. This is especially appealing for those using short-lived certificates.
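One way to realise this on Android without writing a custom TrustManager is to reuse the same CertificatePinner mechanism sketched earlier, but supply the pin of the issuing CA rather than of the leaf certificate: OkHttp accepts the connection when any certificate in the validated chain matches a pin. Again a minimal sketch with placeholder values:

import okhttp3.CertificatePinner;
import okhttp3.OkHttpClient;

public class CaPinnedClient {
    public static OkHttpClient build() {
        // Pin the SPKI hash of the issuing CA rather than of the leaf certificate.
        // OkHttp accepts the connection if any certificate in the validated chain matches
        // a pin, so leaf certificates can be rotated freely without an app release.
        CertificatePinner caPinner = new CertificatePinner.Builder()
                .add("api.example.com", "sha256/BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB=")  // placeholder CA pin
                .build();
        return new OkHttpClient.Builder().certificatePinner(caPinner).build();
    }
}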

For Android devices, pinning the CA public key can also be done directly with the TrustManager (see also this).

For iOS, certificate authority pinning is discussed here.

Some Useful AppSec Resources

While no doubt OWASP has earned the prestige of being the #1 AppSec resource, there are many other good information sources across the web that I have collected over the years and that have been very helpful to me and to others with whom I have shared them.  I especially enjoy a blog that explains things simply and clearly while at the same time being technically correct.  Below is a list of my favourite such resources.  I would be greatly appreciative of anyone who reciprocates with their own list.

Blogs / General AppSec

Certificate Pinning

Cookie Security

CORS

Cross Site Scripting

Cryptography

Deserialization

DevSecOps

Http Security Headers

Input Validation

  • Validating Input – This is old, but is a classic.  For more recent guidance, see the Martin Fowler website blog on The Basics of Web Application Security (linked above)

JWTs

Logging

Mobile Security

Oauth

  • An Illustrated Guide to OAuth and OpenID Connect – Most people want to dive into the technical details of Oauth before they really understand its purpose. Slow down, read this, and then you will have better insight into this complex protocol

Passwords

PHP

Race Conditions

Server Side Request Forgery

SSL/TLS

SQL Injection

 

Thoughts on the Capital One Security Breach

Whenever one reads about a security breach like what happened to Capital One, security experts are eager to find out the anatomy of the attack.  Little by little, details have emerged.  Initially called a firewall misconfiguration problem, later reports seemed to suggest a Server Side Request Forgery (SSRF) vulnerability.  These conflicting stories do not seem to be publicly resolved.  In fact, even a recent story from Wired is still suggesting firewall misconfiguration.  One thing that is clear is that Amazon is sticking to the firewall misconfiguration story while trying to remove themselves from any blame:

It’s inaccurate to argue that the Capital One breach was caused by IAM in any way. The intrusion was caused by a misconfiguration of a web application firewall and not the underlying cloud-based infrastructure

Which is it, firewall misconfiguration or SSRF, and if Amazon is not to blame, then who is?

Firewall Misconfiguration or SSRF?

It is not so common to hear of a web application being taken over due to a misconfigured firewall, so this sounded curious from the beginning.  The closest I have been able to find to make any sense of this comes from Krebs:

“According to a source with direct knowledge of the breach investigation, the problem stemmed in part from a misconfigured open-source Web Application Firewall (WAF) that Capital One was using as part of its operations hosted in the cloud with Amazon Web Services (AWS).

“Known as “ModSecurity,” this WAF is deployed along with the open-source Apache Web server to provide protections against several classes of vulnerabilities that attackers most commonly use to compromise the security of Web-based applications.

“The misconfiguration of the WAF allowed the intruder to trick the firewall into relaying requests to a key back-end resource on the AWS platform.”

Krebs then goes on to explain how the attacker performed SSRF on the web application to access the Amazon instance metadata, which allowed her to access IAM Role credentials and own the EC2 instance.

From this, we are actually starting to get some insight.  Indeed, the root cause of the problem does not seem to be a misconfigured firewall — instead it was an application SSRF vulnerability, which is a common theme for AWS hacks — in fact many tutorials about SSRF talk exactly about abusing AWS EC2 instances.

For those not familiar with SSRF, I strongly recommend this Contra Application Security tutorial that shows how Capital One might have been breached.  Assuming its accuracy, I must emphasize that it did not take a hacking genius to find this vulnerability.  This attack is low hanging fruit.

For Amazon and others to suggest that the problem was a misconfigured firewall shows a fundamental misunderstanding of web security.  Quite frankly, I find it shocking that one of the top cloud providers is going with this line of argument, so it is time to speak out:

WAFs are not a magic solution to your security problems

What is evident from the above quotes is that a WAF configuration is being blamed for a web application vulnerability.  This is entirely the wrong security mindset.

When a web application is built, security has to be part of the design. It is not something that is added on at the end: “Now turn on your WAF, then you’re secure!”  Nonsense!  Sure, WAF vendors are there to sell a product, so they like to claim things that are stretching the truth.  But a company like Amazon should know better, and for them to point the finger at the WAF is very telling about Amazon’s security immaturity.

WAFs simply do not solve all security problems.  WAFs are a backup protection — if security protections were built into the application itself, then the WAF would offer no value.  In reality, developers make mistakes, so the WAF is a fallback security mechanism that can help when other things fail.  It is however by no means the primary form of defence.

WAF vendors need to be kept honest.  I’ve seen more than one make ridiculous claims: “Turn on your WAF and you are protected from the OWASP Top 10!”  It simply is not true.  WAFs can detect and stop a lot of common attacks, but there are so many things they cannot detect and/or stop.

WAFs work by searching for dangerous (“blacklist”) patterns and blocking requests that fit those patterns.  Blacklist validation can never be perfect, because there are an infinite number of possible inputs yet the blacklist must be finite.  Therefore it is just a matter of time before a good hacker finds the right pattern that gets past the WAF.

Moreover, there are a number of strategies that hackers can use to bypass WAF blacklists, such as changing the encoding.  For good overviews, see WAF through the eyes of hackers and WAF Evasion Techniques.

Last, the suggestion that a WAF can stop all OWASP Top 10 issues (which some vendors will claim) is absurd particularly since some of the attacks on the list do not go through the server at all.  For example, DOM-based cross site scripting happens between the attacker and victim without going through the server or WAF.  The vulnerability is present due to servers serving up vulnerable JavaScript to the victim.  As another example, if the server is sending data via http rather than https, then any person can eavesdrop on it without sending any data to the server at all.  WAFs just cannot and do not sprinkle magic fairy dust to make these problems go away.

In the Capital One breach, Amazon is blaming Capital One for not having their WAF stop the SSRF.  The reality is that the WAF is the backup protection, and the primary protection should have been at the application level.  As I explained in my Understanding Input Validation blog from February 2018 (which by the way talks about how SSRF is often abused on Amazon cloud computing), input validation is the proper way to stop SSRF.  That solves the problem exactly where the vulnerability exists — in the code — rather than expecting some add-on security protection to suddenly turn an insecure application into a secure one.
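To make the input validation point concrete, the usual shape of the fix is an allowlist check on any user-supplied URL before the server fetches it. A rough sketch (the allowed hosts are placeholders, and a real implementation also has to refuse redirects and watch out for DNS tricks):

import java.net.URI;
import java.util.Set;

public class OutboundUrlValidator {
    // Only these hosts may ever be fetched on behalf of a user (placeholder values).
    private static final Set<String> ALLOWED_HOSTS = Set.of("images.example.com", "cdn.example.com");

    public static URI validate(String userSuppliedUrl) {
        URI uri = URI.create(userSuppliedUrl);  // throws IllegalArgumentException on malformed input
        if (!"https".equalsIgnoreCase(uri.getScheme())) {
            throw new IllegalArgumentException("only https is allowed");
        }
        String host = uri.getHost();
        if (host == null || !ALLOWED_HOSTS.contains(host.toLowerCase())) {
            // blocks 169.254.169.254, internal hostnames, and anything else not explicitly allowed
            throw new IllegalArgumentException("host is not on the allowlist");
        }
        return uri;
    }
}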

So if it wasn’t a WAF misconfiguration, then whom do we blame?

The joy of recriminations!  In fact, I see a lot of failures which are far more significant than the WAF configuration failure.  For example:

  • Was the application penetration tested?  If not, that is a major failure in security process.  If it was, then it is a bit surprising to see that this vulnerability was missed — a good penetration tester should have found it.
  • Was the application security code reviewed?  A good code reviewer with a decent SAST tool could have found the vulnerability.  But we could only say that if we know whether Capital One invested in application security — that I do not know.
  • Were developers provided application security education?  While this is one that is harder to depend upon, it is a recommended best practice of today.
  • Maybe Amazon is to shoulder some blame for not making SSRF harder to abuse in their infrastructure?  I’ll elaborate on that in the next section.
  • Whoever made the business decision to go with AWS, was security part of that decision?  I’ll elaborate on that in the next section too.

AWS is like a car without seatbelts!

Recently, Evan Johnson from Cloudflare Inc wrote a blog Preventing the Capital One Breach.  That blog hits the nail on the head.

Let’s cut to the chase: The three biggest cloud providers are Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP).  All three have very dangerous instance metadata endpoints, yet Azure and GCP applications seem to never get hit by this dangerous SSRF vulnerability.  On the other hand, AWS applications continue to get hit over and over and over again.  Does that tell you something?

Microsoft is a company that learned about security the hard way.  It took them a long time before they understood that it is their responsibility to make products that are secure by default and hard for the user to misuse: putting the responsibility for security in the hands of the user is dangerous.  A manifestation of this is in their instance metadata endpoint, which builds in protections to stop SSRF attacks:

Screenshot: Azure instance metadata endpoint documentation

I can’t say Google has learned security the hard way; instead, I say they just hire a lot of smart people and clearly understand the importance of security to their business model.  Similar to Microsoft, they built SSRF defences into their instance metadata endpoint:

Screenshot: GCP instance metadata endpoint documentation

The point is that it is not sufficient for the attacker to just control the URL: the attacker must also be able to set an HTTP header in order to access the metadata endpoint.  In most cases, that is outside of the attacker’s control, which makes SSRF a lot harder to exploit.
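To illustrate with GCP: even a legitimate caller running inside an instance has to add a special header before the metadata service will answer, while a typical SSRF bug only lets the attacker choose a URL, not headers. A minimal sketch of such a legitimate call (it only succeeds from inside a GCP VM):

import java.net.HttpURLConnection;
import java.net.URL;

public class MetadataExample {
    public static void main(String[] args) throws Exception {
        // GCP's metadata service refuses any request that lacks this header, so an
        // attacker who can only control a URL (the classic SSRF primitive) gets nothing.
        URL url = new URL("http://metadata.google.internal/computeMetadata/v1/instance/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Metadata-Flavor", "Google");  // the required header
        System.out.println(conn.getResponseCode());            // succeeds only from inside a GCP VM
    }
}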

If Amazon had similar protections like Microsoft and Google, then it is unlikely that we would be talking about the Capital One security breach right now: it simply would not have happened.  So, why won’t Amazon put such protections in their metadata endpoint?  Because Amazon believes it is not their responsibility to make their services hard to abuse, instead it’s the customer’s responsibility to get everything perfect themselves.

And that’s where the seatbelt analogy comes in.  If these vendors were selling cars, the Microsofts and Googles would be selling the cars with seatbelts — understanding that you might crash, but they have designed the systems to reduce the impact to you.  Amazon on the other hand would be the vendor selling the car without seatbelts: if you crash, it’s your own fault.  If you die, don’t blame them.  They provided a car with a lot of nice features and if you read the manual and drove it exactly as you are expected to, you would have no problems.  But let’s be clear, if everything is not perfect, then you accept the consequences and they will let you know it’s your fault and not theirs.  See it in their own words:

“The intrusion was caused by a misconfiguration of a web application firewall and not the underlying infrastructure or the location of the infrastructure.  AWS is constantly delivering services and functionality to anticipate new threats at scale, offering more security capabilities and layers than customers can find anywhere else including within their own datacenters, and when broadly used, properly configured and monitored, offer unmatched security—and the track record for customers over 13+ years in securely using AWS provides unambiguous proof that these layers work.”

Concluding remarks

Amazon lacks security maturity.  They do not understand key concepts that those weathered in the industry have learned over many years of experience.  Trying to suggest a firewall is the fix for an application security problem is fundamentally wrong.  Relying on people to be experts at configuring firewalls to prevent attacks is a bad strategy: instead they should learn from the Microsofts and Googles about how to build infrastructure that is less fragile and less dependent upon perfect users.  That’s not to say that the other security controls do not have a place, but rather that they need to be understood as backup defences, not primary defences.

Hosting in AWS is like buying a car without seatbelts.  If your application gets hit as well, then maybe the stakeholders who pushed for AWS should shoulder the vast majority of the blame.  Next time security should be part of the decision making when choosing a cloud provider, which means both Azure and GCP should be preferred over AWS.