Now that I'm enjoying my time as an exchange student at Caltech, I decided to look back at some of the great things that happened to me at UCL. This time I'll tell you about an effective way to shield your website from various exploits: by introducing more vulnerabilities.
Being a 2nd year computer scientist at UCL means that several times a year you'll have so-called scenario weeks. During a scenario week, all lectures are cancelled (woooo) and you have a week-long computer science and/or software engineering challenge (wooooooo).
To help you get started, the first couple of scenario weeks are reasonably tame. One of them involves learning about web app security and popular exploits by implementing two versions of the same website - the first version is vulnerable to popular exploits (namely OWASP Top Ten) and the second version is as bullet-proof as you can get.
Once you're done with the implementation, you produce videos and reports of all of the exploits you managed to use on your first web app, and then produce a similar report to show how every single one was prevented in your second web app. I had a bunch of web design experience prior to that scenario week so I personally didn't learn anything mind-blowing, but it was certainly a fun challenge to tackle.
Aside: I mentioned having some web design experience - I've actually been making websites since I was 11. The designs for some of them were hilarious (I didn't immediately discover CSS) so I will most likely make a separate post about them too. (4th Feb 2018 edit: Here's the post!)
"Implicit" cross-site scripting protection
We finished making the "bad" version of the web app reasonably quickly (it doesn't take long to break things); you can find the final "masterpiece" in Chris Ul's GitHub repo. We moved on to producing reports and videos shortly after, and everything was going smoothly until we got to cross-site scripting (XSS).
The weird (and funny) part was that every single exploit we'd tried before worked successfully: cookie hijacking, SQL injections, cross-site request forgery, you name it. Every exploit worked like clockwork because we purposefully made our website vulnerable. But that did not apply to XSS - had we mis-engineered our web app and unknowingly made it more secure than it should have been?
As funny as it sounds, it wasn't completely out of the question. A quick reminder on how XSS works: On most websites, when you register you specify your name and other details in your profile. These details are then shown to other users when you, for example, post comments. Remember that everything you see is just HTML source code that your browser has parsed, and your browser normally can't tell the difference between proper, readable text and HTML tags that should be parsed. If I were to set my name to
<script>alert('Hello!');</script> and the website wouldn't have proper XSS protection in place, every user who opens a page with my comment would get an annoying
Hello! popup. Now this example is relatively harmless, but people can execute all sorts of nasty scripts using XSS.
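The "proper XSS protection" mentioned above usually boils down to escaping HTML special characters before displaying user input. Here's a minimal sketch in JavaScript - `escapeHtml` is a hypothetical helper written for this example, not something from our actual app:

```javascript
// Minimal sketch of HTML escaping, the standard XSS defence.
// Replacing '&' first matters: otherwise we'd double-escape our own output.
function escapeHtml(text) {
  return text
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

const name = "<script>alert('Hello!');</script>";
console.log(escapeHtml(name));
// The <script> tags become plain text, so the browser displays the
// payload instead of executing it.
```

With this in place, my malicious "name" would show up on the page as literal text rather than an annoying popup.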
With this knowledge in mind, we went through the source code for our app to see if we were performing sanitisation or any other "unnecessary" checks. Looking at our insert query:

// ...
$q = "INSERT INTO snippets (owner_id, content) VALUES ('".$_GET['user_id']."', '".$_GET['content']."')";
$res = mysqli_query($conn, $q);
// ...
Looks pretty insecure to me - so the issue wasn't in sanitisation. We brainstormed for a second and then it dawned on us.
All the scripts we used for our XSS injections had a structure similar to
<script>alert('Hello!');</script> - you might notice that it contains some single quotes (
'). These single quotes broke the SQL query you saw above, so nothing was inserted into the database and our users were safe. In essence, the fact that our system was vulnerable to SQL injections saved our users from specific XSS attacks.
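The failure is easy to see if you rebuild the query string by hand. Here's a sketch of the same concatenation in JavaScript (the table and columns mirror the PHP above; the user_id of 42 is made up for the example):

```javascript
// Rebuilding the vulnerable query string to see what the database receives.
const content = "<script>alert('Hello!');</script>";
const q = "INSERT INTO snippets (owner_id, content) VALUES ('42', '" + content + "')";
console.log(q);
// The quote before Hello! terminates the SQL string literal early,
// leaving "Hello!');</script>')" dangling - a syntax error, so the
// INSERT fails and the payload never reaches the database.
```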
Now that's not some obscure bug that requires decades of computer science experience to pinpoint. In fact, this is probably the simplest yet silliest "bug" I've ever encountered in my life, which is why we spent a good 10 minutes laughing at the very idea.
Of course, if we were to use double quotes instead (i.e.
<script>alert("Hello!");</script>), our XSS attack would succeed and someone could steal our users' data. This is where the second punchline comes in - it's safe to say that any meaningful JS script contains at least one string, and the same goes for scripts used in XSS attacks. Unless you're doing something weird, this means the script has to contain at least one quote symbol.
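For contrast, here's the same concatenation sketch with the double-quoted payload (again with a made-up user_id of 42):

```javascript
// The same vulnerable concatenation, but the payload avoids single quotes.
const content = '<script>alert("Hello!");</script>';
const q = "INSERT INTO snippets (owner_id, content) VALUES ('42', '" + content + "')";
console.log(q);
// No stray single quote to break the SQL string literal this time,
// so the INSERT succeeds and the script is stored for every visitor.
```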
I personally always use single quotes in my JS code (and sometimes backticks for template literals, but that's unavoidable). Naturally, I would want to convince you to switch to
the dark side single quotes too. There are many ways to achieve that, and thanks to this bug here's a brand new take on the problem:
Now, a friend of yours, who's not very good with webapp security, makes a site everyone begins to use. By convention, they'll use single quotes in their SQL queries.
Eventually someone will try to perform an XSS attack to spy on you. Remember that in our perfect world everyone uses single quotes - including the attacker, whose payload will therefore contain single quotes and won't even get inserted into the database, thanks to the bug we've just covered. Thus the world is safe.
If this doesn't make you want to only ever use single quotes in your JS code, nothing ever will. Either way, thanks for your attention.