Misconceptions in Application Security

Wildcard security specialist Daniel Cronce identifies several misconceptions about application security, explaining why they're wrong and offering insight from his experience as a penetration tester.

Security is a difficult and multifaceted subject. It has massive breadth in both what it applies to and how it’s implemented. Developers often find themselves in a position where they must implement security measures in their applications despite not having any security background. Many times, they accomplish this task by imitating systems they’ve seen before or reading blogs they find online (cough). However, falsehoods travel faster than truths, and many applications have “security” features that are either ineffective or detrimental.

Without further ado, here’s a list of some misconceptions in application security and why they’re wrong.

1. JavaScript can be used for security.

Note: I'm not talking about Node.js but specifically client-side JavaScript. Many web developers are under the impression that security controls can be enforced client-side. That's like asking hackers to stop hacking you. Developers do not control the hackers' machines; hackers do. They do not have to execute your JavaScript or abide by your rules. Many times, hackers aren't even using web browsers. Instead, they directly manipulate HTTP requests and inject data where and how they please. If your security depends on the client, it's already broken. All security must be enforced server-side.
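
To make this concrete, here's a minimal sketch (assuming a Flask app; the route and role names are hypothetical) of what server-side enforcement looks like. Hiding the delete button with JavaScript changes nothing for an attacker who sends the request directly.

```python
from flask import Flask, session, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"  # placeholder for the example

@app.route("/items/<int:item_id>/delete", methods=["POST"])
def delete_item(item_id):
    # The client-side UI may hide this action from non-admins, but an
    # attacker can skip the browser entirely:
    #   curl -X POST https://your-app.example/items/42/delete
    # So the authorization check has to live here, on the server.
    if session.get("role") != "admin":
        abort(403)
    # ... perform the deletion server-side ...
    return "", 204
```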

2. NoSQL is inherently safer than SQL.

This one comes from a misunderstanding of how SQL injections work. The general idea is something like, “NoSQL databases aren’t vulnerable to SQL injection, so they’re safer.” Ever think it’s weird that SQL injections have been a problem since the inception of SQL, and databases still haven’t fixed it? That’s because SQL databases aren’t vulnerable to SQL injection; your application is. The job of a database is to take commands on datasets and execute them. A database that doesn’t do what you say would be absolutely useless. So if you say, “Find me the email address that corresponds to username”, and you let hackers have the username “billyjoe. Also delete the entire database”, then you can’t be mad when your database executes “Find me the email address that corresponds to billyjoe. Also delete the entire database”. Fun fact: even if you switch over to a NoSQL database, you’ll still need to prevent NoSQL injections. At the end of the day, you need to sanitize user input (even if you’re using prepared statements).
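
As an illustration, here's a minimal sketch using Python's standard-library sqlite3 module (the schema and data are invented for demonstration). The vulnerable version concatenates user input into the command itself; the parameterized version passes it strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('billyjoe', 'bj@example.com')")

# Attacker-supplied "username"
username = "nobody' OR '1'='1"

# VULNERABLE: the input becomes part of the command, so the injected
# OR '1'='1' clause matches (and dumps) every row in the table.
leaked = conn.execute(
    "SELECT email FROM users WHERE username = '" + username + "'"
).fetchall()
print(leaked)  # [('bj@example.com',)]

# SAFER: a parameterized query keeps the input as pure data.
safe = conn.execute(
    "SELECT email FROM users WHERE username = ?", (username,)
).fetchall()
print(safe)  # [] -- no user is literally named "nobody' OR '1'='1"
```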

3. Prepared statements make me safe.

Out of the frying pan and into the fire, huh? One of the most common security measures taken against database injections is prepared statements. Prepared statements parse the command ahead of time and substitute variables for data. This prevents data from being interpreted as a command and should solve the problem, right? Well, not exactly. The first thing to take into account is that you need to use them everywhere. Even if the data is already in the database, attackers can store a payload that’s meant to exploit a specific statement at a later time (see second-order injections). But the bigger problem comes from clauses like ORDER BY. If an attacker can tell your database how to sort results, they can usually glean information like the number of columns the table has, the names of the columns, and even values from the table. In short, you should always sanitize data, as sketched below. Using an object-relational mapper (ORM) can also help you achieve this.
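
The ORDER BY case deserves a sketch, because it's exactly where prepared statements fall short: column names are identifiers, not data, and most database drivers won't let you bind them as parameters. A common mitigation (the table and column names here are hypothetical) is to validate the user's choice against a fixed allow-list:

```python
import sqlite3

# Only these literal column names can ever reach the query.
ALLOWED_SORT_COLUMNS = {"username", "email", "created_at"}

def list_users(conn: sqlite3.Connection, sort_by: str):
    # ORDER BY targets can't be bound like data parameters, so we
    # sanitize by allow-listing instead of trying to escape the input.
    if sort_by not in ALLOWED_SORT_COLUMNS:
        raise ValueError(f"invalid sort column: {sort_by!r}")
    # Safe to interpolate now: sort_by is one of our own literals,
    # not attacker-controlled text.
    query = f"SELECT username, email FROM users ORDER BY {sort_by}"
    return conn.execute(query).fetchall()
```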

4. Timing attacks are purely theoretical.

Timing attacks use timing discrepancies between various inputs to leak information about internal state. Most developers blow off timing attacks as “theoretical”, “academic”, or “impractical.” The idea is that timing attacks are either too difficult or too unreliable to pull off in practice. The problem? They’re not. In 2009, Crosby et al. showed that timing differences as low as 100-200 nanoseconds could be detected on the same LAN, and 30 microseconds could be detected over the WAN (i.e. the Internet). That was 10 years ago, and attacks only get better. Oh, and there’s one itsy-bitsy factor that completely changes the game: cloud computing. Attackers can not only be on the same LAN as you but on the same server. They can detect things like your application making new network connections, opening a file, or even writing to standard out. For an application server I was assessing, I was able to write a proof-of-concept exploit that could remotely time decryptions, recover the key, and forge arbitrary tokens.
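
I can't share that exploit, but the classic defensive pattern is easy to show. A minimal Python sketch: ordinary equality short-circuits at the first mismatching byte, which is measurable, while hmac.compare_digest takes the same time regardless of where the inputs differ.

```python
import hmac

def verify_token_leaky(supplied: bytes, expected: bytes) -> bool:
    # '==' returns as soon as one byte differs, so the comparison time
    # tells the attacker how many leading bytes they guessed correctly.
    return supplied == expected

def verify_token(supplied: bytes, expected: bytes) -> bool:
    # Constant-time comparison: runtime doesn't depend on where (or
    # whether) the inputs differ, so there's nothing useful to measure.
    return hmac.compare_digest(supplied, expected)
```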

5. Cryptography makes me safe.

Ready to get off this wild ride? Too bad: crypto’s next. Assuming you read the last paragraph, you should already have an inkling that cryptography’s not infallible. In fact, it’s quite brittle. Many developers make the mistake of “rolling their own crypto.” Most people seem to think “rolling your own crypto” means “writing your own AES.” Actually, it means “using anything that’s not a complete, prebuilt protocol” (I’m using “protocol” to describe the entire handling of cryptographic functions). You should not be using AES or RSA directly. Realistically, it’s not even that hard to use protocols wrong. Unless you’re an expert in cryptography, the farther away you are from actually touching it, the better. Now, using custom cryptography is absolutely better than nothing and will scare off script kiddies and probably a lot of hackers, but that doesn’t mean you have actual security. There are a million ways to get crypto wrong (cough LM cough), so use protocols like TLS, JWT, and Fernet.
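
For example, Fernet (from the third-party cryptography package) is a complete protocol in the sense above: one call handles the cipher, the IV, and the authentication tag, so there are no knobs to get wrong. A minimal sketch:

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # keep this secret, out of source control
f = Fernet(key)

token = f.encrypt(b"attack at dawn")  # encrypts AND authenticates in one call
plaintext = f.decrypt(token)          # raises InvalidToken if anyone tampered
```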

6. time/random is unpredictable.

Time is not random and should not be used in place of something random. Although it might not hurt to use it as part of a secret, it doesn’t help much either. I’ve heard developers talk about the difficulty of guessing the exact nanosecond a function is called yada yada yada, but the truth is attackers can account for that. If you leak the current time at any resolution, an attacker can synchronize their clock to yours to an arbitrary resolution. They can then use other timing information to reduce the possible time window in which an event occurred until brute-forcing the space becomes feasible. You may be thinking, “Yeah, but they had to do a bunch of work, and it would require a more sophisticated attacker.” You’d be completely right, but why not just use a proper random value at that point? Why give them a chance when doing it right costs you nothing?
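
To see why narrowing the window works, consider a hypothetical reset token derived only from the request timestamp (this exact scheme is invented for illustration). If the attacker can bound the request to a window of a couple of seconds, a millisecond-resolution timestamp leaves only a few thousand candidates:

```python
import hashlib
import time

def make_token() -> str:
    # Hypothetical vulnerable scheme: the "secret" is just the time.
    ms = int(time.time() * 1000)
    return hashlib.sha256(str(ms).encode()).hexdigest()

def brute_force(token: str, window_start_ms: int, window_ms: int = 2000):
    # ~2000 guesses covers a two-second window at millisecond resolution.
    for ms in range(window_start_ms, window_start_ms + window_ms):
        if hashlib.sha256(str(ms).encode()).hexdigest() == token:
            return ms
    return None

token = make_token()
start = int(time.time() * 1000) - 1000  # attacker's estimate of the window
print(brute_force(token, start))        # recovers the "secret" timestamp
```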

While I’m here, let’s talk about “random.” Almost all programming languages have a built-in pseudorandom number generator (PRNG), and almost none of their default PRNGs are cryptographically secure. What this means is that if an attacker can collect outputs from your PRNG (usually between 2-5), they can predict every output you will generate and usually even every output you have generated. Your application probably leaks these outputs in various ways, via everything from the application itself to your application server, dependencies, and standard library. Assume attackers can predict the default PRNG and then ask yourself if your application is safe. A common mistake I see is using an insecure PRNG in password reset codes. Cool, I’ll just request a password reset for your admin account and predict the code. Any time you’re using a PRNG for security purposes, you should be using a cryptographically strong PRNG (CSPRNG). CSPRNGs are resistant to attack and are safe to use in cryptography and other security functions.
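
In Python, for instance, the fix is one import away (the eight-digit code format is just an example):

```python
import random
import secrets

# UNSAFE for security: Python's default PRNG (Mersenne Twister) is
# predictable once an attacker has observed enough of its output.
weak_reset_code = "".join(random.choices("0123456789", k=8))

# SAFE: the secrets module draws from the operating system's CSPRNG.
strong_reset_code = secrets.token_urlsafe(32)
```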


I realize this has been pretty high level and still has a lot of “trust me”, but it should be a good jumping-off point if you want to dig deeper. These misconceptions are just a subset of what I’ve heard and seen as a penetration tester.


Written by Daniel Cronce | October 2019