No other topic has so influenced and embroiled our industry as has the subject of security. Not to say that this influence has always been a positive one, mind you; in many cases, security remains that subject that scares the bejeebies out of us as architects. We hear the statistics; we read the news; we even swap the war stories at conferences and team meetings. The idea that somebody could trash a system that we built, undoing all of the careful business rule validation and/or database relational integrity rules, to say nothing of stealing money or product from the company, is almost too frightening to contemplate. It's the "slasher flick" of software architecture. We pretend that it's something that we are on top of; but, like any good horror film, we secretly whisper to ourselves the safety chant that always seems to work: "It can't happen to me, it can't happen to me, it can't happen to me." Does it work? Who knows? The worst part of the whole story is the uncertainty. Just because we do not know if we have been hacked does not mean we have not been hacked—at least, not until our system becomes the subject of the latest news story.
And yet, nobody seems to really address the heart of the issue, which is, simply put, "How do we avoid being the subject of one of those horror stories? How do we architect a secure system?"
Secure Enough
Bear in mind that it's not always clear what resources or assets interest the attacker. In many cases, even if your data is not particularly "interesting" (meaning that you have no credit-card numbers, passwords, or military secrets in your database), your system is still valuable. Often, an attacker wants it just for its computing resources: disk space on which to store warez, CPU cycles with which to mount distributed cryptanalytic attacks, socket connections from which to launch attacks against other systems, and so on. In some cases, the attacker simply wants to broadcast that he hacked your box by defacing your Web site. And, even if you do not make money directly off your Web page, how would your clients feel if they came to your home page one day and discovered, "Hacked by Mrs. Feldman's Third-Grade Science Class," in bright, blinking neon text?
Social-engineering attacks are another way that your carefully crafted security system can fail spectacularly. For more on social-engineering attacks, read Kevin D. Mitnick's The Art of Deception: Controlling the Human Element of Security, recognizing that you will become paranoid after reading it (which, come to think of it, is a good thing for architecting security). Having established, then, that there is no way to achieve "perfect" security, let us focus instead on having "enough" security.
Let us also be clear about the fact that no matter how much security we architect into the system, bad code can always defeat the best-laid plans of architects and men. The vast majority of known security flaws are due to simple mistakes in implementation, not huge holes in architecture. Suggesting that security is simply the architect's problem is like suggesting that quality is, too. So, before embarking on your design sessions, make sure that code reviews—the world's best security tool, by far—are on the project agenda.
Assuming that we have those two caveats under our belts, what can we do, from an architect's perspective, to avoid showing up on the nightly news?
Know What You Are Trying to Protect
Generally speaking, the first step in figuring out how to secure a system is knowing what you are trying to protect. This might seem like a ridiculous starting point, in some ways, but it's akin to suggesting that the first step to building a software system is knowing what to build. As architects, we would hardly expect to be able to craft a successful architecture without knowing the requirements of the system—be they expressed in agile user stories or up-front documentation. Similarly, we need to establish a set of requirements for the security of the system. Formally, this is known as a "threat model." Informally, it's a basic breakdown of the assets with which you are working: what assets an attacker might be interested in—either maliciously, to change or destroy, or benignly, simply to obtain—and ways by which an attacker might acquire that information.
Building a threat model is a subject beyond the scope of this article, but as a starting point, consider Bruce Schneier's "attack trees" approach (see Secrets and Lies for details) or, for a simpler and more lightweight alternative, "Guerilla Threat Modeling." Regardless of which you choose, a threat model acts as a guide, giving you a concrete goal against which to measure design decisions. Without one, security discussions become half-hearted, "well, the attacker could…" ruminations and flights of fantasy, and nothing concrete gets discussed or done.
Know How You Are Going to Protect It
Having established a basic breakdown of what you need to protect (and how much you're willing to spend to do it) by establishing your threat model, you next need to decide how to secure it. And, unfortunately, as our mystery guest alluded to in the Architects Anonymous meeting, just running your Web application over SSL does not cover your bases. Tempting as it might be as an architectural solution, SSL does not do a thing to protect your code against SQL injection attacks, improper authorization checks, or accidental information disclosures. What's worse, SSL might not be applicable, because you might be building an app that does not (gasp!) use HTML as its presentation layer. Suppose, for example, you are using some form of queuing system—Microsoft's MSMQ, or IBM's MQSeries, for example—to send messages between the user interface and the business-processing engine. How are those messages sent, where are they stored, and what processes or users have access to those messages while they sit in the queue? Even if the messages are transmitted over SSL to the server, there is a high likelihood that they will sit in plain, unencrypted form while they wait in the queue to be processed by the next agent in the system.
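To make the point concrete, here is a minimal sketch (in C#, and not tied to any particular queuing product) of encrypting a message payload before it ever reaches the queue, so that it does not sit there in plain text. The class and method names are invented for the example, and key management is deliberately glossed over—see principle 8 below.

```csharp
// Hypothetical helper: encrypt the payload with AES before enqueuing it.
// The resulting bytes (IV + ciphertext) are what you would hand to the
// queuing API; how the key is provisioned is a separate problem.
using System.IO;
using System.Security.Cryptography;
using System.Text;

static class QueuePayloadProtector
{
    public static byte[] Protect(string payload, byte[] key)
    {
        using (Aes aes = Aes.Create())
        {
            aes.Key = key;          // caller supplies a 128/192/256-bit key
            aes.GenerateIV();

            using (MemoryStream output = new MemoryStream())
            {
                // Prepend the IV so the receiving agent can decrypt.
                output.Write(aes.IV, 0, aes.IV.Length);

                using (CryptoStream crypto = new CryptoStream(
                           output, aes.CreateEncryptor(), CryptoStreamMode.Write))
                {
                    byte[] plain = Encoding.UTF8.GetBytes(payload);
                    crypto.Write(plain, 0, plain.Length);
                    crypto.FlushFinalBlock();
                }

                return output.ToArray();
            }
        }
    }
}
```

The receiving agent would read the IV back off the front of the message and reverse the process before handing the payload to the business logic.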
The temptation, then, is to presume that cryptography is somehow The Answer to All Problems Security-Related. If we only choose a cryptographic tool with a strong enough encryption key—digital signatures, perhaps—all of our security problems will be solved. Bruce Schneier himself encouraged this belief with Applied Cryptography, about which he later wrote:
In [Applied Cryptography], I described a mathematical Utopia: algorithms that would keep your deepest secrets safe for millennia, protocols that could perform the most fantastical electronic interactions—unregulated gambling, undetectable authentication, anonymous cash—safely and securely…. I went so far as to write, "It is insufficient to protect ourselves with laws; we need to protect ourselves with mathematics."
Unfortunately, Schneier was wrong, as he admitted in the preface to his next book, Secrets and Lies:
It's just not true. Cryptography can't do any of that.
The error of Applied Cryptography is that I didn't talk at all about the context. I talked about cryptography as if it were The Answer. I was pretty naive.
The result was not pretty. Readers believed that cryptography was a kind of magic security dust that they could sprinkle over their software and make it secure, that they could invoke magic spells like "128-bit key" and "public-key infrastructure." A colleague once told me that the world was full of bad security systems designed by people who read Applied Cryptography.
Ouch. So, if cryptography is not The Answer, what is?
The Answer
As Suzuki stated, the most important thing is to figure out what is the most important thing. And, in the case of software security, the most important thing is to figure out exactly what we are trying to do. It seems like we keep coming around to this point, but in many cases defining security in any concrete sense is a context-sensitive problem. Security for Web applications is a different problem from security for client/server systems, which in turn is a different problem from security for multi-UI environments. And each of these is different again from securing Web services or transactional business-logic processing layers.
We could easily go on for days talking in vagaries and generalities—and, unfortunately, many people (many of them "security consultants") do. We want to stay in the realm of the pragmatic, however, so we need something concrete to hang on to. Fortunately, we are not entirely out in the cold. John Viega and Gary McGraw, in their book Building Secure Software: How to Avoid Security Problems the Right Way, offer 10 principles for building secure software that I'll attempt to summarize here.
(1) Secure the Weakest Link
Just as any good military strategist will tell you that the frontal attack against the enemy's walls is a good way to get men killed, any good hacker will tell you that attacking a system's encryption is a great way to waste time and effort. Attackers do not attack points where cryptography is in use; they try to go around it. This means that they will prefer to attack the database or Web server directly, instead of hacking the SSL that is used to talk to it. Similarly, attackers will not attack the firewall directly. Instead, they will try to attack the applications that are visible through the firewall (which, by the way, means that port 80 is, without a doubt, the most heavily attacked port in the world).
Worse, often the weakest link is the people using the system. All the encryption in the world will not work if your users insist on having passwords of "password," and forcing them to change it every 30 days just causes them to get more creative about how to work around the password restrictions. (Helpful user-friendly password tip: Instead of requiring pass "words," require pass "phrases." Set the maximum password length to 4,096 characters; accept spaces and other characters; and encourage users to use their favorite verses, poems, or nursery rhymes. A pass-phrase of even just 80 characters in length—all lowercase, with no punctuation other than a space—is usually enough to defeat even the most determined brute-force attack. Then, you can stop requiring users to use Byzantine combinations of numbers, symbols, and mixed-case characters that do not show up in a single word.)
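If you want to see what that tip might look like in code, here is a hypothetical passphrase policy along those lines; the thresholds are illustrative, not a standard:

```csharp
// A hypothetical passphrase policy: generous maximum length, spaces allowed,
// and length favored over character-class gymnastics.
static class PassphrasePolicy
{
    const int MinLength = 20;     // illustrative thresholds, not a standard
    const int MaxLength = 4096;

    public static bool IsAcceptable(string passphrase)
    {
        if (string.IsNullOrEmpty(passphrase))
            return false;

        // Length, not symbol complexity, does most of the work here.
        if (passphrase.Length < MinLength || passphrase.Length > MaxLength)
            return false;

        // Spaces are welcome -- "mary had a little lamb whose fleece..." is
        // far stronger than "P@ssw0rd1".
        return true;
    }
}
```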
(2) Practice Defense-in-Depth
The last thing that we want a software system to resemble is one of those "hard, crunchy outside/soft, chewy middle" candies. Relying on a single layer of defense in a software system (such as the firewall) is the logical equivalent of trusting a single layer of defense in a bank. After all, we have the vault. Why bother with cameras, or guards at the door? Because, bluntly put, any one layer has its exploitable weakness; but the chances of an attacker being able to exploit the weaknesses of every layer in combination are further and further removed with every additional layer of defense. Given that you never know for certain that you have every bug and every possible security flaw covered, the more layers, the more time it takes an attacker to get through to anything.
(3) Fail Securely
This can also be interpreted as "assume insecurity." That is, when a portion of the system fails, do not assume that it failed because of some kind of processing error. Instead, assume that it failed because a hacker intended it to fail. For example, if you issue a SQL statement to the database where only one row should be returned, remember to check whether a second row came back. If there is a second row, it might be due to a SQL injection attack or similar nastiness. Always verify your assumptions; if they do not pan out, assume that it's an attack in progress, and behave accordingly.
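As a rough illustration, here is what "assume insecurity" might look like around a lookup that should return exactly one row; the table and column names are invented for the example:

```csharp
using System;
using System.Data.SqlClient;
using System.Security;

static class AccountData
{
    // Looks up a single account balance; anything other than exactly one row
    // is treated as a possible attack rather than a harmless quirk.
    public static decimal GetBalance(SqlConnection connection, int accountId)
    {
        using (SqlCommand command = new SqlCommand(
            "SELECT Balance FROM Accounts WHERE AccountId = @id", connection))
        {
            command.Parameters.AddWithValue("@id", accountId);  // parameterized, never concatenated

            using (SqlDataReader reader = command.ExecuteReader())
            {
                if (!reader.Read())
                    throw new InvalidOperationException("Account not found.");

                decimal balance = reader.GetDecimal(0);

                // The assumption was "exactly one row"; verify it.
                if (reader.Read())
                    throw new SecurityException("Unexpected extra rows; possible injection attack.");

                return balance;
            }
        }
    }
}
```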
(4) Follow the Principle of Least Privilege
In some ways, this is a variation on the defense-in-depth principle: Only grant the minimum access necessary to perform an operation, and only grant that access for the minimum amount of time necessary. For example, when accessing a database, use an account that is set to the minimum privileges necessary. If all you need to do is read a table, use an account that only has read access to that table. Then, should an attacker somehow manage to slip a SQL injection attack into the stream, the database-access control kicks in and prevents mayhem.
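A sketch of what this might look like in practice—invented names throughout—is a reporting component that connects with a login the DBA has granted nothing beyond SELECT on the one table it needs:

```csharp
// Least privilege at the data layer. The DBA would set up the account once,
// outside the application, with something along these lines:
//
//   CREATE LOGIN report_reader WITH PASSWORD = '<strong password>';
//   CREATE USER report_reader FOR LOGIN report_reader;
//   GRANT SELECT ON dbo.Orders TO report_reader;
//
using System.Data.SqlClient;

static class ReportingDatabase
{
    // Even if SQL injection slips through, this connection cannot INSERT,
    // UPDATE, DELETE, or touch any other table.
    const string ConnectionString =
        "Data Source=dbserver;Initial Catalog=Sales;" +
        "User ID=report_reader;Password=<from protected config>;";  // placeholder, not a literal

    public static SqlConnection OpenReadOnlyConnection()
    {
        SqlConnection connection = new SqlConnection(ConnectionString);
        connection.Open();
        return connection;
    }
}
```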
(5) Compartmentalize
It's easier to build defense-in-depth if the system is not a monolithic monster that requires all-or-nothing access. Also, it's easier to contain the damage that is done when—not if—an attack is eventually successful, if there are walls (firewalls, if you will) that prevent the hacker's access from spreading too far. This is partly the reason for layering a system. For example, if the Web presentation layer is compromised, it does not spread to the business-logic or transactional-processing layer, and so on. Role-based authorization can play a large part here, presuming that you architect portions of the system—as well as the users—to have roles.
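For instance, a business-layer method might re-check the caller's role itself, rather than trusting that the presentation layer already did; the role and method names here are purely illustrative:

```csharp
using System.Security;
using System.Security.Principal;
using System.Threading;

static class OrderService
{
    public static void ApproveRefund(int orderId)
    {
        // This layer enforces its own wall instead of trusting the Web tier.
        IPrincipal caller = Thread.CurrentPrincipal;
        if (caller == null || !caller.IsInRole("RefundApprovers"))
            throw new SecurityException("Caller is not authorized to approve refunds.");

        // ... proceed with the refund ...
    }
}
```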
(6) Keep It Simple
The fewer complexities, the easier the system is to understand. The easier it is to understand, the easier it is to debug and verify for security and correctness. The easier it is to debug and verify… well, the rest follows naturally. One basic way to keep things simple is to minimize the amount of security code. Within a given system, use a single security system—although not necessarily a single account or single role—to check and authorize requests. Within .NET, for example, this would be .NET Code Access Security (CAS) and/or the Windows operating-system security model. Keith Brown's The .NET Developer's Guide to Windows Security is an invaluable resource here.
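On the .NET side, one way to keep authorization code in one place is to let the runtime enforce role checks declaratively rather than scattering hand-rolled checks through the code base; this is only a sketch, with an invented role name:

```csharp
using System.Security.Permissions;

static class PayrollService
{
    // The runtime demands the role before the method body ever runs,
    // throwing a SecurityException if the current principal does not hold it.
    [PrincipalPermission(SecurityAction.Demand, Role = "PayrollAdministrators")]
    public static void RunPayroll()
    {
        // ... authorized work happens here ...
    }
}
```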
Remember, too, that while simplicity is something that users desire, what users desire is not always what is simplest—or right. For example, users might ask you to store their password somewhere in the registry, so that they do not have to type it in every time they start using the system. That makes the system easier to use, which is desirable, but it's clearly not simpler (how do you store the secret securely?), nor is it more secure. An architect is responsible to many parties—not just those who will use the system, but the people whose data will be stored in it, too—which means security carries just as much weight as usability.
(7) Promote Privacy
Privacy implies not only user-information privacy, but also systemic privacy. Attackers frequently "footprint" a system, long before attacking it, to have the right tools at hand when preparing to do so. Why give the attacker information for free, when providing a few levels of obscurity can deter the casual attacker? Instead of ASP.NET pages using the default ".aspx" extension, remap them instead to ".jsp" and let potential attackers think that they're attacking an Apache Tomcat setup. Ditto for running SQL Server on port 1433; choose instead port 1521 (Oracle's default Net Listener port). While this is not enough to create security (security through obscurity is never security), it's enough to turn the casual script kiddie away. It's the moral equivalent of locking the screen door on the porch: It won't keep out the determined attacker, but it cuts down the number of attackers by requiring just that much more sophistication on their part.
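As a small, hedged example of the "screen door" idea, a client can be pointed at SQL Server on a non-default port (the server itself still has to be reconfigured to listen there); the server and database names below are invented:

```csharp
using System.Data.SqlClient;

static class ObscuredDatabase
{
    // "host,port" syntax sends the client to a non-standard TCP port, so a
    // quick scan of 1433 alone will not find the listener. This is a speed
    // bump for script kiddies, not a substitute for real security.
    const string ConnectionString =
        "Data Source=dbserver,1521;Initial Catalog=Inventory;Integrated Security=true;";

    public static SqlConnection Open()
    {
        SqlConnection connection = new SqlConnection(ConnectionString);
        connection.Open();
        return connection;
    }
}
```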
(8) Remember that Hiding Secrets Is Hard
Just because a programmer thinks it difficult to dig a secret out of an obscure place does not necessarily mean it's hard to get. Programmers frequently like to hide secrets (like keys) in code, figuring that binary code is more or less inviolate. A few demos with ILDasm or Reflector are usually enough to turn that myth on its ear, but even unmanaged code is reversible. Similarly, tucking a secret away in a "secret location" on disk is meaningless. If an attacker can get a root kit on the system through another back door, a simple hook into the operating-system APIs (such as those demonstrated by the SysInternals utilities) will reveal any secret tucked away, be it on the file system, in the Registry, or wherever. Storing secrets is not impossible, and cryptography can help. For example, most modern systems do not store actual password text, but a cryptographic hash of that text instead, and compare a hash of what the user typed against the hash stored on disk. Before you rush to encrypt all of your secrets, however, remember that the encryption keys—whether secret keys or private keys—have to be stored someplace themselves, and they simply become the next secret that needs hiding. We're back to square one. Whenever and wherever possible, try to avoid storing secrets.
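To illustrate the hashing half of that story, here is a sketch of storing a password verifier rather than the password itself, using the framework's PBKDF2 support; the iteration count and sizes are illustrative:

```csharp
using System.Linq;
using System.Security.Cryptography;

static class PasswordHasher
{
    const int SaltSize = 16, HashSize = 32, Iterations = 10000;

    // Returns salt + derived hash; this is what gets stored, never the password.
    public static byte[] CreateVerifier(string password)
    {
        byte[] salt = new byte[SaltSize];
        using (RandomNumberGenerator rng = RandomNumberGenerator.Create())
            rng.GetBytes(salt);

        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            byte[] hash = kdf.GetBytes(HashSize);
            return salt.Concat(hash).ToArray();
        }
    }

    // Re-derives the hash from what the user typed and compares it to the
    // stored verifier.
    public static bool Verify(string password, byte[] verifier)
    {
        byte[] salt = verifier.Take(SaltSize).ToArray();
        byte[] stored = verifier.Skip(SaltSize).ToArray();

        using (var kdf = new Rfc2898DeriveBytes(password, salt, Iterations))
        {
            byte[] candidate = kdf.GetBytes(HashSize);
            return candidate.SequenceEqual(stored);   // ideally a constant-time compare
        }
    }
}
```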
(9) Be Reluctant to Trust
Unfortunately, just because you wrote the client code does not mean that the code executing on the client is your code. After code is installed on a remote machine, it's in the attackers' hands and can be ripped apart into its original form fairly easily. In a Web application, this means that you must assume that any and all validation on the client has been bypassed. The same is true in a client/server application. Validate early and validate often, and definitely validate any data coming across the wire.
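A minimal sketch of that server-side re-validation—with rules invented purely for the example—might look like this:

```csharp
using System;
using System.Text.RegularExpressions;

static class OrderRequestValidator
{
    // Whitelist validation: accept only what is known to be good, rather than
    // trying to enumerate everything that might be bad.
    static readonly Regex ProductCode = new Regex(@"^[A-Z]{3}-\d{4}$");

    public static void Validate(string productCode, int quantity)
    {
        if (productCode == null || !ProductCode.IsMatch(productCode))
            throw new ArgumentException("Invalid product code.");

        if (quantity < 1 || quantity > 1000)
            throw new ArgumentException("Quantity out of range.");

        // Never assume the client-side JavaScript (or rich-client UI) ran at all.
    }
}
```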
(10) Use Your Community Resources
In the field of cryptography, it's widely considered foolish to trust an algorithm or design that has not been publicly presented, examined, cross-examined, and validated. Most software packages that offer security of some form or another and that claim "proprietary algorithms" usually end up getting cracked within weeks, if not days, of their release. Case in point: The RC2 and RC4 encryption algorithms were supposed to be RSA trade secrets, yet both were reverse-engineered and posted anonymously to the Internet.
When in doubt, put your faith in something that has public scrutiny behind it. This does not mean that anything that is publicly documented (the "Thousand Pairs of Eyes" syndrome so widely promoted by open-source advocates) is completely secure, mind you—far from it, in many cases. Publicly documented servers have gone for years with security holes that were discovered only recently—and continue to be discovered. But you have a much stronger chance of staying safe if you stick to resources that have met at least some level of public scrutiny—whether open-sourced or not.
Conclusion
Was the guy at the party right? Do we need custom engines to be secure? At the end of the day, security is just another form of code quality: The more bug-free, the more secure and resistant to attack. So, in many respects, the best thing that you can do to improve the security of your system is the same thing that you can do to improve the usefulness of your system: Have a ruthless and religious policy about finding and eliminating bugs—preferably, with comprehensive code reviews, fast-cycle releases, and unit tests. But, most importantly, the key to making software secure is not the language that you use or the platform for which you program, but the attitude of architects and the team behind them. So long as you recognize that security is more than the software bits that are used—that it's something that the entire project team (including the users) has to believe in—you have a much better shot at architecting a secure system than the architects who believe that "firewalls will keep us safe."
Disclaimer: The original article is visible at https://msdn.microsoft.com/en-us/library/bb245797.aspx