Dealing with Architectural Security

J2EE Journal  
OCTOBER 15, 2005

Application architects have heard about the increased importance of security, but in many cases they really don’t know how to approach this issue. In this article, I’ll share my experience and define a few basic steps and checkpoints for building application architecture with security in mind.

This year, architects have started to face several domestic (SOX and HIPAA) and even international (Basel II) regulations that require a certain level of protection for the personal and financial data processed and owned by companies. Though network and operating system security solutions have done a great job in their domains, there is still one weakly protected path to corporate data: the spectrum of commercial and homegrown applications.

I won’t discuss why security is important and what is required by the regulations because you can find a lot of related materials in JDJ and other on- and off-line resources. My goal is to identify and explain the most important steps to be taken toward security when building application architecture.

Step 1: Requirements
Review your business and technical requirements to see if security is addressed there. If security requirements are absent or denied, e.g., "encryption of communication between servers is not required," collect and verify the requirements. In both cases, check the requirements with the legal and compliance department; the statement "we usually do not do encryption for internal data exchange" may no longer be valid in the light of new regulations. During requirements gathering and analysis, identify corporate security resources and policies. In particular:

  • Consider integration with identity management and access control systems (for example, Liberty Alliance Identity Management or BEA's WebLogic Enterprise Security solutions).
  • Discuss user activities that should be controlled and later audited in order to meet the policies.
  • Consider the use of application state/status monitoring (e.g., via JMX) to recognize abnormal behavior potentially caused by security violations more easily (a minimal sketch follows this list).
  • Consider the operational procedures for obtaining access permissions for application users and for periodic recertification of user access.

You can find some examples of security policies in the sidebar: Examples of Security Policies for Web Applications.
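To make the JMX monitoring point more concrete, here is a minimal sketch of a standard MBean that exposes login counters to whatever JMX console the operations team already uses. The names (LoginStats, the myapp object-name domain) are illustrative assumptions, not part of any product mentioned above.

// LoginStatsMBean.java - the management interface visible in a JMX console
public interface LoginStatsMBean {
    int getFailedLoginCount();
    int getSuccessfulLoginCount();
}

// LoginStats.java - standard MBean implementation registered at application startup
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicInteger;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class LoginStats implements LoginStatsMBean {
    private final AtomicInteger failed = new AtomicInteger();
    private final AtomicInteger succeeded = new AtomicInteger();

    public void recordFailure() { failed.incrementAndGet(); }
    public void recordSuccess() { succeeded.incrementAndGet(); }

    public int getFailedLoginCount() { return failed.get(); }
    public int getSuccessfulLoginCount() { return succeeded.get(); }

    // Register the MBean once, e.g., from a startup servlet or context listener.
    public static LoginStats register() throws Exception {
        LoginStats stats = new LoginStats();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(stats, new ObjectName("myapp:type=LoginStats"));
        return stats;
    }
}

A sudden spike in the failed-login counter, watched from the console, is exactly the kind of abnormal behavior this step is meant to surface.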

Step 2: Sensitive Resources
Examine the business models, data sources, the data itself, and the user community for your future application. This should help you recognize the points that would be most lucrative to a potential intruder. In the process, you have to answer at least some questions, such as:

  • How does the business model implemented by the application affect the financial reporting and status of your organization?
  • Does the data include personal and/or financial information, and how attractive might it be to an intruder?
  • Are the data sources reliable and secured, and can you verify this if necessary?
  • Are the application customers internal or external with regard to the served organization? If they are internal, do you really know who they are, e.g., does your application use information from the user provisioning and identity management system? If the users are external, are they controlled by your partner/provider or consumer organization, or are they an open public audience?
  • Due to the nature of the application and data, should you expect targeted intruder attacks, or is it likely the application will face less sophisticated "curiosity" attacks?

Step 3: Creation of the Architecture
Armed with the knowledge of corporate and industry policies on the one hand, and with the picture of potential spots of security violation in your application on the other, create the architecture and the high-level design. Since we know there is no such thing as 100% security for a functioning application, the architect has to prioritize the potential risks of deliberate and accidental security violations and intrusions. It's very important to estimate the consequences of a security violation, or the impact of security breaches on the application, other applications, and your entire organization. Keep in mind that some intrusions are highly probable but may have little or no impact. Others, on the contrary, are much less probable but may have terrible consequences. Here are two examples:

1. Let's assume you've designed a Web application that will be deployed in a DMZ (demilitarized zone, i.e., the network zone between two firewalls). These days, you have to expect continuous hacker attacks; if you constructed and deployed the application smartly (see the Web Application Security Consortium), the majority of attacks will "die" there and never penetrate the middle and back-end layers.

2. If you design an "internal" application that is accessible to the operations team and supporting developers, there are a lot of risks as well. For instance, you have to watch whether a database user name/password pair is stored "in the clear" in the application configuration file. The probability of password misuse is low, but, as you know, in crisis situations we use all available developers, some of whom may be foreign contractors or even offshore programmers who have had little or no background check. So a "clear" password is a piece of cake for the really bad guys; with such a password an intruder can get access to the company's strategic data. Then just use your imagination…

When security risks are prioritized, it's easier to concentrate on the most dangerous ones and address them sequentially, in an iterative manner, spreading the cost of security controls over multiple phases of application implementation.
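Returning to the clear-text password example, one common way to keep database credentials out of the application configuration file is to let the application server own them and hand the application a container-managed DataSource through JNDI. The sketch below assumes a hypothetical JNDI name, jdbc/AppDS, configured in the server's connection pool; the application code never sees the user name or password.

// OrderDao.java - persistence code that borrows connections from the container
import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class OrderDao {
    private final DataSource dataSource;

    public OrderDao() throws NamingException {
        InitialContext ctx = new InitialContext();
        // The credentials live in the server's pool definition,
        // not in any file shipped with the application.
        dataSource = (DataSource) ctx.lookup("java:comp/env/jdbc/AppDS");
    }

    public Connection openConnection() throws SQLException {
        return dataSource.getConnection(); // authenticated by the container
    }
}

Because no developer or contractor ever handles the password, the "piece of cake" scenario above simply disappears for this application.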

Step 4: Secured Design Solutions
After you have identified and prioritized security risks, try to come up with more secure solutions, to the best of your knowledge. In doing so, don't ignore the operational aspects of the application; a lot of security concerns may be covered via operational activities, not by the code alone. For example, the application might not need to manage login credentials for its users if there is a strong user authentication operational procedure in place. In another example, Application A maintains a password for Application B in its configuration file in encrypted form; if both applications are deployed on the BEA WebLogic platform in a trusted domain, Application A would need neither the password encryption nor the password itself for Application B. In most cases, just following two architectural principles can provide a much higher level of security for the application:

  • Layered application architecture. The J2EE platform perfectly supports layered architecture, which contributes to the scalability, stability, and security of the entire application. If you accept the idea that each layer of the application responds to its own special requirements (e.g., the Presentation layer responds to user experience requirements; the Business layer, to business requirements; the Persistence layer, to data management requirements), it will be much easier for you to design the application and preserve its security integrity.
  • Separation of responsibilities. For example, delegating data access from the Presentation layer to the Business layer can protect your database from easy exposure to an intruder of the Web application.
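As a minimal illustration of both principles, the sketch below shows a Presentation-layer servlet that delegates to a Business-layer facade instead of querying the database itself; the class, JSP, and parameter names are illustrative assumptions.

// AccountSummary.java - value object passed from the Business layer upward
public class AccountSummary implements java.io.Serializable {
    private final String accountId;
    private final java.math.BigDecimal balance;

    public AccountSummary(String accountId, java.math.BigDecimal balance) {
        this.accountId = accountId;
        this.balance = balance;
    }
    public String getAccountId() { return accountId; }
    public java.math.BigDecimal getBalance() { return balance; }
}

// AccountService.java - Business-layer facade; the only code allowed to reach
// the Persistence layer
public interface AccountService {
    AccountSummary getSummary(String accountId);
}

// AccountSummaryServlet.java - Presentation layer; it never touches SQL or a
// database connection, so a compromised Web tier does not expose the database
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountSummaryServlet extends HttpServlet {
    private AccountService accountService; // set in init() or injected; wiring omitted in this sketch

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        AccountSummary summary =
                accountService.getSummary(request.getParameter("accountId"));
        request.setAttribute("summary", summary);
        request.getRequestDispatcher("/WEB-INF/jsp/summary.jsp")
               .forward(request, response);
    }
}

The Web tier deals only with request parameters and value objects, so even a successful attack against it yields no direct path to the data store.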

I can easily anticipate that many architects will say, "Look, you recommend that we do a lot of extra work while we have to implement the business tasks in the application," or "If we start to implement all these security controls and protections, we'll get a certain performance degradation in the application," and so on. Yes, you're absolutely right! But if you think about security as an additional feature instead of an organic business requirement that makes the application trusted, security will be the first thing sacrificed. That's why it's better to embed security into the structure of the application at the earliest possible step of the architecture design process.

Security requires resources and processing time. To support a certain level of performance and still address security, the architecture has to represent "compensating" solutions. That is, you have to find solutions that save resources and execution time so they can be spent on application security needs at runtime. For example, if security controls take much time, use caching more intensively; if gathering audit information about user activities slows down application response, acquire the audit data asynchronously. For instance, when implementing an MVC pattern with Struts and dealing with audit, the ActionServlet or the code pointed to by the ActionForward class can send audit data via JMS to the audit storage instead of holding up the response thread while writing audit data into the database itself.
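Here is a hedged sketch of that idea, assuming Struts 1.x and a container-managed JMS queue; the JNDI names jms/AuditConnectionFactory and jms/AuditQueue, and the TransferAction class, are assumptions. The Action sends a short text message and returns immediately; a message-driven bean can write the audit record to the database later.

// TransferAction.java - Struts Action that audits asynchronously via JMS
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public class TransferAction extends Action {

    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request,
                                 HttpServletResponse response) throws Exception {

        // ... perform the business operation here ...

        sendAudit("TRANSFER", request.getRemoteUser());
        return mapping.findForward("success");
    }

    private void sendAudit(String action, String user) throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory factory = (QueueConnectionFactory)
                ctx.lookup("java:comp/env/jms/AuditConnectionFactory");
        Queue auditQueue = (Queue) ctx.lookup("java:comp/env/jms/AuditQueue");

        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage message = session.createTextMessage(
                    action + "|" + user + "|" + System.currentTimeMillis());
            session.createSender(auditQueue).send(message);
        } finally {
            connection.close(); // closing the connection closes the session and sender
        }
    }
}

If audit writes slow down, only the queue depth grows; the user's response time stays flat, which is exactly the "compensating" trade-off described above.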

Step 5: Architecture Security Review
When the architecture and high-level design are complete, invite the legal and compliance department to review them. Your goal is to get a sign-off on your design and security risk mitigation solutions.

Step 6: Post-Design Security Testing
The architect can contribute a lot to security even in post-design phases:

  • If you are the architect who accompanies the project through the implementation, I would recommend involving the legal and compliance department once again to review the code from a security perspective.
  • You shouldn't concentrate on static analysis of the code against security policies; delegate such tests to QA. However, since you are probably the one who knows the weaknesses of the whole application (shortcuts and design compromises taken during implementation), you are the right person to design penetration tests.
  • Finally, since you know what to expect from the integrated execution of the different elements of the application, you can identify "unusual" activities in the application log (the open source commons-logging package with log4j, or the standard java.util.logging package, may be recommended for logging). Recognizing these activities may help automate security log reviews in the application maintenance phase (a minimal sketch follows this list).
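A minimal sketch of that logging idea, using commons-logging over log4j: the dedicated category name myapp.security and the message format are assumptions, chosen so that a maintenance-phase script can filter and parse the security events in one pass.

// SecurityLog.java - funnel security-relevant events into one logger category
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public final class SecurityLog {
    private static final Log LOG = LogFactory.getLog("myapp.security");

    private SecurityLog() { }

    // Keep the message format stable so review scripts can parse it later.
    public static void unusualActivity(String userId, String detail) {
        LOG.warn("UNUSUAL_ACTIVITY user=" + userId + " detail=" + detail);
    }
}

A call such as SecurityLog.unusualActivity("jsmith", "3 exports of the full customer table in 5 minutes") then shows up under a single, easily filtered category.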

Now, you are ready to face the security challenge! By guarding your application, you guard your organization, your customers, and, finally, yourself.


SIDEBAR

Examples of Security Policies for Web Applications

  • UserID and password must be sent via the POST method only.
  • Password should be encrypted when stored in the database.
  • In production, remove all “backdoor” login code.
  • Assigned accounts should be locked if the number of wrong login attempts exceeds N.
  • Validate all data coming from external resources, especially from the user’s browser.
  • Input data should be validated using the strongest applicable validation level (Exact Match, Known Good Validation, Exclude Known Bad); see the sketch after this list.
  • In case of fatal errors, the application has to fail securely (error handling).
  • Confidential information should not be written in regular log files.
  • Log file access should be protected.
  • No messages sent back to the browser may contain any debug information.
  • No messages sent back to the browser may contain any server-side information.
  • Login error messages should not differentiate between a wrong UserID and a wrong password.
  • No database-related information should be returned back to the user in error messages.
  • If cookies are not protected by encryption (hashing, obfuscation), the data in them should be validated before use.
  • Session time-to-live must conform to corporate security policies, not to the business model (if needed, the session has to be gracefully refreshed).
  • Avoid hidden fields in the Web page.
  • Where possible, for all requests choose the API that can provide a UserID and password, and use authorization control with the least-privileges rule.
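As an illustration of the "Known Good Validation" policy above, here is a small whitelist validator; the field names and patterns are assumptions, not a complete rule set, and anything that does not match is rejected outright.

// InputValidator.java - accept only input that matches an explicit whitelist
import java.util.regex.Pattern;

public final class InputValidator {
    private static final Pattern ACCOUNT_ID = Pattern.compile("^[0-9]{8}$");
    private static final Pattern USER_NAME  = Pattern.compile("^[A-Za-z0-9_]{3,20}$");

    private InputValidator() { }

    public static boolean isValidAccountId(String value) {
        return value != null && ACCOUNT_ID.matcher(value).matches();
    }

    public static boolean isValidUserName(String value) {
        return value != null && USER_NAME.matcher(value).matches();
    }
}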
