
Raman Sud




Are You Ready to Run with the IT Run Book?

A firm grasp of everything brings benefits

Any developer or IT organization can attest to the fact that flawless application development does not ensure flawless deployment when the time comes to roll the application out onto a WebLogic application server. Successful deployment requires that application configurations throughout the IT infrastructure stack be dialed in correctly for the application to run properly.

Unfortunately for IT organizations, proper application configurations are often lost in translation (or fall victim to human error) between development and production. The result can be slow deployment or application downtime - both of which are mortal sins in the world of critical business applications. Fortunately for businesses, this problem has not gone unnoticed, and an emerging class of technology centralizes and automates the creation of these application configuration files.

In any WebLogic development organization, developers typically use e-mail, text docs, or verbal communication to set forth their recommendations on what configuration parameters the QA team should use to adequately support the latest binary dump in a testing environment. For example, the developer might have the QA team check to ensure that ShrinkingEnabled is set to "true" under the JDBC connection pool, and that the number of connection pool threads is set to 50. Sometimes the info is very straightforward (e.g., "Here's the URL for the DB connection"); sometimes it takes a little research (e.g., "Here's the transactional data source JNDIName"). In most cases, this is not a one-sided exchange. Getting the QA build support folks set up with the best testing environment takes some time. Also, some of the information, such as the Server and SSL listen port settings, is dependent on their environment. But, in the end, the settings are defined, the binaries are checked into the build process, and the developer is essentially removed from the configuration aspects of getting his or her application running properly (see Figure 1).
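
To make that hand-off concrete, the developer's notes might boil down to something like the following WLST-style sketch. The connection details, pool name, and MBean path shown here are purely illustrative (and vary by WebLogic version); in most shops today these values travel as notes in an e-mail rather than as a script:

    connect('qa_admin', 'qa_password', 't3://qa-host:7001')

    edit()
    startEdit()

    # Navigate to the connection pool; the path shown follows the 8.1-style
    # MBean tree, and the pool name "AppPool" is made up for this example.
    cd('/JDBCConnectionPools/AppPool')
    set('ShrinkingEnabled', 'true')   # the developer's recommendation
    set('MaxCapacity', 50)            # pool sized at 50 for the QA load

    save()
    activate()
    disconnect()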

On the production side of the IT house, configuration settings appear in a very different light. In a typical production environment in which IT managers are busy keeping 1,000 or so mission-critical applications up and running all the time, 99.999% availability is top priority. Unlike the development organization, the folks in IT are closely involved with ongoing maintenance and upkeep of the IT infrastructure stack (Web servers, app servers, databases, etc.). They have a deep concern for the platforms supporting the J2EE applications in production, because their jobs are on the line if these platforms fail. Moreover, they are hard-pressed to manage the complexities associated with all the underlying configuration files, both within WebLogic and residing across all the other assets in the application infrastructure stack.

In the high-stakes world of application performance and availability, the inner workings of a particular application take a back seat to whatever is required to keep that application up and running at maximum efficiency.

It's no secret that the production people will do pretty much anything to stop an application from crashing, or to get a failed application back online when it goes down. In these cases (although the developer is usually the first one pulled in), the problem is most often the underlying configurations within the supporting IT infrastructure. It's usually not the code itself that causes the crash. The tendency is for lots of blame to be handed out, without much happening to ensure that more configuration-related issues don't crop up in the future.

At a high level, development and IT organizations both want the same thing: to move new applications into production as quickly and smoothly as possible. However, their approaches are totally different. For IT infrastructure team leaders, making ongoing adjustments to low-level parameters around an application that they know little about is just a fact of life. Developers, on the other hand, rely on coding expertise and industry know-how to put today's leading technologies to good use. They are in the business of creating and delivering industry-leading applications. Developers don't delve into the intricacies of the various IT assets supporting the applications. In their creative world, new capabilities and competitive advantage are key. Finding the best way to set up the surrounding assets in support of these applications is IT's business.

Is it enough to simply see an application through QA to full GA release, and then let the staging and production folks fend for themselves? Or is this a problem? Should IT and development cooperate more closely on ironing out infrastructure requirements? One giant insurance company, which asked not to be named in this article, thinks they should. Unlike most organizations, this company's IT department maintains control over infrastructure setup across the entire application life cycle, including development, QA, performance testing, staging, and production. This company's developers don't control their own testing sandboxes. If the application they are working on requires a change to the JDBC provider, the developer opens a ticket with the IT department.

This example sounds a bit extreme, but it stems from a desire to capture and log everything required to build what's known as the Application Run Book. To solve the problem of Run Books that don't accurately reflect the production environment, the IT team is working far upstream in the development environment to gather the full set of configuration data that will eventually be used in support of the application running in production. In the case of this company, getting an accurate gauge of the application infrastructure means beginning to gather data in support of the Run Book before the new development project is ever launched.

A second example comes from a major financial services company. The company needed a more durable tool for IT to use when defining disaster recovery information and troubleshooting application issues in production. The solution they identified meant that all the teams involved in building and delivering a new J2EE application needed to engage very early on and define a set of configuration data that would feed directly into their build-tracking solution. To this end, they implemented a process by which every dev build checked into PVCS must be accompanied by an electronic set of infrastructure configuration data. This data had to encompass the entire infrastructure stack, including WebLogic. It also had to include Apache and their Oracle database, all the way down to the port settings that would eventually be required on the firewall.
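
What that "electronic set of infrastructure configuration data" looks like has not been published by the company, but one plausible shape is a simple manifest that travels with each build and is rejected if it does not cover the whole stack. The field names and values below are invented for illustration:

    # Hypothetical build manifest accompanying a check-in: one record covering
    # the full infrastructure stack, from WebLogic down to firewall ports.
    build_manifest = {
        "build_id": "claims-app-2.4.1-b117",
        "weblogic": {
            "jdbc_pool": {"ShrinkingEnabled": True, "MaxCapacity": 50},
            "server": {"ListenPort": 7001, "SSLListenPort": 7002},
        },
        "apache": {"KeepAlive": "On", "MaxClients": 150},
        "oracle": {"service_name": "CLAIMSDB", "sessions": 300},
        "firewall": {"open_ports": [7001, 7002, 1521]},
    }

    REQUIRED_SECTIONS = ("weblogic", "apache", "oracle", "firewall")

    def validate(manifest):
        """Reject a check-in whose manifest does not cover the whole stack."""
        missing = [s for s in REQUIRED_SECTIONS if s not in manifest]
        if missing:
            raise ValueError("manifest missing sections: %s" % ", ".join(missing))
        return True

    validate(build_manifest)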

If these companies are any indication, in coming years developers will gradually be required to have input into the production Run Book. This will likely focus on WebLogic configuration information. However, as seen here, developers will also need to scope peripheral technologies they do not currently test or use. Perhaps they will even help define best practices for the IT team to rely on when their application arrives in the production environment.

Of course it's not likely that developers will be defining actual configuration parameters in support of their application after the QA testing cycle is complete. But, in their drive to move Web applications out to production faster, and keep them constantly up and running afterwards, companies are definitely looking for new technologies to facilitate the promotion of an application. It could well turn out that developers have full knowledge of, and perhaps even control over, associated infrastructure parameters throughout the application life cycle. To achieve this, companies will certainly consider the benefit of automation in managing this information. Using automation, whole environments, with their associated low-level configuration settings, can be captured and redeployed. This can be done with minimal involvement from the IT infrastructure team, in a fraction of the time it currently takes to manually configure a new environment. Compared with other new technologies appearing now, automation offers the highest potential return to IT infrastructure managers looking to trim costs and improve organizational efficiency.
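
As a rough sketch of what "capture and redeploy" could mean in practice, an automation tool might serialize an environment's low-level settings once and replay them when standing up a new environment. The functions and file format here are assumptions, not a description of any particular product:

    import json

    # Hypothetical capture/redeploy sketch: snapshot an environment's low-level
    # settings to a file, then replay them into a new environment. apply_setting
    # stands in for whatever mechanism (WLST, a vendor API, etc.) actually
    # writes each value; it is assumed, not a real API.
    def capture(environment_settings, path):
        with open(path, "w") as f:
            json.dump(environment_settings, f, indent=2)

    def redeploy(path, apply_setting):
        with open(path) as f:
            settings = json.load(f)
        for asset, params in settings.items():
            for name, value in params.items():
                apply_setting(asset, name, value)

    # Example: capture QA once, then redeploy into staging with minimal manual work.
    qa = {"weblogic": {"MaxCapacity": 50}, "apache": {"MaxClients": 150}}
    capture(qa, "qa_snapshot.json")
    redeploy("qa_snapshot.json", lambda a, n, v: print("set %s.%s = %s" % (a, n, v)))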

Automation needs to help the development and QA side of the house give the IT infrastructure team confidence about the underlying configuration data. Specifically, they need to know that the configuration data supplied along with a new application for the Staging queue has been properly vetted and reflects the current realities of production.

For starters, the work of developing the IT Run Book would already be done. This is because IT would be receiving the same information culled directly from a test environment of which they have full knowledge. There would be no more middle-of-the-road approach involving multiple production scenarios. The Run Book would reflect the production environment exactly (see Figure 2).

To arrive at this happy situation, a determination must be made from the outset about how the various teams will share the information that will ultimately feed into the IT Run Book. Throughout the process, the developer should be free to access and comment on the end-to-end QA testing environment. He or she should also be able to provide input into how the various underlying infrastructure assets need to be configured for optimal performance. This is essentially a standards and policy creation exercise. By gathering feedback from development and QA, the IT infrastructure team can collect perceived requirements pertaining to asset configurations and actually watch as these asset requirements are modified along the path to production. In addition, they will effectively be the hand on the rudder, because the core requirement throughout the testing phase is that the QA environment must match production exactly.

The final stage in this Run Book Automation scenario is for QA to literally push the entire binary set, together with all the "blessed" configurations agreed on by QA and development, out to the UAT phase. The bottleneck that exists in UAT today is largely due to the time it takes to understand what testing took place, and to explore how the myriad changes made since the GM milestone will affect the application in the production environment. Now that the Run Book has been made transparent and accessible for all to view, and in some cases modify, this issue goes away. If a quick comparison of the QA test environment can show that the application works well in an environment that mirrors today's production topology, the burden on UAT is virtually eliminated.
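
That quick comparison need not be elaborate. Conceptually it is little more than a diff of the two environments' configuration items, flagging anywhere QA has drifted from production; the data shapes below are invented for illustration:

    # Hypothetical comparison sketch: flag configuration items where the QA
    # environment no longer mirrors production. Keys are "asset.parameter".
    def drift(qa, production):
        issues = []
        for key, prod_value in production.items():
            qa_value = qa.get(key)
            if qa_value != prod_value:
                issues.append((key, qa_value, prod_value))
        return issues

    qa = {"weblogic.MaxCapacity": 50, "apache.MaxClients": 150}
    prod = {"weblogic.MaxCapacity": 75, "apache.MaxClients": 150}

    for key, qa_val, prod_val in drift(qa, prod):
        print("%s: QA=%s, production=%s" % (key, qa_val, prod_val))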

A fully verified set of configuration data that mirrors production exactly is possible and can be easily accessed by interested stakeholders across the entire application life cycle. For that, the notion of a configuration management database (CMDB) must come into play. Moreover, the underlying data must allow for a level of granularity sufficient to achieve the type of results described here: namely, the ability to promote an entire application environment, and all the configuration items for its related assets, seamlessly across QA, through the staging phase, and on to release in production.
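
At that level of granularity, each configuration item in the CMDB would need to record at least the application, the environment, the asset, the parameter, and its value, so that a whole application environment can be promoted as one unit. The record layout below is a guess at one workable shape, not a reference to any specific CMDB product:

    from dataclasses import dataclass

    # Hypothetical configuration-item record granular enough to promote an
    # entire application environment (QA -> staging -> production) as one unit.
    @dataclass
    class ConfigItem:
        application: str   # e.g. "claims-app"
        environment: str   # "qa", "staging", "production"
        asset: str         # "weblogic", "apache", "oracle", ...
        parameter: str
        value: str

    def promote(items, source, target):
        """Copy every configuration item for one environment into another."""
        return [ConfigItem(i.application, target, i.asset, i.parameter, i.value)
                for i in items if i.environment == source]

    qa_items = [ConfigItem("claims-app", "qa", "weblogic", "MaxCapacity", "50")]
    staging_items = promote(qa_items, "qa", "staging")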

Summary

Technologies that allow companies to achieve faster rollout of new Web applications to production and to automate the development of a meaningful Run Book remain few. However, they are growing in number. Over time, it may turn out that having a firm grasp of the entire spectrum of infrastructure requirements is an important asset to the development organization. At a minimum, such a system would introduce far broader visibility across the organization, in terms of what configurations need to be managed and the preferred attribute values. A powerful upside for development might be fewer invitations to help troubleshoot configuration-related issues in the production environment.

More Stories By Raman Sud

Raman Sud is the vice president of engineering for mValent, developer of mValent Integrity. Sud has 20 years of experience delivering mission-critical software for enterprises and telecommunication service providers, leveraging distributed development and building integrated teams in the US and India.
