WebLogic Journal, 2006
This article describes a workaround design that allows a Portal to survive when one of its resources starts hanging request threads.
Business Task
How frequently does your Portal experience user requests hanging in a resource? Not frequently, I hope. However, if this happens and the resource keeps hanging user requests, the Portal is exposed to a fatal risk: it exhausts all of the configured concurrent user requests and eventually dies. This is a disaster.
I have faced such a situation a few times and decided to protect my Portal even from rare surprises like these. Let a Portal include several Portlets, with login control provided via a backing file for each Portlet. The backing file is created per request and is thread-safe with regard to user requests. For login control, the backing file’s init() method (or preRender() method) calls an API of a separate security service (the resource) to obtain resource access authorization. For business processing, Portlets also delegate user requests via other API calls to the business layer. Everything works fine until a call to the resource does not return to the Portlet, i.e., it hangs somehow, somewhere. Here is a concrete example. We assume that the WebLogic Portal uses an EJB as a resource in the business layer and/or in the security service.
The EJB operates on other resources and reports processing status by sending a message via JMS. The EJB is deployed in a cluster together with other applications; the Portal is deployed on a different physical machine. One of the co-deployed applications causes a memory error and the whole cluster, including our EJB, hangs, i.e., it does not return requests and does not throw exceptions. Actually, it is not necessary to assume such a dramatic failure: a “hanging” mode, from the Portal’s perspective, is just a response that returns too slowly, with too much latency to be acceptable in the dynamic, concurrent life of the Portal. The latency may be caused, for instance, by server overload or by database problems, but the cause is irrelevant – the threads in the backing files run longer than allowed for normal Portal work and the “maximum number of concurrent users” in the Portal is reached. The Portal stops accepting user requests altogether – this is the problem.
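To make the setup concrete, here is a minimal sketch of such a login-control backing file. It is only an illustration: the WebLogic Portal backing-file plumbing (the backing base class and its registration on the Portlet) is omitted, and SecurityService, SecurityServiceHome, and the JNDI name are hypothetical stand-ins for the real security-service EJB.

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical remote interfaces of the security-service EJB.
interface SecurityService extends EJBObject {
    boolean authorize(String user, String resource) throws RemoteException;
}
interface SecurityServiceHome extends EJBHome {
    SecurityService create() throws CreateException, RemoteException;
}

// Simplified login-control backing file: preRender() is invoked per user request.
public class LoginControlBacking {

    public boolean preRender(HttpServletRequest request, HttpServletResponse response) {
        try {
            InitialContext ctx = new InitialContext();
            SecurityServiceHome home = (SecurityServiceHome) PortableRemoteObject.narrow(
                    ctx.lookup("ejb/SecurityService"), SecurityServiceHome.class);
            SecurityService service = home.create();

            // If the cluster hosting the EJB hangs, this call never returns and the
            // Portal request thread stays blocked - the failure mode described above.
            boolean authorized = service.authorize(request.getRemoteUser(), "portlet-resource");
            request.setAttribute("authorized", Boolean.valueOf(authorized));
            return true;
        } catch (Exception e) {
            request.setAttribute("authorized", Boolean.FALSE);
            return false; // render the Portlet in its "unavailable" state
        }
    }
}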
Solution Design
The first thing that came to my mind was to set a time-out on the RMI client, i.e., the EJB client used by the Portlet to access its resource (remote-client-timeout). However, the WebLogic “Guidelines on Using the RMI Timeout” list several restrictions on such time-outs, including “No JMS resources are involved in the call.” That is, I am not supposed to use a time-out for my resource EJB. I refer to this case for only one reason: to show that there are situations where the Portal may be unprotected from hanging resource threads. Even if no restrictions applied to the time-out, the problem would remain if requests hang faster than the time-out frees the related threads.
I would like to present one of the possible solutions to this problem. The solution is effective on one condition: the Portal has some content or functions that are independent of the potentially hanging resources; that is, the Portal can operate with partial functionality. The solution includes three components: Monitoring, a Decision Rule, and a Rule Enforcement Method.
The concept of the solution is straightforward: the Portal monitors running calls to the resource, henceforth called resource threads, counts the number of too-long-running resource threads (riskCounterValue), and applies a Decision Rule such as “If riskCounterValue reaches or exceeds a predefined threshold – riskThreshold – all incoming calls to that resource are denied until riskCounterValue becomes smaller than riskThreshold.” Due to the Rule, the number of potentially “hanged” resource threads is limited and the user request may still be served, with reduced functionality. For example, if a Portal includes four Portlets, and some resource threads for one of the Portlets are considered “at risk of hanging,” the Portal can skip the Portlet-at-risk and display just three Portlets to the user.
The implementation of the Rule Enforcement Method is very important. If the rule is enforced in the scope of every call, we may expect performance degradation but gain simplicity in controlling potentially “hanged” resource threads. If the rule is enforced outside of the calls, we preserve performance, but tuning such control becomes tricky. We will discuss the latter case in detail; the diagram in Figure 1 describes it.
As the diagram shows, in the first step the Portal initializes a Helper object that, in turn, initializes a CallRegistry object. The latter may be implemented as a java.util.HashMap and is used for registering all calls to the resource API. Then the Helper starts a “watchdog” thread. If you use Struts, for example, this thread starts in the Model. The “watchdog” thread periodically reviews the records in the CallRegistry, counts the number of too-long-running calls, and sets the result as the riskCounterValue variable in the Helper.
It is assumed that we approximately know the normal execution time of the API calls. This may be one value for all APIs – the longest duration – or every API may have its individual execution time. Therefore, when an API method is invoked, we can calculate the time at which the API is expected to complete in a normal situation, for example:
long apiExecutionTime = …; // property
long timeToComplete =
    java.lang.System.currentTimeMillis() + apiExecutionTime;
When a Helper’s method is called, it adds a new record into the CallRegistry. The record consists of a unique Call ID (used as the key in the java.util.HashMap) and the expected completion time (timeToComplete) for the API (used as the value in the java.util.HashMap). If the method completes successfully, it removes its record from the CallRegistry.
Let’s review how a user request is processed. Upon receiving a user request, the Portlet’s backing file delegates it to the Helper API method (the latter invokes the resource API). First, the Helper API method checks whether it may execute. If the riskThreshold has not been reached at the moment of the request, the Helper API method continues its work. Otherwise, it throws an exception and the Portal moves on to the next function or the next API call. The permission to execute is given only if the number of too-long-running resource threads (riskCounterValue) is less than the riskThreshold. The riskThreshold is set via configuration properties. For example, if the maximum number of concurrent user requests is configured as 25, the riskThreshold may be set to 10. That is, the Portal risks only half of its capacity for handling concurrent user requests and can still operate if resource threads start to hang.
Notice that we do not do anything with the too-long-running API calls themselves. Some of them may eventually complete successfully, and the Helper API methods will remove their records from the CallRegistry, i.e., the next round of counting may result in a number lower than the riskThreshold, and the next user request for the resource may not be denied (by throwing an exception). The Portal cannot know whether there is an accidental latency in the network or whether the resource thread is really hanging. Because of that, it is recommended to send a notification (e.g., via e-mail) to the Operations Team if the riskThreshold is reached or exceeded in several sequential control cycles. The notification will allow the Operations Team to analyze logs promptly, and to find and resolve the reason for the long-running calls in a timely manner.
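The following is a minimal sketch of how the Helper, the CallRegistry, and the “watchdog” thread described above could be put together. It is only an illustration under the assumptions of this article: the class and member names (ResourceHelper, ResourceCall, ResourceAtRiskException, controlCycleMillis, and so on) are hypothetical, and here a record is removed not only on successful completion but also when a call fails with an exception.

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class ResourceHelper {

    /** Thrown when the Decision Rule denies a call to the resource. */
    public static class ResourceAtRiskException extends Exception {
        public ResourceAtRiskException(String message) { super(message); }
    }

    /** The resource API call being protected; implemented by the Portlet's delegate. */
    public interface ResourceCall {
        Object execute() throws Exception;
    }

    // CallRegistry: unique Call ID -> expected completion time (timeToComplete).
    private final Map callRegistry = new HashMap();

    private volatile int riskCounterValue = 0;
    private long nextCallId = 0;

    private final int riskThreshold;       // e.g., 10 when 25 concurrent requests are allowed
    private final long apiExecutionTime;   // expected (normal) execution time of the API call, ms
    private final long controlCycleMillis; // TRC: period between risk control cycles, ms

    public ResourceHelper(int riskThreshold, long apiExecutionTime, long controlCycleMillis) {
        this.riskThreshold = riskThreshold;
        this.apiExecutionTime = apiExecutionTime;
        this.controlCycleMillis = controlCycleMillis;
        startWatchdog();
    }

    /** Wraps a resource API call with the Decision Rule and the CallRegistry bookkeeping. */
    public Object callResource(ResourceCall call) throws Exception {
        // Decision Rule: deny the call if too many resource threads look "hanged".
        if (riskCounterValue >= riskThreshold) {
            throw new ResourceAtRiskException("Resource temporarily unavailable");
        }
        Long callId;
        Long timeToComplete = new Long(System.currentTimeMillis() + apiExecutionTime);
        synchronized (callRegistry) {
            callId = new Long(nextCallId++);
            callRegistry.put(callId, timeToComplete);
        }
        try {
            return call.execute(); // the actual, potentially hanging, resource API call
        } finally {
            synchronized (callRegistry) {
                callRegistry.remove(callId); // never reached while the call hangs
            }
        }
    }

    /** "Watchdog": periodically counts calls running past their expected completion time. */
    private void startWatchdog() {
        Thread watchdog = new Thread(new Runnable() {
            public void run() {
                while (true) {
                    long now = System.currentTimeMillis();
                    int count = 0;
                    synchronized (callRegistry) {
                        for (Iterator it = callRegistry.values().iterator(); it.hasNext();) {
                            if (now > ((Long) it.next()).longValue()) {
                                count++; // this call runs longer than expected
                            }
                        }
                    }
                    riskCounterValue = count;
                    try {
                        Thread.sleep(controlCycleMillis);
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        }, "resource-risk-watchdog");
        watchdog.setDaemon(true);
        watchdog.start();
    }
}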
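On the Portlet side, the backing file can then delegate through the Helper instead of calling the resource directly; when the Decision Rule denies the call, the Portlet is simply skipped and the Portal renders the remaining Portlets. Again, this is only a sketch with hypothetical names: the riskThreshold of 10 follows the example above, while the 1000 ms expected call time and 100 ms control cycle are assumed illustrative values.

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical backing file that routes its security call through the Helper.
public class GuardedLoginBacking {

    // In a real Portal this instance would be shared, e.g., kept in the application scope.
    private static final ResourceHelper securityHelper = new ResourceHelper(10, 1000L, 100L);

    public boolean preRender(HttpServletRequest request, HttpServletResponse response) {
        final String user = request.getRemoteUser();
        try {
            Boolean authorized = (Boolean) securityHelper.callResource(new ResourceHelper.ResourceCall() {
                public Object execute() throws Exception {
                    // the real, potentially hanging, security-service EJB call goes here
                    return Boolean.valueOf(callSecurityService(user));
                }
            });
            request.setAttribute("authorized", authorized);
            return true;
        } catch (ResourceHelper.ResourceAtRiskException denied) {
            // Decision Rule in effect: skip this Portlet so the Portal can serve the others.
            return false;
        } catch (Exception e) {
            return false;
        }
    }

    private boolean callSecurityService(String user) throws Exception {
        // stand-in for the remote EJB call shown in the earlier backing-file sketch
        return user != null;
    }
}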
Analysis and Tuning
The control of “hanged” resource threads is quite dynamic and not simple to tune. Its effectiveness is based on the balance of three parameters:
- the ratio R = TUR / TRC of the average period of time between user requests (TUR) to the period of time between “watchdog” thread control cycles – the risk control cycles (TRC);
- the risk threshold (riskThreshold) for a particular resource;
- the expected execution time of the resource API calls.
The research and testing of the control have shown that the parameter tuning depends on the particular Portal implementation but follows common tendencies. The graph in Figure 2 demonstrates guidelines for the tuning. In the tests, the ratio was set to R = 95%: TUR was 95 ms while TRC was set to 100 ms. In general, a reliable ratio is 90% and higher. The graph shows how the number of “hanged” API calls, counted by the control, depends on the call execution time. Points on the graph represent the maximum numbers of user requests “hanged” in between risk control cycles, i.e., the maximum of riskCounterValue in the series of tests for a given call execution time. Remember that some user requests are denied when the Decision Rule is enforced, so the number of “hanged” resource threads does not increase.
The horizontal red line in Figure 2 marks the allowed maximum number of concurrent users in the Portal. The purpose of the control is to keep the maximum riskCounterValue strictly below the red line. The closer the points on the graph are to the red line, the higher the probability that the riskThreshold will be reached or exceeded.
As we can see, the behavior of the control is not obvious. For some values of the call execution time (from 1500 ms to 3250 ms), the control yields to user requests and the number of “hanged” resource threads gets close to, and exceeds, the allowed maximum of concurrent users in the Portal. This is the interval in which the control is ineffective under the given conditions. At the same time, there are two intervals – from 100 ms to 1000 ms and from 3500 ms to 4000 ms – where the control is effective: the Decision Rule with the particular riskThreshold reliably protects the Portal from “hanged” API calls and leaves enough concurrent request threads to serve other user requests.
The graph also shows that a smaller riskThreshold provides better protection. However, if the riskThreshold for a resource is set too low, the resource may become unavailable in most user sessions due merely to slight fluctuations in network latency. This is another subject for balancing and tuning.
Conclusion
The proposed solution of run-time control of “hanged” resource calls allows a Portal to isolate the resources that are in trouble and to continue its work with the remaining resources, with minimal impact on performance. The solution’s effectiveness depends on several tuning parameters: the ratio between the request frequency and the frequency of the risk control cycles, the value of the risk threshold, and the expected call execution time.
Tuning is not a trivial task in this case – it requires intensive testing. Moreover, the numbers given in this article are specific to my test Portal; you should expect other values in tests on your Portal, though you will find the same dependencies. On the other hand, if a certain performance degradation is acceptable, it is recommended to perform the risk control in the scope of every API call, which significantly simplifies tuning of the solution parameters.
References
- WebLogic RMI Features and Guidelines: Guidelines on Using the RMI Timeout http://e-docs.bea.com/wls/docs81/rmi/rmi_api.html
- Nyberg, G., Patrick, R., Bauerschmidt, P., McDaniel, J., and Mukherjee, R. “Mastering BEA WebLogic Server: Best Practices for Building and Deploying J2EE Applications.” Wiley E-Book published March 2004. ISBN: 0-471-48090-8.