
GlassFish monitoring and troubleshooting tips


If you want to ensure optimal performance from the GlassFish application server, you must monitor certain essential performance indicators. Here are some tips to help you keep GlassFish running trouble-free.

For this article we used GlassFish 3.1 running on CentOS 6.3, as described in our previous GlassFish installation article.

Before starting any in-depth monitoring you should first check that the cluster and its instances are running. To do so, use the asadmin commands list-clusters and list-instances:

asadmin> list-clusters
TestCluster running

asadmin> list-instances --long
Name  Host  Port   Pid  Cluster      State
n1          24848  --   TestCluster  not running

The commands above show that the cluster itself is running, but one of its instances is not. In general, start-cluster clustername and stop-cluster clustername start and stop a whole cluster, while start-instance instancename and stop-instance instancename do the same for a single instance. So to start the n1 instance, run start-instance n1.
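These per-instance commands lend themselves to scripting. The sketch below, which assumes the asadmin path from our installation article, parses the list-instances output and starts any instance reported as not running; treat it as illustrative rather than production-ready.

```shell
# Restart any stopped instances reported by asadmin.
# Assumes asadmin lives at /opt/glassfish3/bin/asadmin (the path used
# in this article); adjust ASADMIN for your setup.
ASADMIN=${ASADMIN:-/opt/glassfish3/bin/asadmin}

# Extract the names of instances whose state is "not running".
# Reads `asadmin list-instances --long` output on stdin.
stopped_instances() {
    awk '/not running/ { print $1 }'
}

# Only attempt the restart when asadmin is actually available.
if command -v "$ASADMIN" >/dev/null; then
    for inst in $("$ASADMIN" list-instances --long | stopped_instances); do
        "$ASADMIN" start-instance "$inst"
    done
fi
```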

Admins can do most GlassFish monitoring through its monitoring module, but before you can monitor a component you must set a certain level of monitoring for it. Check the monitoring levels for each component by using the asadmin get command: get configs.config.server-config.monitoring-service.module-monitoring-levels. By default the output lists every module's monitoring level as OFF; monitoring is disabled across the board.


The fewer monitoring options that are enabled, the better the server's performance.

The available monitoring levels are OFF, LOW, and HIGH. Setting monitoring to LOW displays only essential output, and incurs a lower performance penalty than HIGH, which provides more verbose output. Consider using HIGH only in test environments or when troubleshooting, and be judicious when using LOW too.

To see how monitoring works, start by changing the monitoring level for the Java Virtual Machine (JVM) on the domain administration server (DAS) to LOW with the asadmin set command: set configs.config.server-config.monitoring-service.module-monitoring-levels.jvm=LOW. After that you can start pulling the JVM's statistics with asadmin's get command. For instance, get -m server.jvm.memory.usedheapsize-count-count displays the used heap size; that is, how much of the memory allocated to the JVM is in use. The -m argument specifies that you're requesting monitoring data. On a default installation with a sample application deployed, the expected result is something like server.jvm.memory.usedheapsize-count-count = 259402784. This indicates that only 247MB (259402784 bytes) of memory is in use, which means the server is healthy. If you were to run that command without enabling the JVM's monitoring level, it would return "No monitoring data to report."
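The used and maximum heap sizes can be combined into a single health check. This sketch assumes JVM monitoring has already been set to LOW and uses the dotted names and asadmin path from this article; the maxheapsize counter is discussed further below.

```shell
# Report what fraction of the JVM heap is in use on the DAS.
ASADMIN=${ASADMIN:-/opt/glassfish3/bin/asadmin}

# asadmin prints lines like
#   "server.jvm.memory.usedheapsize-count-count = 259402784";
# this helper keeps only the numeric value.
stat_value() {
    awk -F' = ' 'NF == 2 { print $2 }'
}

# heap_pct USED MAX -> integer percentage of heap in use
heap_pct() {
    awk -v used="$1" -v max="$2" 'BEGIN { printf "%d\n", used * 100 / max }'
}

if command -v "$ASADMIN" >/dev/null; then
    used=$("$ASADMIN" get -m server.jvm.memory.usedheapsize-count-count | stat_value)
    max=$("$ASADMIN" get -m server.jvm.memory.maxheapsize-count-count | stat_value)
    echo "Heap in use: $(heap_pct "$used" "$max")%"
fi
```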

To manage monitoring settings cluster-wide, replace server with the name of a target, which may be the local server, a remote instance, or even a cluster. In the examples below, target is written as a placeholder; change it to whatever you'd like to monitor.

To get the configuration settings for a cluster called TestCluster, you would use the setting configs.config.TestCluster-config.monitoring-service.module-monitoring-levels. Thus to enable LOW-level JVM monitoring on TestCluster, run the command set configs.config.TestCluster-config.monitoring-service.module-monitoring-levels.jvm=LOW. The result at the asadmin prompt should contain output from each cluster instance confirming the change.
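If you want to enable monitoring for several modules at once, the set command can be looped. The module names below (web-container, http-service) are common GlassFish 3.1 attributes, but verify them against the get output on your own installation before relying on them.

```shell
# Enable LOW monitoring for several modules on the TestCluster
# configuration in one pass.
ASADMIN=${ASADMIN:-/opt/glassfish3/bin/asadmin}
PREFIX=configs.config.TestCluster-config.monitoring-service.module-monitoring-levels

for module in jvm web-container http-service; do
    # Skip silently when asadmin is not installed on this machine.
    if command -v "$ASADMIN" >/dev/null; then
        "$ASADMIN" set "$PREFIX.$module=LOW"
    fi
done
```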

Once you've changed the setting you can get monitoring data from all of the cluster's instances. For example, to get the JVM used heap size from an instance called n1 that's part of TestCluster, run the command get -m n1.jvm.memory.usedheapsize-count-count. The result should be similar to n1.jvm.memory.usedheapsize-count-count = 372013384.

You can play with these settings, changing the monitoring levels until you get the monitoring data you need. To discover all the available data, use the asadmin command get -m target.*.

The asadmin utility accepts input from and writes output to the Linux shell. This lets you work with asadmin as you would with other shell commands, piping its output through standard tools. For example, you can use grep to refine the monitoring results for TestCluster with the command /opt/glassfish3/bin/asadmin get -m TestCluster.* | grep heapsize.
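Taking the pipeline idea one step further, you can sample counters periodically and append them to a CSV file for later graphing. This is a sketch using the TestCluster name and asadmin path from this article; the file location is an arbitrary choice.

```shell
# Sample the cluster's heap-size counters once a minute and append
# timestamped CSV rows to a log file.
ASADMIN=${ASADMIN:-/opt/glassfish3/bin/asadmin}
LOG=${LOG:-/tmp/testcluster-heap.csv}

# Turns "n1.jvm.memory.usedheapsize-count-count = 372013384" into
# "TIMESTAMP,n1.jvm.memory.usedheapsize-count-count,372013384".
sample_line() {
    awk -v ts="$1" -F' = ' 'NF == 2 { print ts "," $1 "," $2 }'
}

if command -v "$ASADMIN" >/dev/null; then
    while true; do
        "$ASADMIN" get -m 'TestCluster.*' | grep heapsize \
            | sample_line "$(date +%s)" >> "$LOG"
        sleep 60
    done
fi
```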

There are thousands of monitoring indicators, but you probably want to focus on a few important ones to start. Of course, your needs may differ depending on the problems you are working on:

  • JVM's used heap size is important for troubleshooting high memory usage and for performance optimization as a whole. As we saw above, you can get it with the asadmin command get -m target.jvm.memory.usedheapsize-count-count. Compare this number with the maximum allowed heap size (target.jvm.memory.maxheapsize-count-count) to see what portion of the heap is in use. If the used heap size nears the max heap size, the garbage collector urgently attempts to free memory. If memory cannot be freed, GlassFish reports that it is out of memory. Running low on memory degrades performance and may result in unexpected application behavior. To fix an issue like this, try tuning the garbage collector to free memory faster, or increase the max heap size if the system's resources allow it.
  • Number of loaded classes is useful for detecting performance and application development trends. To see this indicator, run the asadmin command get -m target.jvm.class-loading-system.totalloadedclass-count-count. There are no universal rules for interpreting the values you might see, but the lower the values the better, especially during heavier application use. Analyze the number of loaded classes along with indicators such as JVM's used heap size to look for trends and dependencies.
  • JVM thread counts are important for performance tuning and for troubleshooting JVM crashes. Some of the most essential indicators are the current active JVM thread count (target.jvm.thread-system.threadcount-count) and the peak value (target.jvm.thread-system.peakthreadcount-count). Keep monitoring those values; while they stay in a normal range they will not slow down your server and applications. When you notice slowness and the current values approach the peak values, you know you have a problem. To investigate where the problem lies you need a Java profiler, a tool that helps you find performance bottlenecks, memory leaks, and threading issues.
  • Thread pools are groups of reusable threads for serving incoming tasks. You can compare a pool's current usage with the maximum allowed to troubleshoot performance issues and application failures. For example, for a network listener you can compare the current thread count with the maximum number of threads allowed. Problems start when the current count nears the maximum; that's when HTTP requests queue up and may eventually fail.
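Because the exact dotted names for thread-pool counters depend on your listener configuration, it's safer to discover them with a wildcard than to guess. A minimal sketch, using the asadmin path from this article:

```shell
# List every thread-pool counter the server currently exposes, so you
# can pick out the current and maximum thread counts for your listeners.
ASADMIN=${ASADMIN:-/opt/glassfish3/bin/asadmin}

if command -v "$ASADMIN" >/dev/null; then
    "$ASADMIN" get -m 'server.*' | grep -i 'thread-pool'
fi
```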

Using GlassFish's web administration console for monitoring

While asadmin is fast, powerful, and precise in acquiring GlassFish monitoring data, and can be further extended with shell scripting, it is a text-based console interface and not as user-friendly and intuitive as a graphical interface can be. The easiest way to monitor GlassFish is through its web administration console. By default, you can access the web interface on the DAS at port 4848 (e.g. https://das:4848/). Once you've logged in, click on the link for Monitoring Data on the left vertical menu.

On the Monitoring page you can configure monitoring on the available instances and clusters or access monitoring data on them. You'll find the same options and indicators as in the asadmin utility.

The web interface provides an easy path for getting started with GlassFish monitoring. However, it lacks the extensibility of asadmin, which you may need for more advanced monitoring tasks.

Remote GlassFish monitoring through Java Management Extensions

The Java Management Extensions (JMX) technology and its GlassFish connector allow you to monitor GlassFish remotely. This connector is enabled by default; you can find it in the configuration file for the default domain (domain1) at /opt/glassfish3/glassfish/domains/domain1/config/domain.xml:

<admin-service system-jmx-connector-name="system" type="das-and-server">
  <jmx-connector port="8686" address="" security-enabled="false" auth-realm-name="admin-realm" name="system"></jmx-connector>
</admin-service>

Once you ensure the above configuration directives are present, you should be able to connect via telnet or nc to the JMX service on TCP port 8686 on the DAS and the nodes. If you have problems, check our guide for troubleshooting CentOS problems.
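A quick connectivity check with nc might look like the following; the hostnames are placeholders for your DAS and node machines.

```shell
# Verify that the JMX connector is reachable on each host.
# -z only scans (no data sent); -w sets a 3-second timeout.
for host in das node1; do
    if nc -z -w 3 "$host" 8686; then
        echo "$host: JMX port 8686 open"
    else
        echo "$host: JMX port 8686 unreachable"
    fi
done
```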

Various tools make use of JMX to offer monitoring. One popular tool, the Java Monitoring and Management Console (JConsole), is part of the Java Development Kit. JConsole requires a graphical environment; you can run it locally on your admin station with the command /usr/bin/jconsole.

When JConsole opens, it prompts you with a window for the JMX connection details. You have to specify a hostname or IP address along with the JMX port (8686 by default). You must also specify your GlassFish admin username and password to gain access to the remote JMX service.
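You can also pass the connection details on the command line. The sketch below assumes the standard RMI form of the JMX service URL and a placeholder hostname; the plain host:port form in JConsole's connection dialog usually works too.

```shell
# Launch JConsole against a remote GlassFish node from the admin station.
JMX_HOST=das     # placeholder for your node's hostname
JMX_PORT=8686
JMX_URL="service:jmx:rmi:///jndi/rmi://$JMX_HOST:$JMX_PORT/jmxrmi"

# JConsole is a GUI tool, so only launch it when a display is available.
if [ -n "$DISPLAY" ] && command -v jconsole >/dev/null; then
    jconsole "$JMX_URL"
fi
```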

[Screenshot: JConsole connected to a GlassFish 3.1 node]

The picture above shows JConsole connected to a GlassFish 3.1 node. JConsole can not only display all the previously mentioned monitoring indicators but also graph them over time. These visualizations allow you to follow trends in the indicators' changes and get a better picture of your GlassFish cluster's performance.

This work is licensed under a Creative Commons Attribution 3.0 Unported License.
