If you want to ensure optimal performance from the GlassFish application server, you must monitor certain essential performance indicators. Here are some tips to help you keep GlassFish running trouble-free.
For this article we used GlassFish 3.1 running on CentOS 6.3, as described in our previous GlassFish installation article.
Before starting any in-depth monitoring you should first check that the cluster and its instances are running. To do so, use the asadmin commands list-clusters and list-instances:
asadmin> list-instances --long
Name Host Port Pid Cluster State
n1 192.168.1.105 24848 -- TestCluster not running
The commands above show the cluster itself is running, but an instance on one server is not. To start the n1 instance, run start-instance n1. In general, start-instance instancename and stop-instance instancename start and stop a single instance, while start-cluster clustername and stop-cluster clustername start and stop an entire cluster.
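A session to bring the stopped instance back up might look like this (a sketch; the exact asadmin output varies by version):

```
asadmin> start-instance n1
Command start-instance executed successfully.
asadmin> list-instances
n1   running
Command list-instances executed successfully.
```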
Admins can do most GlassFish monitoring through its monitoring module, but before you can monitor a component you must set a certain level of monitoring for it. Check the monitoring levels for each component by using the asadmin get command: get configs.config.server-config.monitoring-service.module-monitoring-levels. The output shows that all monitoring is disabled by default:
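For example, on a stock GlassFish 3.1 install the output looks roughly like the excerpt below; the full list covers every monitorable module, and the exact module names can vary by version:

```
asadmin> get configs.config.server-config.monitoring-service.module-monitoring-levels
configs.config.server-config.monitoring-service.module-monitoring-levels.http-service=OFF
configs.config.server-config.monitoring-service.module-monitoring-levels.jdbc-connection-pool=OFF
configs.config.server-config.monitoring-service.module-monitoring-levels.jvm=OFF
configs.config.server-config.monitoring-service.module-monitoring-levels.thread-pool=OFF
configs.config.server-config.monitoring-service.module-monitoring-levels.web-container=OFF
```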
The fewer monitoring options that are enabled, the better the server's performance.
The available monitoring levels are OFF, LOW, and HIGH. Setting monitoring to LOW displays only essential output and incurs a lower performance penalty than HIGH, which provides more verbose output. Consider using HIGH only in test environments or when troubleshooting – and be judicious when using LOW too.
To see how monitoring works, start by changing the monitoring level for the Java Virtual Machine (JVM) on the domain administration server (DAS) to LOW with the asadmin set command: set configs.config.server-config.monitoring-service.module-monitoring-levels.jvm=LOW. After that you can start pulling the JVM's statistics with asadmin's get command. For instance, get -m server.jvm.memory.usedheapsize-count-count displays the used heap size; that is, how much of the memory allocated to the JVM is in use. The -m argument specifies that you're requesting monitoring data. On a default installation with a sample application deployed, the expected result is something like server.jvm.memory.usedheapsize-count-count = 259402784. This indicates that only 247MB (259402784 bytes) of memory is used, which means the server is healthy. If you were to run that command without enabling the JVM's monitoring level, it would return "No monitoring data to report."
get -m server.jvm.memory.usedheapsize-count-count
server.jvm.memory.usedheapsize-count-count = 259402784
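If you'd rather see that figure in megabytes than raw bytes, a quick shell conversion of the sample value above does the trick:

```shell
# Convert the byte count reported by asadmin into megabytes (integer division)
bytes=259402784
echo $(( bytes / 1024 / 1024 ))   # prints 247
```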
To manage monitoring settings cluster-wide, replace server in these dotted names with the name of a target, which may be the local server, a remote instance, or even a cluster. In the examples below you'll see only target written; change it to whatever you'd like to monitor.
To get the configuration settings for a cluster called TestCluster, you would use the setting configs.config.TestCluster-config.monitoring-service.module-monitoring-levels. Thus to enable LOW-level JVM monitoring on TestCluster, run the command set configs.config.TestCluster-config.monitoring-service.module-monitoring-levels.jvm=LOW. The result at the asadmin prompt should contain output from each cluster instance confirming the change.
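At the asadmin prompt, that exchange might look something like this (a sketch; the exact confirmation output varies by version):

```
asadmin> set configs.config.TestCluster-config.monitoring-service.module-monitoring-levels.jvm=LOW
configs.config.TestCluster-config.monitoring-service.module-monitoring-levels.jvm=LOW
Command set executed successfully.
```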
Once you've changed the setting you can get monitoring data from any of the cluster's instances. For example, to get the JVM used heap size from an instance called n1 that's part of TestCluster, run the command get -m n1.jvm.memory.usedheapsize-count-count. The result should be similar to n1.jvm.memory.usedheapsize-count-count = 372013384.
get -m n1.jvm.memory.usedheapsize-count-count
n1.jvm.memory.usedheapsize-count-count = 372013384
You can play with these settings, changing the monitoring levels until you get the monitoring data you need. To discover all the available data, use the asadmin command get -m target.*.
get -m target.*
The asadmin utility accepts input from and sends output to the Linux shell. This allows you to work with asadmin just as you work with other shell commands, piping its output through standard tools or bash scripts. For example, you can use grep to refine the monitoring results for TestCluster with the command /opt/glassfish3/bin/asadmin get -m TestCluster.* |grep heapsize.
/opt/glassfish3/bin/asadmin get -m TestCluster.* |grep heapsize
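If you don't have a live cluster handy, you can see what the filter does against canned output. The sample lines below use hypothetical values but follow the key = value format asadmin produces:

```shell
# Filter sample monitoring output the same way you would filter real
# asadmin output; only the heapsize lines survive the grep.
printf '%s\n' \
  'n1.jvm.memory.usedheapsize-count-count = 372013384' \
  'n1.jvm.memory.maxheapsize-count-count = 518979584' \
  'n1.jvm.thread-system.threadcount-count = 79' \
  | grep heapsize
```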
There are thousands of monitoring indicators, but you probably want to focus on a few important ones to start. Of course, your needs may differ depending on the problems you are working on:
get -m target.jvm.memory.usedheapsize-count-count
get -m target.jvm.class-loading-system.totalloadedclass-count-count
JVM thread data is important for performance tuning and for troubleshooting JVM crashes. Two of the most essential indicators are the current active JVM thread count (target.jvm.thread-system.threadcount-count) and its peak value (target.jvm.thread-system.peakthreadcount-count).
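Both are pulled with the same get -m pattern as the memory statistics, substituting your own target for the placeholder:

```
get -m target.jvm.thread-system.threadcount-count
get -m target.jvm.thread-system.peakthreadcount-count
```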
While asadmin is fast, powerful, and precise in acquiring GlassFish monitoring data, and can be further extended with shell scripting, it is a text-based console interface and not as user-friendly and intuitive as a graphical interface can be. The easiest way to monitor GlassFish is through its web administration console. By default, you can access the web interface on the DAS at port 4848 (e.g. https://das:4848/). Once you've logged in, click on the link for Monitoring Data on the left vertical menu.
On the Monitoring page you can configure monitoring on the available instances and clusters or access monitoring data on them. You'll find the same options and indicators as in the asadmin utility.
The web interface provides an easy path for getting started with GlassFish monitoring. However, it lacks the extensibility of asadmin, which you may need for more advanced monitoring tasks.
The Java Management Extensions (JMX) and its GlassFish connector allow you to monitor GlassFish remotely. This connector is enabled by default; you can find it in the configuration file for the default domain (domain1) at /opt/glassfish3/glassfish/domains/domain1/config/domain.xml:
<admin-service system-jmx-connector-name="system" type="das-and-server">
<jmx-connector port="8686" address="0.0.0.0" security-enabled="false" auth-realm-name="admin-realm" name="system"></jmx-connector>
Once you ensure the above configuration directives are present, you should be able to connect via telnet or nc to the JMX connector on TCP port 8686 on the DAS and the nodes. If you have problems, check our guide for troubleshooting CentOS problems.
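For a quick reachability check from your admin station, either of the following will do; das here stands in for your DAS hostname:

```
nc -z -v das 8686
telnet das 8686
```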
Various tools make use of JMX and offer monitoring. One popular tool, the Java Monitoring and Management Console, is part of the Java Development Kit. JConsole requires a graphical environment, and you can run it locally on your admin station with the command /usr/bin/jconsole.
When JConsole opens you are prompted with a window for the JMX connection details. You have to specify a hostname or IP address along with the JMX port – for example, 10.0.0.11:8686. You must also specify your GlassFish admin username and password to gain access to the remote JMX service.
The picture above shows JConsole connected to a GlassFish 3.1 node. JConsole can not only display all the previously mentioned monitoring indicators but also graph them over time. These visualizations allow you to follow trends in the indicators' changes and get a better picture of your GlassFish cluster's performance.