This manual explains how to use HyperDebug (version 1.0.0).

Introduction

HyperDebug is software for defect hunting. It allows the programmer to observe the application execution flow and find out what actually happens.
It is useful for fighting hard defects/bugs, especially when:

  • the bug cannot be reproduced locally

  • there is no access to the machine where the bug occurs

  • a debugger cannot be used or did not help

  • the log information is insufficient

HyperDebug works with applications running in a JVM (Java Virtual Machine).

HyperDebug features

HyperDebug provides the following functionality.

Log application execution flow

HyperDebug records application method calls (entry and exit) into a trace log file, in CSV format.
The method call parameters are also logged, into a separate file. Method execution times are measured as well.
The user can specify which application classes should be traced.
There is no need to modify the application source code - the trace logs are generated via a JVM agent.

Generate dumps when specific exception occurs

HyperDebug can generate a thread dump and a heap dump when an exception of a specified class (or with a specified message) occurs. There is no need to attach a JVM profiler and wait for the right moment to generate the dumps manually.

Collect various debug information

On top of the application trace logs, HyperDebug can also collect:

  • Values of the environment variables, because the OS encoding and time zone sometimes affect how the application works.

  • File access permissions for the application directories, because misconfigured permissions sometimes cause errors.

  • Necessary configuration files, because bad configuration settings may cause problems.

Privacy is respected - the gathered debug information is not sent anywhere automatically.
The collected files can be inspected (and sensitive data removed) before they are sent to the programmer who will analyse them.

Check application integrity

HyperDebug can calculate checksums of the application files.
This allows comparing the expected checksums with the actual ones and detecting file corruption.
Only the modified files can then be collected and sent for analysis.

Process huge log data

HyperDebug provides tools that help with processing huge logs.
It can extract the interesting log file fragments (by time or regex). This reduces the data noise - there is no need to wade through gigabyte-long files.
It can transform the log file format (via regex). This way, logs from different systems with different formats can be merged.
Different third-party log analysers can then be run over the unified log data.

Typical defect hunting scenario

Some software defects are very difficult to catch, especially when:

  • The issue occurs sporadically (only on high load or only on specific cluster node).

  • The affected machine cannot be accessed directly because of security restrictions.

  • The logs provided are not sufficient to understand what happened.

In such situations the programmer typically does the following:

  1. Require the latest update/fix of the software to be installed.

  2. Request the available log files.

  3. Ask about the software configuration.

  4. Ask for reproduction steps.

  5. Suggest configuration change (plus log level increase).

  6. Request the latest logs again.

  7. Send software patch.

  8. Request the latest logs again…

If the communication passes through a support department member, the above process becomes slow and open to human mistakes.

Improved defect hunting scenario (with HyperDebug)

Here is how HyperDebug is used for hunting difficult issues:

  1. The programmer prepares an XML configuration file describing what information should be collected: which log files are needed, which application classes to trace, which config files to check…

  2. The programmer packs an archive containing the HyperDebug agent module and the custom XML configuration file.

  3. The programmer sends the archive to someone who has access to the target machine - usually an admin.

  4. The admin copies the received agent archive to the target machine.

  5. The admin stops the application and starts it again with additional JVM parameters (to attach the HyperDebug agent).

  6. After some time, when the issue occurs again, the admin opens the directory where the agent collects its data.

  7. The admin can check whether the collected data contains something sensitive (like passwords) and remove it if necessary.

  8. The admin compresses the directory with the collected data and sends it to the programmer for analysis.

This way the communication between parties is minimal and the risk of human mistakes is reduced.

Requirements

To work properly, HyperDebug requires a Java Development Kit (JDK) to be installed. The minimum supported version is 8. The software was tested with JDK implementations from the following vendors:

  • OpenJDK 8

  • Oracle JDK 8

It may work with other JDK implementations, but it has not been tested with them, so they are not officially supported.

HyperDebug fundamentals

HyperDebug consists of several modules:

  1. HyperDebug Agent. This is the part that should be installed on the same machine as the target application - hyperdebug-agent-XXXX.jar. When attached to the JVM, the agent performs the method tracing, exception monitoring and heap (memory) dumps.

  2. HyperDebug Core. Contains core functionality reused between the project modules.

  3. HyperDebug Transformer. Transforms the data collected by the HyperDebug agent or other sources. This includes log file transformation, size reduction, etc.

Agent configuration file

The HyperDebug agent requires a configuration XML file, which describes what data should be collected.
Here is an example agent configuration, with all options included:

<?xml version="1.0" encoding="UTF-8"?>

<project name="Project2" format="1">

    <description>Detect data corruption in Copier application</description>

    <parameters>
        <parameter name="installDir">/opt/copier</parameter>
        <parameter name="outputDir">${installDir}/hd-data</parameter>
        <include>credentials.properties</include>
        <parameter name="overrideResults">yes</parameter>
    </parameters>

    <trace>
        <classes>com.company.copier.*,com.company.common.*</classes>
        <untracedClasses>com.company.copier.internal.StringHelper</untracedClasses>
        <traceParameters>yes</traceParameters>
        <untracedParameters>org.w3c.dom.Document</untracedParameters>
        <stopTracingAfter>10 minutes</stopTracingAfter>
        <traceException>java.lang.IllegalArgumentException</traceException>
        <traceExceptionMessage>Bad.+</traceExceptionMessage>
    </trace>

    <collect>
        <files include="*.log" exclude="audit*.log" includeSubDirs="yes" sourceDir="${installDir}/log"/>
        <checksums exclude="*.log,*.tmp" sourceDir="${installDir}"/>
        <environment exclude="sso.token"/>
        <permissions sourceDirs="${installDir}/log,${installDir}/temp"/>
        <differences exclude="*.log,*.tmp" sourceDir="${installDir}" checksums="${outputDir}/checksums.ini"/>
    </collect>

</project>

The agent configuration file name is not predefined - it can be any name.

There is a prepared template for the agent configuration file: HdAgentConfig.xml.template

The configuration parameters are explained below.

Element project is the holder for all configuration elements (the root). Mandatory.
Attributes:

  • name - user-defined project name. Mandatory.

  • format - the configuration format version (for compatibility with future versions). It should be 1. Mandatory.

Element description contains a user summary of the configuration (free text). Optional.

Element parameters is the holder for parameter values. Optional.

Element parameter is a named configuration parameter. It can be referenced by other configuration elements (via the ${parameterName} syntax). Optional.
Attributes:

  • name - parameter name. Mandatory.

The element text is the parameter value.
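For example, in the configuration above, the outputDir parameter is defined once and then reuses the installDir value via a reference:

<parameter name="installDir">/opt/copier</parameter>
<parameter name="outputDir">${installDir}/hd-data</parameter>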

Predefined parameters are:

  • installDir - the directory where the debugged application is located. Mandatory. Usually it is an absolute path. If a relative path is used, it must be relative to the HD_HOME directory.

  • outputDir - the directory where the collected debug information is stored. Mandatory. Usually it is an absolute path. If a relative path is used, it must be relative to the HD_HOME directory.

  • overrideResults - whether to override the existing files in the output directory (leftovers from previous runs). Can be 'yes' or 'no'. The default value is 'yes'. Optional.

Element include inserts the content of an external properties file. The keys and values defined in this file will be appended to the other configuration parameters. Such files usually contain user credentials. Optional.
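A minimal sketch of such a properties file (the keys shown are hypothetical, for illustration only):

# credentials.properties - each key/value pair becomes a configuration parameter
dbUser=copier
dbPassword=changeit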

Element trace is the holder for the execution trace configuration. Optional.
Sub-elements: classes, untracedClasses, traceParameters, untracedParameters, stopTracingAfter, traceException, traceExceptionMessage, dumpDir, outputDir.

Element classes contains a list of the names of the classes to be traced. Accepts asterisk (*) as a wildcard. The names are separated with a comma (,). It is recommended to specify a narrow set of classes (only the classes of interest); otherwise the trace log will grow very large. Optional.

Element untracedClasses contains a list of names of classes from the 'classes' list which should be excluded from tracing. The names are separated with a comma (,). Optional.

Element traceParameters contains the flag for tracing method call parameters. Can be 'yes' or 'no'. Optional.

Element untracedParameters contains a list of class names which should be excluded from call parameter tracing. The names are separated with a comma (,). Optional.

Element stopTracingAfter stops the tracing after the specified time. The time is specified in the format 'amount unit', where amount is a positive number and unit is one of: milliseconds, seconds, minutes, hours, days. Optional.
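For example, to stop tracing after an hour and a half: <stopTracingAfter>90 minutes</stopTracingAfter>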

Element traceException contains the exception class to be traced. Optional.

Element traceExceptionMessage contains text required to exist in the traced exception message. The value can be a regular expression. This parameter requires traceException to be defined. Optional.

Element outputDir contains the directory where the trace log files are stored. Mandatory.

Element collect is the holder for the data collection configuration. Optional.
Sub-elements: files, checksums, environment, permissions, differences.

Element files describes the application files to be collected for analysis. Optional.
Attributes:

  • include - mask for the file names to be collected. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • exclude - mask for the file names not to be collected. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • includeSubDirs - flag to also collect the sub-directories' content. Possible values: yes and no (default). Optional.

  • sourceDir - the source directory from which to collect files. Mandatory.

Element checksums describes for which application files to calculate checksums (SHA-256). Optional.
Attributes:

  • exclude - mask for the file names excluded from checksum calculation. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • sourceDir - the source directory containing the files for which to calculate the checksums. Mandatory.

Element environment describes how to collect environment variables. Optional.
Attributes:

  • exclude - names of the environment variables which should not be collected. Multiple names are separated by a comma (,). Optional.

Element permissions describes which file permissions should be collected. Optional.
Attributes:

  • sourceDirs - names of the directories whose permissions should be collected. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

Element differences describes how to calculate file differences. Optional.
Attributes:

  • exclude - mask for the file names not to be compared. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • sourceDir - the source directory whose files are compared. Mandatory.

  • checksums - the file containing pre-calculated checksums for the source directory content. Mandatory.

Cookbook

Here are tips on how to use HyperDebug to accomplish specific tasks.
Let's assume that HyperDebug is installed in directory HD_HOME.
Let's assume that the application to be debugged is installed in directory MYAPP_HOME. The application name is MyApp.

How to trace the application execution flow?

To do this, the HyperDebug agent should be attached to the debugged application. The agent is similar to a debugger, but it is attached through a JVM parameter instead of an open port. There are different options to attach the agent.

Run the application with an additional JVM parameter.
The parameter syntax is:

-javaagent:HD_HOME/library/hyperdebug-agent-1.0.0.jar=HD_HOME/config/MyAppDebug.xml

The part after the colon points to the HyperDebug agent jar file.
The part after the equals sign points to the prepared agent configuration file, which says what data should be collected from the MyApp application.

The additional parameter should be added to the script that starts the MyApp application - typically a .bat or .sh file. The parameter should be added in front, before all other parameters (JVM memory settings, classpath, etc.).
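
Here is a minimal sketch of such a start command (the memory setting, classpath and main class are hypothetical placeholders):

# Attach the HyperDebug agent before the other JVM parameters
java -javaagent:HD_HOME/library/hyperdebug-agent-1.0.0.jar=HD_HOME/config/MyAppDebug.xml \
     -Xmx2g -cp "MYAPP_HOME/lib/*" com.company.myapp.Main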

Then restart the MyApp application. The collected trace data will appear in the directory specified in trace/outputDir in the agent configuration file.

If the agent has started, the application console/log will contain the following line:
Loading HyperDebug agent with arguments.

Example trace log file content:

2018-06-20T20:25:44.153Z;"main_1";CallBegin;com.mindfusion.hd.echoer.EchoerApplication.main(java.lang.String[]);;;18917505756430;
2018-06-20T20:25:44.170Z;"main_1";CallBegin;com.mindfusion.hd.echoer.EchoerApplication.echoString(java.lang.String);596706728;;18917523915248;
2018-06-20T20:25:44.170Z;"main_1";CallEnd;;;0;18917523915248;
2018-06-20T20:25:44.175Z;"main_1";Exception;"Test exception @ java.lang.RuntimeException: Test exception\	at com.mindfusion.hd.echoer.EchoerApplication.main(EchoerApplication.java:43)\	at com.mindfusion.hd.echoer.EchoerApplicationTest.testRunWithAgent(EchoerApplicationTest.java:54)\	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\	at java.lang.reflect.Method.invoke(Method.java:498)\	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)\	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)\	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)\	at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)\	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)\	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)\	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)\	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)\	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)\	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)\	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)\	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)\	at org.junit.runners.ParentRunner.run(ParentRunner.java:363)\	at org.junit.runner.JUnitCore.run(JUnitCore.java:137)\	at com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68)\	at com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47)\	at com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242)\	at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)\";;;18917505756430;

Trace log columns:

  • Timestamp - when the log line was created, in ISO format (yyyy-MM-ddTHH:mm:ss.SSSZ). Example: 2018-03-22T20:15:54.773Z

  • Thread - the thread name and thread id. Example: main_1

  • Action - the action name (one of CallBegin/CallEnd/Exception). Example: CallBegin

  • Called method - the name of the called method and its parameter types. In case of an exception, contains the exception message and its stack trace. Example: com.mindfusion.hd.echoer.EchoerApplication.echoString(java.lang.String)

  • Object instance - hashcode of the object whose method was called (empty for static methods). Example: 596706728

  • Execution time - the method execution time in milliseconds. Example: 20

  • Line identifier - unique log line identifier (within the file scope). Example: 1521749754773

Example of the corresponding data log file content:

18917505756430;"[Ljava.lang.String;;@5f20155b"
18917523915248;"echo1"
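
Note how the line identifier links the two files: the "echo1" value in the data log corresponds to the CallBegin entry for echoString(java.lang.String) with the same line identifier (18917523915248) in the trace log above.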

Data log columns:

  • Line identifier - the log line identifier from the corresponding trace log file. Example: 30764508060988

  • Call parameters - text value of the parameters passed when calling the method (written on CallBegin). Example: "echo1"

Data collection commands

HyperDebug can collect additional data, outside the debugged application, to facilitate defect discovery:

./hd-command.sh [commandName] [-param value]*

or

hd-command.bat [commandName] [-param value]*

There are two versions of the scripts - one for Linux (.sh) and one for Windows (.bat). They work equivalently, so they will be referred to without the extension, like this: hd-command.

The possible commands are listed below.

CollectFiles command

CollectFiles collects files of interest (for defect discovery) from a given target directory.
Parameters:

  • exclude - mask for the file names not to be collected. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • include - mask for the file names to be collected. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • includeSubDirs - flag to also collect the sub-directories' content. Possible values: yes and no (default). Optional.

  • outputDir - the directory in which to store the collected files. Sub-directories will be preserved during copying. If not specified, the default directory is 'hd-data'. Optional.

  • sourceDir - the source directory from which to collect files. Mandatory.

Examples:

hd-command CollectFiles -include \*.log -sourceDir /opt/myapp/logs -includeSubDirs yes -outputDir /tmp/hd-data/1

The command above will copy all log files from the '/opt/myapp/logs' directory and its sub-directories to the directory '/tmp/hd-data/1'.

CollectChecksums command

CollectChecksums calculates checksums (SHA-256) of the files from the specified directory and its sub-directories. The results are written to a file.
Parameters:

  • exclude - mask for the file names excluded from checksum calculation. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • outputDir - the directory in which to store the file with the calculated checksums. If not specified, the default directory is 'hd-data'. Optional.

  • sourceDir - the source directory containing the files for which to calculate the checksums. Mandatory.

Examples:

hd-command CollectChecksums -sourceDir /opt/myapp -exclude \*.tmp,\*.log,\*.cfg -outputDir /opt/temp/hd-data/1

The command above will calculate checksums of all files, except the .tmp/.log/.cfg files, from the directory '/opt/myapp' including its sub-directories. The result will be written to a file in the directory '/opt/temp/hd-data/1'.

CollectEnvironment command

CollectEnvironment collects the environment variables from the machine (character encoding, time zone, host name, etc.).
Parameters:

  • exclude - names of the environment variables which should not be collected. Multiple names are separated by a comma (,). Optional.

  • outputDir - the directory in which to store the file with the collected environment variables. If not specified, the default directory is 'hd-data'. Optional.

Examples:

hd-command CollectEnvironment -exclude machine.token -outputDir /tmp/hd-data/1

The command above will collect the machine environment variables (except 'machine.token') and write them to a file in the directory '/tmp/hd-data/1'.

CollectPermissions command

CollectPermissions collects the user access permissions for the desired directories. The result will be written to a file.
Parameters:

  • sourceDirs - names of the directories whose permissions should be collected. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • outputDir - the directory in which to store the file with the collected permissions. If not specified, the default directory is 'hd-data'. Optional.

Examples:

hd-command CollectPermissions -sourceDirs /opt/myapp/temp,/opt/myapp/config -outputDir /tmp/hd-data/1

The command above will collect the access permissions for the directories '/opt/myapp/temp' and '/opt/myapp/config' and write them to a file. The file will be in the directory '/tmp/hd-data/1'.

CollectDifferences command

CollectDifferences detects application file differences by comparing pre-calculated checksums (generated by the CollectChecksums command) with the current application files. This makes it possible to detect corrupted/modified files. The comparison result will be written to a file (a list of added/updated/removed files). Also, the differing files will be collected and compressed, so they can easily be sent for analysis or used as a patch for another installation of the application.
Parameters:

  • checksums - the file containing pre-calculated checksums for the source directory content. Mandatory.

  • exclude - mask for the file names not to be compared. Accepts asterisk (*) as a wildcard. Multiple names are separated by a comma (,). Optional.

  • outputDir - the directory in which to store the file with the collected differences and files. If not specified, the default directory is 'hd-data'. Optional.

  • sourceDir - the source directory whose files are compared. Mandatory.

Examples:

hd-command CollectDifferences -sourceDir /opt/myapp -checksums /tmp/hd-data/0/config.ini -exclude \*.log -outputDir /tmp/hd-data/1

The command above will compare the checksums from '/tmp/hd-data/0/config.ini' with the current files in the '/opt/myapp' directory. The result files will be written in the directory '/tmp/hd-data/1'.
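
A typical two-step workflow is sketched below. It assumes the checksums file is written as checksums.ini (following the agent configuration example above); adjust the name to whatever CollectChecksums actually produced:

hd-command CollectChecksums -sourceDir /opt/myapp -exclude \*.log,\*.tmp -outputDir /tmp/hd-data/0
hd-command CollectDifferences -sourceDir /opt/myapp -checksums /tmp/hd-data/0/checksums.ini -exclude \*.log,\*.tmp -outputDir /tmp/hd-data/1

The first command records a baseline on a known-good installation; the second, run later (or on another installation), reports and collects what has changed.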

CollectInfo command

CollectInfo executes all collectors configured in the agent configuration file.
What data to collect is described in the collect section of the prepared agent configuration file.
The data will be collected in the directory pointed to by the outputDir parameter in the agent configuration file.
This is a convenient way to describe the necessary information in one XML file and give it to the support team, so they can collect the data with a single command.
Parameters:

  • config - name of the agent configuration file. Mandatory.

Examples:

hd-command CollectInfo -config MyAppDebug.xml

CutFile command

Bug hunting has always involved large logs - so large that the standard text editors cannot work with them…
CutFile cuts long files:

  • From/to specific regex.

  • From/to specific line.

  • From/to specific offset.

  • Only matching lines (grep via regex).

Parameters:

  • file - the long file to be cut. Mandatory.

  • fromLine - the line number to begin the cut from. The first line is 1. Optional. Usually combined with the toLine parameter.

  • toLine - the line number to end the cut at. Optional. Usually combined with the fromLine parameter.

  • fromString - regular expression matching the line where the cut begins. Optional. Usually combined with the toString parameter.

  • toString - regular expression matching the line where the cut ends. Optional. Usually combined with the fromString parameter.

  • fromOffset - the number of characters from the beginning of the file where the cut begins. Can be specified as a decimal or hex number (like 0x4F). Optional. Usually combined with the toOffset parameter.

  • toOffset - the number of characters from the beginning of the file where the cut ends. Can be specified as a decimal or hex number (like 0x4F). Optional. Usually combined with the fromOffset parameter.

  • matchingLines - regular expression; only the lines matching it are kept (grep-like filtering). Optional.

  • writeToFile - name of the file where the cut parts are written. Optional.

If the start cutting point is not specified, the beginning of the file is assumed.
If the end cutting point is not specified, the end of the file is assumed.
If the output file is not specified, the cut parts will be written to a file named like the input file, with an appended unique counter (like 'myapp.1.log').

Examples:

hd-command CutFile -file /opt/myapp/logs/today.log -fromLine 1200 -toLine 2300
hd-command CutFile -file /opt/myapp/logs/today.log -fromString "Segment \d+ locked .?" -writeToFile /tmp/segments/segment1.log
hd-command CutFile -file /opt/myapp/logs/today.log -toOffset 0x25ED432
hd-command CutFile -file /opt/myapp/logs/today.log -matchingLines "\[Worker_1.?\]"
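
The first command keeps lines 1200 to 2300. The second cuts from the first line matching the regular expression to the end of the file and writes the result to the given file. The third cuts from the beginning of the file to character offset 0x25ED432. The fourth keeps only the lines matching the regular expression.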

TransformFiles command

TransformFiles applies a series of transformations on log files. This is useful for unifying them (when they come from different sources and have different formats), and also for reducing the information noise and preparing the files for subsequent analysis.

Parameters:

  • config - name of the transformation configuration file. Mandatory.

Examples:

hd-command TransformFiles -config /opt/myapp/TransformLogs.yaml

The transformations are described in a transformation configuration file (in YAML format), which has the following structure:

  • name - user-defined name of the transformation. Optional.

  • inputDir - directory where the input files are located. Mandatory.

  • fileFilter - regex matching the files from the input directory to be processed. Optional. If not specified, all files will be processed.

  • inputLine - regex matching a line of the processed log file, in which all capturing groups are named. Each named group represents a log field (timestamp, level, message, etc.). The field names can later be referenced from the changes config section, using the ${fieldName} syntax.

  • changes - block describing the changes to be applied to each log line (there can be more than one of these blocks). Mandatory. It has the following attributes:

    • over - reference to the log field to process (timestamp, level, message…).

    • find - regex matching the desired part of the processed field.

    • replace - string to replace the matched part with. Can have back references to the capturing groups from the 'find' regex.

    • as - alias name for the change result. If not specified, the referenced log field name is used.

  • outputLine - regex defining the format of the transformed (output) log line. Can refer to log fields from the input line or to aliases from the change blocks.

  • outputHeader - header line for the transformed log file. Optional.

  • outputDir - directory where the transformed log files are written. Optional. If not provided, the input directory is used (and existing files will not be overwritten).

Here is an example transformation configuration:

# Log file transformation.
# Input as: 2018-03-22T20:15:54.773Z WARN [main] - message 1
# Output as: W - message 1
name: Keep only the first letter from the level and the message.
inputDir: /opt/myapp/logs
fileFilter: .*?\.log
inputLine: ^(?:.+?) (?<level>\w{4,7}) \[(?:.+?)\] - (?<message>.+?)$
changes:
  - over: ${level}
    find: (\w).+
    replace: $1
    as: shortLevel
outputLine: "${shortLevel} - ${message}"

Such transformations are convenient when you want to compare two logs - one from a successful execution and one from a failed execution. Because of timestamp (and thread id) differences, a comparator would see too many differences. The transformer can remove these undesired differences and make the comparison easier.

License

HyperDebug is commercial software. An appropriate license should be purchased prior to use.
The file containing the license must be copied to: HD_HOME/config/HyperDebug.license.

Here is the end user license agreement: EULA