Thursday, December 12, 2013

The infamous sun.misc.Unsafe explained

The biggest competitor to the Java virtual machine might be Microsoft's CLR, which hosts languages such as C#. The CLR allows writing unsafe code as an entry gate for low-level programming, something that is hard to achieve on the JVM. If you need such advanced functionality in Java, you might be forced to use the JNI, which requires you to know some C and quickly leads to code that is tightly coupled to a specific platform. With sun.misc.Unsafe, there is however another alternative for low-level programming on the Java platform using a Java API, even though this alternative is discouraged. Nevertheless, several applications rely on sun.misc.Unsafe, for example objenesis and therewith all libraries that build on it, such as kryo, which is in turn used in, for example, Twitter's Storm. Therefore, it is time to have a look, especially since the functionality of sun.misc.Unsafe is being considered for inclusion in Java's public API in Java 9.

Getting hold of an instance of sun.misc.Unsafe


The sun.misc.Unsafe class is intended to be used only by core Java classes, which is why its authors made its only constructor private and added an equally private singleton instance. The public getter for this instance performs a security check in order to prevent its public use:

public static Unsafe getUnsafe() {
  Class cc = sun.reflect.Reflection.getCallerClass(2);
  if (cc.getClassLoader() != null)
    throw new SecurityException("Unsafe");
  return theUnsafe;
}

This method first looks up the calling class on the current thread's method stack. This lookup is implemented by another internal class named sun.reflect.Reflection, which basically walks down the given number of call stack frames and then returns that frame's defining class. When browsing the stack, the first class found (index 0) will obviously be the Reflection class itself, and the second (index 1) the Unsafe class, such that index 2 holds the application class that called Unsafe#getUnsafe(). This security check is however likely to change in future versions.

This looked-up class is then checked for its ClassLoader, where a null reference is used to represent the bootstrap class loader on a HotSpot virtual machine. (This is documented in Class#getClassLoader(), which states that "some implementations may use null to represent the bootstrap class loader".) Since no non-core Java class is normally ever loaded with this class loader, you will never be able to call this method directly but will receive a SecurityException instead. (Technically, you could force the VM to load your application classes with the bootstrap class loader by adding them to the -Xbootclasspath, but this would require some setup outside of your application code which you might want to avoid.) Thus, the following test will succeed:

@Test(expected = SecurityException.class)
public void testSingletonGetter() throws Exception {
  Unsafe.getUnsafe();
}

However, the security check is poorly designed and should be seen as a warning against the singleton anti-pattern. As long as the use of reflection is not prohibited (which is hard, since it is so widely used by many frameworks), you can always get hold of an instance by inspecting the private members of the class. From the Unsafe class's source code, you can learn that the singleton instance is stored in a private static field called theUnsafe. This is at least true for the HotSpot virtual machine. Unfortunately for us, other virtual machine implementations sometimes use other names for this field. Android's Unsafe class, for example, stores its singleton instance in a field called THE_ONE. This makes it hard to provide a "compatible" way of obtaining the instance. However, since we already left the safe territory of compatibility by using the Unsafe class at all, we should not worry about this too much. To get hold of the singleton instance, you simply read the singleton field's value:

Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
theUnsafe.setAccessible(true);
Unsafe unsafe = (Unsafe) theUnsafe.get(null);

Alternatively, you can invoke the private constructor. I personally prefer this way since it works, for example, on Android while extracting the field does not:

Constructor<Unsafe> unsafeConstructor = Unsafe.class.getDeclaredConstructor();
unsafeConstructor.setAccessible(true);
Unsafe unsafe = unsafeConstructor.newInstance();

The price you pay for this minor compatibility advantage is a minimal amount of additional heap space. The security checks performed when using reflection on fields or constructors are however similar.
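A minimal sketch of a lookup helper that combines both strategies is shown below; the field name THE_ONE for Android is taken from the discussion above, everything else is plain reflection:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Field;

import sun.misc.Unsafe;

public class UnsafeProvider {

  public static Unsafe getUnsafeInstance() throws ReflectiveOperationException {
    // Try the known singleton field names first (HotSpot, then Android).
    for (String fieldName : new String[] {"theUnsafe", "THE_ONE"}) {
      try {
        Field field = Unsafe.class.getDeclaredField(fieldName);
        field.setAccessible(true);
        return (Unsafe) field.get(null);
      } catch (NoSuchFieldException ignored) {
        // Fall through to the next candidate.
      }
    }
    // Fall back to invoking the private constructor.
    Constructor<Unsafe> constructor = Unsafe.class.getDeclaredConstructor();
    constructor.setAccessible(true);
    return constructor.newInstance();
  }
}
```

Note that the constructor fallback creates a second instance instead of reusing the singleton, which is usually harmless since Unsafe is stateless.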

 

Create an instance of a class without calling a constructor


The first time I made use of the Unsafe class was for creating an instance of a class without calling any of the class's constructors. I needed to proxy an entire class which only had a rather noisy constructor, but I only wanted to delegate all method invocations to a real instance which I did not yet know at the time of construction. Creating a subclass was easy, and if the class had been represented by an interface, creating a proxy would have been a straightforward task. With the expensive constructor, I was however stuck. By using the Unsafe class, I was able to work my way around it. Consider a class with an artificially expensive constructor:

class ClassWithExpensiveConstructor {

  private final int value;

  private ClassWithExpensiveConstructor() {
    value = doExpensiveLookup();
  }

  private int doExpensiveLookup() {
    try {
      Thread.sleep(2000);
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
    return 1;
  }

  public int getValue() {
    return value;
  }
}

Using the Unsafe, we can create an instance of ClassWithExpensiveConstructor (or any of its subclasses) without having to invoke the above constructor, simply by allocating an instance directly on the heap:

@Test
public void testObjectCreation() throws Exception {
  ClassWithExpensiveConstructor instance = (ClassWithExpensiveConstructor)
      unsafe.allocateInstance(ClassWithExpensiveConstructor.class);
  assertEquals(0, instance.getValue());
}

Note that the final field remained uninitialized by the constructor and is instead set to its type's default value. Other than that, the constructed instance behaves like a normal Java object. It will, for example, be garbage collected once it becomes unreachable.

The Java runtime itself creates objects without calling a constructor, for example when creating objects during deserialization. Therefore, the ReflectionFactory offers even more control over individual object creation:

@Test
public void testReflectionFactory() throws Exception {
  @SuppressWarnings("unchecked")
  Constructor<ClassWithExpensiveConstructor> silentConstructor = ReflectionFactory.getReflectionFactory()
      .newConstructorForSerialization(ClassWithExpensiveConstructor.class, Object.class.getConstructor());
  silentConstructor.setAccessible(true);
  assertEquals(0, silentConstructor.newInstance().getValue());
}

Note that the ReflectionFactory class only requires a RuntimePermission called reflectionFactoryAccess for receiving its singleton instance, so no reflection is required here. The received instance of ReflectionFactory allows you to define any constructor to become a constructor for the given type. In the example above, I used the default constructor of java.lang.Object for this purpose, which is why getValue() returned the default value 0. You can however use any constructor:

class OtherClass {

  private final int value;
  private final int unknownValue;

  private OtherClass() {
    System.out.println("test");
    this.value = 10;
    this.unknownValue = 20;
  }
}

@Test
public void testStrangeReflectionFactory() throws Exception {
  @SuppressWarnings("unchecked")
  Constructor<ClassWithExpensiveConstructor> silentConstructor = ReflectionFactory.getReflectionFactory()
      .newConstructorForSerialization(ClassWithExpensiveConstructor.class,
            OtherClass.class.getDeclaredConstructor());
  silentConstructor.setAccessible(true);
  ClassWithExpensiveConstructor instance = silentConstructor.newInstance();
  assertEquals(10, instance.getValue());
  assertEquals(ClassWithExpensiveConstructor.class, instance.getClass());
  assertEquals(Object.class, instance.getClass().getSuperclass());
}

Note that value was set in this constructor even though a constructor of a completely different class was invoked. Fields that do not exist in the target class are simply ignored, as is also obvious from the above example. Note that OtherClass does not become part of the constructed instance's type hierarchy; OtherClass's constructor is simply borrowed for the "serialized" type.

Not mentioned in this blog entry are other methods such as Unsafe#defineClass, Unsafe#defineAnonymousClass or Unsafe#ensureClassInitialized. Similar functionality is however also available via the public API's ClassLoader.

Native memory allocation


Did you ever want to allocate an array in Java with more than Integer.MAX_VALUE entries? Probably not, because this is not a common task, but if you ever need this functionality, it is possible. You can create such an array by allocating native memory. Native memory allocation is used, for example, by the direct byte buffers that are offered in Java's NIO packages. Unlike heap memory, native memory is not part of the heap area and can be used non-exclusively, for example for communicating with other processes. As a result, Java's heap space competes with the native space: the more memory you assign to the JVM, the less native memory is left.

Let us look at an example of using native (off-heap) memory in Java by creating the mentioned oversized array:

class DirectIntArray {

  private final static long INT_SIZE_IN_BYTES = 4;
  
  private final long startIndex;

  public DirectIntArray(long size) {
    startIndex = unsafe.allocateMemory(size * INT_SIZE_IN_BYTES);
    unsafe.setMemory(startIndex, size * INT_SIZE_IN_BYTES, (byte) 0);
  }

  public void setValue(long index, int value) {
    unsafe.putInt(index(index), value);
  }

  public int getValue(long index) {
    return unsafe.getInt(index(index));
  }

  private long index(long offset) {
    return startIndex + offset * INT_SIZE_IN_BYTES;
  }

  public void destroy() {
    unsafe.freeMemory(startIndex);
  }
}

@Test
public void testDirectIntArray() throws Exception {
  long maximum = Integer.MAX_VALUE + 1L;
  DirectIntArray directIntArray = new DirectIntArray(maximum);
  directIntArray.setValue(0L, 10);
  directIntArray.setValue(maximum - 1, 20);
  assertEquals(10, directIntArray.getValue(0L));
  assertEquals(20, directIntArray.getValue(maximum - 1));
  directIntArray.destroy();
}

First, make sure that your machine has sufficient memory for running this example! You need at least (2147483647 + 1) * 4 bytes = 8192 MB of native memory to run the code. If you have worked with other programming languages such as C, direct memory allocation is something you do every day. By calling Unsafe#allocateMemory(long), the virtual machine allocates the requested amount of native memory for you. After that, it is your responsibility to handle this memory correctly.

The amount of memory that is required for storing a specific value depends on the type's size. In the above example, I used the int type, which represents a 32-bit integer, so a single int value consumes 4 bytes. For primitive types, the size is well-documented. It is however more complex to compute the size of object types, since it depends on the number of non-static fields that are declared anywhere in the type hierarchy. The most canonical way of computing an object's size is using the Instrumentation interface from Java's attach API, which offers a dedicated method for this purpose called getObjectSize. I will however evaluate another (hacky) way of dealing with objects at the end of this section.

Be aware that directly allocated memory is always native memory and is therefore not garbage collected. You have to free it explicitly, as demonstrated in the above example by the call to Unsafe#freeMemory(long). Otherwise you have reserved memory that can never be used for anything else for as long as the JVM instance is running, which is a memory leak and a common problem in non-garbage-collected languages. Alternatively, you can also reallocate memory at a certain address by calling Unsafe#reallocateMemory(long, long), where the second argument describes the new number of bytes to be reserved by the JVM at the given address.
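A small sketch of Unsafe#reallocateMemory is shown below, assuming the unsafe instance is obtained via reflection as shown earlier. Note that the returned address may differ from the original one, which is why it must always be reassigned:

```java
import java.lang.reflect.Field;

import sun.misc.Unsafe;

public class ReallocationExample {

  static int growAndRead() throws Exception {
    Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
    theUnsafe.setAccessible(true);
    Unsafe unsafe = (Unsafe) theUnsafe.get(null);

    // Reserve space for a single int and store a value.
    long address = unsafe.allocateMemory(4L);
    unsafe.putInt(address, 42);

    // Grow the block to two ints; the old content is preserved, but the
    // block may have moved, so the returned address must be reassigned.
    address = unsafe.reallocateMemory(address, 8L);
    int value = unsafe.getInt(address);
    unsafe.freeMemory(address);
    return value;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(growAndRead()); // 42
  }
}
```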

Also note that directly allocated memory is not initialized with a certain value. In general, you will find garbage from previous uses of this memory area, so you have to initialize your allocated memory explicitly if you require default values. This is something that is normally done for you when the Java runtime allocates memory on your behalf. In the above example, the entire area is overwritten with zeros with the help of the Unsafe#setMemory method.

When using directly allocated memory, the JVM will not perform range checks for you either. It is therefore possible to corrupt your memory, as this example shows:

@Test
public void testMaliciousAllocation() throws Exception {
  long address = unsafe.allocateMemory(2L * 4);
  unsafe.setMemory(address, 8L, (byte) 0);
  assertEquals(0, unsafe.getInt(address));
  assertEquals(0, unsafe.getInt(address + 4));
  unsafe.putInt(address + 1, 0xffffffff);
  assertEquals(0xffffff00, unsafe.getInt(address));
  assertEquals(0x000000ff, unsafe.getInt(address + 4));
}

Note that we wrote a value into the space that was partly reserved for the first and partly for the second number. Be aware that on a little-endian machine the bytes of a value run from "right to left", i.e. the least significant byte is stored first, but this is machine dependent.

Initially, the entire allocated native memory area is filled with zeros. We then overwrite 4 bytes, at an offset of a single byte, with 32 one-bits. The two final assertions show the result of this write operation: the first int now carries ones in its three upper bytes, the second int in its lowest byte.
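The same overlap can be reproduced without Unsafe using a heap ByteBuffer with an explicit little-endian byte order; this is a safe sketch of the byte layout only, not of the raw memory access itself:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class OverlapExample {

  static int[] overlappingWrite() {
    // Eight zeroed bytes, interpreted as two little-endian ints.
    ByteBuffer buffer = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
    // Write 32 one-bits at an offset of a single byte.
    buffer.putInt(1, 0xffffffff);
    // The write spills into both int slots.
    return new int[] {buffer.getInt(0), buffer.getInt(4)};
  }

  public static void main(String[] args) {
    int[] values = overlappingWrite();
    System.out.println(Integer.toHexString(values[0])); // ffffff00
    System.out.println(Integer.toHexString(values[1])); // ff
  }
}
```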

Finally, we want to write an entire object into native memory. As mentioned above, this is a difficult task since we first need to compute the size of the object in order to know how much space to reserve. The Unsafe class does not offer such functionality directly. But we can at least use the Unsafe class to find the offset of an instance's field, which the JVM uses when it allocates objects on the heap itself. This allows us to find the approximate size of an object:

public long sizeOf(Class<?> clazz) {
  long maximumOffset = 0;
  do {
    for (Field f : clazz.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers())) {
        maximumOffset = Math.max(maximumOffset, unsafe.objectFieldOffset(f));
      }
    }
  } while ((clazz = clazz.getSuperclass()) != null);
  return maximumOffset + 8;
}

This might look cryptic at first, but there is no big secret behind this code. We simply iterate over all non-static fields that are declared in the class itself or in any of its super classes. We do not have to worry about interfaces, since those cannot define instance fields and will therefore never alter an object's memory layout. Each of these fields has an offset which represents the first byte that is occupied by this field's value when the JVM stores an instance of this type in memory, relative to the first byte used for this object. We simply have to find the maximum offset in order to account for the space that is required for all fields but the last one. Since a field will never occupy more than 64 bit (8 bytes), as for a long or double value or for an object reference on a 64-bit machine, we have found an upper bound for the space that is used to store an object. Therefore, we simply add these 8 bytes to the maximum offset and will not run into the danger of having reserved too little space. This idea of course wastes some bytes, and a better algorithm should be used for production code.

In this context, it is best to think of a class definition as a form of heterogeneous array. Note that the minimum field offset is not 0 but a positive value: the first few bytes of an object contain meta information (the object header), and the field values are laid out after it. For an example object with an int and a long field, both fields therefore have offsets well above zero. Note that we do not normally write this meta information when writing a copy of an object into native memory, so we could further reduce the amount of native memory used. Also note that this memory layout may be highly dependent on the implementation of the Java virtual machine.
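The header can be observed directly by printing field offsets. This is a HotSpot-specific sketch; the concrete numbers vary between VMs and settings such as compressed oops:

```java
import java.lang.reflect.Field;

import sun.misc.Unsafe;

public class FieldOffsetExample {

  static class Example {
    int intValue;
    long longValue;
  }

  public static void main(String[] args) throws Exception {
    Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
    theUnsafe.setAccessible(true);
    Unsafe unsafe = (Unsafe) theUnsafe.get(null);

    // Both offsets are strictly positive: the object header
    // occupies the first bytes of every instance.
    for (Field f : Example.class.getDeclaredFields()) {
      System.out.println(f.getName() + ": " + unsafe.objectFieldOffset(f));
    }
  }
}
```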
With this overly careful estimate, we can now implement some stub methods for writing shallow copies of objects directly into native memory. Note that native memory does not really know the concept of an object. We are basically just setting a given number of bytes to values that reflect an object's current field values. As long as we remember the memory layout for this type, these bytes contain enough information to reconstruct the object.

public void place(Object o, long address) throws Exception {
  Class clazz = o.getClass();
  do {
    for (Field f : clazz.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers())) {
        long offset = unsafe.objectFieldOffset(f);
        if (f.getType() == long.class) {
          unsafe.putLong(address + offset, unsafe.getLong(o, offset));
        } else if (f.getType() == int.class) {
          unsafe.putInt(address + offset, unsafe.getInt(o, offset));
        } else {
          throw new UnsupportedOperationException();
        }
      }
    }
  } while ((clazz = clazz.getSuperclass()) != null);
}

public Object read(Class clazz, long address) throws Exception {
  Object instance = unsafe.allocateInstance(clazz);
  do {
    for (Field f : clazz.getDeclaredFields()) {
      if (!Modifier.isStatic(f.getModifiers())) {
        long offset = unsafe.objectFieldOffset(f);
        if (f.getType() == long.class) {
          unsafe.putLong(instance, offset, unsafe.getLong(address + offset));
        } else if (f.getType() == int.class) {
          unsafe.putInt(instance, offset, unsafe.getInt(address + offset));
        } else {
          throw new UnsupportedOperationException();
        }
      }
    }
  } while ((clazz = clazz.getSuperclass()) != null);
  return instance;
}

@Test
public void testObjectAllocation() throws Exception {
  long containerSize = sizeOf(Container.class);
  long address = unsafe.allocateMemory(containerSize);
  Container c1 = new Container(10, 1000L);
  Container c2 = new Container(5, -10L);
  place(c1, address);
  place(c2, address + containerSize);
  Container newC1 = (Container) read(Container.class, address);
  Container newC2 = (Container) read(Container.class, address + containerSize);
  assertEquals(c1, newC1);
  assertEquals(c2, newC2);
}

Note that these stub methods for writing and reading objects in native memory only support int and long field values. Of course, Unsafe supports all primitive values and can even write values without hitting thread-local caches by using the volatile forms of the methods. The stubs were kept short only to keep the examples concise. Be aware that these "instances" would never get garbage collected since their memory was allocated directly. (But maybe this is what you want.) Also, be careful when precalculating sizes, since an object's memory layout may be VM dependent and may also differ between a 64-bit and a 32-bit machine. The offsets might even change between JVM restarts.

For reading and writing primitives or object references, Unsafe provides the following type-dependent methods:
  • getXXX(Object target, long offset): Will read a value of type XXX from target's address at the specified offset.
  • putXXX(Object target, long offset, XXX value): Will place value at target's address at the specified offset.
  • getXXXVolatile(Object target, long offset): Will read a value of type XXX from target's address at the specified offset and not hit any thread local caches.
  • putXXXVolatile(Object target, long offset, XXX value): Will place value at target's address at the specified offset and not hit any thread local caches.
  • putOrderedXXX(Object target, long offset, XXX value): Will place value at target's address at the specified offset and might not hit all thread local caches.
  • putXXX(long address, XXX value): Will place the specified value of type XXX directly at the specified address.
  • getXXX(long address): Will read a value of type XXX from the specified address.
  • compareAndSwapXXX(Object target, long offset, XXX expectedValue, XXX value): Will atomically read a value of type XXX from target's address at the specified offset and set the given value if the current value at this offset equals the expected value.
Be aware that you are copying references when writing or reading object copies in native memory using the getObject(Object, long) method family. You are therefore only creating shallow copies of instances when applying the above methods. You could however always read object sizes and offsets recursively and create deep copies. However, watch out for cyclic object references, which would cause infinite loops when applying this principle carelessly.

Not mentioned here are existing utilities in the Unsafe class that allow the manipulation of static field values, such as staticFieldOffset, and utilities for handling array types. Finally, both methods named Unsafe#copyMemory allow instructing a direct copy of memory, either relative to a specific object offset or at an absolute address, as the following example shows:

@Test
public void testCopy() throws Exception {
  long address = unsafe.allocateMemory(4L);
  unsafe.putInt(address, 100);
  long otherAddress = unsafe.allocateMemory(4L);
  unsafe.copyMemory(address, otherAddress, 4L);
  assertEquals(100, unsafe.getInt(otherAddress));
}

Throwing checked exceptions without declaration


There are some other interesting methods to be found in Unsafe. Did you ever want to throw a specific exception to be handled in a lower layer, but your higher-layer interface type did not declare this checked exception? Unsafe#throwException allows you to do so:

@Test(expected = Exception.class)
public void testThrowChecked() throws Exception {
  throwChecked();
}

public void throwChecked() {
  unsafe.throwException(new Exception());
}

Native concurrency


The park and unpark methods allow you to pause a thread for a certain amount of time and to resume it. The first argument of park determines whether the second argument is an absolute deadline in milliseconds or a relative duration in nanoseconds:

@Test
public void testPark() throws Exception {
  final boolean[] run = new boolean[1];
  Thread thread = new Thread() {
    @Override
    public void run() {
      unsafe.park(true, 100000L);
      run[0] = true;
    }
  };
  thread.start();
  unsafe.unpark(thread);
  thread.join(100L);
  assertTrue(run[0]);
}

Also, monitors can be acquired directly through Unsafe using monitorEnter(Object), monitorExit(Object) and tryMonitorEnter(Object).

A file containing all the examples of this blog entry is available as a gist.

Saturday, November 23, 2013

A declarative content parser for Java

Recently, I worked on a project that required me to parse several files which came in their own file formats. To make things worse, the file formats changed frequently, such that the related code had to be adjusted just as often. In my opinion, object-oriented languages such as Java are not necessarily the sharpest tools for file parsing. For this reason, I tried to solve the problem with a declarative approach, the result of which I published on GitHub and on Maven Central.

This miniature tool helps with parsing a custom content format by creating Java beans that extract the data, where each bean represents a single row of content of a specified source. The mapping of the input to a Java bean is based on regular expressions, which allows great flexibility. The syntax of regular expressions is however enhanced by property expressions that allow the direct mapping of bean properties within the regular expression describing the contents.
This tool is intended to be as light-weight as possible and comes without dependencies. It is however quite extensible, as demonstrated below.

A simple example

The tool is used via a single entry point, an instance of BeanTransformer, which can read content for a specified bean. By doing so, the parser only needs to build the patterns for a specified bean a single time. Therefore, this tool performs equally well as a hand-written content parser, once it is set up.
As an example for the use of this tool, imagine you want to read in data from some sample file sample.txt containing the following data in an imaginary format:

##This is a first value##foo&&2319949,
##This is a second value##bar&&741981,
##This is a third value##&&998483,

This tool would now allow you to directly translate this file by declaring the following Java bean:

@MatchBy("##@intro@##@value@&&@number@,")
class MyBean {

    private String intro;

    @OptionalMatch
    private String value;

    private int number;

    // Getters and setters...
}

The @name@ expressions each represent a property in the bean which will be filled with the data found at this particular point within the regular expression. The expression can be escaped by preceding the first @ symbol with backslashes, as in \\@name@. A backslash can be escaped in the same manner. By calling BeanTransformer.make(MyBean.class).read(new FileReader("./sample.txt")), you would receive a list of MyBean instances where each instance represents one line of the file. All properties would be matched to the according property name that is declared in the bean.

Matching properties

The @MatchBy annotation can also be used on fields within a bean that is used for matching. The tool will normally derive a pattern by interpreting the type of a field that is referenced in the expression used for parsing the content. Since the @number@ expression in the example above references a field of type int, the field will be matched against the regular expression [0-9]+, representing any non-decimal number. All other primitive types and their wrapper types are also equipped with a predefined matching pattern. All other types are by default matched by a non-greedy match-everything pattern. After extracting a property, the tool tries to instantiate the property's type by:
  • Invoking a static function with the signature valueOf(String) defined on the type.
  • Using a constructor taking a single String as its argument
This default behavior can however be changed. A field can be annotated with @ResolveMatchWith, which takes a subtype of PropertyDelegate as its single argument. An instance of this class will be instantiated for transforming expressions from a string to a value and in reverse. The subclass needs to override the only constructor of PropertyDelegate and accept the same types as this constructor. An optional match can be declared by annotating a field with @OptionalMatch. If no match can be made for such a field, the PropertyDelegate's setter will never be invoked (when using a custom PropertyDelegate).
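The default resolution strategy described above, first a static valueOf(String) method and then a String constructor, can be sketched in plain Java; this is a hypothetical illustration, not the library's actual code:

```java
import java.lang.reflect.Constructor;
import java.lang.reflect.Method;

public class ValueResolver {

  public static Object resolve(Class<?> type, String raw) throws Exception {
    try {
      // First attempt: a static valueOf(String) factory on the type.
      Method valueOf = type.getMethod("valueOf", String.class);
      return valueOf.invoke(null, raw);
    } catch (NoSuchMethodException e) {
      // Second attempt: a constructor taking a single String argument.
      Constructor<?> constructor = type.getConstructor(String.class);
      return constructor.newInstance(raw);
    }
  }
}
```

For example, resolve(Integer.class, "42") goes through Integer.valueOf, while resolve(StringBuilder.class, "abc") falls back to the String constructor.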

Dealing with regular expressions

Always keep in mind that @MatchBy annotations take regular expressions as their arguments. It is therefore important to escape all special characters of regular expressions such as .\\*,[](){}+?^$. Also, note that the default matching pattern for non-primitive types and their wrappers is non-greedy. This means that the pattern @name@ would match the line foo by only its first letter f. If you want to match the full line, you have to declare the matching expression as ^@name@$, which is the regular expression for a full-line match. Be aware that using regular expressions might require you to define a @WritePattern, which is described below. Using regular expressions allows you to specify matching constraints natively. Annotating a field with @MatchBy("[a-z]{1,5}") would for example only match lines where the property is represented by one to five lower-case characters. Configuring mismatch handling is described below.
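The non-greedy behavior described above can be demonstrated with plain java.util.regex, where the patterns stand in for what the tool would generate:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class NonGreedyExample {

  static String firstMatch(String regex, String input) {
    Matcher matcher = Pattern.compile(regex).matcher(input);
    return matcher.find() ? matcher.group(1) : null;
  }

  public static void main(String[] args) {
    // Unanchored non-greedy pattern: matches as little as possible.
    System.out.println(firstMatch("(.+?)", "foo")); // f

    // Anchored pattern: forced to expand to the full line.
    System.out.println(firstMatch("^(.+?)$", "foo")); // foo
  }
}
```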

Writing beans

Similar to reading contents from a source, this utility allows you to write a list of beans to a target. Without further configuration, the same pattern as in @MatchBy will be used for writing, where the property expressions are substituted with the bean values. This can however result in distorted output, since symbols of regular expressions are written as they are. Therefore, a user can define an output pattern by declaring @WritePattern. This pattern understands the same type of property expressions such as @name@ but does not use regular expressions. Remember that regular expressions must therefore not be escaped when a @WritePattern is specified. A property expression can however be escaped in the same manner as in a @MatchBy statement.

Handling mismatches

When a content source is parsed and a single line cannot be matched to the specified expression, the extraction will abort by throwing a TransformationException. Empty lines will however be skipped. This behavior can be configured by declaring a policy with @Skip. That way, a non-matched line can either be ignored, throw an exception, or be ignored for empty lines only. An empty line at the end of a file is always ignored.

Builder

The BeanTransformer offers a builder which allows you to specify different properties. Mostly, it allows overriding properties that were declared on a specific bean, such as the content pattern provided by @MatchBy, a @Skip policy or the @WritePattern. This allows the reuse of a bean for different content sources that contain the same properties but differ in their display. It also allows providing a pattern at run time.

Performance considerations

All beans are constructed and accessed via field reflection. (It is therefore not required to define setters and getters on beans. A default constructor is however required.) In the process, primitive types are accessed as such and not wrapped, in order to avoid the related overhead. Java reflection is usually considered to be slower than conventional access. However, modern JVMs such as the HotSpot JVM are efficient in detecting the repetitive use of reflection and compile such access into native code. Therefore, this tool should not perform worse than a hand-written matcher once the BeanTransformer is set up.

Extension points

Besides providing your own PropertyDelegates, it is possible to implement a custom IDelegationFactory, which is responsible for creating (custom) PropertyDelegates for any field. The default implementation, SimpleDelegationFactory, provides an example of such an implementation. That way, it would for example be possible to automatically create suitable patterns from bean validation (JSR-303) annotations.

The code is licensed under the Apache Software License, Version 2.0.

Wednesday, November 13, 2013

cglib: The missing manual

The byte code instrumentation library cglib is a popular choice among many well-known Java frameworks such as Hibernate (not anymore) or Spring for doing their dirty work. Byte code instrumentation allows manipulating or creating classes after the compilation phase of a Java application. Since Java classes are linked dynamically at run time, it is possible to add new classes to an already running Java program. Hibernate uses cglib, for example, for its generation of dynamic proxies. Instead of returning the full object that you stored in a database, Hibernate will return an instrumented version of your stored class that lazily loads some values from the database only when they are requested. Spring uses cglib, for example, when adding security constraints to your method calls. Instead of calling your method directly, Spring Security will first check if a specified security check passes and only delegate to your actual method after this verification. Another popular use of cglib is within mocking frameworks such as mockito, where mocks are nothing more than instrumented classes whose methods are replaced with empty implementations (plus some tracking logic).

Unlike ASM - a much more low-level byte code manipulation framework on top of which cglib is built - cglib offers rather high-level byte code transformers that can be used without even knowing about the details of a compiled Java class. Unfortunately, the documentation of cglib is rather short, not to say that there is basically none. Besides a single blog article from 2005 that demonstrates the Enhancer class, there is not much to find. This blog article is an attempt to demonstrate cglib and its unfortunately often awkward API.

Enhancer

Let's start with the Enhancer class, probably the most used class of the cglib library. An enhancer allows the creation of Java proxies for non-interface types. The Enhancer can be compared with the Java standard library's Proxy class, which was introduced in Java 1.3. The Enhancer dynamically creates a subclass of a given type but intercepts all method calls. In contrast to the Proxy class, this works for both class and interface types. The following example and some of the examples after it are based on this simple Java POJO:

public class SampleClass {
  public String test(String input) {
    return "Hello world!";
  }
}

Using cglib, the return value of test(String) method can easily be replaced by another value using an Enhancer and a FixedValue callback:

@Test
public void testFixedValue() throws Exception {
  Enhancer enhancer = new Enhancer();
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallback(new FixedValue() {
    @Override
    public Object loadObject() throws Exception {
      return "Hello cglib!";
    }
  });
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
}

In the above example, the enhancer will return an instance of an instrumented subclass of SampleClass where all method calls return a fixed value which is generated by the anonymous FixedValue implementation above. The object is created by Enhancer#create(Object...) where the method takes any number of arguments which are used to pick any constructor of the enhanced class. (Even though constructors are only methods on the Java byte code level, the Enhancer class cannot instrument constructors. Neither can it instrument static or final classes.) If you only want to create a class, but no instance, Enhancer#createClass will create a Class instance which can be used to create instances dynamically. All constructors of the enhanced class will be available as delegation constructors in this dynamically generated class.

Be aware that any method call will be delegated in the above example, including calls to the methods defined in java.lang.Object. As a result, a call to proxy.toString() will also return "Hello cglib!". In contrast, a call to proxy.hashCode() will result in a ClassCastException since the FixedValue interceptor always returns a String even though the Object#hashCode signature requires a primitive int.
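For comparison, the Java standard library's Proxy achieves a similar interception effect, but only for interface types. The following self-contained sketch illustrates this limitation; the interface SampleInterface is made up for this example, since a class like SampleClass cannot be proxied this way:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class JdkProxyDemo {

  // A hypothetical interface: the JDK Proxy can only proxy interfaces,
  // never concrete classes like SampleClass.
  public interface SampleInterface {
    String test(String input);
  }

  public static SampleInterface createProxy() {
    return (SampleInterface) Proxy.newProxyInstance(
        SampleInterface.class.getClassLoader(),
        new Class<?>[]{SampleInterface.class},
        new InvocationHandler() {
          @Override
          public Object invoke(Object proxy, Method method, Object[] args) {
            // Like the FixedValue example, every call returns the same value.
            return "Hello JDK proxy!";
          }
        });
  }

  public static void main(String[] args) {
    SampleInterface proxy = createProxy();
    System.out.println(proxy.test(null)); // prints "Hello JDK proxy!"
  }
}
```

Just like the cglib example, calling proxy.hashCode() here would throw a ClassCastException, since the handler returns a String for every method.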

Another observation is that final methods are not intercepted. An example of such a method is Object#getClass, which, when invoked, will return the generated class with a name like "SampleClass$$EnhancerByCGLIB$$e277c63c". This class name is generated randomly by cglib in order to avoid naming conflicts. Be aware of the different class of the enhanced instance when you are making use of explicit types in your program code. The class generated by cglib will however be in the same package as the enhanced class (and will therefore be able to override package-private methods). Similar to final methods, the subclassing approach is the reason why final classes cannot be enhanced. This is why frameworks such as Hibernate cannot persist final classes.


Next, let us look at a more powerful callback class, the InvocationHandler, that can also be used with an Enhancer:

@Test
public void testInvocationHandler() throws Exception {
  Enhancer enhancer = new Enhancer();
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallback(new InvocationHandler() {
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) 
        throws Throwable {
      if(method.getDeclaringClass() != Object.class && method.getReturnType() == String.class) {
        return "Hello cglib!";
      } else {
        throw new RuntimeException("Do not know what to do.");
      }
    }
  });
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
  assertNotEquals("Hello cglib!", proxy.toString());
}


This callback allows us to answer depending on the invoked method. However, you should be careful when calling a method on the proxy object that is passed to the InvocationHandler#invoke method. All calls on this object will be dispatched with the same InvocationHandler and might therefore result in an endless loop. In order to avoid this, we can use yet another callback dispatcher:

@Test
public void testMethodInterceptor() throws Exception {
  Enhancer enhancer = new Enhancer();
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallback(new MethodInterceptor() {
    @Override
    public Object intercept(Object obj, Method method, Object[] args, MethodProxy proxy)
        throws Throwable {
      if(method.getDeclaringClass() != Object.class && method.getReturnType() == String.class) {
        return "Hello cglib!";
      } else {
        return proxy.invokeSuper(obj, args);
      }
    }
  });
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
  assertNotEquals("Hello cglib!", proxy.toString());
  proxy.hashCode(); // Does not throw an exception or result in an endless loop.
}


The MethodInterceptor allows full control over the intercepted method and offers some utilities for calling the method of the enhanced class in their original state. But why would one want to use other methods anyways? Because the other methods are more efficient and cglib is often used in edge case frameworks where efficiency plays a significant role. The creation and linkage of the MethodInterceptor requires for example the generation of a different type of byte code and the creation of some runtime objects that are not required with the InvocationHandler. Because of that, there are other classes that can be used with the Enhancer:
  • LazyLoader: Even though the LazyLoader's only method has the same signature as FixedValue's, the LazyLoader is fundamentally different from the FixedValue interceptor. The LazyLoader is actually supposed to return an instance of a subclass of the enhanced class. This instance is requested only when a method is called on the enhanced object and is then stored for future invocations of the generated proxy. This makes sense if your object is expensive to create and you do not know whether it will ever be used. Be aware that some constructor of the enhanced class must be called both for the proxy object and for the lazily loaded object. Thus, make sure that there is another cheap (maybe protected) constructor available or use an interface type for the proxy. You can choose the invoked constructor by supplying arguments to Enhancer#create(Object...).
  • Dispatcher: The Dispatcher is like the LazyLoader but will be invoked on every method call without storing the loaded object. This allows changing the implementation of a class without changing the reference to it. Again, be aware that some constructor must be called for both the proxy and the generated objects.
  • ProxyRefDispatcher: This class carries a reference to the proxy object it is invoked from in its signature. This allows, for example, delegating method calls to another method of this proxy. Be aware that this can easily cause an endless loop and will always cause an endless loop if the same method is called from within ProxyRefDispatcher#loadObject(Object).
  • NoOp: The NoOp class does not do what its name suggests. Instead, it delegates each method call to the enhanced class's method implementation.

At this point, the last two interceptors might not make sense to you: why would you even want to enhance a class when you always delegate method calls to the enhanced class anyway? And you are right: these interceptors should only be used together with a CallbackFilter, as demonstrated in the following code snippet:

@Test
public void testCallbackFilter() throws Exception {
  Enhancer enhancer = new Enhancer();
  CallbackHelper callbackHelper = new CallbackHelper(SampleClass.class, new Class[0]) {
    @Override
    protected Object getCallback(Method method) {
      if(method.getDeclaringClass() != Object.class && method.getReturnType() == String.class) {
        return new FixedValue() {
          @Override
          public Object loadObject() throws Exception {
            return "Hello cglib!";
          }
        };
      } else {
        return NoOp.INSTANCE; // A singleton provided by NoOp.
      }
    }
  };
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallbackFilter(callbackHelper);
  enhancer.setCallbacks(callbackHelper.getCallbacks());
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
  assertNotEquals("Hello cglib!", proxy.toString());
  proxy.hashCode(); // Does not throw an exception or result in an endless loop.
}

The Enhancer instance accepts a CallbackFilter in its Enhancer#setCallbackFilter(CallbackFilter) method where it expects methods of the enhanced class to be mapped to array indices of an array of Callback instances. When a method is invoked on the created proxy, the Enhancer will then choose the according interceptor and dispatch the called method on the corresponding Callback (which is a marker interface for all the interceptors that were introduced so far). To make this API less awkward, cglib offers a CallbackHelper which will represent a CallbackFilter and which can create an array of Callbacks for you. The enhanced object above will be functionally equivalent to the one in the example for the MethodInterceptor but it allows you to write specialized interceptors whilst keeping the dispatching logic to these interceptors separate.

 

How does it work?


When the Enhancer creates a class, it will create a private static field for each interceptor that was registered as a Callback and set those fields after the class's creation. This also means that class definitions created with cglib cannot be reused, since the registration of callbacks does not become part of the generated class's initialization phase but is performed manually by cglib after the class has already been initialized by the JVM. This also means that classes created with cglib are not technically ready after their initialization and, for example, cannot be sent over the wire since the callbacks would not exist for the class loaded on the target machine.

Depending on the registered interceptors, cglib might register additional fields, such as for the MethodInterceptor, where two private static fields (one holding a reflective Method and the other holding a MethodProxy) are registered per intercepted method in the enhanced class or any of its subclasses. Be aware that the MethodProxy makes excessive use of the FastClass, which triggers the creation of additional classes and is described in further detail below.

For all these reasons, be careful when using the Enhancer. And always register callback types defensively, since the MethodInterceptor will, for example, trigger the creation of additional classes and register additional fields in the enhanced class. This is specifically dangerous since the callback variables are also stored in the Enhancer's cache: this implies that the callback instances are not garbage collected (unless all their instances and the Enhancer's ClassLoader are, which is unusual). This is particularly dangerous when using anonymous classes, which silently carry a reference to their outer class. Recall the example above:


@Test
public void testFixedValue() throws Exception {
  Enhancer enhancer = new Enhancer();
  enhancer.setSuperclass(SampleClass.class);
  enhancer.setCallback(new FixedValue() {
    @Override
    public Object loadObject() throws Exception {
      return "Hello cglib!";
    }
  });
  SampleClass proxy = (SampleClass) enhancer.create();
  assertEquals("Hello cglib!", proxy.test(null));
}

The anonymous subclass of FixedValue would become strongly referenced from the enhanced SampleClass such that neither the anonymous FixedValue instance nor the class holding the @Test method would ever be garbage collected. This can introduce nasty memory leaks into your applications. Therefore, do not use non-static inner classes with cglib. (I only use them in this blog entry to keep the examples short.)

Finally, you should never intercept Object#finalize(). Due to the subclassing approach of cglib, intercepting finalize is implemented by overriding it, which is in general a bad idea. Enhanced instances that intercept finalize will be treated differently by the garbage collector and will also cause these objects to be queued in the JVM's finalization queue. Also, if you (accidentally) create a hard reference to the enhanced class in your intercepted call to finalize, you have effectively created a noncollectable instance. This is in general nothing you want. Note that final methods are never intercepted by cglib. Thus, Object#wait, Object#notify and Object#notifyAll do not impose the same problems. Be however aware that Object#clone can be intercepted, which is something you might not want to do.

Immutable bean

cglib's ImmutableBean allows you to create an immutability wrapper similar to, for example, Collections#unmodifiableSet. All changes to the underlying bean will be prevented by an IllegalStateException (note, however, that this is not the UnsupportedOperationException recommended by the Java API documentation). Looking at some bean

public class SampleBean {
  private String value;
  public String getValue() {
    return value;
  }
  public void setValue(String value) {
    this.value = value;
  }
}

we can make this bean immutable:

@Test(expected = IllegalStateException.class)
public void testImmutableBean() throws Exception {
  SampleBean bean = new SampleBean();
  bean.setValue("Hello world!");
  SampleBean immutableBean = (SampleBean) ImmutableBean.create(bean);
  assertEquals("Hello world!", immutableBean.getValue());
  bean.setValue("Hello world, again!");
  assertEquals("Hello world, again!", immutableBean.getValue());
  immutableBean.setValue("Hello cglib!"); // Causes exception.
}

As the example demonstrates, the immutable bean prevents all state changes by throwing an IllegalStateException. However, the state of the bean can still be changed by modifying the original object. All such changes are reflected by the ImmutableBean.

Bean generator

The BeanGenerator is another bean utility of cglib. It will create a bean for you at run time:

@Test
public void testBeanGenerator() throws Exception {
  BeanGenerator beanGenerator = new BeanGenerator();
  beanGenerator.addProperty("value", String.class);
  Object myBean = beanGenerator.create();
  
  Method setter = myBean.getClass().getMethod("setValue", String.class);
  setter.invoke(myBean, "Hello cglib!");
  Method getter = myBean.getClass().getMethod("getValue");
  assertEquals("Hello cglib!", getter.invoke(myBean));
}

As the example demonstrates, the BeanGenerator first takes some properties as name-value pairs. On creation, the BeanGenerator creates the accessors
  • <type> get<name>()
  • void set<name>(<type>)
for you. This might be useful when another library expects beans which it resolves by reflection but whose shape you do not know before run time. (An example would be Apache Wicket, which works a lot with beans.)

Bean copier

The BeanCopier is another bean utility that copies beans by their property values. Consider another bean with similar properties as SampleBean:

public class OtherSampleBean {
  private String value;
  public String getValue() {
    return value;
  }
  public void setValue(String value) {
    this.value = value;
  }
}

Now you can copy properties from one bean to another:

@Test
public void testBeanCopier() throws Exception {
  BeanCopier copier = BeanCopier.create(SampleBean.class, OtherSampleBean.class, false);
  SampleBean bean = new SampleBean();
  bean.setValue("Hello cglib!");
  OtherSampleBean otherBean = new OtherSampleBean();
  copier.copy(bean, otherBean, null);
  assertEquals("Hello cglib!", otherBean.getValue());  
}

without being restrained to a specific type. The BeanCopier#copy method takes a Converter which allows further manipulation of each bean property. If the BeanCopier is created with false as the third constructor argument, the Converter is ignored and can therefore be null.
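Without byte code generation, the same kind of copy can be sketched with plain reflection via the JavaBeans Introspector. This is slower than what BeanCopier generates, but it illustrates the effect; the class and helper names below (ReflectiveBeanCopier, copyProperties, SourceBean, TargetBean) are made up for this sketch:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.lang.reflect.Method;

public class ReflectiveBeanCopier {

  // Copies every property that is readable on the source and writable on
  // the target with an assignable type; checked exceptions are wrapped.
  public static void copyProperties(Object source, Object target) {
    try {
      PropertyDescriptor[] sourceProperties =
          Introspector.getBeanInfo(source.getClass(), Object.class).getPropertyDescriptors();
      PropertyDescriptor[] targetProperties =
          Introspector.getBeanInfo(target.getClass(), Object.class).getPropertyDescriptors();
      for (PropertyDescriptor read : sourceProperties) {
        Method getter = read.getReadMethod();
        if (getter == null) continue;
        for (PropertyDescriptor write : targetProperties) {
          Method setter = write.getWriteMethod();
          if (setter != null && write.getName().equals(read.getName())
              && setter.getParameterTypes()[0].isAssignableFrom(getter.getReturnType())) {
            setter.invoke(target, getter.invoke(source));
          }
        }
      }
    } catch (Exception e) {
      throw new RuntimeException("Bean copy failed", e);
    }
  }

  public static class SourceBean {
    private String value;
    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
  }

  public static class TargetBean {
    private String value;
    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
  }

  public static void main(String[] args) {
    SourceBean source = new SourceBean();
    source.setValue("Hello cglib!");
    TargetBean target = new TargetBean();
    copyProperties(source, target);
    System.out.println(target.getValue()); // prints "Hello cglib!"
  }
}
```

BeanCopier avoids the per-call reflection cost of this sketch by generating the getter/setter calls directly as byte code.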

Bulk bean

A BulkBean allows access to a specified set of a bean's accessors via arrays instead of individual method calls:

@Test
public void testBulkBean() throws Exception {
  BulkBean bulkBean = BulkBean.create(SampleBean.class, 
      new String[]{"getValue"}, 
      new String[]{"setValue"}, 
      new Class[]{String.class});
  SampleBean bean = new SampleBean();
  bean.setValue("Hello world!");
  assertEquals(1, bulkBean.getPropertyValues(bean).length);
  assertEquals("Hello world!", bulkBean.getPropertyValues(bean)[0]);
  bulkBean.setPropertyValues(bean, new Object[] {"Hello cglib!"});
  assertEquals("Hello cglib!", bean.getValue());
}

The BulkBean takes an array of getter names, an array of setter names and an array of property types as its constructor arguments. The property values of an instrumented bean can then be extracted as an array by BulkBean#getPropertyValues(Object). Similarly, a bean's properties can be set by BulkBean#setPropertyValues(Object, Object[]).

Bean map

This is the last bean utility within the cglib library. The BeanMap converts all properties of a bean to a String-to-Object Java Map:

@Test
public void testBeanMap() throws Exception {
  SampleBean bean = new SampleBean();
  BeanMap map = BeanMap.create(bean);
  bean.setValue("Hello cglib!");
  assertEquals("Hello cglib!", map.get("value"));
}

Additionally, the BeanMap#newInstance(Object) method allows to create maps for other beans by reusing the same Class.
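A snapshot variant of this idea can be written with plain reflection. Note one important difference: unlike cglib's BeanMap, which is a live view, the sketch below copies the property values once. The class and method names (ReflectiveBeanMap, toMap, SampleBean) are made up for this illustration:

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

public class ReflectiveBeanMap {

  // Builds a String-to-Object map of all readable bean properties.
  public static Map<String, Object> toMap(Object bean) {
    Map<String, Object> map = new HashMap<String, Object>();
    try {
      for (PropertyDescriptor property :
          Introspector.getBeanInfo(bean.getClass(), Object.class).getPropertyDescriptors()) {
        if (property.getReadMethod() != null) {
          map.put(property.getName(), property.getReadMethod().invoke(bean));
        }
      }
    } catch (Exception e) {
      throw new RuntimeException("Bean introspection failed", e);
    }
    return map;
  }

  public static class SampleBean {
    private String value;
    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
  }

  public static void main(String[] args) {
    SampleBean bean = new SampleBean();
    bean.setValue("Hello cglib!");
    System.out.println(toMap(bean).get("value")); // prints "Hello cglib!"
  }
}
```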

Key factory 

The KeyFactory allows the dynamic creation of keys that are composed of multiple values and that can be used in, for example, Map implementations. To do so, the KeyFactory requires an interface that defines the values that should be used in such a key. This interface must contain a single method named newInstance that returns an Object. For example:

public interface SampleKeyFactory {
  Object newInstance(String first, int second);
}

Now an instance of a key can be created by:

@Test
public void testKeyFactory() throws Exception {
  SampleKeyFactory keyFactory = (SampleKeyFactory) KeyFactory.create(SampleKeyFactory.class);
  Object key = keyFactory.newInstance("foo", 42);
  Map<Object, String> map = new HashMap<Object, String>();
  map.put(key, "Hello cglib!");
  assertEquals("Hello cglib!", map.get(keyFactory.newInstance("foo", 42)));
}

The KeyFactory will assure the correct implementation of the Object#equals(Object) and Object#hashCode methods such that the resulting key objects can be used in a Map or a Set. The KeyFactory is also used quite a lot internally in the cglib library.
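The equals/hashCode contract that KeyFactory generates can be written by hand for a fixed set of values. The sketch below (class names CompositeKeyDemo and SampleKey are made up) shows what the generated key effectively does:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

public class CompositeKeyDemo {

  // A hand-written composite key; KeyFactory generates the equivalent
  // equals/hashCode implementations for you at run time.
  public static final class SampleKey {
    private final String first;
    private final int second;

    public SampleKey(String first, int second) {
      this.first = first;
      this.second = second;
    }

    @Override
    public boolean equals(Object other) {
      if (!(other instanceof SampleKey)) return false;
      SampleKey that = (SampleKey) other;
      return first.equals(that.first) && second == that.second;
    }

    @Override
    public int hashCode() {
      // Combine both values, analogous to what a generated key must do.
      return Arrays.hashCode(new Object[]{first, second});
    }
  }

  public static void main(String[] args) {
    Map<Object, String> map = new HashMap<Object, String>();
    map.put(new SampleKey("foo", 42), "Hello cglib!");
    System.out.println(map.get(new SampleKey("foo", 42))); // prints "Hello cglib!"
  }
}
```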

Mixin

Some might already know the concept of a mixin from other programming languages such as Ruby or Scala (where mixins are called traits). cglib's Mixin allows the combination of several objects into a single object. However, in order to do so, those objects must be backed by interfaces:

public interface Interface1 {
  String first();
}

public interface Interface2 {
  String second();
}

public class Class1 implements Interface1 {
  @Override 
  public String first() {
    return "first";
  }
}

public class Class2 implements Interface2 {
  @Override 
  public String second() {
    return "second";
  }
}

Now the classes Class1 and Class2 can be combined into a single object by an additional interface:

public interface MixinInterface extends Interface1, Interface2 { /* empty */ }

@Test
public void testMixin() throws Exception {
  Mixin mixin = Mixin.create(new Class[]{Interface1.class, Interface2.class, 
      MixinInterface.class}, new Object[]{new Class1(), new Class2()});
  MixinInterface mixinDelegate = (MixinInterface) mixin;
  assertEquals("first", mixinDelegate.first());
  assertEquals("second", mixinDelegate.second());
}

Admittedly, the Mixin API is rather awkward since it requires the classes used for a mixin to implement some interface such that the problem could also be solved by non-instrumented Java.
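As noted above, the same result is achievable without instrumentation. A plain-Java sketch of such a hand-written mixin by delegation might look as follows (the wrapper class names MixinByDelegation and Mixed are made up):

```java
public class MixinByDelegation {

  public interface Interface1 { String first(); }
  public interface Interface2 { String second(); }

  public static class Class1 implements Interface1 {
    @Override public String first() { return "first"; }
  }

  public static class Class2 implements Interface2 {
    @Override public String second() { return "second"; }
  }

  // A hand-written "mixin" that simply delegates each interface's
  // methods to the corresponding backing object.
  public static class Mixed implements Interface1, Interface2 {
    private final Interface1 delegate1;
    private final Interface2 delegate2;

    public Mixed(Interface1 delegate1, Interface2 delegate2) {
      this.delegate1 = delegate1;
      this.delegate2 = delegate2;
    }

    @Override public String first() { return delegate1.first(); }
    @Override public String second() { return delegate2.second(); }
  }

  public static void main(String[] args) {
    Mixed mixin = new Mixed(new Class1(), new Class2());
    System.out.println(mixin.first() + "/" + mixin.second()); // prints "first/second"
  }
}
```

The difference is purely mechanical: cglib generates the delegation methods as byte code, while here they are written out by hand.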

String switcher

The StringSwitcher emulates a String-to-int Java Map:

@Test
public void testStringSwitcher() throws Exception {
  String[] strings = new String[]{"one", "two"};
  int[] values = new int[]{10, 20};
  StringSwitcher stringSwitcher = StringSwitcher.create(strings, values, true);
  assertEquals(10, stringSwitcher.intValue("one"));
  assertEquals(20, stringSwitcher.intValue("two"));
  assertEquals(-1, stringSwitcher.intValue("three"));
}

The StringSwitcher allows emulating a switch command on Strings, just as the built-in Java switch statement allows since Java 7. Whether using the StringSwitcher in Java 6 or earlier really adds a benefit to your code remains however doubtful, and I would personally not recommend its use.
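For comparison, the same mapping written with a Java 7 switch statement on a String (the class name StringSwitchDemo is made up for this sketch):

```java
public class StringSwitchDemo {

  // Equivalent of the StringSwitcher mapping above, using a plain
  // Java 7 switch on a String; unknown keys map to -1.
  public static int intValue(String key) {
    switch (key) {
      case "one": return 10;
      case "two": return 20;
      default: return -1;
    }
  }

  public static void main(String[] args) {
    System.out.println(intValue("one"));   // prints 10
    System.out.println(intValue("three")); // prints -1
  }
}
```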

Interface maker

The InterfaceMaker does what its name suggests: It dynamically creates a new interface.

@Test
public void testInterfaceMaker() throws Exception {
  Signature signature = new Signature("foo", Type.DOUBLE_TYPE, new Type[]{Type.INT_TYPE});
  InterfaceMaker interfaceMaker = new InterfaceMaker();
  interfaceMaker.add(signature, new Type[0]);
  Class iface = interfaceMaker.create();
  assertEquals(1, iface.getMethods().length);
  assertEquals("foo", iface.getMethods()[0].getName());
  assertEquals(double.class, iface.getMethods()[0].getReturnType());
}

Unlike the rest of cglib's public API, the InterfaceMaker relies on ASM types. Creating an interface in a running application will hardly make sense, since an interface only represents a type which can be used by a compiler to check types. It can however make sense when you are generating code that is to be used in later development.

Method delegate

A MethodDelegate allows to emulate a C#-like delegate to a specific method by binding a method call to some interface. For example, the following code would bind the SampleBean#getValue method to a delegate:

public interface BeanDelegate {
  String getValueFromDelegate();
}

@Test
public void testMethodDelegate() throws Exception {
  SampleBean bean = new SampleBean();
  bean.setValue("Hello cglib!");
  BeanDelegate delegate = (BeanDelegate) MethodDelegate.create(
      bean, "getValue", BeanDelegate.class);
  assertEquals("Hello cglib!", delegate.getValueFromDelegate());
}

There are however some things to note:
  • The factory method MethodDelegate#create takes exactly one method name as its second argument. This is the method the MethodDelegate will proxy for you.
  • There must be a method without arguments defined for the object which is given to the factory method as its first argument. Thus, the MethodDelegate is not as strong as it could be.
  • The third argument must be an interface with exactly one method. The MethodDelegate implements this interface and can be cast to it. When the method is invoked, it will call the proxied method on the object that is the first argument.
Furthermore, consider these drawbacks:
  • cglib creates a new class for each proxy. Eventually, this will litter up your permanent generation heap space.
  • You cannot proxy methods that take arguments.
  • If your interface's method takes arguments, the method delegation will simply not work without an exception being thrown (the return value will always be null). If your interface requires another return type (even if that is more general), you will get an IllegalArgumentException.
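The core idea of binding a no-argument method of a specific object can also be sketched with plain reflection. The names below (ReflectiveMethodDelegate, Delegate, call) are made up for this illustration:

```java
import java.lang.reflect.Method;

public class ReflectiveMethodDelegate {

  public static class SampleBean {
    private String value;
    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
  }

  // Binds a no-argument method of the given target object, roughly what
  // MethodDelegate generates (here via reflection instead of byte code).
  public static final class Delegate {
    private final Object target;
    private final Method method;

    public Delegate(Object target, String methodName) {
      this.target = target;
      try {
        this.method = target.getClass().getMethod(methodName);
      } catch (NoSuchMethodException e) {
        throw new RuntimeException("No such method: " + methodName, e);
      }
    }

    public Object call() {
      try {
        return method.invoke(target);
      } catch (Exception e) {
        throw new RuntimeException("Delegate invocation failed", e);
      }
    }
  }

  public static void main(String[] args) {
    SampleBean bean = new SampleBean();
    bean.setValue("Hello cglib!");
    Delegate delegate = new Delegate(bean, "getValue");
    System.out.println(delegate.call()); // prints "Hello cglib!"
  }
}
```

Unlike MethodDelegate, this sketch does not create any new classes, so it does not consume perm space per delegate.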


Multicast delegate

The MulticastDelegate works a little different than the MethodDelegate even though it aims at similar functionality. For using the MulticastDelegate, we require an object that implements an interface:

public interface DelegationProvider {
  void setValue(String value);
}

public class SimpleMulticastBean implements DelegationProvider {
  private String value;
  public String getValue() {
    return value;
  }
  public void setValue(String value) {
    this.value = value;
  }
}

Based on this interface-backed bean, we can create a MulticastDelegate that dispatches all calls to setValue(String) to several classes that implement the DelegationProvider interface:

@Test
public void testMulticastDelegate() throws Exception {
  MulticastDelegate multicastDelegate = MulticastDelegate.create(
      DelegationProvider.class);
  SimpleMulticastBean first = new SimpleMulticastBean();
  SimpleMulticastBean second = new SimpleMulticastBean();
  multicastDelegate = multicastDelegate.add(first);
  multicastDelegate = multicastDelegate.add(second);

  DelegationProvider provider = (DelegationProvider) multicastDelegate;
  provider.setValue("Hello world!");

  assertEquals("Hello world!", first.getValue());
  assertEquals("Hello world!", second.getValue());
}

Again, there are some drawbacks:
  • The objects need to implement a single-method interface. This sucks for third-party libraries and is awkward when you use cglib to do some magic where this magic gets exposed to the normal code. Also, you could easily implement your own delegate (without byte code, though I doubt that you win much over manual delegation).
  • When your delegates return a value, you will receive only that of the last delegate you added. All other return values are lost (even though they are at some point received by the multicast delegate).
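As hinted at in the first drawback, a manual multicast is easy to write. A plain-Java sketch (class names MulticastByHand and Multicaster are made up) could look like this:

```java
import java.util.ArrayList;
import java.util.List;

public class MulticastByHand {

  public interface DelegationProvider {
    void setValue(String value);
  }

  public static class SimpleMulticastBean implements DelegationProvider {
    private String value;
    public String getValue() { return value; }
    @Override public void setValue(String value) { this.value = value; }
  }

  // A hand-written multicast: every call is dispatched to all delegates.
  public static class Multicaster implements DelegationProvider {
    private final List<DelegationProvider> delegates = new ArrayList<DelegationProvider>();

    public void add(DelegationProvider delegate) {
      delegates.add(delegate);
    }

    @Override
    public void setValue(String value) {
      for (DelegationProvider delegate : delegates) {
        delegate.setValue(value);
      }
    }
  }

  public static void main(String[] args) {
    SimpleMulticastBean first = new SimpleMulticastBean();
    SimpleMulticastBean second = new SimpleMulticastBean();
    Multicaster multicaster = new Multicaster();
    multicaster.add(first);
    multicaster.add(second);
    multicaster.setValue("Hello world!");
    System.out.println(first.getValue() + "/" + second.getValue()); // prints "Hello world!/Hello world!"
  }
}
```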


Constructor delegate

A ConstructorDelegate allows you to create a byte-instrumented factory method. For that, we first require an interface with a single method newInstance which returns an Object and takes any number of parameters to be used for a constructor call of the specified class. For example, in order to create a ConstructorDelegate for the SampleBean, we require the following interface to call SampleBean's default (no-argument) constructor:

public interface SampleBeanConstructorDelegate {
  Object newInstance();
}

@Test
public void testConstructorDelegate() throws Exception {
  SampleBeanConstructorDelegate constructorDelegate = (SampleBeanConstructorDelegate) ConstructorDelegate.create(
    SampleBean.class, SampleBeanConstructorDelegate.class);
  SampleBean bean = (SampleBean) constructorDelegate.newInstance();
  assertTrue(SampleBean.class.isAssignableFrom(bean.getClass()));
}


Parallel sorter

The ParallelSorter claims to be a faster alternative to the Java standard library's array sorters when sorting arrays of arrays:

@Test
public void testParallelSorter() throws Exception {
  Integer[][] value = {
    {4, 3, 9, 0},
    {2, 1, 6, 0}
  };
  ParallelSorter.create(value).mergeSort(0);
  for(Integer[] row : value) {
    int former = -1;
    for(int val : row) {
      assertTrue(former < val);
      former = val;
    }
  }
}

The ParallelSorter takes an array of arrays and allows applying either a merge sort or a quick sort where one row serves as the sort key and all other rows are rearranged by the same permutation. Be however careful when you use it:
  • When using arrays of primitives, you have to call merge sort with explicit sorting ranges (e.g. ParallelSorter.create(value).mergeSort(0, 0, 3) in the example). Otherwise, the ParallelSorter has a pretty obvious bug where it tries to cast the primitive array to an Object[] array, which will cause a ClassCastException.
  • If the array rows are of unequal length, the first argument determines the length of the rows to consider. Unequal rows will either lead to extra values not being considered for sorting or to an ArrayIndexOutOfBoundsException.
Personally, I doubt that the ParallelSorter really offers a time advantage. Admittedly, I have however not yet tried to benchmark it. If you tried it, I'd be happy to hear about it in the comments.
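The effect described above can be reproduced in plain Java by sorting an index permutation on the key row and applying it to every row. This sketch (class and method names ParallelSortByHand and sortByRow are made up) shows the idea:

```java
import java.util.Arrays;
import java.util.Comparator;

public class ParallelSortByHand {

  // Sorts the row at keyIndex and applies the resulting permutation to
  // every row, keeping all rows aligned with the key row.
  public static void sortByRow(int[][] rows, int keyIndex) {
    int length = rows[keyIndex].length;
    Integer[] order = new Integer[length];
    for (int i = 0; i < length; i++) {
      order[i] = i;
    }
    final int[] key = rows[keyIndex].clone();
    // Sort the indices by the values of the key row.
    Arrays.sort(order, new Comparator<Integer>() {
      @Override
      public int compare(Integer a, Integer b) {
        return Integer.compare(key[a], key[b]);
      }
    });
    // Apply the same permutation to every row.
    for (int[] row : rows) {
      int[] copy = row.clone();
      for (int i = 0; i < length; i++) {
        row[i] = copy[order[i]];
      }
    }
  }

  public static void main(String[] args) {
    int[][] value = {{4, 3, 9, 0}, {2, 1, 6, 0}};
    sortByRow(value, 0);
    System.out.println(Arrays.deepToString(value)); // prints [[0, 3, 4, 9], [0, 1, 2, 6]]
  }
}
```

Note that in the earlier Integer[][] example the second row ends up sorted as well, but only coincidentally: it is rearranged by the first row's permutation.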


Fast class and fast members

The FastClass promises a faster invocation of methods than the Java reflection API by wrapping a Java class and offering similar methods to the reflection API:

@Test
public void testFastClass() throws Exception {
  FastClass fastClass = FastClass.create(SampleBean.class);
  FastMethod fastMethod = fastClass.getMethod(SampleBean.class.getMethod("getValue"));
  SampleBean bean = new SampleBean();
  bean.setValue("Hello cglib!");
  assertEquals("Hello cglib!", fastMethod.invoke(bean, new Object[0]));
}

Besides the demonstrated FastMethod, the FastClass can also create FastConstructors but no fast fields. But how can the FastClass be faster than normal reflection? Java reflection is executed via JNI, where method invocations are executed by some C code. The FastClass, on the other hand, creates byte code that calls the method directly from within the JVM. However, newer versions of the HotSpot JVM (and probably many other modern JVMs) know a concept called inflation, where the JVM will translate reflective method calls into generated byte code, similar to what FastClass does, when a reflective method is executed often enough. You can even control this behavior (at least on a HotSpot JVM) by setting the sun.reflect.inflationThreshold property to a lower value. (The default is 15.) This property determines after how many reflective invocations a JNI call should be substituted by a byte code instrumented version. I would therefore recommend not using FastClass on modern JVMs; it can however fine-tune performance on older Java virtual machines.

cglib proxy

The cglib Proxy is a reimplementation of the Java Proxy class mentioned at the beginning of this article. It is intended to allow using proxies in Java versions before Java 1.3 and differs only in minor details. Better documentation can be found in the Java standard library's Proxy javadoc, where an example of its use is provided. For this reason, I will skip a more detailed discussion of cglib's Proxy here.

A final word of warning

After this overview of cglib's functionality, I want to speak a final word of warning. All cglib classes generate byte code, which results in additional classes being stored in a special section of the JVM's memory: the so-called perm space. This permanent space is, as the name suggests, used for permanent objects that do not usually get garbage collected. This is however not completely true: once a Class is loaded, it cannot be unloaded until the loading ClassLoader becomes available for garbage collection. This is only the case if the Class was loaded with a custom ClassLoader which is not a native JVM system ClassLoader. Such a ClassLoader can be garbage collected if itself, all Classes it ever loaded and all instances of those Classes become available for garbage collection. This means: if you create more and more classes throughout the life of a Java application and if you do not take care of the removal of these classes, you will sooner or later run out of perm space, which will result in your application's death by the hands of an OutOfMemoryError. Therefore, use cglib sparingly. However, if you use cglib wisely and carefully, you can really do amazing things with it that go beyond what you can do with non-instrumented Java applications.

Lastly, when creating projects that depend on cglib, you should be aware that the cglib project is not as well maintained and active as it should be, considering its popularity. The missing documentation is a first hint, the often messy public API a second. But then there are also broken deploys of cglib to Maven central. The mailing list reads like an archive of spam messages. And the release cycles are rather unstable. You might therefore want to have a look at javassist, the only real low-level alternative to cglib. Javassist comes bundled with a pseudo-Java compiler which allows you to create quite amazing byte code instrumentations without even understanding Java byte code. If you like to get your hands dirty, you might also like ASM, on top of which cglib is built. ASM comes with great documentation of both the library and Java class files and their byte code.

Note that these examples only run with cglib 2.2.2 and are not compatible with the newest release 3 of cglib. Unfortunately, I experienced the newest cglib version to occasionally produce invalid byte code, which is why I settled on an older version and also use this version in production. Also, note that most projects using cglib move the library to their own namespace in order to avoid version conflicts with other dependencies, as for example demonstrated by the Spring project. You should do the same with your project when making use of cglib. Tools such as jarjar can help you with the automation of this good practice.
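As a sketch of what such an automated repackaging looks like, a jarjar rules file along these lines would move cglib into your own namespace (the target package com.example is of course a placeholder):

rule net.sf.cglib.** com.example.repackaged.cglib.@1

The @1 back-reference substitutes whatever the ** wildcard matched, so for example net.sf.cglib.proxy.Enhancer would end up as com.example.repackaged.cglib.proxy.Enhancer in your artifact.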

Montag, 28. Oktober 2013

Java class loading anomaly

I learned about a (for me) initially rather unintuitive anomaly in the Java language today. Of course, this is not technically an anomaly but something well-defined in the JVMS. However, I was not aware of the class loading behavior described in this blog entry, despite having read the specification, which is why I decided this was worth sharing.

I stumbled onto this when I was curious about the reasons why it is not allowed to use static fields referencing an enum as annotation values while it is allowed for other constant values. It turns out that the Java compiler is not allowed to substitute enum fields at compile time, while it can substitute such values for all other possible annotation members. But what does this mean in practice?

Let's look at this example class:

@MyAnnotation(HelloWorldHelper.VAL1)
class MyClass {
  public static void main(String[] args) {
    System.out.println(MyClass.class.getAnnotation(MyAnnotation.class).value());
    System.out.println(HelloWorldHelper.VAL2);
    System.out.println(HelloWorldHelper.class.getName());
    System.out.println(HelloWorldHelper.VAL3);
  }
}

with the following helper classes:

enum MyEnum {
  HELLO_WORLD_ENUM
}

@Retention(RetentionPolicy.RUNTIME)
@interface MyAnnotation {
  String value();
}

class HelloWorldHelper {
  public static final String VAL1 = "Hello world!";
  public static final String VAL2 = "Hello world again!";
  public static final MyEnum VAL3 = MyEnum.HELLO_WORLD_ENUM;
  static { System.out.println("Initialized class: HelloWorldHelper"); }
}

the output (at first unexpectedly to me) reads as:

Hello world!
Hello world again!
HelloWorldHelper
Initialized class: HelloWorldHelper
HELLO_WORLD_ENUM

But why is this so? The Java compiler substitutes constant references to String values (this is also true for primitives) with a direct entry of the referenced String's value in the referencing class's constant pool. This also means that you could not load another class HelloWorldHelper at runtime and expect those values to be adjusted in MyClass. This adjustment would only happen for the MyEnum value, which is as a matter of fact resolved at runtime (and therefore causes the HelloWorldHelper class to be loaded and initialized, which can be observed by the execution of the static block). The motive for not allowing this substitution for enums but allowing it for Strings might well be (of course, I can only guess) that the Java language specification treats strings differently than other object types such as the primitive wrapper types. Usually, copying an object reference would break Java's contract of object identity. Strings on the other hand will still be identical even after they were technically duplicated, due to Java's concept of pooling load-time strings. As mentioned before, primitives can also be copied into the referencing class since primitive types are implemented as value types in Java which do not know a concept of identity. The HelloWorldHelper class would however be loaded when, for example, referencing a non-primitive Integer boxing type.

Interestingly enough, HelloWorldHelper.class.getName() does not require the HelloWorldHelper class to be initialized. When looking at the generated byte code, one can observe that the HelloWorldHelper class is actually referenced this time and will as a matter of fact be loaded into the JVM. However, JVMS §5.5 does not specify such a reflective access as a reason to initialize the class, which is why the above output appears the way it does.
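This distinction between loading and initializing a class can also be reproduced directly with Class.forName, which accepts an explicit initialize flag. The following self-contained sketch (all class and field names are made up for illustration) loads a nested class without initializing it, queries its name reflectively, and only then triggers initialization:

```java
public class InitDemo {

  // Records whether the nested class's static initializer has run.
  public static final StringBuilder LOG = new StringBuilder();

  static class Helper {
    static { LOG.append("init;"); }
  }

  public static void main(String[] args) throws Exception {
    // Load the class without initializing it: the static block does not run.
    Class<?> helper = Class.forName("InitDemo$Helper", false, InitDemo.class.getClassLoader());
    System.out.println("after loading: '" + LOG + "'");        // still empty
    System.out.println("name: " + helper.getName());           // reflective access, no initialization
    System.out.println("after getName: '" + LOG + "'");        // still empty
    // Requesting initialization explicitly triggers the static block.
    Class.forName("InitDemo$Helper", true, InitDemo.class.getClassLoader());
    System.out.println("after initialization: '" + LOG + "'"); // now "init;"
  }
}
```

Running the main method shows that LOG stays empty until initialization is requested explicitly.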

Donnerstag, 18. Juli 2013

Extending Guava caches to overflow to disk

Caching allows you to significantly speed up applications with only little effort. Two great cache implementations for the Java platform are the Guava caches and Ehcache. While Ehcache is much richer in features (such as its Searchable API, the possibility of persisting caches to disk or overflowing to big memory), it also comes with quite an overhead compared to Guava. In a recent project, I found a need to overflow a comprehensive cache to disk but at the same time, I regularly needed to invalidate particular values of this cache. Because Ehcache's Searchable API is only accessible for in-memory caches, this put me in quite a dilemma. However, it was quite easy to extend a Guava cache to allow overflowing to disk in a structured manner. This gave me both the overflow-to-disk behavior and the required invalidation feature. In this article, I want to show how this can be achieved.

I will implement this file persisting cache FilePersistingCache in form of a wrapper to an actual Guava Cache instance. This is of course not the most elegant solution (more elegant would be to implement an actual Guava Cache with this behavior), but it will do for most cases.

To begin with, I will define a protected method that creates the backing cache I mentioned before:

private LoadingCache<K, V> makeCache() {
  return customCacheBuild(CacheBuilder.newBuilder())
    .removalListener(new PersistingRemovalListener())
    .build(new PersistedStateCacheLoader());
}

protected CacheBuilder<Object, Object> customCacheBuild(CacheBuilder<Object, Object> cacheBuilder) {
  return cacheBuilder;
}

The first method will be used internally to build the required cache. The second method is supposed to be overridden in order to implement any custom requirement for the cache, as for example an expiration strategy. This could for example be a maximum number of entries or soft references. This cache will be used just as any other Guava cache. The key to the cache's functionality are the RemovalListener and the CacheLoader that are used for this cache. We will define these two implementations as inner classes of the FilePersistingCache:

private class PersistingRemovalListener implements RemovalListener<K, V> {
  @Override
  public void onRemoval(RemovalNotification<K, V> notification) {
    if (notification.getCause() != RemovalCause.COLLECTED) {
      try {
        persistValue(notification.getKey(), notification.getValue());
      } catch (IOException e) {
        LOGGER.error(String.format("Could not persist key-value: %s, %s", 
          notification.getKey(), notification.getValue()), e);
      }
    }
  }
}

public class PersistedStateCacheLoader extends CacheLoader<K, V> {
  @Override
  public V load(K key) {
    V value = null;
    try {
      value = findValueOnDisk(key);
    } catch (Exception e) {
      LOGGER.error(String.format("Error on finding disk value to key: %s", 
        key), e);
    }
    if (value != null) {
      return value;
    } else {
      return makeValue(key);
    }
  }
}

As is obvious from the code, these inner classes call methods of FilePersistingCache that we did not yet define. This allows us to define custom serialization behavior by overriding this class. The removal listener checks the reason for a cache entry being evicted: if the RemovalCause is COLLECTED, the cache entry was not manually removed by the user but was removed as a consequence of the cache's eviction strategy. We will therefore only try to persist a cache entry if the user did not wish the entry's removal. The CacheLoader will first attempt to restore an existing value from disk and only create a new value if such a value could not be restored.

The missing methods are defined as follows:

private V findValueOnDisk(K key) throws IOException {
  if (!isPersist(key)) return null;
  File persistenceFile = makePathToFile(persistenceDirectory, directoryFor(key));
  if (!persistenceFile.exists()) return null;
  FileInputStream fileInputStream = new FileInputStream(persistenceFile);
  try {
    FileLock fileLock = fileInputStream.getChannel().lock();
    try {
      return readPersisted(key, fileInputStream);
    } finally {
      fileLock.release();
    }
  } finally {
    fileInputStream.close();
  }
}

private void persistValue(K key, V value) throws IOException {
  if (!isPersist(key)) return;
  File persistenceFile = makePathToFile(persistenceDirectory, directoryFor(key));
  persistenceFile.createNewFile();
  FileOutputStream fileOutputStream = new FileOutputStream(persistenceFile);
  try {
    FileLock fileLock = fileOutputStream.getChannel().lock();
    try {
      persist(key, value, fileOutputStream);
    } finally {
      fileLock.release();
    }
  } finally {
    fileOutputStream.close();
  }
}


private File makePathToFile(@Nonnull File rootDir, List<String> pathSegments) {
  File persistenceFile = rootDir;
  for (String pathSegment : pathSegments) {
    persistenceFile = new File(persistenceFile, pathSegment);
  }
  if (rootDir.equals(persistenceFile) || persistenceFile.isDirectory()) {
    throw new IllegalArgumentException();
  }
  return persistenceFile;
}

protected abstract List<String> directoryFor(K key);

protected abstract void persist(K key, V value, OutputStream outputStream) 
  throws IOException;

protected abstract V readPersisted(K key, InputStream inputStream) 
  throws IOException;

protected abstract boolean isPersist(K key);

The implemented methods take care of serializing and deserializing values while synchronizing file access and guaranteeing that streams are closed appropriately. The last four methods remain abstract and are up to the cache's user to implement. The directoryFor(K) method should identify a unique file name for each key. In the easiest case, the toString method of the key's K class is implemented in such a way. Additionally, I made the persist, readPersisted and isPersist methods abstract in order to allow for a custom serialization strategy such as using Kryo. In the easiest scenario, you would use the built-in Java functionality which uses ObjectInputStream and ObjectOutputStream. For isPersist, you would return true, assuming that you would only use this implementation if you need serialization. I added this feature to support mixed caches where you can only serialize values for some keys. Be sure not to close the streams within the persist and readPersisted methods since the file system locks rely on the streams being open. The above implementation will take care of closing the stream for you.
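For the simplest case mentioned above, built-in Java serialization, the persist and readPersisted methods could be sketched roughly as follows. The class name is a placeholder, and the streams are deliberately left open because the calling code, which holds the file locks, closes them:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.OutputStream;

// Minimal sketch of persist/readPersisted using built-in Java serialization.
public class SerializationSketch {

  static void persist(Object value, OutputStream outputStream) throws IOException {
    ObjectOutputStream objectOut = new ObjectOutputStream(outputStream);
    objectOut.writeObject(value);
    objectOut.flush(); // push everything through without closing the stream
  }

  static Object readPersisted(InputStream inputStream)
      throws IOException, ClassNotFoundException {
    return new ObjectInputStream(inputStream).readObject();
  }

  public static void main(String[] args) throws Exception {
    // Demonstrate a round trip through a byte buffer instead of a file
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    persist("cached value", buffer);
    Object restored = readPersisted(new ByteArrayInputStream(buffer.toByteArray()));
    System.out.println(restored); // prints "cached value"
  }
}
```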

Finally, I added some service methods to access the cache. Implementing Guava's Cache interface would of course be a more elegant solution:

public V get(K key) {
  return underlyingCache.getUnchecked(key);
}

public void put(K key, V value) {
  underlyingCache.put(key, value);
}

public void remove(K key) {
  underlyingCache.invalidate(key);
}

protected Cache<K, V> getUnderlyingCache() {
  return underlyingCache;
}

Of course, this solution can be further improved. If you use the cache in a concurrent scenario, be further aware that the RemovalListener is, unlike most Guava cache methods, executed asynchronously. As obvious from the code, I added file locks to avoid read/write conflicts on the file system. This asynchronicity does however imply that there is a small chance that a value gets recreated even though there is still a value in memory. If you need to avoid this, be sure to call the underlying cache's cleanUp method within the wrapper's get method. Finally, remember to clean up the file system when you expire your cache. Optimally, you will use a temporary folder of your system for storing your cache entries in order to avoid this problem altogether. In the example code, the directory is represented by an instance field named persistenceDirectory which could for example be initialized in the constructor.

Update: I wrote a clean implementation of what I described above which you can find on my GitHub page and on Maven Central. Feel free to use it if you need to store your cache objects on disk.

Freitag, 5. Juli 2013

Object-based micro-locking for concurrent applications by using Guava

One of the presumably most annoying problems with writing concurrent Java applications is the handling of resources that are shared among threads, such as a web application's session and application data. As a result, many developers choose not to synchronize such resources at all if an application's concurrency level is low. It is for example unlikely that a session resource is accessed concurrently: if request cycles complete within a short time span, it is unlikely that a user will ever send a concurrent request from a second browser tab while the first request cycle is still in progress. With the ascent of Ajax-driven web applications, this trusting approach does however become increasingly hazardous. In an Ajax application, a user could for example request a longer-lasting task to complete while starting a similar task in another browser window. If these tasks access or write session data, you need to synchronize such access. Otherwise you will face subtle bugs or even security issues, as is for example pointed out in this blog entry.

An easy way of introducing a lock is via Java's synchronized keyword. The following example only blocks a request cycle's thread if a new instance needs to be written to the session:

HttpSession session = request.getSession(true);
if (session.getAttribute("shoppingCart") == null) {
  synchronized (session) {
    if (session.getAttribute("shoppingCart") == null) {
      session.setAttribute("shoppingCart", new ShoppingCart());
    }
  }
}
ShoppingCart cart = (ShoppingCart) session.getAttribute("shoppingCart");
doSomethingWith(cart);

This code will add a new instance of ShoppingCart to the session. Whenever no shopping cart is found, the code will acquire a monitor for the current user's session and add a new ShoppingCart to the HttpSession of the current user. This solution has however several downsides:
  1. Whenever any value is added to the session by the same method as described above, any thread that is accessing the current session will block. This will also happen when two threads try to access different session values. This blocks the application more restrictively than necessary.
  2. A servlet API implementation might choose to implement HttpSession not to be a singleton instance. If this is the case, the whole synchronization would fail. (This is however not a common implementation of the servlet API.)
It would be much better to find a different object than the HttpSession instance to synchronize on. Creating such objects and sharing them between different threads would however introduce the same problems. A nice way of avoiding that is by using Guava caches, which are both intrinsically concurrent and allow the use of weak values:

LoadingCache<String, Object> monitorCache = CacheBuilder.newBuilder()
    .weakValues()
    .build(new CacheLoader<String, Object>() {
      public Object load(String key) {
        return new Object();
      }
    });

Now we can rewrite the locking code like this:

HttpSession session = request.getSession(true);
Object monitor = ((LoadingCache<String, Object>) session.getAttribute("cache"))
    .getUnchecked("shoppingCart");
if (session.getAttribute("shoppingCart") == null) {
  synchronized (monitor) {
    if (session.getAttribute("shoppingCart") == null) {
      session.setAttribute("shoppingCart", new ShoppingCart());
    }
  }
}
ShoppingCart cart = (ShoppingCart) session.getAttribute("shoppingCart");
doSomethingWith(cart);

The Guava cache is self-populating and will simply return a monitor Object instance which can be used as a lock on the shared session resource which is universally identified by shoppingCart. The Guava cache is backed by a ConcurrentHashMap which avoids global synchronization by only locking the bucket that corresponds to the map key's hash value. As a result, the application was made thread safe without globally blocking it. Also, you do not need to worry about running out of memory since the monitors (and the related cache entries) will be garbage collected once they are no longer in use. If you do not use other caches, you can even consider soft references to optimize run time.
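If Guava is not available, the same per-key monitor idea can be sketched with the JDK's ConcurrentHashMap on Java 8 or later. Unlike the weak-valued Guava cache above, this simplified version keeps monitors alive for as long as the map exists; the class name is made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hands out one monitor object per key; computeIfAbsent guarantees that
// concurrent callers asking for the same key receive the same monitor.
public class MonitorRegistry {

  private final Map<String, Object> monitors = new ConcurrentHashMap<>();

  public Object monitorFor(String key) {
    return monitors.computeIfAbsent(key, ignored -> new Object());
  }
}
```

A caller would then write synchronized (registry.monitorFor("shoppingCart")) { ... } instead of locking the whole session.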

This mechanism can of course be refined. Instead of returning an Object instance, one could for example return a ReadWriteLock. Also, it is important to instantiate the LoadingCache on the session's start up. This can be achieved for example by an HttpSessionListener.

Samstag, 15. Juni 2013

Subtyping in Java generics

Generic types introduce a new spectrum of type safety to Java programs. At the same time, generic types can be quite expressive, especially when using wildcards. In this article, I want to explain how subtyping works with Java generics.

General thoughts on generic type subtyping


Different generic types of the same class or interface do not define a subtype hierarchy linear to the subtype hierarchy of possible generic argument types. This means for example that List<Number> is not a supertype of List<Integer>. The following prominent example gives a good intuition of why this kind of subtyping is prohibited:

// assuming that such subtyping was possible
ArrayList<Number> list = new ArrayList<Integer>();
// the next line would cause a ClassCastException
// because Double is no subtype of Integer
list.add(new Double(.1d));

Before discussing this in further detail, let us first think a little bit about types in general: types introduce redundancy to your program. When you define a variable to be of type Number, you make sure that this variable only references objects that know how to handle any method defined by Number, such as Number.doubleValue. By doing so, you make sure that you can safely call doubleValue on any object that is currently represented by your variable and you no longer need to keep track of the actual type of the variable's referenced object. (As long as the reference is not null. The null reference is actually one of the few exceptions to Java's strict type safety. Of course, the null "object" does not know how to handle any method call.) If you however tried to assign an object of type String to this Number-typed variable, the Java compiler would recognize that this object does in fact not understand the methods required by Number and would throw an error, because it could otherwise not guarantee that a possible future call to for example doubleValue would be understood. However, if we lacked types in Java, the program would not change its functionality just by that. As long as we never made an erroneous method call, a Java program without types would be equivalent. Viewed in this light, types merely prevent us developers from doing something stupid while taking away a little bit of our freedom. Additionally, types are a nice form of implicit documentation for your program. (Other programming languages such as Smalltalk do not know types and, besides being annoying most of the time, this can also have its benefits.)

With this, let's return to generics. By defining generic types you allow users of your generic class or interface to add some type safety to their code, because they can restrict themselves to only using your class or interface in a certain way. When you for example define a List to only contain Numbers by defining List<Number>, you advise the Java compiler to throw an error whenever you for example try to add a String-typed object to this list. Before Java generics, you simply had to trust that the list only contained Numbers. This could be especially painful when you handed references of your collections to methods defined in third-party code or received collections from this code. With generics, you could assure that all elements in your List were of a certain supertype, even at compile time.

At the same time, by using generics you lose some type safety within your generic class or interface. When you for example implement a generic List

class MyList<T> extends ArrayList<T> { }

you do not know the type of T within MyList and you have to expect that the type could be as unsophisticated as Object. This is why you can constrain your generic type to require some minimum type:

class MyList<T extends Number> extends ArrayList<T> {
  double sum() {
    double sum = .0d;
    for (Number val : this) {
      sum += val.doubleValue();
    }
    return sum;
  }
}

This allows you to assume that any object in MyList is a subtype of Number. That way, you gain some type safety within your generic class.

Wildcards


Wildcards are the Java equivalent of saying whatever type. Consequently, you are not allowed to use wildcards when instantiating a type, i.e. when defining what concrete type some instance of a generic class should represent. A type instantiation occurs for example when instantiating an object as new ArrayList<Number>, where you among other things implicitly call the type constructor of ArrayList which is contained in its class definition

class ArrayList<T> implements List<T> { ... }

with ArrayList<T> being a trivial type constructor with one single argument. Thus, neither within ArrayList's type constructor definition (ArrayList<T>) nor in the call of this constructor (new ArrayList<Number>) are you allowed to use a wildcard. When you are however only referring to a type without instantiating a new object, you can use wildcards, such as in local variables. Therefore, the following definition is allowed:

ArrayList<?> list;

By defining this variable, you are creating a placeholder for an ArrayList of any generic type. With a reference this general, however, you cannot add objects to the list via this variable. This is because you made such a general assumption of the generic type represented by the variable list that it would not be safe to add an object of for example type String, because the list behind list could require objects of some other subtype of some type. In general, this required type is unknown and there exists no object which is a subtype of any type and could therefore be added safely. (The exception is the null reference, which abrogates type checking. However, you should never add null to collections.) At the same time, all objects you get out of the list will be of type Object, because this is the only safe assumption about a common supertype of all possible lists represented by this variable. For this reason, you can form more elaborate wildcards using the extends and super keywords:

ArrayList<? extends Number> list1 = new ArrayList<Integer>();
ArrayList<? super Number> list2 = new ArrayList<Object>();

When a wildcard defines an upper bound via extends, as for list1, the compiler will enforce that any object you get out of this list will be some subtype of Number, such as for example Integer. Similarly, when defining a lower bound via super as in list2, you can expect any list to represent a supertype of Number, such as Object. Thus you can safely add instances of any subtype of Number to this list.
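Returning to the unbounded wildcard for a moment, these reading and writing rules are easy to verify in a few lines. The class name is made up, and the commented-out line is rejected by the compiler:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class WildcardDemo {
  public static void main(String[] args) {
    List<?> list = new ArrayList<String>(Arrays.asList("a", "b"));
    // list.add("c");           // does not compile: no type is safe to add
    list.add(null);             // the null reference is the only exception
    Object first = list.get(0); // reads always come back typed as Object
    System.out.println(first);  // prints "a"
  }
}
```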

Finally, you should note that you can actually use wildcards within type constructors if the used type arguments are themselves generic. The following use of a type constructor is for example perfectly legal:

ArrayList<?> list = new ArrayList<List<?>>();

In this example, the requirement that the ArrayList must not be constructed by using a wildcard type is fulfilled because the wildcard is applied to the type argument and not to the constructed type itself.

As for subtyping of generic classes, we can summarize that some generic type is a subtype of another type if its raw type is a subtype and if each of its type arguments is compatible with the corresponding type argument of the other. Because of this we can define

List<? extends Number> list = new ArrayList<Integer>();

because the raw type ArrayList is a subtype of List and because the type argument Integer is contained by the wildcard ? extends Number.

Finally, be aware that the wildcard List<?> is a shortcut for List<? extends Object>, since this is a commonly used type definition. If the generic type constructor does however enforce a different upper type bound, as for example in

class GenericClass<T extends Number> { }

a variable GenericClass<?> would instead be a shortcut to GenericClass<? extends Number>.

The get-and-put principle


This observation leads us to the get-and-put principle. This principle is best explained by another famous example:

class CopyClass {
  <T> void copy(List<T> from, List<T> to) {
    for(T item : from) to.add(item);
  }
}

This method definition is not very flexible. If you had some List<Integer>, you could not copy its contents to some List<Number> or even List<Object>. Therefore, the get-and-put principle states that you should always use upper-bounded wildcards (? extends) when you only read objects from a generic instance (via a return value) and always use lower-bounded wildcards (? super) when you only provide arguments to a generic instance's methods. Therefore, a better implementation of copy would look like this:

class CopyClass {
  <T> void copy(List<? extends T> from, List<? super T> to) {
    for(T item : from) to.add(item);
  }
}

Since you are only reading from one list and only writing to the other list, this signature is as flexible as possible. Unfortunately, this is something that is easily forgotten and you can even find classes in the Java core API that do not apply the get-and-put principle. (Note that the above method also describes a generic type constructor.)
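With the wildcard signature, the call that the stricter version rejected now compiles. This standalone sketch uses made-up class names:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CopyDemo {

  // The get-and-put principle applied: read via "? extends", write via "? super".
  static <T> void copy(List<? extends T> from, List<? super T> to) {
    for (T item : from) {
      to.add(item);
    }
  }

  public static void main(String[] args) {
    List<Integer> integers = Arrays.asList(1, 2, 3);
    List<Number> numbers = new ArrayList<>();
    copy(integers, numbers); // a plain List<T> in both positions would reject this call
    System.out.println(numbers); // prints [1, 2, 3]
  }
}
```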

Note that the types List<? extends T> and List<? super T> are both less specific than the requirement List<T>. Also note that this kind of subtyping is already implicit for non-generic types. If you define a method that asks for a method parameter of type Number, you can automatically receive instances of any subtype, such as Integer. Nevertheless, it is always type safe to read this Integer object you received, even when expecting the supertype Number. And since it is impossible to write back to this reference, i.e. you cannot overwrite the Integer object with for example an instance of Double, the Java language does not require you to waive your writing intention by declaring a method signature like void someMethod(List<? extends Number> number). Similarly, when you promised to return an Integer from a method but the caller only requires a Number-typed object as a result, you can still return (write) any subtype from your method. And because you cannot read in a value from a hypothetical return variable, you do not have to waive these hypothetical reading rights with a wildcard when declaring a return type in your method signature.