Wednesday, May 14, 2014

Spring Framework Component Container Decomposition

In this post I explain how one could split one huge component container into a number of smaller ones.


The Spring Framework Component Container (or Spring Framework IoC Container) is an implementation of the Inversion of Control principle, also known as Dependency Injection. A typical application is represented as a number of components (beans, services; i.e. Java classes) with dependencies, either declared explicitly or auto-discovered. The Component Container is configured via .xml configuration files and/or annotations on Java classes. For more details you may refer to the Spring Framework documentation.

Problem statement

Say you have an Application that uses IoC. It is now very easy to add new components and to inject almost every component as a dependency of any other. But in the long run this can lead to several issues.

The first one is that most components tend to depend transitively on most of the other components, so the Application turns into a knot of undetachable dependencies. Unit tests then transform into complex integration tests, where most of the Application's components have to be created.

The second problem is with the library dependencies classpath. One part of the Application could easily use library A, while library B is used in another part. Say A depends on C v1.0 and B depends on C v2.0. Here is the problem. The ideal solution is to remove the A and B libraries from the main classpath and load each separately, getting rid of the need to resolve an ideal version of library C.

Splitting components

How could we solve those issues? My answer is to split the application container into a number of sub containers, so we can hide some components' implementation details from the others. We split one container into a root container and a number of sub containers. The split provides enough isolation both for component dependencies and for classpaths.

The split process can be done in the following way. We take one big component (a facade) and move all of its implementation details into a sub container. Iteratively, we may hide the implementations of all huge components from the rest of the application and thus resolve the first issue.

Speaking of the library dependencies: each sub container could load classes from an extended classpath in a dedicated classloader. This helps to resolve the second issue.

There are other solutions for dependency isolation; for example, you may take a look at the OSGi framework. The solution below is much simpler, and it is up to you which one to consider.


Say you decided to split one component container into a number of sub containers. Here is the list of tasks to implement:

  • Create a sub container whose parent is the root container (A)
  • Make the sub container scan classes from a specific classloader (B)
  • Allow components from a sub container to depend on components from the root container (C)
  • Declaratively export some components from a sub container to the root container (D)

Simple Spring Sub Container (A, C)

A child Spring container (context) is created in the following way: call the constructor of ClassPathXmlApplicationContext, pass the current container (obtained via the ApplicationContextAware interface implementation) as the parent, and specify the configuration resources and the name.

The created sub container includes parent container components in the dependencies resolution.

NOTE. Sub container configuration resources must NOT overlap with the configuration resources of any other container in the application. Otherwise the sub container may re-load all of the application's components and crash.

I recommend considering classpath*:META-INF/app-root-configuration-*.xml for the root context and classpath*:META-INF/app-child-configuration-*.xml for a sub container to avoid a possible clash. The same applies to classes scanned for annotations.

We put the sub container creation code into a root container component. The component implements InitializingBean to trigger the sub container start.
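A minimal sketch of such a component might look like the following. The class name and the configuration resource pattern are illustrative, not taken from the original application:

```java
import org.springframework.beans.factory.DisposableBean;
import org.springframework.beans.factory.InitializingBean;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// Hypothetical root-container component that owns one sub container.
public class SubContainerStarter
        implements ApplicationContextAware, InitializingBean, DisposableBean {

  private ApplicationContext rootContext;
  private ClassPathXmlApplicationContext subContext;

  public void setApplicationContext(ApplicationContext context) {
    this.rootContext = context; // injected by the root container
  }

  public void afterPropertiesSet() {
    // Note the child-specific resource pattern: it must not overlap
    // with the root container's configuration resources.
    subContext = new ClassPathXmlApplicationContext(
        new String[]{"classpath*:META-INF/app-child-configuration-*.xml"},
        rootContext); // parent context enables dependencies on root beans
  }

  public void destroy() {
    subContext.close(); // shut the sub container down together with its owner
  }
}
```

Passing the root context as the parent is what makes tasks (A) and (C) work: bean lookups that are not satisfied inside the sub container fall back to the root container.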

Using Custom Classloader (B)

Spring Framework provides an API to specify a custom classloader for the container.
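For example (a sketch; the deferred-refresh constructor usage and names are illustrative):

```java
import java.net.URL;
import java.net.URLClassLoader;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;

// Sketch: create the sub container with refresh deferred, install a
// dedicated classloader for the module's extended classpath, then refresh.
public class SubContainerWithClassLoader {

  public static ClassPathXmlApplicationContext start(ApplicationContext root,
                                                     URL[] moduleJars) {
    // Child-first visibility of module jars, delegating to the root loader.
    ClassLoader moduleLoader =
        new URLClassLoader(moduleJars, root.getClassLoader());

    ClassPathXmlApplicationContext subContext = new ClassPathXmlApplicationContext(
        new String[]{"classpath*:META-INF/app-child-configuration-*.xml"},
        false /* do not refresh yet */,
        root);
    subContext.setClassLoader(moduleLoader); // classes are resolved via the module loader
    subContext.refresh();
    return subContext;
  }
}
```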

Exporting components to the root container (D)

There is an expected need to export some component implementations back to the root container. This can easily be done via the BeanFactory. What we need is to declare non-generic getter methods in the component that creates the sub container. The getter methods should be registered as factory methods in the root container (in an .xml file or via annotations). For the getter implementation you may simply call a wrapping method with an explicit type over ApplicationContext#getBean(Class t).

NOTE. Using a generic factory method may leave the container without knowledge of the exact type of the component, so the application may randomly fail to resolve component dependencies in the container. That is why I recommend avoiding generic factory methods in this scenario.
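A sketch of such an export (the bean and type names are made up for illustration):

```java
import org.springframework.context.support.ClassPathXmlApplicationContext;

// Hypothetical service interface implemented inside the sub container.
interface PaymentService { void pay(); }

// Hypothetical facade component living in the root container. The
// non-generic getter can be exposed back to the root container, e.g.:
//
//   <bean id="paymentService"
//         factory-bean="paymentModule"
//         factory-method="getPaymentService"/>
public class PaymentModule {
  private ClassPathXmlApplicationContext subContext; // created as described earlier

  // Non-generic return type: the root container knows the exact bean type.
  public PaymentService getPaymentService() {
    return subContext.getBean(PaymentService.class);
  }
}
```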

Usage example

I decided to implement the pattern in a new application I started. The main reason was to split the application classpath into several modules to hide nasty dependencies in them. I realised I needed to avoid dependency resolution hell.

Thanks to the pattern I split my application into several logical sub containers. This helped me avoid dependency hell. The pattern also forced me to hide implementation details: I have several components in the root container that are implemented by a number of internal components in sub containers, and each sub container depends on a number of libraries with intersecting sets of transitive dependencies.

The definition of a sub container in the application follows the pattern described in the sections above.

To add the next sub container I only need to add a similar class to the root container. And that is it!


Crazy as it may sound, one may use this technique recursively to provide even better separation and/or interface/implementation design.

Hope I covered all the details. Please let me know if you'd like me to cover any gaps.

Saturday, April 26, 2014

Named Stack Frames for the JVM

In this post I'd like to introduce my new Java library called named-frames. The library allows including runtime-generated information in JVM/Java stack traces and thread dumps.


From time to time I look into feedback emails and application logs. Some of these logs contain exceptions and thread dumps. It is always nice to know the build number of the product from which those dumps were captured. The truth is that the build number or the product version is usually not included in the dump.

I had a dream to include the build number of the application, and some other meaningful data, straight into the execution call stack, so that every problem report contains those details no matter how the report was generated.

In addition to the build number you may include much more information in the call stack. For example, current task names or any other data that is meaningful for faster debugging.

The Library Usage

The application should wrap its code into a call to the library in the following way:
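A usage sketch might look like this. The API shape is an assumption based on the NamedStackFrame#frame() method mentioned in this post, not a copy of the library's real interface; check the library's README for the actual signatures.

```java
// Sketch only: NamedStackFrame and its frame() method are assumed from
// the post; the build number value is a made-up example.
public class Main {
  public static void main(String[] args) {
    final String buildNumber = "42.1017"; // e.g. injected at build time
    NamedStackFrame.frame("MyApp_build_" + buildNumber, new Runnable() {
      public void run() {
        // All stack traces and thread dumps captured from code running
        // here contain a synthetic frame carrying the build number.
        startApplication();
      }
    });
  }

  private static void startApplication() { /* application code */ }
}
```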

The most exciting part of the captured stack trace is the line with the dynamically generated string inside it.

In the same way you may include as many named stack frames as needed by wrapping each into a call to the NamedStackFrame#frame() method.

Implementation details

The library is implemented in pure Java 1.5, with Maven as the project model. I intentionally avoided any runtime dependencies in the library to prevent dependency hell for its users.

The named method is added via dynamic code generation. I use compiled class byte-code as the template for the code generation. For each unique stack frame name the library generates and loads a class into an internal classloader. The generated class is reused for all future calls with the same stack frame name.
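The per-name reuse can be illustrated with the following simplified sketch. This is NOT the library's real code generation: a plain Object stands in for the generated class so the caching logic itself is runnable.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for the library's per-name class cache: one
// "generated class" per unique frame name, reused on repeated calls.
class FrameClassCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<String, Object>();
    private int generated = 0;

    public synchronized Object forName(String frameName) {
        Object generatedClass = cache.get(frameName);
        if (generatedClass == null) {
            generatedClass = new Object(); // stands in for generate + defineClass(...)
            cache.put(frameName, generatedClass);
            generated++;
        }
        return generatedClass; // same instance for all future calls with this name
    }

    public synchronized int generatedCount() {
        return generated;
    }
}
```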

It is important to notice that each generated class consumes space in the PermGen of the JVM (this changed in Java 1.8, where PermGen was replaced by Metaspace). I recommend checking that the full set of frame names used in your program is bounded and will not lead to PermGen OOMs. Unused loaded classes can be garbage-collected by the JVM (depending on the provided JVM options).

Sources & Binaries

The library code is available under the MIT license.

The library is available on GitHub.

I published a snapshot build of the library to a Maven repository. In a few weeks I plan to apply for Maven Central publishing.

Saturday, March 8, 2014

Docker Vagrant TeamCity

In this post I'll introduce the brand new TeamCity.Virtual plugin, which supports executing builds in virtual environments provided by Vagrant or Docker.

Software is the problem

Every build running under CI requires some software/libraries/JVMs to be pre-configured on the machine. When your setup has only one build agent, everything is simple. But if you need an easily recreatable environment, things get more complicated: every time you need some software, you have to install it on every build agent, which is lots of routine work. With a pool of build machines it gets overcomplicated.

There is at least one way to cope with this complexity: virtual machines and Linux containers. Thanks to Vagrant, they became an easy, repeatable and scriptable solution. With Linux containers (for example Docker) things are even faster.

The TeamCity.Virtual plugin makes using Docker and Vagrant in TeamCity as easy as using a build runner. For example, you may run your Node.js builds in a fresh environment of the required version without any extra per-agent pre-configuration work.

TeamCity.Virtual plugin

The build runner is called Docker / Vagrant. On the runner settings page you specify the virtual environment configuration to start and the script to execute inside it.


Implementation Details

The plugin works as follows:

  • The plugin detects Vagrant and/or Docker installed on the build agent (so only compatible build agents will be used)
  • It starts the virtual environment (box/container) on the build agent
  • Mounts the build checkout directory into the started machine
  • Maps the working directory into the machine's path
  • Executes the provided script in the working directory of the started virtual environment
  • Destroys the environment, wiping all of its state


A Vagrant box is specified via a Vagrantfile. For Docker you need to specify only the image name.

Run experiments

The build log of a Vagrant VM command shows each of these steps; builds under Docker look similar.

What it is for

In the next posts I'll cover the details of how to use the TeamCity.Virtual plugin to run builds with Node.js, JVM, Android and much more. Remember, all you need to configure on build agents is Docker or Vagrant. Here are links to publicly available images: Vagrant Boxes and Docker Images.

Download and Run

The plugin is implemented under the Apache 2.0 license. Sources are on GitHub. Builds are set up and running in TeamCity.

For more details, see

Your feedback is welcome! Share what you think.