A Promising New Metric To Track Maintainability

A good metric for software maintainability is the holy grail of software metrics. What we would like to achieve with such a metric is that its values more or less agree with the developers' own judgement of the maintainability of their software system. If that succeeded, we could track the metric in our nightly builds and use it like the canary in the coal mine: if values deteriorate, it is time for a refactoring. We could also use it to compare the health of all the software systems within an organization. And it could help us decide whether it is cheaper to rewrite a piece of software from scratch instead of trying to refactor it.

A good starting point for achieving our goals is to look at metrics for coupling and cyclic dependencies. High coupling will definitely affect maintainability in a negative way. The same is true for big cyclic groups of packages/namespaces or classes. Growing cyclic coupling is a good indicator of structural erosion.

A good design, on the other hand, uses layering (horizontal) and a separation of functional components (vertical). Cutting a software system along functional aspects is what I call “verticalization”. The next diagram shows what I mean by that:

A good vertical design

The different functional components sit within their own silos, and the dependencies between those silos are not cyclic, i.e. there is a clear hierarchy between them. You could also describe that as vertical layering, or as micro-services within a monolith.

Unfortunately, many software systems fail at verticalization. The main reason is that nobody forces you to organize your code into silos. Since it is hard to do this the right way, the boundaries between the silos blur and functionality that should reside in a single silo is spread out over several of them. That in turn promotes the creation of cyclic dependencies between the silos. And from there maintainability goes down the drain at an ever-increasing rate.

Defining a new metric

Now how could we measure verticalization? First of all we must create a layered dependency graph of the elements comprising the system. We call those elements “components”; the definition of a component depends on the language. For most languages a component is a single source file. In special cases like C or C++ a component is a combination of related source and header files. But we can only create a proper layered dependency graph if there are no cyclic dependencies between components. So as a first step we combine each cyclic group into a single node.
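Collapsing cycle groups into single nodes is exactly what an algorithm for strongly connected components does. The following minimal Java sketch (not Sonargraph's actual implementation) uses Tarjan's algorithm and assumes the component dependencies are given as an adjacency list of integer ids:

import java.util.*;

// Minimal sketch (not Sonargraph's implementation): collapse cycle groups of
// components into single logical nodes using Tarjan's strongly connected
// components algorithm. graph.get(a) lists the components that component a depends on.
public class CycleGroupCondenser {

    private final List<List<Integer>> graph;
    private final int[] index, lowLink, sccId;
    private final boolean[] onStack;
    private final Deque<Integer> stack = new ArrayDeque<>();
    private int nextIndex = 0, sccCount = 0;

    public CycleGroupCondenser(List<List<Integer>> graph) {
        this.graph = graph;
        int n = graph.size();
        index = new int[n];
        lowLink = new int[n];
        sccId = new int[n];
        onStack = new boolean[n];
        Arrays.fill(index, -1);
        for (int v = 0; v < n; v++) {
            if (index[v] == -1) {
                strongConnect(v);
            }
        }
    }

    private void strongConnect(int v) {
        index[v] = lowLink[v] = nextIndex++;
        stack.push(v);
        onStack[v] = true;
        for (int w : graph.get(v)) {
            if (index[w] == -1) {
                strongConnect(w);
                lowLink[v] = Math.min(lowLink[v], lowLink[w]);
            } else if (onStack[w]) {
                lowLink[v] = Math.min(lowLink[v], index[w]);
            }
        }
        if (lowLink[v] == index[v]) {      // v is the root of a cycle group
            int w;
            do {
                w = stack.pop();
                onStack[w] = false;
                sccId[w] = sccCount;       // all members share one logical node id
            } while (w != v);
            sccCount++;
        }
    }

    // Maps each logical node id to the components it contains (size(i) below).
    public Map<Integer, List<Integer>> logicalNodes() {
        Map<Integer, List<Integer>> nodes = new HashMap<>();
        for (int v = 0; v < sccId.length; v++) {
            nodes.computeIfAbsent(sccId[v], id -> new ArrayList<>()).add(v);
        }
        return nodes;
    }
}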

A layered dependency graph with a cycle group treated as a single logical node

In the example above nodes F, G and H form a cycle group, so we combine them into a single logical node called FGH. After doing that we get three layers (levels). The bottom layer only has incoming dependencies, the top layer only has outgoing dependencies. From a maintainability point of view we want as many components as possible to have no incoming dependencies, because they can be changed without affecting other parts of the system. For the remaining components we want them to influence as few components as possible in the layers above them.

Node A in our example influences only E, I and J (directly and indirectly). B on the other hand influences everything in level 2 and level 3 except E and I. The cycle group FGH obviously has a negative impact on that. So we could say that A should contribute more to maintainability than B, because it has a lower probability of breaking something in the layers above. For each logical node i we could compute a contributing value c_i for a new metric estimating maintainability:

    \[ c_i = \frac{size(i) * (1 - \frac{inf(i)}{numberOfComponentsInHigherLevels(i)})}{n} \]

where n is the total number of components, size(i) is the number of components in the logical node (greater than one only for logical nodes created out of cycle groups) and inf(i) is the number of components influenced by node i.

Now let's compute c_i for node A:

    \[ c_A = \frac{1 * (1 - \frac{3}{8})}{12} = \frac{5}{96} \approx 0.052 \]

If you add up c_i for all logical nodes you get the first version of our new metric “Maintainability Level” ML:

    \[ ML_1 = 100 * \sum_{i=1}^{k} c_i \]

where k is the total number of logical nodes, which is smaller than n if there are cyclic component dependencies. We multiply by 100 to get a percentage value between 0 and 100.

Since every system has dependencies, it is practically impossible to reach 100%; that would require that none of the components in your system has any incoming dependencies. But all the nodes on the topmost level contribute their maximum contribution value to the metric, and the contributions of nodes on lower levels shrink the more nodes they influence on higher levels. Cycle groups increase the number of nodes influenced on higher levels for all their members and therefore tend to influence the metric negatively.

Now we know that cyclic dependencies have a negative influence on maintainability, especially if the cycle group contains a larger number of nodes. In our first version of ML we would not see that negative influence if the node created by the cycle group is on the topmost layer. Therefore we add a penalty for cycle groups with more than 5 nodes:

    \[ penalty(i) = \begin{cases} \frac{5}{size(i)}, & \text{if } size(i) > 5\\ 1, & \text{otherwise} \end{cases} \]

In our case a penalty value of 1 means no penalty. Values less than 1 lower the contributing value of a logical node. For example, if you have a cycle group with 100 nodes it will only contribute 5% (\frac{5}{100}) of its original contribution value. The second version of ML now also considers the penalty:

    \[ ML_2 = 100 * \sum_{i=1}^{k} c_i * penalty(i) \]
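To make the computation concrete, here is a minimal Java sketch that derives levels, influence counts, c_i, the penalty and ML_2 from an already condensed graph. The small example graph in the code is made up for illustration; it is not the graph from the figure above:

import java.util.*;

public class MaintainabilityLevel {

    public static void main(String[] args) {
        // Logical nodes of a small, made-up condensed graph.
        // size > 1 only for nodes created from cycle groups (here "FGH").
        String[] names = {"A", "B", "E", "FGH", "I"};
        int[] size = {1, 1, 1, 3, 1};
        // dependsOn[i] = indices of the logical nodes that node i depends on.
        int[][] dependsOn = {{}, {}, {0}, {1}, {3}};
        int n = Arrays.stream(size).sum();      // total number of components
        int k = names.length;                   // number of logical nodes

        // Level of a node = 1 + the highest level of the nodes it depends on.
        int[] level = new int[k];
        for (int i = 0; i < k; i++) {
            level[i] = computeLevel(i, dependsOn, level);
        }

        double ml2 = 0.0;
        for (int i = 0; i < k; i++) {
            Set<Integer> influenced = dependents(i, dependsOn);
            int inf = 0, componentsInHigherLevels = 0;
            for (int j = 0; j < k; j++) {
                if (influenced.contains(j)) inf += size[j];
                if (level[j] > level[i]) componentsInHigherLevels += size[j];
            }
            // Top-level nodes have nothing above them and contribute their maximum value.
            double ci = componentsInHigherLevels == 0
                    ? (double) size[i] / n
                    : size[i] * (1.0 - (double) inf / componentsInHigherLevels) / n;
            double penalty = size[i] > 5 ? 5.0 / size[i] : 1.0;   // punish big cycle groups
            ml2 += ci * penalty;
            System.out.printf("%-3s level=%d inf=%d c=%.3f%n", names[i], level[i], inf, ci);
        }
        System.out.printf("ML_2 = %.1f%n", 100 * ml2);
    }

    // Longest path from the bottom of the (now cycle-free) condensed graph.
    private static int computeLevel(int i, int[][] dependsOn, int[] memo) {
        if (memo[i] != 0) return memo[i];
        int level = 1;
        for (int d : dependsOn[i]) {
            level = Math.max(level, computeLevel(d, dependsOn, memo) + 1);
        }
        return memo[i] = level;
    }

    // All nodes that depend on 'start' directly or indirectly (the influenced nodes).
    private static Set<Integer> dependents(int start, int[][] dependsOn) {
        Set<Integer> visited = new HashSet<>();
        Deque<Integer> todo = new ArrayDeque<>(List.of(start));
        while (!todo.isEmpty()) {
            int current = todo.pop();
            for (int j = 0; j < dependsOn.length; j++) {
                for (int d : dependsOn[j]) {
                    if (d == current && visited.add(j)) todo.push(j);
                }
            }
        }
        return visited;
    }
}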

This metric already works quite well. When we run it on well-designed systems we get values over 90. For systems with no recognizable architecture, like Apache Cassandra, we get a value in the twenties.

Apache Cassandra: 477 components in a gigantic cycle group

Fine-tuning the metric

When we tested this metric we made two observations that required adjustments:

  • It did not work very well for small modules with fewer than 100 components. Here we often got relatively low ML values, because a small number of components naturally increases relative coupling without really hurting maintainability. 
  • A client's Java project was considered by its developers to have poor maintainability, but the metric showed a value in the high nineties. On closer inspection we found that the project did indeed have a good and almost cycle-free component structure, but the package structure was a total mess: almost all the packages in the most critical module were in a single cycle group. This usually happens when there is no clear strategy for assigning classes to packages, which confuses developers because classes become hard to find.

The first issue could be solved by adding a sliding minimum value for ML if the scope to be analyzed has fewer than 100 components:

    \[ ML_3 = \begin{cases} (100 - n) + \frac{n}{100} * ML_2, & \text{if } n < 100\\ ML_2, & \text{otherwise} \end{cases} \]

where n is again the number of components. This variant can be justified by arguing that small systems are easier to maintain in the first place. With the sliding minimum value a system with 40 components can never have an ML value below 60.
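As a quick sanity check with made-up numbers, a module with 40 components and an ML_2 value of 75 would get:

    \[ ML_3 = (100 - 40) + \frac{40}{100} * 75 = 60 + 30 = 90 \]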

The second issue is harder to solve. Here we decided to compute a second metric that would measure package cyclicity. The cyclicity of a package cycle group is the square of the number of packages in the group. A cycle group of 5 elements has a cyclicity of 25. The cyclicity of a whole system is just the sum of the cyclicity of all cycle groups in the system. The relative cyclicity of a system is defined as follows:

    \[ relativeCyclicity = 100 * \frac{\sqrt{sumOfCyclicity}}{n} \]

where n is again the total number of packages. As an example, assume a system with 100 packages. If all these packages are in a single cycle group, the relative cyclicity is 100 * \frac{\sqrt{100^2}}{100}, which equals 100, meaning 100% relative cyclicity. If on the other hand we have 50 cycle groups of 2 packages, we get 100 * \frac{\sqrt{50*2^2}}{100}, approximately 14%. That is what we want, because bigger cycle groups are a lot worse than smaller ones. So we compute ML_{alt} like this:

    \[ ML_{alt} = 100 * (1 - \frac{\sqrt{sumOfPackageCyclicity}}{n_p}) \]

where n_p is the total number of packages. For smaller systems with fewer than 20 packages we again add a sliding minimum value analogous to ML_3.
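A short Java sketch of this alternative computation, assuming the sizes of all package cycle groups (groups of two or more packages) are known; the example numbers are made up:

import java.util.Collections;
import java.util.List;

// Minimal sketch of the package-based alternative ML_alt. Only real cycle
// groups (two or more packages) are passed in; the example numbers are made up.
public class PackageCyclicity {

    static double mlAlt(List<Integer> cycleGroupSizes, int totalPackages) {
        double sumOfCyclicity = cycleGroupSizes.stream()
                .mapToDouble(s -> (double) s * s)   // cyclicity of one group = size squared
                .sum();
        return 100.0 * (1.0 - Math.sqrt(sumOfCyclicity) / totalPackages);
    }

    public static void main(String[] args) {
        // 100 packages all in one cycle group: 100% relative cyclicity, so ML_alt = 0.
        System.out.println(mlAlt(List.of(100), 100));
        // 50 cycle groups of 2 packages: about 14% relative cyclicity, so ML_alt is roughly 86.
        System.out.println(mlAlt(Collections.nCopies(50, 2), 100));
    }
}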

Now the final formula for ML is defined as the minimum of the two alternative computations:

    \[ ML_4 = min(ML_3, ML_{alt}) \]

Here we simply argue that for good maintainability both the component structure and the package/namespace structure must be well designed. If one or both suffer from bad design or structural erosion, maintainability decreases as well.

Multi-module systems

For systems with more than one module we compute ML for each module. Then we compute the weighted average (weighted by the number of components in each module) over the larger modules of the system. To decide which modules are weighted we sort the modules by decreasing size and add each module to the weighted average until either 75% of all components have been added or the module contains at least 100 components.

The reasoning behind this is that the action usually happens in the larger, more complex modules. Small modules are not hard to maintain and have very little influence on the overall maintainability of a system.
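The following Java sketch shows one possible reading of that selection rule (modules are taken largest first and included as long as less than 75% of all components are covered or the module still has at least 100 components); the module names, sizes and ML values are made up:

import java.util.Comparator;
import java.util.List;

public class SystemMl {

    record Module(String name, int components, double ml) {}

    // Weighted average of the per-module ML values over the larger modules,
    // under one possible reading of the selection rule described above.
    static double systemMl(List<Module> modules) {
        List<Module> sorted = modules.stream()
                .sorted(Comparator.comparingInt(Module::components).reversed())
                .toList();
        int total = sorted.stream().mapToInt(Module::components).sum();
        int covered = 0, weight = 0;
        double weightedSum = 0;
        for (Module m : sorted) {
            if (covered >= 0.75 * total && m.components() < 100) {
                break;  // enough coverage reached and only small modules are left
            }
            weightedSum += m.ml() * m.components();
            weight += m.components();
            covered += m.components();
        }
        return weightedSum / weight;
    }

    public static void main(String[] args) {
        System.out.printf("system ML = %.1f%n",
                systemMl(List.of(new Module("core", 800, 72.0),
                                 new Module("web", 300, 85.0),
                                 new Module("util", 40, 98.0))));
    }
}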

Try it yourself

Now you might wonder what this metric would say about the software you are working on. You can use our free tool Sonargraph-Explorer to compute the metric for your system written in Java, C# or Python. ML_{alt} is currently only considered for Java and C#. For systems written in C or C++ you would need our commercial tool Sonargraph-Architect.

ML in Sonargraph’s metric view

Of course we are very interested in hearing your feedback. Does the metric align with your gut feeling about maintainability or not? Do you have suggestions or ideas to further improve the metric? Please leave your comments in the comment section below.

References

The work on ML was inspired by a paper about another promising metric called DL (Decoupling Level). DL is based on the research work of Ran Mo, Yuanfang Cai, Rick Kazman, Lu Xiao and Qiong Feng from Drexel University and the University of Hawaii. Unfortunately part of the algorithm computing DL is protected by a patent, so we are not able to provide this metric in Sonargraph at this point. It would be interesting to compare the two metrics on a range of different projects.

Automatic Detection of Singletons

Today, we released a new version of Sonargraph with an improved script to find Singletons. “Singleton” is one of the design patterns described by the “Gang of Four” [1]. It represents an object that should only exist once.
There are a couple of pros and cons to Singletons that I won't discuss in detail in this blog post. For anyone interested, I recommend “Item 3: Enforce a singleton property with a private constructor or an enum type” in “Effective Java”, written by Joshua Bloch [2]. Two interesting links that came up during a quick internet search are listed as references [3] [4]. Let's just summarize that it is important to ensure that Singletons are properly implemented to avoid bad surprises (a.k.a. bugs) in your software. And you should keep an eye on existing Singletons and check that they are not misused as global variables.

This blog post describes how you can detect Singletons by utilizing the Groovy scripting functionality of Sonargraph.
Read More

Finding Distributed Packages/Namespaces with the Sonargraph Scripting Engine

Today I will show how to make use of a very powerful, yet underutilized capability of Sonargraph-Architect. By writing simple Groovy scripts you are able to create your own code checkers or define your own metrics. Many of our most useful scripts are just about 50 lines of code and therefore not a big effort to create. As an example we will develop a script that finds packages (Java) or namespaces (C#, C++) that occur in more than one module.

The scripting engine of Sonargraph is based on our scripting API. Most scripts use the visitor pattern. Using this pattern a script can traverse specific elements of Sonargraph’s software system model, which is basically a very big tree data structure. At the root there is the software system node, which is accessible through a globally available instance of class CoreAccess, called “coreAccess”. This specific instance is language agnostic, i.e. it can be used for scripts that work with all programming languages supported by Sonargraph. When creating a script you decide whether it will be language specific or language agnostic. Language specific scripts have access to more detailed language specific data and use different root objects like “javaAccess” or “csharpAccess”.
Read More

Managing the “not so visible” dependencies in your Java code

In modern object-oriented languages, inheritance is heavily used, with all its pros and cons. Languages such as Java offer single inheritance but also allow classes to implement an arbitrary number of interfaces. With inheritance and interface implementation comes one additional ingredient that is naturally expected: method overriding. As software evolves, you end up with hierarchies involving multiple classes and interfaces with method definitions and implementations, and the classes in these hierarchies are then used by other classes. In this context, it is difficult if not impossible to keep track by hand of the usages or overriding classes of the methods we are interested in. Hereafter, I will present this problem in more detail with a very concrete yet complex enough example, as well as some tools that can empower software architects and developers to gain more control over their code. Read More

Use SonarQube + Sonargraph Plugin to Detect Cyclic Dependencies

Cyclic dependencies have long been seen as a major code smell. We like to point to John Lakos as a reference [Lako1996], and a Google search about this topic will bring up valuable resources if you are unfamiliar with the negative effects. In this blog post, I take it as a given that you are interested in detecting cycles and that you agree that they should be avoided. If you see things differently, that’s fine by me – but then this blog post won’t be really interesting for you.

A number of static analysis tools exist that can detect those cycles in your code base automatically. SonarQube was one of them, until the Dependency Structure Matrix (DSM) and cycle detection were dropped with version 5.2. The DZone article by Patroklos Papapetrou (“Working with Dependencies to Eliminate Unwanted Cycles”) and the SonarQube documentation (“Cycles – Dependency Structure Matrix”) illustrate the previous functionality.

I noticed that some people miss those features badly and complain about their removal. The comments on the issue “Drop the Design related services and metrics” and the tweet by Oliver Gierke are two examples.

But thanks to the SonarQube ecosystem of plugins, there is a solution: Use the free Sonargraph Explorer and the Sonargraph Integration Plugin to get the checks for cycles back in SonarQube!
I will demonstrate that the setup and integration of Sonargraph into the build is fast and easy.

Read More

How to Organize your Code

In this article I am going to present a realistic example that will show you how to organize your code and how to describe this organization using our architecture DSL (domain specific language) implemented by our static analysis tool Sonargraph-Architect. Let us assume we are building a micro-service that manages customers, products and orders. A high level architecture diagram would look like this:

System Architecture

It is always a good idea to cut your system along functionality, and here we can easily see three subsystems. In Java you would map those subsystems to packages; in other languages you might organize your subsystems into separate folders on your file system and use namespaces if they are available.

Read More

Automate Cross-Project Analysis

Sonargraph is our tool to quickly assess the quality of a project. I am frequently asked how Sonargraph supports the enterprise architect who needs to answer quality-related questions in the broader context across several projects.
Since we recently released new functionality that allows the automation of recurring quality checks, now is the right time for a blog post.
Example questions that an enterprise architect wants to answer:

  1. How frequently does a certain anti-pattern occur?
  2. How strong is the dependency on deprecated functionality?
  3. How many of my projects suffer from high coupling?

This article will demonstrate the following core functionality of Sonargraph to answer the above questions for a couple of projects, and show how to automate the analysis:

  1. Use a script to detect an anti-pattern (“Supertype uses Subtype”)
  2. Create a simple reference architecture to detect usage of sun.misc.Unsafe
  3. Add a threshold for a coupling metric (NCCD)
  4. Export a quality model
  5. Use Sonargraph Build Maven integration to execute the analysis.
  6. Create a small Java project to execute the Sonargraph Maven goal, access the data in the generated XML reports and create a summary.

Read More

Meet the Sonargraph Gradle Plugin – and Say Goodbye to JDepend

With the release of Sonargraph 8.8.0 today we also released the first version of our brand new Gradle plugin. It allows you to create reports for any Java project, even those that do not have a Sonargraph system specification. This is a quick and easy way to get some metrics and other findings, like circular dependencies, for your project. In this article I am going to show you how you can use our Gradle plugin to make your build fail if your system contains cyclic package dependencies. Believe it or not, there are still people out there who use JDepend for this very reason (I did too, but that was more than a decade ago). Since you can do everything I am describing with our free Sonargraph-Explorer license, including the interactive visualization of cycles, I think you won't regret saying “Good Bye” to JDepend and “Hello” to Sonargraph. Read More

Designing a DSL to Describe Software Architecture (Part 3)

Connecting Complex Artifacts

After covering the basics and some advanced concepts in the previous articles, this post examines the different possibilities for defining connections between complex artifacts. Let us assume we use the following aspect to describe the inner structure of a business module:

// File layering.arc
exposed artifact UI 
{ 
    include "**/ui/**"
    connect to Business 
} 
exposed artifact Business 
{ 
    include "**/business/**"
 
    interface default
    {
        // Only classes in the "iface" package can be used from outside
        include "**/iface/*"
    }
 
    connect to Persistence
} 
artifact Persistence 
{ 
    include "**/persistence/**" 
}
exposed public artifact Model
{
    include "**/model/**"
}

This example also shows a special feature of our DSL. You can redefine the default interface if you want to restrict incoming dependencies to a subset of the elements assigned to an artifact. Our layer “Business” is now only accessible through the classes in the “iface” package. Read More