Jenkins parse git change log

There is a neat plugin in Jenkins called Git Changelog which provides information about the git change that triggered the build. Recently I wanted to parse this change so I could send notification emails when certain things changed, e.g., DB schemas.

My process for doing the same was:

  • Parse the changelog to populate certain environment variables
  • Based on values of said environment variables, trigger a conditional action
  • Send notification emails with relevant attachments.
  1. Parsing the changelog:
* Get the current build: 
import hudson.model.*;
import hudson.util.*;
import hudson.scm.*;
import hudson.plugins.git.*

def thr = Thread.currentThread();
def build = thr?.executable;
* Get the gitlog changeset and entries within the changeset
def changeSet = build.getChangeSet();
def entries = changeSet.getItems();
* Parse entries to capture authors, comments and files changed
def map=[:]


def filePath = ""

for (int j = 0; j < entries.length; j++) {
    def entry = entries[j]
    map.put('AUTHOR', entry.getAuthor())
    def files = new ArrayList(entry.getAffectedFiles())
    for (int k = 0; k < files.size(); k++) {
        def file = files[k]
        if (file.getPath().contains("Schema")) {
            filePath = file.getPath() + "," + filePath
        }
    }
}
if (filePath.length() > 1) {
    filePath = filePath.substring(0, filePath.length() - 1)
}

map.put('FILE', filePath )

if (changeSet.getLogs()) {
    if (changeSet.getLogs().get(0)) {
        def changeComment = changeSet.getLogs().get(0).getComment().split(":")[0]
        map.put('CHANGE_COMMENT', changeComment)
    }
}
return map

The above code parses the changeset entries to capture the author and commit comment, and builds a map of environment variables including the files changed in this changeset.
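The manual trailing-comma trim in the script can be avoided by joining as you filter. A minimal Java sketch of the same filtering logic (class and file names here are hypothetical, not part of the Jenkins API):

```java
import java.util.List;
import java.util.StringJoiner;

public class SchemaFiles {
    // Join the paths that contain "Schema" with commas, with no trailing separator.
    public static String schemaPaths(List<String> paths) {
        StringJoiner joiner = new StringJoiner(",");
        for (String p : paths) {
            if (p.contains("Schema")) {
                joiner.add(p);
            }
        }
        return joiner.toString(); // empty string when nothing matched
    }
}
```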

  2. Conditional action to trigger based on files changed:

Conditional Action

  3. Editable Email notification if conditional action is triggered, with attachments:

Editable email notification

Calling Stored procedures with JPA

A lot of times we need to load entities into memory that contain many nested relationships. Most of the time these nested relationships are lazy loaded, which causes multiple queries to be fired and hurts performance. One solution is to load the entity using fetches in HQL, but this can lead to another serious issue: the cartesian product problem.

A better way to solve this is to leave the heavy lifting to the DB using a stored procedure and call the stored procedure from JPA as below:

  1. Annotate Entity with @NamedStoredProcedureQuery:

    @NamedStoredProcedureQuery(
        name = "LOAD_ENTITY", 
        procedureName = "DB.LOAD_ENTITY", 
        resultClasses = { Class1.class, Class2.class, Class3.class, Class4.class },
        parameters = {
        @StoredProcedureParameter(name = "Id", mode = ParameterMode.IN, type = Integer.class )
        }
    )
    

    The resultClasses are important: they tell Hibernate how to map the returned results into the classes mentioned.

  2. Call from DAO using entity Manager:

    StoredProcedureQuery loadEntitySP = this.getEntityManager().createNamedStoredProcedureQuery("LOAD_ENTITY");
    loadEntitySP.setParameter("Id", Id);
    List<Object> results = loadEntitySP.getResultList();
    
    
  3. Iterate through the results to get the object needed. If the stored procedure returns multiple SQL results, Hibernate maps them into multiple entity objects (one instance, multiple references). The API does not provide a way to restrict this as it does with HQL queries.

    
     Object[] resultArray = (Object[]) results.get(0); // get the first row from the duplicated results
     Entity entity = (Entity) resultArray[0]; // each row holds one reference per result class; index 0 is the main entity
     return entity;
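Since every row repeats references to the same managed instances, one way to collect just the distinct root entities is an order-preserving set. This is a sketch of my own, not part of the JPA API (class and method names are hypothetical):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class ResultDedup {
    // Collect the distinct first-column entities from raw stored-procedure rows.
    // Each row is an Object[] whose element 0 is the root entity reference;
    // a LinkedHashSet drops the duplicate references while keeping encounter order.
    public static <T> List<T> distinctRoots(List<Object[]> rows, Class<T> type) {
        Set<T> seen = new LinkedHashSet<>();
        for (Object[] row : rows) {
            seen.add(type.cast(row[0]));
        }
        return new ArrayList<>(seen);
    }
}
```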
    

Upgrading to log4j2

Logging is a critical part of any application. Without proper logging, teams resort to debugging in Production to diagnose mysterious issues. Yikes!! I have been there and continue to live in that hell with the legacy software we all have to work with at some point.

Great, now everyone is on board with logging! Let’s fix this debugging-in-Production issue by adding logging in all our methods after every variable is set. Having all the information will fix everything, right? -_____-. Pretty soon we realize that our excessive appetite for information and logging is bringing our application down. All that I/O is not cheap. Logging, as with all of software development, is something that has to be tweaked to our needs as the code base grows. Too little or too much can have serious consequences.

Say we have tweaked our logging to capture exactly the right amount of information, with all the different levels configurable for different needs. Since we are working on a Java application, we use the widely used log4j framework. Everything goes great until we scale to process 10x the load we tested for. We scramble to make our code handle the load but we still run into mysterious bottlenecks. You got it, it's logging again! Log4j 1.x logs synchronously, so as we process enormous loads, all that synchronous I/O eventually catches up and slows the application down. Logging is important, but it is not critical enough to be the bottleneck. This brings us to Log4j2 and async loggers.

Log4j2 offers async loggers, which use the LMAX Disruptor to log asynchronously and perform dramatically better than log4j 1.x. Please read this for more information on how and why it is so much faster.

Here are the steps I followed to upgrade from log4j1 to log4j2:

  • Add the below entries to your pom. For log4j2 versions greater than 2.8, use disruptor version greater than 3.3:
		<dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-api</artifactId>
            <version>${log4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>${log4j.version}</version>
        </dependency>
        <dependency>
            <groupId>com.lmax</groupId>
            <artifactId>disruptor</artifactId>
            <version>${disruptor.version}</version>
        </dependency>
  • Since log4j2 changed the logger package from org.apache.log4j.Logger to org.apache.logging.log4j.Logger, I wanted to write a wrapper around this so I would not have to change thousands of class files the next time the package changed:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.message.MessageFactory;

public class CustomLogManager extends LogManager {

	public static CustomLogger getCustomLogger(Class<?> clazz) {
		Logger logger = getLogger(clazz);
		return new CustomLogger((LoggerContext) getContext(), logger.getName(), logger.getMessageFactory());
	}
}

public class CustomLogger extends org.apache.logging.log4j.core.Logger {

	protected CustomLogger(LoggerContext context, String name, MessageFactory messageFactory) {
		super(context, name, messageFactory);
	}
}
  • Create the below log4j2.xml to configure logging. I have mixed async and sync loggers here, but you can set all loggers to async with the following system property:
log4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console-Appender" target="SYSTEM_OUT">
            <PatternLayout>
                <Pattern>%d [%t] %-5p %c :: %m%n</Pattern>
            </PatternLayout>
        </Console>
        <RollingFile name="RollingFileAppender" fileName="d:/logs/WebApplog4j2.log"
                     filePattern="d:/logs/WebApplog4j2-$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz"
                     immediateFlush="false">
            <PatternLayout>
                <Pattern>%d [%t] %-5p %c :: %m%n</Pattern>
            </PatternLayout>
            <Policies>
                <OnStartupTriggeringPolicy />
                <TimeBasedTriggeringPolicy />
                <SizeBasedTriggeringPolicy size="10 MB" />
            </Policies>
            <DefaultRolloverStrategy max="20" />
        </RollingFile>
    </Appenders>
    <Loggers>
        <AsyncLogger name="org.hibernate" level="ERROR">
            <AppenderRef ref="RollingFileAppender" />
        </AsyncLogger>
        <AsyncLogger name="com.app" level="ERROR" additivity="false">
            <AppenderRef ref="RollingFileAppender" />
        </AsyncLogger>
        <Root level="error">
            <AppenderRef ref="Console-Appender" />
        </Root>
    </Loggers>
</Configuration>
  • Pass in JVM argument to provide path to the log4j2.xml file:
-Dlog4j.configurationFile=D:\logs\log4j2.xml
  • This was enough to get most of my logging done asynchronously but I still could not get hibernate logging to work. I had to do some additional configuration for the same:

Add the following slf4j dependency:

	<dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-slf4j-impl</artifactId>
            <version>${log4j.version}</version>
    </dependency>

Pass in JVM argument for the jboss logging provider since Hibernate uses jboss logging:

-Dorg.jboss.logging.provider=log4j2

Angular 5 Introduction

After a long hiatus from Angular, I started building a website with Angular 5 recently and was pleasantly surprised by the complete overhaul of the framework compared to the older versions I was familiar with (Angular 2):

  • Much easier object oriented support
  • Easy to use HTTP Client
  • TypeScript, which offers static typing that makes it easier for the compiler to spot errors; the type information is removed when the code is transpiled to JavaScript.

Getting Started:

  • Install NodeJs and npm
  • Install the Angular CLI:
npm install -g @angular/cli
  • Create a new project:
ng new <project-name>
  • Serve it:
npm start

Voila! Your Angular app is up and running.

Now let’s do something useful, aka build a component (e.g., a login component):

ng g c login

This will generate a login component with the following files:

  • login.component.html
  • login.component.ts
  • login.component.css
  • login.component.spec.ts

The structure is similar to earlier versions of Angular, which used JavaScript instead of TypeScript.

To use this component, it has to be declared in app.module.ts:

import { LoginComponent } from './login/login.component';

@NgModule({
  declarations: [
    LoginComponent
  ]
})

Provide routing for it in app-routing.module.ts:

import { LoginComponent } from './login/login.component';
const routes: Routes = [
 { path: '',
    redirectTo: '/login',
    pathMatch: 'full'
  },
  { path: 'login', component: LoginComponent }
 ];

Now to serve it on the app default webpage, edit app.component.html to contain:

<router-outlet></router-outlet>

That’s it. Easy as that. Now you can build slick interfaces using Angular and Angular Material. I will write more on this in subsequent posts.

Spring Actuator

Spring Boot comes with tools for building production-ready microservices. One of the more important features Boot provides is the Spring Boot Actuator, which exposes operational information about your running microservice. Once the service is up and running, the actuator endpoints can be accessed using:

http://localhost:<port>/<application_name>/actuator/endpoint

e.g:

http://localhost:8080/applicationTest/actuator/info

{
  "build" : {
    "version" : "0.0.1-SNAPSHOT",
    "artifact" : "applicationTest",
    "name" : "applicationTest",
    "group" : "com.test",
    "time" : "2018-03-09T21:50:26.591Z"
  }
}
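The build block above is populated from a META-INF/build-info.properties file, which the Spring Boot Maven plugin can generate. A hedged pom fragment (assuming the plugin version is inherited from the Boot parent):

```xml
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <executions>
        <execution>
            <goals>
                <goal>build-info</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```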

By default only the info and health endpoints are exposed. The other endpoints are enabled but need to be exposed over HTTP:

management.endpoints.web.exposure.include=*
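Exposing every endpoint is handy in development but risky in production. A more restrictive example using the same Spring Boot 2.x properties:

```properties
# Expose only the endpoints you actually monitor
management.endpoints.web.exposure.include=health,info,metrics
# Show full health details (components, disk space, etc.)
management.endpoint.health.show-details=always
```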

To enable pretty-printing like the above:

spring.jackson.serialization.indent_output=true