When doing centralized logging, e.g. using Elasticsearch, Logstash and Kibana or Graylog2, you have several options available for your Java application. You can write your standard application logs and parse those using Logstash, either consuming them directly or shipping them to another machine using something like logstash-forwarder. Alternatively you can write in a more appropriate format like JSON directly, so the processing step doesn't need that much work for parsing your messages. As a third option you can write to a different data store directly, which acts as a buffer for your log messages. In this post we are looking at how to configure Logback in a Spring Boot application to write the log messages to Redis directly.
Redis
We are using Redis as a log buffer for our messages. Not everyone is happy with Redis, but it is a common choice. Redis keeps its content in memory, which makes it well suited for fast access, but it can also persist it to disk when necessary. A special feature of Redis is that values can be of different data types like strings, lists or sets. Our application uses a single key/value pair where the key is the name of the application and the value is a list that contains all of our log messages. This way we can handle several logging applications in one Redis instance.
When testing your setup you might also want to look at the data that is stored in Redis. You can access it using the redis-cli client. I collected some useful commands for validating that your log messages are actually written to Redis.
| Command | Description |
|---|---|
| `KEYS *` | Show all keys in this Redis instance |
| `LLEN key` | Show the number of messages in the list for `key` |
| `LRANGE key 0 100` | Show the first 100 messages in the list for `key` |
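To get a feel for what this looks like, here is a sketch of a `redis-cli` session against the key used later in this post; the counts and message contents are just placeholders.

```
$ redis-cli
127.0.0.1:6379> KEYS *
1) "my-spring-boot-app"
127.0.0.1:6379> LLEN my-spring-boot-app
(integer) 42
127.0.0.1:6379> LRANGE my-spring-boot-app 0 1
1) "{...first JSON log event...}"
2) "{...second JSON log event...}"
```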
The Logback Config
When working with Logback, most of the time an XML file is used for all the configuration. Appenders are the components that send the log output somewhere; loggers are used to set log levels and attach appenders to certain parts of the application.
For Spring Boot, Logback is available for any application that uses the spring-boot-starter-logging, which is also a dependency of the common spring-boot-starter-web. The configuration can be added to a file called `logback.xml` that resides in `src/main/resources`.
Spring Boot comes with a file and a console appender that are already configured correctly. We can include the base configuration in our file to keep all the predefined settings.
For logging to Redis we need to add another appender. A good choice is the logback-redis-appender, which is rather lightweight and uses the Java client Jedis. The log messages are written to Redis as JSON directly, so it's a perfect match for something like Logstash. We can make Spring Boot log to a local instance of Redis by using the following configuration.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <appender name="LOGSTASH" class="com.cwbase.logback.RedisAppender">
        <host>localhost</host>
        <port>6379</port>
        <key>my-spring-boot-app</key>
    </appender>
    <root level="INFO">
        <appender-ref ref="LOGSTASH" />
        <appender-ref ref="CONSOLE" />
        <appender-ref ref="FILE" />
    </root>
</configuration>
```
We configure an appender named `LOGSTASH` that is an instance of `RedisAppender`. Host and port are set for a local Redis instance; `key` identifies the Redis key that is used for our logs. There are more options available, like the interval at which log messages are pushed to Redis. Explore the readme of the project for more information.
Spring Boot Dependencies
To make the logging work we of course have to add a dependency on the logback-redis-appender to our pom.
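A minimal Maven snippet might look like the following; the version is only an example, so check the project's readme for the current release.

```xml
<!-- appender that writes Logback events to Redis as JSON (version is an example) -->
<dependency>
    <groupId>com.cwbase</groupId>
    <artifactId>logback-redis-appender</artifactId>
    <version>1.1.3</version>
</dependency>
```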
Depending on your Spring Boot version you might then see some errors in your log file complaining about missing methods.
This is because Spring Boot manages the dependencies it uses internally, and the versions for jedis and commons-pool2 do not match the ones that we need. If this happens we can configure the versions to use in the properties section of our pom.
```xml
<properties>
    <commons-pool2.version>2.0</commons-pool2.version>
    <jedis.version>2.5.2</jedis.version>
</properties>
```
Now the application will start and you can see that it sends the log messages to Redis as well.
Enhancing the Configuration
Having the host and port configured in the `logback.xml` is not the best thing to do. When deploying to another environment with different settings you have to change the file or deploy a custom one.
The Spring Boot integration of Logback allows you to set some of the configuration options, like the file to log to and the log levels, using the main configuration file `application.properties`. Unfortunately this is special treatment for certain values only, and as far as I could see you can't add custom values.
But fortunately Logback supports the use of environment variables, so we don't have to rely on configuration files. Having set the environment variables `REDIS_HOST` and `REDIS_PORT` you can use the following configuration for your appender.
```xml
<appender name="LOGSTASH" class="com.cwbase.logback.RedisAppender">
    <host>${REDIS_HOST}</host>
    <port>${REDIS_PORT}</port>
    <key>my-spring-boot-app</key>
</appender>
```
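As a side note, Logback's variable substitution also supports default values using the `:-` syntax, so if you prefer a fallback over more configuration you could write something like this (falling back to a local Redis instance is just one possible choice):

```xml
<!-- falls back to a local Redis instance when the environment variables are not set -->
<host>${REDIS_HOST:-localhost}</host>
<port>${REDIS_PORT:-6379}</port>
```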
We can even go one step further. To activate the appender only when the variables are set, you can add conditional processing to your configuration.
```xml
<if condition='isDefined("REDIS_HOST") &amp;&amp; isDefined("REDIS_PORT")'>
    <then>
        <appender name="LOGSTASH" class="com.cwbase.logback.RedisAppender">
            <host>${REDIS_HOST}</host>
            <port>${REDIS_PORT}</port>
            <key>my-spring-boot-app</key>
        </appender>
    </then>
</if>
```
You can use a Java expression to decide whether the block should be evaluated. When the appender is not available, Logback will just log an error and use any other appenders that are configured. For this to work you need to add the Janino library to your pom.
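As a sketch, the Maven dependency could look like the following; the version is only an example, so pick the one recommended for your Logback version.

```xml
<!-- Janino enables conditional processing in Logback (version is an example) -->
<dependency>
    <groupId>org.codehaus.janino</groupId>
    <artifactId>janino</artifactId>
    <version>2.7.8</version>
</dependency>
```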
Now the appender is activated based on the environment variables. If you like you can skip the setup for local development and only set the variables on production systems.
Conclusion
Getting started with Spring Boot or logging to Redis alone is very easy, but some of the details take some work to get right. It's worth the effort though: once you get used to centralized logging you don't want to run your systems without it anymore.