Automated tests for Spring Boot WebSocket server

20 05 2017

Developing a WebSocket server for your Spring Boot app is fairly simple and well documented. However, making sure that it ‘actually works’ is, in most cases, done manually.

Below I will show how I do automated integration tests for a WebSocket server using Spring’s StompClient. I assume that you are familiar with the idea of WebSockets in Spring. If not, here is a very good article:

Source Code

The code for this tutorial is available here:

System under test: configuration

The demo will be presented on the simplest WS configuration, which consists of one entry point endpoint (`/ws`) and an in-memory message broker (under `/queue`):

@Configuration
@EnableWebSocketMessageBroker
public class WsConfig extends AbstractWebSocketMessageBrokerConfigurer {

    @Override
    public void registerStompEndpoints(StompEndpointRegistry registry) {
        registry.addEndpoint("/ws");
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {
        registry.enableSimpleBroker("/queue");
    }
}

The idea behind the integration test

In the test I’m going to:
– use SpringRunner to start up the whole application with the full context
– autowire the component that in production is responsible for sending messages to WebSocket clients
– build and configure Spring’s StompClient and connect a StompSession to my WebSocket server
– send a message over WebSocket and verify that my test client received it

Starting the application for tests

With SpringRunner.class used within a JUnit test I start the app context and autowire the WsProxy component (the one that sends messages to WS clients):

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = RANDOM_PORT)
public class WsConfigIntegrationTest {

    @LocalServerPort
    private int port;

    @Autowired
    private WsProxy wsProxy;

WsProxy in this demo is a simple component sending messages with a SimpMessagingTemplate:

@Component
public class WsProxy {

    private final SimpMessagingTemplate messagingTemplate;

    @Autowired
    public WsProxy(SimpMessagingTemplate messagingTemplate) {
        this.messagingTemplate = messagingTemplate;
    }

    public void sendMessage(String clientId, String payload) {
        messagingTemplate.convertAndSend("/queue/" + clientId, payload);
    }
}

In this configuration, the URL of the WS endpoint is:

String wsUrl = "ws://localhost:" + port + "/ws";

Configuring StompClient and connecting StompSession

I use the StompClient with a minimal configuration:

WebSocketStompClient stompClient = new WebSocketStompClient(new StandardWebSocketClient());
stompClient.setMessageConverter(new StringMessageConverter());

I create a StompSession connected to my WS URL:

StompSession stompSession = stompClient.connect(wsUrl, new MyStompSessionHandler()).get();

The connect() method returns a Future. Here, in tests, I wait synchronously for the session to be ready by calling get() on it.

Oh, and don’t worry about MyStompSessionHandler. In this configuration it does nothing except debug logging on the ‘connected’ event (it just extends StompSessionHandlerAdapter).

Now it’s time to subscribe to the /queue/my-id channel within the session:

stompSession.subscribe("/queue/my-id",
        new MyStompFrameHandler((payload) -> resultKeeper.complete(payload.toString())));

The MyStompFrameHandler class is responsible for handling the incoming message within the session and completing the CompletableFuture promise that it received as an argument. The CompletableFuture is a helper variable needed to test asynchronous code:

CompletableFuture<String> resultKeeper = new CompletableFuture<>();

And the handler uses it as follows:

public class MyStompFrameHandler implements StompFrameHandler {

    private final Consumer<String> frameHandler;

    public MyStompFrameHandler(Consumer<String> frameHandler) {
        this.frameHandler = frameHandler;
    }

    @Override
    public Type getPayloadType(StompHeaders headers) {
        return String.class;
    }

    @Override
    public void handleFrame(StompHeaders headers, Object payload) {
        // log is e.g. an SLF4J logger
        log.debug("received message: {} with headers: {}", payload, headers);
        frameHandler.accept(payload.toString());
    }
}

Sending the message

The message is sent by calling the WsProxy component shown above, which uses SimpMessagingTemplate under the hood:

wsProxy.sendMessage("my-id", "test-payload");
On some machines it’s also good to wait until the connection is fully established, so don’t hesitate to add the good old:

Thread.sleep(1000);
Testing the result asynchronously

The code under test is asynchronous, so I pass the Future and wait until it completes with the expected result (failing the test on timeout), then verify its body:

assertThat(resultKeeper.get(2, SECONDS)).isEqualTo("test-payload");
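Stripped of the Spring machinery, the promise-based pattern used above can be rehearsed in plain Java. A minimal sketch (the worker thread and its delay below are my own stand-ins for the STOMP frame handler):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class ResultKeeperDemo {

    public static String awaitResult() throws Exception {
        CompletableFuture<String> resultKeeper = new CompletableFuture<>();

        // simulates the frame handler completing the promise from another thread
        new Thread(() -> {
            try {
                Thread.sleep(100);
            } catch (InterruptedException ignored) {
            }
            resultKeeper.complete("test-payload");
        }).start();

        // block until the handler delivers the payload, or fail after the timeout
        return resultKeeper.get(2, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(awaitResult()); // test-payload
    }
}
```

If the promise is never completed, get(2, SECONDS) throws a TimeoutException, which is exactly what fails the integration test.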

That’s it

Now you can run the test: it will start your app, send a message, receive it and verify the contents, which is everything you need to test your WebSocket setup.

Source Code

I’m sure that seeing the source code will help you understand the article better. Grab it from my GitHub:

Java 8 StringJoiner demo

24 01 2016

Finally Java has a convenient and intuitive API for joining strings with delimiters! Since Java 8 there is the StringJoiner class. It is an API that you may know from Guava’s Joiner classes (see my post on that). Here is a short StringJoiner demo.

Basic String joins

The most basic usage is to create a StringJoiner instance with the delimiter as a constructor param and add() strings:

StringJoiner joiner = new StringJoiner(",");
joiner.add("apple");
joiner.add("banana");
joiner.add("orange");

System.out.println("Joiner result is: " + joiner.toString());

The result is:

Joiner result is: apple,banana,orange

If you prefer, you can chain add() calls:

StringJoiner joiner = new StringJoiner(",")
        .add("apple")
        .add("banana")
        .add("orange");

Join Collection of Strings

If you have a Collection of Strings, the new static String.join() method can join them:

List<String> list = Arrays.asList("apple", "banana", "orange");
String joined = String.join(", ", list);

System.out.println("Join Array result is: " + joined);

With the result of:

Join Array result is: apple, banana, orange

Join inline

You can prepare a joined String in one line with String.join() overloaded with varargs, like this:

String.join(", ", "apple", "banana", "orange");
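StringJoiner can also take a prefix and a suffix in its three-argument constructor, which is handy for bracketed output. A short sketch:

```java
import java.util.StringJoiner;

public class PrefixSuffixDemo {

    public static String joinBracketed() {
        // delimiter, prefix, suffix
        StringJoiner joiner = new StringJoiner(", ", "[", "]");
        joiner.add("apple").add("banana").add("orange");
        return joiner.toString();
    }

    public static void main(String[] args) {
        System.out.println(joinBracketed()); // [apple, banana, orange]
    }
}
```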

Joining Collector in Stream API

When using streams you have the joining Collector at your disposal:

List<String> list = Arrays.asList("apple", "banana", "orange");
String joined = list.stream()
        .collect(Collectors.joining(", "));

System.out.println("Joined with collector: " + joined);

This will result in:

Joined with collector: apple, banana, orange

Source Code

As always, I share with you the source code for this demo on my github:

Guava Multimap demo

16 08 2015

The problem

Handling maps that store a collection of items under each key is very common. The thing I have in mind is this:

Map<String, List<Integer>> playerScoresMap = new HashMap<String, List<Integer>>();

Let’s assume that it stores scores for players. The player name is the key, and the value is a list of points scored by the player in the following rounds. The task is to add a new score for the player after each round.

Plain Java solution

The HashMap solution requires you to:

  1. check if the player key already exists in the map
    1. if it does not exist – add the key and create a new list with the score
    2. if it exists – get its current score list
  2. add the new score to the list
  3. store the list under the player key

Source code for it is:

private static void addToOldStyleMultimap(String playerName, int scoreToAdd) {
    List<Integer> scoresList = new ArrayList<Integer>();
    if (playerScoresMap.containsKey(playerName)) {
        scoresList = playerScoresMap.get(playerName);
    }
    scoresList.add(scoreToAdd);
    playerScoresMap.put(playerName, scoresList);
}
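For comparison, since Java 8 the plain-Java version can be collapsed with Map.computeIfAbsent, which creates the list only on first use. A minimal sketch (the method name is mine):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MultimapDemo {

    static Map<String, List<Integer>> playerScoresMap = new HashMap<>();

    static void addScore(String playerName, int scoreToAdd) {
        // creates the empty list only when the key is missing, then appends the score
        playerScoresMap.computeIfAbsent(playerName, k -> new ArrayList<>()).add(scoreToAdd);
    }

    public static void main(String[] args) {
        addScore("Alan", 2);
        addScore("Alan", 4);
        System.out.println(playerScoresMap); // {Alan=[2, 4]}
    }
}
```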

Guava solution

Guava has a dedicated type for maps of that kind. Java’s:

Map<String, List<Integer>> playerScoresMap = new HashMap<String, List<Integer>>();

is equivalent to Guava’s Multimap:

Multimap<String, Integer> playerScoresMultimap = HashMultimap.create();

The difference is noticeable in the adding-scores use case. To add a new score for the player, just make a call:

private static void addToGuavaMultimap(String playerName, Integer scoreToAdd) {
    playerScoresMultimap.put(playerName, scoreToAdd);
}

Guava’s Multimap does all the checks for you. If the player key exists, it adds the new score to his score collection. If it does not exist, a new key with a one-element score collection is added.

Guava Example

Calling addToGuavaMultimap in the following manner:

addToGuavaMultimap("Alan", 2);
addToGuavaMultimap("Alan", 4);
addToGuavaMultimap("Alan", 6);
addToGuavaMultimap("Paul", 87);
System.out.println("Guava multimap result: " + playerScoresMultimap);

results in the output:

Guava multimap result: {Alan=[4, 2, 6], Paul=[87]}

Source code download

The source code for this post is on my github:

Guava Cache basic demo

25 07 2015

Here I go with the caching! Caching (and cache invalidation) is one of the two most difficult things to do in programming (the first one being the naming-things problem :P ). I’ll show a demo with Guava Cache (18.0). The source code for this tutorial is on my GitHub:

Caches Explained

You may want to get familiar with this article to get the idea of how cache works:

The Demo Introduction

Here I explain a basic app that accesses data via a DAO. Let’s assume that data access is costly, so a cache is needed. I want cache entries to expire after a specified amount of time. Pretty simple.

Build Cache

Guava provides a cache builder. In my case I make use of it as follows:

cache = CacheBuilder.newBuilder()
        .expireAfterWrite(5, TimeUnit.SECONDS)
        .build(new CacheLoader<String, String>() {
            @Override
            public String load(String key) throws Exception {
                return dataDao.getValueForKey(key);
            }
        });

I do two things here:

  1. set the desired expiration algorithm and time – an entry becomes invalid 5 seconds after it was created or last updated
  2. provide the method to load an entry – Guava Cache calls this method when you try to retrieve the value from the cache for the first time (when the entry is not in it yet) or when the requested entry has expired. Here that method makes a call to my DAO.


Among the few types of eviction (size-based, timed, reference-based) I use timed eviction. There are two expiration algorithms in that case. From the Guava docs:

  • expireAfterAccess(long, TimeUnit) Only expire entries after the specified duration has passed since the entry was last accessed by a read or a write.
  • expireAfterWrite(long, TimeUnit) Expire entries after the specified duration has passed since the entry was created, or the most recent replacement of the value. This could be desirable if cached data grows stale after a certain amount of time.

expireAfterAccess works differently: as opposed to expireAfterWrite, it expires an entry if it was not accessed within the specified time. So if you constantly read the value within its expiration time, it will never get refreshed. expireAfterWrite expires entries based on their age in the cache, so a value is refreshed once its validity period has passed, no matter how frequently you access it (this is done in a lazy way: if the time has passed, the value is refreshed only when requested from the cache).
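To make the expireAfterWrite semantics concrete without pulling in Guava, here is a hypothetical plain-Java sketch with an injectable clock (the class and its API are my invention, not Guava’s):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;
import java.util.function.LongSupplier;

public class ExpireAfterWriteCache {

    private static class Entry {
        final String value;
        final long writtenAtMillis;
        Entry(String value, long writtenAtMillis) {
            this.value = value;
            this.writtenAtMillis = writtenAtMillis;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();
    private final Function<String, String> loader;
    private final long ttlMillis;
    private final LongSupplier clock;

    public ExpireAfterWriteCache(Function<String, String> loader, long ttlMillis, LongSupplier clock) {
        this.loader = loader;
        this.ttlMillis = ttlMillis;
        this.clock = clock;
    }

    public String get(String key) {
        long now = clock.getAsLong();
        Entry e = entries.get(key);
        // lazy expiration: reload only when the entry is missing or older than the TTL
        if (e == null || now - e.writtenAtMillis >= ttlMillis) {
            e = new Entry(loader.apply(key), now);
            entries.put(key, e);
        }
        return e.value;
    }

    public static void main(String[] args) {
        ExpireAfterWriteCache cache =
                new ExpireAfterWriteCache(k -> "value for " + k, 5000, System::currentTimeMillis);
        System.out.println(cache.get("Blue")); // value for Blue
    }
}
```

The injectable clock is what makes the expiry deterministic in a test; Guava’s own CacheBuilder offers the same idea through its Ticker abstraction.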

Expiration details

In this StackOverflow answer the details are explained by a Guava team member:

The Guava Cache implementation expires entries in the course of normal maintenance operations, which occur on a per-segment basis during cache write operations and occasionally during cache read operations. Entries usually aren’t expired at exactly their expiration time, just because Cache makes the deliberate decision not to create its own maintenance thread, but rather to let the user decide whether continuous maintenance is required.

I’m going to focus on expireAfterAccess, but the procedure for expireAfterWrite is almost identical. In terms of the mechanics, when you specify expireAfterAccess in the CacheBuilder, then each segment of the cache maintains a linked list access queue for entries in order from least-recent-access to most-recent-access. The cache entries are actually themselves nodes in the linked list, so when an entry is accessed, it removes itself from its old position in the access queue, and moves itself to the end of the queue.

When cache maintenance is performed, all the cache has to do is to expire every entry at the front of the queue until it finds an unexpired entry. This is straightforward and requires relatively little overhead, and it occurs in the course of normal cache maintenance. (Additionally, the cache deliberately limits the amount of work done in a single cleanup, minimizing the expense to any single cache operation.) Typically, the cost of cache maintenance is dominated by the expense of computing the actual entries in the cache.

The demo and the test

The test mechanism is simple. I set the cache entry expiration time to 5 seconds, and set up a loop to retrieve the value from the cache every second:

for (int i = 0; i < 20; i++) {
    System.out.println("got value from cache for 'Blue': " + cache.get("Blue"));
    Thread.sleep(1000);
}

I have also added a log line in the DAO on each data retrieval and a log line on each cache value request. Roughly one in five log lines is logged by the DAO data retrieval method. The cached value is a string with a timestamp; notice that it gets updated each time the data is retrieved from the DAO. This is how the console output looks (the ‘returning value from dao’ lines are logged by the DAO on data request):

Hello, Cache!
returning value from dao: value for key Blue, refreshed from DAO at 20:50:57.559
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:50:57.559
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:50:57.559
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:50:57.559
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:50:57.559
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:50:57.559
returning value from dao: value for key Blue, refreshed from DAO at 20:51:02.700
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:51:02.700
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:51:02.700
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:51:02.700
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:51:02.700
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:51:02.700
returning value from dao: value for key Blue, refreshed from DAO at 20:51:07.703
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:51:07.703
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:51:07.703
got value from cache for 'Blue': value for key Blue, refreshed from DAO at 20:51:07.703

Get the source code

Source code for this demo is on my GitHub:

Java 8 streams demo part 2: Stream of custom objects

11 07 2015

Following the List with primitive types from the post Java 8 Streams demo, now it’s time to demonstrate streams of custom objects. Please read the mentioned post first to get the idea of this one faster.


Now I want to have a list of Users. User is a POJO having a name and an age. I am going to filter out users younger than 18 years and sort the rest by age, ascending. At the beginning I have a list of users:

List<User> users = new ArrayList<User>();
users.add(new User("John", 21));
users.add(new User("Jack", 13));
users.add(new User("Joe", 56));
users.add(new User("Michelle", 37));

Filtering and sorting the stream

I convert the list to a stream with the stream() method, then filter and sort the items. Each item is now a User instance, so I can call its instance methods like getAge():

users.stream()
        .filter(item -> item.getAge() > 18)
        .sorted((item1, item2) -> item1.getAge() - item2.getAge())
        .forEach(System.out::println);

As a result I get adult users, sorted by age:

User{name='John', age=21}
User{name='Michelle', age=37}
User{name='Joe', age=56}
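The age comparator above can also be written with Comparator.comparingInt, which avoids the subtraction trick. A self-contained sketch (with a minimal User class of my own):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class UserStreamDemo {

    static class User {
        final String name;
        final int age;
        User(String name, int age) { this.name = name; this.age = age; }
        int getAge() { return age; }
        String getName() { return name; }
    }

    public static List<String> adultNamesByAge(List<User> users) {
        return users.stream()
                .filter(user -> user.getAge() > 18)
                // comparingInt is overflow-safe, unlike (a, b) -> a.getAge() - b.getAge()
                .sorted(Comparator.comparingInt(User::getAge))
                .map(User::getName)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(
                new User("John", 21), new User("Jack", 13),
                new User("Joe", 56), new User("Michelle", 37));
        System.out.println(adultNamesByAge(users)); // [John, Michelle, Joe]
    }
}
```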

Source Code

Find the source code for this tutorial on my github:

Android SQLite schema migration with patches

3 05 2014

When upgrading your Android application you often need to change its data model. When the model is stored in an SQLite database, its schema must be updated as well. I recommend the concept of patching and versioning the database. This is very well described in this article:

The idea behind it

Android lets you upgrade the schema version and react to its changes when the user installs a new app version with a higher schema version. It is the SQLiteOpenHelper class that will notify you about this.

You need to override its onUpgrade() method. When onUpgrade() detects that the current version is lower than the new one, it can apply patches in order and migrate the schema to the desired version.
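The ordering logic itself is plain Java and easy to unit test without Android. A hypothetical sketch (the Patch interface and the SQL strings are made up for illustration; in real code each patch would run its statement via db.execSQL() inside onUpgrade()):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SchemaMigrator {

    // each patch migrates the schema one version forward
    interface Patch {
        String apply();
    }

    static final List<Patch> PATCHES = Arrays.asList(
            () -> "CREATE TABLE user (id INTEGER)",          // v0 -> v1
            () -> "ALTER TABLE user ADD COLUMN name TEXT",   // v1 -> v2
            () -> "CREATE INDEX idx_user_name ON user(name)" // v2 -> v3
    );

    // mirrors onUpgrade(db, oldVersion, newVersion): apply only the missing patches, in order
    public static List<String> upgrade(int oldVersion, int newVersion) {
        return PATCHES.subList(oldVersion, newVersion).stream()
                .map(Patch::apply)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(upgrade(1, 3)); // patches v1->v2 and v2->v3, in order
    }
}
```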

I used that algorithm in one of my apps and it worked very well.

Custom Java Annotation tutorial – How often do you use them?

22 02 2014

The @Override, @Test and @Deprecated annotations are widely used. They make code more readable and clean. You can take full advantage of Java annotation features by creating your own custom annotations for classes, fields, etc.

The use case

The simple example I figured out to illustrate this post involves the PersonInfo class. It is a data class that contains… well… person info. Some of the data in it may be sensitive personal data (like the email address), some may be non-sensitive personal data (like the name), and some may not be personal data at all (like storageId in my example).
Let’s assume that the user of this class needs two print methods: the first printing all data, and the second printing only the data that is not sensitive.

The Idea behind it

To mark PersonInfo fields as sensitive data or not, I use a custom annotation: @PersonalData. It indicates that a field is in fact personal data and carries a flag indicating whether it is sensitive (a class field annotation). Then my print method finds all annotated fields, checks the ‘sensitive’ flag value and decides whether to print a particular field or not.

This lookup is done via reflection – the safe kind that will not cause many runtime errors, because it operates on classes well known at compilation time.

The source code

As always in my tutorials, you can download the source code from here.

1. How to use a custom annotation

This is the result of applying my custom annotation:

public class PersonInfo {

  @PersonalData(isSensitive = false)
  private String name;

  @PersonalData(isSensitive = false)
  private String surname;

  @PersonalData(isSensitive = true)
  private String email;

  @PersonalData(isSensitive = true)
  private String phoneNumber;

  private long storageId;
}


Looks OK, doesn’t it?

2. Parse annotated fields

The print method (an overloaded toString() in my case) uses the following algorithm:

1. it iterates over all the fields of the PersonInfo class

2. if a field is annotated with the @PersonalData annotation, it checks the ‘sensitive’ flag value

3. based on that value it decides whether to print the field or not

Reflection is used to find all class fields and read their annotations. The method is designed to be used inside the PersonInfo class, since it uses this. If you wished to put it in another file, you would have to pass the class as a parameter. Analyze the method’s body carefully:

public String processAnnotations(boolean includeSensitiveData) {
  StringBuilder sb = new StringBuilder();
  try {
    for (Field classField : this.getClass().getDeclaredFields()) {
      for (Annotation annotation : classField.getAnnotations()) {
        if (annotation.annotationType() == PersonalData.class) {
          PersonalData personalData = (PersonalData) annotation;
          String result = printPersonalData(includeSensitiveData, classField, personalData);
          sb.append(result);
        }
      }
    }
  } catch (IllegalArgumentException | IllegalAccessException e) {
    e.printStackTrace();
  }
  return sb.toString();
}

Note: use Java 7 or newer to compile it. Otherwise refactor the multi-catch block into two separate catch blocks, each catching one exception.

The printPersonalData() method is as follows:

private String printPersonalData(boolean includeSensitiveData, Field classField, PersonalData personalData)
    throws IllegalAccessException {
  StringBuilder sb = new StringBuilder();
  if (!personalData.isSensitive() || includeSensitiveData) {
    sb.append(buildText(classField));
  }
  return sb.toString();
}

And buildText() is the simple one. It concatenates the field’s name and its value:

private String buildText(Field f) throws IllegalArgumentException, IllegalAccessException {
  return f.getName() + ": " + f.get(this) + "\n";
}

3. Create your annotation

Creating a custom annotation is as simple as creating a Java interface. This is the PersonalData annotation’s implementation:


@Documented
@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface PersonalData {
  boolean isSensitive();
}
The interface definition is simple. It is written like any other Java interface, but with the @interface keyword. An interface without methods would be enough to make it work; the boolean isSensitive(); method declaration gives the ability to define more details while annotating a field.

There are three built-in annotations used in front of the @interface definition. This is their meaning:

@Documented means that this annotation will appear in JavaDoc

@Target(ElementType.FIELD) indicates that this annotation may be used with fields only. Other options are methods, types, packages, etc.

@Retention(RetentionPolicy.RUNTIME) means that the annotation will be preserved at runtime. Another option is SOURCE, meaning that it would be available only at development time.
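The whole mechanism (annotation definition, annotated field, reflective read) fits in one self-contained file. A minimal sketch with my own names, not the PersonInfo classes above:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;

public class AnnotationDemo {

    @Target(ElementType.FIELD)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Sensitive { }

    static class Account {
        String login = "jdoe";
        @Sensitive
        String password = "secret";
    }

    // returns "field=value" lines, skipping fields marked @Sensitive
    public static String printSafe(Object o) throws IllegalAccessException {
        StringBuilder sb = new StringBuilder();
        for (Field f : o.getClass().getDeclaredFields()) {
            if (!f.isAnnotationPresent(Sensitive.class)) {
                sb.append(f.getName()).append("=").append(f.get(o)).append("\n");
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IllegalAccessException {
        System.out.println(printSafe(new Account())); // login=jdoe
    }
}
```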

4. Get the source code

You are welcome to tinker with my code freely, on your own. Just download the source code from here.

Did I help you?
I manage this blog and share my knowledge for free, sacrificing my time. If you appreciate it and find this information helpful, please consider making a donation to keep this page alive and improve its quality.


Thank You!
