Many people reach for different solutions and sometimes overcomplicate their application with message queues like ActiveMQ, RabbitMQ, and other options out there. A message broker is yet another part of the application architecture that requires maintenance and support, and it is also a potential failure point.
There are good reasons to use that approach, but in most cases you can get by with a normal in-memory queue and multi-threading.
Building an asynchronous controller
In Spring Boot you can easily make a controller method asynchronous with one simple annotation: @Async. Here is a simple method that is part of a REST controller. The @Async annotation takes an extra parameter, <strong>value</strong>, which identifies which TaskExecutor to load.
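A minimal sketch of such a controller method, assuming a "contactsExecutor" executor bean (configured below); the endpoint path and request body type are illustrative, not from the original article:

```java
import org.springframework.scheduling.annotation.Async;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ContactsController {

    // Runs on a thread from the "contactsExecutor" pool instead of the
    // servlet request thread, so the HTTP response returns immediately.
    @Async("contactsExecutor")
    @PostMapping("/contacts")
    public void saveContact(@RequestBody String contact) {
        // long-running processing happens here, off the request thread
    }
}
```

Note that @Async only works on calls that go through the Spring proxy, so calling this method from another method of the same class would bypass the executor.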
Configure queue and thread pool for async execution
Now that we have our controller method set up to execute in threads, we need to build our TaskExecutor. I am using a @Bean configuration class with a method that generates a ThreadPoolTaskExecutor, fully configured with properties injected from the application.properties file. The meaning of each option is explained further down, where we set up the properties file. Besides the @Bean annotation, you need to add @Qualifier("contactsExecutor"), which specifies a name for our bean so we can use that name in the @Async annotation on the controller's method.
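A sketch of that configuration class; the contacts.* property keys are assumptions (only contacts.queue.capacity is named in this article), and the bean name matches the value used in @Async:

```java
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.task.TaskExecutor;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

@Configuration
public class AsyncConfig {

    @Value("${contacts.core.pool.size}")
    private int corePoolSize;

    @Value("${contacts.max.pool.size}")
    private int maxPoolSize;

    @Value("${contacts.queue.capacity}")
    private int queueCapacity;

    @Value("${contacts.keepalive.seconds}")
    private int keepAliveSeconds;

    // Bean name "contactsExecutor" is what @Async("contactsExecutor") resolves.
    @Bean
    @Qualifier("contactsExecutor")
    public TaskExecutor contactsExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(corePoolSize);
        executor.setMaxPoolSize(maxPoolSize);
        executor.setQueueCapacity(queueCapacity);
        executor.setKeepAliveSeconds(keepAliveSeconds);
        executor.setThreadNamePrefix("contacts-");
        executor.initialize();
        return executor;
    }
}
```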
Base configuration and properties
These are some basic configuration properties for our ThreadPoolTaskExecutor. For example, contacts.queue.capacity is self-explanatory: it is the maximum size of the queue backing the thread pool.
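The properties might look like this in application.properties; the key names other than contacts.queue.capacity are assumptions, while the values match the sizes discussed below:

```properties
# Sizing for the contacts executor
contacts.core.pool.size=10
contacts.max.pool.size=25
contacts.queue.capacity=25
contacts.keepalive.seconds=2
```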
On the other hand, core pool size and max pool size are a bit less clear, so let me explain them in more detail. When our application starts receiving requests, a new thread is created for each request up to the core pool size, which in our case is 10 threads in total. After we exceed the core pool, new messages get added to a queue, which by default is a LinkedBlockingQueue: an optionally-bounded blocking queue based on linked nodes, with FIFO (first-in-first-out) ordering of elements.
The application will add unprocessed messages to the queue until it reaches its capacity of 25. After that, if we have no available core threads and no space in the queue, new threads are spawned up to the limit of the max pool size, which is 25.
Lastly, we have a keep-alive option that applies to the threads above the core pool size: if such a thread stays idle for longer than 2 seconds, it is terminated and removed from the pool. When a new message arrives and there is no free thread and no space left in the queue, the task is rejected. The executor can also be configured with a special rejection policy, but we will use the default one, ThreadPoolExecutor.AbortPolicy(), which throws an exception for the rejected task.
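These semantics come from the plain java.util.concurrent ThreadPoolExecutor that backs Spring's ThreadPoolTaskExecutor, so they can be demonstrated without Spring. This standalone sketch uses deliberately tiny sizes (core 1, queue capacity 1, max 2) so the rejection is easy to trigger:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {

    // Returns true if the fourth submission was rejected.
    static boolean demo() throws InterruptedException {
        // core 1, max 2, keep-alive 2s, bounded queue of 1:
        // at most 2 running tasks plus 1 queued task.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 2, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());

        Runnable sleepy = () -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) {}
        };

        boolean rejected = false;
        pool.execute(sleepy); // 1st: starts the single core thread
        pool.execute(sleepy); // 2nd: core busy -> waits in the queue
        pool.execute(sleepy); // 3rd: queue full -> extra thread up to max
        try {
            pool.execute(sleepy); // 4th: no thread, no queue slot
        } catch (RejectedExecutionException e) {
            rejected = true;      // AbortPolicy throws for rejected tasks
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo() ? "rejected" : "accepted"); // prints "rejected"
    }
}
```

The first three submissions fill the core thread, the queue, and the extra thread in that order; only then does the fourth hit the rejection policy.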
Enable Async on your application
Just one small thing: to make sure @Async works properly, make sure you have added @EnableAsync to your main application class. Example:
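A minimal sketch of such a main class (the class name is illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableAsync;

@SpringBootApplication
@EnableAsync // turns on processing of @Async annotations
public class Application {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
```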