In this article, we provide a quick overview of different ways to handle concurrency in the JVM, with a basic solution to the producer-consumer problem.
Concurrency is the execution of multiple tasks to achieve a goal; the tasks may or may not execute simultaneously (which is what distinguishes concurrency from parallelism). It involves several challenges, such as race conditions (multiple tasks access shared data and try to change it at the same time), memory consistency (multiple tasks have inconsistent views of what should be the same data), and deadlocks (multiple tasks block each other, each waiting to acquire a resource held by another).
The alternatives around the JVM keep growing, and nowadays there are several languages and frameworks that take advantage of the support the JVM offers and provide their own ways to tackle this kind of problem.
A common problem that involves concurrency is the producer-consumer problem: two separate processes share a common queue of data. The producer generates data and adds it to the shared queue, and the consumer takes data from it. There can be several producers and consumers.
Using this problem as a base, we will take a quick look at several existing approaches to solving it.
Java's basic concurrency building block is the Thread. Creating a Thread requires defining a Runnable that contains the logic of the task to be executed.
For the shared queue between the processes, Java provides the BlockingQueue interface, which is thread-safe (multiple threads can add and remove elements without concurrency issues). BlockingQueue provides two main methods: put() and take().
Its put() method blocks the calling thread if the queue is full. Similarly, if the queue is empty, its take() method blocks the calling thread.
Using this approach, our producer executes indefinitely, adding random values to the queue (using the put method) and logging each one to the console. We will have 10 producers; the start() method call after each new Thread declaration starts the producer's execution. In a similar fashion, our 10 consumers take values from the queue and log them.
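A minimal sketch of this approach could look as follows; the Producer and Consumer class names, the queue capacity, and the logging are illustrative assumptions rather than the article's original listing:

```java
import java.util.Random;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Producer: puts random values into the shared queue indefinitely.
class Producer implements Runnable {
    private final BlockingQueue<Integer> queue;
    private final Random random = new Random();

    Producer(BlockingQueue<Integer> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (true) {
                int value = random.nextInt(1000);
                queue.put(value); // blocks if the queue is full
                System.out.println("Produced " + value);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

// Consumer: takes values from the shared queue indefinitely.
class Consumer implements Runnable {
    private final BlockingQueue<Integer> queue;

    Consumer(BlockingQueue<Integer> queue) {
        this.queue = queue;
    }

    @Override
    public void run() {
        try {
            while (true) {
                int value = queue.take(); // blocks if the queue is empty
                System.out.println("Consumed " + value);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(100);
        for (int i = 0; i < 10; i++) {
            new Thread(new Producer(queue)).start();
            new Thread(new Consumer(queue)).start();
        }
    }
}
```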
Creating a thread is an expensive process (it spawns an OS thread), and creating too many threads hurts performance due to context switching (the JVM stops processing one thread to start processing another, which requires storing the stopped thread's state so it can be resumed later).
Thread pools improve this situation: a group of pre-allocated threads that are reused. Thread pools can be created using the ExecutorService API; in our scenario a fixed thread pool is useful because we have a fixed number of long-running tasks.
After including the thread pool our producer will look a little bit different:
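For instance, reusing the Producer and Consumer runnables from the previous sketch, the manual threads could be replaced by a fixed pool sized for our 20 long-running tasks (the pool size and queue capacity are assumptions):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerPool {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(100);

        // One pre-allocated thread per long-running task (10 producers + 10 consumers)
        ExecutorService executor = Executors.newFixedThreadPool(20);
        for (int i = 0; i < 10; i++) {
            executor.submit(new Producer(queue));
            executor.submit(new Consumer(queue));
        }
    }
}
```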
Scala provides its own approach to handling concurrent tasks called Futures, which are placeholder objects for a value that may not yet exist. The advantage of Futures over threads is that they can be composed (through combinators like flatMap, foreach, and filter) in a non-blocking way.
Futures require an ExecutionContext, which can be backed by traditional Java thread pools. For example, in this approach we could declare an execution context in the following way:
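A sketch of such a declaration, assuming the same fixed pool size of 20 used above:

```scala
import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// An execution context backed by a fixed Java thread pool of 20 threads
implicit val ec: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newFixedThreadPool(20))
```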
Creating a Future is an eager operation: it is immediately sent to an execution context to be executed. On each Future creation, or on each execution of map, flatMap, etc., a new task is sent to an execution context to be scheduled.
Using a BlockingQueue as the shared queue, our producer will be a Future that executes the producer logic in a recursive fashion (instead of the infinite loop of the previous approach); our consumer follows the same structure.
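A minimal sketch of how this could look; the recursion through flatMap, the pool size, and the produce and consume names are assumptions:

```scala
import java.util.concurrent.{BlockingQueue, Executors, LinkedBlockingQueue}
import scala.concurrent.{ExecutionContext, Future}
import scala.util.Random

object FutureProducerConsumer extends App {

  implicit val ec: ExecutionContext =
    ExecutionContext.fromExecutor(Executors.newFixedThreadPool(20))

  val queue: BlockingQueue[Int] = new LinkedBlockingQueue[Int](100)

  // Producer: puts a random value into the queue, then recursively schedules itself.
  def produce(id: Int): Future[Unit] = Future {
    val value = Random.nextInt(1000)
    queue.put(value)
    println(s"Producer $id produced $value")
  }.flatMap(_ => produce(id))

  // Consumer: takes a value from the queue, then recursively schedules itself.
  def consume(id: Int): Future[Unit] = Future {
    val value = queue.take()
    println(s"Consumer $id consumed $value")
  }.flatMap(_ => consume(id))

  (1 to 10).foreach { id =>
    produce(id)
    consume(id)
  }

  Thread.sleep(10000) // keep the main thread alive for a while
}
```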
cats-effect is a fully-fledged runtime platform to execute parallel and concurrent computations.
One important cats-effect data type is IO. It is used for encoding side effects as pure values, is capable of expressing both synchronous and asynchronous computations, and describes a chain of computations that will be executed.
As opposed to Futures, creating an IO does not send any task to a thread pool: manipulating IOs means we are manipulating values, while manipulating Futures means we are manipulating computations that are already running or enqueued.
IOs are executed on top of Fibers, the fundamental concurrency primitive in cats-effect. Fibers are lightweight threads (about 150 bytes per fiber), which allows creating tens of millions of fibers if needed.
One advantage of fibers over Futures is that they can be canceled; by default, fibers are cancelable at all points during their execution.
cats-effect has three independent thread pools to evaluate programs: a work-stealing compute pool for CPU-bound work, a blocking pool for blocking operations, and a scheduler for timed events such as sleeps and timeouts.
cats-effect includes the concept of semantic blocking: a blocked fiber doesn't block its underlying thread; instead, the fiber is descheduled, giving another fiber the chance to run on one of the pool's available threads.
cats-effect also provides a purely functional, concurrent implementation of a queue. This queue blocks the taking fiber when the queue is empty, and it can be constructed with different policies for how the offering fiber behaves when the queue has reached capacity: bounded (offering blocks semantically when full), synchronous (a zero-capacity queue where offers and takes must pair up), unbounded (offering never blocks), dropping (new elements are discarded when full), and circularBuffer (the oldest elements are discarded when full).
In cats-effect our producer will look like the following: the producer is invoked recursively and adds elements to the queue through its offer method (our consumer will have a very similar structure, obtaining elements from the queue through its take method), and both producer and consumer have an IO as their return type:
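A sketch of these functions; the value range, logging, and object name are illustrative assumptions, and the consumer mirrors the producer using take instead of offer:

```scala
import cats.effect.IO
import cats.effect.std.Queue
import scala.util.Random

object ProducerConsumer {

  // Recursive producer: offers a random value to the queue, logs it, and repeats.
  def producer(id: Int, queue: Queue[IO, Int]): IO[Unit] =
    for {
      value <- IO(Random.nextInt(1000))
      _     <- queue.offer(value)
      _     <- IO.println(s"Producer $id produced $value")
      _     <- producer(id, queue)
    } yield ()

  // Consumer: mirrors the producer, taking values from the queue instead.
  def consumer(id: Int, queue: Queue[IO, Int]): IO[Unit] =
    for {
      value <- queue.take
      _     <- IO.println(s"Consumer $id consumed $value")
      _     <- consumer(id, queue)
    } yield ()
}
```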
For our scenario we will have an unbounded queue, and we will define 10 producers and 10 consumers. The parSequence method takes the declared consumer and producer IOs and starts a fiber to execute each of them concurrently:
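Building on those definitions, the main program could be sketched as follows (the application object name is an assumption):

```scala
import cats.effect.{IO, IOApp}
import cats.effect.std.Queue
import cats.implicits._
import ProducerConsumer.{consumer, producer}

object ProducerConsumerApp extends IOApp.Simple {

  def run: IO[Unit] =
    for {
      queue <- Queue.unbounded[IO, Int]
      producers = (1 to 10).map(producer(_, queue)).toList
      consumers = (1 to 10).map(consumer(_, queue)).toList
      // parSequence starts a fiber for each IO and runs them concurrently
      _ <- (producers ++ consumers).parSequence.void
    } yield ()
}
```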
Kotlin's concurrency approach is coroutines: suspendable computations (functions can suspend their execution at some point and resume later) that are executed independently of other blocks of code.
Coroutines can be blocked using the delay function; in a similar fashion to cats-effect, this is semantic blocking: it frees up the current thread to do something else, and the rest of the coroutine is then executed on some other thread.
The Kotlin coroutines runtime requires a coroutine context, also called a continuation: a data structure that stores all local state up to the point where the coroutine was blocked (it can be blocked several times) so that it can be resumed on the next thread scheduled to continue the execution.
To run several suspend functions in sequence, a coroutine scope is used, which starts a context for coroutines. Inside a coroutine scope, all the contained suspend functions are executed in sequence unless they are contained inside a launch block; a launch block essentially starts a new coroutine that executes in parallel.
As the shared data structure, Kotlin offers channels. A channel is conceptually similar to a queue: one or more coroutines can write to and read from it, and it has a suspending send function and a suspending receive function.
There are four types of channels: Rendezvous (no buffer, so send suspends until another coroutine receives), Buffered (a fixed capacity, so send suspends when the buffer is full), Unlimited (an unlimited buffer, so send never suspends), and Conflated (only the most recently sent element is kept).
Using coroutines, our producer will look like the following: it will indefinitely send values to the channel, and a random delay allows the semantic blocking of the producer coroutines. The consumer will have a very similar structure, obtaining data from the channel using the receive method:
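A minimal sketch of such a producer coroutine; the value range and the delay bound are assumptions:

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.delay
import kotlin.random.Random

// Producer coroutine: sends random values to the channel forever.
// The random delay semantically blocks this coroutine, freeing its thread.
suspend fun producer(id: Int, channel: Channel<Int>) {
    while (true) {
        val value = Random.nextInt(1000)
        channel.send(value)
        println("Producer $id produced $value")
        delay(Random.nextLong(500))
    }
}
```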
For our scenario we will have an unlimited channel, and we will define 10 producers and 10 consumers; declaring each of these coroutines inside a launch block executes each of them concurrently:
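Continuing the sketch (with a consumer assumed to mirror the producer, calling receive), the entry point could look like this:

```kotlin
import kotlinx.coroutines.channels.Channel
import kotlinx.coroutines.launch
import kotlinx.coroutines.runBlocking

// Consumer coroutine: reads values from the channel forever.
suspend fun consumer(id: Int, channel: Channel<Int>) {
    while (true) {
        val value = channel.receive()
        println("Consumer $id consumed $value")
    }
}

fun main() = runBlocking {
    // An unlimited channel, so send never suspends because of capacity
    val channel = Channel<Int>(Channel.UNLIMITED)
    repeat(10) { id ->
        launch { producer(id, channel) }  // each launch starts a new coroutine
        launch { consumer(id, channel) }
    }
}
```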
Actors are objects that encapsulate state and behavior; they communicate exclusively by exchanging messages that are placed into the recipient's mailbox. Conceptually, each Akka actor has its own lightweight thread, which is completely shielded from the rest of the system.
Instead of calling methods, actors send messages to each other. Sending a message does not transfer the thread of execution from the sender to the destination. An actor can send a message and continue without blocking.
Messages go into actor mailboxes. The behavior of the actor describes how the actor responds to messages (like sending more messages and/or changing state). An execution environment orchestrates a pool of threads to drive all these actions completely transparently.
An important difference between passing messages and calling methods is that messages have no return value. Instead, the receiving actor delivers the results in a reply message.
For our problem, instead of using a shared collection, we can take advantage of the actor model and use an actor instance as our shared queue; because an actor's received messages are enqueued and processed one at a time, we can support multiple producers and consumers.
Our producer actor will have the following structure: it supports one message type, Produce, and holds a reference to the queue actor. When a Produce message is received by the Producer actor, it generates new data and sends it to the queue actor using an Add message:
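As a sketch using classic Akka actors (the message protocol below, the value range, and the props helpers are illustrative assumptions rather than the article's exact code):

```scala
import akka.actor.{Actor, ActorRef, Props}
import scala.util.Random

// Hypothetical message protocol shared by all the actors in this example
case object Produce
case object Consume
case object Init
case object Retrieve
case class Add(value: Int)
case class Obtained(value: Option[Int])

// Producer: on every Produce message it generates a random value
// and sends it to the queue actor wrapped in an Add message.
class Producer(queueActor: ActorRef) extends Actor {
  def receive: Receive = {
    case Produce =>
      val value = Random.nextInt(1000)
      println(s"Produced $value")
      queueActor ! Add(value)
  }
}

object Producer {
  def props(queueActor: ActorRef): Props = Props(new Producer(queueActor))
}
```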
The consumer actor also has a reference to the queue actor and supports two kinds of messages, Consume and Obtained. When a Consume message is received, this actor requests a value from the queue actor through a Retrieve message. When an Obtained message is received, this actor checks whether the message actually contains a value and prints it to the console:
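Continuing that sketch, the consumer could look like this:

```scala
import akka.actor.{Actor, ActorRef, Props}

// Consumer: asks the queue for a value and prints whatever comes back.
class Consumer(queueActor: ActorRef) extends Actor {
  def receive: Receive = {
    case Consume =>
      queueActor ! Retrieve             // request a value from the queue actor
    case Obtained(Some(value)) =>
      println(s"Consumed $value")       // the queue replied with a value
    case Obtained(None) =>
      println("Nothing to consume yet") // the queue was empty
  }
}

object Consumer {
  def props(queueActor: ActorRef): Props = Props(new Consumer(queueActor))
}
```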
Our QueueActor will support three kinds of messages: Init, Add, and Retrieve. When it receives an Init message, it changes its behavior to support the other two messages, starting from an initial, empty queue collection that holds the produced values internally.
Then, in the updated behavior, when an Add message is received the actor becomes the same behavior again with the updated queue containing the newly added value (this strategy keeps the actor's state without using mutable collections or variables).
When a Retrieve message is received (sent by Consumer actors) the actor tries to obtain a value from the inner queue and sends it as a reply to the sender actor in the form of an Obtained message.
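A sketch of this queue actor, carrying its state in the behavior through context.become with an immutable Scala Queue (the choice of collection is an assumption):

```scala
import akka.actor.Actor
import scala.collection.immutable.Queue

// QueueActor: holds the shared queue as immutable state carried in its behavior.
class QueueActor extends Actor {
  def receive: Receive = {
    case Init =>
      // Switch to the active behavior with an empty queue
      context.become(active(Queue.empty[Int]))
  }

  def active(values: Queue[Int]): Receive = {
    case Add(value) =>
      // Become the same behavior with the updated queue
      context.become(active(values.enqueue(value)))
    case Retrieve =>
      values.dequeueOption match {
        case Some((value, rest)) =>
          sender() ! Obtained(Some(value)) // reply to the consumer that asked
          context.become(active(rest))
        case None =>
          sender() ! Obtained(None)        // nothing to hand out yet
      }
  }
}
```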
Finally, our main logic creates and initializes our QueueActor using an Init message. Then it creates instances of Producer and Consumer actors and indefinitely sends them Produce and Consume messages respectively.
Those messages are processed by the producers and consumers in a round-robin fashion thanks to the RoundRobinPool router that created them.
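Putting the pieces together, the main logic could be sketched roughly as follows; the system name, the pool sizes, and the pacing delay are assumptions:

```scala
import akka.actor.{ActorSystem, Props}
import akka.routing.RoundRobinPool

object ProducerConsumerMain extends App {
  val system = ActorSystem("producer-consumer")

  val queueActor = system.actorOf(Props[QueueActor](), "queue")
  queueActor ! Init

  // Round-robin pools of 10 producers and 10 consumers
  val producers = system.actorOf(RoundRobinPool(10).props(Producer.props(queueActor)), "producers")
  val consumers = system.actorOf(RoundRobinPool(10).props(Consumer.props(queueActor)), "consumers")

  while (true) {
    producers ! Produce
    consumers ! Consume
    Thread.sleep(100) // pace the message flow a little
  }
}
```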
Until Project Loom appeared, every thread in the JVM was basically a wrapper of an OS thread. Now, in recent Java versions (as a preview since Java 19), we are able to use virtual threads.
Virtual threads are a new type of thread that tries to overcome the resource limitations of platform threads; they are lightweight threads, in a similar fashion to some of the previously reviewed approaches.
One of the main advantages of these new virtual threads is that they work with the existing Thread API. To use virtual threads we only need to apply a couple of changes to our first examples.
In our first example we only need to update the new thread declaration:
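Reusing the Producer and Consumer runnables and the queue from the first sketch, that change could look like this:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class VirtualThreadProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(100);
        // Start each producer and consumer on a virtual thread instead of a platform thread
        for (int i = 0; i < 10; i++) {
            Thread.ofVirtual().start(new Producer(queue));
            Thread.ofVirtual().start(new Consumer(queue));
        }
    }
}
```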
In our second example we only need to update the definition of the executor service. Because of the characteristics of virtual threads, it is no longer necessary to maintain a fixed, small number of threads; we can create a new virtual thread per task (which greatly simplifies how we reason about this kind of problem using the Java API):
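For example, again reusing the earlier Producer and Consumer sketches:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

public class VirtualThreadExecutorExample {
    public static void main(String[] args) {
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>(100);

        // One new virtual thread per submitted task; no need to size a pool
        ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();
        for (int i = 0; i < 10; i++) {
            executor.submit(new Producer(queue));
            executor.submit(new Consumer(queue));
        }
    }
}
```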
As we have reviewed, the ecosystem around the JVM offers several alternatives for handling concurrency (and there are surely more that weren't included in this quick overview), including powerful runtimes that hide a lot of the complexity involved in concurrency, like cats-effect, Akka, and Kotlin's coroutines.
Understanding the advantages and characteristics of these different approaches can help us make better choices for our future projects. For instance, some problems may be more challenging to solve using the actors' approach while others may be significantly easier. The arrival of Virtual Threads will offer a new set of possibilities and the future of these concurrency runtimes will be linked to this big upgrade in the JVM.
The complete code of these examples can be found on GitHub.