In many applications, two or more tasks need to share data. This can of course be done using a buffer protected by a semaphore, but it is often much better to use a message queue, as a queue also adds task synchronization and the ability to buffer data that another task cannot handle at the moment.

What is a Message Queue?
A message queue consists of a number of buffers, each of a fixed or maximum size, controlled by the RTOS, together with a queue of tasks that are waiting for messages. The size and the number of buffers are specified when the message queue is created. The task queue is used for queuing up the tasks that are waiting for messages to become available. Normally you can specify, when you create the message queue, in which order the tasks should wait: either FIFO or priority order.

Any task can send a message to the message queue, and often it can be specified whether the message should be placed at the tail or at the head of the queue, which in effect gives different messages different priority. Some RTOSs also let a task decide whether it wants to wait to send the message if the queue is already full. When the message is sent to the message queue, it is copied from the sending task's buffer into a buffer that is controlled by the RTOS.
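These send semantics can be sketched as follows, using Python's standard queue.Queue as a stand-in for an RTOS message queue (an assumption for illustration; real RTOS calls return error codes rather than raising exceptions):

```python
import queue

# A bounded queue standing in for an RTOS message queue with room
# for two messages (the capacity is fixed at creation time).
mq = queue.Queue(maxsize=2)

mq.put_nowait("msg 1")          # non-blocking send: succeeds, queue not full
mq.put_nowait("msg 2")          # the queue is now full

try:
    mq.put_nowait("msg 3")      # non-blocking send to a full queue...
    sent = True
except queue.Full:              # ...fails immediately, like an RTOS
    sent = False                # returning a "queue full" error code

# A sending task may instead choose to wait, here with a timeout,
# like an RTOS send call that takes a tick-count timeout.
try:
    mq.put("msg 3", timeout=0.01)
    sent_with_wait = True
except queue.Full:              # no consumer drained the queue in time
    sent_with_wait = False
```

Note that queue.Queue only ever appends at the tail; the "send to the head" variant for urgent messages that some RTOSs offer has no direct counterpart here.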

Any task can read a message from a message queue. If the message queue is empty, the task can decide whether it wants to wait or not. When the message is received from the message queue, it is copied from a buffer controlled by the RTOS into the receiving task's buffer. This means that when a message travels from one task to another, it is copied twice, which can of course be time consuming. So a message is often just a pointer to a buffer that holds the data.
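The pointer technique can be sketched like this (again with Python's queue.Queue as a stand-in; Python happens to pass object references through the queue, which here plays the role of the pointer):

```python
import queue

mq = queue.Queue()

# A large data buffer owned by the producer. Instead of copying the
# whole buffer into the queue, only a reference to it is sent
# (the analogue of sending a pointer in C).
big_buffer = bytearray(64 * 1024)
big_buffer[0] = 0xAB

mq.put(big_buffer)              # enqueues a reference, not a copy

received = mq.get()             # the consumer gets the very same object

same_object = received is big_buffer   # no payload copy was made
first_byte = received[0]
```

In C the message would carry the buffer's address, and the two tasks must then agree on who owns (and eventually frees or reuses) the buffer after the handoff.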

Please notice that messages are sent to a queue, not to a task, and likewise received from a queue, not from a task. In many designs it is of course obvious which tasks send and receive the messages, but in some designs this may be decided at run time.

Usage of a Message Queue
A message queue is used to send data from an ISR or a task to another task. It is a "shoot and forget" operation: the sending task or ISR sends the message and then trusts that another task will receive it later on. The message queue also provides task synchronization, in the sense that when a task sends a message and there is a task waiting for messages from that queue, the waiting task receives the message and is unblocked. So there is no need to inform the receiving task about the message in any other way.

The RTOS does not care about the content of the message; it just copies the message into one of its own buffers, and when a task wants to receive the message, the message is copied into the receiving task's buffer. So a message can contain any type of data, from the complete data itself to just a pointer to the data, or it can sometimes be a dummy message that is simply used to unblock another task.

Design Examples
The most common design is that one task (the producer task) sends messages to a queue and another task (the consumer task) receives them, see figure 1.
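A minimal sketch of this design, with Python threads standing in for tasks and a None sentinel as a hypothetical stop convention (not part of any RTOS API):

```python
import queue
import threading

mq = queue.Queue(maxsize=8)

def producer():
    # The producer task sends a few messages and then a stop sentinel.
    for i in range(5):
        mq.put(f"reading {i}")
    mq.put(None)

consumed = []

def consumer():
    # The consumer task blocks on the queue; each put() by the
    # producer unblocks it with the next message.
    while True:
        msg = mq.get()
        if msg is None:
            break
        consumed.append(msg)

t_cons = threading.Thread(target=consumer)
t_prod = threading.Thread(target=producer)
t_cons.start()
t_prod.start()
t_prod.join()
t_cons.join()
```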

But sometimes the producer task wants or needs some kind of answer from the consumer task, e.g. when the consumer task has been asked to do some calculation or some formatting of data. Then a second queue is added, to which the consumer task can send the reply, which the producer task can then read, see figure 2.
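A sketch of the two-queue request/reply pattern, with the same Python stand-ins as before (the squaring "calculation" and the None shutdown message are illustrative only):

```python
import queue
import threading

request_q = queue.Queue()
reply_q = queue.Queue()

def consumer():
    # The consumer task reads a request, does some work on it
    # (here a toy calculation), and sends the answer back on the
    # second queue.
    while True:
        req = request_q.get()
        if req is None:
            break
        reply_q.put(req * req)

t = threading.Thread(target=consumer)
t.start()

# The producer task sends a request and then waits for the reply.
request_q.put(7)
answer = reply_q.get()

request_q.put(None)   # shut the consumer down (illustrative convention)
t.join()
```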

If the consumer task only needs to signal to the producer task, i.e. it does not need to send any data, just signal that it is ready for the next message, then the second queue can be replaced by a semaphore, see figure 3.
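A sketch with a counting semaphore replacing the reply queue (Python's threading.Semaphore standing in for an RTOS semaphore; the messages are made up):

```python
import queue
import threading

request_q = queue.Queue()
ready = threading.Semaphore(0)   # starts at 0: the producer must wait

processed = []

def consumer():
    # The consumer task handles one message at a time and signals
    # "ready for the next one" by releasing the semaphore; it sends
    # no data back.
    while True:
        msg = request_q.get()
        if msg is None:
            break
        processed.append(msg.upper())
        ready.release()

t = threading.Thread(target=consumer)
t.start()

for msg in ["a", "b", "c"]:
    request_q.put(msg)
    ready.acquire()   # block until the consumer is done with this message

request_q.put(None)
t.join()
```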

It is possible to have several tasks receiving messages from the same queue, see figure 4. Most likely the tasks that consume the messages are instances of the same task, so it does not really matter which task receives a particular message.
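A sketch with three identical worker tasks reading from one queue (the one-stop-message-per-worker shutdown is an illustrative convention):

```python
import queue
import threading

mq = queue.Queue()
results = queue.Queue()   # only used to collect what each worker handled

def worker(worker_id):
    # All worker tasks are instances of the same function; any of
    # them may receive any given message.
    while True:
        msg = mq.get()
        if msg is None:
            break
        results.put((worker_id, msg))

workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in workers:
    t.start()

for job in range(9):
    mq.put(job)
for _ in workers:
    mq.put(None)          # one stop message per worker
for t in workers:
    t.join()

# Every job was handled exactly once, by some worker.
handled = []
while not results.empty():
    handled.append(results.get()[1])
handled.sort()
```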

And of course it is also possible for several tasks to send messages to the same queue, with only one task receiving messages from that queue, see figure 5. One example could be a consumer task that is responsible for printing data on a display or a printer, or a client-server application.
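A sketch of several producers and a single "printer" consumer (the client names and messages are made up for illustration):

```python
import queue
import threading

print_q = queue.Queue()

def client(name, lines):
    # Several client tasks all send their output to the same queue.
    for line in lines:
        print_q.put(f"{name}: {line}")

clients = [
    threading.Thread(target=client, args=("sensor", ["t=21C"])),
    threading.Thread(target=client, args=("logger", ["boot ok"])),
]
for t in clients:
    t.start()
for t in clients:
    t.join()

# A single consumer task drains the queue and "prints" the lines;
# it never needs to know how many producers there are.
printed = []
while not print_q.empty():
    printed.append(print_q.get())
```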

If the clients need to receive any data back from the server, then each client needs its own separate queue, to which the server can send its reply, see figure 6.
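A sketch where each client passes its own reply queue along with the request, so the server knows where to answer (the message format is an illustrative convention, not a fixed RTOS API):

```python
import queue
import threading

server_q = queue.Queue()

def server():
    # The server reads (reply_queue, request) pairs: each client
    # includes its own reply queue in the message.
    while True:
        item = server_q.get()
        if item is None:
            break
        reply_q, request = item
        reply_q.put(request + 1)   # a toy "service"

t = threading.Thread(target=server)
t.start()

# Two clients, each with its own private reply queue.
reply_a = queue.Queue()
reply_b = queue.Queue()
server_q.put((reply_a, 10))
server_q.put((reply_b, 20))

answer_a = reply_a.get()   # each client waits on its own queue
answer_b = reply_b.get()

server_q.put(None)
t.join()
```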

Message Queue of Size Zero
When a message queue is created, the designer also has to specify the size of the queue, i.e. the number of messages that the RTOS should be able to store internally until another task wants to receive them. Some RTOSs accept that you create a queue of size zero, which means that the message queue cannot store any messages, see figure 7.

Can this be useful in a design?
The answer is of course: Yes.
But let us first see what can happen if we create a message queue of size zero and a task tries to send a message to the queue. Two things can happen:

  • There is no task waiting to receive messages at this moment. The producer task will then receive an error code saying that the queue is full (as the queue has size zero, no messages can be stored), so the send call fails.
  • There is a task waiting to receive messages at this moment. The producer task will then successfully send the message, and the send call returns the code for success.

So these are the two cases we can have. When the producer task tries to send a message, it immediately finds out, via the return code of the send call, whether a task is waiting for a message or not. This can be useful in some designs. Say, for example, that both the producer task and the consumer task are periodic tasks, meaning that they should execute with a fixed frequency, in this case the same frequency; that both tasks have the same deadline (probably equal to the period); and that the producer task has higher priority than the consumer task. Then every time the producer task sends a message, it also finds out whether the consumer task is waiting for a message. If the consumer task is waiting, it has been able to execute and finish what it should do: it has met its deadline. If the consumer task is not waiting for a message, it has not been able to finish its last cycle: it has missed its deadline. A very simple example (maybe not useful in every design) of how you can build into your design a way to check whether a task has met its deadline or not.
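Python's queue.Queue cannot model this (maxsize=0 there means unbounded), so here is a small hand-rolled sketch of a zero-size queue whose non-blocking send reports whether a receiver was waiting; the retry loop at the end is only test scaffolding to let the consumer block first, where a real periodic producer would send exactly once per period:

```python
import threading
import time

class ZeroSizeQueue:
    """A message queue of size zero: a non-blocking send succeeds
    only if a receiver is already waiting (a sketch, not a real API)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._waiting = 0      # receivers currently blocked in receive()
        self._slot = None      # the single in-flight message
        self._full = False

    def try_send(self, msg):
        # Returns True if a receiver was waiting (message handed over),
        # False otherwise: the "queue full" case for a size-zero queue.
        with self._cond:
            if self._waiting == 0 or self._full:
                return False
            self._slot = msg
            self._full = True
            self._cond.notify()
            return True

    def receive(self):
        # Blocking receive: wait until a sender hands a message over.
        with self._cond:
            self._waiting += 1
            while not self._full:
                self._cond.wait()
            self._waiting -= 1
            self._full = False
            msg, self._slot = self._slot, None
            return msg

zq = ZeroSizeQueue()

# No consumer is waiting yet, so the send "fails": in the periodic
# design above, this is how the producer detects a missed deadline.
first_send_ok = zq.try_send("tick")

got = []
consumer = threading.Thread(target=lambda: got.append(zq.receive()))
consumer.start()

# Retry until the consumer has blocked in receive(): now the send
# succeeds, i.e. the consumer "met its deadline".
second_send_ok = False
for _ in range(1000):
    if zq.try_send("tick"):
        second_send_ok = True
        break
    time.sleep(0.001)
consumer.join()
```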

Can a message queue get full?
Of course, and that means that a producer task that tries to send a message to that queue will fail to do so. Is it harmful that a message queue gets full? That depends on your design, and on whether it is acceptable or not. Normally it is not acceptable, as there is a risk that data is lost. So how can a queue get full? The most common scenario is that the producer task has higher priority than the consumer task, gets the chance to execute several times before the consumer task does, and sends a message to the message queue in every loop. If the consumer task has higher priority, and it does not block on anything other than the message queue, then there is no risk that the queue gets full.

If in your design the consumer task has lower priority than the producer task, then you have to analyze how the message queue will be used. The same situation, which may be even more complicated, arises when messages are sent by an ISR. So the frequency of messages must be analyzed. If the frequency is periodic and stable, it is not too hard to calculate what the queue size should be, but if messages are sent in bursts with an unpredictable frequency, a much deeper analysis is needed.
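The periodic case can be turned into a small worked calculation (all numbers here are assumed for illustration):

```python
import math

# Assumed, illustrative numbers:
producer_period_ms = 10        # the producer sends one message every 10 ms
max_consumer_delay_ms = 45     # worst-case time the consumer is kept
                               # from running by higher-priority tasks

# During the worst-case delay the producer keeps sending, so the queue
# must be able to hold everything produced in that window:
required_depth = math.ceil(max_consumer_delay_ms / producer_period_ms)
```

With these numbers the queue needs room for 5 messages; for bursty traffic the same idea applies with the worst-case burst size and spacing, but as noted above that analysis is harder, and a safety margin is prudent in either case.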