High Qpid Dequeue latency

High Qpid Dequeue latency

rammantripragada
Hello Qpid Experts,

We are looking at using Qpid (Broker-J v7.0.7 with a very old 0.16 client) for
low-dequeue-latency messaging. Our app servers send messages to the broker, and
the same set of app servers then dequeues them. We have created some N queues,
and for each queue there are consumers on every app server (there will be around
100 app servers) which dequeue messages via the MessageListener (onMessage())
interface. We created 20 listeners (one JMS session each) per app server, and
each listener listens to all the queues. We ran a performance test to measure
dequeue latency and found that as the number of queues being enqueued to
concurrently increased, the dequeue latency increased.

<http://qpid.2158936.n2.nabble.com/file/t396552/image293.png>
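
For reference, the consumer side on each app server is set up roughly as in the
sketch below (simplified; the class and method names are placeholders and only
the standard javax.jms API is shown):

    import javax.jms.*;

    // One connection per app server, 20 transacted sessions ("listeners"),
    // and on each session a MessageConsumer for every queue.
    public class DequeueWorkers {

        private static final int SESSIONS_PER_SERVER = 20;

        public static void start(ConnectionFactory factory, Queue[] queues) throws JMSException {
            Connection connection = factory.createConnection();
            for (int i = 0; i < SESSIONS_PER_SERVER; i++) {
                // Transacted session: each onMessage() is followed by Session.commit()
                final Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                for (Queue queue : queues) {
                    MessageConsumer consumer = session.createConsumer(queue);
                    consumer.setMessageListener(new MessageListener() {
                        public void onMessage(Message message) {
                            process(message);            // application-specific work
                            try {
                                session.commit();        // ~13ms at P95 in our tests
                            } catch (JMSException e) {
                                throw new RuntimeException(e);
                            }
                        }
                    });
                }
            }
            connection.start();
        }

        private static void process(Message message) { /* application-specific work */ }
    }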

The increase in dequeue latency as queues are added seems to be caused by our
consumer pattern, where each listener (consumer) listens to all the queues. Qpid
appears to deliver the currently available messages on the queues to the
listeners in a round-robin manner. Because we register every listener on every
queue, Qpid delivers the available messages across all the queues to a single
consumer (listener) at a time, rotating round-robin between the listeners.
E.g. with listeners L1, L2 and L3 and queues Q1, Q2 and Q3, where each listener
subscribes to all three queues: when messages arrive on Q1, Q2 and Q3, Qpid
delivers messages from Q1, Q2 and Q3 to L1 first, then the next set of messages
(at most one from each queue, as we have a prefetch limit of 1) to L2, and then
to L3, and so on round-robin.

<http://qpid.2158936.n2.nabble.com/file/t396552/Screenshot_2020-12-09_at_3.png>
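
(For completeness: if I remember correctly, the prefetch limit of 1 is set on
our old 0.16 client through the maxprefetch option of the connection URL, so
each consumer holds at most one unacknowledged message at a time. Host,
credentials and vhost below are placeholders:)

    amqp://guest:guest@clientid/vhost?brokerlist='tcp://broker-host:5672'&maxprefetch='1'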

We also found that each Session.commit() call made in the listener to get the
next message took around 13ms at P95. So if listener L1 receives three messages
m1, m2, m3 from Q1, Q2, Q3, the dequeue latency of the second message is about
13ms higher than the first, and the third about 26ms higher; in general, the
k-th message handled by the same listener sees roughly (k-1) x 13ms of added
latency. As queues are added, more messages are funnelled to a single consumer
and the dequeue latency grows.

I need advice from the Qpid experts on the following:

1. Is the above explanation of why dequeue latency increases as the number of
queues being enqueued to grows correct?
2. Is this 13ms commit time normal, and can it be tuned further? Is there any
way, using MessageListener, to receive the available messages in one shot and
commit once, rather than committing the session to fetch each next message on
the listener (a rough sketch of the idea follows this list)? That would remove
the per-message commit overhead.
3. Could you also let me know the best pattern to follow to achieve low dequeue
latency (<100ms) with a higher number of queues? If we have one listener per
queue per app server, then with, say, 100 app servers and 100 queues we would
need around 100 * 100 * 20KB, i.e. about 200MB, of heap (one session per
listener), which seems very high.
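
As a rough sketch of what I mean in question 2 (this is only an idea, not
something we have running; BATCH_SIZE and the helper names are made up), the
listener would commit once per batch instead of once per message:

    import javax.jms.JMSException;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.Session;

    // Hypothetical batching listener (idea only): commit once per batch instead
    // of once per message, to amortise the ~13ms commit cost across messages.
    public class BatchingListener implements MessageListener {

        private static final int BATCH_SIZE = 10;  // made-up value
        private final Session session;             // the transacted session owning the consumers
        private int uncommitted = 0;

        public BatchingListener(Session session) {
            this.session = session;
        }

        public void onMessage(Message message) {
            process(message);                      // application-specific work
            uncommitted++;
            if (uncommitted >= BATCH_SIZE) {
                try {
                    session.commit();              // one commit covers the whole batch
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
                uncommitted = 0;
            }
        }

        private void process(Message message) { /* application-specific work */ }
    }

I suspect this would only help if the prefetch were raised above 1, since
uncommitted messages presumably count against the prefetch window.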


Thanks in advance for your help. My knowledge of Qpid is fairly limited, so my
apologies in advance, and thank you for bearing with my explanation.

Regards
Ram




Re: High Qpid Dequeue latency

Oleksandr Rudyy
Hi Ram,

I think that consistently achieving low latency (<100ms) can be tricky with a
Java broker. Full GC "stop the world" pauses can suspend message delivery for
seconds, if not longer. However, the JVM garbage collector can be tuned to
minimise the number of "stop the world" pauses and their duration, which might
give you a relatively good average latency.
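
For example, a common starting point (purely illustrative, not a tested
recommendation for your workload) is to give the broker JVM a fixed-size heap
and run the G1 collector with a pause-time goal:

    -Xms4g -Xmx4g -XX:+UseG1GC -XX:MaxGCPauseMillis=50

You would then need to measure the actual pause times under your own load
before tuning further.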

As for the described messaging use case, I agree that increasing the number of
queues can increase latency because of the application's design. As far as I
understood (please correct me if I am wrong), the 20 MessageListeners are shared
among the N queue consumers. If there are more than 20 queues with messages,
prefetched messages could be sitting on their respective consumers, blocked
while the MessageListeners process 20 messages at a time. As the number of
queues grows further, more messages end up waiting for their turn to be
processed by the 20 MessageListeners.

The Qpid Broker-J itself can also contribute to the latency increase as queues
and their consumers are added, since it has to handle more connections and
consumers. You are right that Qpid Broker-J implements round-robin message
delivery with a "fair" distribution of messages among connected consumers. If
you are interested in the implementation details, the broker documentation at
[1] describes how the consumer works.

As for improving the commit time, I would suggest looking into the newer 8.0.x
versions of Broker-J. Some changes were made to the transaction code that can
improve transaction performance for persistent messages. Apart from that, I
cannot think, off the top of my head, of anything else that would improve
latency for the described architecture.

Kind Regards,
Alex

[1]
https://github.com/apache/qpid-broker-j/blob/master/doc/developer-guide/src/main/markdown/consumer-queue-interactions.md

