[Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

[Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

Dan Langford
So over the past few weeks we have had a huge influx of messages on our
enterprise message bus (Qpid Java 6.0.4 serves the AMQP 1.0 messaging
portion), and when one of our clients struggled to scale their application
up, it got us looking at prefetch. We thought it was odd that all 500k
messages in the queue were prefetched, and because of that prefetch, when
they scaled out, the new connections couldn't help with those messages;
they could only acquire new messages.

So I started running tests against a local instance of Qpid Java 6.1.2, and
I was able to duplicate the behavior, which seems odd.

Setup:
My Java code uses the JMS API to create a consumer, receiveNoWait a
message, acknowledge or commit the message, then Thread.sleep for a bit so
I can check the Qpid Java Broker's web interface for stats on prefetched
messages.
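In case it helps, the test loop looks roughly like this (a minimal sketch
assuming the qpid-jms client, a broker on localhost, and a placeholder
queue name; it needs a running broker to do anything):

```java
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.qpid.jms.JmsConnectionFactory;

public class PrefetchTest {
    public static void main(String[] args) throws Exception {
        // Broker address, prefetch value, and queue name are placeholders.
        JmsConnectionFactory factory = new JmsConnectionFactory(
                "amqp://localhost:5672?jms.prefetchPolicy.all=10");
        Connection connection = factory.createConnection();
        try {
            connection.start();
            // CLIENT_ACKNOWLEDGE so each message is acknowledged explicitly;
            // a transacted session with session.commit() behaves similarly.
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("test.queue"));
            while (true) {
                Message message = consumer.receiveNoWait();
                if (message != null) {
                    message.acknowledge();
                }
                // Pause so the broker's web interface can be checked for
                // the current prefetched-message count.
                Thread.sleep(10_000);
            }
        } finally {
            connection.close();
        }
    }
}
```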

Test 1: qpid-jms-client 0.22.0 with a prefetch of 10, set either via the
JMS URL parameter jms.prefetchPolicy.all=10 or via the prefetch policy on
the ConnectionFactory (jmsDefaultPrefetchPolicy.setAll(10);).
After the first message came in, the web interface showed the queue size
decrement and 19 messages prefetched.
After the second message, the queue size decremented again and 28 messages
were prefetched.
After the third message, the queue size decremented again and 37 messages
were prefetched.
And so on.

Test 2: qpid-client 6.1.2 with a prefetch of 10 set via the URL parameter
maxprefetch='10'.
After the first message came in, the web interface showed the queue size
decrement and 10 messages prefetched.
After the second message, the queue size decremented again and still 10
messages were prefetched.
After the third message, the queue size decremented again and still 10
messages were prefetched.
And so on.
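For comparison, the two configurations were applied roughly like this (a
sketch; the broker address, credentials, and client id are placeholders,
and the policy class name may differ between qpid-jms versions):

```java
import org.apache.qpid.client.AMQConnectionFactory;
import org.apache.qpid.jms.JmsConnectionFactory;
import org.apache.qpid.jms.policy.JmsDefaultPrefetchPolicy;

public class PrefetchConfig {
    public static void main(String[] args) throws Exception {
        // Test 1: qpid-jms-client (AMQP 1.0) -- prefetch via a URI option...
        JmsConnectionFactory amqp10 = new JmsConnectionFactory(
                "amqp://localhost:5672?jms.prefetchPolicy.all=10");

        // ...or programmatically on the ConnectionFactory:
        JmsDefaultPrefetchPolicy policy = new JmsDefaultPrefetchPolicy();
        policy.setAll(10);
        amqp10.setPrefetchPolicy(policy);

        // Test 2: legacy qpid-client (AMQP 0-x) -- prefetch via the
        // maxprefetch option in the connection URL:
        AMQConnectionFactory amqp0x = new AMQConnectionFactory(
                "amqp://guest:guest@clientid/?maxprefetch='10'"
                        + "&brokerlist='tcp://localhost:5672'");
    }
}
```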

Could it be a link-credit thing? Could I be misunderstanding prefetch?
Maybe jms.prefetchPolicy is not the same as maxprefetch?

Frame logs are here
https://pastebin.com/4NHGCWEa

Re: [Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

Keith Wall
Hi Dan

Thanks for the comprehensive report.  I can reproduce what you see and
confirm there appears to be a bug.  I hope to take a closer look later
today or on Monday and get back to you with more information.

Keith.

On 4 May 2017 at 23:39, Dan Langford <[hidden email]> wrote:

> [...]

---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]


Re: [Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

Robbie Gemmell
I can also reproduce this. I believe it is a deficiency in how/when
the client handles granting more link credit, and it will show
particularly badly in the scenario described where the broker is able
to significantly/totally use the existing credit between processing of
individual messages and there is a backlog of queued messages to
continuously feed the scenario.

To work around the issue and achieve the effect you are looking for,
of balancing the backlog between multiple consumers when some come up
later than others, you will need to reduce the prefetch setting to 0
or 1.
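With the qpid-jms client that would look something like this (a sketch; the
broker address is a placeholder):

```java
import org.apache.qpid.jms.JmsConnectionFactory;

public class WorkaroundConfig {
    public static void main(String[] args) throws Exception {
        // A prefetch of 1 (or 0) means each consumer only takes one message
        // at a time, so consumers that join later can compete for the backlog.
        JmsConnectionFactory factory = new JmsConnectionFactory(
                "amqp://localhost:5672?jms.prefetchPolicy.all=1");
    }
}
```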

Robbie

On 5 May 2017 at 10:07, Keith W <[hidden email]> wrote:

> [...]



Re: [Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

rgodfrey
On 5 May 2017 at 14:14, Robbie Gemmell <[hidden email]> wrote:

> I can also reproduce this. I believe it is a deficiency in how/when
> the client handles granting more link credit, and it will show
> particularly badly in the scenario described where the broker is able
> to significantly/totally use the existing credit between processing of
> individual messages and there is a backlog of queued messages to
> continuously feed the scenario.
>
> To work around the issue and achieve the effect you are looking for,
> of balancing the backlog between multiple consumers when some come up
> later than others, you will need to reduce the prefetch setting to 0
> or 1.
>
>
To be clear then, it is a bug in the JMS client rather than the broker :-)

-- Rob



Re: [Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

Dan Langford
Thanks for the replies and the workaround. Getting this working will be
great, as we mostly use the competing-consumers approach here. When
somebody's queue gets backed up to half a million messages, they want to
just scale out their instances in Cloud Foundry to increase throughput.
On Fri, May 5, 2017 at 7:09 AM Rob Godfrey <[hidden email]> wrote:

> [...]

Re: [Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

Dan Langford
Will you let me know if a Jira ticket is created as a result of this, so I
can track which version gets the fix?

I did more testing around this and am convinced this is what caused our
broker to hit an out-of-memory error for direct memory. We saw our broker
crashing, and our primary client of the large backed-up queue also crashing
due to memory issues. In my testing those problems went away with a
prefetch of 1. I think that when all those hundreds of thousands of
messages were prefetched, both the client and the broker were holding them
in memory and running out. With prefetch = 1 we were able to push millions
of messages through with very few problems.

Thanks. I'm eager for a Qpid JMS client I can encourage my customers to
upgrade to, so we avoid this in the future. Let me know if you would like
me to test any bug fixes.
On Fri, May 5, 2017 at 8:34 AM Dan Langford <[hidden email]> wrote:

> [...]

Re: [Java Client JMS] qpid-jms-client 0.22.0 vs qpid-client 6.1.2: prefetch behaving differently

Robbie Gemmell
I have put a change in via
https://issues.apache.org/jira/browse/QPIDJMS-292 to address it. If you
want to give it a try, you can either check out the code (mirrored at
https://github.com/apache/qpid-jms) or grab the latest 0.23.0-SNAPSHOT from
https://repository.apache.org/content/repositories/snapshots/ with Maven
etc.
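For instance, a pom.xml fragment along these lines should pull the snapshot
in (a sketch; the repository id is arbitrary):

```xml
<!-- Enable the Apache snapshots repository -->
<repositories>
  <repository>
    <id>apache-snapshots</id>
    <url>https://repository.apache.org/content/repositories/snapshots/</url>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.qpid</groupId>
    <artifactId>qpid-jms-client</artifactId>
    <version>0.23.0-SNAPSHOT</version>
  </dependency>
</dependencies>
```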

Robbie

On 11 May 2017 at 02:10, Dan Langford <[hidden email]> wrote:

> [...]
