proton server (azure SB) limit the incoming_window=5000

9 messages

proton server (azure SB) limit the incoming_window=5000

Pankaj Bhagra
I am trying to extract bulk messages from Azure SB.

As per their documentation, the Azure SDK doesn't support bulk message
reads and recommends using native AMQP for Azure Service Bus. While
trying to negotiate a session with the Azure SB, I noticed that,
independent of what the client requests, the SB dials the incoming-window
down to 5000. This limits each bulk read to a maximum of 5000 B, so my
consumer runs dry until an RTT (which is large inter-cloud) elapses to
fetch a new batch.

Is this a restriction of Azure SB, or am I not setting some parameter
correctly on the client side to negotiate a window size > 5000 B?

I am using the Python proton MessagingHandler class and clearly see that
on_message is called for a few packets comprising ~5000 B of buffer,
after which I have to wait an RTT for the next batch.

Any suggestion to work around this problem and get larger message
batches? I can't reduce the RTT between the server and consumer. I have a
workaround using parallel consumers, but I would like to solve the bulk
problem, as that is the most efficient way of achieving high throughput.

from datetime import datetime
from proton.handlers import MessagingHandler

class Recv(MessagingHandler):

    def __init__(self):
        super(Recv, self).__init__(prefetch=100, auto_accept=True,
                                   auto_settle=True,
                                   peer_close_is_error=False)
        self.count = 0

    def on_start(self, event):
        conn = event.container.connect(connString)
        event.container.create_receiver(conn, subscription)

    def on_message(self, event):
        self.count += 1
        print(event.message.body)
        print(datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S.%f')[:-3],
              self.count, event.receiver.queued)


[0xace380]:  -> SASL

[0xace380]:  <- SASL

[0xace380]:0 <- @sasl-mechanisms(64)
[sasl-server-mechanisms=@PN_SYMBOL[:MSSBCBS, :PLAIN, :ANONYMOUS, :EXTERNAL]]

[0xace380]:0 -> @sasl-init(65) [mechanism=:PLAIN,
initial-response=b"\x00iothubroutes_XXXXX\XXXXX="]

[0xace380]:0 <- @sasl-outcome(68) [code=0, additional-data=b"Welcome!"]

[0xace380]:  -> AMQP

[0xace380]:0 -> @open(16)
[container-id="0ad171ca-cefa-4a27-a7dc-0520e5393fa5", hostname="
nebhubsb.servicebus.windows.net", channel-max=32767]

[0xace380]:0 -> @begin(17) [next-outgoing-id=0, incoming-window=2147483647,
outgoing-window=2147483647]

[0xace380]:0 -> @attach(18)
[name="0ad171ca-cefa-4a27-a7dc-0520e5393fa5-kukatopic/Subscriptions/kukasub",
handle=0, role=true, snd-settle-mode=2, rcv-settle-mode=0,
source=@source(40) [address="kukatopic/Subscriptions/kukasub", durable=0,
timeout=0, dynamic=false], target=@target(41) [durable=0, timeout=0,
dynamic=false], initial-delivery-count=0, max-message-size=0]

[0xace380]:0 -> @flow(19) [incoming-window=2147483647, next-outgoing-id=0,
outgoing-window=2147483647, handle=0, delivery-count=0, link-credit=100,
drain=false]

[0xace380]:  <- AMQP

[0xace380]:0 <- @open(16)
[container-id="b970f07881334c658eb80ff336f2a683_G16", max-frame-size=65536,
channel-max=4999, idle-time-out=240000]

[0xace380]:0 <- @begin(17) [remote-channel=0, next-outgoing-id=1,
incoming-window=5000, outgoing-window=2147483647, handle-max=255]

[0xace380]:0 <- @attach(18)
[name="0ad171ca-cefa-4a27-a7dc-0520e5393fa5-kukatopic/Subscriptions/kukasub",
handle=0, role=false, rcv-settle-mode=1, source=@source(40)
[address="topic/Subscriptions/sub", durable=0, timeout=0, dynamic=false],
target=@target(41) [durable=0, timeout=0, dynamic=false],
initial-delivery-count=0, max-message-size=266240]

Re: proton server (azure SB) limit the incoming_window=5000

Gordon Sim
In reply to this post by Pankaj Bhagra

The incoming window is set by each peer independently (i.e. it is not
negotiated) and only covers incoming transfers. For a link receiving
from Service Bus, the incoming window set by Service Bus isn't relevant.
It is the client's incoming window that affects transfers from
Service Bus to the client.

How large are the messages? From the protocol trace it looks like you
are issuing 100 link credits. If the messages were all 50 bytes, that
might explain the limited batches you are seeing. You could try
increasing that link credit.

---------------------------------------------------------------------
To unsubscribe, e-mail: [hidden email]
For additional commands, e-mail: [hidden email]


Re: proton server (azure SB) limit the incoming_window=5000

Gordon Sim
In reply to this post by Pankaj Bhagra
On 09/08/17 08:22, Pankaj Bhagra wrote:

> Thanks Gordon for looking into my query. What you said makes sense;
> however, I am still searching for the reason for the flow control and
> limited batch size.
>
> As per your suggestion I tried increasing the link credit to 10k and
> 100k, but that doesn't change much. My understanding of prefetch was
> that it is a number of packets, not a number of bytes (I confirmed this
> by reducing the prefetch to 2, after which I see only 1 packet per
> batch, half the window size).
>
> The size of each packet is roughly 900 B, and as you can see I am not
> able to read more than 12 packets per batch in the complete logs below.
> So looking back, yes, 12 x 900 B is greater than 5 KB, so the heading
> may need correction - the limit looks like 2x that, i.e. 10 KB.
>
> I would appreciate it if someone could suggest more knobs I should try
> to figure out where this 10 KB limit is coming from.

My guess is that it is a service-bus choice (i.e. the buffer size it
writes with). In itself that shouldn't require a roundtrip to get more.
If that is happening it could conceivably be something to do with the
number of unsettled messages?

It may be worth asking about the issue on the service bus forums.



Re: proton server (azure SB) limit the incoming_window=5000

Pankaj Bhagra
Gordon,

Further digging with network-level sniffing shows that the bulk msg_size
is limited to 16373 bytes (~16 KB). This observation is in line with a
previously reported issue:

http://grokbase.com/t/qpid/users/163z91rhdy/ssl-maximum-message-size

As suggested, I posted the question on the Azure SB forum too, to find
out whether there are knobs in the SB configuration to make this un-acked
buffer size bigger over AMQP/SSL.

Coming back to your suggestion about unsettled messages: can you guide me
on what the client-side configuration (if any) should be to force the
server to keep sending without waiting for a flow-control ack from the
client (the number of unsettled messages?). I would like the server to
stop when link credit runs out, but not at the max buffer of 16 KB.
Ideally I need at-least-once behavior, but I am ready to sacrifice that
requirement for better performance.

Currently the client requests its rcv-settle-mode to be "unsettled" and
the server sends its rcv-settle-mode=settled, and my simplistic receiver
initialises the messaging handler like this:

class Recv(MessagingHandler):
    def __init__(self):
        super(Recv, self).__init__(prefetch=100, auto_accept=True,
                                   auto_settle=True)


[name="bc599ddc-74df-46b0-800c-401aed27f321-kukatopic/Subscriptions/kukasub",handle=0,
role=true, snd-settle-mode=2, rcv-settle-mode=0, source=@source(40)
[address="kukatopic/Subscriptions/kukasub", durable=0, timeout=0,
dynamic=false], target=@target(41) [durable=0, timeout=0, dynamic=false],
initial-delivery-count=0, max-message-size=0]

[0xb58380]:0 -> @flow(19) [incoming-window=2147483647,
next-outgoing-id=0,outgoing-window=2147483647, handle=0, delivery-count=0,
link-credit=10000, drain=false]

[0xb58380]:0 <- @open(16)
[container-id="fa8f5d5577be485ebd7f5ebdbdfd9ca1_G13", max-frame-size=65536,
channel-max=4999, idle-time-out=240000]

[0xb58380]:0 <- @begin(17) [remote-channel=0, next-outgoing-id=1,
incoming-window=5000, outgoing-window=2147483647, handle-max=255]

[name="bc599ddc-74df-46b0-800c-401aed27f321-kukatopic/Subscriptions/kukasub",
handle=0, role=false, rcv-settle-mode=1, source=@source(40)
[address="kukatopic/Subscriptions/kukasub", durable=0, timeout=0,
dynamic=false], target=@target(41) [durable=0, timeout=0, dynamic=false],
initial-delivery-count=0, max-message-size=266240]




Re: proton server (azure SB) limit the incoming_window=5000

Gordon Sim
On 10/08/17 00:13, Pankaj Bhagra wrote:
> Gordon,
>
> Further digging with network-level sniffing shows that the bulk
> msg_size is limited to 16373 bytes (~16 KB). This observation is in
> line with a previously reported issue

What do you mean by msg_size here?

> http://grokbase.com/t/qpid/users/163z91rhdy/ssl-maximum-message-size

Interesting that the limit is the same; not sure what to make of that,
perhaps it's just a commonly used buffer size. The 'solution' in that
issue was to break up large messages into multiple frames. In your case,
as I understood it, the individual messages were already smaller than
this limit.
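For what it's worth, the batch sizes reported above are consistent with a
fixed write buffer of 16373 bytes. A back-of-the-envelope check, assuming
~1220 B per transfer frame (the ~900 B message bodies plus AMQP encoding
overhead, an estimate based on the traces in this thread):

```python
# How many transfer frames fit in one fixed-size write buffer?
BUFFER_LIMIT = 16373   # buffer limit observed via packet capture above
FRAME_SIZE = 1220      # assumed typical transfer frame size (~900 B body + framing)

frames_per_batch = BUFFER_LIMIT // FRAME_SIZE
print(frames_per_batch)  # 13
```

That lands on 12-13 frames per batch, matching the observed behaviour.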

> As suggested, I posted the question on the Azure SB forum too, to find
> out whether there are knobs in the SB configuration to make this
> un-acked buffer size bigger over AMQP/SSL.
>
> Coming back to your suggestion about unsettled messages: can you guide
> me on what the client-side configuration (if any) should be to force
> the server to keep sending without waiting for a flow-control ack from
> the client (the number of unsettled messages?). I would like the server
> to stop when link credit runs out, but not at the max buffer of 16 KB.
> Ideally I need at-least-once behavior, but I am ready to sacrifice that
> requirement for better performance.

To request that messages be sent settled, you can create your receiver
with the AtMostOnce option (imported from proton.reactor), e.g.:

   container.create_receiver(url, options=[AtMostOnce()])

or

   container.create_receiver(conn, 'mysource', options=[AtMostOnce()])

You could see if that has any effect.



Re: proton server (azure SB) limit the incoming_window=5000

Pankaj Bhagra
My interest was piqued by your pointer about forcing negotiation to get
"settled" messages, such that the server can keep sending them without
awaiting acknowledgement from the client. So I tried your recommendation
of setting the AtMostOnce link option. I went further and forced
AtMostOnce to set "settled" for both send and receive. However, I still
see that the server doesn't release the next batch until it receives the
acknowledgement from the client. This is puzzling; any other suggestions?

I wanted to experiment with disabling flow control. I tried that by
setting prefetch=None, but that didn't work. What would be the way to
disable flow control, just to try it out?

[0x24b0cd0]:0 -> @open(16)
[container-id="3feb8312-b228-4052-87ec-5ab12633be0c", hostname="
nebhubsb.servicebus.windows.net", channel-max=32767]

[0x24b0cd0]:0 -> @begin(17) [next-outgoing-id=0,
incoming-window=2147483647, outgoing-window=2147483647]

[0x24b0cd0]:0 -> @attach(18)
[name="3feb8312-b228-4052-87ec-5ab12633be0c-kukatopic/Subscriptions/kukasub",
handle=0, role=true, snd-settle-mode=1, rcv-settle-mode=1,
source=@source(40) [address="kukatopic/Subscriptions/kukasub", durable=0,
timeout=0, dynamic=false], target=@target(41) [durable=0, timeout=0,
dynamic=false], initial-delivery-count=0, max-message-size=0]

[0x24b0cd0]:0 -> @flow(19) [incoming-window=2147483647, next-outgoing-id=0,
outgoing-window=2147483647, handle=0, delivery-count=0, link-credit=100,
drain=false]

[0x24b0cd0]:  <- AMQP

[0x24b0cd0]:0 <- @open(16)
[container-id="2fdaeda28fb7483a9922790398ad1f0a_G20", max-frame-size=65536,
channel-max=4999, idle-time-out=240000]

[0x24b0cd0]:0 <- @begin(17) [remote-channel=0, next-outgoing-id=1,
incoming-window=5000, outgoing-window=2147483647, handle-max=255]

[0x24b0cd0]:0 <- @attach(18)
[name="3feb8312-b228-4052-87ec-5ab12633be0c-kukatopic/Subscriptions/kukasub",
handle=0, role=false, snd-settle-mode=1, source=@source(40)
[address="kukatopic/Subscriptions/kukasub", durable=0, timeout=0,
dynamic=false], target=@target(41) [durable=0, timeout=0, dynamic=false],
initial-delivery-count=0, max-message-size=266240]

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=0, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1220)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=1, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1231)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=2, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1219)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=3, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1219)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=4, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1219)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=5, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1270)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=6, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1218)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=7, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1229)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=8, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (826)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=9, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1218)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=10, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1220)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=11, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1216)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=12, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1216)

2017-08-10 01:49:00.417 pkt_seq = 1 remaining_in_batch = 12, 879

2017-08-10 01:49:00.417 pkt_seq = 2 remaining_in_batch = 11, 890

2017-08-10 01:49:00.417 pkt_seq = 3 remaining_in_batch = 10, 878

2017-08-10 01:49:00.418 pkt_seq = 4 remaining_in_batch = 9, 878

2017-08-10 01:49:00.419 pkt_seq = 5 remaining_in_batch = 8, 878

2017-08-10 01:49:00.419 pkt_seq = 6 remaining_in_batch = 7, 929

2017-08-10 01:49:00.419 pkt_seq = 7 remaining_in_batch = 6, 877

2017-08-10 01:49:00.420 pkt_seq = 8 remaining_in_batch = 5, 888

2017-08-10 01:49:00.420 pkt_seq = 9 remaining_in_batch = 4, 500

2017-08-10 01:49:00.421 pkt_seq = 10 remaining_in_batch = 3, 877

2017-08-10 01:49:00.421 pkt_seq = 11 remaining_in_batch = 2, 879

2017-08-10 01:49:00.421 pkt_seq = 12 remaining_in_batch = 1, 875

2017-08-10 01:49:00.422 pkt_seq = 13 remaining_in_batch = 0, 875

[0x24b0cd0]:0 -> @flow(19) [next-incoming-id=14,
incoming-window=2147483647, next-outgoing-id=0, outgoing-window=2147483647,
handle=0, delivery-count=13, link-credit=99, drain=false]

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=13, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1218)


and cycle continues ..





Re: proton server (azure SB) limit the incoming_window=5000

Gordon Sim
On 10/08/17 18:54, Pankaj Bhagra wrote:
> I wanted to experiment with disabling flow control. I tried that by
> setting prefetch=None, but that didn't work. What would be the way to
> disable flow control, just to try it out?

You can't. Flow control is always in effect; all you can do is increase
or decrease the credit granted. You have already established that the
server is not waiting for credit. From a protocol perspective there is
nothing else I think you can try; I think you will need to see whether
the Service Bus forum can offer any insight into what is going on.



Re: proton server (azure SB) limit the incoming_window=5000

Pankaj Bhagra
In reply to this post by Pankaj Bhagra
Gordon,

What is the meaning of the "more" flag? I assume this means that the
server doesn't have more data to flush - correct?

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=0, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1220)

[0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=1, delivery-tag=b"",
message-format=0, settled=true, more=false, batchable=true] (1231)


If so, it appears that SB only caches 16 KB of data and doesn't refill
this cache unless there is a new fetch request (which painfully takes an
RTT).

Does it make sense to keep sending a prefetch request every few msec to
keep the server busy pulling data from whatever its backend store is?


later,

pankaj





Re: proton server (azure SB) limit the incoming_window=5000

Gordon Sim
On 10/08/17 19:47, Pankaj Bhagra wrote:
> Gordon,
>
> What is the meaning of the "more" flag? I assume this means that the
> server doesn't have more data to flush - correct?

No, it indicates whether there is more data to be transferred for that
specific delivery, i.e. for that specific message.

> [0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=0, delivery-tag=b"",
> message-format=0, settled=true, more=false, batchable=true] (1220)
>
> [0x24b0cd0]:0 <- @transfer(20) [handle=0, delivery-id=1, delivery-tag=b"",
> message-format=0, settled=true, more=false, batchable=true] (1231)
>
>
> If so, it appears that SB only caches 16 KB of data and doesn't refill
> this cache unless there is a new fetch request (which painfully takes
> an RTT).

What is triggering the next batch isn't really a fetch, since there was
already existing credit.

I think I may have misunderstood what you wanted to do in the previous
mail, by the way. Did you want to disable sending the flow? If so, you
can set prefetch to 0, but then you have to do a receiver.flow(10000)
(or similar) when you first create the receiver, to give it credit. That
way it will not send a flow again after receiving the first set of
transfers, and you can see what effect that has, if any.

> Does it make sense to keep sending a prefetch request every few msec to
> keep the server busy and pulling data from whatever its backend store ?

The flow is already being sent back as soon as the client has processed
the messages it has received. The client is single-threaded, so while it
is processing the incoming transfers it won't be sending anything out.
However, from the timestamps that seems to take only a few milliseconds.

