
RabbitMQ: Memory utilization overshoots indefinitely


nkhattar88
In my organization, we are using RabbitMQ in clustering mode with 3 servers per cluster. There are 2 such clusters: one runs RabbitMQ version 2.8.7 and the other runs version 3.3.3. On the 3.3.3 cluster, memory utilization sometimes grows without bound. The growth usually starts on one of the slave (mirror) nodes first, and after a considerable amount of time all the nodes become unresponsive. This has happened 4-5 times in the last month. After considerable analysis, we found that the memory attributed to queue processes overshoots to a very high mark, whereas the total memory consumed by all the individual queues is much lower.
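
For reference, the comparison can be reproduced with standard rabbitmqctl commands, run on the affected node, along these lines:

  # broker-wide memory attributed to queue processes
  rabbitmqctl status | grep queue_procs

  # per-queue memory (bytes), depth and consumer count; summing the
  # memory column gives the figure we compared against queue_procs
  rabbitmqctl list_queues name memory messages consumers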

In the last incident, the memory of one of the slaves crossed the watermark limit of 13 GB. On checking 'rabbitmqctl status' we found that the memory used by queue processes was 11 GB, whereas the total memory used by the individual queues was not more than 100 MB. There was also not much load on any of the queues. As a remedy, we reduced the load on the queues to almost nil, but the memory utilization still stayed above the watermark. After a certain amount of time (close to an hour), the memory utilization of the second slave started growing at an even faster pace, despite there being hardly any load on the queues. Note that 'sync-mode' is manual for all the queues as part of the default policy. The moment we ran 'rabbitmqctl stop_app' followed by 'rabbitmqctl start_app' on the first slave node, the memory utilization of both slave nodes came back to normal, without any loss of messages.
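
For completeness, the remedy was along these lines (node name as in the report below; since sync-mode is manual, re-synchronisation of a mirror may need to be triggered per queue with 'rabbitmqctl sync_queue <queue>'):

  # restart the broker application on the affected slave
  # without restarting the Erlang VM
  rabbitmqctl -n rabbit@sd-bhii1 stop_app
  rabbitmqctl -n rabbit@sd-bhii1 start_app

  # verify which mirrors are synchronised afterwards
  rabbitmqctl list_queues name slave_pids synchronised_slave_pids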

Please suggest a solution for this issue, as it keeps occurring again and again.

PS: I have attached the output of 'rabbitmqctl report'. The status of the faulty slave node was:

Status of node 'rabbit@sd-bhii1' ...
[{pid,8655},
 {running_applications,
     [{rabbitmq_management,"RabbitMQ Management Console","3.3.3"},
      {rabbitmq_management_agent,"RabbitMQ Management Agent","3.3.3"},
      {rabbit,"RabbitMQ","3.3.3"},
      {os_mon,"CPO  CXC 138 46","2.2.10"},
      {rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.3.3"},
      {webmachine,"webmachine","1.10.3-rmq3.3.3-gite9359c7"},
      {mochiweb,"MochiMedia Web Server","2.7.0-rmq3.3.3-git680dba8"},
      {amqp_client,"RabbitMQ AMQP Client","3.3.3"},
      {xmerl,"XML parser","1.3.2"},
      {inets,"INETS  CXC 138 49","5.9.1"},
      {mnesia,"MNESIA  CXC 138 12","4.7.1"},
      {sasl,"SASL  CXC 138 11","2.2.1"},
      {stdlib,"ERTS  CXC 138 10","1.18.2"},
      {kernel,"ERTS  CXC 138 10","2.15.2"}]},
 {os,{unix,linux}},
 {erlang_version,
     "Erlang R15B02 (erts-5.9.2) [source] [64-bit] [smp:24:24] [async-threads:30] [hipe] [kernel-poll:true]\n"},
 {memory,
     [{total,12887292592},
      {connection_procs,2628880},
      {queue_procs,10754379080},
      {plugins,873488},
      {other_proc,15718848},
      {mnesia,651720},
      {mgmt_db,12352},
      {msg_index,41158072},
      {other_ets,24285840},
      {binary,2013003992},
      {code,18575212},
      {atom,703377},
      {other_system,15301731}]},
 {alarms,[memory]},
 {listeners,[{clustering,25672,"::"},{amqp,5672,"0.0.0.0"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,13446106316},
 {disk_free_limit,50000000},
 {disk_free,235134709760},
 {file_descriptors,
     [{total_limit,9900},
      {total_used,61},
      {sockets_limit,8908},
      {sockets_used,13}]},
 {processes,[{limit,1048576},{used,767}]},
 {run_queue,0},
 {uptime,310882}]
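
(For quick reference, my arithmetic on the figures above: queue_procs = 10,754,379,080 bytes ≈ 10 GB out of total = 12,887,292,592 bytes ≈ 12 GB, i.e. roughly 83% of the node's memory, against a vm_memory_limit of 13,446,106,316 bytes ≈ 12.5 GB.)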