I restarted it (when it reached 5GB), but then the other nodes' "Other process memory" use started to grow as well. What might be the reason for that? The docs (http://www.rabbitmq.com/memory-use.html) aren't very clear to me.
I can see a very high deliver/redeliver rate compared to the publish rate; might that be a clue?
On 23/03/13 02:47, carlhoerberg wrote:
> Have a 3 node cluster, normally it hovers around 200-300mb of ram, but
> suddenly the "Other process memory" started to grow:
"Other" is all use left after connections, queues and plugins are
accounted for. This could be due to internal Erlang VM processes
consuming memory. One way to dig further is to look at the Erlang
processes using the most memory:
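As a sketch, using only standard Erlang calls (processes/0 and process_info/2) rather than any particular command from this thread, something along these lines lists the ten processes using the most memory, largest first:

rabbitmqctl eval 'lists:sublist(lists:reverse(lists:sort([{process_info(P, memory), P, process_info(P, registered_name)} || P <- processes()])), 10).'

Each element of the result is {{memory, Bytes}, Pid, RegisteredNameInfo}; a recognisable registered name (or the Pid) usually points at the subsystem responsible.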
Would you be able to provide some more details for us to investigate this further? We will need the logfiles and the sasl logfiles from all 3 nodes. Please include entries from well before the memory use on that node started to increase.
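For reference, on a default package install the logs usually live under /var/log/rabbitmq (a rabbit@<hostname>.log and a rabbit@<hostname>-sasl.log per node); assuming that layout, something like this on each node bundles them up:

tar czf rabbitmq-logs-$(hostname).tar.gz /var/log/rabbitmq/*.log

Adjust the paths if your installation logs elsewhere.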
Then we'd like you to execute a command on the node with abnormal memory
(tiger01), but take some precautions first. If you are able to bring more swap space online then please do so. If there are any other
non-critical processes on the machine then please stop them, as this
command has the potential to impose heavy load. Please be aware that
this command could cause the node to crash. If you have HA redundancy in
place then it shouldn't matter, but if you are not comfortable with this
prospect then don't proceed.
Otherwise please run this command:
rabbitmqctl eval 'io:put_chars(standard_error, "this should appear in startup_err\n").'
and check that the expected text appeared in startup_err (normally in
/var/log/rabbitmq). If the text did not appear then do not proceed.
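As a sketch of the kind of dump the heavier command performs, assuming the aim is simply to write each Erlang process's process_info/1 output to standard_error (and therefore into startup_err), a command along these lines would do it:

rabbitmqctl eval 'io:put_chars(standard_error, [io_lib:format("~p: ~p~n", [P, process_info(P)]) || P <- processes()]).'

This is why the load and crash warnings above matter: it walks every process and prints its full state, including message queues and process dictionaries.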
The command will return and the startup_err file should start growing
soon afterwards. It could take a long time until it stops growing. When
it does stop growing please send it to [hidden email] along with
the logfiles, preferably compressed.