UniFi Cloud Key: MongoDB Out of Memory Issue

You may run into the following issue on a UniFi setup:

tail -f /srv/unifi/logs/server.log

...
Wed Jun 27 21:52:34.250 [initandlisten] ERROR: mmap private failed with out of memory. You are using a 32-bit build and probably need to upgrade to 64

After some googling you may find a prune script posted by Ubiquiti staff on their forum.

But that script can only be executed while MongoDB is running, and nobody mentions what to do when your MongoDB won't start at all. Here's the solution; in fact, you don't even need to repair your database in this situation:

Make sure the unifi service is stopped:

systemctl stop unifi
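Before starting your own mongod instance, you can double-check that the controller's bundled MongoDB process has actually exited (on a stock Cloud Key it is launched by the unifi service, so stopping the service should take it down too):

pgrep -l mongod

No output means it's gone; if a process is still listed, wait a moment or stop it before continuing.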

Download the prune script from Ubiquiti support:

wget https://ubnt.zendesk.com/hc/article_attachments/115024095828/mongo_prune_js.js
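You can peek at the top of the script to see the retention window (a days variable) and the dry-run flag (dryrun) that we will flip later; adjust the variable names used below if your copy differs:

head -n 20 mongo_prune_js.js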

Start a new SSH session and run MongoDB without --journal; all the other parameters are copied from the unifi service:

mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSocketPrefix /usr/lib/unifi/run --noprealloc --nohttpinterface --smallfiles --bind_ip 127.0.0.1
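This instance runs in the foreground and keeps that SSH session busy. Back in your original session, a quick way to confirm it is up and accepting connections before feeding it the script:

mongo --port 27117 --eval 'db.runCommand({ ping: 1 })'

A { "ok" : 1 } at the end of the output means the database is reachable.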

Run the prune script:

mongo --port 27117 < mongo_prune_js.js

You should get output similar to this:

MongoDB shell version: 2.4.10
connecting to: 127.0.0.1:27117/test
[dryrun] pruning data older than 7 days (1541581969480)... 
switched to db ace
[dryrun] pruning 12404 entries (total 12404) from alarm... 
[dryrun] pruning 16036 entries (total 16127) from event... 
[dryrun] pruning 76 entries (total 77) from guest... 
[dryrun] pruning 24941 entries (total 25070) from rogue... 
[dryrun] pruning 365 entries (total 379) from user... 
[dryrun] pruning 0 entries (total 10) from voucher... 
switched to db ace_stat
[dryrun] pruning 0 entries (total 313) from stat_5minutes... 
[dryrun] pruning 21717 entries (total 22058) from stat_archive... 
[dryrun] pruning 715 entries (total 736) from stat_daily... 
[dryrun] pruning 3655 entries (total 5681) from stat_dpi... 
[dryrun] pruning 15583 entries (total 16050) from stat_hourly... 
[dryrun] pruning 372 entries (total 382) from stat_life... 
[dryrun] pruning 0 entries (total 0) from stat_minute... 
[dryrun] pruning 56 entries (total 56) from stat_monthly... 
bye

Then edit the prune script to set dryrun to false and run it again.
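A minimal sketch of that change, assuming the script still defines a dryrun variable set to true near the top (open it in an editor instead if your copy differs):

sed -i 's/dryrun=true/dryrun=false/' mongo_prune_js.js
mongo --port 27117 < mongo_prune_js.js

This time the entries are actually removed: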

MongoDB shell version: 2.4.10
connecting to: 127.0.0.1:27117/test
pruning data older than 7 days (1541582296632)... 
switched to db ace
pruning 12404 entries (total 12404) from alarm... 
pruning 16036 entries (total 16127) from event... 
pruning 76 entries (total 77) from guest... 
pruning 24941 entries (total 25070) from rogue... 
pruning 365 entries (total 379) from user... 
pruning 0 entries (total 10) from voucher... 
{ "ok" : 1 }
{ "ok" : 1 }
switched to db ace_stat
pruning 0 entries (total 313) from stat_5minutes... 
pruning 21717 entries (total 22058) from stat_archive... 
pruning 715 entries (total 736) from stat_daily... 
pruning 3655 entries (total 5681) from stat_dpi... 
pruning 15583 entries (total 16050) from stat_hourly... 
pruning 372 entries (total 382) from stat_life... 
pruning 0 entries (total 0) from stat_minute... 
pruning 56 entries (total 56) from stat_monthly... 
{ "ok" : 1 }
{ "ok" : 1 }
bye

Shut down the temporary mongod instance (Ctrl-C in the SSH session where it is running), then start the unifi service:

systemctl start unifi
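To confirm the controller comes back up cleanly, you can check the service state and watch the same log file from the beginning of this post:

systemctl status unifi
tail -f /srv/unifi/logs/server.log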

The root cause of this issue is that the Cloud Key currently runs a custom Debian system on ARMv7, a 32-bit architecture, so its 32-bit MongoDB build cannot handle more than about 2 GB of data. I haven't tried the Cloud Key Gen2 and Gen2 Plus; I hope they're ARMv8 based. At the moment you can limit data retention as a workaround.
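One way to keep the database under that limit is to run the prune script on a schedule. A sketch, not an official mechanism: it assumes you keep a copy of the script with dryrun=false at /root/mongo_prune_js.js (a hypothetical path), and that the controller, and therefore its bundled mongod on port 27117, is running when the job fires:

# hypothetical /etc/cron.d/unifi-prune entry: prune UniFi data every Sunday at 03:00
0 3 * * 0 root mongo --port 27117 < /root/mongo_prune_js.js >> /var/log/unifi-prune.log 2>&1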