Month: November 2018

  • Get Telegram Bot ID

    1. Paste the following link in your browser, replacing <API-access-token> with the API access token that you identified or created in the previous section: https://api.telegram.org/bot<API-access-token>/getUpdates?offset=0
    2. Send a message to your bot in the Telegram application. The message text can be anything. Your chat history must include at least one message to get your chat ID.
    3. Refresh your browser.
    4. Identify the numerical chat ID by finding the id inside the chat JSON object. In the example below, the chat ID is 123456789.
    {  
       "ok":true,
       "result":[  
          {  
             "update_id":XXXXXXXXX,
             "message":{  
                "message_id":2,
                "from":{  
                   "id":123456789,
                   "first_name":"Mushroom",
                   "last_name":"Kap"
                },
                "chat":{  
                   "id":123456789,
                   "first_name":"Mushroom",
                   "last_name":"Kap",
                   "type":"private"
                },
                "date":1487183963,
                "text":"hi"
             }
          }
       ]
    }
    

    If you have jq installed on your system, you can also use the following command (note that the message sits inside the result array, so the filter must start with .result[0]):

    curl "https://api.telegram.org/bot<API-access-token>/getUpdates?offset=0" | jq -r '.result[0].message.chat.id'
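    If jq is not available, a rough grep fallback works on the raw single-line response. A minimal sketch, using the sample JSON from above stored in a variable (in practice you would pipe curl output into the same grep chain):

    ```shell
    # Sample getUpdates response (same shape as shown above).
    response='{"ok":true,"result":[{"update_id":1,"message":{"message_id":2,"chat":{"id":123456789,"first_name":"Mushroom","type":"private"},"date":1487183963,"text":"hi"}}]}'
    # Grab the first id inside the chat object, then strip everything but the digits.
    echo "$response" | grep -o '"chat":{"id":[0-9]*' | grep -o '[0-9]*$'
    # prints 123456789
    ```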
  • UniFi Cloud Key: MongoDB Out of Memory Issue

    You may run into the following issue if you run a UniFi setup:

    tail -f /srv/unifi/logs/server.log
    
    ...
    Wed Jun 27 21:52:34.250 [initandlisten] ERROR: mmap private failed with out of memory. You are using a 32-bit build and probably need to upgrade to 64
    

    After googling the error you may find that Ubiquiti staff posted a prune script on their forum.

    However, that script can only be executed while MongoDB is running, and nobody mentions what to do when MongoDB won't start at all. Here's the solution; in this situation you don't even need to repair your database:

    Make sure the unifi service is stopped:

    systemctl stop unifi
    

    Download the prune script from Ubiquiti support:

    wget https://ubnt.zendesk.com/hc/article_attachments/115024095828/mongo_prune_js.js
    

    Start a new SSH session and run MongoDB without --journal; all other parameters are copied from the unifi service:

    mongod --dbpath /usr/lib/unifi/data/db --port 27117 --unixSocketPrefix /usr/lib/unifi/run --noprealloc --nohttpinterface --smallfiles --bind_ip 127.0.0.1
    

    Run the prune script:

    mongo --port 27117 < mongo_prune_js.js
    

    You should get output similar to this:

    MongoDB shell version: 2.4.10
    connecting to: 127.0.0.1:27117/test
    [dryrun] pruning data older than 7 days (1541581969480)... 
    switched to db ace
    [dryrun] pruning 12404 entries (total 12404) from alarm... 
    [dryrun] pruning 16036 entries (total 16127) from event... 
    [dryrun] pruning 76 entries (total 77) from guest... 
    [dryrun] pruning 24941 entries (total 25070) from rogue... 
    [dryrun] pruning 365 entries (total 379) from user... 
    [dryrun] pruning 0 entries (total 10) from voucher... 
    switched to db ace_stat
    [dryrun] pruning 0 entries (total 313) from stat_5minutes... 
    [dryrun] pruning 21717 entries (total 22058) from stat_archive... 
    [dryrun] pruning 715 entries (total 736) from stat_daily... 
    [dryrun] pruning 3655 entries (total 5681) from stat_dpi... 
    [dryrun] pruning 15583 entries (total 16050) from stat_hourly... 
    [dryrun] pruning 372 entries (total 382) from stat_life... 
    [dryrun] pruning 0 entries (total 0) from stat_minute... 
    [dryrun] pruning 56 entries (total 56) from stat_monthly... 
    bye
    

    Then edit the prune script to set dryrun=false and run it again:

    MongoDB shell version: 2.4.10
    connecting to: 127.0.0.1:27117/test
    pruning data older than 7 days (1541582296632)... 
    switched to db ace
    pruning 12404 entries (total 12404) from alarm... 
    pruning 16036 entries (total 16127) from event... 
    pruning 76 entries (total 77) from guest... 
    pruning 24941 entries (total 25070) from rogue... 
    pruning 365 entries (total 379) from user... 
    pruning 0 entries (total 10) from voucher... 
    { "ok" : 1 }
    { "ok" : 1 }
    switched to db ace_stat
    pruning 0 entries (total 313) from stat_5minutes... 
    pruning 21717 entries (total 22058) from stat_archive... 
    pruning 715 entries (total 736) from stat_daily... 
    pruning 3655 entries (total 5681) from stat_dpi... 
    pruning 15583 entries (total 16050) from stat_hourly... 
    pruning 372 entries (total 382) from stat_life... 
    pruning 0 entries (total 0) from stat_minute... 
    pruning 56 entries (total 56) from stat_monthly... 
    { "ok" : 1 }
    { "ok" : 1 }
    bye
    
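    Instead of opening an editor, you can flip the flag with sed. This assumes the script's mode is controlled by a dryrun variable near the top (suggested by the [dryrun] prefix in the output, not verified against the actual script); the snippet below uses a stand-in file to illustrate:

    ```shell
    # Stand-in for the downloaded script; the real file is wherever wget saved it.
    printf 'var dryrun = true;\n' > /tmp/mongo_prune_js.js
    # Flip the flag in place, then confirm the change.
    sed -i 's/dryrun = true/dryrun = false/' /tmp/mongo_prune_js.js
    cat /tmp/mongo_prune_js.js
    # prints: var dryrun = false;
    ```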

    Start the unifi service:

    systemctl start unifi
    

    The root cause of this issue is that the Cloud Key currently runs a custom Debian system on ARMv7, a 32-bit architecture, so MongoDB cannot handle data larger than 2 GB. I haven't tried the Cloud Key 2 and 2 Plus, but I hope they're ARMv8 based. At the moment you can limit data retention as a workaround.
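    A quick way to check whether a given controller is at risk — a sketch, assuming the same database path used in the commands above:

    ```shell
    # armv7l indicates a 32-bit ARM build.
    uname -m
    # Watch for sizes approaching the 2 GB limit.
    du -sh /usr/lib/unifi/data/db 2>/dev/null || echo "db path not found"
    ```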

  • Custom CSS for Zammad

    cd /opt/zammad/app/assets/stylesheets/custom/
    vi custom.css
    # Editing...
    zammad run rake assets:precompile
    systemctl restart zammad-web
    
  • How to Solve Zammad `BackgroundJobSearchIndex` Errors

    1339 failing background jobs.
    Failed to run background job #1 ‘BackgroundJobSearchIndex’ 10 time(s) with 228 attempt(s).

    Run `zammad run rails c`, then:

    # Convert notification_sound.enabled preferences stored as the strings
    # 'true'/'false' back to real booleans, then save each affected user.
    items = SearchIndexBackend.search('preferences.notification_sound.enabled:*', 3000, 'User')
    items.each {|item|
      next if !item[:id]
      user = User.find_by(id: item[:id])
      next if !user
      next if !user.preferences
      next if !user.preferences[:notification_sound]
      next if !user.preferences[:notification_sound][:enabled]
      if user.preferences[:notification_sound][:enabled] == 'true'
        user.preferences[:notification_sound][:enabled] = true
        user.save!
        next
      end
      next if user.preferences[:notification_sound][:enabled] != 'false'
      user.preferences[:notification_sound][:enabled] = false
      user.save!
      next
    }
    
    # Then re-run the remaining background jobs:
    Delayed::Job.all.each {|job|
      Delayed::Worker.new.run(job)
    } 
    

    Further reading: Failing background jobs; Failed to run background job #1 ‘BackgroundJobSearchIndex’ 10 time(s)

  • Disable Elasticsearch for Existing Zammad Installs

    Zammad can actually work very well without Elasticsearch for a small number of tickets. Please note that after disabling Elasticsearch you may see many BackgroundJobSearchIndex errors like the ones above, so use this at your own risk.

    zammad run rails r "Setting.set('es_url', '')" # set it empty
    zammad run rake searchindex:rebuild
    systemctl stop elasticsearch
    systemctl disable elasticsearch
    systemctl mask elasticsearch 

    Further reading: Set up Elasticsearch – Zammad Docs

  • OpenJDK 64-Bit Server VM warning: XX:ParallelGCThreads=N

    Just add -XX:-AssumeMP to /etc/elasticsearch/jvm.options. This stops the JVM from assuming a multiprocessor machine, which silences the ParallelGCThreads warning on single-CPU systems.
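    The option goes on its own line in the JVM options file; lines starting with # are comments:

    ```
    # /etc/elasticsearch/jvm.options
    # Don't assume a multiprocessor machine; silences the
    # ParallelGCThreads warning on single-CPU systems.
    -XX:-AssumeMP
    ```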