Gitlab still a pain to update…

But it gets smoother.

Still no ‘just call the update command’, but at least it’s better than before. This time a dependency caused the biggest headache: Ruby. Rebuilding the AUR package after upgrading the whole system solved that issue. There are still some warnings and exceptions here and there, but the application is running again.

The upgrade of PostgreSQL from 9.3 to 9.4 in the Arch Linux repository still caught me off guard and prevented some services from starting. The upgrade instructions are straightforward, though, and the problem was solved in less than 3 minutes.
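For reference, the dance looks roughly like this on Arch (the package name and the /opt path are from memory of the wiki instructions, so double-check them before running anything):

# stop the cluster, pull in the new server plus the old 9.3 binaries
systemctl stop postgresql.service
pacman -S postgresql postgresql-old-upgrade

# move the old data dir aside and init a fresh one
mv /var/lib/postgres/data /var/lib/postgres/olddata
mkdir /var/lib/postgres/data
chown postgres:postgres /var/lib/postgres/data
su - postgres -c "initdb --locale $LANG -D /var/lib/postgres/data"

# migrate the data, then start the new cluster
su - postgres -c "pg_upgrade -b /opt/pgsql-9.3/bin -B /usr/bin -d /var/lib/postgres/olddata -D /var/lib/postgres/data"
systemctl start postgresql.service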

Having the server completely up to date and running again is quite a satisfying experience. 🙂

My next concerns are now some more sophisticated backup routines…

GitLab updates via the Arch AUR

Once again, GitLab updates are quite exhausting. Or better said: not GitLab itself, but the Arch Linux repository updates in combination with GitLab.

Ruby got a new 2.2 release while my GitLab instance is still running on 2.1. Downgrading Ruby kept my instance from collapsing, but as usual upgrading GitLab is quite painful, because the AUR package has never quite completed all its tasks correctly as you would expect. And the downgrade messed up other packages relying on Ruby, so now I need to get that fixed as well. 😐
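For anyone curious, the downgrade itself was the easy part: something along these lines, pulling the old build from the pacman cache (the version string below is just an example) and pinning it so the next -Syu doesn’t pull 2.2 right back in.

# reinstall the previous ruby build from the local package cache
pacman -U /var/cache/pacman/pkg/ruby-2.1.5-1-x86_64.pkg.tar.xz

# keep pacman from upgrading it again: add this to /etc/pacman.conf
# IgnorePkg = ruby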

Maybe I should just announce a GitLab upgrade day every two months on a weekend? Well, I won’t do that today for sure, too much can go wrong for a chilly Sunday evening. Maybe next weekend. 🙂

Mag-Lite™

Got MagLite™ 6D-Cell flashlight. [check]
Got 6x D-Cell 5000mAh rechargeable batteries. [check]
Got LED replacement with 220 lumen. [check]
Tested flashlight with dog in complete darkness and fog. [check]

Got flood light. [check!]

stackoverflow.careers

Surprise! I’ve got a job offer on stackoverflow.careers.

I got an invitation for that career website quite a long time ago, when the thing was still in beta, and I thought European companies were more attached to Monster, LinkedIn and XING when it comes to recruitment.

Maybe that offer is a hint that this fresh career platform is gaining ground against the big players. The invite-only fashion of so.careers may be a bit of a show stopper for popularity; on the other hand, they are trying to win over the elite hackers with this approach.

The Stack Exchange websites are the quasi-standard for FAQs, so it’s nice to finally see that platform grow in the careers sector too.

Evil gitlab update…

Over the weekend I managed to bring down my web server and my GitLab service due to a failed GitLab update.

Well, the install script of the new GitLab version did some not-so-good things, which prevented me from updating without hassle. But after all, I had only put two projects up there, which were easily restored, so I did not try to restore the database; I just dropped the old one and created the projects again within the new GitLab installation.
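The ‘dropping’ part was nothing fancy; roughly this, assuming the default database name of a source/AUR-style install and the install path and user the Arch package uses (adjust both to your setup):

# throw away the old database
sudo -u postgres dropdb gitlabhq_production

# let GitLab rebuild an empty, seeded database
cd /usr/share/webapps/gitlab
sudo -u gitlab bundle exec rake gitlab:setup RAILS_ENV=production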

Everything back to normal. 🙂

Fast Howto: half-automated backups for HDD docking stations

Disclaimer: “Fast howto” in my world means just dumping my config files for everyone to use; explanations are optional. :)

Used tools: systemd, rsync (and cryptsetup)

Just write a systemd service file like my ‘projectBackup.service’:

You can replace ‘projects’ with whatever you like/need.

[Unit]
#device unit in question; look it up with:
#systemctl --all --full | grep plugged
#you could also use UUIDs, which would be safer
#the following is just my case:
After=dev-mapper-projects.device

[Service]
#run the script once per device appearance
Type=oneshot
#script to start
ExecStart=/root/projectBackup.sh

[Install]
#obvious
WantedBy=dev-mapper-projects.device
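One step the unit file itself doesn’t cover: the WantedBy= hook only takes effect once the service has been enabled, so after dropping the file into /etc/systemd/system you still need (unit name as above):

systemctl daemon-reload
systemctl enable projectBackup.service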

The script:

#!/bin/bash

#just mount it...
mount /dev/mapper/projects /mnt/backup
#the magic happens here...
rsync -avhim --partial --exclude '**/tmp/**' /mnt/somewhere/projects/ /mnt/backup
# we're done so get da fuq out!
umount /mnt/backup
# hide my data... :)
cryptsetup luksClose /dev/mapper/projects

The only thing I still have to do after plugging in the HDD is:

cryptsetup luksOpen /dev/sdd1 projects

Everything else happens by itself. Of course, you have no progress bar or anything to tell you when it’s done. But you can use

journalctl -fu projectBackup.service

and you’ll see all the rsync output just like in your terminal when running the command manually, plus a really nice timestamp.

Fast howto: Weekly backup of your home dir with systemd timers and duplicity

Disclaimer: “Fast howto” in my world means just dumping my config files for everyone to use; explanations are optional. 🙂

Hint: duplicity is very good at encrypting your backed-up data, but my backup disks are already encrypted with LUKS, so I’m not using this feature!

/etc/systemd/system/homeBackup.timer

[Unit]
Description=weekly full home directory backup

[Timer]
# see systemd.time(7) manual page for other scheduling options
OnCalendar=Sun
# run immediately if we missed a backup for some reason
Persistent=true
# additionally fire 15 minutes after every boot
OnBootSec=15min
#OnUnitActiveSec=1w

[Install]
WantedBy=timers.target

/etc/systemd/system/homeBackup.service

[Unit]
Description=full home backup

[Service]
# run once per activation
Type=oneshot
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/root/homeBackup.sh

/root/homeBackup.sh

#!/bin/bash

#backup to my NAS through NFS mount 
#(mounted at boot)
duplicity --no-encryption --asynchronous-upload --volsize 100 -v6 --exclude '**/*/.cache' --exclude '**/whatever/you/like'  /home file:///mnt/your/backup/path
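Since a backup is only as good as its restore, here is the matching direction as a quick sketch (paths as in the script above; the file name is just an example):

# list what the newest backup set contains
duplicity --no-encryption list-current-files file:///mnt/your/backup/path

# pull a single file back out of the latest backup
duplicity --no-encryption --file-to-restore user/Documents/notes.txt \
  file:///mnt/your/backup/path /tmp/notes.txt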

Further hints:

#maybe a reload is needed
systemctl daemon-reload
#run manually (for testing maybe?)
systemctl start homeBackup.service
#enable timer
systemctl enable homeBackup.timer
#just enabling is not enough! 
#start the timer
systemctl start homeBackup.timer
#after starting the timer the first run fires immediately, after that at the times specified with "OnCalendar"
#check all timers (with inactives, --all)
systemctl list-timers --all
#check backup log
journalctl -u homeBackup.service

Shellshock

First Heartbleed, now Shellshock.

The new big bugs in important libraries and software have their own ‘brand’ names now. A bit crazy and a bit overkill in my opinion, but at least such bad security holes now get enough attention to be fixed FAST, hopefully.

I’ve updated my server immediately after good patches were available; for the Shellshock vulnerability the update was applied just 5 minutes before this posting. 😉
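The quick test that was making the rounds, for anyone who wants to check their own bash (a patched shell prints only the echo line, an unpatched one prints ‘vulnerable’ first):

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"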

So apart from not using CGI scripts and keeping the SSH shells stripped down, my server is as secure as possible again.

[edit] Well, some patches are still on the way…

When you’re not at home the mice dance on the table…

Long-planned holiday – you’re on your way, you don’t have a reliable internet connection at hand – and exactly then the worst things happen.

My server got shut down due to an abuse report against my email service. Of course I did not notice the problem, because I was not at home; I only got an email from my provider.

I immediately checked the logs on my smartphone (!) and deactivated the compromised test account on my email service, but that did not prevent the server from being shut down, because the queues of the Postfix daemon were full of spam mails waiting to be sent.
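For the record, once you have shell access again, emptying a spam-flooded Postfix queue comes down to something like this (deleting only the deferred queue is usually enough):

# see how bad it is
postqueue -p | tail -n 1

# drop everything stuck in the deferred queue
postsuper -d ALL deferred

# or drop the whole queue
postsuper -d ALL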

Well, I learned something the hard way: never forget to get rid of test accounts with too-simple passwords.

After one and a half weeks I finally made it back home to get a statement letter out via the post office for the reactivation of my server.

First moves after reactivation: checking all logs, removing possible entry points, clearing mailboxes, and installing fail2ban and other tools to prevent future brute-force login attacks.
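The fail2ban part is just a few lines of configuration; a minimal sketch of the kind of jails I mean (jail and filter names are the ones shipped with current fail2ban versions, ban times are arbitrary examples):

# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3
bantime  = 3600

[postfix-sasl]
enabled  = true
maxretry = 3
bantime  = 86400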

Last but not least: Searching for more monitoring possibilities when I’m somewhere out in the world, even with no internets…

Laptop crash during update

On Friday I finally learned how to f*** up my laptop: just unplug it accidentally while a system update with “pacman -Su” is running.

Result: kernel panic on reboot, and no way forward until I found a USB stick with an Arch Linux image on it.

After booting from it and fixing the kernel image, I encountered multiple strange errors from libraries which hadn’t even been touched during that failed update run.

So after searching and deeper searching, with no evidence of which packages were damaged, I finally decided to just reinstall the whole system.
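In hindsight, letting pacman check the installed files itself would probably have found the culprits faster; a sketch (pacman 4.1 or newer; the exact wording of the summary lines may differ, so adjust the grep if needed):

# compare every installed package's files against its mtree data
# and show only the packages that report altered files
pacman -Qkk | grep -v ' 0 altered files'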

First I reinstalled all packages that had been installed as dependencies, and then all explicitly installed packages.

#deps:
pacman -S --asdeps $(pacman -Qnqd)
#explicit packages:
pacman -S $(pacman -Qnqe) --force

That ‘--force’ was required due to some nasty errors about already existing files, which I did not care about.