Got 6x D-Cell 5000mAh rechargeable batteries. [check]
Got LED replacement with 220 lumen. [check]
Tested flashlight with dog in complete darkness with fog. [check]
Got flood light. [check!]
Surprise! I’ve got a job offer on stackoverflow.careers.
I got an invitation for that career website quite a long time ago, when it was still in beta, and I thought European companies were more attached to Monster, LinkedIn and Xing when it comes to recruitment.
Maybe that offer is a hint that this fresh career platform is gaining ground against the big players. The invite-only fashion of so.careers may be a bit of a show stopper for popularity; on the other hand, they try to win over the elite hackers through this approach.
The stackexchange websites are the quasi-standard for Q&A; it's nice to finally see that platform grow in the careers sector too.
During the weekend I managed to bring down my webserver and my GitLab service, due to a failed GitLab update.
Well, the install script of the new GitLab version did some not-so-good things, which prevented me from updating without hassle. In the end I only had two projects up there, and they were easily restored, so I did not try to restore the database; instead I just dropped the old one and recreated the projects within the new GitLab installation.
Everything back to normal. 🙂
Disclaimer: “Fast howto” means in my world just dumping my config files for everyone to use, explanations are optional.
Used tools: systemd, rsync (and cryptsetup)
Just write a systemd service file like my ‘projectBackup.service’:
You can replace ‘projects’ with whatever you like/need.
[Unit]
# device file in question; look for it with:
#   systemctl --all --full | grep plugged
# you could also use UUIDs, which would be safer
# the following is just my case:
After=dev-mapper-projects.device

[Service]
# script to start
ExecStart=/root/projectBackup.sh

[Install]
# obvious
WantedBy=dev-mapper-projects.device
#!/bin/bash
# just mount it...
mount /dev/mapper/projects /mnt/backup
# the magic happens here...
rsync -avhim --partial --exclude '**/tmp/**' /mnt/somewhere/projects/ /mnt/backup
# we're done, so get da fuq out!
umount /mnt/backup
# hide my data... :)
cryptsetup luksClose /dev/mapper/projects
The only thing I still have to do after plugging the HDD in is:
cryptsetup luksOpen /dev/sdd1 projects
Everything else happens by itself. Of course you have no progress bar or anything to tell you when it's done. But you can use
journalctl -fu projectBackup.service
and you see all the output of rsync, just as if you had run the command manually in your terminal, plus a really nice timestamp.
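If you want to sanity-check the rsync flags from the script before trusting the service with real data, you can run them against throwaway directories first. This is just a sketch with example paths (not the ones from the service); the flags are the same as above:

```shell
# Throwaway source with a real file and a tmp/ directory that should be excluded
mkdir -p /tmp/rsync-demo/src/keep /tmp/rsync-demo/src/tmp
mkdir -p /tmp/rsync-demo/dst
echo "important" > /tmp/rsync-demo/src/keep/data.txt
echo "scratch"   > /tmp/rsync-demo/src/tmp/junk.txt
# -a archive, -v verbose, -h human-readable sizes, -i itemize changes,
# -m prune empty directories from the transfer
rsync -avhim --partial --exclude '**/tmp/**' /tmp/rsync-demo/src/ /tmp/rsync-demo/dst/
ls -R /tmp/rsync-demo/dst
```

keep/data.txt should arrive in the destination while tmp/junk.txt stays excluded, which confirms the '**/tmp/**' pattern does what you expect.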
Disclaimer: “Fast howto” means in my world just dumping my config files for everyone to use, explanations are optional. 🙂
Hint: duplicity is very good at encrypting your backed-up data, but my backup data disks are already encrypted through LUKS, so I'm not using this feature!
[Unit]
Description=weekly full home directory backup

[Timer]
# see systemd.time(7) manual page for other scheduling options
OnCalendar=Sun
# run immediately if we missed a backup for some reason
Persistent=true
OnBootSec=15min
#OnUnitActiveSec=1w

[Install]
WantedBy=timers.target
[Unit]
Description=full home backup

[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/root/homeBackup.sh
#!/bin/bash
# backup to my NAS through NFS mount
# (mounted at boot)
duplicity --no-encryption --asynchronous-upload --volsize 100 -v6 \
    --exclude '**/*/.cache' --exclude '**/whatever/you/like' \
    /home file:///mnt/your/backup/path
# maybe a reload is needed
systemctl daemon-reload
# run manually (for testing maybe?)
systemctl start homeBackup.service
# enable the timer
systemctl enable homeBackup.timer
# just enabling is not enough!
# start the timer
systemctl start homeBackup.timer
# after starting, the first run starts immediately; after that,
# at the times specified with "OnCalendar"
# check all timers (including inactive ones, --all)
systemctl list-timers --all
# check the backup log
journalctl -u homeBackup.service
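Getting data back out works with duplicity's restore and verify actions against the same target URL. A hedged sketch; the backup URL is the one from the script above, and "someuser/notes.txt" is a made-up path relative to the backed-up /home:

```shell
# Restore the whole latest backup into a scratch directory
# (never restore straight over the live /home)
duplicity restore --no-encryption file:///mnt/your/backup/path /tmp/home-restored

# Fetch a single file as it existed 7 days ago
duplicity restore --no-encryption -t 7D --file-to-restore someuser/notes.txt \
    file:///mnt/your/backup/path /tmp/notes.txt

# Compare the backup chain against the live data
duplicity verify --no-encryption file:///mnt/your/backup/path /home
```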
First Heartbleed, now Shellshock.
The new big bugs in important libraries/software have their own ‘brand’ name now. A bit crazy and a bit overkill in my opinion, but at least now such bad security holes are getting enough attention to be fixed FAST, hopefully.
I’ve updated my server immediately after good patches were available, so for the Shellshock vulnerability the update was applied just 5 minutes before this posting. 😉
So with no CGI scripts in use and stripped-down SSH shells, my server is as secure as possible again.
Well, some patches are still on the way…
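If you want to check whether a bash binary is still affected by the original Shellshock bug (CVE-2014-6271), the well-known environment variable probe does the trick:

```shell
# Classic CVE-2014-6271 probe: a vulnerable bash executes the smuggled
# "echo vulnerable" while importing x; a patched bash prints only the last line.
env x='() { :;}; echo vulnerable' bash -c 'echo shellshock check done'
```

On a patched bash the output is just "shellshock check done"; if you also see "vulnerable", update immediately.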
Long-planned holiday – you're on your way, without a reliable internet connection at hand – and exactly then the worst things happen.
Got my server shut down due to an abuse report about my email service. Of course I did not notice the problem, because I was not at home. I only got an email from my provider.
I immediately checked the logs from my smartphone (!) and deactivated the compromised test account on my email service, but that did not prevent the server from being shut down, because the queues of the postfix daemon were full of spam mails waiting to be sent.
Well, learned something the hard way: Never forget to get rid of test accounts with too simple passwords.
After one and a half weeks I finally made it back home to send a statement letter through the post office for the reactivation of my server.
First moves after reactivation: checking all logs, removing possible entry points, clearing mailboxes, installing fail2ban and other tools to prevent future brute-force login attacks.
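For reference, a minimal fail2ban setup for this scenario could look like the following sketch: a jail.local enabling the stock sshd and postfix filters (the thresholds are just example values, not my actual config; adjust log paths and ban times for your distro).

```ini
# /etc/fail2ban/jail.local -- local overrides; the jail names refer to
# filters shipped with fail2ban, the numbers are example values
[DEFAULT]
bantime  = 3600
findtime = 600
maxretry = 5

[sshd]
enabled = true

[postfix]
enabled = true
```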
Last but not least: Searching for more monitoring possibilities when I’m somewhere out in the world, even with no internets…
On Friday I finally learned how to f*** up my laptop: just unplug it accidentally while running a system update with “pacman -Su”.
Result: kernel panic on reboot, and no help until I found a USB stick containing an archlinux image.
After booting and fixing the kernel image I encountered multiple strange errors from libraries which hadn't even been touched during that failed update run.
So after searching, and deeper searching, with no evidence of which packages were damaged, I finally decided to just reinstall the whole system.
First I reinstalled all packages installed by dependencies and then all explicitly installed packages.
# deps:
pacman -S --asdeps $(pacman -Qnqd)
# explicit packages:
pacman -S $(pacman -Qnqe) --force
That ‘--force’ was required due to some nasty errors about already-existing files, which I did not care about.
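In hindsight, pacman itself can help point at damaged packages by checking installed files against its local database; a sketch (Arch-specific, pacman 4.1 or newer, so not runnable elsewhere):

```shell
# Quick pass: list packages that have missing files
pacman -Qk | grep -v ', 0 missing files'

# Deeper pass: verify size, permissions and modification data
# against each package's mtree metadata
pacman -Qkk
```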
Setting up GitLab is quite straightforward on archlinux, following this article.
The wiki article even mentions the procedure to run GitLab behind an apache https proxy. The only thing I missed was a hint about changing the base URL of GitLab so that repository links use the correct domain instead of ‘localhost’.
For this it is necessary to change the following section in gitlab.yml:
## GitLab settings
gitlab:
  ## Web server settings (note: host is the FQDN, do not include http://)
  host: localhost
  port: 80
  https: false
Here you have to change localhost to your domain name.
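Assuming the server is reachable as git.example.com behind the https proxy (a placeholder domain, substitute your own), the section would end up as:

```yaml
## GitLab settings
gitlab:
  ## Web server settings (note: host is the FQDN, do not include http://)
  host: git.example.com
  port: 443
  https: true
```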
Looks simple, but sometimes you search a long time for simple solutions.
Now my domain mmo.to and all subdomains are certified by StartSSL.
Hopefully you never again have to make an SSL certificate exception in your browser. 🙂
Nice service from StartSSL: the whole personal validation was done within 3 hours, and the next day I was able to generate my SSL certificates.
And of course I updated my server to get rid of the vulnerable openssl version.