[tek-nol-uh-jee]

Let’s Encrypt wildcard certs

Posted on

I searched for half an hour for a how-to on Let’s Encrypt wildcard certificates with automatic renewal.

All the sites I found just promoted the manual method, where I would have to add DNS entries by hand every three months – neeeeeever!

Then I stumbled upon acme.sh. This ACME client for Let’s Encrypt even has plugins for most providers that offer DNS configuration and expose an API. And there is a plugin for my provider, netcup.de.

Couldn’t be better. Just set the environment variables as described in the plugin’s (quite short) how-to and run the command to get a new cert.
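
For the netcup plugin that boils down to exporting the API credentials before the first run. A minimal sketch, assuming the variable names from the plugin’s how-to (the values are placeholders from your netcup customer control panel):

export NC_Apikey="<your api key>"
export NC_Apipw="<your api password>"
export NC_CID="<your customer number>"

acme.sh saves these to its account.conf on the first successful run, so later renewals work unattended.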

It might be a good idea to also request a bigger key size, because the default is just 2048 bits.

acme.sh --issue --dns dns_netcup -d example.com -d '*.example.com' -k 4096

And you’re done. 

Now the only work left is pointing your services to the new certificates (see the sketch after the list).

For me those were:

  • apache 
  • quasselcore
  • postfix
  • dovecot
  • prosody (xmpp/jabber)
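
A sketch of how to wire that up with acme.sh’s install mechanism so renewals stay unattended – paths and the reload command are placeholders, adapt them per service:

acme.sh --install-cert -d example.com \
  --key-file       /etc/ssl/private/example.com.key \
  --fullchain-file /etc/ssl/certs/example.com.fullchain.pem \
  --reloadcmd      "systemctl reload apache2 postfix dovecot"

The cron job acme.sh sets up will rerun the reloadcmd after every successful renewal, so the services pick up the fresh cert automatically.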

Again, Qualys SSL Labs and MxToolbox were a great help in checking that everything works as expected – thanks for that, guys!

[tek-nol-uh-jee]

Editing office documents directly inside nextcloud

Posted on

It bothered me for a long time that I couldn’t edit office documents directly online on my own nextcloud. Then I found the collabora plugin among the nextcloud apps and read up on it on the nextcloud website.

It’s easier than you think.

First Step: Get yourself the docker container running

The simplest solution would be a docker-compose.yml file like this one:

version: '2'
services:
  collabora:
    image: collabora/code
    environment:
      # the nextcloud host that is allowed to talk to this instance
      - domain=cloud.mmo.to
      - username=<username>
      - password=<password>
    restart: always
    ports:
      # bind to localhost only, the webserver proxies to it
      - 127.0.0.1:9980:9980
    networks:
      - collabora
networks:
  collabora:
    driver: bridge

collabora/code is the latest development version, so for private use it’s okay. 🙂

As a side note: I have no idea what the username and password are for in the docker container (the built-in admin console, presumably), but I’ve set them just to be sure.
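
With the container up, a quick smoke test against the WOPI discovery endpoint shows whether it answers – the -k is needed because the container serves a self-signed cert by default:

docker-compose up -d
curl -k https://127.0.0.1:9980/hosting/discovery | head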

Don’t forget to configure your webserver with a subdomain vhost and all the proxy configuration parts mentioned in the nextcloud tutorial.
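
For reference, roughly the proxy part as the docs described it at the time – a sketch assuming Apache with mod_proxy, mod_proxy_wstunnel and mod_ssl, inside the vhost of the (hypothetical) subdomain office.example.com:

AllowEncodedSlashes NoDecode
SSLProxyEngine On
ProxyPreserveHost On

# static files (loleaflet is the browser client)
ProxyPass /loleaflet https://127.0.0.1:9980/loleaflet retry=0
ProxyPassReverse /loleaflet https://127.0.0.1:9980/loleaflet

# WOPI discovery URL
ProxyPass /hosting/discovery https://127.0.0.1:9980/hosting/discovery retry=0
ProxyPassReverse /hosting/discovery https://127.0.0.1:9980/hosting/discovery

# websockets for the open documents
ProxyPassMatch "/lool/(.*)/ws$" wss://127.0.0.1:9980/lool/$1/ws nocanon

# everything else: downloads, image uploads, …
ProxyPass /lool https://127.0.0.1:9980/lool
ProxyPassReverse /lool https://127.0.0.1:9980/lool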

Second Step: Configure your Let’s Encrypt cert for the subdomain

Well, that’s kinda obvious and god damn simple, so I’ll skip to the next and last step.
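
By the way: if you followed the wildcard post, the *.example.com cert covers this subdomain already. Otherwise it’s a single command (hostname is a placeholder):

acme.sh --issue --dns dns_netcup -d office.example.com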

Last Step: Configure your collabora app in nextcloud

… with the subdomain of your collabora docker instance behind the webproxy. 

And it magically works. I was surprised too! 

If anything doesn’t work as expected, check the nextcloud page mentioned above or maybe the website of collabora itself.

[tek-nol-uh-jee]

IPv6

Posted on

Well, it’s about time that I handle that topic too.

So here’s a list of what I had to do to get it working on all the services I have running (a few example commands follow after the list):

  • Checking the IPv6 subnet I got from my provider
    • Setting one of those IPs on the network device
  • Checking DNS entries
    • Adding AAAA records for “*”, “@” and the server name
    • Adding an IPv6 reverse DNS name
    • For email I had to correct my SPF entry
  • Service configurations I had to change or to check
    • Apache, just had to check that the Listen configuration listens on all interfaces
    • Postfix, here I had to add the IPv6 protocol
  • Gladly the docker-internal network is completely hidden, so I don’t have to care about anything running behind my apache proxy. The SSH server is listening on all devices anyway, and I currently don’t care about external ssh access to my gitlab instance – that may stay on IPv4 for a while.
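
A few of those steps as commands – a sketch using the IPv6 documentation prefix and eth0 as placeholders:

# set one address of the delegated subnet on the network device
ip -6 addr add 2001:db8:1234::1/64 dev eth0

# verify the AAAA and reverse DNS entries
dig AAAA example.com +short
dig -x 2001:db8:1234::1 +short

# let postfix use IPv6 next to IPv4
postconf -e 'inet_protocols = all'
systemctl reload postfix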

What might help when you’re testing IPv6 is the following test website: https://www.mythic-beasts.com/ipv6/health-check

So far everything is working. What’s still bugging me is that you can’t force your browser to use IPv6 when visiting a site that supports it – you don’t even know which protocol it used…

[tek-nol-uh-jee]

Zuul, Jenkins, Gerrit and Git submodules

Posted on

So, we’ve got a Git -> Gerrit -> Zuul (w/ Gearman) -> Jenkins setup at work, and lately we started to use Git submodules with one repository.

Setting up the quality gate with Zuul and Gerrit for a normal git repository is quite straightforward, and I won’t mention that any further. Our problem was that we wanted a build of the parent repository of our submodule repository whenever a change is committed for review or merge.

Zuul doesn’t give you any options here: it just has a single project configuration and doesn’t support project dependencies.

BUT it supports build job dependencies!

So the solution is to build your submodule standalone in the first job, which can be the standard review job based on a Jenkinsfile inside the submodule repository, and then start a build job with the parent repository which depends on the result of the standalone submodule build. This second job can’t be a standard review build job because it has to do some things differently, but the standard Jenkinsfile for the review of the parent repository can be used with minor modifications.

For your parent repository you’ll already be using a checkout method which also retrieves the submodule repository; it may look like this:

def zuul_fetch_repo() {
    checkout changelog: true, poll: false, scm: [
        $class: 'GitSCM',
        branches: [[name: 'refs/heads/zuul']],
        doGenerateSubmoduleConfigurations: false,
        submoduleCfg: [],
        userRemoteConfigs: [[refspec: '+$ZUUL_REF:refs/heads/zuul', url: '$ZUUL_URL/$ZUUL_PROJECT']],
        extensions: [
            [$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true,
             recursiveSubmodules: true, reference: '', trackingSubmodules: true],
            [$class: 'CleanBeforeCheckout']
        ]
    ]
}

Because you have to use a special job for this task, you also have to change the fetch function away from the generic $ZUUL_URL/$ZUUL_PROJECT to a hardcoded checkout url.

Those Zuul variables are instead used to update the submodule repository to the change Zuul provides; the resulting fetch function could look like this:

def zuul_fetch_repo() {
    // hardcoded checkout of the parent repository straight from Gerrit
    checkout changelog: true, poll: false, scm: [
        $class: 'GitSCM',
        branches: [[name: 'master']],
        doGenerateSubmoduleConfigurations: false,
        submoduleCfg: [],
        userRemoteConfigs: [[url: 'ssh://<user>@your-gerrit.url:29418/parent-repo']],
        extensions: [
            [$class: 'SubmoduleOption', disableSubmodules: false, parentCredentials: true,
             recursiveSubmodules: true, reference: '', trackingSubmodules: true],
            [$class: 'CleanBeforeCheckout']
        ]
    ]

    // pull the change under test from zuul's merger into the submodule
    sh '''
    cd path/to/your/submodule/repository
    git pull $ZUUL_URL/$ZUUL_PROJECT +$ZUUL_REF:refs/heads/zuul
    '''
}

And that’s it! You just have to somehow get the change from the project you configured in zuul into the submodule, and you get a build of the parent project with the change commit from the submodule integrated. Of course you can make that a bit fancier, but that’s left as an exercise for the reader.

Finally, here’s a little snippet of the zuul config reflecting that.

projects:
  - name: submodule-repo
    review:    # the zuul pipeline
      - review:    # standard review job, submodule standalone
        - review-parent-with-submodule    # parent project with submodule checkout


[whatever]

Getting rid of an old email address…

Posted on

Just in case anyone wonders: I’ve deleted one of my old email addresses, ‘phoenix@minhiriath.org’ (2004–2017, rest in peace, now without spam).

It had just turned into a spam honeypot lately, and I had already stopped using the address actively in 2013, as far as I remember.

I kept it active for the unlikely chance that someone without my current addresses would want to get in contact with me. Now I don’t care anymore; it’s deleted.

[tek-nol-uh-jee]

Integrity checks of different file types after hard disk crashes

Posted on

So, your hard disk crashed?

You rescued it with ddrescue/other tools?

Now you don’t know if the files are still intact?

Here are some solutions for a few media file formats (bulk-check one-liners follow after the list):

  • Movies and audio (avi, mpeg, mp3, mkv, webm, … everything ffmpeg can decode)
    • ffmpeg -v error -i "$1" -f null - 2>"$1".log
    • This decodes a movie or audio file and reports all errors into a logfile. If you do that for each of your files, you get a bunch of logfiles you can grep for read errors, which gives you an idea which files are damaged.
    • Wrapped into a little check.sh script that skips files which already have a logfile:

      #!/bin/bash
      # check.sh: decode one file, log every error, skip files already checked
      if [ ! -e "$1.log" ]; then
          echo "Checking file: $1"
          ffmpeg -v error -i "$1" -f null - 2>"$1".log
          ls -l "$1".log
      else
          echo "$1 already checked."
      fi

      # run it over every file bigger than 1 MB
      find . -type f -size +1M -exec ./check.sh "{}" \;

  • Pictures (png, jpg/jpeg)
    • Use pngcheck for pngs
    • Use jpeginfo -c for jpgs
  • Music (flac)
    • Just use flac -t to test a flac file
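
The same bulk-check idea works for those formats too – a sketch, the exact output may vary by tool version:

# pngcheck with -q only reports broken files
find . -iname '*.png' -exec pngcheck -q {} \; > png-errors.log

# jpeginfo -c prints a status per file, filter out the healthy ones
find . -iname '*.jp*g' -exec jpeginfo -c {} \; | grep -v '\[OK\]' > jpg-errors.log

# flac -t decodes without writing output, -s silences everything but errors
find . -iname '*.flac' -exec flac -t -s {} \; 2> flac-errors.log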

When I find other useful integrity check methods for other file types or media, I may add them.

[tek-nol-uh-jee]

Fuck you, probability!

Posted on

So why do we nerds use NAS/Server storage with RAID systems? Because we know hardware can fail, especially mechanical hardware.

And there are different variants of RAID (redundant array of independent disks) systems:

  • RAID 0 (striping: all disks just appear as one big one, no redundancy at all)
  • RAID 1 (one disk is mirrored to another one)
  • RAID 10 (half of the disks are mirrored to the other half)
  • RAID 5 (redundant parity information is stored so that 1 out of N disks can fail without data loss)
  • RAID 6 (like RAID 5 but with a second, independent set of parity, so that 2 out of N disks can fail)

Back to topic, why am I writing ‘FUCK YOU, probability’?

The idea for RAID 5/6 came out of the typical probability of how often a mechanical disk may fail. So if you’re almost paranoid, you choose RAID 6 for your data to be ‘absolutely’ sure you won’t lose it.

Yeah, of course, until you reach that point, like me, where over a fucking weekend 3 out of the 5 disks in your RAID 6 system fail.
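
Just to put a number on it, a back-of-the-envelope sketch assuming (unrealistically) that each disk fails independently with some small probability p per weekend:

P(\text{exactly 3 of 5 fail}) = \binom{5}{3}\, p^3 (1-p)^2 \approx 10\, p^3

With p = 10^{-3} that would be a chance of roughly 10^{-8} per weekend. The catch: real failures aren’t independent – same batch, same age, same workload – which is exactly where the textbook math lies to you.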

(more…)

[tek-nol-uh-jee]

Rescuing data from a damaged Windows HDD pt.II

Posted on

When I did the rescue from the first post and wrote it up, I did not know about ddrescue.

And it worked like a walk in the park; it just took eight and a half hours…

/tmp # ddrescue -d -r3 /dev/sdd2 blah.img blah.logfile
GNU ddrescue 1.21
Press Ctrl-C to interrupt
     ipos:  113173 MB, non-trimmed:        0 B,  current rate:       0 B/s
     opos:  113173 MB, non-scraped:        0 B,  average rate:   5280 kB/s
non-tried:        0 B,     errsize:    1362 kB,      run time:  8h 28m  5s
  rescued:  160990 MB,      errors:     1259,  remaining time:         n/a
percent rescued:  99.99%      time since last successful read:      1m  6s
Finished
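
(In the command above, -d reads the disk directly, bypassing the kernel cache, and -r3 allows three retry passes over the bad areas.) To check what survived, the image can then be mounted read-only – the mount point is just an example:

mkdir -p /mnt/rescue
mount -o ro,loop blah.img /mnt/rescue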

Just 1362kB couldn’t be recovered. Everything of importance could be viewed/retrieved. Perfect.

[tek-nol-uh-jee]

The Browser Struggle

Posted on

I was a long-time Firefox user because of the unique addon called ‘tree style tabs’, and this browser is still the only one implementing this feature the right way.

Basically there’s no alternative, period.

So now I’m facing the problem that Firefox gets slower and slower.

First, of course, I’m a tab whore; 50-100 tabs are normal, because I also use open tabs as some kind of ‘read-later’ feature. Tree style tabs come in handy here, because you’re able to collapse a tree and forget it for some time until you need those tabs again. So this part of the slowdown of my so beloved Firefox is explainable.

Second, it gets slow loading websites – wtf? – and that’s not really explainable: all the other browsers I’ve tested, even with a lot of tabs, still load pages fast. And Firefox even has the feature (also via plugins) of not loading tabs until you activate them, so a lot of concurrent page loads can’t be the cause of that slowdown.

Last but not least, Firefox eats A LOT of RAM. I even got problems on my workstation @work with 16GB RAM, swapping like hell. Of course the browser isn’t the only application I have running… but when you read the stats and the browser is eating more than a third of your memory, you begin to ask questions…

So there’s just no way around the fact that I have to find another browser supporting my needs. It goes so far that I’m really looking beyond my so fuc**ng needed tree style tabs plugin and am ready to ditch it. (more…)

[tek-nol-uh-jee]

Jenkins Pipeline: using string keys of a map to unstash artifacts in a loop

Posted on

Just using my blog as my permanent public notepad: build_result and test_result are maps whose keys were used as stash names earlier in the pipeline, and the counter loop instead of .each is the usual workaround for closure issues in CPS pipeline code.

// helper: unstash every artifact whose stash name is a key of the given map
def unstash_all(result_map) {
  def my_keys = result_map.keySet() as List

  // counter loop instead of .each, see above
  for (int i = 0; i < my_keys.size(); ++i) {
    def key_to_unstash = my_keys[i]
    print 'Unstashing ' + key_to_unstash
    unstash "${key_to_unstash}"
  }
}

unstash_all(build_result)
unstash_all(test_result)