On the 24th of January we had the first Python meetup for 2017, as always
at Hackerspace.gr. Following our typical setup we
started with two talks:
I kicked off the meetup with a MicroPython
on ESP8266 talk. I started with
quick intros to MicroPython and the ESP8266, moved on to using the REPL over
serial with minicom and ended spectacularly with blinking NeoPixels.
Spyros followed with a talk
about Nameko-based microservices and
his Tweetmark project, which saves tweets
for later reading and does it in style. He moved on to a live demo and
successfully fixed the obligatory demo bug, defeating ...
While working on migrating support.mozilla.org away from
Kitsune (which is a great community
support platform that needs love, remember that, internet) I needed to convert
about 4M database rows from a custom, Markdown-inspired format to HTML.
The challenge is that the conversion needs to happen as fast as possible, so we
can dump the database, convert the data and load it onto the new
platform with the minimum possible time between the first and the last step.
I spun up a fresh MySQL container and started hacking:
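As a sketch of the shape this work takes, the conversion can be fanned out over a process pool. The mini-converter below is illustrative only — the real Kitsune wiki syntax is much richer — and the pool size is an assumption:

```python
import re
from multiprocessing import Pool

def convert_to_html(text):
    """Hypothetical converter: only handles '''bold''' and ''italic''.

    The actual Kitsune markup needs a real parser; this stands in for it.
    """
    text = re.sub(r"'''(.+?)'''", r"<strong>\1</strong>", text)
    text = re.sub(r"''(.+?)''", r"<em>\1</em>", text)
    return text

def convert_rows(rows, processes=8):
    # The conversion is CPU-bound, so spread millions of rows across
    # worker processes; chunksize keeps IPC overhead low.
    with Pool(processes) as pool:
        return pool.map(convert_to_html, rows, chunksize=1000)
```

The interesting tuning knobs are the number of worker processes and the chunk size handed to each worker; for millions of small rows, large chunks matter more than extra processes.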
On a Debian server I'm using LVM to create a single logical volume from multiple
physical volumes. One of the volumes is a loop-back device which refers to a
file on another filesystem.
The loop-back device needs to be activated before the LVM service starts, or the
latter will fail due to missing volumes. To do so, a special systemd unit needs to
be created which will not have the default unit dependencies and will get
executed before the lvm2-activation-early service.
Systemd will set a number of dependencies for all units by default to bring the
system into a ...
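A minimal sketch of such a unit could look like the following; the unit name, loop device and backing file path are illustrative, not the ones from my setup:

```ini
# /etc/systemd/system/activate-loopback.service (illustrative name and paths)
[Unit]
Description=Activate loop-back device backing an LVM physical volume
# Opt out of the default dependencies so this can run very early.
DefaultDependencies=no
# Make sure the filesystem holding the backing file is mounted first.
RequiresMountsFor=/srv/lvm-backing.img
# Run before LVM tries to activate volume groups.
Before=lvm2-activation-early.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/losetup /dev/loop0 /srv/lvm-backing.img

[Install]
WantedBy=local-fs.target
```

`DefaultDependencies=no` is the key line: without it, systemd would order the unit after basic system setup, which is too late for LVM activation.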
I blogged before about
building Docker images on Travis and suggested
uploading images to Docker Hub after successful test runs, then downloading
and using them as cache in subsequent Travis runs.
Travis recently upgraded to Docker version 1.12 (from 1.9), and since version
1.10 Docker features content addressability for layers. This change breaks
caching, so we need to implement a workaround using the Travis cache.
Changes need to be made in .travis.yml:
Start by requesting Travis to cache /home/travis/docker directory.
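A sketch of the relevant `.travis.yml` pieces could look like this; the image name and archive filename are illustrative:

```yaml
# Illustrative .travis.yml fragment: persist saved image layers between runs.
cache:
  directories:
    - /home/travis/docker

before_install:
  # Load a previously cached image, if one exists from an earlier run.
  - if [ -f /home/travis/docker/image.tar.gz ]; then
      gunzip -c /home/travis/docker/image.tar.gz | docker load;
    fi

script:
  - docker build -t myapp .

before_cache:
  # Save the freshly built image so the next run can reuse its layers.
  - mkdir -p /home/travis/docker
  - docker save myapp | gzip > /home/travis/docker/image.tar.gz
```

Unlike a plain `docker pull`, `docker save`/`docker load` preserves the parent-layer chain, which is what lets the next build reuse the cached layers under content addressability.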
Piwik is a great FLOSS website analytics platform. I've
been self hosting it for different small websites I've managed through the
years. Although it's fairly easy to setup and maintain at this level of use, I
want to avoid having another service in my maintenance list.
While looking for user and web respecting alternatives -read looking for
something else than Google Analytics- I realized that
Sandstorm Oasis does support Piwik.
I logged-in and setup my Piwik instance, or Grain as Sandstorm calls instances,
in less than 30 of seconds. The tricky part is to copy the code ...
The road towards the absolute CI/CD pipeline goes through building Docker images
and deploying them to production. The code included in the images gets unit
tested, both locally during development and after merging into the master branch
using Travis.
But Travis builds its own environment to run the tests in, which can be
different from the environment of the Docker image. For example, Travis may be
running tests in a Debian-based VM with libjpeg version X while our to-be-deployed
Docker image runs code on top of Alpine with libjpeg version Y.
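One way around the mismatch is to run the test suite inside the freshly built image itself, so tests see exactly the libraries that will ship. The image name and test command here are illustrative:

```yaml
# Illustrative .travis.yml fragment: test inside the image, not the VM.
script:
  - docker build -t myapp .
  # The suite now links against the image's own libjpeg (Alpine's),
  # not whatever version the Travis Debian VM happens to carry.
  - docker run --rm myapp ./run-tests.sh
```

This makes the Travis VM a thin shell around the build; everything the tests touch comes from the image that will actually be deployed.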
Git-crypt is a neat git extension
to encrypt some files (if not all) in a git repository. It integrates nicely with
git using filters, and its use is completely transparent once you have unlocked
the repository.
Using git-crypt you can still share a repository in public and maintain a set of
files with secrets that are accessible to a limited number of users. Especially
useful for open source projects.
At some point you may need to remove one of the users who have access to
the encrypted files. Git-crypt does not provide a command to remove users (yet ...
Over at Mozilla's Engagement Engineering we use Docker to ship our websites. We
build the Docker images in CI and then run tests against them. Our tests
usually need a database or a cache server, which you can get running
with a single command:
docker run -d mariadb
The problem is that this container will take some time to initialize and become
available to accept connections. Depending on what you test and how you run
your tests, this delay can cause a test failure due to a database connection ...
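One generic way to avoid the race, independent of the database client, is to poll the TCP port before starting the tests. This is a sketch of that idea, not necessarily the approach the post settles on; the host and port in the comment are the usual MariaDB defaults:

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0):
    """Poll until a TCP port accepts connections or the timeout expires.

    Returns True once a connection succeeds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # Attempt a real TCP connect; the server is up once it accepts.
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)
    return False

# Usage sketch: block until the MariaDB container answers, then run tests.
# wait_for_port("127.0.0.1", 3306, timeout=60)
```

Note that a TCP connect only proves the socket is open; some servers accept connections slightly before they are ready to serve queries, so a client-level ping is stricter where it's available.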
I like to self-host services to make things fit my needs instead of the other
way around, and to satisfy my need to control my data. Also remember that
self-hosting is really important for the health of the open web and internet.
There are a couple of options for self-hosted media servers. I always go for
FLOSS solutions, so Plex and friends are not an option. A
couple of years ago I tried Subsonic, including the
MusicCabinet forks. Subsonic looked
really outdated at that time, so that didn't last long. MusicCabinet was second
in my evaluation ...
Pro Trinket is
an ATmega328-based board by Adafruit. It's
compatible with the Arduino IDE and most of the code written for Arduino.
It has a small form factor, is cheap, and comes with neat accessories like this
LiPo backpack add-on.
Pro Trinket is cheaper, smaller and consumes less power than a typical
Arduino board because it lacks components like the
FTDI chip used for communication
between the programmer and the board.
To program the Pro Trinket you need to have an external FTDI board or
figure out how to make programming over USB work with Linux. I opted
for the first option ...
Some WebTV and other video streaming websites use a special Adobe
(surprise, surprise) inspired protocol to stream their content. The
special thing about this protocol is that the file is split into
chunks, which I guess is good if you want to jump to different points
in the video or you have an unstable internet connection, but really
annoying if you just want to download the video and view it in a ...
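To illustrate the brute-force idea, chunks served at predictable URLs can simply be fetched in order and concatenated; the URL template and chunk naming below are hypothetical, not the actual protocol's:

```python
import urllib.request

def download_chunks(url_template, count, out_path):
    """Fetch numbered chunks and concatenate them into one file.

    url_template must contain a '{}' placeholder for the chunk index;
    the naming scheme is hypothetical.
    """
    with open(out_path, "wb") as out:
        for i in range(1, count + 1):
            # Fetch each chunk and append its bytes to the output file.
            with urllib.request.urlopen(url_template.format(i)) as resp:
                out.write(resp.read())
```

In practice Adobe's HDS wraps each fragment in extra container headers, so plain concatenation only conveys the idea; a real downloader also has to parse the manifest and strip the per-fragment framing.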
Apps on Deis run within Docker containers which run on CoreOS machines
that form the Deis cluster. Each new release of your code, i.e. each
new deis pull or git push, creates a new Docker image that is
stored in the internal Deis Docker ...
I'm a big fan of Duplicity and I use it to back up my laptop over scp to my office-based server, which in turn encrypts everything with EncFS and syncs it up to my SpiderOak account. I was doing typical maintenance today and ran duplicity collection-status to see how many backup sets I have.
Surprisingly, duplicity returned only one full backup set, more than a year old, and no other backup sets. After some digging I found that the returned backup set was indeed the only full set I had, but besides that I had many ...