All posts by axel

WireGuard not working after updating to Fedora 34: a simple fix

I recently updated my work laptop to Fedora 34 (from Fedora 32, oops!) and while I’ve had a very, very positive impression overall, one thing stopped working then and there after the upgrade: my WireGuard config.

I use WireGuard to keep a VPN tunnel open to a server, started automatically after the laptop boots by a systemd service unit. I also run dnsmasq to make it possible to split DNS resolution between work’s internal servers and the general Internet.

But here, it would just fail, with this one line indicating the culprit:
resolvconf[597860]: Failed to set DNS configuration: Could not activate remote peer.

Is this a problem of dnsmasq not starting? Nope, it’s working and resolving queries.

If you look into this, you’ll see advice about installing openresolv (from source, but it’s super easy!) and end up, as one always does, on the excellent Arch Linux documentation, trying to muster the motivation to read through the thorough page. I didn’t really want to install openresolv when it’s not the default (though it’s worth mentioning that in the two years since the recommendation to install it from source, it has actually been packaged and is in the default Fedora repos).

Thankfully, the answer in this case was much simpler. Since Fedora 33, systemd-resolved (systemd’s DNS resolver) is enabled by default on new installs.

Except in my case (and maybe in yours), it wasn’t enabled, and therefore wasn’t started, after the upgrade to F34.

A simple command resolved the issue:

systemctl enable systemd-resolved

And now resolvectl status also shows something useful.
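For completeness, the whole check-and-fix sequence can be sketched like this (assuming sudo is available; `enable --now` both enables the unit and starts it immediately):

```shell
# Check whether systemd-resolved is enabled and running
systemctl is-enabled systemd-resolved
systemctl is-active systemd-resolved

# If it isn't, enable it and start it in one go
sudo systemctl enable --now systemd-resolved

# Verify: on a resolved-managed system, /etc/resolv.conf typically
# points into /run/systemd/resolve/
resolvectl status
ls -l /etc/resolv.conf
```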

Now, I just need to learn more about what resolved can do for me (or what I might not like about it). It looks like it might simplify, or at least be a different option for, my split DNS configuration. We’ll see!

Touche Pas à mon poste, a bulwark against fascism

Hanouna, a recognized authority on Arendt, mocks the “do-gooders” who criticize him for handing a megaphone to the fascists of Génération Identitaire. Because Hanouna has been doing television for a long time; he loves controversy, he knows how to handle it, he feeds on it.

And he is presumptuous enough to think he can invite fascists onto his show, but in a controlled way, with opponents on set. These people despise him, for everything he is and everything he represents, but rest assured, it will be a debate, a real debate, I tell you!

What Hanouna has not understood, in his arrogance and the smugness of a guy stuck in his televisual microcosm, is that fascists do not care about debating. They are immune to your Very Good Questions, as @KetanJ0 put it:

They just want to move, tactically, from the extreme and unacceptable fringe to the slightly less extreme and slightly less unacceptable fringe. That’s all. Exposure is enough for them, and you are offering it to them. They are there to recruit, not to debate, and increasing their visibility means increasing their capacity to recruit, which happens through exposure (media exposure, among others) and not through the noble debate of ideas.

What tolerance for the enemies of tolerance, Karl Popper rightly asked? Apparently, although it was posed in 1945, the question has not yet reached TPMP’s production team. When you set out to debate these people at prime time, they have already won the round, because what is at stake is the very acceptability of their presence, not their ideas, contrary to what Hanouna, evidently a great specialist of the Overton window, believes.

The mere fact of having appeared on the show will be celebrated as a victory, no matter whether the opponents asked “good questions”. The height of irony (and the reflection of his smugness) is that, in a 2:30 video justifying his choice to give a public platform to, and legitimize (while believing he is delegitimizing), a fascist group, Hanouna spends about a minute mocking those… who criticize him for inviting a fascist group, explaining to them, mockingly and arrogantly, that any publicity for his show is good publicity, and that he is “in a cage, and you are bringing me meat!”.

And what exactly do you think you are doing there, Cyril, other than bringing to Génération Identitaire’s cage meat, cameras, your whole media circus, and enough minds that are a bit lost, angry, or unhappy enough to be receptive to these fascists?

Unable to see the most obvious contradiction, yet perfectly equipped to interview fascists without it benefiting them?
Come on.

Reverse tethering an Android device

One line version: enable adb debugging on the device, ensure you have a recent version (>= 29) of adb on your computer, download gnirehtet and execute ./gnirehtet run.

Why?

There are cases where you may want to have your phone get Internet connectivity through your computer. This is called reverse tethering.
(Tethering is the opposite: giving your computer Internet access via your phone.)

This may be because you are travelling and have no data allowance outside your country, but your laptop happens to have Internet access (either over Wi-Fi or Ethernet). Or because a hotel charges per extra device connected to the Wi-Fi, and with reverse tethering you only need to connect one device.

It may also be done so you can inspect the traffic coming out of the phone, using standard Linux tools such as tcpdump or Wireshark.

A further advantage is that if your computer is connected to your home network via a VPN, your phone also gets access to anything on that network.

In truth, reverse tethering is not something that is useful on a regular basis, but it is a nice tool to have. Also, if you are one of those people who enjoy learning semi-useful networking tricks, it’s fun.

With all this being said, let’s see how to make it work.

It turns out, with the right tool, it’s surprisingly simple.

Step 1: Android

You need to have Android debugging enabled. If that’s already the case, skip to step 2, otherwise read on.

If you haven’t already, activate the developer options (go to “Settings > About phone” or “Settings > System > About phone” or “Settings > About phone > Software information” and tap multiple (8) times on “Build number”).

Then go to the developer options (in “Settings > System > Advanced” on the device I’m using) and under “Debugging” toggle “Android debugging” to on.

That’s all for now.

Step 2: computer

You’ll need gnirehtet (tethering backwards, very clever) and a recent version of adb, the Android Debug Bridge (from Android Platform Tools v29 or above, from what I can tell).

adb

As Android tools aren’t always as up to date as we’d like in distribution repos, you will likely have to download them from the Android website. Extract the package to a directory and add that directory to your PATH, so that that version of adb is used. To do so, from within the directory you extracted platform-tools_r29.x.x-linux to, execute:

export PATH=$PWD:$PATH

A which adb should confirm you are using the right version (not the system version).
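Sticking with the shell, a quick sanity check that the freshly downloaded adb is the one being picked up (the version line should mention 29 or above):

```shell
# Confirm which adb binary is first in PATH, then check its version
which adb
adb version
```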

gnirehtet

Download gnirehtet on its Github page.

Unzip the file; you can then execute it with ./gnirehtet

More specifically, to set up your system to tether a single phone (which is what we are going for):

./gnirehtet run

That’s it.

Step 3: Android

If it’s the first time adb is used to connect to the device from this computer, you should now see a request to accept adb connections from it. Accept, and you should then see a message telling you gnirehtet wants to set up a VPN connection.

Traffic should now be going through the “VPN” and through the computer.
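If you want to convince yourself the tunnel works, a quick sketch is to ping something from the phone’s shell over adb (ping may not be available on every device; 9.9.9.9 is just an arbitrary reachable address):

```shell
# Run a ping from the phone; with gnirehtet active, the replies
# come back through the computer
adb shell ping -c 3 9.9.9.9
```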

Bonus step

Quoting the gnirehtet documentation:

you can enable reverse tethering for all connected devices (present and future) by calling:

./gnirehtet autorun

Facial recognition and false benevolence

In the race for public acceptance of facial recognition, some suggest it could detect, in a crowd, a person suffering from Alzheimer’s disease who cannot find their way home.

“Crowds at the Cauldron” by scalpel3000 is licensed under CC BY-NC-SA 2.0

Under the cover of good intentions, this kind of discourse is dangerous. It pushes a profound collective shirking of responsibility. It takes individualistic behaviour for granted, or encourages it.

In a society that encourages mutual care, the responsibility of helping a person who cannot find their way home is shared. It falls to whoever can, at that moment and in that place, offer assistance. This kind of principle has existed for a long time, including in law (cf. the duty to assist a person in danger). Tolerance, or better, recognition, must exist for those who help others. What this kind of techno-solutionism says, under the cover of good intentions, is that it is normal and legitimate for us not to have to concern ourselves with the person who is lost, who needs help. That our duty is to keep going to work, to keep running our errands.

© Benoît Prieur / Wikimedia Commons


And that this licence to ignore the other members of society around us, including those who need help, is built into the very technological structure of the city: machines will take care of the weak, the lost, the others. No reason to stop and help the lost person; the technical construction justifies, legitimizes and encourages going home to watch Netflix and ordering food through an app on your phone.

Technology as the structure of individualism.
One reason, among others, to oppose facial recognition.

If you want to act, join the movement against the Technopolice: technopolice.fr/


Note: initially published as a series of tweets/toots. Republished here because I’m hardly the last to say “get a blog” when I see interesting, long threads on Twitter or Mastodon.

Accessing your Google Calendar (and contacts and tasks) from your smartphone in a somewhat private way

Say, for instance, that your employer in its great wisdom has decided to use Google’s Google Apps for, among other things, employees and company calendar.

Say, for instance, that you don’t want to let Google have full view and control over your phone, even if you run Android.

What to do?

You might, for a start, not connect a Google account to your phone.

Or better, you might instead run an Android-based system on your phone (such as GrapheneOS or LineageOS) and not install the Google services on it.

But having reminders pop up on your phone is very, very convenient, and to connect to your calendar you’re supposed to add a Google account to your phone.

Fortunately, you can combine DAVx⁵ with Orbot to access your calendar without setting up a full Google account on your phone.

Here’s how.

Short version

  • Configure DAVx⁵ to use Orbot’s HTTP proxy
  • Add a DAVx⁵ account with the URL https://www.google.com/calendar/dav/<emailaddress>/events, <emailaddress> as the login, and an application password you’ve generated in your Google account as the password.
  • Check the Google calendar is synchronising and that your calendar app is displaying it.

Longer version

On the Google side

Create an “App Password”, separate from your usual Google account password. The Google documentation on this is good.

Essentially: go to your Google account > Settings > Security, “Signing in to Google” section > App Passwords.

Generate a new password. I called my app “Dav on Android”.

Write the password down (in a password manager, ideally).

That’s it.

On the Android side

These steps come from the DAVx⁵ documentation, which is helpful and to the point.

Connecting privately to the Google servers: we’ll connect via Tor using Orbot

Install Orbot. You can install it from F-Droid.

Ensure Orbot is offering an HTTP proxy (for local apps to connect through to Tor): Settings > `Debug: Tor HTTP`.

Recent versions of Orbot will even show the port on which the proxy is offered on the app’s main screen (it’s often 8118, but it can differ if you have multiple Tor apps running on your phone at the same time; it was 8119 in my case).

Calendar access: we’ll use the CalDAV protocol with DAVx⁵

Install DAVx⁵. You can install it from F-Droid.

Tell DAVx⁵ to connect to the Internet using Tor.

Settings > tick `Override proxy settings`, set `HTTP proxy host name` to `localhost` and `HTTP proxy port` to `8118` (or the port shown on Orbot’s main screen; it was 8119 in my case, as mentioned above).

Go back to DAVx⁵’s main screen and click the “+” to add an account.

Choose “Login with URL and user name” and enter the following:

– Base URL: `https://www.google.com/calendar/dav/<youremailaddress>/events`

– User name: <youremailaddress>

– Password: <the app password you generated in step 1>


DAVx⁵ should log in, find your stuff and offer to synchronise calendars, contacts and tasks.
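If the login fails, it can help to probe the same endpoint by hand through the Orbot proxy. This is only a sketch: the port is an assumption (use the one Orbot shows), and Google may answer a bodiless PROPFIND with an error, but any response at all confirms the proxy path works:

```shell
# Probe the CalDAV endpoint through the local Tor HTTP proxy
# (replace the address and app password with your own)
curl --proxy http://localhost:8118 \
     --user 'you@example.com:your-app-password' \
     --request PROPFIND \
     'https://www.google.com/calendar/dav/you@example.com/events'
```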

You can choose what to sync. I chose to only sync the calendar (I’m not sure contacts would even work, considering my company uses an LDAP directory and I don’t know how that’s connected to Google’s “contacts” feature. As for tasks, I use taskwarrior and sync that in a different way).

The last step is then to go into your phone’s calendar app and check the Google calendar is selected both for syncing and displaying.

Result

Your phone will now have access to your Google calendar, using your normal user login and a specific password (which you can revoke if your phone gets lost or stolen).

Google should only see you make the requests for the calendar from Tor and thus not know where you are requesting this from.

Bear in mind that further correlation between the IP addresses you are accessing Google-related resources from (say, from your laptop) will probably make it possible to determine where you are, even if your phone is not giving that information away.

But this means you can at least not worry too much about your phone pinging your IP and thus your general geographical area to Google all the time.

Alternative route

Depending on how your Google apps admin has set up things, you may or may not have a “private address” you can use to access your calendar (read only).

Unfortunately, as of writing (Sept 2019), ICSx⁵ (DAVx⁵’s webcal/ical sister app) does not have a setting to make it use a proxy. If you have a rooted phone, you can tell Orbot to force apps to connect via Tor and put ICSx⁵ on that list.

In this case, add the Google calendar “private address” to ICSx⁵ and ask Orbot to force ICSx⁵ to connect to the Internet via Tor.

So you’re confused about the gandi.cli API keys?

Yeah, me too.

The Gandi CLI looks great on paper; however, it’s a bit unfriendly to get running.

Here is the situation, as of September 2019:

  • you actually need two different API keys
  • the old V4 interface uses an XMLRPC API
  • the new V5 interface uses a REST API

So you need to go to v4.gandi.net, log in with your handle, and in your account management go to the API management and (re)generate your (XMLRPC) API key.

Then you also need to go to gandi.net, log in with your username, and in your security settings (re)generate a (REST) API key.

Finally, when running `gandi setup` you will put the v4 key first and the v5 key last.

This will give you a config file $HOME/.config/gandi/config.yaml that will look something like this:

api:
  host: https://rpc.gandi.net/xmlrpc/
  key: ikaitooquu4ahfun5Gidaen
apirest:
  key: ien5quun1Eezaer3iesh7ph

I get an “IOError: unsupported XML-RPC protocol” error when trying to use the gandi record command.

But using the gandi dns command works fine.

record is the command to manage old (v4) domains.

dns is the command to manage “LiveDNS” (v5) domains.
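Once `gandi setup` is done, a quick way to check both keys is to exercise one command from each API. The subcommand forms below are from memory and may differ slightly; check `gandi --help` (example.com is a placeholder):

```shell
# v5 / REST key: list LiveDNS records for a domain
gandi dns list example.com

# v4 / XMLRPC key: list records for an old-style (v4) domain
gandi record list example.com
```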

Enabling a U2F security key on Github with Firefox (even if Github tries to stop you)

So, there’s this cool thing called U2F, for Universal 2nd Factor, a dead simple second authentication method in the form of a physical token (I’m using a Yubikey Neo, but that’s not especially relevant to what we’ll be talking about here, as it should apply to any security key).

By Tony Webster from Minneapolis, Minnesota, United States – Hardware Authentication Security Keys (Yubico Yubikey 4 and Feitian MultiPass FIDO), CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=71716914

To put it in simpler terms: with U2F, to log in to a website you need the password and a physical doodad plugged into the computer. No doodad, no access. Sorry, evildoers.

The idea being that while it’s possible to steal credentials (login and password), if you also need a physical thing, then the credentials on their own are not useful.

With “trust-us-because-we-run-a-super-advanced-global-scale-Internet-infrastructure” companies like Facebook storing hundreds of millions of credentials in the clear (good job Facebook, no, really), it makes sense to use something that can’t just be stolen over the Internet.

I mean, you wouldn’t download a car, right?

So enabling U2F wherever you can is a good idea (as is having multiple physical security keys, as you will lose one or have it stolen).

Just show me how and stop blabbering

Fair enough. Let’s look at how to enable U2F security keys on Github in April 2019.

First of all, you’ll need to go to your Github account’s security settings and enable Two Factor Authentication (or 2FA as we cool kids call it, yo.). Github currently forces you to enable another 2FA method first, either SMS (erk) or TOTP (yes), so you’ll have to do that first. (Hint: you can use decent, FOSS apps to do TOTP on your phone).

Unfortunately for us, U2F is not enabled by default in current versions of Firefox (66.0.1 as i write this).

Luckily, it’s very simple to enable: visit `about:config`, search for “U2F” and set the following to true:

security.webauth.u2f = true

More disheartening is the fact that even with this setting enabled, Github won’t let you add a key to your account, insisting instead that you “update to the latest version of Google Chrome”.

Not going to do that.

Instead, you can simply use Firefox’s developer tools to unhide the button that lets you add a security key.

To do so, open the Developer Tools (hitting F12 will do nicely) and in the Inspector, search HTML for:

new-u2f-registration

You should find a div element with its CSS display set to “none”, as shown in the CSS viewer (located at the bottom or to the right of the main inspector pane, depending on whether your dev tools are docked to the right or to the bottom, respectively).

Then, just untick the box next to “display: none;” and the “Register new device” button will appear.

The following screenshot might help:

Unhiding the Register new device button using Firefox’s Dev Tools

After that, everything works as you’d expect: you click the button, plug your key in, touch its button if it has one, give it a name to recognise with on Github, and you’re done.

Good, one less website to authenticate to without 2FA.

Adding an ED25519 SSH key to a Ubiquiti EdgeMax router

Contrary to what I thought¹, it is possible to use an elliptic-curve-based public SSH key on an EdgeMax router running a (recent?) EdgeOS.

Connect to the router over SSH and issue the following, to add your key to EdgeOS’s (/Vyatta’s) configuration:

configure
set system login user $your_router_user authentication public-keys user@host key "KEY-BODY-HERE"
set system login user $your_router_user authentication public-keys user@host type ssh-ed25519
commit
save

A few things to note:

  • user@host is whatever you want, it’s just the way one describes the key (technically, the config tree entry)
  • you’ll probably want to use YourUser@YourHost, YourHost as in: the host you are connecting from. That’s what is normally generated by OpenSSH as a comment at the end of public keys but…
  • …EdgeOS doesn’t understand any comment at the end of public SSH keyfiles. Even if they are a standard feature of OpenSSH keys.
  • In fact, it doesn’t recognise anything before the key itself either, so the usual ssh-rsa or ssh-ed25519 at the beginning of a keyfile make it choke.
  • So you must put nothing but the key body, between quotes, when setting the config value system login user $your_router_user authentication public-keys user@host key
  • Finally, as you have probably guessed from the previous bullet points, setting system login user $your_router_user authentication public-keys user@host type to ssh-ed25519 is how you tell EdgeOS what kind of key this is. Yes, this is the part that is at the beginning of a normal SSH keyfile.
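Since EdgeOS wants nothing but the key body, you can extract it from a standard OpenSSH public key file before pasting. In the usual `type body comment` layout, the base64 body is the second field:

```shell
# Print only the base64 body (second field) of the public key,
# ready to paste between quotes in the "set ... key" command
awk '{print $2}' ~/.ssh/id_ed25519.pub
```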

This also explains why one hits the following error when pasting the whole keyfile into the set system login etc. command:

Invalid public key character not base-64

I was hoping this would also explain why the loadkey command doesn’t accept the key from the keyfile, but… no. Even if you strip your public key file of the opening key type declaration (such as ssh-ed25519) and the trailing comment (such as axel@master-switch), loadkey still complains and I get:

Not a valid key file format (see man sshd) at /opt/vyatta/sbin/vyatta-load-user-key.pl line 96, <$in> line 1

Oh well.

  1. It’s not like EdgeOS’s public SSH key management is super user friendly.

I tried Netflix for two months. Then I cancelled.

Even though I have some strong reservations about Netflix’s model¹, I thought I should give it a go and test it, as most of my friends use it.

It also happened that my Kodi box was messed up and it was taking me too long to get my act together and re-install it, so this was a good occasion to try Netflix.

What I found was a service quite far from my expectations. And, on the whole, not very enjoyable.
